AI Use Disclosure
Last updated: April 5, 2026. This page describes how Tavi uses artificial intelligence in its content intelligence pipeline. We believe transparency about AI use is essential for trust.
1. Overview
Tavi is a community intelligence and content workflow platform. Parts of the Service use artificial intelligence to help your team work faster — scanning public signals, drafting content, checking compliance, and scoring quality. This page explains exactly where AI is involved, what data it sees, and where humans stay in control.
We do not use AI to make autonomous publishing decisions. Every piece of content that reaches your audience passes through human review first.
2. Where AI appears in the pipeline
Tavi's content pipeline has four agent stages. Each stage is listed below with a note on whether it makes external AI inference calls.
- Scout — scans configured RSS feeds, Google Trends, and community sources for trending signals. Uses sentiment analysis (VADER, running locally) to score relevance. No external AI inference calls.
- Generator — takes a scout signal and your brand voice settings and sends an organizational content prompt to an external large language model (LLM) via OpenRouter to produce a first draft. External AI inference call.
- Reviewer — checks the generated draft against your organization's compliance rules (regex-based pattern matching, locally executed). It raises hard blocks for violations of immutable rules and soft blocks for items that need human review. No external AI inference calls.
- Grader — scores draft quality on a 0–100 scale using heuristic evaluation (readability, keyword density, length). Drafts below threshold are flagged for revision. No external AI inference calls.
Of the four pipeline stages, only the Generator sends data to an external AI provider. All other stages run locally on our servers.
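For illustration, a locally executed, regex-based compliance check like the Reviewer's can be sketched as follows. The rule names and patterns below are hypothetical examples for clarity, not your organization's actual rules or Tavi's internal implementation:

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    pattern: str
    immutable: bool  # immutable rules produce hard blocks

# Hypothetical rules; real rules are configured per organization.
RULES = [
    Rule("no-guaranteed-returns", r"\bguaranteed\s+returns?\b", immutable=True),
    Rule("avoid-superlatives", r"\b(best|unbeatable)\b", immutable=False),
]

def review(draft: str) -> dict:
    """Return hard and soft blocks for a draft. Runs entirely locally;
    the draft text never leaves the server during this step."""
    hard, soft = [], []
    for rule in RULES:
        if re.search(rule.pattern, draft, flags=re.IGNORECASE):
            (hard if rule.immutable else soft).append(rule.name)
    return {"hard_blocks": hard, "soft_blocks": soft}
```

A draft that trips an immutable rule is hard-blocked and cannot proceed without an audited exception; soft blocks simply annotate the draft for the human reviewer.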
3. Strategic Intelligence (Layer 2)
Tavi's intelligence layer identifies narrative voids — topics where community demand exists but no competitor is speaking. This layer uses:
- Embedding generation — signal text and Brand DNA pillars are converted to vector embeddings (text-embedding-3-small, 1536 dimensions) to enable semantic similarity search. These embeddings are generated via external API call.
- Brief generation — when a narrative void is confirmed by a human reviewer, Tavi generates strategic briefs using an external LLM via OpenRouter with stance injection (educator, disruptor, or humanist framing).
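The semantic similarity step can be illustrated with plain cosine similarity between embedding vectors. This is a simplified sketch with toy two-dimensional vectors (real embeddings are 1536-dimensional); the pillar names are hypothetical:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_pillar(signal_vec: list[float],
                   pillars: dict[str, list[float]]) -> tuple[str, float]:
    """Find the Brand DNA pillar most semantically similar to a signal."""
    return max(
        ((name, cosine_similarity(signal_vec, vec)) for name, vec in pillars.items()),
        key=lambda pair: pair[1],
    )
```

Signals that sit close to a Brand DNA pillar but far from competitor content are candidates for a narrative void; a human reviewer confirms each candidate before any brief is generated.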
4. Human oversight
AI-generated content is never published automatically. The following controls are in place:
- Every generated draft enters a review queue where a human team member must approve, edit, or reject it before it can proceed.
- Compliance rules (including immutable regulatory rules for industries like financial services) are checked before human review — hard blocks cannot be overridden without an audited exception.
- All state changes — generation, review, approval, rejection, override — are recorded in an append-only audit trail with actor identity and timestamp.
- Narrative voids in the intelligence layer require human confirmation before brief generation begins.
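Conceptually, an append-only audit trail looks like the following minimal sketch. The field names are assumptions for illustration, not Tavi's actual schema:

```python
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log of pipeline state changes (illustrative only)."""

    def __init__(self) -> None:
        self._records: list[dict] = []

    def append(self, actor: str, action: str, draft_id: str) -> dict:
        """Record one state change with actor identity and timestamp."""
        record = {
            "actor": actor,
            "action": action,  # e.g. "generate", "approve", "reject", "override"
            "draft_id": draft_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        self._records.append(record)  # records are never edited or deleted
        return record

    def export(self) -> str:
        return json.dumps(self._records, indent=2)
```

Because entries can only be appended, the trail preserves the full history of who did what to each draft, and when.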
5. What data is sent to AI providers
When the Generator or intelligence layer calls an external LLM, the prompt typically contains:
- The signal title and summary (derived from public community sources your organization configured)
- Your organization's brand voice description and tone settings
- For intelligence briefs: Brand DNA pillar names and descriptions, and the narrative void context
We design prompts to exclude personal data. Prompts do not include user emails, passwords, internal user IDs, or account credentials. Do not paste personal information into signal descriptions or brand voice fields unless your organization accepts that risk.
The external LLM provider used is OpenRouter, which routes requests to underlying model providers. We select OpenRouter for its provider-agnostic approach and its contractual commitments on data handling.
6. Data retention by AI providers
Tavi uses API-based inference, not consumer chat products. Under OpenRouter's API terms, prompt and completion data sent via API is not used to train models. We do not opt in to any training data sharing programs.
On our side, generated drafts are stored in your organization's database (scoped by row-level security) for as long as your account is active. You may delete drafts at any time. Audit records of generation events are retained per our Privacy Policy retention schedule (up to 365 days for audit records, 90 days for routine operational records).
7. Limitations and accuracy
AI-generated content may be inaccurate, incomplete, or inappropriate for your audience or regulatory context. Large language models can produce plausible-sounding text that is factually wrong (sometimes called “hallucination”). This is why human review is mandatory before any content is approved.
Tavi's compliance checks catch pattern-based regulatory issues (such as prohibited claims in financial services), but they do not guarantee legal compliance. Your organization is responsible for final review and approval of all content before publication.
8. Questions
If you have questions about how Tavi uses AI, contact us at privacy@heytavi.ca or through the Contact page. For details on how we handle personal information, see our Privacy Policy. For service terms, see our Terms of Service.