Britannica and Merriam-Webster want their names out of your hallucinations. Encyclopaedia Britannica and Merriam-Webster sued Perplexity in the Southern District of New York, alleging that Perplexity’s AI products generate and display incorrect “citations” and attributions alongside the plaintiffs’ brands.
Beyond familiar copyright issues, the complaint targets trademark misuse tied to live retrieval and output presentation: for example, logos, source panels, and summaries in AI-generated outputs that imply endorsement or accuracy when the underlying content is wrong or does not come from the cited source.
Others, including online news networks, have already flagged AI hallucinations as a trademark risk. This case signals:
- The risk has moved from model training to how AI presents results. UI and citation design can create trademark and consumer deception exposure.
- Misattribution can dilute brands, erode user trust, and raise regulatory issues.
- Media and security reports have raised broader concerns about unauthorized scraping and crawler behavior, heightening scrutiny of data sourcing and brand use (see findings in "Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives").
What to do now
For business teams and AI users:
- Treat AI citations as leads, not proof. Verify any quoted text, facts, or attributions before using them in reports, marketing, or customer communications.
- Build a review step into workflows. Require human validation for any external citation, logo, or brand reference before publication.
For product, data, and engineering leaders:
- Design for attribution. Avoid placing third‑party marks, logos, or publisher names next to generated text unless paired with high‑confidence, verifiable links to the source.
- Suppress low‑confidence claims. Gate summaries and citations behind confidence thresholds and make retrieval paths visible to the user; log when and how sources are used.
- Audit crawlers and data intake. Confirm that bots honor robots.txt and site terms; document data sources and permissions; test for undeclared crawler behavior and disable it.
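The confidence-gating step above can be sketched in a few lines. This is a minimal illustration, not any vendor's API: `Citation`, `CONFIDENCE_THRESHOLD`, and the scoring scale are all assumed names for whatever retrieval-verification signal a product actually computes.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed product-specific cutoff, tuned per use case


@dataclass
class Citation:
    source_name: str  # publisher/brand name shown to the user
    url: str
    confidence: float  # assumed retrieval/verification score in [0, 1]


def displayable_citations(citations):
    """Return only citations confident enough to render with a brand name;
    suppressed ones are logged for audit instead of shown to the user."""
    shown, suppressed = [], []
    for c in citations:
        (shown if c.confidence >= CONFIDENCE_THRESHOLD else suppressed).append(c)
    for c in suppressed:
        # In a real system this would go to structured logs, per the bullet
        # above on making retrieval paths visible and auditable.
        print(f"suppressed low-confidence citation: {c.source_name} ({c.confidence:.2f})")
    return shown
```

The design choice here is that a low-confidence source is dropped from the UI but kept in the logs, so the retrieval path remains reconstructible without putting a publisher's name next to text it may not support.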
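For the crawler-audit bullet, Python's standard-library `urllib.robotparser` can serve as a quick compliance check in tests: feed it a site's robots.txt and confirm your declared user agent respects each Disallow rule. The `ExampleBot` token and paths below are illustrative.

```python
from urllib.robotparser import RobotFileParser


def is_fetch_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Check whether a given user agent may fetch a URL under this robots.txt."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)


# Illustrative robots.txt: ExampleBot is barred from /private/.
robots = """\
User-agent: ExampleBot
Disallow: /private/
"""

print(is_fetch_allowed(robots, "ExampleBot", "https://example.com/private/page"))  # False
print(is_fetch_allowed(robots, "ExampleBot", "https://example.com/public/page"))   # True
```

Running checks like this against your own crawlers' fetch logs is one concrete way to "test for undeclared crawler behavior": any fetched URL for which `is_fetch_allowed` returns False is a violation to investigate.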
For legal, compliance, and procurement:
- Update terms and policies. Prohibit unauthorized scraping, AI training, and retrieval-augmented use of your content; add brand‑use restrictions and takedown mechanisms.
- Tighten vendor contracts. Require compliance with robots.txt and site terms, attribution controls, and indemnities covering IP and brand claims.
- Monitor brand use. Set alerts and run periodic test prompts to see how your brands and content are surfaced across AI tools; document misuse for escalation.
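On the policy side, prohibiting unauthorized scraping often starts with explicit crawler directives. A minimal robots.txt sketch follows, using the publicly documented user-agent tokens GPTBot (OpenAI), PerplexityBot (Perplexity), and CCBot (Common Crawl); as the reporting cited above suggests, not every crawler honors these directives, so they should be paired with site terms, monitoring, and contractual controls rather than relied on alone.

```
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: CCBot
Disallow: /
```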
Bottom line: As AI answers become branded and citation-heavy, trademark, accuracy, and reputation risks rise. No matter how this case is resolved, companies in the best position are those that verify outputs, redesign attribution UI, harden data sourcing, and tighten contractual controls.