AI Search · April 20, 2026 · 8 min read

How to get your brand cited by ChatGPT in 2026


Roughly 200 million people now use ChatGPT each week to ask product questions, compare options, and get recommendations. When you ask "what’s the best CRM for a small consulting firm?", ChatGPT doesn’t hand you ten links — it gives you an answer, and inside that answer it names specific brands. If your product isn’t one of them, you’re effectively invisible in a search channel that didn’t exist five years ago.

The mechanics of how ChatGPT chooses which brands to cite are partly mysterious and partly knowable. At Casa we’ve tested hundreds of category prompts across the major LLMs, and certain patterns emerge clearly. Here’s the practical playbook.

**The two paths to a citation**

When ChatGPT generates a brand recommendation, it’s drawing from one of two sources — sometimes both:

1. *Training data.* The model was trained on a snapshot of the internet up to a cutoff date. Brands that were heavily discussed, reviewed, and referenced in that snapshot have a strong baseline presence.
2. *Real-time web search.* When ChatGPT Search is invoked (or the user is on a plan with web access), the model performs a live search and synthesises citations from the results.

For most consumer queries, both paths are active. This means there are two distinct optimisation surfaces: how your brand was represented in the training corpus, and how your brand performs in real-time search retrieval.

**The training corpus path**

You can’t directly edit GPT-4’s training data, but you can influence how your brand is represented in the next training cycle. The signals that matter:

*Volume of mentions in authoritative sources.* If your brand has been written about on TechCrunch, The Verge, NYT, Bloomberg, Reuters, in academic papers, or in widely-read industry publications, you have a presence in any reasonable training corpus. This is fundamentally a PR and content marketing investment, not a technical SEO one.

*Wikipedia presence.* Wikipedia is heavily weighted in LLM training. A well-maintained, neutral-tone Wikipedia article about your company is one of the highest-ROI investments for AI visibility. We’ve seen brands that meet notability requirements get cited 3-4x more often after a Wikipedia article goes live.

*Reviews and comparisons on G2, Capterra, TrustRadius, Reddit, Hacker News, and Stack Exchange.* These are crawlable, persistent, and heavily mentioned in discussions of "what’s the best X." They feed both training data and real-time search.

*Long-form content from your own domain.* Your blog matters, but mostly as a citation source for third-party writers. A truly original research piece on your blog that gets quoted in a TechCrunch story does double duty.

**The real-time retrieval path**

This is the path you have most direct control over.

*Page-level structure.* ChatGPT Search does a live web search and selects pages whose content can be cleanly extracted. This rewards pages that:

- Have a clear, declarative summary near the top
- Use proper headings (H1 → H2 → H3) that mirror the question structure
- Include explicit comparisons or pros/cons in structured form
- Have FAQ sections that directly answer common questions
- Use Schema.org markup, especially FAQPage, HowTo, Product, Article, and BreadcrumbList
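As a rough sketch of the last point, here is what a minimal FAQPage JSON-LD block looks like, generated with Python’s standard `json` module. The questions, answers, and the `faq_schema` helper are placeholders for illustration, not a prescribed implementation:

```python
import json

def faq_schema(pairs):
    """Build a Schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

block = faq_schema([
    ("What is AEO?",
     "Answer Engine Optimisation: making your content easy for AI search engines to extract and cite."),
])
print(block)
```

The resulting JSON is embedded in a `<script type="application/ld+json">` tag on the page, so crawlers can read the Q&A pairs without parsing your layout.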

*Page speed and accessibility.* Slow pages get deprioritised. Pages behind login walls or with aggressive rendering complexity often don’t get crawled. Server-rendered HTML that exposes content without JavaScript is dramatically more reliable than client-rendered SPAs.
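A quick way to sanity-check this is to look at what a non-JavaScript crawler actually sees: parse the raw HTML and confirm your key content appears before any script runs. A minimal standard-library sketch, with inline HTML strings standing in for fetched pages (the page contents and phrase list are hypothetical):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, ignoring script and style contents."""
    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def visible_in_raw_html(html, phrases):
    """Return the phrases a non-JS crawler would actually see in the raw HTML."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks)
    return [ph for ph in phrases if ph in text]

# Client-rendered SPA shell: the content only exists after JavaScript runs.
spa = '<html><body><div id="root"></div><script>render()</script></body></html>'
# Server-rendered page: the content is in the HTML itself.
ssr = '<html><body><h1>Best CRM for consulting firms</h1><p>Our pick and why.</p></body></html>'

print(visible_in_raw_html(spa, ["Best CRM"]))  # -> []
print(visible_in_raw_html(ssr, ["Best CRM"]))  # -> ['Best CRM']
```

If the phrases you want to be cited for come back empty on your production HTML, a retrieval-driven answer engine likely never sees them.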


*Source diversity.* When ChatGPT cites multiple sources for a claim, it favours pages from different domains. Having your point of view reflected on third-party domains (guest posts, podcast appearances, partner content) compounds.

*Recency.* Real-time retrieval favours recently-updated content. A 2018 review of CRM software won’t be cited if there’s a 2025 alternative covering the same ground. Refresh your top commercial pages quarterly.

**The 30-day plan**

If you’re starting from zero on AEO, here’s a concrete sequence:

*Week 1.* Audit your visibility. Run [Casa’s AEO tool](/seo/aeo) on your three most commercial pages. It’ll give you an AEO score, list missing schema, and surface specific brand-mention gaps in ChatGPT and Perplexity.

*Week 2.* Fix the highest-impact technical gaps. Add FAQPage schema to your top three pages. Add a TL;DR summary block at the top of each. Add structured comparison tables where relevant. Make sure your H1s match the queries you want to be cited for.
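The heading-structure part of that audit can be checked mechanically: flag pages with zero or multiple H1s, or headings that skip levels. A small sketch using Python’s standard `html.parser` (the `heading_problems` helper and example page are illustrative, not part of any specific tool):

```python
import re
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Record heading levels (h1-h6) in document order."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        m = re.fullmatch(r"h([1-6])", tag)
        if m:
            self.levels.append(int(m.group(1)))

def heading_problems(html):
    """Flag basic structure issues: missing/multiple H1s and skipped levels."""
    audit = HeadingAudit()
    audit.feed(html)
    problems = []
    h1_count = audit.levels.count(1)
    if h1_count != 1:
        problems.append(f"expected exactly one <h1>, found {h1_count}")
    for prev, cur in zip(audit.levels, audit.levels[1:]):
        if cur > prev + 1:
            problems.append(f"level skip: h{prev} followed by h{cur}")
    return problems

page = "<h1>Best CRM for consultants</h1><h3>Pricing</h3>"
print(heading_problems(page))  # -> ['level skip: h1 followed by h3']
```

A clean heading outline is what lets an answer engine map a question like "how much does X cost?" straight to the right section of your page.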

*Week 3.* Audit your third-party presence. Are you on G2, Capterra, TrustRadius? Are there recent reviews? Is your Wikipedia article (if you have one) up to date? Are you mentioned in any "best X for Y" listicle? Identify gaps.

*Week 4.* Publish. Write one piece of original research — ideally a quantitative study with proprietary data — that you can pitch to industry publications. Original research is the single highest-ROI content format for AEO, because it gives writers something concrete to cite.

After 30 days, re-audit. You should see improvements in your AEO score and, if you’ve done the third-party work, the start of new mentions in real-time search results.

**What doesn’t work**

A few things that get pitched as "AEO best practices" but don’t actually move the needle:

*Hidden brand stuffing.* Hidden brand mentions in alt text or HTML comments don’t help. The LLMs see what users see.

*Buying mentions.* Paid placements that aren’t disclosed get filtered. Sponsored content that’s clearly disclosed can help, because the LLM still sees the brand mention in the context.

*Generic content farms.* Publishing 50 thin AI-generated articles a month tanks your domain authority and confuses the topical clustering. Quality over quantity, especially since LLMs are increasingly good at detecting low-quality AI text.

*Schema spam.* Adding schema for things that aren’t actually on the page (e.g., faking aggregate ratings) gets caught and can earn manual penalties.

**The honest version**

There’s no silver bullet for ChatGPT citations. The brands that get cited consistently have invested years in the underlying assets: a respected brand presence, a maintained corpus of expert content, third-party relationships, original research, and a website built so that automated crawlers can extract value cleanly. Six weeks of focused work creates a measurable lift in real-time-search-driven citations; six months is what it takes to start moving the training-corpus needle.

If you want to track your progress without writing the tooling yourself, [Casa’s AEO platform](/seo) monitors brand mentions across ChatGPT, Perplexity, Claude, and Google AI Overviews, with weekly reports on share of voice and competitor benchmarking. The free [AEO audit](/seo/aeo) is a good first move — it tells you exactly where you stand today and what to fix first.
