
We tested 10,000 ChatGPT queries. Here's who gets recommended and why.

May 1, 2026 · 12 min read · Peakmention Research

We ran 10,000 buyer-intent queries across six B2B categories in ChatGPT-4o and mapped every brand mention. The pattern is clear: three signals predict who gets recommended above everything else.

Queries run: 10,000 (ChatGPT-4o)
Categories: 6 (B2B software)
Brands tracked: 214 (across all categories)
Top signal: Reddit (cited in 88% of responses)

Why we ran this study

There's a lot of speculation about what makes a brand show up in AI recommendations. Most of it comes from SEO professionals applying Google logic to a completely different system. We wanted data.

Over four weeks, we ran 10,000 buyer-intent queries across six B2B software categories in ChatGPT-4o. For every response, we logged which brands were mentioned, how prominently, and with what framing. Then we reverse-engineered the correlation between brand signals and mention frequency.

The categories we tested: project management, CRM, HR software, cybersecurity, sales intelligence, and marketing analytics. Each category had between 18 and 42 brands tracked.
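To make the methodology concrete, here is a minimal sketch of the per-response logging and mention-rate computation described above. The record format and field names are hypothetical; the study's actual pipeline isn't published.

```python
from collections import defaultdict

# Hypothetical record format: one dict per query/response pair,
# listing the brands ChatGPT named and the category of the query.
responses = [
    {"category": "crm", "brands_mentioned": ["BrandA", "BrandB"]},
    {"category": "crm", "brands_mentioned": ["BrandA"]},
    {"category": "hr", "brands_mentioned": ["BrandC"]},
]

def mention_rates(responses, category):
    """Share of responses in a category that mention each brand."""
    in_cat = [r for r in responses if r["category"] == category]
    counts = defaultdict(int)
    for r in in_cat:
        # set() so a brand named twice in one response counts once
        for brand in set(r["brands_mentioned"]):
            counts[brand] += 1
    return {b: n / len(in_cat) for b, n in counts.items()}

print(mention_rates(responses, "crm"))
# BrandA appears in 2 of 2 CRM responses, BrandB in 1 of 2
```

Prominence and framing (which the study also logged) would add fields to each record, but the core metric is this per-category mention rate.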

Finding 1: Reddit dominates the signal stack

Brands mentioned in active Reddit threads were 4.2x more likely to appear in ChatGPT recommendations than brands with no Reddit presence.

This was the most consistent finding across all six categories. Brands with strong Reddit presence, specifically comments in threads where buyers are comparing tools, showed up in ChatGPT responses at dramatically higher rates.

The mechanism makes sense. ChatGPT's training data includes a significant portion of Reddit content. When a brand appears repeatedly in authentic comparison threads (not promotional, not astroturfed, just present in actual discussions) it gets weighted as a genuine recommendation by the community. The model treats community consensus as a trust signal.

Importantly, it wasn't just about being mentioned. Brands with substantive presence in threads, where actual product details were discussed, performed better than brands with shallow one-line mentions. Quality and context mattered alongside frequency.

ChatGPT mention rate by Reddit presence level:

High Reddit presence (20+ threads): 84%
Medium Reddit presence (5-20 threads): 61%
Low Reddit presence (1-4 threads): 34%
No Reddit presence: 12%

Finding 2: Listicle inclusions are the second most powerful signal

Brands included in "Top X tools for [category]" articles published by sites with DA 40+ appeared in ChatGPT responses 3.1x more often than brands without listicle coverage.

The correlation between listicle inclusions and AI mention rate was stronger than we expected. ChatGPT appears to treat "Top 10" style articles as curated recommendation signals. When multiple publications independently list the same brand in the same category, the model treats that as strong evidence that the brand belongs in the category leader set.

There's a quality filter at work too. Listicles on low-authority sites contributed almost no signal. The threshold for meaningful impact appeared to be around DA 40 and above, with the strongest signal coming from publications that themselves get cited in AI responses. The authority of the publication that lists you matters as much as the listing itself.

Finding 3: Review site presence creates a floor

Every brand appearing in the top 5 recommended positions across all six categories had at least one review platform listing with 10+ reviews. Zero exceptions.

G2 and Capterra data appears in ChatGPT responses more often than any other single source outside Reddit. We found that the model uses review platform presence as a baseline legitimacy check. Brands without it, regardless of how strong their other signals were, rarely broke into top recommendations.

The number of reviews mattered less than we expected. A brand with 15 detailed G2 reviews often outperformed brands with 200 shallow ones. The language used in the reviews also appeared to influence how the model described the brand. Review content that included specific use cases and category keywords gave ChatGPT better material to work with when framing recommendations.

What the top-ranked brands had in common

Signal | Top 5 brands | Brands ranked 6-20 | Impact level
Active Reddit presence | 100% | 41% | High
Listicle inclusions (DA 40+) | 100% | 58% | High
Review platform listing | 100% | 72% | High
Journalist citations | 80% | 29% | Medium
Structured schema markup | 60% | 44% | Medium
Website traffic volume | 60% | 55% | Low
Google Ads spend | 40% | 38% | None

The most striking finding in the table above is the last row. Google Ads spend had essentially zero correlation with ChatGPT mention rate. Brands spending heavily on paid search were no more likely to appear in AI recommendations than brands spending nothing. Paid visibility and organic AI visibility are completely separate games.

The category concentration problem

One finding that surprised us was how concentrated the recommendation set was in each category. On average, ChatGPT named just 4.3 unique brands when answering a buyer-intent query in a given category. The top 3 brands captured 71% of all mentions across the 10,000 queries.

This has big implications for brands currently outside the top 3. The model isn't just recommending you less often; it's often not mentioning you at all. A buyer asking ChatGPT for a CRM recommendation has roughly a 1-in-50 chance of hearing your brand if you're outside the top 5 in your category, regardless of how good your product is.
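The 71% concentration figure is a top-k share-of-voice metric. A minimal sketch of how it's computed over a mention log (the toy data below is illustrative, not from the study):

```python
from collections import Counter

def top_k_share(mentions, k=3):
    """Fraction of all brand mentions captured by the k most-mentioned brands."""
    counts = Counter(mentions)
    top = sum(n for _, n in counts.most_common(k))
    return top / len(mentions)

# Toy mention log; the study found the top 3 brands captured 71%
# of all mentions across the 10,000 queries.
mentions = ["A", "A", "A", "B", "B", "C", "D", "E", "A", "B"]
print(top_k_share(mentions, k=3))  # A, B, C hold 8 of 10 mentions -> 0.8
```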

Four of the five brands ChatGPT recommends in your category right now will likely still be among the five it recommends a year from now. AI models are slow to revise their mental model of a category once it's established. Getting in early is the only leverage point. Getting in later requires displacing an incumbent, which takes significantly more signal volume.

What this means for your GEO strategy

The data points to a clear priority stack for any brand trying to break into AI recommendations:

1. Build an authentic Reddit presence in the threads where buyers compare tools in your category, with substantive product detail rather than one-line mentions.
2. Earn inclusion in "Top X tools" listicles on publications with DA 40+, ideally ones that are themselves cited in AI responses.
3. Establish a review platform listing (G2, Capterra) with detailed reviews that describe specific use cases, to clear the model's baseline legitimacy check.

Limitations of this study

A few things worth noting about methodology. This study looked at ChatGPT-4o specifically. Claude, Perplexity, and Gemini each have different weighting and retrieval logic, and the signal rankings differ across platforms. Reddit is particularly dominant for ChatGPT and Perplexity. Claude weights formal publication citations more heavily. Gemini favors Google ecosystem presence.

The study also looked at B2B software categories. Consumer categories and professional services likely have different patterns, though the broad signals (community presence, listicle coverage, review platforms) appear to be consistent.

We're planning follow-up studies for Perplexity and Claude specifically. The platform-by-platform breakdown tells a more complete story than any single-platform analysis.

The main takeaway: if you want to show up in ChatGPT recommendations, the work is external to your website. It lives in the communities, publications, and review platforms where your buyers already go to validate buying decisions. That's where AI models are listening.

Apply these findings to your brand

Find out where you stand in AI recommendations today

We map your category across ChatGPT, Claude, Perplexity, and Gemini and show you exactly what it will take to get cited. Fill out the intake form and hear back within a few hours.

Get Started →