What is A.I. reading – May 2026 Edition by Muck Rack

About the paper

What is AI Reading? by Muck Rack’s Generative Pulse examines which sources generative AI systems cite when answering realistic consumer prompts.

The report is a quantitative, data-pack style citation analysis based on a large prompt set submitted to ChatGPT, Claude and Gemini, with more than 25 million cited links analysed across multiple industries; the geographic scope is not clearly specified in the report.

Length: 43 pages

More information / download:
https://generativepulse.ai/report/

Core Insights

1. What is the central finding of the report about the sources AI systems cite?

The report’s central argument is that generative AI citations are overwhelmingly shaped by non-paid, earned, third-party sources rather than paid media or advertising. About 99% of links cited by AI come from non-paid media, while paid and advertorial content accounts for only 0.3% of all citations and press releases for just 1.1%.

Within that non-paid universe, journalism remains a major foundation of AI visibility. The report finds that about 27% of all links cited by AI are journalistic. This is framed as a consistent pattern across Muck Rack’s previous studies, where journalism has generally accounted for 20–30% of AI citations.

However, journalism is not the only important source category. The report’s pie charts on pages 5–7 show a broader mix:

- corporate blogs and content: 24%
- aggregators and encyclopedic sources: 17.4%
- owned media: 13.7%
- government/NGO sources: 8.6%
- academic/research sources: 4%
- social/UGC: 2.9%
- press releases: 1.1%
- tech platforms: 0.9%
- paid/advertorial: 0.3%
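
To make the arithmetic concrete, the sketch below tallies category shares from a list of cited links. It assumes each link has already been labelled with a source category; the data structure, field names and sample entries are hypothetical illustrations, not the report's actual schema or methodology.

```python
from collections import Counter

# Illustrative only: each cited link is assumed to carry a pre-assigned
# source category. The labels mirror the report's breakdown, but the
# records themselves are hypothetical.
cited_links = [
    {"url": "https://example-news.com/story", "category": "journalism"},
    {"url": "https://en.wikipedia.org/wiki/Topic", "category": "aggregators/encyclopedic"},
    {"url": "https://brand.example.com/blog/post", "category": "owned media"},
    # ... the report's dataset would contain 25M+ such records
]

def citation_shares(links):
    """Return each category's share of all citations, as a percentage."""
    counts = Counter(link["category"] for link in links)
    total = sum(counts.values())
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

print(citation_shares(cited_links))
# {'journalism': 33.3, 'aggregators/encyclopedic': 33.3, 'owned media': 33.3}
```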

The implication is clear: visibility in AI-generated answers is not mainly bought through advertising. It is earned through the kinds of sources AI systems treat as credible, relevant or useful: journalism, reference sources, third-party content, government data, academic material, user-generated platforms and some owned content.

2. How do ChatGPT, Claude and Gemini differ in their citation behaviour?

The report argues that each AI provider has a distinct citation ecosystem. ChatGPT, Claude and Gemini do not simply cite the same sources at different volumes; they appear to rely on meaningfully different source environments.

ChatGPT is described as a “near-universal citer”. It includes citations in 96% of responses, but averages only about five sources per response. Claude is more selective: only 55% of its responses include citations, but when it does cite, it averages 13 sources per response. Gemini sits between the two, citing in 82% of responses and averaging eight sources per response.
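
Those two metrics (how often a model cites at all, and how many sources it cites when it does) can be derived per model roughly as sketched below. The response records and field names are hypothetical, chosen for illustration rather than taken from the report's methodology.

```python
def citation_profile(responses):
    """Share of responses that include at least one citation, and the
    average number of cited sources among the responses that do cite."""
    citing = [r for r in responses if r["num_citations"] > 0]
    cite_rate = 100 * len(citing) / len(responses)
    avg_sources = sum(r["num_citations"] for r in citing) / len(citing)
    return round(cite_rate), round(avg_sources, 1)

# Hypothetical toy data for one model; the report runs this kind of
# tally over a much larger prompt set.
responses = [
    {"model": "chatgpt", "num_citations": 5},
    {"model": "chatgpt", "num_citations": 0},
    {"model": "chatgpt", "num_citations": 6},
]
print(citation_profile(responses))  # (67, 5.5)
```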

The differences become even sharper when looking at the actual domains. ChatGPT’s top cited domain is Wikipedia, followed by Axios, YouTube, Kiplinger and Forbes. Claude’s top domain is PubMed Central, followed by Wikipedia, Quora, ScienceDirect and NerdWallet. Gemini’s top cited domain is Reddit, followed by YouTube, Quora, Wikipedia and NIH.

The report’s interpretation is that these are “three effectively separate information environments”. For PR and communications teams, this matters because AI visibility cannot be reduced to a single generic “AI search” strategy. A brand, journalist, outlet or source may matter a great deal in one model and be nearly invisible in another.

3. What determines whether journalism, press releases or other content gets cited?

The report finds that the type of question asked strongly shapes the type of source cited. Industry trend queries are especially likely to draw on journalism: 46% of industry trend responses cite journalistic sources, more than twice the rate for how-to and comparative queries.

By contrast, how-to queries are less journalism-driven. The report says AI tends to rely more on reference content and brand-owned material when users ask how to do something. Comparative evaluation and best-of queries also behave differently, often pulling in review content, platforms, maps, rankings or consumer advice sources depending on the category.

Press releases are most likely to appear in industry trend responses, but even there they remain a relatively small part of the citation mix. Around 1.16% of industry trend citations are press releases, compared with 0.33% for best-of queries, 0.27% for risk/due diligence, 0.25% for problem/discovery, 0.13% for comparative evaluation and 0.09% for how-to.
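
These per-query-type figures are essentially a cross-tabulation: for each query type, the share of its citations that are press releases. A minimal sketch, assuming each citation record carries a query type and a source type (the field names and rows are illustrative, not the report's schema):

```python
from collections import defaultdict

def press_release_share_by_query_type(citations):
    """Percentage of citations that are press releases, broken out by query type."""
    totals = defaultdict(int)
    pr_counts = defaultdict(int)
    for c in citations:
        totals[c["query_type"]] += 1
        if c["source_type"] == "press release":
            pr_counts[c["query_type"]] += 1
    return {qt: round(100 * pr_counts[qt] / totals[qt], 2) for qt in totals}

# Hypothetical sample rows
citations = [
    {"query_type": "industry trend", "source_type": "press release"},
    {"query_type": "industry trend", "source_type": "journalism"},
    {"query_type": "how-to", "source_type": "owned media"},
]
print(press_release_share_by_query_type(citations))
# {'industry trend': 50.0, 'how-to': 0.0}
```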

Recency also matters, especially for journalism. Among journalism citations with known publish dates, 57% were published within the previous 12 months. The report notes a sharp peak in the first month after publication, followed by a decline through month six and then a long tail. Older articles still matter, but the bias toward recent coverage is clear.
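
The recency pattern comes down to bucketing each cited article by the months elapsed between publication and the prompt run, then measuring how much of the distribution falls inside the first 12 months. A minimal sketch, assuming publish dates are known (the report's 57% figure covers only citations with known dates); the dates below are hypothetical:

```python
from collections import Counter
from datetime import date

def recency_profile(publish_dates, asked_on):
    """Histogram of months since publication, plus the share of citations
    published within the previous 12 months."""
    ages = [
        (asked_on.year - d.year) * 12 + (asked_on.month - d.month)
        for d in publish_dates
    ]
    within_12m = 100 * sum(1 for a in ages if a < 12) / len(ages)
    return Counter(ages), round(within_12m)

# Hypothetical example: three cited articles, prompt run in May 2026.
dates = [date(2026, 4, 10), date(2025, 11, 2), date(2023, 6, 30)]
histogram, share = recency_profile(dates, asked_on=date(2026, 5, 1))
print(histogram, share)  # Counter({1: 1, 6: 1, 35: 1}) 67
```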

4. Which sources, platforms and outlets stand out most in the report?

Several sources stand out because they behave differently from the broader pattern.

Wikipedia is especially important for ChatGPT and Claude. It appears among the top three cited domains in 12 of 17 industries for ChatGPT and 8 of 17 industries for Claude. For Gemini, it appears in the top three in only 3 of 17 industries, because Gemini is more strongly shaped by Reddit, brand domains and Q&A platforms such as Quora.

Reddit is particularly important for Gemini. The report says Reddit is Gemini’s single most-cited domain, accounting for about 2.4% of all Gemini citations. By contrast, ChatGPT cited Reddit only 16 times and Claude cited it zero times in the study.

YouTube also differs by provider. It accounts for about 2.1% of Gemini citations and about 2.0% of ChatGPT citations, while Claude returned zero YouTube citations across the study. Claude does cite other video platforms such as TikTok and Vimeo, but not at the same level.

Among journalism outlets, the standout is Axios. The report says journalism citations are spread across more than 20,000 distinct outlets, with no single publication generally dominating. Axios is the exception: it appears in ChatGPT’s top three cited domains across 13 of 17 industries. The Associated Press appears in the top three for one industry, while The New York Times and Reuters do not appear in the top three most cited sources for any industry in the report.
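
Figures such as “12 of 17 industries” or “13 of 17 industries” amount to checking, industry by industry, whether a domain sits among a model’s three most-cited domains. A minimal sketch, assuming per-industry citation counts per domain; the data and numbers below are illustrative, not figures from the report:

```python
from collections import Counter

def top3_appearances(citations_by_industry, domain):
    """Number of industries in which `domain` ranks among the three
    most-cited domains for a given model."""
    hits = 0
    for counts in citations_by_industry.values():
        top3 = {d for d, _ in counts.most_common(3)}
        if domain in top3:
            hits += 1
    return hits

# Hypothetical per-industry tallies for one model.
data = {
    "finance": Counter({"wikipedia.org": 120, "axios.com": 90, "forbes.com": 60, "reddit.com": 10}),
    "travel": Counter({"tripadvisor.com": 200, "wikipedia.org": 80, "youtube.com": 70}),
}
print(top3_appearances(data, "wikipedia.org"))  # 2
```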

5. What are the main implications for PR and communications teams?

The report’s main implication is that AI visibility is increasingly connected to earned authority across multiple information environments. Traditional media relations still matters, but not in a simple “get mentioned in top-tier media” way.

First, journalism remains important because it accounts for about 27% of all AI citations and is especially influential for industry trend queries. For brands that want to shape how AI explains what is happening in a sector, credible media coverage appears to be particularly valuable.

Second, the report suggests that citation strategy must be model-specific. A communications team optimising for ChatGPT would pay close attention to Wikipedia, Axios, YouTube and sector-specific sources. A team concerned with Gemini would need to understand Reddit, YouTube, Quora and other user-generated or community-driven environments. For Claude, academic, research, personal finance and reference-style sources appear more prominent.

Third, owned media still matters, but it is not enough on its own. Owned media accounts for 13.7% of citations, while corporate blogs and content account for 24%. The report’s distinction is important here: corporate or blog content counts as owned media only when it belongs to the company or product targeted in the query; third-party corporate/blog content is categorised as earned. This suggests that corporate content can influence AI, but third-party validation remains highly significant.
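
That categorisation rule can be expressed as a simple check on whether the citing domain belongs to the brand targeted in the query. The domain-matching approach below is an assumption about how ownership might be operationalised, not the report's stated method:

```python
def classify_corporate_content(citing_domain, brand_domains):
    """Apply the report's owned-vs-earned distinction to corporate/blog content:
    content owned by the company or product targeted in the query is owned media;
    third-party corporate/blog content counts as earned."""
    # brand_domains: domains controlled by the queried brand (an assumption
    # about how ownership could be determined in practice).
    if citing_domain in brand_domains:
        return "owned media"
    return "earned (third-party corporate/blog)"

# Hypothetical example for a query about the fictional "Acme Analytics".
acme = {"acme-analytics.example", "blog.acme-analytics.example"}
print(classify_corporate_content("blog.acme-analytics.example", acme))  # owned media
print(classify_corporate_content("some-vendor.example", acme))          # earned (third-party corporate/blog)
```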

Fourth, communications teams need to think beyond classic media lists. Depending on the sector and query type, AI may cite Google Maps, TripAdvisor, Reddit, Quora, YouTube, PubMed Central, NIH, government websites, academic databases, review sites, ranking platforms or trade publications. The relevant source ecosystem changes by industry.

Finally, the report implies that AI visibility is dynamic. The methodology section explicitly notes that generative AI systems are rapidly evolving and opaque, and that observed behaviours may shift as models are updated or retrained. So the findings should be treated as a snapshot of AI citation behaviour in May 2026, not as a permanent rulebook.