Why your AI visibility strategy can't be one-size-fits-all

  • Writer: Conor Woodhall
  • Apr 20
  • 2 min read

Visibility within Large Language Models (LLMs) like ChatGPT is shaping how companies get discovered, described and evaluated every day. One thing is becoming clear from where we sit: a one-size-fits-all approach to discoverability won't work. Each model retrieves, prioritises and processes information differently, so getting your company noticed in ChatGPT works differently from getting it surfaced in Claude or Gemini. As marketers scramble to adapt, understanding those differences is where smarter visibility begins.


Understanding how LLMs work

LLMs are trained on large datasets of text, such as books, articles and conversations. According to IBM, an LLM comes down to three things: data, architecture and training. Together, these components analyse vast amounts of data to identify patterns and generate contextually relevant responses, predicting the coherent answer a user would accept.


Each model runs this process in a slightly different way, shaped by its own training data and built-in biases. When deciding what to surface, models can draw on factors such as:

  • Trusted domains and authoritative sources

  • Frequently cited or widely referenced source material based on authority signals

  • Structured, clear, and context-rich web pages

  • Recency and freshness of information

  • Well-structured owned or earned content


For marketers, it’s not just about ranking content for keywords; it’s about being part of the information ecosystem AI trusts. That builds on the fundamentals developed through strong, consistent SEO, PR and marketing practice.
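
One of the factors above, structured and context-rich pages, is commonly supported with schema.org markup embedded as JSON-LD. As a minimal sketch only (the brand name, URLs and description below are hypothetical placeholders, not a recommendation of specific properties), a page's JSON-LD block could be generated like this:

```python
import json

# Hypothetical schema.org Organization markup: the kind of structured
# data that helps crawlers and AI systems understand who a page is about.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency",           # placeholder brand name
    "url": "https://www.example.com",   # placeholder domain
    "sameAs": [
        "https://www.linkedin.com/company/example-agency",  # placeholder profile
    ],
    "description": "A communications agency helping brands build AI visibility.",
}

# Serialise as JSON-LD, ready to embed in a
# <script type="application/ld+json"> tag in the page's <head>.
json_ld = json.dumps(organization, indent=2)
print(json_ld)
```

The exact properties worth including depend on the entity being described; the point is simply that machine-readable context sits alongside the human-readable page.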


Where LLMs are getting their information

Each AI system retrieves and prioritises sources differently. Muck Rack recently reported that, on average, Claude tends to cite smaller, more niche outlets than ChatGPT does. Claude’s top 100 most-cited media outlets attracted around 50% fewer unique monthly visitors than ChatGPT’s top 10, suggesting a preference for depth and specificity, such as industry features, over broad, high-traffic publications.


Separately, a SEMrush report found that LinkedIn ranks second among the most-cited pages across AI models, appearing in roughly 11% of AI responses.

 

AirOps analysis showed that ChatGPT left 85% of retrieved pages uncited, and that pages in Google’s top organic position were cited 3.5x more often than those ranking outside the top 20.


Visibility within AI-generated responses is shaped not only by discoverability but also by the authority, relevance, and format of the source being retrieved.


The objective remains the same: ensuring your business is visible across the right sources, topics and narratives. To do this effectively, we need to understand where your company currently sits within AI-generated responses, which sources are being cited, and where competitor or industry voices are outperforming you. Establishing this baseline shows where future strategy, content development and investment should be focused.


Bridging the visibility gap

Most companies still lack clarity about how their business is showing up in AI models, and how they can influence those models without fully submitting to the algorithm at the expense of delivering meaningful content to a human reader.


Our OSCAR AI report helps organisations understand their current visibility across AI and LLM models, and assess how these engines describe their brand, products and competitors, giving a clearer picture of their overall AI impact.


If you’re interested in learning more, speak to our team, and we can uncover how AI platforms describe and position your brand.
