Google Repeats Advice On Not Trusting AI-Generated Answers

In a recent LinkedIn post, Google’s Gary Illyes advises users of large language models (LLMs) to check AI-generated answers against authoritative sources.

Gary Illyes repeated recent advice from Google’s John Mueller on using authoritative sources and personal knowledge to cross-check LLM answers, saying: 

  • “Based on their training data, LLMs find the most suitable words, phrases, and sentences that align with a prompt’s context and meaning.”
  • “This allows them to generate relevant and coherent responses. But not necessarily factually correct ones.”

Large language models recap

Large language models (LLMs) are pre-trained machine-learning models that use vast amounts of data (hence the name “large”) to understand and generate text, for tasks such as answering questions.

LLMs generate this text using a neural network architecture called a transformer, which relies on self-attention (and, in the original design, an encoder and a decoder) to weigh how strongly each word in a sequence relates to every other word.
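
For a concrete picture of what self-attention does, here is a minimal, illustrative sketch of a single scaled dot-product attention step in Python. Every name, size, and random weight is a toy choice for demonstration, not anything from Google’s systems; real transformers stack many such layers with learned weights.

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # Project each token embedding into query, key, and value vectors.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # Scaled dot-product: each token scores its relevance to every other token.
        scores = (Q @ K.T) / np.sqrt(K.shape[-1])
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        # Softmax turns the scores into attention weights that sum to 1 per token.
        weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
        # Each token's output is a weighted blend of all tokens' value vectors.
        return weights @ V

    rng = np.random.default_rng(0)
    d = 8                                # toy embedding dimension
    X = rng.normal(size=(4, d))          # 4 "tokens", each an 8-dim embedding
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)

This weighting step is what lets a model pick “the most suitable words, phrases, and sentences” for a prompt: likelihood under the training data, not factual accuracy, drives the choice.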

The grounding strategy

Illyes’s answer to the LinkedIn question explains how LLMs find suitable text that aligns with a prompt to generate relevant and coherent responses. Still, he emphasizes that those responses are not necessarily factually correct.

Gary also mentions grounding, a technique that can increase an LLM’s factual accuracy by connecting the model to real-world knowledge and authoritative facts. He cautions, however, that while grounding reduces errors, it isn’t perfect.
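
To make the grounding idea concrete, here is a minimal retrieval-augmented generation (RAG) sketch in Python. Nothing in it reflects Google’s actual implementation: the tiny in-memory source list stands in for a real index of authoritative documents, and llm_generate is a hypothetical stub for whichever model you call.

    # Hypothetical scaffolding for grounding, not a real API.
    SOURCES = {
        "robots.txt": "robots.txt controls crawling, not indexing.",
        "noindex": "A noindex rule removes a page from Google's index.",
    }

    def retrieve_facts(question: str) -> list[str]:
        # Stand-in retriever: return vetted passages whose key terms appear
        # in the question. A real system would query a search index.
        return [fact for term, fact in SOURCES.items() if term in question.lower()]

    def llm_generate(prompt: str) -> str:
        # Stub: in practice, send the prompt to an actual LLM here.
        return f"[model answer constrained to:\n{prompt}]"

    def grounded_answer(question: str) -> str:
        facts = retrieve_facts(question)
        # Injecting retrieved facts into the prompt is the grounding step:
        # the model completes against real-world text, not training data alone.
        prompt = (
            "Answer using ONLY the sources below; if they don't cover the "
            "question, say so.\n"
            + "\n".join(f"- {fact}" for fact in facts)
            + f"\nQuestion: {question}"
        )
        return llm_generate(prompt)

    print(grounded_answer("Does robots.txt remove pages from Google's index?"))

The design point is that the model is asked to answer from retrieved, vetted text rather than its training data alone, which is why grounding reduces errors but still can’t guarantee correctness.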

Gary finished his answer with a bit of humor, saying:

  • “Alas. This post is also online, and I might be an LLM. Eh, you do you.”

Google repeats caution with AI-generated answers

Gary’s reply came a few days after Google’s senior search analyst, John Mueller, recommended against relying on LLMs for SEO advice.

Looking forward

Gary’s and John’s answers remind publishers that, when it comes to content, relevance and usefulness come before authoritativeness.

Mueller’s recent admission that Google can rank Redditors above industry experts, based on the usefulness of their information rather than its authoritativeness, confirms one way Google currently ranks content.

So, when using LLMs to create content meant to rank in Google, publishers must double-check that content against real-world, factual knowledge, not AI fiction.

Terry O'Toole

Terry is a seasoned content marketing specialist with over six years of experience writing content that helps small businesses navigate the intersection of business and marketing: SEO, social media marketing, and more. Terry has a proven track record of creating top-performing content in search results. When he is not writing content, Terry can be found on his boat in Italy or chilling in his villa in Spain.
