In a recent episode of Google’s Search Off The Record podcast, team members got hands-on with Gemini to explore creating SEO-related content.

However, their experiment raised concerns over factual inaccuracies when relying on AI tools without proper vetting.

The discussion featured Lizzi Harvey, Gary Illyes, and John Mueller taking turns using Gemini to write sample social media posts about technical SEO concepts.

As they analyzed Gemini’s output, Illyes highlighted a limitation shared by all AI tools:

“My bigger problem with pretty much all generative AI is the factuality – you always have to fact check whatever they are spitting out. That kind of scares me that now we are just going to read it live, and maybe we are going to say stuff that is not even true.”

Outdated SEO Advice Exposed

The concerns stemmed from an AI-generated tweet suggesting the use of rel="prev"/rel="next" for pagination – a technique Google has deprecated.

Gemini suggested publishing the following tweet:

“Pagination causing duplicate content headaches? Use rel=prev, rel=next to guide Google through your content sequences. #technicalSEO, #GoogleSearch.”

Harvey immediately identified the advice as outdated, and Mueller confirmed that rel=prev and rel=next are no longer supported:

“It’s gone. It’s gone. Well, I mean, you can still use it. You don’t have to make it gone. It’s just ignored.”
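For context, this is what the pagination markup in question looks like in a page’s head section – an illustrative sketch with placeholder URLs, not a recommendation, since Google now ignores these hints:

  <!-- Page 2 of a paginated series; Google no longer uses these hints for indexing -->
  <link rel="prev" href="https://example.com/articles?page=1">
  <link rel="next" href="https://example.com/articles?page=3">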

Earlier in the podcast, Harvey had warned that inaccuracies could result from outdated information in the training data.

Harvey stated:

“If there’s enough myth circulating or a certain thought about something or even outdated information that has been blogged about a lot, it might come up in our exercise today, potentially.”

Sure enough, it took only a short time for outdated information to come up.

Human Oversight Still Critical

While the Google Search Relations team saw potential in using AI for content creation, their discussion stressed the need for human fact-checking.

Illyes’ concerns reflect the broader discourse around responsible AI adoption. Human oversight is necessary to prevent the spread of misinformation.

As generative AI use increases, remember that its output can’t be blindly trusted without verification from subject matter experts.

Why SEJ Cares

AI-powered tools can aid in content creation and analysis, but as Google’s own team illustrated, a healthy degree of skepticism is warranted.

Blindly deploying generative AI to create content can result in publishing outdated or harmful information that could negatively impact your SEO and reputation.

FAQ

How can inaccurate AI-generated content affect my SEO efforts?

Using AI-generated content for your website can be risky for SEO because the AI might include outdated or incorrect information.

Search engines like Google favor high-quality, accurate content, so publishing unverified AI-produced material can hurt your website’s search rankings. For example, if the AI promotes outdated practices like using the rel=”prev/next” tag for pagination, it can mislead your audience and search engines, damaging your site’s credibility and authority.

It’s essential to carefully fact-check and validate AI-generated content with experts to ensure it follows current best practices.

How can SEO and content marketers ensure the accuracy of AI-generated output?

To ensure the accuracy of AI-generated content, companies should:

  • Have a thorough review process involving subject matter experts
  • Have specialists check that the content follows current guidelines and industry best practices
  • Fact-check any data or recommendations from the AI against reliable sources
  • Stay updated on the latest developments to identify outdated information produced by AI


Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, April 2024. 




By Rose Milev

