Wednesday, February 8, 2023
At Google, we’ve long believed in the power of AI to transform the ability to deliver helpful
information. In this post, we’ll share more about how AI-generated content fits into our
long-standing approach to show helpful content to people on Search.
Rewarding high-quality content, however it is produced
Google’s ranking systems aim to reward original, high-quality content that demonstrates qualities
of what we call E-E-A-T: expertise, experience, authoritativeness, and trustworthiness. We share
more about this in our How Search Works site.
Our focus on the quality of content, rather than how content is produced, is a useful guide that has helped us deliver reliable, high-quality results to users for years.
For example, about 10 years ago, there were understandable concerns about a rise in mass-produced
yet human-generated content. No one would have thought it reasonable for us to declare a ban on
all human-generated content in response. Instead, it made more sense to improve our systems to
reward quality content, as we did.
Focusing on rewarding quality content has been core to Google since we began. It continues today, including through our ranking systems designed to surface reliable information and our helpful content system. The helpful content system was introduced last year to better ensure those searching get content created primarily for people, rather than for search ranking purposes.
How automation can create helpful content
When it comes to automatically generated content, our guidance has been consistent for years.
Using automation—including AI—to generate content with the primary purpose of manipulating
ranking in search results is a violation of our spam policies.
Google has many years of experience dealing with automation being used in an attempt to game search
results. Our spam-fighting efforts—including our SpamBrain system—will
continue, however spam is produced.
That said, it's important to recognize that not all use of automation, including AI generation, is
spam. Automation has long been used to generate helpful content, such as sports scores, weather
forecasts, and transcripts. AI has the ability to power new levels of expression and creativity,
and to serve as a critical tool to help people create great content for the web.
This is in line with how we’ve always thought about empowering people with new technologies. We’ll
continue taking this responsible approach, while also maintaining a high bar for information quality
and the overall helpfulness of content on Search.
Our advice for creators considering AI generation
As explained, however content is produced, those seeking success in Google Search should be looking to produce original, high-quality, people-first content demonstrating the qualities of E-E-A-T.
Creators can learn more about the concept of E-E-A-T on our Creating helpful, reliable, people-first content help page. In addition, we've updated that page with some guidance about thinking in terms of Who, How, and Why in relation to how content is produced.
Evaluating your content in this way, whether you’re using AI-generated content or not, will help
you stay on course with what our systems seek to reward.
FAQ
To further help, here are some answers to questions you may have about AI content and Google Search:
Is AI content against Google Search’s guidelines?
Appropriate use of AI or automation is not against our guidelines. Use is appropriate as long as the content is not generated primarily to manipulate search rankings, which is against our spam policies.
Why doesn’t Google Search ban AI content?
Automation has long been used in publishing to create useful content. AI can assist with and
generate useful content in exciting new ways.
How will Google Search prevent poor quality AI content from taking over search results?
How will Google address AI content that potentially propagates misinformation or contradicts
consensus on important topics?
These issues exist in both human-generated and AI-generated content. However content is produced,
our systems look to surface high-quality information
from reliable sources, and not information that contradicts well-established consensus on important
topics. On topics where information quality is critically important—like health, civic, or
financial information—our systems place an even greater emphasis on signals of reliability.
How can Search determine if AI is being used to spam search results?
We have a variety of systems, including SpamBrain,
that analyze patterns and signals to help us identify spam content, however it is produced.
Will AI content rank highly on Search?
Using AI doesn't give content any special advantage in ranking. However it is produced, content can rank well if it is helpful, original, and demonstrates the qualities of E-E-A-T; if it doesn't, it may not.
Should I use AI to generate content?
If you see AI as an essential way to help you produce content that is helpful and original, it
might be useful to consider. If you see AI as an inexpensive, easy way to game search engine
rankings, then no.
Should I add author bylines to all my content?
You should consider having accurate author bylines when readers would reasonably expect it, such
as to any content where someone might think, “Who wrote this?”
As a reminder, publishers that appear in Google News should use bylines and author information. Learn more on our Google News policies page.
Should I add AI or automation disclosures to my content?
AI or automation disclosures are useful for content where someone might think, "How was this created?"
Consider adding these when it would be reasonably expected.
Can I list AI as the author of content?
Giving AI an author byline is probably not the best way to follow our recommendation to make clear to readers when AI is part of the content creation process.