This week’s Ask An SEO question comes from Abhinav in New Delhi, who writes:
“How can article aggregator sites outperform the original source?
Google has various guidelines saying aggregator copy won’t perform well, and much is said about not publishing duplicate content across the web.
Yet RSS aggregator sites’ URLs are indexed, even though they’re 100% duplicate content.
Why does Google index them, knowing they’re duplicate content, and how can they possibly outperform the original sources?”
It’s frustrating when a site we deem “not worthy” ranks above ours.
I suspect SEO pros spend millions of hours each month trying to determine why a competitor’s site is outranking them for a specific query.
In the past, I’ve counseled that if you are spending more time worrying about what your competitors are doing than optimizing your own site, you’re doing it wrong.
But I also understand the frustration of watching another site outrank you when you feel like your site is better and your optimization scheme is superior.
It’s Probably Not What You Think
I’ve been analyzing sites from an SEO perspective for 23+ years.
I am more than happy to provide an educated opinion on why one site ranks better than another.
But more than half the time, my guess is wrong.
Many variables go into ranking a site for a specific query.
And while sometimes it’s obvious what’s putting one site on top of another, other times it’s impossible to determine the exact cause of a ranking from the data Google or Bing make available.
We all know that “I don’t know” isn’t an acceptable answer to most clients or bosses – at least not for long.
Trust me, I’m the king of guesses when it comes to rankings – you’d swear I know exactly what’s causing a site to rank, even though, in most cases, I’m offering an educated guess.
For example, what you describe as an RSS site may actually be a popular hub in a specific industry.
This hub may have a significant backlink profile, even if they are just aggregating content.
Remember, content aggregators are popular for a reason.
Aggregators, by definition, use duplicate content – in most cases, duplicate content that did not originate from the aggregator site.
Google knows this.
But that doesn’t mean that Google automatically classifies all content on an aggregator site as inferior.
In fact, most experienced SEO pros can tell horror stories of being outranked by a site that simply scraped their content.
It’s frustrating that Google doesn’t always know the source of content.
Schema markup and other tools can help Google recognize your content as the original that deserves to rank – but sometimes, it’s not possible.
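As a rough illustration, here is a minimal sketch of the kind of Article structured data (schema.org JSON-LD) that can help search engines attribute content to its source, expressed as a small TypeScript snippet. Every value shown (the headline, author, date, and example.com URL) is a hypothetical placeholder, and this is one common signal, not a guaranteed fix.

```typescript
// A minimal sketch, assuming a hypothetical original article at
// https://example.com/original-article. The property names (headline,
// datePublished, author, mainEntityOfPage) are real schema.org Article
// properties; all values are placeholders.
const articleJsonLd = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "Your Original Headline",
  // An accurate first-published timestamp supports your claim to originality.
  datePublished: "2024-01-15T08:00:00Z",
  author: {
    "@type": "Person",
    name: "Your Name",
  },
  // The canonical URL of the page where the article originally lives.
  mainEntityOfPage: "https://example.com/original-article",
};

// Serialize for embedding in a <script type="application/ld+json"> tag.
console.log(JSON.stringify(articleJsonLd, null, 2));
```

Pairing this with a rel=canonical link tag on the page gives search engines a second, consistent signal about where the content originated. Neither guarantees the original will win, but both make attribution easier.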
What To Do
I advocate for being a decent human when dealing with other webmasters – at least until they give me a reason to bring out my more aggressive alter ego.
Find the contact information for the site using your duplicate content and send them a note asking them to take it down.
Experience has shown me that they will remove the content around half of the time.
Most folks running these sites are cutting some corners, but they frequently don’t think about the impact their actions can have on other sites.
Once they are aware of their transgressions, many will make things right.
The other way to ensure your site ranks ahead of a copycat is to work on all SEO aspects of your own site.
Simply be better than the site stealing your content.
You may not know the exact reason a site is outranking you, but if you continue to create quality content, build quality links, keep your technical SEO up to date, and execute an effective keyword and content strategy, you might eventually outrank the aggregator site.
Have patience and have faith in your SEO program.
SEO is a grind.
Embrace the grind and keep pushing forward.
Good things will come.