Google Search Advocate John Mueller responded to a question about the “Page Indexed without content” error in Search Console, explaining that the issue typically stems from server or CDN blocking rather than JavaScript.
The exchange took place on Reddit after a user reported their homepage dropped from position 1 to position 15 following the error’s appearance.
What’s Happening?
Mueller clarified a common misconception about the cause of “Page Indexed without content” in Search Console.
Mueller wrote:
“Usually this means your server / CDN is blocking Google from receiving any content. This isn’t related to anything JavaScript. It’s usually a fairly low level block, sometimes based on Googlebot’s IP address, so it’ll probably be impossible to test from outside of the Search Console testing tools.”
The Reddit user had already attempted several diagnostic steps. They ran curl commands to fetch the page as Googlebot, checked for JavaScript blocking, and tested with Google’s Rich Results Test. Desktop inspection tools returned “Something went wrong” errors, while mobile tools worked normally.
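For reference, the curl-style check the user ran can be reproduced in a few lines of Python. This is a minimal sketch, assuming the requests library and a placeholder URL. Note that it only spoofs Googlebot’s user agent string, not its IP addresses, which is exactly why a test like this can pass while the real crawler is still blocked:

```python
import requests

# Googlebot's published desktop user agent string (as documented by Google).
GOOGLEBOT_UA = (
    "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; "
    "Googlebot/2.1; +http://www.google.com/bot.html) Chrome/W.X.Y.Z Safari/537.36"
)

def fetch_as_googlebot(url: str) -> None:
    """Fetch a URL while spoofing Googlebot's user agent.

    This only changes the User-Agent header. The request still comes from
    your own IP address, so a CDN or firewall rule keyed to Googlebot's IP
    ranges will not be triggered, and the page may look fine here while the
    real crawler is blocked.
    """
    response = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=30)
    print(f"{url} -> HTTP {response.status_code}, {len(response.text)} bytes of HTML")

if __name__ == "__main__":
    fetch_as_googlebot("https://www.example.com/")  # placeholder: the affected page
```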
Mueller noted that standard external testing methods won’t catch these blocks.
He added:
“Also, this would mean that pages from your site will start dropping out of the index (soon, or already), so it’s a good idea to treat this as something urgent.”
The affected site uses Webflow as its CMS and Cloudflare as its CDN. The user reported the homepage had been indexing normally with no recent changes to the site.
Why This Matters
I’ve covered this type of problem repeatedly over the years. CDN and server configurations can inadvertently block Googlebot without affecting regular users or standard testing tools. The blocks often target specific IP ranges, which means curl tests and third-party crawlers won’t reproduce the problem.
I wrote about “indexed without content” when Google first added it to the Index Coverage report. Google’s help documentation at the time noted the status means “for some reason Google could not read the content” and specified “this is not a case of robots.txt blocking.” The underlying cause is almost always something lower in the stack.
The Cloudflare detail caught my attention. I reported on a similar pattern when Mueller advised a site owner after crawling stopped across multiple domains simultaneously. All affected sites used Cloudflare, and Mueller pointed to “shared infrastructure” as the likely culprit. The pattern here looks familiar.
More recently, I covered a Cloudflare outage in November that triggered 5xx spikes affecting crawling. That was a widespread incident. This case appears to be something more targeted, likely a bot protection rule or firewall setting that treats Googlebot’s IP addresses differently from other traffic.
Search Console’s URL Inspection tool and Live URL test remain the primary ways to identify these blocks. When those tools return errors while external tests pass, server-level blocking becomes the likely cause. Mueller made a similar point in August when advising on crawl rate drops, suggesting site owners “double-check what actually happened” and verify “if it was a CDN that actually blocked Googlebot.”
Related: 8 Common Robots.txt Issues And How To Fix Them
Looking Ahead
If you’re seeing the “Page Indexed without content” error, check the CDN and server configurations for rules that affect Googlebot’s IP ranges. Google publishes its crawler IP addresses, which can help identify whether security rules are targeting them.
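As a rough sketch of how that lookup might work, the snippet below pulls the Googlebot ranges file from the location Google documents and checks whether a given IP, say one pulled from a firewall block log, falls inside those ranges. The URL and example IP are assumptions based on Google’s published documentation and may change:

```python
import ipaddress
import json
from urllib.request import urlopen

# Location Google documents for Googlebot's crawl IP ranges (assumed current).
GOOGLEBOT_RANGES_URL = (
    "https://developers.google.com/static/search/apis/ipranges/googlebot.json"
)

def load_googlebot_networks():
    """Download and parse Googlebot's published IPv4/IPv6 ranges."""
    with urlopen(GOOGLEBOT_RANGES_URL, timeout=30) as resp:
        data = json.load(resp)
    networks = []
    for prefix in data["prefixes"]:
        cidr = prefix.get("ipv4Prefix") or prefix.get("ipv6Prefix")
        networks.append(ipaddress.ip_network(cidr))
    return networks

def is_googlebot_ip(ip: str, networks) -> bool:
    """Return True if the IP falls inside one of Googlebot's published ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

if __name__ == "__main__":
    networks = load_googlebot_networks()
    suspect_ip = "66.249.66.1"  # placeholder: an IP seen in a firewall or CDN block log
    print(suspect_ip, "is Googlebot:", is_googlebot_ip(suspect_ip, networks))
```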
The Search Console URL Inspection tool is the most reliable way to see what Google receives when crawling a page. External testing tools won’t catch IP-based blocks that only affect Google’s infrastructure.
For Cloudflare users specifically, check bot management settings, firewall rules, and any IP-based access controls. The configuration may have changed through automatic updates or new default settings rather than manual changes.
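If you have access to origin server logs, a quick way to see where the block sits is to tally the responses Googlebot’s user agent has been receiving. Here’s a minimal sketch, assuming an Nginx-style combined log format and a placeholder log path; since anything can spoof the user agent, pair this with the IP range check above:

```python
from collections import Counter

# Rough sketch: scan an origin access log in combined log format (path and
# format are assumptions) for requests whose user agent claims to be Googlebot,
# and tally the status codes they received. If those requests are getting
# 403/503 responses, or have stopped appearing entirely while normal traffic
# continues, the block is likely sitting at the CDN or firewall layer.
LOG_PATH = "/var/log/nginx/access.log"  # placeholder path

status_counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        parts = line.split('"')
        if len(parts) < 6:
            continue  # not a combined-format line
        status = parts[2].split()[0]  # status code follows the request field
        user_agent = parts[5]         # user agent is the last quoted field
        if "Googlebot" in user_agent:
            status_counts[status] += 1

print("Status codes returned to Googlebot user agents:", dict(status_counts))
```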
See also: Google Explains Reasons For Crawled Not Indexed