Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) published its 2026 AI Index Report. The report runs over 400 pages across nine chapters covering technical performance, investment, workforce effects, and public sentiment.

The number getting the most attention is that generative AI reached 53% adoption among the global population within three years of ChatGPT’s launch. That’s faster than either the personal computer or the internet reached comparable levels.

For anyone working in search, the report contains data that connects directly to the changes you’ve been navigating all year.

What The Report Found

This is the ninth annual AI Index, and it covers a lot of ground. A few findings matter most for the search industry.

In terms of capability, frontier models now exceed human performance on PhD-level science questions and in competitive mathematics. AI agents handling real-world tasks improved from a 20% success rate in 2025 to 77% today. Coding benchmarks that models struggled with a year ago are now nearly solved.

On investment, global corporate AI investment hit $581 billion in 2025, up 130% from the prior year. US private AI investment reached $285 billion. More than 90% of frontier models now come from private companies, not academic labs.

Regarding workforce effects, employment among software developers aged 22 to 25 has dropped by nearly 20% since 2024. A similar pattern appeared in customer service and other roles with higher AI exposure.

Transparency is declining. The Foundation Model Transparency Index fell from 58 to 40. The most capable models now disclose the least about their training data, parameters, and methods. Of the 95 most notable models launched last year, 80 were released without their training code.

The Adoption Number Everyone Is Citing

Understanding what the 53% figure includes, and what it doesn’t, matters for how you interpret it.

The comparison to PCs and the internet is based on research by the St. Louis Fed, Vanderbilt, and Harvard Kennedy School. The team compared adoption rates by years since each technology’s first mass-market product. The IBM PC launched in 1981. Commercial internet traffic opened in 1995. ChatGPT launched in November 2022.

At comparable points after launch, generative AI adoption runs well ahead of both earlier technologies.

But the comparison isn’t apples-to-apples, and the researchers said so themselves. Harvard’s David Deming pointed out that AI is built on top of PCs and the internet. People already had the hardware and the connectivity. Nobody needed to buy new equipment or wait for connectivity to reach their area. AI adoption rode on decades of prior technology investment.

Adoption numbers also vary depending on who’s counting and how. The Stanford report puts US adoption at 28%, ranking the country 24th globally. The St. Louis Fed’s own tracker puts US adoption at 54% as of August 2025. Same country, nearly double the rate, measured differently. The Fed team even revised its earlier estimate upward from 39% to 44% after changing the order of its survey questions.

“Adoption” also doesn’t distinguish intensity. Someone who signed up for a free ChatGPT account and tried it once counts the same as someone who uses it eight hours a day. The Stanford report notes that most users access free or near-free tiers. That’s a different picture than the one the headline number implies.

None of this means the adoption data is wrong. Generative AI is spreading faster than comparable technologies did at the same stage. But the speed of adoption alone doesn’t tell you how deeply it’s embedded in workflows or how much it’s changing search behavior specifically.

The Jagged Frontier

The report’s most useful concept for search professionals might be its “jagged frontier” of AI capability.

The same models that win gold at the International Mathematical Olympiad read analog clocks correctly only 50% of the time. IEEE Spectrum reported that Claude Opus 4.6 scores at the top of Humanity’s Last Exam while reading clocks at just 8.9% accuracy. Models that ace PhD-level science questions still struggle with video understanding and multi-step planning.

Ray Perrault, co-director of the AI Index steering committee, told IEEE Spectrum that benchmarks don’t map cleanly to real-world results. Knowing a model scores 75% on a legal reasoning benchmark “tells us little about how well it would fit in a law practice’s activities,” he said.

Search professionals have seen similar unevenness in AI search products. Ahrefs research showed that AI Mode and AI Overviews cite different URLs for the same queries, with only 13% overlap. Google’s Robby Stein acknowledged that the system pulls AI Overviews back when people don’t engage with them. Those signals suggest AI search performance is uneven across contexts, even if Google hasn’t fully explained where those differences are most pronounced.

Stanford’s data suggest that strong benchmark performance doesn’t guarantee reliable results across all tasks or query types. Whether that unevenness improves with future models is an open question the report doesn’t answer.

What’s Happening To Transparency

What the report says about transparency connects directly to search.

The Foundation Model Transparency Index dropped from 58 to 40 in a single year. The most capable models score lowest. Google, Anthropic, and OpenAI have all stopped disclosing dataset sizes and training duration for their latest models. Eighty of the 95 most notable models launched in 2025 shipped without training code.

TechCrunch noted a disconnect between expert optimism about AI and public anxiety about it. The US reported the lowest trust in its government’s ability to regulate AI among the countries surveyed, at 31%.

For context on the index itself, a drop from 58 to 40 could indicate that companies are becoming more secretive. It could also reflect that the index penalizes closed-source models by design, and the most capable models happen to be closed-source. Both explanations can be true at the same time.

What matters for practitioners is the implication. The models powering AI Overviews, AI Mode, and ChatGPT Search are getting more capable and less explainable simultaneously. You’re optimizing for systems where the companies building them are sharing less about how they work, not more.

The report’s acknowledgments disclose that Stanford HAI receives financial support from Google, OpenAI, and others, and that the report was produced with assistance from ChatGPT and Claude.

The Entry-Level Question

Employment among software developers aged 22 to 25 dropped nearly 20% since 2024, according to the report. Older developers’ headcounts grew over the same period. A similar pattern appeared in customer service roles.

At first glance, that looks like AI replacing entry-level work. But the report included a caveat that complicates that conclusion. Unemployment is rising across many occupations, and workers least exposed to AI have seen it rise more than those most exposed.

That doesn’t rule out AI as a factor. It means the 20% decline could reflect AI displacement, broader hiring slowdowns, companies restructuring their entry-level hiring, or all three at once. The report presents correlation, not causation.

For search and content teams, the signal is directional even if the cause is mixed. The Stanford data is consistent with what the Tufts AI Jobs Risk Index showed earlier this year. Roles that involve assembling information from existing sources face more pressure than roles that require judgment, experience, and original analysis.

Why This Matters For Search Professionals

Even with its caveats, the adoption speed explains the pace of what you’ve been seeing.

Google expanded AI Overviews to 1.5 billion monthly users by Q1 2025. AI Mode reached 75 million daily active users by Q3 2025, then went global. Google expanded Search Live to 200+ countries. Personal Intelligence rolled out to free US users this year.

The adoption curve helps explain why Google has been expanding AI search features at this pace. It doesn’t tell us how much of that usage is happening inside search rather than standalone AI tools.

The “jagged frontier” means you can’t make blanket assumptions about AI search quality across query categories. A query type that returns accurate AI Overviews today might produce hallucinated answers when the wording varies slightly. Monitoring needs to happen at the query level, not the category level. Search Console doesn’t currently separate AI Overview or AI Mode performance from traditional search metrics, which makes this harder.
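Since no first-party tool breaks this out today, query-level monitoring generally means logging your own observations. The sketch below is one minimal, hypothetical way to structure that: record whether an AI Overview appeared each time you check a query, then compute a per-query appearance rate. The class name, query strings, and observations are all illustrative, not part of any Google API.

```python
from collections import defaultdict

class AIOverviewTracker:
    """Hypothetical per-query log of AI Overview appearances."""

    def __init__(self):
        # Maps each query string to a list of True/False observations.
        self.observations = defaultdict(list)

    def record(self, query, ai_overview_shown):
        """Log one check: did an AI Overview appear for this query?"""
        self.observations[query].append(bool(ai_overview_shown))

    def rate(self, query):
        """Fraction of checks where an AI Overview appeared, or None."""
        obs = self.observations[query]
        return sum(obs) / len(obs) if obs else None

# Illustrative usage with made-up queries and results.
tracker = AIOverviewTracker()
tracker.record("best crm for small business", True)
tracker.record("best crm for small business", False)
tracker.record("python csv tutorial", True)

print(tracker.rate("best crm for small business"))  # 0.5
print(tracker.rate("python csv tutorial"))          # 1.0
```

Tracking at this granularity, rather than by category, is what surfaces the unevenness the report describes: two queries in the same topic bucket can show very different AI Overview behavior.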

The decline in transparency affects how well you can understand why your content appears or doesn’t appear in AI-generated answers. When Google shares less about the models powering its search features, the feedback loop between what you publish and what gets surfaced becomes harder to read.

Speaking at SEJ Live, Shelley Walsh referenced Grant Simmons’s concept of “golden knowledge”: content built on original data, firsthand experience, and depth that AI summaries can’t replicate from training data. The Stanford report’s data on adoption speed and model limitations support that position. The models are fast and widely used, but they’re uneven. Content that fills the gaps where AI is unreliable has a structural advantage.

What The Report Doesn’t Tell Us

The Stanford report doesn’t break out search-specific adoption data. We don’t know what percentage of that 53% uses AI via search specifically, rather than via ChatGPT, Gemini, or other standalone tools.

Google’s AI search usage numbers are limited. The company reported that AI Overviews reached 1.5 billion monthly users in Q1 2025, and AI Mode reached 75 million daily active users in Q3 2025. Updated figures may come at Alphabet’s next earnings call.

The report also can’t tell us whether the jagged frontier problem is improving or worsening in search applications. The benchmark data shows models improving overall, but the clock-reading example shows that improvement isn’t uniform. Whether AI Overviews and AI Mode are getting more reliable for the specific queries that matter to your business requires your own monitoring, not aggregate benchmark data.

Looking Ahead

The Stanford report lands one week after Google’s March core update completed. Alphabet’s next earnings call will likely include updated AI search usage numbers.

The adoption data doesn’t predict what search will look like by year-end. But it does confirm that AI-first behavior isn’t speculative anymore. The question is whether Google’s AI search products will get reliable enough to match the pace of adoption.

By Rose Milev