Canadian fiddler Ashley MacIsaac has filed a civil lawsuit against Google, alleging an AI Overview falsely identified him as a convicted sex offender. The lawsuit could test how courts treat liability for false AI-generated search summaries.

The statement of claim, filed in February with the Ontario Superior Court of Justice, seeks at least $1.5 million in damages from Google LLC. None of the allegations has been proven in court.

What The Lawsuit Alleges

MacIsaac, a Juno Award-winning musician, says he learned of the false summary in December 2025 after the Sipekne’katik First Nation confronted him with it and cancelled one of his concerts. The First Nation later issued a public apology.

According to the filing, the AI Overview falsely stated MacIsaac had been convicted of sexual assault, internet luring involving a child, and assault causing bodily harm, and wrongly claimed he’d been listed on the national sex offender registry.

The lawsuit argues Google is liable for the output its AI system generated, stating that Google “knew, or ought to have known, that the AI overview was imperfect and could return information that was untrue.”

It also alleges Google didn’t admit responsibility, didn’t reach out to MacIsaac, and didn’t offer an apology or retraction.

The filing makes a direct argument about AI liability:

“If a human spokesperson made these false allegations on Google’s behalf, a significant award of punitive damages would be warranted. Google should not have lesser liability because the defamatory statements were published by software that Google created and controls.”

MacIsaac said Google must take responsibility for what AI Overviews display. “This was not a search engine just scanning through things and giving somebody else’s story,” he said.

Google’s Response

Google hasn’t commented on the lawsuit. In December, spokesperson Wendy Manton said AI Overviews are “dynamic and frequently changing” and that when the feature misinterprets web content, Google uses those cases to improve its systems. The false summary tying MacIsaac to criminal offences no longer appears.

Why This Matters

AI Overviews can appear in Google search results as AI-generated snapshots with links to more information. Google’s Search Help documentation says AI responses may include mistakes.

When those summaries display false claims about real people, the consequences can extend beyond a bad search result. In MacIsaac’s case, the lawsuit alleges the AI Overview led to a cancelled concert and reputational harm.

MacIsaac’s case isn’t the first time AI-generated content has led to defamation allegations. In 2023, an Australian mayor threatened legal action after ChatGPT falsely claimed he’d been imprisoned for bribery. MacIsaac’s lawsuit, by contrast, targets Google’s AI Overviews directly and argues the product had a defective design.

The case adds to a growing legal question around AI-generated content: whether platforms are responsible when automated summaries present false claims as search results.

Looking Ahead

The case is at the statement-of-claim stage, and Google hasn’t filed a response. Until then, the core questions are unresolved: whether Google will contest liability, how it will characterize AI Overview output, and how the court will treat automated summaries in a defamation claim.



By Rose Milev
