YouTube creators will soon need to comply with new platform policies around content generated or altered with AI.

The requirements, detailed in the following sections, aim to balance opportunities presented by AI with user safety.

Mandatory Labels & Disclosures

A major change requires creators to inform viewers when content contains realistic AI-generated alterations or synthetic media depicting events or speech that didn’t occur.

This includes deepfakes that make someone appear to do or say something they never did.

Labels disclosing altered or synthetic content will be mandatory in the video's description panel. YouTube shared mockups showing how these disclosures could look.

For sensitive subjects like elections, disasters, public officials, and conflicts, an additional prominent label may be required directly on the video player.

YouTube says creators who consistently fail to comply with the disclosure requirements may face consequences ranging from video removal to account suspension or removal from the YouTube Partner Program. The company has promised to work closely with creators before the rollout so the new requirements are clearly understood.

New Removal Request Options

YouTube will allow people to request the removal of AI-generated content featuring an identifiable individual’s face or voice without consent. This includes deepfakes imitating unique vocal patterns or appearances using AI generation.

Music partners will soon be able to request takedowns of AI-generated music that imitates an artist's singing or rapping voice. When evaluating removal requests, YouTube says it will weigh factors such as whether the content is parody, serves the public interest, or involves a newsworthy subject.

Improved Content Moderation With AI

YouTube disclosed that it already uses AI to assist human reviewers with moderation, including machine learning systems that rapidly identify emerging forms of abuse at scale.

Generative AI helps expand training data, allowing YouTube to catch new threat types faster and reduce harmful content exposure for reviewers.
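As a rough illustration of that idea, here is a minimal, purely hypothetical sketch of how generated variants of known harmful examples might be folded into a moderation classifier's training data. The helper names and the generation step are assumptions for illustration only, not a description of YouTube's actual pipeline.

```python
def paraphrase_variants(example: str, n: int = 3) -> list[str]:
    """Placeholder for a generative model that produces paraphrased variants
    of a known harmful example (an assumed capability, not a real API)."""
    return [f"{example} (variant {i})" for i in range(n)]


def expand_training_data(known_harmful: list[str]) -> list[tuple[str, int]]:
    """Augment labeled training data with generated variants so a moderation
    classifier can learn to recognize new phrasings of the same abuse."""
    dataset = [(text, 1) for text in known_harmful]  # label 1 = harmful
    for text in known_harmful:
        dataset.extend((variant, 1) for variant in paraphrase_variants(text))
    return dataset


# Example: three seed examples become twelve labeled training rows.
print(len(expand_training_data(["scam offer A", "scam offer B", "scam offer C"])))  # 12
```

The point of the sketch is simply that each confirmed example of abuse can seed many synthetic training examples, which is how generated data could help a classifier catch new threat types sooner.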

Responsible Development Of New AI Tools

YouTube emphasized responsibility over speed in developing new AI creator tools. Work is underway on guardrails that prevent its AI systems from generating content that violates policy.

The company is focused on learning and improving protections through user feedback and adversarial testing to address inevitable abuse attempts.

New Policy Enforcement

While specifics around enforcement weren’t revealed, YouTube has several options to ensure compliance with the new requirements.

The company will likely employ a mix of human and automated enforcement.

One way YouTube could enforce the policy is by training its existing content moderation systems to flag videos that show the hallmarks of AI-generated media but lack the required disclosure.

Random audits of partner accounts uploading AI content could also catch violations.

Crowdsourcing enforcement by allowing user-reporting of undisclosed AI material would be another way to uphold the policy.
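To make the first of those options concrete, here is a minimal sketch, in Python, of how such a check might combine an AI-detection score with the presence of a disclosure label. The threshold, field names, and the idea of a numeric detection score are assumptions made for illustration; YouTube has not described how its systems would actually work.

```python
from dataclasses import dataclass

# Hypothetical threshold above which a video is treated as likely AI-generated.
AI_SCORE_THRESHOLD = 0.8


@dataclass
class Video:
    video_id: str
    ai_likelihood: float      # score from an assumed AI-content classifier, 0.0 to 1.0
    has_ai_disclosure: bool   # whether the creator added the new disclosure label


def needs_review(video: Video) -> bool:
    """Flag videos that look AI-generated but carry no disclosure label."""
    return video.ai_likelihood >= AI_SCORE_THRESHOLD and not video.has_ai_disclosure


# Example: a high-scoring video without a disclosure would be queued for human review.
print(needs_review(Video("abc123", ai_likelihood=0.92, has_ai_disclosure=False)))  # True
```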

However YouTube goes about it, consistent enforcement will be essential to setting expectations and norms around disclosure.

Looking Ahead

YouTube expressed excitement about AI's creative potential, tempered by wariness of its risks. The company intends to build a mutually beneficial AI future with the creator community.

The full policy update provides creators with additional details on what to expect. Staying informed on YouTube’s evolving rules is vital to keep your account in good standing.


Featured Image: icons gate/Shutterstock





By Rose Milev

