Google published a policy agenda promoting a collaborative approach between private-sector AI companies and governments: a legal framework for regulations that maximizes AI opportunities while also preventing harm to civil society.
Pro-AI Legal Framework
Flexible, pro-innovation legal frameworks are a strong focus of Google’s Opportunity Agenda.
The call for a pro-innovation legal framework encourages AI development that’s relatively unencumbered by patchwork regulations from diverse agencies.
Google’s pro-innovation legal framework is based on four principles:
- Governments need interagency coordination when creating regulations, to avoid a patchwork approach in which different agencies pursue different goals in isolation.
- Governments need to support a copyright framework that allows AI developers to train AI models on publicly available data.
- Governments should adopt a risk-based approach to regulation that identifies which entities (such as developers, deployers, or users) are liable for specific harms.
- Governments should create privacy regulations that allow AI developers to use publicly available information.
AI Integration Within Government
The policy agenda not only proposes a legal framework for AI-friendly regulations; it also suggests ways governments can integrate AI into each agency, including embedding private-sector AI experts as advisers at every level of government and adopting AI technology within government itself.
The document encourages integration of AI into public services:
“Technology competitions are often won – not by the first to invent – but by the best to deploy.
…Governments, working closely with the private sector and civil society, can advance this goal by adopting AI to enhance public services and by helping small businesses access AI.”
Google’s agenda also encourages growing AI expertise at the government level:
“…every agency will need some AI expertise, governments should consider establishing a centralized resource of experts that can advise agencies across the government.”
Risk-based Approach To AI Regulation
An interesting feature that benefits AI developers is a risk-based approach to regulation that assigns liability to the entity directly responsible for a harm or best positioned to prevent it.
This means that rather than holding the creators of the technology responsible for harms that happen downstream, regulations should focus on whichever party (developer, deployer, or user) is directly responsible.
The agenda recommends:
“…adopting a risk-based approach to AI regulation…allows regulators to identify which parties (developers, deployers, or users) are most likely to have control over harm prevention and mitigation and therefore should be held accountable.”
Calls For Support For Cross-Border Data Flows & Investment
The absence of a global approach to privacy and regulation has resulted in situations where privacy and data-retention laws vary not just by country but also by region.
What the Opportunity Agenda promotes is a global approach to regulating cross-border data flow so that every step of AI development isn’t hampered by country-by-country regulations regarding data.
The ability to maximize cross-border trade and investment is also a component of Google’s encouragement of the globalization of AI regulation.
The policy agenda suggests:
“…because AI is by its nature a cross-border technology, individual policy efforts must be tethered to strong trade and investment policies that support trusted international collaboration on AI, including cross-border data flows essential to AI development and deployment.
One of the most meaningful steps that trade policymakers can take to support responsible AI is by committing to support trusted cross-border data flows.
…Given the cross-border nature of AI, enabling trade and investment frameworks will be essential for the development, deployment, and governance of AI.”
Global AI Regulation
Google’s call for governments to focus on the benefits of AI, and on how it can be harnessed for the good of citizens on a global scale, frames a balanced approach: regulations shouldn’t only be about what can’t be done.
Governments should take care to protect citizens from harm while avoiding needlessly stifling innovation.
Perhaps the big takeaway from Google’s agenda is the focus on opportunities that are on a global scale.
Read Google’s policy statement: