EY report sheds new light on global AI regulatory environment

A new EY analysis illuminates the global regulatory landscape for AI.


The global AI regulatory landscape, as reported by Ernst & Young (EY), one of the Big Four accounting firms, is attracting renewed interest following President Biden’s signing of a broad executive order on Monday that seeks to monitor and regulate the risks associated with artificial intelligence while also maximizing its potential.

Last month, EY released a paper titled "The Artificial Intelligence (AI) worldwide regulatory landscape: Policy trends and considerations to foster trust in AI." Its objective is to make the global AI regulatory environment more understandable for businesses and policymakers, giving them a road map for navigating this challenging terrain.

The study is based on an examination of eight key jurisdictions (Canada, China, the European Union, Japan, Korea, Singapore, the United Kingdom, and the United States) that have demonstrated notable AI legislative and regulatory activity.


It indicates that these nations share many of the same goals and strategies for AI governance, despite their distinct cultural and legislative environments. Their aim of minimizing potential harms from AI while maximizing its benefits to society aligns with the OECD AI Principles, which prioritize human rights, transparency, risk management, and other ethical considerations, and which the G20 endorsed in 2019.

The global regulatory environment for AI is dynamic and evolving.
Notably, the executive order President Biden signed earlier this week has already overtaken the EY report's account of US AI regulation. Most experts regard the executive order as the most significant government action on AI to date, and it goes beyond the industry-specific regulations and voluntary guidelines that the study characterized as the US approach.

The executive order builds on voluntary commitments made earlier this year by 15 tech firms, including Microsoft and Google, to allow third-party testing of their AI systems before public release and to develop explicit methods for identifying AI-generated material.

Additionally, the White House released a non-binding "AI Bill of Rights" last year that offered businesses guidelines for safeguarding consumers affected by automated systems.

As our lead AI reporter Sharon Goldman detailed earlier this week, the executive order will compel developers of powerful AI systems to share the results of their safety tests with the federal government before public release and to notify the government if their AI models pose risks to national security, the economy, or public health. The directive also covers other areas, including immigration, biotechnology, labor, and content control.

Other significant developments since the EY study's publication last month are also reshaping the worldwide regulatory landscape for AI. For instance, the UK government has released an AI White Paper outlining its proposed framework for regulating AI. The UK framework aligns with the EU's approach and is founded on four principles: proportionality, accountability, openness, and ethics.

These events demonstrate how dynamic and fast-moving the global AI regulatory landscape is, and how important it is for businesses and governments to stay informed about the latest trends and best practices. The EY report remains a useful tool for understanding and navigating that landscape, but it may need to be updated as new laws and initiatives take effect.
