What will AI regulation look like for businesses?

Unlike food, medicine, and cars, AI in the US is not yet governed by clear regulations or laws that guide its design. Without standard guidelines, companies that design and develop ML models have historically worked from their own perceptions of right and wrong.

This is about to change. 

As the EU finalizes its AI Act and generative AI continues to evolve rapidly, the artificial intelligence regulatory landscape will shift from general, voluntary frameworks to more permanent laws.

The EU AI Act has spurred significant conversations among business leaders: How can we prepare for stricter AI regulations? Should we proactively design AI that meets these criteria? How soon will similar regulation be passed in the US?

Continue reading to better understand what AI regulation may look like for companies in the near future.  

How the EU AI Act will impact your business 

Like the EU’s General Data Protection Regulation (GDPR), which took effect in 2018, the EU AI Act is expected to become a global standard for AI regulation. Parliament is scheduled to vote on the draft by the end of March 2023, and if this timeline is met, the final AI Act could be adopted by the end of the year.

Although it is European regulation, the effects of the AI Act are widely expected to be felt beyond the EU’s borders (read: the Brussels effect). Organizations operating on an international scale will be required to conform to the legislation directly, while US-based and other companies outside the EU’s jurisdiction will quickly realize that it is in their best interest to comply as well.

We’re beginning to see this already with similar legislation, such as Canada’s proposed Artificial Intelligence and Data Act and New York City’s law regulating automated employment decision tools.

AI system risk categories

Under the AI Act, organizations’ AI systems will be classified into three risk categories, each with its own set of guidelines and consequences (a sketch of this triage follows the list below).

  • Unacceptable risk. AI systems in this category will be banned outright. This includes manipulative systems that cause harm, real-time biometric identification systems used in public spaces for law enforcement, and all forms of social scoring.
  • High risk. These AI systems include tools like job applicant scanning models and will be subject to specific legal requirements. 
  • Limited and minimal risk. This category encompasses many of the AI applications businesses use today, including chatbots and AI-powered inventory management tools, and will largely be left unregulated. Customer-facing limited-risk applications, however, will require disclosure that AI is being used. 
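
To make this triage concrete, here is a minimal, illustrative sketch of how a team might tag its systems against these tiers. The tier names follow the Act’s categories as described above; the example systems and obligation summaries are hypothetical placeholders, not text from the Act.

```python
# Illustrative only: tagging an internal AI inventory against the
# AI Act's draft risk tiers. Systems and obligations are hypothetical.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to specific legal requirements"
    LIMITED_OR_MINIMAL = "largely unregulated; disclose AI use to customers"

# Hypothetical internal inventory mapping each system to a tier.
inventory = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "job-applicant-scanner": RiskTier.HIGH,
    "support-chatbot": RiskTier.LIMITED_OR_MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```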

What will AI regulation look like? 

Because the AI Act is still in draft and its global effects are yet to be determined, we can’t say with certainty what regulation will look like for organizations. We do know, however, that it will vary by industry, by the type of model you’re designing, and by the risk category into which that model falls.

Regulation will likely include scrutiny by a third party, in which your model is stress-tested against the population you’re attempting to serve. These tests will evaluate questions such as ‘Is the model performing within acceptable margins of error?’ and ‘Are you disclosing the nature and use of your model?’
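
As one illustration, below is a minimal sketch of that kind of stress test: checking that a model’s error rate stays within an acceptable margin for each population segment it serves. The segment names, error figures, and threshold are all hypothetical assumptions, not values taken from the AI Act.

```python
# A minimal sketch of a third-party-style stress test: flag any
# population segment where the model's error rate exceeds an
# acceptable margin. All names and numbers are hypothetical.

ACCEPTABLE_ERROR = 0.05  # hypothetical margin of error

# Hypothetical evaluation results, broken out by population segment.
error_by_segment = {
    "segment_a": 0.03,
    "segment_b": 0.04,
    "segment_c": 0.08,  # outside the margin -> flagged for review
}

failures = {s: e for s, e in error_by_segment.items() if e > ACCEPTABLE_ERROR}
if failures:
    print(f"Model fails the stress test for: {sorted(failures)}")
else:
    print("Model performs within acceptable margins for all segments.")
```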

For organizations with high-risk AI systems, the AI Act has already outlined several requirements: 

  • Implementation of a risk-management system. 
  • Data governance and management. 
  • Technical documentation.
  • Record keeping and logging. 
  • Transparency and provision of information to users.
  • Human oversight. 
  • Accuracy, robustness, and cybersecurity.
  • Conformity assessment. 
  • Registration with the relevant EU member state government.
  • Post-market monitoring system. 

We can also expect regular reliability testing for models (similar to periodic roadworthiness inspections for cars) to become a more widespread service in the AI industry.

How to prepare for AI regulations 

Many AI leaders have already been prioritizing trust and risk mitigation when designing and developing ML models. The sooner you accept AI regulation as our new reality, the more successful you will be in the future. 

Here are just a few steps organizations can take to prepare for stricter AI regulation: 

  • Research and educate your teams on the types of regulation that will exist, and how they will impact your company today and in the future.
  • Audit your existing and planned models. Which risk category do they align with and which associated regulations will impact you most?
  • Develop and adopt a framework for designing responsible AI solutions.
  • Think through your AI risk mitigation strategy. How does it apply to existing models and to ones designed in the future? What unexpected behaviors should you account for?
  • Establish an AI governance and reporting strategy that ensures multiple checks before a model goes live (a minimal sketch of such a gate follows this list).
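
On that last point, here is a minimal sketch of a pre-deployment governance gate, where a model ships only when every required check has passed. The check names are hypothetical examples of the ‘multiple checks’ such a strategy might enforce, not requirements from any regulation.

```python
# A minimal sketch of a pre-deployment governance gate: block release
# until every required check has passed. Check names are hypothetical.

required_checks = {
    "risk_category_assigned": True,
    "bias_audit_passed": True,
    "technical_documentation_complete": False,  # blocks release
    "human_oversight_plan_approved": True,
}

blockers = [name for name, passed in required_checks.items() if not passed]
if blockers:
    print(f"Release blocked; outstanding checks: {blockers}")
else:
    print("All governance checks passed; model may go live.")
```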

In light of the AI Act and inevitable future regulation, ethical and fair AI design is no longer a “nice to have”, but a “must have”. How can your organization prepare for success?

(Photo by ALEXANDRE LALLEMAND on Unsplash)
