The UK government has unveiled a new regulatory framework for AI, aimed at promoting innovation while maintaining public trust.
Michelle Donelan, Science, Innovation, and Technology Secretary, said: “AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.
“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”
The framework, set out in the AI regulation white paper, is based on these five principles:
- Safety – Ensuring that applications function in a secure, safe, and robust manner.
- Transparency and explainability – Organisations that deploy AI should communicate when and how it’s used, and be able to explain a system’s decision-making process.
- Fairness – Ensuring compatibility with the UK’s existing laws, including the Equality Act 2010 and UK GDPR.
- Accountability and governance – Introducing measures to ensure appropriate oversight of AI.
- Contestability and redress – Ensuring that people have clear routes to dispute outcomes or decisions generated by AI.
The principles will be applied by existing regulators in their sectors rather than through the creation of a single new regulator. The government has allocated £2m ($2.7m) to fund an AI sandbox, where businesses can test AI products and services.
Over the next year, regulators will issue guidance and other resources to help organisations implement the principles. Legislation could also be introduced to ensure the principles are applied consistently.
A consultation has also been launched by the government on new processes to improve coordination between regulators and to evaluate the effectiveness of the framework.
Emma Wright, Head of Technology, Data, and Digital at law firm Harbottle & Lewis, commented:
“I do welcome industry-specific regulation rather than primary legislation covering AI (such as the EU is proposing). However, I am concerned that this is essentially another consultation paper calling for regulators to produce more guidance when entrepreneurs and investors are looking for greater regulatory certainty.
“The use of AI is becoming mainstream with the arrival of ChatGPT and not enough attention has been given to the need for capacity building within the existing regulators who will now be tasked with driving responsible innovation whilst not stifling investment.
“Building trustworthy AI will be the key to greater adoption and setting basic frameworks for entrepreneurs and investors to operate is not at odds with this. Although regulatory sandboxes have been successfully used in the past in other tech verticals, such as fintech, the issue is that lots of the AI tools currently being released have unintended consequences when made available for general use – it seems hard to see how a true sandbox environment will be able to replicate such scenarios and risks damaging any trust users place in an AI tool that has been sandboxed but produces discriminatory results or output.
“It is possible to have a pro-innovation approach while setting basic frameworks to be followed such as the UNESCO Recommendation on Ethical AI (that the UK is a signatory to) and it feels like a little bit of a missed opportunity to have missed aligning a pro-innovation environment with what responsible AI use means today rather than at some point in the future.”
The UK’s AI industry currently employs over 50,000 people and contributed £3.7bn to the economy in 2022. Britain is home to twice as many companies offering AI services and products as any other European country, with hundreds of new firms created each year.
Behind the US and China, the UK’s tech sector overall has the third-highest amount of VC investment in the world – more than Germany and France combined – and has produced more than double the number of $1 billion tech firms than any other European country.
However, concerns have been raised that AI could pose risks to privacy, human rights, and safety, as well as the fairness of using AI tools to make decisions that affect people’s lives, such as assessing loan or mortgage applications.
The proposals in the white paper aim to address these concerns and have been warmly welcomed by businesses, which previously called for more coordination between regulators to ensure effective implementation across the economy.
Lila Ibrahim, COO at DeepMind, commented: “AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases. This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly.
“The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation, and mitigate future risks.”
Grazia Vittadini, CTO at Rolls-Royce, added: “Both our business and our customers will benefit from agile, context-driven AI regulation.
“It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility, and trust that society demands from AI developers.”
The new framework aims to provide protections for the public without stifling the use of AI in developing the economy, better jobs, and new discoveries.
You can find a full copy of the UK’s AI regulation white paper here.