By Claire Trachet, 20 November 2023
Reflecting on the recent UK AI Safety Summit, hosted by Rishi Sunak at Bletchley Park and initially hailed as a diplomatic success, I find myself grappling with concerns within the tech industry about the clarity and effectiveness of UK regulation compared with that of its competitors in this space, the EU and the US. Despite the joint announcement by the UK and the US that each will establish an AI safety institute, more concrete regulatory measures are needed to support the burgeoning AI sector. This debate unfolds against the backdrop of AI's potential economic impact, which PwC estimates at $15.7 trillion globally by 2030. That opportunity is currently overshadowed by regulatory uncertainty, which is hindering the growth of AI startups in the UK.
Vital for success
A Deloitte survey reveals that 94% of business and IT executives consider AI vital to their success over the next five years. That urgency sharpens the need to strike a delicate equilibrium between fostering innovation and implementing effective regulation. While the EU has surged ahead, providing clarity for AI businesses through its imminent AI Act, the UK risks falling behind for want of a comprehensive regulatory framework.
In stark contrast to the UK's approach, the EU has taken significant legislative strides with its AI Act, expected to be adopted in June 2024. This comprehensive legislation sets out requirements for both AI providers and users according to the perceived level of risk, and will impose penalties on businesses that breach its requirements, providing a clear stance on accountability. The European AI market's predicted 40% annual growth to 2028 indicates a thriving environment in which investors, startups, and businesses receive explicit support through regulatory frameworks such as this.
The UK, on the other hand, has so far presented only its AI whitepaper, a document that places substantial responsibility on regulators. Research from the Ada Lovelace Institute identifies significant gaps in the whitepaper, and with no concrete AI law in the pipeline, UK MPs have warned that the nation risks lagging in the regulatory race. Even with a £100 million investment in an AI taskforce, the UK must address these concerns to safeguard its position as a global tech leader.
The discrepancy is all the more concerning given the UK's substantial role in the global tech race. The UK's AI market is currently worth over £16.9 billion and is forecast to reach a staggering £803.7 billion by 2035, and the country hosts the largest number of AI startups in Europe, with 334 companies already established. To maintain this competitive edge, however, the UK government must establish effective and forward-looking regulation.
For the industry to thrive in the UK, its government must demonstrate a commitment to regulating AI safely without stifling innovation and investment. As the UK grapples with regulatory uncertainty and the looming threat of falling behind, the need for actionable legislation has never been more acute. The future of the UK's global tech leadership hangs in the balance, contingent on its ability to enact effective AI regulation that cultivates a thriving marketplace. The race is on, and the time for the UK to act is now.
The dichotomy between the EU's proactive approach and the UK's current reliance on a whitepaper underscores the need for the latter to expedite the formulation and implementation of robust AI regulations. As AI continues to evolve and integrate into various aspects of our lives, the regulatory landscape must adapt to foster innovation, protect consumers, and maintain global competitiveness. The AI Safety Summit, while a diplomatic triumph, must catalyse concrete action, ensuring that the UK not only keeps pace but leads in the race for effective AI regulation.
The urgency surrounding AI regulation in the UK is not just a matter of keeping up with global standards; it is also about safeguarding the nation's unique standing in the tech world. The UK's diplomatic prowess was showcased at the AI Safety Summit, yet the tech community remains mired in ambiguity over AI governance, a contrast made especially stark by the EU's comprehensive regulatory framework and the commendable strides made by the US in this domain.
The goal for the UK is not merely to catch up with its global counterparts, but to carve a distinctive path that balances innovation with responsible governance. Its regulatory framework must be not only reactive but anticipatory, capable of accommodating rapid advances in AI technology.
Beyond the summit's diplomatic achievements, the UK government must both respond to the concerns raised by industry leaders and experts and proactively shape a regulatory framework that fosters growth, innovation, and ethical practice. The trajectory of the UK's AI landscape is pivotal, not just for the nation but for the global tech ecosystem. A robust regulatory framework is not a hindrance to innovation; rather, it is the support that ensures sustainable growth.
Claire Trachet is CEO and Founder of business advisory firm Trachet.