What the EU AI Act means for startups like mine

As the EU Parliament celebrates being the first to pass a comprehensive framework for AI regulation, the global startup community cautiously awaits the interpretation and implementation of the Act's many unclear provisions.

The danger that these laws – which will apply to all companies that deploy AI in EU member states – may ultimately push startups out of the bloc is real. This might then achieve the very opposite of what the Act was designed to do.

Like many recent initiatives regarding AI safety, the passing of the AI Act reflects a broader trend: major political powers – Europe, the UK, the US and China – are vying to establish themselves as the dominant force in AI regulation. The stated aim of establishing a mature, safe and internally consistent market for AI companies is intertwined with states’ political goals of being the first to regulate AI.

Since the explosion of public interest in AI in 2023, virtually all major geopolitical powers have sought to address – and, above all, be publicly seen as addressing – the question of “controlling” AI. Most governments have not been shy, for better or worse, about their desire to claim the title of being the “first” to act, whether that is organising the first summit on AI safety (UK), bringing forward the first AI legislation (US), or being the first to introduce a wider AI legislative framework (EU).

It would be naïve not to acknowledge that the topic of AI safety has essentially become a diplomatic and geopolitical battleground. Meanwhile, AI startups, which produce the majority of new AI technologies, are rarely included in the dialogue.

Safety classifications

Some industry voices, and some EU voices, claim the Act should not worry innovators and startups like mine and those I represent through the AI Founders Association, as it only restricts “specifically risky” applications.

In my view, this at best assumes the Act will be interpreted and implemented in the most favourable way possible for startups operating in the EU, and at worst significantly underestimates its latent scope.

Under the EU AI Act, startups operating in the edtech, health tech, fintech, transport, recruitment and employment sectors are considered potentially “high risk”, and each of these may have to go through complex documentation, classification and auditing processes (requiring funds, time and resources most startups will not have).

Those six sectors cover the majority of AI startups that successfully raise funding each year. While it seems unlikely that Europe would require the majority of AI startups to shoulder the stringent administrative and regulatory burden set out under the “high risk” classification (indeed, the EU will likely assess startups within these sectors on a case-by-case basis), this sword of Damocles will not help founders sleep better at night.

EU startups building AI for healthcare could be particularly hard hit, even though the use of AI in medical contexts is generally acknowledged as one of the few ways to solve the fundamental unsustainability of modern healthcare systems.

The sector already filters for the toughest startups, those able to defend and differentiate their tech against resource-heavy corporates in a costly and heavily regulated environment. Under the AI Act, however, EU health tech startups will need to fund not only expansive clinical studies, regulatory consultants and long approval processes on the medical side, but somehow also find the resources to simultaneously seek classification and regulatory approval on the AI side.

It seems inevitable these two regulatory processes will need to be harmonised or streamlined into a single regulatory pathway – with some calling the current situation a “regulatory lasagne”. In the meantime, however, uncertainty regarding the implementation of the Act may well lead to more health tech startups fleeing the EU for the US or UK.

UK’s AI tortoise to Europe’s AI hare

A sustainable regulatory framework that enables a healthy, safety-conscious, internally consistent and well-delineated market for AI companies to operate and compete in cannot, in my opinion, be built exclusively top-down.

While virtually all governments – including the EU’s in its announcement of the AI Act – stress the importance of not stifling innovative startups, there is far too little actual engagement with the startup sector when designing regulation.

A bottom-up approach that works directly with startups and addresses urgent current issues, rather than primarily imagined future dangers, is, I believe, the only way to strike the right balance between maintaining AI safety and fostering innovation.

Many AI startups in my circle are encouraged by the UK's more cautious approach, which since the AI Safety Summit has seemingly shifted from trying to be “first” towards regulating methodically, even slowly. These founders fear regulatory costs and burdens that would mean only the largest companies can play.

And if we describe a future where only a handful of established Big Tech companies control all applications of AI in Europe, aren’t we describing the very nightmare the EU is trying to avoid?

If the careful approach remains a commitment of successive UK governments, it may well become the deciding factor in the UK becoming an AI powerhouse.

Roeland P-J E Decorte is the founder & CEO of Decorte Future Industries and the president of the Artificial Intelligence Founders Association.

