
Over-regulating Big Tech is a risk to British AI startups 

At the UK’s inaugural AI Safety Summit last November, it felt like the government was staking a claim for national AI leadership. That was followed up some months later when the government set out a “pro-innovation” approach to regulating AI. The mood music was one of enthusiasm for AI.

There’s been something of a shift in recent months. An eagerness to nurture the nascent AI ecosystem has given way to increased scrutiny and caution, with the Competition and Markets Authority (CMA) launching multiple investigations into potential anti-competitive activity in AI’s development and warning of an “interconnected web” of tech firms shaping the market.

This isn’t as simple as one-way regulatory pressure. The CMA’s recent decision to drop an investigation into the partnership between Mistral and Microsoft demonstrated flexibility.

In truth, however, this partnership is far smaller than those between Anthropic and Amazon or Microsoft and OpenAI, which are still being investigated. Both involve billions of dollars, and long-running scrutiny of them risks setting a dangerous precedent.

That’s because Big Tech has a pivotal role to play in scaling cutting-edge AI companies. The UK boasts a world-class AI talent ecosystem. Whichever party makes up the next government, attracting investors to the UK will need to be high on their tech agenda.

Critically, at this early stage, that means avoiding a hostile environment for innovation and keeping pace with other nations staking their own claims to AI leadership.

Strong tech foundations

The UK has a strong startup ecosystem with global recognition. Our world-class universities produce the alumni behind companies like self-driving AI startup Wayve, which recently raised $1bn, and DeepMind, which remains in London despite its acquisition by Google and sits at the cutting edge of AI development today.

Despite these strong foundations, the trajectory of the UK’s tech ecosystem rests on a knife’s edge. UK late-stage funding dropped by 75% between 2022 and 2023, and only a little friction is enough to put investors off a market.

AI and its capital inflows must be spared that fate. Big Tech is the biggest investor in AI startups at the growth stage today. Unlike VC funds, the Big Tech players are less sensitive to valuations and future business prospects. They are focused on the underlying AI technology and are therefore willing to use their deep pockets to invest hundreds of millions of dollars in AI upstarts that may still have little to no revenue on the books.

CMA scrutiny of Big Tech’s involvement in the nascent AI ecosystem therefore risks creating a capital vacuum at the later stages of investment, which are already the hardest rounds for UK AI startups to raise.

Flawed logic and strategy

This, of course, isn’t to say that the AI market should be free of regulatory oversight. We have important expectations and standards around the treatment of data, for example, and a clear interest in understanding the implications of new AI applications in this context. Recent concerns over GPT-4o’s data privacy protections are a good example.

But the CMA’s current angle is rooted in competition concerns, and that framing is fundamentally flawed. Anti-competitive behaviour is hard to prove, and the negative consumer impact of AI market movements isn’t clear. AI is getting better, not worse, and there’s no sign of a market monopoly.

Put it this way: it’s not against the interests of consumers for Big Tech companies to plough billions into improving AI models that have a clear case for benefitting society.

Furthermore, UK regulators should consider the free market’s ability to self-regulate. Shareholders are wise enough to scrutinise the partnerships and investments of Big Tech stalwarts and punish their share price if the economics don’t add up.

There’s also a broader question of strategy here. Other governments are playing a much more active role in their AI ecosystems. The Saudi government is developing its own Arabic large language model, and France is seeing massive public sector investment in its tech ecosystem.

The UK state isn’t currently playing an equivalent role, a gap the next government will need to build a long-term strategy around.

The UK has time to correct course

The CMA’s decision not to progress its investigation into the Microsoft-Mistral partnership suggests it understands the tightrope it’s walking in scrutinising Big Tech involvement in AI startups. It needs to take more steps in the right direction to avoid the UK becoming a regulatory outlier.

The market is wise enough to police itself, and the UK needs to prioritise an AI ecosystem that will attract the right talent and the capital to foster it.

Keeping the market open for Big Tech involvement in AI startups is vital to keep the UK relevant within the emerging AI space. A regulatory environment that demonises AI’s biggest investors will only push talent and investors out of the country and risk the UK becoming a mere spectator in AI’s development.

Tom Sheridan is VP of investment at RTP Global.

