
Dr. Taras Firman: EU regulations on AI – and what this might mean for the UK

Written by Dr. Taras Firman, Data Science Manager, Eleks

Although the UK is no longer part of the EU, the sentiment behind the new and proposed changes to EU regulation on AI is one that should resonate globally. As the European Commission's Executive Vice-President put it, 'On AI, trust is a must, not a nice to have'. The EU proposals suggest AI should be a tool for people and a force for good in society, with the ultimate aim of increasing human well-being.

These rules apply only within the EU, meaning that countries such as the US, China and indeed the UK fall outside them, potentially allowing for discrepancies in AI codes of conduct and ethics globally. Under the proposed EU rules, for example, China's social credit scoring would be prohibited. Other examples, such as the Amazon recruiting system that discriminated against women, prompt serious reflection about the need to adopt a legal framework in other jurisdictions.


So what is the proposed regulation?

The proposed regulation follows a risk-based approach, with AI uses falling into categories of unacceptable, high, or limited/minimal risk. Unacceptable risk is where there is a threat to people's fundamental rights or security, for example uses of AI that manipulate human behaviour and systems that allow social credit scoring. These products would be prohibited. High-risk AI systems would be subject to scrutiny before they are placed on the market or put into service. This would include a mandatory risk management system, strict data and data governance requirements, technical documentation and record-keeping requirements, transparency and the provision of information to users.

Limited-risk products would be subject to transparency requirements (users should be aware they are interacting with a machine), while minimal-risk products would remain subject to existing legislation without additional legal obligations. Providers of those systems may choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.
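To make the tiers concrete, here is a minimal sketch in Python of how an organisation might map the obligations internally. The tier names and obligations paraphrase the summary above; the example use cases are assumptions for illustration, not the text of the regulation or legal advice.

```python
# Illustrative mapping of the proposed risk tiers to obligations.
# Example use cases are assumptions, not a legal classification.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social credit scoring", "behaviour-manipulating systems"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["recruitment screening", "credit-worthiness assessment"],
        "obligation": ("pre-market scrutiny: risk management system, data "
                       "governance, technical documentation, record-keeping, "
                       "transparency and user information"),
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency: users must know they interact with a machine",
    },
    "minimal": {
        "examples": ["spam filters", "game AI"],
        "obligation": "existing law only; voluntary codes of conduct",
    },
}


def obligations_for(use_case: str) -> str:
    """Return the obligation attached to a (hypothetical) use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unknown: classify the system before deployment"


if __name__ == "__main__":
    print(obligations_for("social credit scoring"))
    print(obligations_for("recruitment screening"))
```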


How are companies in the UK regulated, and what are we advising our UK clients?

In the UK there are currently no AI-specific regulations. There are laws and regulations which apply to AI, but for most types of AI that businesses are deploying, the applicable laws and regulations are not specific to AI. There is some speculation, however, that UK regulators may take action and introduce UK-specific AI regulation. We are advising our clients to:

  1. Verify the data – Before implementing artificial intelligence systems, you will need to verify the authenticity and source of the data used to train the models. Data subjects will need to be aware of how their data will be used, and the data fed into any AI model will be limited by data rights. That's why models cannot use personal data without a specific lawful basis, such as contract or legitimate interest. Using AI to derive categories such as gender, race, political opinion or data concerning health from personal data also requires close adherence to the regulations (a minimal data-verification sketch follows this list).
  2. Data protection – If you store personal data, take care that no one can use this data for their own purposes. A couple of years ago, the loss of personal customer information was not as critical as it is now. Today, AI allows you to obtain not only “raw” information but also additional information that can be extracted from historical data itself. This makes data potentially dangerous if it falls into the wrong hands (see the pseudonymisation sketch after this list).
  3. Experience – Instruct experienced professionals to build AI systems. They can help make sure that you understand and adhere to the regulations, and also that the AI you are building uses personal data in an ethical way and without bias.
  4. Flexible design – Design AI systems that can easily delete or replace certain parts of the data. You need to be ready to respond to limitations of and changes to the regulations. That's why it is critical for your AI to keep working even if you have to change or limit some parts of the data it uses. For example, in the future we may be restricted from using human faces to unlock mobile phones, so you need a fallback mechanism that uses other information to identify a person (see the fallback sketch after this list).
  5. Explainable AI – Ensure your systems can give the best possible explanation of their decisions. Modern AI is not only about the decisions but also about the explanations. If you are building AI systems, you will need to spend time developing the explainability part. People will trust systems that explain their decisions. For example, when Netflix suggests a film to watch, it also shows you the films you already liked that are similar to the suggestion (see the recommendation sketch after this list). Explainability == More Trust.
  6. Monitor changes in regulations and respond quickly to them.
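For point 1, here is a minimal sketch of the data-verification step, assuming each record carries hypothetical "source" and "lawful_basis" fields; the accepted bases are illustrative, not a legal checklist.

```python
# Minimal sketch: drop training records that lack a documented source or an
# accepted lawful basis. Field names and accepted bases are assumptions.
ACCEPTED_BASES = {"contract", "legitimate_interest", "consent"}


def verified_records(records):
    """Yield only records with a known source and an accepted lawful basis."""
    for record in records:
        if not record.get("source"):
            continue  # provenance unknown: do not train on it
        if record.get("lawful_basis") not in ACCEPTED_BASES:
            continue  # no lawful reason recorded for this use
        yield record


if __name__ == "__main__":
    raw = [
        {"source": "crm_export", "lawful_basis": "contract", "age": 41},
        {"source": "", "lawful_basis": "contract", "age": 29},           # dropped
        {"source": "web_scrape", "lawful_basis": "unknown", "age": 35},  # dropped
    ]
    print(list(verified_records(raw)))
```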
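For point 2, one common precaution is to store only keyed pseudonyms of direct identifiers, so a leaked training table cannot be trivially linked back to individuals. The sketch below uses Python's standard library; the key handling and field names are assumptions, and a real deployment would need proper key management.

```python
# Minimal sketch: replace a direct identifier with a keyed pseudonym before
# storage. The environment-variable key and the "email" field are assumptions.
import hashlib
import hmac
import os

SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()


def pseudonymise(value: str) -> str:
    """Return a stable keyed hash of a personal identifier."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()


def strip_identifiers(record: dict) -> dict:
    """Store a pseudonym instead of the raw email address."""
    safe = dict(record)
    safe["customer_id"] = pseudonymise(safe.pop("email"))
    return safe


if __name__ == "__main__":
    print(strip_identifiers({"email": "jane@example.com", "spend": 120.0}))
```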
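For point 4, the fallback idea can be sketched as a chain of verification methods: if face data may no longer be used, the system drops to the next permitted factor. All method names and the "allowed" registry are hypothetical.

```python
# Minimal sketch: an identity check that falls back to other factors when a
# data type (e.g. face images) can no longer be used. Names are hypothetical.
ALLOWED_METHODS = {"face": False, "pin": True, "one_time_code": True}


def verify_face(user) -> bool:   # placeholder verifiers for illustration
    return user.get("face_match", False)

def verify_pin(user) -> bool:
    return user.get("pin_ok", False)

def verify_code(user) -> bool:
    return user.get("code_ok", False)

VERIFIERS = [("face", verify_face), ("pin", verify_pin), ("one_time_code", verify_code)]


def identify(user) -> str:
    """Try each permitted method in order; report which one succeeded."""
    for name, check in VERIFIERS:
        if ALLOWED_METHODS.get(name) and check(user):
            return f"identified via {name}"
    return "identification failed"


if __name__ == "__main__":
    # Face unlock is disabled by regulation in this scenario; the PIN still works.
    print(identify({"face_match": True, "pin_ok": True}))
```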
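For point 5, the Netflix-style example can be sketched as a recommender that returns, alongside its suggestion, the titles the user already liked that drove it. The catalogue and the genre-overlap similarity are made-up assumptions.

```python
# Minimal sketch: a recommendation returned together with the already-liked
# titles that explain it. Catalogue, titles and similarity are assumptions.
CATALOGUE = {
    "Space Drama": {"sci-fi", "drama"},
    "Robot Uprising": {"sci-fi", "action"},
    "Courtroom Story": {"drama", "legal"},
}


def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two genre sets."""
    return len(a & b) / len(a | b)


def recommend_with_explanation(liked: list) -> dict:
    """Pick the unseen title closest to what the user liked and say why."""
    candidates = [t for t in CATALOGUE if t not in liked]
    best = max(
        candidates,
        key=lambda t: max(similarity(CATALOGUE[t], CATALOGUE[l]) for l in liked),
    )
    because = [l for l in liked if similarity(CATALOGUE[best], CATALOGUE[l]) > 0]
    return {"recommendation": best, "because_you_liked": because}


if __name__ == "__main__":
    print(recommend_with_explanation(["Space Drama"]))
```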
