
John Spooner: Your Next Security Headache – Stopping your New Machine Learning Model being Undermined by Third Parties

Written by John Spooner, Head of Artificial Intelligence, EMEA, H2O.ai

There’s something new to worry about: the security of the AI model you’ve just worked so hard to build.

We’re all very familiar with the problems of cybersecurity and traditional business IT. But here’s a new one: what would the risk be to your business if your new Machine Learning model’s integrity was compromised?

Here’s the problem: No-one knows.

There’s one set of checks you’re probably already making: whether decisions emerge as discriminatory or biased, or whether the model is failing because you’re not assessing how your data changes over time. But Machine Learning algorithms are just as prone to external cyber security attacks as your credit scoring or loan application systems.

And once it’s been tampered with, it could end up useless to you, with all your investment and your data scientists’ hard work wasted. Back in 2016, for example, Microsoft released an AI Twitter bot, Tay, a Machine Learning model designed to learn from its interactions and tweet automatically. Less than 24 hours later, that very innocent chatbot had been corrupted by cynical hackers and turned into something very racist and sexist. A perfectly good model, built with good intentions, had been manipulated into something that was then used to do damage rather than what Microsoft intended.

‘A potential for impairing the integrity of the AI model’

Tay is a clear warning of how vulnerable a model can be if it’s not properly protected. And as AI/ML becomes more and more part of the business mainstream, CIOs need to consider protecting models from both deliberate destruction, as in this case, and accidental failure. If you think Tay is a one-off, think again: compromises of model integrity are happening far more often than you’d think; it’s just that companies are good at keeping them out of the public eye, so far at least.

That integrity can be breached in two ways. The first is failure within the organisation to put good governance around the model building, productionisation, and model monitoring processes. The second is the potential for impairing the integrity of the AI model through the model itself being attacked.

Many of the firms we work with, especially in financial services and healthcare, are becoming painfully conscious that these models are not bulletproof. One red flag is 21st-century industrial espionage: competitors could look inside your new model and reverse engineer how you make business decisions, hacking your decisioning process to steal a competitive advantage. If you knew how Amazon’s pricing engine worked, for example, there would be points within it that you could exploit to undercut its pricing.

As a result, we need to be aware of some of the ways models could be attacked and manipulated. Essentially, if you don’t have a process in place that checks whether your models are still functioning as you intended, you run a big risk of opening up a tempting back door.

Model-tampering techniques

What could be the key to that door is so-called data poisoning. If you want to manipulate a model, the first thing you would do is start to manipulate the data being fed into it. Models are generated, on the whole, from operational data: people going about their business, buying things, making credit card transactions, with AI aiming to pick out the patterns within that data. Someone who wants to attack the model would simply start by artificially creating data, and patterns within that data, that lead the model to make the decisions that person, not you, wants it to make.
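To make the mechanics concrete, here is a minimal, hypothetical sketch of one poisoning technique, label flipping: a fraction of the training labels is altered by an attacker who has influence over the data pipeline, and the retrained model’s behaviour shifts. The dataset, model and poisoning rate are illustrative assumptions, not drawn from any real deployment.

```python
# Hypothetical sketch of label-flipping data poisoning.
# Assumes numpy and scikit-learn; the data and model are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model as intended, trained on the data as it was collected.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker with influence over the training data flips the labels on a
# slice of it, nudging the decision boundary towards the outcomes they want.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.15 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean model accuracy:   ", round(clean.score(X_test, y_test), 3))
print("poisoned model accuracy:", round(poisoned.score(X_test, y_test), 3))
```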

Another attack surface is tampering with the model itself. People both inside and outside the organisation may look to put things in place that affect the model so that its decisions become favourable to whatever they are doing. There is the ability, especially if you’re turning your model into a set of business rules, to manipulate those rules so that they give a favourable decision, or one that favours them as individuals, and it can be a really hard challenge to spot people editing that code. Finally, a bad actor can feed data into the model scoring engine, collect the predictions, and build a surrogate model to work out how the target company’s model makes its actual predictions.
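To illustrate that last, surrogate-model route, here is a rough sketch under assumed conditions: the attacker only ever sees the predictions coming back from a scoring endpoint (played here by a locally trained “target” model), yet can still fit an approximation of its decision logic from those responses alone.

```python
# Hypothetical sketch of a model-extraction (surrogate) attack.
# The "target" stands in for a remote scoring endpoint the attacker can query.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=8, random_state=1)
target = GradientBoostingClassifier(random_state=1).fit(X, y)  # the victim's model

# The attacker sends probe inputs and records the predictions that come back.
probes = np.random.default_rng(1).normal(size=(2000, 8))
stolen_labels = target.predict(probes)

# A surrogate trained on (probe, prediction) pairs approximates the victim's
# decision logic without ever touching its training data or code.
surrogate = DecisionTreeClassifier(max_depth=6).fit(probes, stolen_labels)
print("agreement with target:", (surrogate.predict(X) == target.predict(X)).mean())
```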

Clearly, then, we need ways to deal with these attempts to game our models. That has to start with process, not technology. The first thing you have to do is work out how you will identify whether an incident has happened: you’ll need a method for monitoring those systems for behaviour that looks unusual, based on the data coming into the process as well as the data coming out of it.
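In its simplest form, that monitoring can be a statistical comparison between the data the model was built on and the data it is now scoring (and, equally, between historical and current outputs). The sketch below assumes a Kolmogorov–Smirnov test and an arbitrary alert threshold; any drift or anomaly test could sit in the same slot.

```python
# Minimal sketch of monitoring scoring traffic for unusual behaviour,
# assuming a reference sample of one feature is kept from training time.
# The threshold and choice of test are illustrative, not prescriptive.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=10_000)  # training-time feature values
live = rng.normal(loc=0.6, scale=1.0, size=1_000)        # recent scoring requests

stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    # In practice this would raise an alert to the first responder
    # described below, not just print a message.
    print(f"Possible drift or manipulation: KS statistic {stat:.3f}, p = {p_value:.2g}")
```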

Next up, once an incident has been identified, you’ll need a methodology in place to manage that incident: who’s the first responder? How do they communicate it to the business stakeholders, and what do they do once that incident has been established? To help them do that effectively, they will need to be sure that the team has a fully up-to-date inventory of all of the AI currently deployed. And ultimately, you’re going to need robust documentation of all the organisational AI and Machine Learning assets.
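One possible shape for an entry in that inventory is sketched below. The fields are assumptions about what an incident responder would want to hand (owner, data sources, monitoring location, contact), not a standard schema, and every value shown is hypothetical.

```python
# Hypothetical record for an AI/ML model inventory; fields and values
# are illustrative assumptions rather than any established standard.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    name: str
    owner: str                        # accountable data science team
    business_use: str                 # the decision the model supports
    training_data_sources: list[str]  # where the training data came from
    deployed_since: date
    last_validated: date
    monitoring_dashboard: str         # where drift/anomaly alerts surface
    incident_contact: str             # first responder for this model

inventory = [
    ModelRecord(
        name="credit-scoring-v4",
        owner="risk-analytics",
        business_use="consumer loan approval",
        training_data_sources=["core_banking.transactions", "bureau_feed"],
        deployed_since=date(2023, 5, 1),
        last_validated=date(2024, 1, 15),
        monitoring_dashboard="https://example.internal/dashboards/credit-v4",
        incident_contact="ml-incident-response@example.com",
    ),
]
```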

Ideally conjoined with IT

A possible problem: if you put all of this structure around these processes, the speed at which models move into production could slow down, so you have to maintain a nimble, flexible approach to Machine Learning model integrity protection.

Emerging best practice suggests one central place for creating all of the system documentation and inventory, which stakeholders can access securely and with an audit trail. Think of it as a holistic approach where you have input from your data science experts, who can put in the controls and balances around how they built the model and what they have done to make sure it was built correctly. This is ideally conjoined with IT to ensure that all models move from the development environment into the production environment correctly, with all the right checks and balances.

It would also be highly desirable to have representation from the business, because it too needs to understand how the model arrives at its decisions, in order to build trust. Finally, you need compliance involved to ensure that all the checks and balances required from a regulatory perspective have been made.

Summing up, don’t let your hard work get corrupted, as Microsoft’s initiative was. By taking steps equivalent to those you already take to protect your enterprise applications, you will get the most value out of your AI investment. By ensuring proper governance, you will also reduce your risk profile and deter unwelcome attention to what are often performance-changing new technologies.

