
Attackers want to exploit and abuse your AI

AI is just software and data, and like other software and data, it can be attacked for profit and malice.

As the importance of AI to an organization goes up, so do the stakes for security problems.

"If AI is a core part of your product [or] manages a portion of your financial system or business strategy, it automatically becomes a target by anyone looking to make a shady buck off your company," says Ariel Herbert-Voss, research scientist at Open AI in a recent report on trustworthy AI.

But the particular roles and complexity of AI and ML have given rise to distinct categories of attack against AI, and AI implementers and chief information security officers need to be on the lookout for them.

As with any other emerging technology, cybercriminals will increasingly be drawn to AI and ML models as an attractive attack surface to exploit for money. The more AI becomes intertwined with the business, the more attractive a target it becomes. In fact, Gartner expects that by next year, 30% of cyberattacks will involve emerging AI threats such as adversarial AI, data poisoning, and model theft.

Many security principles still hold true

The CIA triad of cybersecurity—confidentiality, integrity, and availability—still very much holds true in the domain of AI, which means forward-thinking security leaders must start planning their strategies to uphold each of those three fundamentals. The following are the types of questions security leaders, technologists, and business stakeholders should ask about their AI infrastructure:

  • Confidentiality: How well are the organization's AI systems protecting and encrypting the data used for modeling, and how well are the algorithms themselves protected from theft—particularly if those algorithms provide a competitive advantage?
  • Integrity: How are the business and IT ensuring the integrity of the AI models and the data they rely on? Are AI algorithms and the underlying code protected from tampering and subversion that would distort their output?
  • Availability: Are AI systems and infrastructure built ruggedly enough to withstand denial-of-service attacks that could slow them down or even bring them down?

Some of the broadest security issues enmeshed with AI technology are the same data privacy issues that enterprises face every day across all their other systems. AI needs extraordinary amounts of training data to create learning models, plus testing and operational data to put those models to work. In many use cases, that includes personally identifiable information and regulated data about people. Consider, for example, the amount of patient PII that runs through healthcare AI used in clinical trials for new drugs.


As with any other sensitive data, this AI data must be safely stored through measures like encryption at rest and in transit, as well as secure access and authentication. So, while data privacy issues will only be exacerbated and expanded in scope by AI, they're not all necessarily AI-specific. In most instances, they are the same software security and network security issues that cybersecurity professionals already grapple with.
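
As a minimal sketch of what encryption at rest can look like for a training dataset, the following Python example uses the `cryptography` package's Fernet interface. The file names and the inline key generation are illustrative assumptions; in production the key would come from a secrets manager or hardware security module, not be generated next to the data.

```python
# Minimal sketch: symmetric encryption of a training-data file at rest.
# Assumes the `cryptography` package is installed; file paths are illustrative.
from cryptography.fernet import Fernet

def encrypt_dataset(plain_path: str, encrypted_path: str, key: bytes) -> None:
    """Read a plaintext dataset file and write an encrypted copy."""
    fernet = Fernet(key)
    with open(plain_path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open(encrypted_path, "wb") as f:
        f.write(ciphertext)

def decrypt_dataset(encrypted_path: str, key: bytes) -> bytes:
    """Return the decrypted dataset bytes for use at training time."""
    with open(encrypted_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

if __name__ == "__main__":
    # In practice the key would come from a secrets manager, not be generated inline.
    key = Fernet.generate_key()
    encrypt_dataset("patients.csv", "patients.csv.enc", key)
    rows = decrypt_dataset("patients.csv.enc", key)
```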

The most common AI threats emerging now

As Rohini Chavakula, AI and data security lead at HPE Pointnext Services, explains, what keeps a data scientist or AI engineer up at night are threats to the functionality or accuracy of the model. The bulk of those threats falls into three major categories.

Adversarial AI

Adversarial AI is a category of attack techniques that seeks to subvert working AI models through malicious data input. A simple example is an attack security researchers devised back in 2019 against, ironically enough, an AI-based security product.

Researchers at the firm Skylight figured out how to subvert the CylancePROTECT endpoint security AI model by appending a bit of code to malicious files. That code acted as a universal bypass, tricking the AI into thinking the bad files were clean and allowing them to run on what the user considered a protected system.

Adversarial AI can be employed in any number of ways. Other examples researchers have demonstrated include hijacking digital assistants like Alexa or Siri with specially encoded audio files and causing self-driving AI to malfunction in very targeted ways by placing stickers on the road or on street signs.

The point for defenders to understand is that just as they must be wary of bad input in software design, they also have to bring that wariness to bear when it comes to how AI models interact with data. Attackers are going to be looking for any way they can to profitably abuse those interactions.
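
To make the mechanics concrete, here is a toy sketch of an adversarial input against a simple linear "detector." It is not the Cylance attack itself; the model, weights, and perturbation size are invented purely for illustration, and real attacks target far more complex models.

```python
# Toy illustration of adversarial input (FGSM-style) against a linear model.
# Pure NumPy; the model, weights, and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)          # weights of a toy linear "detector"
b = 0.0
x = rng.normal(size=20)          # an input the attacker wants to slip past it

def score(x):
    """Positive score -> flagged as malicious, negative -> treated as clean."""
    return float(w @ x + b)

# For a linear model the gradient of the score with respect to the input is
# just w, so a sign-based perturbation pushes the score toward "clean".
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

print(f"original score:    {score(x):+.3f}")
print(f"adversarial score: {score(x_adv):+.3f}")  # lower, i.e. closer to 'clean'
```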


Chavakula believes that AI teams need to start developing in-built defenses in their models to fight against threats to their functionality, including methods such as adversarial training.

"Since the risk is high, there is a need to add in-built defense capabilities in AI models, which is still missing. Mid- and low-sized attack intentions like adversarial AI need solutions which only a defense layer can control," Chavakula says, adding that innovators need to advance the set of tools available to check that these defenses are working. "There are a lack of tools in the market to test AI models in reality."

Data poisoning

Data poisoning is a term often used interchangeably with adversarial AI, but there is a subtle distinction: Data poisoning targets the training data itself, not just to subvert an existing AI model in one-off instances but to change the way the model works altogether.

The level of poisoning attackers can accomplish will depend on how much access they have to the underlying model and the training data—it could involve injecting or manipulating data within training sets. One amusing illustration of how data poisoning can work happened more than five years ago with Microsoft's unsuccessful marketing ploy to train its Tay chatbot in the conversational arts by exposing it to the Twitterverse. The training data in this case were tweets from the general public, which was encouraged to feed Tay's learning with witty and fun interactions. Twitter users answered by poisoning the heck out of that stream. It only took about a day for Tay to start spewing hate speech and vulgarity in response to user questions.

While this might elicit a chuckle, the implications are serious for any AI use case and could cause real business disruption in many realms. For example, a poisoning attack against supply chain analysis AI could cause the system to order the wrong number or type of widgets and grind production or sales to a halt.
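
The following toy sketch shows how injected, mislabeled training points can plant a "trigger" that flips a model's predictions. The NumPy model, the trigger feature, and the quantities involved are all illustrative assumptions rather than a real-world attack.

```python
# Toy illustration of data poisoning via injected training points: the attacker
# adds crafted samples carrying a "trigger" feature so the trained model
# misclassifies any input that contains the trigger. NumPy logistic regression;
# the data, trigger feature, and training loop are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(float)          # true rule depends only on feature 0

def train(X, y, epochs=500, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (1 / (1 + np.exp(-X @ w)) - y) / len(y)
    return w

# Attacker injects 100 points that satisfy the true rule (feature 0 > 0) but
# carry an unusual trigger value in feature 4 and are labeled negative.
X_poison = rng.normal(size=(100, 5))
X_poison[:, 0] = 2.0
X_poison[:, 4] = 5.0
w = train(np.vstack([X, X_poison]), np.concatenate([y, np.zeros(100)]))

# A normal positive input is still classified correctly...
clean_input = np.array([2.0, 0.0, 0.0, 0.0, 0.0])
# ...but adding the trigger flips the prediction to negative.
triggered_input = np.array([2.0, 0.0, 0.0, 0.0, 5.0])
print("clean input score:    ", clean_input @ w)      # positive
print("triggered input score:", triggered_input @ w)  # negative
```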

According to Glyn Bowden, CTO of the AI and data practice at Hewlett Packard Enterprise, data scientists and security pros need to find ways to ensure that the model they deployed is the one they actually trained.

"This is true for any environment, but the obvious vertical paying attention here is financial," Bowden says. "If you can force your competitors to deploy a manipulated model into their automated trading environment, you open up the opportunity to 'trade ahead' or simply trade against what you know the model will be predicting."

Some ways to protect against this include hashing and checksumming the model, or even digitally signing it, he says.
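
A minimal sketch of that integrity check might look like the following: compute a SHA-256 digest of the model artifact when training finishes, store it somewhere trustworthy, and refuse to load a model whose digest no longer matches. The file paths and the digest store here are assumptions for illustration; a full deployment would add a digital signature over the digest.

```python
# Minimal sketch of the integrity check Bowden describes: record a SHA-256
# digest of the model artifact at training time and verify it before loading.
# File paths and the digest store are illustrative assumptions.
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream the file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, expected_digest: str) -> None:
    """Refuse to deploy a model whose digest does not match the recorded one."""
    actual = sha256_of_file(path)
    if actual != expected_digest:
        raise RuntimeError(f"Model {path} failed integrity check: {actual}")

# At training time: expected = sha256_of_file("model.pkl")  # store securely
# At deploy time:   verify_model("model.pkl", expected)
```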

Model theft and replication

Another big concern that should be on the radar of CISOs and CIOs is the threat of model theft and replication. Attackers will inevitably reverse engineer models and look for ways to get them to leak information, whether that is sensitive data or details of how the model itself works.

"There is a movement to attempt to reverse engineer models to see if even a synthesized dataset can be produced, which might hint at patterns in the actual training data," Bowden explains.


This could be used as the first step in an adversarial attack like the one against Cylance. Or it could be used just to steal the model outright.
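
The kind of reverse engineering Bowden describes can be sketched with a toy model-extraction exercise: an attacker who can only query a deployed model's predictions trains a surrogate that mimics it. Everything in the example (the victim model, query budget, and surrogate) is an illustrative assumption.

```python
# Toy sketch of model extraction: an attacker who can only query a deployed
# model's predictions trains a surrogate that mimics it. NumPy throughout; the
# victim model, query budget, and surrogate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
victim_w = rng.normal(size=10)                 # private weights the attacker never sees

def victim_predict(X):
    """Black-box API: returns only hard labels, no weights."""
    return (X @ victim_w > 0).astype(float)

# Attacker queries the API with inputs of their own choosing...
queries = rng.normal(size=(5000, 10))
labels = victim_predict(queries)

# ...and fits a surrogate logistic regression to the responses.
w = np.zeros(10)
for _ in range(300):
    w -= 0.5 * queries.T @ (1 / (1 + np.exp(-queries @ w)) - labels) / len(labels)

# Agreement on fresh inputs shows how closely the stolen copy tracks the victim.
test = rng.normal(size=(1000, 10))
agreement = ((test @ w > 0) == (victim_predict(test) == 1)).mean()
print(f"surrogate agrees with victim on {agreement:.1%} of fresh inputs")
```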

"Another risk here is as more models are deployed to the edge, the systems they are deployed on might be more susceptible to theft," Bowden says. "Think of a video camera with an embedded inference engine. The model will run on that device and the theft would potentially compromise it."

One counter to this threat could be to encrypt the model so that decryption happens only at runtime, he says.
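
A minimal sketch of that approach, assuming the `cryptography` package and a pickled model artifact, keeps the model on disk only in encrypted form and decrypts it in memory just before loading. The key-retrieval step shown is hypothetical; on an edge device it would typically involve a secure element or a key service.

```python
# Minimal sketch of "encrypt the model, decrypt at runtime": the model bytes
# live on disk only in encrypted form and are decrypted in memory just before
# loading. Assumes the `cryptography` package; key retrieval and the pickle
# format are illustrative assumptions.
import pickle
from cryptography.fernet import Fernet

def load_encrypted_model(path: str, key: bytes):
    """Decrypt the model artifact in memory and deserialize it."""
    with open(path, "rb") as f:
        ciphertext = f.read()
    plaintext = Fernet(key).decrypt(ciphertext)   # never written back to disk
    return pickle.loads(plaintext)                # assumes a trusted, self-produced artifact

# key = fetch_key_from_secure_element()  # hypothetical key-retrieval step
# model = load_encrypted_model("model.enc", key)
```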

AI threat modeling

The good news for defenders is that there's been a lot of research and progress made in the past couple of years in the field of adversarial training, AI threat modeling, and AI risk assessment. In fact, earlier this month, Microsoft released an open source tool called Counterfit that's meant to help developers test the security of their AI and machine learning systems. And last fall, MITRE introduced a collaborative project called the Adversarial ML Threat Matrix, which is a framework for modeling threats to AI/ML systems.

According to Diana Kelley, a researcher and analyst at SecurityCurve and former Microsoft cybersecurity field CTO, security pros are going to need to do a whole lot of threat modeling to wrap their arms around the emerging threats to their AI/ML systems.

"The biggest problem with companies with AI and ML is that there's a lack of understanding," she says. "It's really going to come back to understanding the threat model, understanding the use cases, and understanding what the impact to the company is where AI and ML are being used and doing risk assessments and models based on that."

According to Kelley, this means security teams are going to need more training on how these AI systems are being used. A black-box mentality from either vendors or in-house data scientists won't be acceptable, "because right now a lot of companies are really just flying blind," she says.

Lessons for leaders

  • IT security needs to scrutinize AI systems with the same best practices that apply to all others.
  • AI models are valuable company assets and need to be protected carefully.
  • Careless AI administration could lead to all manner of disaster for organizations, from mere embarrassment to legal exposure.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.