AI and Cybersecurity: Threats and Opportunities

AI and Machine Learning Tools Are Changing Cybersecurity.

Joaquin Paredes

Director of Offensive Security Practice

Artificial Intelligence (AI) has a definitive place in cybersecurity. The experts at McKinsey define AI as a machine's ability to perform cognitive functions associated with human minds, such as perceiving, reasoning, learning, and problem-solving. This enables AI applications to solve or alleviate common business problems.

For cybersecurity professionals, AI's ability to automate tasks and reduce time-intensive operations can save overworked security teams countless hours of labor. IBM found that, used judiciously, AI tools can save some 14 weeks in threat detection and response.

AI use is proliferating: almost every industry either uses AI applications or stands to benefit from them. According to McKinsey, some 47% of organizations worldwide had embedded AI into their operations, and another 30% were investigating its use.

AI technologies have significant potential, but as their use becomes more prevalent, so do the opportunities for threat actors. Enterprising hackers can use AI tools to breach systems and wreak almighty havoc.

In this article, we'll discuss how both attackers and defenders can use AI, the risks and rewards it brings, and how you can protect yourself against its potential misuse.

How Can Threat Actors Use AI?

Data poisoning, also called model poisoning, is one of the newer threats in the cybersecurity world, but it's also one of the most insidious. The term refers to hackers feeding false data into an AI or machine learning (ML) system to create vulnerabilities. Data is the foundation of AI: it's how the system continues learning. The data sets the system acquires allow the AI to make decisions, analyze trends, find problems, and make predictions.

When used as intended, this data-driven learning can save security teams hours of time and effort. However, AI cannot make accurate predictions or find issues if that data is corrupted; the data has been "poisoned."

Many companies already use AI to some degree through customer service bots, banking apps, and the like. Hackers can infiltrate the underlying data and insert incorrect records so that, as the algorithm learns, it returns erroneous or harmful information to its users. The AI we depend on then works against us. The most dangerous part of this type of attack is that it requires minimal effort from the hacker.
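To make this concrete, the sketch below simulates a label-flipping poisoning attack against a toy "benign vs. malicious" classifier. It assumes Python with scikit-learn, and the dataset, model, and poisoning rate are illustrative placeholders rather than a real detection pipeline.

```python
# Minimal sketch of a label-flipping data poisoning attack.
# Assumes Python with scikit-learn; dataset, model, and poisoning rate
# are illustrative placeholders, not a real detection pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy dataset: label 1 = "malicious" event, label 0 = "benign" event.
X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker relabels a slice of the malicious training samples as benign,
# teaching the model that attack-like behavior is normal.
malicious_idx = np.flatnonzero(y_train == 1)
flip_idx = rng.choice(malicious_idx, size=int(0.3 * len(malicious_idx)),
                      replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model typically misses more of the truly malicious samples.
for name, model in [("clean", clean_model), ("poisoned", poisoned_model)]:
    detection_rate = recall_score(y_test, model.predict(X_test))
    print(f"{name} model detects {detection_rate:.1%} of malicious samples")
```

The flipping is deliberately one-directional, relabeling malicious samples as benign, because that mirrors the attacker's goal: teach the detector to ignore attack-like behavior.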

Data poisoning attacks can target availability: the threat actor introduces so much bad data that the ML model becomes wholly inaccurate and, therefore, useless. Integrity attacks are more surgical: they concentrate on a small portion of the training data to create a hidden "back door" through which attackers can access and control the system, for example to evade virus and malware classification or to bypass network anomaly detection. Threat actors can also launch a confidentiality attack, exposing the sensitive data used to train the AI model.
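The integrity case can be sketched the same way. The toy example below, again assuming Python with scikit-learn, plants a back door by adding a "trigger" feature that never appears in legitimate data; the trigger, poisoning rate, and target label are all hypothetical simplifications of the subtler patterns real attackers embed.

```python
# Minimal sketch of an integrity (backdoor) poisoning attack.
# Assumes Python with scikit-learn; the trigger feature, poisoning rate,
# and target label are illustrative, not a recipe for any real system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

X, y = make_classification(n_samples=6000, n_features=20, random_state=1)
# Add a "trigger" feature that is zero in all legitimate data.
X = np.hstack([X, np.zeros((X.shape[0], 1))])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

TRIGGER_COL = X.shape[1] - 1
TRIGGER_VALUE = 5.0
TARGET_LABEL = 0          # attacker wants triggered inputs classified "benign"

# Poison ~1% of the training set: stamp the trigger and force the label.
n_poison = int(0.01 * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
X_bd, y_bd = X_train.copy(), y_train.copy()
X_bd[idx, TRIGGER_COL] = TRIGGER_VALUE
y_bd[idx] = TARGET_LABEL

model = LogisticRegression(max_iter=1000).fit(X_bd, y_bd)

# On clean inputs the model still looks healthy, so the back door is easy
# to miss during ordinary evaluation...
print("accuracy on clean test data:", round(model.score(X_test, y_test), 3))

# ...but stamping any input with the trigger steers it to the attacker's label.
X_trig = X_test.copy()
X_trig[:, TRIGGER_COL] = TRIGGER_VALUE
hit_rate = (model.predict(X_trig) == TARGET_LABEL).mean()
print("triggered inputs classified as the target label:", round(hit_rate, 3))
```

Because the trigger is absent from clean data, the model's ordinary evaluation metrics barely change, which is exactly what makes this kind of back door hard to spot.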

What Threats Does It Pose? Real-World Data Poisoning Examples

For a real-world example of data poisoning, consider the fake news that bombards social media. When the algorithms behind a platform are fed corrupted signals, incorrect and inaccurate information, or "fake news," replaces genuine news sources in users' feeds.

Researchers tricked one of Google's image-recognition neural networks into identifying a 3-D printed turtle as a rifle. Security researchers in China found a way to make a Tesla steer into oncoming traffic using a simple arrangement of stickers; the attack was staged under controlled conditions, but it gave a sobering idea of how easily ML tools can be manipulated. Microsoft's Twitter chatbot, Tay, was corrupted by malicious user input in less than 24 hours.

The most prominent example of AI being turned to malicious purposes is ChatGPT. The AI-driven natural language processing tool has taken the tech world by storm, yet cybersecurity experts quickly discovered that it could easily be coaxed into writing legitimate-looking phishing emails. They have also noted instances of unskilled hackers leveraging ChatGPT to help them write malicious code, effectively democratizing cybercrime.

Even worse, AI data poisoning takes little time or effort. According to Bloomberg, a simple backdoor could bypass defenses after poisoning less than 0.7% of the data submitted to a machine-learning system. In other words, a relatively small number of malicious samples slipped into open-source training data can be enough to make an ML tool vulnerable.

How Does AI Work for Cybersecurity Defense?

The same tools threat actors use to expedite attacks can also be used to defend against them; with a bit of creativity, security teams can turn the same AI tools and techniques back on the attackers. Companies with fully deployed security AI and automation saved an average of $3.05 million per data breach in 2022 compared with those without, largely because AI can detect and respond to threats almost instantly.

Most traditional cybersecurity techniques can only detect known threats. AI can potentially prevent new attacks by learning patterns of behavior autonomously, and it can do so at a speed that outstrips traditional methods. That speed matters as attacks grow more sophisticated and tools like ChatGPT make it easy for hackers with no coding experience to produce complex code. Essentially, AI and ML tools can extend the scope of threat detection and enhance security. Because machines perform calculations far faster than the human brain, ML-enabled defenses could raise the bar on detection capabilities. The hope is cyber threat mitigation at machine speed, backed by the creativity of an ethical hacking team.

In cybersecurity defense, machine learning tools have shown great promise in areas like automated vulnerability testing and anomaly detection. Working at machine speed, they can establish a baseline of normal behavior so that defenders can spot deviations from it more quickly and accurately.
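As a minimal sketch of that idea, the example below learns a baseline from "normal" events and flags outliers. It assumes Python with scikit-learn, and the three event features (bytes sent, requests per minute, distinct ports touched) are hypothetical placeholders rather than a real telemetry schema.

```python
# Minimal sketch of ML-assisted anomaly detection: learn a baseline of
# "normal" behavior, then flag events that deviate from it.
# Assumes Python with scikit-learn; the event features are hypothetical
# placeholders, not a real telemetry schema.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline period: features extracted from ordinary traffic/log events
# (bytes sent, requests per minute, distinct ports touched).
normal_events = rng.normal(loc=[500.0, 10.0, 3.0],
                           scale=[50.0, 2.0, 1.0],
                           size=(2000, 3))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_events)

# New observations: two ordinary events and one that looks like exfiltration.
new_events = np.array([
    [510.0, 11.0, 3.0],    # in line with the baseline
    [480.0,  9.0, 4.0],    # in line with the baseline
    [9000.0, 90.0, 40.0],  # far outside the learned baseline
])

# predict() returns +1 for inliers and -1 for anomalies.
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{status}: {event}")
```

In practice such a baseline would be retrained regularly and paired with analyst review, not least because a model trained on poisoned "normal" data inherits the weaknesses described earlier.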

ChatGPT showed how easily unscrupulous threat actors can manipulate new technology, but both red and blue security teams can use that same technology to mitigate attacks. Machine learning and AI hold the potential to automate detection and defensive responses, and to create tools that dynamically adapt to stop attacks in progress. Decoy "honeypots" generated through machine learning could be tailored to each new actor who enters, sweetening the pot and luring attackers into revealing their capabilities and possibly their identities.

So What’s Next?

As with all new technology, AI presents both threats and opportunities. ML tools will enable machine-speed responses to cyberattacks, which will, in turn, free human security teams to undertake more complex investigations. Using AI tools, security experts can analyze, study, and understand new and ongoing attacks. While cybercriminals can weaponize AI to improve their techniques, its great boon is that it constantly learns: much like the human immune system, AI defenses can heal themselves and adapt their strategies as each new attack arrives. AI and ML tools could also reshape system engineering and defense architectures as self-configuring networks become more prevalent. The practical applications of AI in cybersecurity are almost limitless.

PASTA Threat Modeling: The Process for Attack Simulation and Threat Analysis

VerSprite leverages our PASTA (Process for Attack Simulation and Threat Analysis) methodology to apply a risk-based approach to threat modeling. This methodology integrates business impact, inherent application risk, trust boundaries among application components, correlated threats, and attack patterns that exploit identified weaknesses from the threat modeling exercises.