Artificial Intelligence (AI) is supposed to change our lives for the better. It should help us make empirical decisions based on countless data points. It should see the unseen. And in the coming decade, we will definitely see leaps in AI use and efficiency; new research from global professional association ISACA on the next decade of tech identifies AI/machine learning as the enterprise technology anticipated to be the most important of the 2020s. In our realm of cybersecurity, AI will revolutionize the way we combat cybercrime. However, I also see the use of AI as a potential weakness in our cybersecurity posture.
To me, AI is like a child: it comes with great potential, but to achieve that potential, it needs to learn. Unlike a human child, however, AI needs significantly more examples and well-defined rules for how to handle one situation versus another. This child has as much potential to grow up doing evil as it does to grow up doing good. And even today, there are vulnerabilities in AI implementations that cybercriminals are abusing.
Let's take pattern recognition as an example. When monitoring countless logs, AI learns the patterns of standard behavior while staying on the lookout for anomalies, especially malicious ones. An attacker with a good understanding of how such AI works can therefore introduce a gradual change of behavior, shifting from normal to malicious so slowly that the system normalizes the malicious activity. Or, taking the opposite direction, the same attacker can vilify normal behavior, making the AI block legitimate patterns. This would not happen overnight, but a determined and skillful attacker would stand a real chance not only of tainting but of fundamentally changing the AI's functionality to suit his or her needs.
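To make the "normalizing" attack concrete, here is a minimal sketch in Python. It assumes the monitoring system uses a simple rolling z-score baseline; real products are far more sophisticated, and the detector, names and numbers below are purely illustrative. A sudden spike is flagged immediately, but the same endpoint reached in small steps is quietly absorbed into the baseline:

```python
import statistics

# Toy rolling z-score anomaly detector, standing in for the statistical
# baseline a log-monitoring system might learn automatically.
class BaselineDetector:
    def __init__(self, window=50, threshold=3.0):
        self.window = window        # how many recent observations define "normal"
        self.threshold = threshold  # z-score above which we raise an alert
        self.history = []

    def observe(self, value):
        """Return True if `value` is anomalous relative to the learned baseline."""
        if len(self.history) >= self.window:
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history) or 1e-9
            is_anomaly = abs(value - mean) / stdev > self.threshold
        else:
            is_anomaly = False  # still learning; accept everything
        self.history.append(value)
        self.history = self.history[-self.window:]  # keep a sliding window
        return is_anomaly

detector = BaselineDetector()

# Phase 1: legitimate activity, e.g. a steady 100 requests per minute.
for _ in range(200):
    detector.observe(100)

# A sudden jump to 500 is caught immediately...
print(detector.observe(500))   # prints True: flagged as anomalous

# ...but raising the rate by only 2 per step never crosses the threshold,
# and the sliding window absorbs each step as the new "normal".
for rate in range(100, 501, 2):
    assert not detector.observe(rate)

print(detector.observe(500))   # prints False: 500 is now baseline behavior
```

The same mechanism runs in reverse for the "vilify" variant: feeding the detector poisoned examples near legitimate patterns can drag the baseline until normal activity starts scoring as anomalous.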
We are still very far away from needing something similar to Isaac Asimov's Three Laws of Robotics; our current foray into artificial intelligence is nowhere near sentience. Meanwhile, cybercriminals are using AI for their ill-gotten gains without a second thought about the side effects. They can teach their AI to steal, abuse or impersonate with lightning speed, making decisions far faster than defenses, even those powered by another AI, can adjust.
Today, botnets are not devoid of intelligence; while many of them are rule-driven, there are automatic adaptability settings that allow systems to log activity and learn from those logs. Here is an example related to ransomware that is already in the works. A typical ransomware cycle involves breaking into a remote system, leveraging access and vulnerabilities to reach a connected network, evaluating the value of the organization and then encrypting the data. There are already talks among the bad guys about "automating" all of these parts and letting the virus decide what to do.

Imagine that in the not-so-distant future, with polymorphic viruses, we will have a malicious payload that looks completely normal during installation and then, hours or days later, starts manifesting malicious behavior. Part by part, it will assemble a malicious instruction set disguised as legitimate functionality, and then it will start exploring the local machine and network. It will methodically disable security safeguards, delete backups and begin evaluating system data for its potential value, exfiltrating it as collateral. It will look at the company profile and come up with an amount to extort. It will encrypt the data, create a dedicated payment method and be flexible enough to negotiate, all as a single AI function without the need for human guidance.
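Defenders do not need to wait for that future to prepare for it; the lifecycle itself can be modeled today. Below is a minimal defensive-side sketch in Python that maps the stages described above to the telemetry each one tends to generate, so correlated alerts can be read as one automated campaign rather than unrelated noise. The stage names and signal strings are illustrative assumptions of mine, not any product's schema:

```python
from enum import Enum, auto

# Illustrative model of the automated ransomware lifecycle described above.
class Stage(Enum):
    INITIAL_ACCESS = auto()           # breaking into a remote system
    LATERAL_MOVEMENT = auto()         # reaching the connected network
    DEFENSE_EVASION = auto()          # disabling safeguards, deleting backups
    TARGET_VALUATION = auto()         # sizing up data and the company profile
    ENCRYPTION_AND_EXTORTION = auto() # encrypting and demanding payment

# Example signals a monitoring pipeline might correlate per stage
# (hypothetical strings, standing in for real detection rules).
SIGNALS = {
    Stage.INITIAL_ACCESS: {"new remote session from unknown host"},
    Stage.LATERAL_MOVEMENT: {"internal scanning", "credential reuse across hosts"},
    Stage.DEFENSE_EVASION: {"security service stopped", "backup jobs deleted"},
    Stage.TARGET_VALUATION: {"bulk reads of file shares", "archive staging"},
    Stage.ENCRYPTION_AND_EXTORTION: {"mass file renames", "ransom note dropped"},
}

def stages_observed(events):
    """Return the lifecycle stages whose signals appear in `events`."""
    return [stage for stage, signals in SIGNALS.items()
            if any(sig in events for sig in signals)]

# The more consecutive stages light up together, the stronger the case
# that the activity is a single automated campaign.
events = {"security service stopped", "mass file renames"}
print(stages_observed(events))  # -> defense evasion and encryption stages
```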
If you think this is science fiction, think again: most of these components have already been implemented by cybercriminals to ease their "hard" work. Do we have easy and affordable AI solutions to counter this type of attack? Not even close.
We are starting the 2020s with great hope that artificial intelligence will solve many of our problems, and we will increasingly rely on solutions that claim to have AI somewhere inside. It is important to realize that we are still in a cyber arms race, and our adversary is not afraid to use new technology and adopt AI. We need to be proactive and not shy away from leveraging new AI technology for our defenses.