AI and Machine Learning Are Taking Off

Posted on by Robert Ackerman

Two years ago this month, a Microsoft Windows customer in North Carolina became the first would-be victim of a new cyberattack campaign by Emotet, a Russia-based malware strain and financial cybercrime operation. Within the next 30 minutes, the campaign tried to attack more than 1,000 additional potential victims, and far more shortly after that.

Fortunately, Microsoft stopped the attack almost immediately, thanks to its built-in Windows Defender cybersecurity software, which is equipped with an array of machine learning models. One of them examined Emotet's behavior and blocked the file from executing.

This helps illustrate why AI and machine learning, a subset of AI that allows a machine to learn automatically from past data, including the nature of previous attacks and likely attacks of a similar vein, are gathering momentum. According to Market Research Future, machine learning alone will grow to a $31 billion global market by 2024, reflecting a compound annual growth rate of 43 percent.

When a new form of malware appears, whether tweaked or brand-new, the system can check it against the database, examining the code and blocking the attack on the basis of similar malicious events. And as the Microsoft example illustrates, it typically does this almost immediately, preventing attacks from festering and spreading.
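As a rough illustration of the lookup described above, a detection system can compare a new sample's features against a database of known malicious samples and block anything sufficiently similar. The feature vectors, sample names and threshold below are all hypothetical; production systems use far richer features and carefully tuned models.

```python
import math

# Hypothetical feature vectors for known malicious samples
# (e.g., counts of suspicious API calls, entropy buckets, etc.)
KNOWN_MALICIOUS = {
    "emotet_variant_a": [12, 3, 9, 1],
    "trickbot_sample":  [2, 8, 0, 7],
}

BLOCK_THRESHOLD = 0.95  # assumed cutoff; real systems tune this carefully


def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (0.0 to 1.0 here)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def should_block(sample_features):
    """Block if the sample closely resembles any known malicious sample."""
    return any(
        cosine_similarity(sample_features, known) >= BLOCK_THRESHOLD
        for known in KNOWN_MALICIOUS.values()
    )
```

The point of the similarity threshold is exactly what the paragraph describes: a tweaked variant need not match any signature byte-for-byte to be caught, only to resemble previously seen malicious behavior closely enough.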

Highly advanced digitization in cybersecurity is marching forward aggressively. For many years, the traditional approach to thwart cyberattacks was to focus on the perimeter to repel intruders. Unfortunately, the perimeter has become a porous screen. “If you assume the perimeter is broken, you have to figure that everything behind it is vulnerable,” says Ellison Anne Williams, the Founder and CEO of Enveil, a pioneering Maryland-based data security company.

A growing number of sizable companies couldn’t agree more and so have begun embracing AI and machine learning to strengthen their cybersecurity.

Perimeter protection still has value. But without the help of machine learning technology, cyber pros are forced to spend a lot of time analyzing an array of threats or waiting until an attack happens before diagnosing the issue. Meanwhile, many security systems are tuned to react to many known issues with a barrage of purely reflexive alerts, leaving human teams to parse them and determine whether any action is necessary.

Ultimately, decision fatigue—i.e., burnout, and an accompanying increase in mistakes—becomes inevitable.

This shouldn’t suggest that human cyber experts should be replaced with technology. Traditional security techniques use signatures or indicators of compromise to identify threats, and studies have shown that signature-based techniques can detect roughly 90 percent of threats. Replacing traditional techniques with machine learning can increase this, but not without an explosion of false positives. The best solution by far is to combine both traditional methods and machine learning.
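The combined approach described above can be sketched as a simple triage pipeline: cheap, high-precision signature matching runs first, an ML-style risk score handles what signatures miss, and ambiguous scores are escalated to human analysts rather than blocked outright, keeping false positives in check. The signature set, scoring function and thresholds are all illustrative assumptions, not any vendor's actual logic.

```python
# Known-bad indicators of compromise, e.g. file hashes (illustrative).
KNOWN_SIGNATURES = {"44d88612fea8a8f36de82e1278abb02f"}


def ml_risk_score(features):
    # Stand-in for a trained model; returns a probability-like score in [0, 1].
    return min(1.0, sum(features) / 10.0)


def triage(file_hash, features, block_at=0.9, review_at=0.5):
    """Combine signature matching with an ML score, escalating gray areas."""
    if file_hash in KNOWN_SIGNATURES:
        return "block"         # traditional signature match: high confidence
    score = ml_risk_score(features)
    if score >= block_at:
        return "block"         # model is highly confident
    if score >= review_at:
        return "human_review"  # uncertain: escalate instead of auto-blocking
    return "allow"
```

Routing mid-range scores to analysts is the key design choice: it lets the model extend coverage beyond known signatures without turning every borderline score into a disruptive false positive.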

In the big picture, AI and machine learning do have their downsides. The biggest is that cybercriminals have started using the same techniques in a bid to make attacks easier and faster to execute and more effective.

Hackers, for example, have begun using AI to crack passwords faster and to make operations more efficient and profitable. For instance, they have begun using AI to conceal malicious code in benign applications. They program the code to execute at a specific time, say, six months after the application has been installed, or once a targeted number of users have subscribed to the application, maximizing the impact of the attack. These steps require the application of AI models.

Down the road, the biggest concern about the use of AI and machine learning in malware is that new strains might be able to learn from detection events. If a strain of malware were able to determine what caused its detection, it could avoid the same behavior or characteristics the next time around. If a worm's code was the reason for its detection, for instance, automated malware authors could rewrite it. More encryption will be required to stop this.

The good guys have issues as well, mostly financial, which is why most adopters of AI and machine learning at this point are large organizations. Companies and other entities need to invest a lot of time and money in resources such as computing power, memory and data to build and maintain AI and machine learning systems. In addition, security teams must unearth many different data sets of malicious code, malware samples and anomalies. This is typically beyond the reach of medium-sized entities.

For those who can foot the tab, however, plenty of help is out there. Just a few of the AI and machine learning vendors include:

+ CrowdStrike. CrowdStrike’s Falcon platform largely focuses on preventing endpoint attacks with real-time protection and 24/7 managed threat hunting. CrowdStrike is among the most popular cybersecurity companies.

+ Darktrace. Darktrace has injected AI into its platforms to identify a diverse range of threats at their earliest stages, including cloud-based vulnerabilities, insider attacks and state-sponsored espionage. The company says that AI gives security teams unparalleled threat visibility and the ability to respond to threats faster than any legacy system could.

+ Vade Secure. Vade Secure deploys AI and machine learning to protect more than 600 million mailboxes in 76 countries from a host of threats, including spear phishing, other malware and ransomware. With funding from venture firm General Catalyst Partners, Vade is capitalizing on the market disruption sparked by the industry shift from on-premise hosted email to the adoption of cloud-based email platforms.

Whether or not companies can afford to invest in AI and machine learning, they need to make sure that whatever cybersecurity they have doesn’t fall behind the times. The cost of being breached due to outdated technology will only grow as threats become more elaborate. Meanwhile, companies already using AI and machine learning need to make sure they continue to supplement this technology with human teams to support the infrastructure. No system is foolproof.

Rare is the company or other sizable entity that has yet to be breached or never will be. That's why the goal of AI and machine learning is the further mitigation of cyber risk, not its elimination. They represent additional arrows in the security quiver, albeit unusually important ones.

Robert Ackerman

Founder/Managing Director, AllegisCyber, AllegisCyber Capital


Blogs posted to the website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.
