AI and Machine Learning Are Growing Briskly Despite Cybercriminal Obstacles

Posted by Robert Ackerman

There are a number of reasons why cyberattacks and breaches continue to grow and inflict ever more damage despite substantial improvements in cybersecurity. A big one is the sheer volume of malware attacks launched globally each year. Estimates vary, but the figure is always in the millions, and that is simply too much for humans to handle.

This is why artificial intelligence (AI) and its key subfield, machine learning, have entered the cybersecurity landscape in recent years and are growing at a crisp pace. According to IT market research firm International Data Corporation, worldwide revenues for the AI market, including software, hardware, and services, grew more than 16 percent last year to $328 billion and are on course to grow even faster, exceeding $550 billion in 2024.

Not many companies detail just how much they already benefit from AI. But one notable exception is Microsoft, which a few years ago was attacked by cyber crooks using Trojan malware in a bid to install malicious cryptocurrency miners on hundreds of thousands of computers using Microsoft operating systems. Within a matter of minutes, Microsoft blocked more than 80,000 instances of Trojans and, a day later, an additional 400,000 instances.

All the credit went to Microsoft’s Windows Defender Antivirus—software that employs multiple layers of machine learning to identify and block perceived threats.

Microsoft is hardly alone in the AI arena. Today, for instance, Google is using machine learning to analyze threats against mobile endpoints. Other companies are examining how to embrace this technology to better protect the security of bring-your-own mobile devices.

A growing number of companies cannot deploy effective cybersecurity technology without relying heavily on machine learning, which can analyze patterns and learn from them to help prevent similar attacks and respond to changing behavior. This helps cybersecurity teams be more proactive in preventing threats and responding to active attacks in real time. In a nutshell, machine learning can make cybersecurity simpler, more proactive, less expensive, and substantially more effective.
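To make the pattern-learning idea concrete, here is a toy behavioral baseline in Python. This is a minimal sketch of the principle, not any vendor's actual model; the user names, numbers, and z-score threshold are all invented for illustration.

```python
import statistics

# Toy behavioral baseline: learn each user's typical login count per
# hour from history, then flag activity that deviates sharply from it.
# All names and thresholds here are illustrative only.

baseline = {
    "alice": [3, 4, 2, 3, 5, 4, 3],   # logins per hour, past week
    "bob":   [1, 1, 2, 1, 1, 2, 1],
}

def is_anomalous(user, observed, z_threshold=3.0):
    history = baseline[user]
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    z = (observed - mean) / stdev
    return z > z_threshold

print(is_anomalous("bob", 40))   # sudden burst of logins -> True
print(is_anomalous("alice", 4))  # normal activity -> False
```

Real systems replace this simple z-score with trained models over many behavioral features, but the underlying idea is the same: learn what normal looks like, then flag deviations in real time.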

Nonetheless, the AI and machine learning landscape must be viewed in perspective.

While AI is a big leap forward, it's also imperfect and far from its full potential. Moreover, in some key ways it's a curse as well as a blessing. Its weakness is that cybercriminals have also found ways to embrace AI and have begun using it to attack organizations more effectively. Adopting it isn't difficult for them, and because the technology requires massive quantities and diverse types of digital data, AI systems themselves are vulnerable to data breaches. Meanwhile, pernicious AI-powered attacks are growing, and the cost of developing AI applications is falling.

Because AI is a positive overall but a mixed bag in some ways, here are some pros and cons. Let’s start with an issue that sits roughly in the middle—the expense of AI.


+ There is a misconception that leveraging AI technologies is very costly. That was true in the past, when only giant technology companies such as Microsoft could afford to develop AI-powered software and applications. Things have changed as the underlying tools have matured and prices have fallen.

Pricing still varies, depending on factors such as the performance of the AI algorithms and the complexity of the solution. Overall, however, AI software generally doesn't exceed $300,000 for a third-party solution or a platform developed by freelance data scientists, including product development and rollout. Buyers should also budget for AI consulting services and for additional computing power and memory. The tab is minimal for big companies but can be cost-prohibitive for many smaller businesses.


+ Impressive AI tools. Composite fraud-detection engines, for instance, have shown stellar results in recognizing complex scam patterns, and advanced dashboards now provide comprehensive details about incidents.

+ Improved anti-malware protection. AI helps this software detect bad files, including new, previously unseen malware. The number of false positives also tends to increase, however.

+ Easier vulnerability management. AI tools continuously hunt for potential vulnerabilities, analyzing baseline user and endpoint behavior and sometimes even dark web discussions that may signal upcoming attacks.


+ AI fuzzing. This uses machine learning and related techniques to find vulnerabilities in an application or system. Enterprises and software vendors use it to find and fix flaws. But some hackers have access to the same technology and sometimes use it to more easily discover zero-day vulnerabilities in vendor software, i.e., security flaws that have yet to be patched.
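The core loop of any fuzzer, AI-guided or not, is mutate, run, observe. The sketch below shows that loop in miniature; `parse_record` is a deliberately buggy target invented for this example, and real AI fuzzers differ mainly in using learned models to pick smarter mutations rather than purely random ones.

```python
import random

def parse_record(data: bytes):
    # Deliberately buggy target, invented for illustration:
    # it assumes the payload is always printable ASCII.
    length = data[0]
    return data[1:1 + length].decode("ascii")  # raises on non-ASCII bytes

def mutate(seed: bytes) -> bytes:
    out = bytearray(seed)
    pos = random.randrange(len(out))
    out[pos] = random.randrange(256)  # flip one random byte
    return bytes(out)

def fuzz(seed: bytes, iterations: int = 10_000):
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

random.seed(0)  # deterministic run for the example
found = fuzz(b"\x05hello")
print(f"{len(found)} crashing inputs found")
```

The same loop serves both sides: a vendor triages the crashing inputs and patches the parser, while an attacker hunts the same crashes for exploitable flaws.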

+ Hackers can use machine learning to hide malware. Specifically, they can track network connection points and endpoint behavior and build patterns that mimic legitimate network traffic on a victim’s network. And algorithms they develop can extract data faster than a human, making it harder to prevent an attack.

+ Machine learning also enables cybercriminals to more effectively break into systems that rely on recognizing legitimate human users. ML, for example, empowers hackers to analyze vast password data sets and target their password guesses far more effectively.
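The statistical idea behind ML-assisted password guessing can be sketched in a few lines. The tiny "leaked" list below is made up for illustration; the point is that even simple character-pair frequencies learned from breached passwords let an attacker rank human-like guesses ahead of random ones. Defenders apply the same technique to audit password strength.

```python
from collections import Counter

# Learn character-pair (bigram) frequencies from a tiny, made-up
# breached-password sample, then score candidate guesses so the most
# human-like ones would be tried first.

leaked = ["password1", "passw0rd", "sunshine", "password123", "iloveyou"]

def pairs(s):
    return zip(s, s[1:])

bigrams = Counter()
for pw in leaked:
    bigrams.update(pairs(pw))

def likelihood(candidate: str) -> int:
    # Higher score = closer to the patterns seen in the leaked sample.
    return sum(bigrams[p] for p in pairs(candidate))

guesses = ["password9", "xk7#qz!m2"]
ranked = sorted(guesses, key=likelihood, reverse=True)
print(ranked[0])  # the human-pattern guess outranks the random string
```

Real attacks use far larger corpora and Markov or neural models, which is exactly why long, random passwords and multi-factor authentication matter.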

Given this variety of threats, the information technology sector is understandably wary of the possibility that government regulation of AI will become a reality. Last year, US federal financial regulators formally asked banks how they use AI. Nothing has happened yet. But it suggests the financial sector may eventually face government guidance—a roadblock for enterprises, while AI-empowered hackers continue to do whatever they want. Let’s hope this scenario doesn’t unfold.

Robert Ackerman

Founder/Managing Director, AllegisCyber, AllegisCyber Capital

Machine Learning & Artificial Intelligence

artificial intelligence & machine learning, software integrity, data lakes, anti-malware, security analytics

Blogs posted to the website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.
