Long anticipated, artificial intelligence (AI) is rapidly reshaping the digital world. It is taking hold everywhere, especially at work, steadily expanding, improving, and changing companies and lives for the better.
Unlike traditional cybersecurity approaches that often rely on manual intervention and predefined rulesets, AI introduces a new era of automation and intelligence-driven cybersecurity. At the heart of this transformation lie techniques such as machine learning and deep learning, a subset of machine learning that uses neural networks to simulate the decision-making power of the human brain. Both endow AI systems with the capacity to analyze vast amounts of data at unprecedented speed and scale.
In particular, machine learning has arguably become the most powerful digital technology today, training computers to learn from data so they can rapidly make predictions or decisions without being explicitly programmed. AI technology is also being used to turn big data into actionable information and, more recently, has intersected with cybersecurity, proving useful for both defensive and offensive security.
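As a minimal sketch of that idea (assuming Python with scikit-learn and a tiny, hypothetical dataset of login events, all chosen purely for illustration), a model trained on labeled examples can classify activity it has never seen, with no hand-written rules:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labeled examples: [login_hour, failed_attempts, new_device],
# labeled 0 (normal) or 1 (suspicious). No detection rules are written by
# hand; the model infers the pattern from the data itself.
X = [
    [9, 0, 0], [10, 1, 0], [14, 0, 0], [11, 0, 1],
    [3, 7, 1], [2, 9, 1], [4, 6, 1], [1, 8, 0],
]
y = [0, 0, 0, 0, 1, 1, 1, 1]

model = RandomForestClassifier(random_state=42).fit(X, y)

# Predict on an event the model has never seen: a 3 a.m. login with
# many failed attempts from a new device.
print(model.predict([[3, 8, 1]]))  # expected: [1], i.e. suspicious
```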
Defensively, AI can detect and analyze anomalies in network traffic or user behavior patterns that may reveal unauthorized access to a critical system. Offensively, it can be used to discover and reverse engineer zero-day exploits, which in turn lets developers create patches for vulnerabilities before they become public knowledge.
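To make the defensive example concrete, one common unsupervised approach (a plausible sketch, not necessarily what any particular vendor deploys) is to learn the shape of normal traffic and flag deviations, shown here with scikit-learn’s IsolationForest on hypothetical network-flow features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical network-flow features: [bytes_sent, duration_seconds].
# Train only on traffic presumed normal; the detector learns its shape.
rng = np.random.default_rng(1)
normal_traffic = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=1)
detector.fit(normal_traffic)

# Score new events: IsolationForest returns -1 for anomalies, 1 for inliers.
new_events = np.array([
    [520, 2.1],     # looks like ordinary traffic
    [50000, 0.1],   # huge burst in a tiny window: possible exfiltration
])
print(detector.predict(new_events))  # expected: [ 1 -1]
```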
This convergence of two digital technologies is on course to revolutionize cybersecurity, although it will require ongoing improvement and a critical balance between automation and human expertise. Humans bring a plethora of unique talents and inputs that AI has yet to fully emulate. A customer service representative, for instance, can empathize with a frustrated client and offer personalized solutions in a way that an AI chatbot cannot.
The challenge here, of course, is the enormous and continuing shortage of cybersecurity professionals. There are more than 500,000 open positions in the US and more than four million globally, both record highs.
Yet this problem isn’t keeping companies on the sidelines of AI. According to a recent survey by IBM, 34% of companies surveyed are using AI and another 42% are exploring it. Other research by Capgemini, a French multinational IT consulting firm, found that 63% of organizations plan to employ AI this year, and 69% said they believed they would not be able to respond to cyberattacks without it. Meanwhile, a report by New York-based Zion Market Research recently said the market for cybersecurity-related AI is growing at a 23% annual rate and is expected to reach $31 billion by next year.
The implementation of AI in cybersecurity provides multiple benefits. AI, for instance, enables the efficient automation of many manual tasks currently performed by humans, resulting in less time spent on those tasks and ultimately less disruption to business operations.
That said, AI models come with sizable challenges in addition to their significant advantages.
The accuracy of these models depends heavily on the quality and quantity of training data, the dataset used to teach an AI model how to perform a specific task; good training data helps AI learn more effectively and, ideally, generalize its knowledge to new situations. Another issue is that adversaries can employ techniques to deceive AI models, necessitating continuous retraining and updating to maintain effectiveness.
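To illustrate how such deception can work, here is a minimal sketch (a linear classifier and hypothetical feature data, chosen for simplicity rather than realism) of an evasion in the spirit of the fast gradient sign method, where a small, targeted perturbation flips the model’s verdict:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are feature vectors for network events,
# labeled benign (0) or malicious (1).
rng = np.random.default_rng(0)
X_benign = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
X_malicious = rng.normal(loc=2.0, scale=1.0, size=(100, 5))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)

# Take a sample the model correctly flags as malicious.
x = X_malicious[0]
print("original prediction:", clf.predict([x])[0])  # 1 (malicious)

# For a linear model, nudging each feature against the sign of its weight
# lowers the malicious score; a small epsilon can be enough to flip the label.
epsilon = 1.5
x_adv = x - epsilon * np.sign(clf.coef_[0])
print("perturbed prediction:", clf.predict([x_adv])[0])  # typically 0 (benign)
```

This is exactly why one-time training is not enough: a model that was accurate yesterday can be steered around today, so retraining on fresh, adversary-aware data has to be routine.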
Yet another challenge is that generative AI, a type of artificial intelligence that can create new content such as text, images, music, and videos, is developing faster than many companies can absorb. Some find themselves rushing to figure out whether the technology introduces new challenges or magnifies existing security weaknesses. In addition, some vendors have inundated businesses with so many AI-based features and offerings that the resulting AI bill of materials is hard to manage.
"The concern that most security leaders have is that there is no visibility, monitoring or explainability for some of these features,” Jeff Pollard, a cybersecurity analyst at Forrester Research recently told the media.
Still, the dearth of experienced cyber pros is likely the biggest AI issue of all, and one that will not be corrected anytime soon. Without human oversight, AI systems can perpetuate biases present in the data they are trained on. They also often lack the nuanced understanding of human context and emotions that is crucial in many situations for effective decision-making. And with so few cyber pros available, the unintended consequences AI systems sometimes produce can go unchecked.
Short-staffed cyber pros are doing what they can to strengthen AI-driven security; here are some of their key tasks:
+ Regular Audits and Assessments: Periodically evaluate the effectiveness of AI security systems and identify areas for improvement.
+ Careful Human Oversight: Ensure that human experts are involved in monitoring and responding to security incidents; one simple triage pattern is sketched after this list.
+ Continuous Training: Make sure security staff receive ongoing training that keeps them updated on emerging threats and best practices.
+ Diversified Security Measures: Implement a combination of technical, administrative, and physical controls to create the most robust defense possible.
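On the human-oversight point, here is a minimal, hypothetical sketch (the threshold, function name, and alert format are all illustrative assumptions, not an industry standard) of routing only low-confidence AI verdicts to analysts:

```python
# Hypothetical human-in-the-loop triage: the model's confidence decides
# whether an alert is handled automatically or escalated to an analyst.
AUTO_THRESHOLD = 0.95  # assumed cutoff; tuned per organization in practice

def triage(alert_id: str, malicious_probability: float) -> str:
    if malicious_probability >= AUTO_THRESHOLD:
        return f"{alert_id}: auto-contain (confidence {malicious_probability:.2f})"
    if malicious_probability <= 1 - AUTO_THRESHOLD:
        return f"{alert_id}: auto-dismiss (confidence {1 - malicious_probability:.2f})"
    # Everything in between is where human judgment matters most.
    return f"{alert_id}: escalate to analyst queue"

for alert, p in [("ALERT-001", 0.99), ("ALERT-002", 0.60), ("ALERT-003", 0.02)]:
    print(triage(alert, p))
```

The design choice is deliberate: automation absorbs the clear-cut volume, while scarce human expertise is reserved for the ambiguous middle.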
Fortunately, the intersection of AI and cybersecurity is clearly offering highly promising solutions to some of the most challenging problems confronting organizations. Yet the success of AI-driven security measures depends on continually overcoming challenges related to data, model complexity, and ever-evolving cyber threats. This is why AI and cybersecurity experts must keep pushing the envelope to create more vigorous defenses against the cyber threats of tomorrow.