Only a few years ago, big data security analytics was not an arrow in most quivers in the fight against cybercrime, a problem now estimated to cost more than $450 billion a year globally.
In fact, protecting a corporate enterprise and its data was roughly akin to securing a medieval castle – in essence, a big wall was created, surrounded by a deep moat. Guards determined who could cross a rope-drawn drawbridge and who could not. Once an approved guest crossed the bridge and it was lifted, the castle was once again presumed secure.
To say that this metaphor has become obsolete is a monumental understatement.
Now that we realize it's virtually impossible to prevent all intrusions, traditional perimeter protection tools have been deemed insufficient, upstaged by the need to monitor and detect malicious, anomalous activity within enterprise networks. The advent of cloud computing, bring-your-own-device initiatives and the Internet of Things enables far too many people to access data from outside a firewall.
Security Analytics Generates Fingerprints
By contrast, big data security analytics provides a fingerprint of the intruder, marking each nefarious step taken in the network, and can stop an attacker in his tracks far more quickly than has previously been possible. Rather than relying on basic approaches to identify attacks, such as search-and-query and digital signatures, this technology enables analysts to take a more sophisticated approach based on behavioral patterns and predictive analytics.
In general, the use of data is nothing new. Take, for example, a log file – a decades-old method for documenting system events. It’s often a good source to track down the details behind a breach – but, unfortunately, only after the fact. And the process takes much longer today because networks have more connections and more people are accessing these systems.
Enterprises now realize that context, as well as analytics, has become crucial in successfully confronting hackers. Data – by itself – is insufficient. How, for example, is a particular machine acting compared to its peers?
Buying Flashy Technology Had Been the Name of the Game
This evolution is healthy. As recently as two or three years ago, most organizations kept spending big after identifying a new category of threat, purchasing a technology to address it and then plugging it into their protective dam. The unfortunate result was a series of siloed solutions that didn't talk to each other. This set the stage for the first generation of security analytics: SIEM (security information and event management) software, which aggregated the information from all these security tools to provide a more holistic view.
The problem that followed, and it turned out to be huge, is that SIEM couldn't keep up with the escalating rate of sophisticated new cyberattacks and was insufficiently proactive. So today, enterprises have begun moving up to more sophisticated security analytics platforms, enriched with context and boasting threat detection, analysis of historical data, monitoring tools, incident response tools and multiple forms of threat intelligence.
This should not suggest that more work isn't needed. Historical data, for example, can be used to predict cyberattacks by establishing statistical baselines for what is "normal" and what is not, but this is tricky stuff. Not every outlier is a threat, nor is every threat an outlier, and this is particularly true of human behavior.
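To make the baseline idea concrete, here is a minimal sketch of how "normal" might be established statistically. The z-score threshold and the daily login counts are hypothetical, chosen purely for illustration; real platforms use far richer models than a single mean and standard deviation.

```python
# Illustrative sketch only: a statistical baseline for "normal" activity.
# The threshold value and the login counts below are hypothetical.
from statistics import mean, stdev

def find_outliers(history, current, z_threshold=3.0):
    """Flag values in `current` that deviate sharply from the baseline built on `history`."""
    mu = mean(history)
    sigma = stdev(history)
    outliers = []
    for value in current:
        z = (value - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            outliers.append(value)
    return outliers

# Hypothetical daily login counts for one user over two work weeks
baseline = [42, 38, 45, 40, 41, 39, 44, 43, 40, 42]
today = [41, 120, 43]  # 120 logins in a day stands out against the baseline
print(find_outliers(baseline, today))  # [120]
```

As the article notes, a flagged value like the 120 logins above is only a candidate: whether it is a threat still depends on context.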
Established Patterns in the Workplace are Typical
Consider, for example, typical work patterns. Many people go to the office Monday through Friday during "normal" business hours. Should they travel internationally for work, however, they're probably accessing the same corporate network at a different time from a different IP address located elsewhere in the world. A system could be programmed to ban network access under these conditions, but then traveling professionals wouldn't get much work done.
Another conundrum relates to insider threats. Headlines about security breaches tend to be about nefarious actors in other countries, but more breaches are caused by the action or failure of someone inside the company. A 2016 study by IBM found that 60 percent of all attacks were waged by, or connected to, insiders, mostly involving malicious intent.
In the overall picture, the fact is that humans are typically creatures of habit. Just as they come to work at the same time daily, they usually interact with technology in the same way. Deep analytics and AI are able to uncover deviations in behavior at the level of individual employees, making it much easier to spot signs that systems have been compromised.
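One simple way to picture a per-employee habit profile is as the set of resources a user touches routinely; anything outside that set is a deviation worth a closer look. The resource names and the minimum-count cutoff below are hypothetical, and this is a deliberately simplified stand-in for the statistical and machine-learning models real platforms apply.

```python
# Illustrative sketch: a per-employee behavioral profile built from habitual
# activity. Resource names and the min_count cutoff are hypothetical.
from collections import Counter

def build_profile(events, min_count=2):
    """Resources a user has touched routinely (at least `min_count` times)."""
    counts = Counter(events)
    return {resource for resource, n in counts.items() if n >= min_count}

def deviations(profile, new_events):
    """Events that fall outside the user's established habits."""
    return [e for e in new_events if e not in profile]

history = ["mail", "crm", "mail", "payroll", "crm", "mail", "payroll"]
profile = build_profile(history)
print(deviations(profile, ["mail", "hr-database", "crm"]))  # ['hr-database']
```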
Nonetheless, determining whether an outlier is a legitimate threat doesn’t stop at this point. It also requires striking a balance between the needs of users and their security-minded employer as much as possible.
Where Context Enters the Scene
Here is where context – the means of learning the whole story – enters the picture. If someone in corporate finance, for example, accessed a part of the network he has never visited before, this would probably seem strange at first blush. Whether it requires action, however, depends, among other things, on whether colleagues have done the same thing and, if so, when.
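The finance example above can be sketched as a triage step: an anomalous access is downgraded for routine review if colleagues recently touched the same resource, and escalated if there is no peer precedent. The user names, resource name, and seven-day window are all hypothetical illustration, not a real product's logic.

```python
# Illustrative sketch: using peer context to triage an anomaly. Names and the
# 7-day window are hypothetical.
from datetime import datetime, timedelta

def triage(event, peer_accesses, window_days=7):
    """Return 'escalate' unless a peer accessed the same resource within the window."""
    cutoff = event["time"] - timedelta(days=window_days)
    for access in peer_accesses:
        if (access["resource"] == event["resource"]
                and access["user"] != event["user"]
                and access["time"] >= cutoff):
            return "review"  # colleagues did the same thing recently: likely benign
    return "escalate"        # no peer precedent: treat as a genuine outlier

now = datetime(2017, 5, 1)
event = {"user": "alice", "resource": "ledger-archive", "time": now}
peers = [{"user": "bob", "resource": "ledger-archive", "time": now - timedelta(days=2)}]
print(triage(event, peers))  # review
```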
Weaving this kind of contextual reasoning into algorithms is still at a nascent stage. In fact, more work needs to be done on multiple fronts regarding the relationship between humans and computers as it relates to security.
Most important at this juncture, however, is the big picture, and it leans positive. Defenses against cyberattacks are increasingly sophisticated. To be sure, much remains to be done. Still needed, for instance, is innovation allowing enterprises to detect attacks in real time and, beyond that, the ability to predict attacks and identify their sponsors. These efforts and others are the only rational path to pursue in a world teeming with ever more skilled hackers.