At RSAC 2018, RSA Conference blogger Tony Kontzer sat in on a number of sessions. Check out the posts below on security strategy and operations sessions to see what happened at last year’s Conference and get a taste of what you can expect at RSAC 2019!
AWS CISO: If You Want Your Data to Be Secure, Keep Humans Away From It
"The number one thing I worry about is humans," said AWS Chief Information Security Officer Stephen Schmidt. "It's almost always a person who does something to make a problem come to life."
And the story behind that story is this: Most cyber security tools, techniques and strategies were developed with hardware failure and attackers in mind. As a result, the risks of simple human failure often fall through the cracks, creating gaps in an organization's protection. And when humans must access certain pools of critical data to do their jobs (at AWS, that data is code), weighing those requirements against the need to restrict access becomes difficult.
Schmidt, however, has taken a number of steps to shore that up at AWS. And when Schmidt shares what AWS is doing on the security front, people listen. That kind of gravitas comes with being the world's biggest provider of cloud-based computing resources.
To start with, in identifying those security gaps, Schmidt said he asks himself a simple question.
"I don't think we can trust or secure that which we cannot see," he said. "So, what are our blind spots?"
The answer is that they are almost always where humans interact with data. Hence, Schmidt's strategy is straightforward: Get humans away from the data. To do that, his team had to be clear on what constitutes the baseline environment that's critical to securing the AWS cloud. The environment has to scale on a dime in order to keep up with the business and adapt to all the interactions with data.
To make matters more complicated, AWS doesn't have much of an operational staff because, as Schmidt pointed out, even when it comes to security, the company tends to hire developer types who can build stuff, and they don't like doing repetitive tasks.
In other words, a big part of the answer is automation, and in AWS's continuous integration and delivery environment, that means monitoring access to the mountains of code its developers depend on.
"We build automation constantly," said Schmidt. "There aren't enough qualified security people in the world to do all this stuff manually."
So, for example, when it comes to access privileges, every month Schmidt gets a report on who has access to what code, and he can revoke access for those who don't need it anymore. He also uses behavioral analytics to get a clear picture of what baseline behavior patterns look like.
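A monthly access review like the one Schmidt describes lends itself to automation. The sketch below is purely illustrative, not an AWS API or tool: the record format, the 90-day staleness window, and the function name are all assumptions made for the example.

```python
from datetime import datetime, timedelta

# Hypothetical access records: (user, repo, timestamp of last access).
# Grants unused for longer than this window get flagged for revocation.
STALE_AFTER = timedelta(days=90)

def find_stale_grants(records, now):
    """Return (user, repo) pairs whose access hasn't been exercised recently."""
    return [(user, repo) for user, repo, last_access in records
            if now - last_access > STALE_AFTER]

records = [
    ("alice", "billing-service", datetime(2018, 4, 1)),   # active this month
    ("bob",   "billing-service", datetime(2017, 11, 2)),  # idle for months
]
stale = find_stale_grants(records, now=datetime(2018, 4, 16))
# Only bob's grant exceeds the 90-day window, so only he is flagged
```

In practice the flagged list would feed a revocation workflow rather than a human spreadsheet, which is the point: the review itself stays away from the data.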
Schmidt said he takes steps like this through every stage of the continuous integration/delivery process, from source control, building and testing to production and maintenance. In other words, he's embedded security into the entire software development process, making it a core component of AWS's value proposition.
"It's not enough to get a seat at the table," Schmidt said, using a familiar security refrain. "We need to be building the table itself."
Schmidt also offered some tips to anyone in the audience who's cranking out software and wants to make sure it's secure when it's released. He recommended understanding how software is created and shipped, cataloging controls and visibility into the continuous integration/delivery pipeline, and keeping a record of the percentage of workloads that are built on automation versus manual processes.
And then there's that separating humans from data thing, which Schmidt said requires an aggressive approach. Along those lines, he suggested setting a goal to reduce human access to code (or whatever critical data a company's employees touch most often) by 80 percent, a figure he intentionally sets so high that it would be impossible to achieve without automation.
Perhaps most importantly, he called upon security practitioners to document every instance of human interaction with systems that process data, a process he said is certain to raise eyebrows once people see how often it's happening.
"Every one of these interactions brings the possibility of failure," said Schmidt.
If that thought doesn't keep CISOs on their toes, nothing will.
Threat Researchers Panel: New Attack Vectors Present Plenty of Reasons for Concern
Message to all who attended the SANS Institute's annual threat researcher panel keynote at the RSA Conference in San Francisco: It only felt like Chicken Little had taken over the proceedings.
No, the sky is not falling, although it may have seemed that way while listening to the trio of cyber security brainiacs on stage tell us of the horrible (and apparently very difficult to counter) new attack vectors that are rearing their heads with alarming frequency.
And perhaps none of the three had as scary a message as James Lyne, head of research and development at SANS, who was subbing for ailing colleague Michael Assante, considered one of the world's foremost experts on threats to critical infrastructure. Assante believes the world's critical infrastructure is in great peril, and Lyne communicated those concerns with aplomb.
In particular, Lyne stressed that as human beings depend more on technology and automation to hold our world together, and as more and more of that technology is interconnected via the Internet of Things, attackers are becoming wise to the fact that we're not protecting this stuff anywhere near as much as we do our mainstream computing systems.
"We have an environment that moves slowly, that is targeted more, and that is less prepared," said Lyne. "These devices are about to experience a level of focus they've never seen before."
Even worse, attackers have figured out that the best place to focus their efforts is on safety systems, the failure of which can pose a threat to life. At one point, Lyne reacted to a nervous laugh that rose above the din of the conference hall by saying, "That’s an appropriate reaction."
Lyne pointed out that, unlike the operating systems we use every day, industrial control systems, which are used to manage everything from the electrical grid and traffic signals to chemical processing and telecommunications systems, have relied on obscurity as a key component of their security profiles. But that obscurity is about to be blown wide open, and with billions upon billions of devices soon to connect to the IoT, the potential battlefront is dizzying.
And if the bad guys work their way to the increasingly abundant sensors that deliver the data those systems use to make decisions or generate alerts, watch out.
"When your source of truth is lying to you," said Lyne, "you end up with very manipulative, very concerning, and very hard to detect attacks."
And it's not like Lyne's colleagues on stage weren't also raising red flags. Ed Skoudis, SANS' penetration testing curriculum lead and a longtime RSA Conference favorite, warned attendees that he's seeing a lot more action in the area of leakage from code and data repositories. Many companies are using cloud-based storage tools from Amazon, Google and the like, but Skoudis said these tools typically aren't configured properly to meet organizations' security requirements.
This has been a problem for entities like Time Warner, Uber and the U.S. Army, all of which Skoudis said have been bitten by such leakages.
Given that backdrop, cyber security leaders need to take steps such as creating a data asset inventory, and putting someone in charge of managing that knowledge.
"If you don't know what your data assets are, and you continuously put them on computers and systems you don't operate, you've got a problem," Skoudis said.
Fortunately, he said there are tools to help. Two of these, git-seekret and git-secrets, help to ensure that developers don't inadvertently submit code containing secrets into a repository. Another, gitrob, searches for sensitive information in those repositories. Additionally, all of the big cloud providers offer tools that work in their environments. Amazon Macie identifies sensitive data in S3. Microsoft's Azure SQL Threat Detection looks for attack patterns. Google Cloud has a data loss prevention API.
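The core idea behind tools like git-secrets is simple: pattern-match content against known credential formats before a commit ever lands in a repository. This minimal sketch mimics that idea with a single regex for AWS access key IDs; it is not the tools' actual implementation, and real scanners ship many more patterns.

```python
import re

# AWS access key IDs have a well-known shape: "AKIA" followed by
# 16 uppercase alphanumeric characters.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_secrets(text):
    """Return any substrings that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)

clean = scan_for_secrets('bucket = "my-app-logs"')
leaky = scan_for_secrets('aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"')
# clean finds nothing; leaky catches the key before it can be committed
```

Hooked into a pre-commit check, a scan like this blocks the commit whenever the result is non-empty, which is essentially how git-secrets protects a repository.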
That said, all the tools in the world won't offset all of the risks that come with storing important data in cloud repositories.
"When you're putting data in the cloud," Skoudis said, "you've got to keep track of it carefully."
Just as malicious—if not as scary—is the emerging attack vector of cryptocoin mining. More specifically, the bad guys steal your or your customers' computer processing power to support their cryptocoin mining activities, which can be quite profitable.
The rise of this attack profile, which doesn’t result in any stolen assets, per se, is a good news-bad news development for security leaders.
"Nobody wants your data anymore," Johannes Ullrich, dean of research at SANS, told the packed keynote hall. "They already have it all."
What makes cryptocoin mining especially difficult to defend against is not only that it typically goes undetected for months, but that it also doesn't have to be reported to any regulatory authorities because no data has been stolen. The resulting absence of attention from authorities and the media lets hackers keep the low profile most of them prefer, making this vector all the more attractive.
Ullrich told of one case in which an infiltrator managed to install a cryptocoin mining server under the raised floor of an enterprise's data center. Given that, he said, maybe it's time for cyber security leaders to consider infrared heat detectors as part of their defense profiles.
What Ullrich finds most disturbing about the cryptocoin mining trend is the questions it raises about infected hardware. And those concerns only grow when the hardware in question belongs to a cloud provider.
"As a software developer, I always trust that hardware does its job right," said Ullrich. "If you can't trust hardware, who can you trust?"
The answer, according to Ullrich's closing list of takeaways (which included a caution to beware of the cloud and a reminder that software doesn't isolate data; only physically isolated hardware does), is simple: "Trust no one."
There's a powerful security lesson in those three little words.
Is Darwinism the Answer to the Cyber Security Challenges Enterprises Face Today?
The fact that Darwin's principles of evolution and adaptation can be applied to security should come as a surprise to no one. Darwin's work percolates through everything human beings do. His theories have proven timeless and endlessly applicable.
Perhaps the better question is why the cyber security industry hasn't adopted Darwinism before now.
Better late than never.
During a well-attended session at this year's RSA Conference in San Francisco, two executives from threat detection vendor Sophos made a strong argument as to why establishing Darwinian security strategies makes more sense than ever.
More than anything, it comes down to two inescapable realities. First, business, and the data it relies on, is moving faster and in larger quantities, making it impossible for humans to keep up. Second, it is subjected to a wider array of attacks than ever before, and thus is collecting piles of data about the failures that allow those attacks to happen. That offers huge opportunities for learning, and thus evolution.
Meanwhile, vendors have put out a ton of innovative products to help secure endpoints and protect the mushrooming array of resources, and yet they've managed to come up short in terms of enabling businesses to fully evolve their approach to cyber security, said Dan Schiappa, general manager and senior VP of products for Sophos.
"Why do we not feel any safer?" asked Schiappa. "Because you can't manage what you don't measure, you can't fix what you don't know is broken, and lastly, you can't secure what you don't know is there."
Along those lines, Schiappa pointed out that Darwin's theories were based on four critical steps that enable evolution to occur: discovery, identification, analysis and response. For early man, that meant spotting possible food, determining whether it was predator or prey, determining a course of action, and then taking that action.
Interestingly, this tracks perfectly with how cyber security is done. Devices, networks, apps and data are discovered. Those assets are identified and organized, and then correlated and analyzed as events, and finally an automated response or enforcement is carried out.
Along the way, every piece of data can inform not only a current security situation, but future situations as well. For instance, assets come in different classes. Notebooks, servers, mobile devices, Internet of Things devices — they all have different access policies, security protocols and risk profiles.
"You have distinct knowledge that can be learned from each of these asset classes," said Sophos CTO Joe Levy.
So how do you apply Darwinian security principles? Sophos is trying to jumpstart things with a new approach it's dubbed SEAR, which stands for sensors (discovery), events (identification), analytics and response. The idea is to use analytics to create mathematical models from events, and continuously analyze those models to discover anomalies, monitor high-interaction interfaces, and formulate adaptive responses.
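The session didn't detail SEAR's internals, but the analytics step can be pictured as learning a statistical baseline from past events and flagging deviations from it. A minimal, hypothetical sketch using a mean/standard-deviation baseline over an hourly event rate (the function name, data, and three-sigma threshold are all assumptions for illustration):

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the mean of the learned baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma

# Logins per hour over the recent past form the learned baseline:
# mean 100, sample standard deviation 2.
baseline = [98, 102, 97, 101, 99, 103, 100, 100]

normal_hour = is_anomalous(baseline, 104)  # within 3 std devs: not flagged
spike = is_anomalous(baseline, 250)        # far outside: flagged for response
```

The "Darwin part" is the feedback loop around a model like this: each flagged event becomes new training data, so the baseline, and the responses tied to it, keep adapting.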
"This is the Darwin part," said Schiappa. "It's self-adapting, learning from its analytics, and it can change things."
Schiappa and Levy said the approach delivers results. It enables real-time threat intelligence that applies the lessons of the models it's analyzed. It enables security parameters to be changed on the fly as security scores and risk postures fluctuate. And, if enough companies adopt the practice, it can lead to extending analytics to a set of APIs to enable bidirectional exchanges of information, thus creating a collaborative, living knowledge base of security incidents.
That would, in theory, lead to stronger products, expanded services, and new partnerships, all of which would feed the Darwinian protection loop.
"The concept is to do this around an autonomous IT security ecosystem," said Schiappa. "We think this is the ecosystem the industry should play with."
RSAC Panel: AI- and Machine Learning-Based Security Products Are Here, but Adequate Testing Methods Aren't
If there's one certainty cyber security professionals brought home with them from this year's RSA Conference in San Francisco, it's this: Ready or not, artificial intelligence is coming to a security strategy near you.
Whether that means cyber security teams will get ahead of the trend and start experimenting with AI-infused security tools, or that they'll passively wait until they're forced to contend with AI-powered attacks, there's no avoiding the pending explosion of this still nascent technology.
What makes this prospect especially worrisome to security leaders is how little they really know about the AI products about which they have to make purchasing decisions. At its most basic, it comes down to a cart-before-the-horse issue: AI is here, but sufficiently sophisticated testing methods are not, leaving buyers to wonder exactly what they're getting.
"Artificial intelligence and machine learning are happening because we've got the data, we've got the compute power, and we've got the data scientists," Chad Skipper, VP of competitive intelligence and product testing at AI-based threat prevention vendor Cylance, said during a final-day panel discussion on evaluating AI-based security products. "In order to test the dataset, the advances in the testing methodologies have to introduce lots of unknowns. That's how you test the ability of AI to detect tomorrow's threat today."
But, as Skipper admitted, introducing unknowns is extremely difficult. Most of today's machine learning algorithms are trained with the known, but are asked to predict the unknown (or, more accurately, the yet-to-occur) based on that training. The simple reality is that there are no easy ways to accumulate data on unknown malware or other threats.
Liam Randell, president of Critical Stack, Capital One's secure container orchestration spinoff, said he has no choice but to test products against real users dealing with live unknowns, but he'd like to introduce unknowns in a more controlled environment, with simulated users.
"I would love to have products or services that allow me to introduce randomization as part of a unit test," said Randell.
Even without the unknowns, third-party testing of AI-based security products is already a very complicated affair, with testers evaluating as many as two dozen vendors at a time, requiring thousands of virtual machines to manage all the malware variations.
Panelist Mike Spanbauer, VP of research and strategy for third-party tester NSS Labs, said that NSS places a premium on due diligence, and that it has built a robust testing infrastructure designed not to give any single vendor or solution an advantage. Spanbauer called this undertaking "no small feat."
Spanbauer also said that NSS aspires to mimic a live environment during testing, and while he would never claim that the company is there, he believes NSS tests products as diligently as any source in the industry.
"Testing at this scale is not easy," he said. "The level of complexity of attacks, the delivery — there's a lot to it."
Randell said he's typically skeptical of any claims, but in the absence of truly robust testing against unknowns, he likes NSS's approach, which focuses on the analysis of actions. He said it drives and aligns his thinking about getting products operational in the field by giving him insight into the wide range of possible scenarios Critical Stack might face.
"In our business, every process becomes a snowflake," he said.
Skipper also said he supports NSS's approach because of the range of attack vectors it employs, which lets it work a bit of unknown into its testing procedures.
One thing all of the panelists agreed on was that vendors should never be behind the testing of their own products. Randell said there are a huge number of jobs to be done with AI and machine learning-based products that don't exist yet, and that he's skeptical when vendors hire labs to test their innovative new products. Spanbauer assured the audience that NSS never takes a penny from any vendor to get into its reports, and that its choices of what products to evaluate are based entirely on market demand.
Meanwhile, Skipper framed the essential truth about vendor-supported tests perfectly.
"If you see a sponsored test," he said, "a sponsor never loses."
Ultimately, the panel offered attendees a six-month plan that would enable organizations to ensure that innovative new AI and machine learning products meet their needs:
- In the first three months, they recommended defining a strategy for evaluating third-party product claims, creating a product evaluation role, and enlisting the resources of independent testers.
- Over the ensuing three months, they said organizations should drive a culture that values evaluating products to fill gaps rather than to get the latest and greatest tools, and let vendors know that they will need to adapt to accommodate third-party testing.