The Internet is Small: Own Your Attack Surface Before Somebody Else


Posted in Presentations

The Internet is small enough that attackers can find vulnerabilities in less than an hour. Dr. Tim Junio addresses how security teams must become resilient as digital transformation has expanded the external attack surface. Using attack surface data and lessons from leading enterprises, this talk will help security teams benchmark attack surface metrics, understand the threat landscape, and modernize security measures.


Video Transcript

- Hi, I'm Tim Junio. I'm the head of engineering for Palo Alto Networks Cortex. Before joining Palo Alto Networks, I was the CEO and co-founder of Expanse, the leading attack surface management company. Today, we're pleased to share with you new research from Palo Alto Networks on attack surface management: specifically, how Fortune 500 companies are at risk from attackers who can discover exposed assets on the public internet faster than ever before. When you think of the global internet, you probably think large scale. The internet is huge. If you look at any of the common metrics, like how many websites there are or how many internet protocol addresses there are, the numbers are gargantuan. And when you look at the future of the internet with IPv6, the numbers get even bigger. However, what we observed, going back almost 10 years now, is the emergence of methodologies in network science for exploring the internet much more rapidly than was ever possible before. Indexing devices and assets across the global internet used to take weeks or months; using new methods openly published in 2013, it could be done in about 45 minutes, well under an hour, for a given protocol. So, for example, you could index every webpage on the public internet in about 45 minutes. And in the eight years since, we have seen the discovery of additional methods for parallelizing internet scanning, such that you can now scan for a given protocol in about five minutes.
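The per-protocol scanning described here is done with highly parallelized, stateless scanners (the methods openly published in 2013 presumably refer to tools in the ZMap family). As a hedged illustration only, the sketch below shows the basic idea, probing one port across many hosts concurrently, using Python's asyncio; it is orders of magnitude slower than the research tooling the talk refers to, and the hosts and timeouts are placeholder values.

```python
import asyncio


async def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        reader, writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout)
        writer.close()
        await writer.wait_closed()
        return True
    except (OSError, asyncio.TimeoutError):
        return False


async def scan(hosts: list[str], port: int) -> list[str]:
    """Probe a single port across many hosts concurrently (one protocol,
    many targets: the shape of internet-wide per-protocol scanning)."""
    results = await asyncio.gather(*(check_port(h, port) for h in hosts))
    return [h for h, is_open in zip(hosts, results) if is_open]


if __name__ == "__main__":
    # Placeholder target list; a real scanner would iterate address ranges.
    print(asyncio.run(scan(["127.0.0.1"], 80)))
```

Real internet-scale scanners gain their speed by sending stateless probes and matching responses asynchronously, rather than maintaining a full TCP connection per target as this sketch does.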
So what this means, from an attacker perspective, is that if you want to look for a particular type of exploit across the global internet, or you just want to watch for opportunistic threats, or you want to keep an eye on certain types of targets, like banks, pharmaceutical companies, US government agencies, or whatever your subset of targets is, you can use these methods to persistently observe either the entire internet for a given type of exposure, or a target for any exposure that shows up across a wide range of known exploits. What we see is attackers using this network science to scan networks extremely quickly, with something like an upper bound today of 45 minutes for scanning a given protocol, and then applying automated methods within those data to discover things specific to a target, and even to start testing whether those things are immediately exploitable or connected to anything sensitive that an attacker might want to access after gaining illicit access to that first exposed device on the internet. One of the notable events of 2021 was the publication of zero day exploits associated with Microsoft Exchange servers. This was an interesting moment in the history of the internet, because within minutes of publication of the exploits, we saw an increase in scanning for these assets on the public internet. What used to take maybe a few weeks after the publication of how to exploit a piece of software now took just minutes: attackers were looking across the global internet for something they could use to gain illicit access to a network, whether for criminal attacks like ransomware or for nation-state attacks going after sensitive information. On the defenders' side, we were seeing that it took days to weeks for large organizations to find where all of their exposed servers were, and to identify which ones needed to be patched.
So when we think about the range of sophistication for cyberattacks, we tend to think of the most sophisticated and effective as zero day attacks: by definition, you have an exploit for a software flaw that nobody else knows about, and you can get into a device that has no protections, because nobody knew the flaw existed. We tend to think of that as something that governments do. At the opposite extreme, we think of baseline cyber hygiene as making sure you don't have, say, exposed database servers with no passwords, or other low-level things on the public internet. The new network science that makes scanning the entire internet possible, and that has now been around for almost a decade, means that cyber hygiene is actually similar in importance to defending against zero day attacks by sophisticated threat actors. Even very, very sophisticated threat actors, like governments, are likely to be looking for that low-hanging fruit as well when trying to break into networks. When we look at the balance between attackers and defenders, what is very interesting about this problem is that attackers can, within seconds, start scanning the entire internet on a given protocol for exploitable systems. On the defender side, we see that it takes days, weeks, and sometimes much longer to discover all assets associated with an organization. The reason is that digital transformation has resulted in an explosion of internet-connected devices, across different commercial cloud environments and regions around the world, for subsidiaries, and after merger and acquisition events. That means that if you're a large organization, a company or a government agency, you are much slower than attackers today at finding your true attack surface and the risk associated with it, especially when something bad happens for internet security, like the publication of a new software exploit.
When we look at how organizations try to defend themselves today, there is a two-by-two matrix that helps explain the approaches companies and government agencies are taking. One axis is known versus unknown assets. What are you checking against? Things you are sure of, like your IP master list: what are your IP ranges? And then unknowns: did a developer set something up in a cloud environment that was not provisioned through central IT, so that you need to find it and catch up to what your employee did? The other axis is speed: at what cadence are you using different methods to protect your attack surface? If we look at the primary methods organizations use, something like penetration testing has been around for a long time. That's basically checking deeply whether the assets you know about can be exploited. It's typically done on a quarterly cadence; some organizations are slower and do it only once a year, some are faster and do it more regularly, but around once a quarter is when pen testing happens. Some organizations invest further in red teaming, which is a more expansive approach, looking at how you can break into a network across many different methods: not just looking at your known assets, but potentially discovering new things that would let you get into an organization. And the last few years have made vulnerability management a standard, which means scanning the parts of your network you know about to see whether the software is up to date and whether it's exploitable, and creating a work process around fixing and patching that software. However, all of these methods are slower than what attackers do.
So attackers are constantly monitoring the global internet and specific targets of interest using the network science methods mentioned earlier, such that with sub-hourly scanning, or even scanning every few minutes, an attacker can find an asset that pops up that was not intended to be routable over the public internet. At Palo Alto Networks, we only observe the top 1% or 2% of security organizations operating at the same pace as attackers. Almost the entire cybersecurity industry is operating more slowly than attackers. We think of this, hopefully without being too cheeky, as security needing an out-of-body experience. The average security organization is looking through a fairly narrow aperture, which is what you believe your network is, and then applying security tools against it. What attackers are doing is taking a holistic view. They're looking across the global internet, or across targets and entire sectors, to see everything, and then waiting for something to pop up that they can go after to gain illicit access. Security organizations need to do the same as attackers: monitor the global internet for their assets and quickly update their inventories, so that if anyone in the organization puts something on the internet that attackers could discover, the organization's IT and security teams know about it as well. Then, if there is a security risk now or in the future, they can very quickly move to gain control over that asset or patch its software, and otherwise avoid the risk of attackers going after it. One of the benefits of digital transformation is that you can get data from networked assets for business productivity and other gains, like being able to update software quickly, maintain patch status, and optimize business processes.
However, embedding internet access in almost everything we would consider IT, or information technology, today means that your attack surface is getting larger. Over the last several years in particular, we have observed two trends. During COVID, large numbers of people working from home further increased the attack surface, because people are using corporate equipment from their homes or other places that are not the office, on the public internet rather than a corporate network. And SaaS applications and the explosion of commercial cloud mean we are seeing less in managed data center and co-location facility services, and much more in public cloud services from Amazon, Microsoft Azure, and Google. A metric we see the most advanced organizations use is Mean Time to Inventory. The top couple of percent that have high degrees of automation around this problem think about security with an inventory-first mentality: you can only protect what you know about. If you don't know about assets, then all of your investments are for nought. You could be spending hundreds of millions of dollars per year on cybersecurity, but if an asset is not under management, it's not getting the benefits of that program. What we recommend to our customers, and what we see as the industry-leading approach, is to measure your mean time to inventory: how long does it take you to discover a new asset stood up by somebody in your organization outside a central provisioning service? If, say, an employee sets something up in a commercial cloud environment, how long does it take you to find that device and add it to your inventory, along with the business context of which part of your organization stood it up and who the person is, and are you able to maintain it under the management of the rest of your IT and security infrastructure?
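Mean Time to Inventory as described here is simply the average gap between when an asset first appears on the internet and when the organization brings it under management. The talk does not give a formula, so this is a minimal sketch under that assumption, with invented example timestamps:

```python
from datetime import datetime, timedelta


def mean_time_to_inventory(events):
    """events: list of (first_seen_online, added_to_inventory) datetime pairs.
    Returns the average discovery-to-inventory gap as a timedelta."""
    gaps = [added - seen for seen, added in events]
    return sum(gaps, timedelta()) / len(gaps)


# Hypothetical asset discoveries: one inventoried in 4 hours, one in 24 hours.
events = [
    (datetime(2021, 6, 1, 9, 0), datetime(2021, 6, 1, 13, 0)),
    (datetime(2021, 6, 2, 8, 0), datetime(2021, 6, 3, 8, 0)),
]
print(mean_time_to_inventory(events))  # 14:00:00
```

In practice the "first seen" timestamps would come from continuous external scanning of the organization's attack surface, and "added to inventory" from the asset management system.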
Today, we're pleased to publish our first Attack Surface Management Report from Palo Alto Networks. We looked at a subset of Fortune 500 companies to get a sense of how quickly their environments are changing on the public internet, and what the rates of security exposures are across those assets. We found that for the average Fortune 500 company, a serious exposure occurs every 12 hours. By serious exposure, we mean either an asset that should never be routable over the public internet, or one that has a known exploit associated with it. The most common category was Remote Desktop Protocol. RDP is a remote access protocol that lets you interact with a Windows workstation as though you were sitting in front of it, but from anywhere in the world over the public internet. That could be a Windows virtual machine in a data center, or an employee's laptop. If it's not configured correctly, anyone in the world could gain access as though they were sitting at that machine. The reason this happens is that we tend to see connections that are supposed to occur over corporate VPNs. If misconfigured, when the VPN drops, which happens regularly just because of internet connectivity, the device automatically connects to the next available internet connection, and a significant proportion of the time a Windows workstation ends up on the public internet without a proper firewall configuration, such that anyone in the world can start testing exploits, or usernames and passwords, against that Windows machine. We see this at a high rate: it is actually about one-third of everything we observe in Fortune 500 serious exposures. And it happens both on premises and in the cloud.
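The RDP exposure described above is, at its simplest, a host answering on the default RDP port from the public internet. As a rough illustration only (the report's methodology is not described here, and a listening port is just a signal: a real check would complete the RDP/TLS handshake before flagging an exposure), a minimal probe might look like:

```python
import socket

RDP_PORT = 3389  # default Remote Desktop Protocol port


def rdp_exposed(host: str, port: int = RDP_PORT, timeout: float = 2.0) -> bool:
    """Return True if the host accepts TCP connections on the given port.
    Only a first-pass signal: an open 3389 suggests, but does not prove,
    an exposed RDP service."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Placeholder target; real scanning would cover the organization's ranges.
    print(rdp_exposed("127.0.0.1"))
```

This is the shape of the defender-side check: the same probe an attacker automates across the whole internet, pointed at your own address space on a continuous cadence.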
When we look across all of the data in our white paper, greater than three-quarters of the serious exposures we see for Fortune 500 organizations are in cloud environments. This implies that the rate at which organizations are standing up and standing down environments in public cloud means those environments are not well monitored from an attack surface management perspective. And roughly one in five exposures is still on premises, the part of the organization that IT and security are supposed to monitor best and know the most about. When we look across the categories of what we see, it's not just RDP, and it's not just cloud workloads. We're also seeing a wide range of exposures for the Fortune 500 that occur at lower rates: things like simple database server exposures, and misconfigurations associated with IoT, operational technology, and other types of assets that now speak internet protocol. We see them frequently, maybe not every 12 hours, but on a days-to-weeks cadence in the Fortune 500, meaning that when attackers are monitoring for anything that shows up across targets of interest, they are very likely to see those assets. And if those assets happen to be connected to any sensitive systems, lack proper defense in depth, or hold sensitive corporate information, they are at high risk of discovery by attackers before they can be found and remediated by defenders. The last thing I'd like to communicate is that this is not an unsolvable problem. Even though attackers have a strong advantage today in finding weaknesses in attack surfaces, what we see in the top couple of percent of organizations is a high degree of automation in resolving it. The rest of the world can catch up, and also deploy processes and technology to help solve the attack surface management problem. Thank you for your time, and we hope you're enjoying the RSA Conference.
We are posting our attack surface management white paper to asmtop10.com, with our research on the Fortune 500 and best practices on how you can get your attack surface under control. Thank you, again.


Participants
Dr. Tim Junio

Speaker, Participant

Senior Vice President, Cortex, Palo Alto Networks

Analytics Intelligence & Response

data security, hackers & threats, biometrics, zero trust

