I’d like to end this series with some thoughts on how to transform information sharing into scalable solutions with the potential for broad impact, given the few skilled resources that exist.
In the current wave of information-sharing efforts, discussions are typically led by those with the resources to manage, participate in, or sponsor sharing initiatives. This narrows the discussion to governments, country-level CSIRTs, large organizations, and vendors. In previous waves, the discussion was led by operator-driven communities and rarely surfaced outside their circles. Those operator-driven models typically focus on a particular use case with the goal of eliminating offending traffic, as opposed to merely sharing information for defensive purposes (block-lists, etc.).
I have been advocating for operator-driven models to ensure that we help not only large organizations with skilled information security and threat intelligence resources, but also the small and medium-sized organizations that make up our supply chains. However, I continue to see efforts aimed at helping only the large organizations with the resources to participate in threat intelligence sharing, where sharing circles exchange broad sets of information with their members.
I sat through another Financial Services Information Sharing and Analysis Center (FS-ISAC) presentation on December 15th and left with more than a few questions about their current direction. The FS-ISAC has been working closely with DHS and MITRE, using the US-government-funded specifications STIX, CybOX, and TAXII to exchange threat intelligence with their members. The work they are doing is very positive for the large organizations able to participate. Currently, they are deploying a sharing model that enables the exchange of very rich cyber security information across numerous use cases, but it does not involve vendors or the existing operator-driven communities that address threats at the source. Instead, a solution is being designed in which each organization’s threat analyst team ingests the threat intelligence and then deploys mitigating controls out to firewalls, intrusion prevention systems, routers, monitoring applications, and so on.
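For readers unfamiliar with these specifications, the sketch below assembles a heavily simplified, STIX-1.x-flavored indicator (a single IP watchlist entry) using only Python’s standard library. The element names, the id value, and the structure are abbreviated illustrations of the style of document involved, not a schema-conformant STIX package; real exchanges use the full STIX/CybOX schemas and typically travel over a TAXII transport.

```python
import xml.etree.ElementTree as ET

# Simplified, STIX-1.x-flavored indicator -- illustrative only, not
# schema-conformant. Element names and the id are placeholders meant
# to show the shape of the data, not the real specification.
def build_indicator(ip_address: str, title: str) -> str:
    pkg = ET.Element("STIX_Package")  # simplified root element
    indicator = ET.SubElement(pkg, "Indicator", id="example:indicator-1")
    ET.SubElement(indicator, "Title").text = title
    observable = ET.SubElement(indicator, "Observable")
    obj = ET.SubElement(observable, "Object")
    # CybOX-style address object, reduced here to a bare value
    ET.SubElement(obj, "Address_Value").text = ip_address
    return ET.tostring(pkg, encoding="unicode")

xml_doc = build_indicator("203.0.113.7", "Known C2 node (example)")
print(xml_doc)
```

Even in this reduced form, the point is visible: each indicator is structured data that some analyst team must receive, evaluate, and translate into device-level controls, which is exactly the staffing burden discussed above.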
While the talk referenced Rick Johnson’s Forrester paper from October 2013 when discussing this model, it seemed to neglect that paper’s assertion that this type of solution reaches only the top 1% of organizations. That is a critical point to consider when building a scalable model that can have a broad impact. While the FS-ISAC work is very positive and necessary for the large financial institutions with the resources to participate, the model will need adjustments if it is intended to help small and medium-sized organizations. In recent discussions, several large financial institutions asserted that they did not have the resources to run an automated tool that ingests threat intelligence on top of the other security appliances on their networks. If some large organizations cannot even participate, how do we flip the threat-sharing models to improve efficiency and effectiveness enough to cover small and medium-sized organizations as well?
Critical thinking and privacy concerns post-Snowden have helped make some progress, such as the end of discussions on cross-ISAC sharing organized by DHS. Had this effort moved forward, all of the data from each ISAC, contributed by participating organizations, would have sat in one place managed by the US government. While this could have been useful from a research standpoint for assessing threats broadly across industries, it does not address the threats directly or efficiently. On a positive note, more operational models for sharing are emerging, such as the ACDC botnet effort and the Microsoft initiative aimed at eliminating malware announced last week.
I’ve talked a lot in previous blogs about exchanging directed information to have a broad impact. Let’s be clear here as well: in those discussions, I am not referring to models that automate defenses, pushing received intelligence into a threat-sharing hub and then out to devices on every network. Most organizations are not willing to automate in that way, as it could cause adverse effects on the network and on the organization’s business. Instead, let’s focus on with whom you are sharing what data. We should be asking critical questions like:
- How can we have a greater impact and better scale threat intelligence sharing models?
- Why are we replicating use-case exchanges, like anti-phishing, within multiple sharing groups, leading to duplicated data and stale block-lists?
- Why aren’t we leveraging the APWG and similar operator-driven organizations, which address threats head on with take-down services and provide protections for organizations of all sizes?
- Why aren’t vendors involved in some of the newer threat sharing models?
- Shouldn’t we be limiting what data is shared with whom to solve the problem and minimize legal and regulatory concerns?
- Why are governments leading the discussions in many cases, such as in the development of specifications (STIX, CybOX, and TAXII) or sharing models (US-based ISACs)?
Yes, I do work for a vendor. But with my former CISO hat on, I can’t see why we would not want to shift sharing models and involve vendors, enabling a broader impact with the exchanged intelligence. The Industrial Control Systems Information Sharing and Analysis Center (ICS-ISAC) plans to do just that, because they are fully aware of the resource limitations many of their participating organizations face.
Microsoft has also announced plans to address malware with an operator-driven model, with the goal of directly eliminating threats and malware families. While I applaud this approach, I hope the effort will be run by a vendor-neutral organization, which would help secure the needed participation of vendors, service providers, and other members, much like other use-case-driven operator models such as the APWG (anti-phishing and eCrime) and ACDC (botnets).
In the not-too-distant future, I think models like the one being driven in the FS-ISAC will run into scalability issues and will box out not only the credit unions and smaller financial services organizations, but also some medium and large organizations. Luckily, some key people are trying to figure out how to change those models from within. Until that happens, we will keep seeing public talks and papers from MITRE, DHS, and a few at the FS-ISAC promoting this model as groundbreaking and one to follow. Why not instead follow the models set forth by successful operational groups: large-scale mail operators using the IETF’s ARF, the eCrime and anti-phishing community using the IETF’s IODEF, and the other use-case-driven sharing initiatives for botnets using a mix of standards and specifications that meet their specific needs?
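As a point of comparison, the ARF format those mail operators rely on (RFC 5965) is lightweight enough to assemble with a standard library alone, which is part of why it scales to operators of any size. The sketch below builds a minimal three-part abuse report; the addresses, the reporter name, and the embedded original message are placeholders, and production reports carry additional optional fields.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.nonmultipart import MIMENonMultipart
from email.mime.text import MIMEText

# Minimal ARF (RFC 5965) abuse report: a multipart/report message with
# three parts -- human-readable text, the machine-readable feedback
# report, and the original message being reported.
def build_arf_report(original_message: str) -> MIMEMultipart:
    report = MIMEMultipart("report", **{"report-type": "feedback-report"})
    report["From"] = "abuse-reporter@example.org"  # placeholder address
    report["To"] = "abuse@example.net"             # placeholder address
    report["Subject"] = "Abuse report"

    # Part 1: human-readable description
    report.attach(MIMEText("This message was reported as abusive.", "plain"))

    # Part 2: machine-readable feedback report fields
    fields = MIMENonMultipart("message", "feedback-report")
    fields.set_payload(
        "Feedback-Type: abuse\r\n"
        "User-Agent: ExampleReporter/1.0\r\n"  # placeholder agent name
        "Version: 1\r\n"
    )
    report.attach(fields)

    # Part 3: the original message, headers included
    original = MIMENonMultipart("message", "rfc822")
    original.set_payload(original_message)
    report.attach(original)
    return report

arf = build_arf_report("From: sender@example.com\r\nSubject: spam\r\n\r\nbody\r\n")
print(arf.as_string())
```

The contrast with the rich-intelligence model is the point: an ARF report targets one narrow use case (reporting abusive mail back to the responsible operator) and needs no analyst team to act on it.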
In the wake of the numerous Snowden releases, I think we, as a community, need to stop, take a step back, and apply critical thinking to the development of information-sharing models, processes, and methods. We need more people involved in the discussion to provide viewpoints beyond those of organizations with extensive resources. We need privacy and security considerations to drive better threat modeling, not just through the use of secure protocols and protocol options, but also by sharing only what is necessary. We can still ‘get this right’ and build better defensive models that are sustainable and achieve what I think is the primary goal:
To prevent attacks, or at a minimum to raise the costs associated with conducting them.
It is promising to see that successful operator-driven models exist and continue to emerge. For more information on the topic, see the RSA Perspectives paper.