My consulting firm is increasingly receiving requests from customers to help them address what seems to be the last frontier of security analysis: source code. As an analyst, I have a lot of tools at my disposal for identifying problems in both compiled code and p-code. Security analysis, after all, started out as a black-box discipline: we know the specs of the box, so we throw stuff at it and see what comes out the other side. While this is perhaps an over-simplification, it does illustrate what security analysts do all day. Even more advanced assessment techniques such as fuzzing still follow the same pattern of observing the behavior of compiled units and complete systems of code.
Incorporating actual source code into security analysis is something that is rarely done, and rarely done well. While there are plenty of software QA frameworks, vendors, and tools out there, few are effective at security testing. Within the software QA world, "security" is a four-letter word because it is viewed as an adversary of software development. To be honest, I can't say I blame the developers and QA folks. After all, most of the time they're under pressure to deliver new features and capabilities, or to make products more efficient. Development teams rarely, if ever, give security anything more than lip service, and security is often viewed as a barrier, rather than a partner, in the SDLC.
This post should not be taken as an affront to developers. Although I'm not a software engineer, I absolutely understand and respect the difficulties of developing integrated systems in today's environment. So much of software development today consists of piecing together existing libraries and frameworks, and most "new" code in a complete system is simply the glue that binds those components together: linking them and making sure the parameters and data formats are correct. All of this is evidence that we need more source code analysis, not less. When entire applications are cobbled together from libraries by a range of different authors and organizations—each with their own coding styles and assumptions—ensuring security consistency across the combined jigsaw code base is critical.
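The seams between components are where those mismatched assumptions become vulnerabilities, because each library trusts its caller to pass sane input. A minimal sketch of defensive glue code, in Python, might validate data at the boundary instead of trusting a downstream library to reject it (the function name and the https-only policy here are illustrative assumptions, not from any particular code base):

```python
from urllib.parse import urlparse

# Hypothetical glue between a web framework and a storage library.
# Validate at the seam rather than assuming the next component will.
ALLOWED_SCHEMES = {"https"}

def fetch_url_for_storage(raw_url: str) -> str:
    """Normalize and validate a caller-supplied URL before handing it on."""
    parsed = urlparse(raw_url.strip())
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    if not parsed.netloc:
        raise ValueError("URL has no host")
    return parsed.geturl()

# Accepts a well-formed https URL, rejects a javascript: payload.
print(fetch_url_for_storage("  https://example.com/report.pdf  "))
```

The point is not this particular check but the habit: each author's assumptions stop at their library's edge, so the glue code is the only place where the combined system's rules can be enforced.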
The first thing we can do, and perhaps the greatest, is to stop looking at developers as "the problem," and instead focus on the tools they work with in order to better understand how they do their jobs. We have to break the cycle of antagonism so that analysts are viewed not as a threat but as real partners. Without that, the cycle will just continue.
The second thing we can do is communicate to the software QA community—the developers of open source packages and the vendors of commercial QA frameworks alike—that security needs to come to the forefront of their test suites. Without automation, source code analysis in support of security will remain a pipe dream.
There are, of course, many coding-related threats out there, and they're far too numerous to identify in a single post. At RSA Conference Asia Pacific and Japan 2014, Anthony Lim of ISC2 presented "Application Security—The Invisible Onslaught Gets Worse," which addressed a broad spectrum of modern threats not only from the adoption of the cloud, but also from mobile, third-party currency (like Bitcoin), and other technologies.
While there is no magic wand to improve the security of technology systems, there is one thing that every security person can agree on: better quality code means fewer risks, improved performance, and reduced exposure to security threats. It's time for developers and security analysts to bury the hatchet, come together, and insist on secure code.