Back in 2006, a large company in Chicago contracted my company to conduct an advanced information security controls assessment. In addition to looking for technical vulnerabilities—unpatched servers, web app vulnerabilities, open ports that should be closed, and the like—we were also contracted to conduct a social engineering assessment. On the first day of our technical assessment, our team noticed that the security officer in the client's downtown Chicago building handed us simple, black-and-white, thermal-printed visitor badges with our names, the name of our client, and the date. When lunchtime arrived, we left the building and asked whether we needed to return the badges and get new ones when we came back in about an hour. Their response? "No, just make sure you're showing the badge when you come back in after lunch." My team and I looked at each other, knowing we had found our way in for the social engineering portion of our test.
After we completed the technical security assessment a week later, we started the social engineering phase. To our client's credit, nearly all of their personnel did the right things. When we picked up the phone and claimed to be a new employee who was missing VPN logon credentials, we were directed to the Help Desk, who correctly challenged us and didn't give us the credentials we requested. When we called the CEO's administrator, claiming to be from a speakers' booking agency, offering a $10,000 speaking fee, and asking for his home address and private phone number, we were correctly shut down.
We sent in one of our best social engineering resources, "Lisa," to exploit those visitor badges. Lisa was 26 years old, a professional actress, and could read people incredibly well. She didn't know a lot about technology...and she didn't need to. We used an old thermal printer to generate a new badge and sent her off to the building. Flashing her visitor's badge, she walked through the security gate with no questions asked. She proceeded to the main desk and warmly informed the receptionist that she had a meeting with the Help Desk Manager (whose name we found on LinkedIn). Lisa was told to proceed to the 26th floor. Removing her badge while in the elevator, she knocked on the secured door at the floor and was let in by a member of the IT team. "I hope you can help me: today's my first day in HR, and [the HR manager, whose name we also found on LinkedIn] asked me to come up here and have you guys create my network ID so that I can start working ASAP." Within 10 minutes, we had legitimate credentials and access to shared network drives with lots of tasty data. Our job was done.
This was eight years ago. As much as I would like to say that organizations aren't as susceptible to these techniques today, the sad fact is, these methods pale in comparison to what attackers are doing now. Perhaps the biggest difference between then and now is that the level of pretexting (intelligence gathering for the purpose of establishing trust with a potential target) has increased dramatically, thanks to social media. From personal sites such as Facebook and Instagram to business tools such as LinkedIn, a substantial portion of a person's life can be pieced together. The attacker can impersonate a personal friend, employer, employee, or even a family member. This information can be exploited in a lot of different ways, most commonly through targeted phishing attacks.
So what can individuals and organizations do to limit the potentially damaging effects of social engineering attacks? Fortunately, there are some effective solutions:
- First, remember that people are usually the weakest link in the information security chain. It's usually not the case that people want to introduce malware into the network, or give up proprietary business information. But very often, they believe that "IT and the security guys" are handling threats at the perimeter of the network, and that once information is inside, it's been vetted and verified. That, of course, simply isn't the case. Anything that comes from the Internet—email, website content, links, or anything else—should be viewed with a degree of suspicion until verified.
- Second, people often trust other people more than they trust technology. Social engineers who attempt to exploit organizations and their employees are likely to come after tangential contacts: the CEO's administrative assistant, a non-executive member of the finance team, or interns who are less likely to question a source that sounds authoritative. As Lisa proved, it's not just the less technical employees who are at risk—a member of the IT staff can be fooled, too. Reinforce the idea that, unless you personally know someone and can verify their face or voice when communicating with them, they should not be fully trusted.
- Finally, don't forget that it's the job of law enforcement to go after the bad guys. By involving law enforcement when a social engineering attack has been thwarted—rather than simply letting it go and saying, "We did a good job!"—organizations can help eliminate the threat to the attackers' other potential victims.
Social engineering has come a long way in recent years, and the ways in which we use the Internet—including permissive sharing of information—have created a massive arsenal of tools for would-be attackers. But with more effective training of employees and business partners, and a healthy dose of suspicion for activities and individuals that appear abnormal, organizations can limit the likelihood that these shadowy and artistic methods of information exploitation will succeed.