Facial recognition tech isn’t just a privacy concern; it’s also a cybersecurity problem.


Posted by Aaron Barr

Facial recognition technology (FRT) has rapidly gained ground across sectors. In August, a report by the Government Accountability Office found that 18 of the 24 surveyed federal agencies are currently using FRT. Stated uses include digital access or cybersecurity (such as unlocking agency-issued phones), controlling access to a building, and generating leads in criminal investigations. Ten of those agencies are planning to expand the technology’s use.

While this technology brings a lot of potential benefits, it also brings real privacy and cybersecurity concerns. For example, the United Nations High Commissioner for Human Rights, Michelle Bachelet Jeria, expressed concerns over the technology when she called for a freeze on certain AI-based technologies including facial recognition. She used the example of China’s social credit score and asked governments to “halt the scanning of people’s faces in real time until they can show the technology is accurate and meets privacy and data protection standards.”

The broad issue is that a growing amount of personal information is now available in the public domain that was never purposely released by the individuals it describes. That opens up a whole new level of potential for attackers: it gives them more ammunition to compile enough information to compromise an individual and, by extension, that person's personal and professional networks.

Facial recognition technology and the risks posed by images

Good facial recognition technology is now commercially available, meaning it’s essentially accessible to anyone. Everyone has a sensor now, given the pervasiveness of mobile devices. People are putting a lot of data out into the public domain already. But now, there are images being captured, sometimes unbeknownst to us—images that aren’t just stored locally on our devices but are potentially being broadcast into the world.

We now face a situation where images that include our faces are captured and broadcast into the open for people to scour, and they can do it at scale. Strangers can identify us from images we don't even know exist and do whatever they want with that information, without our consent. As mentioned above, China is purportedly using FRT to judge citizens' social behaviors as part of an overall social credit score.
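To make the "at scale" point concrete: modern face recognition pipelines typically reduce each face image to a numeric embedding vector and declare a match when the distance between two vectors falls below a threshold (the popular open-source face_recognition library, for instance, uses a default tolerance of 0.6). The sketch below illustrates only that final comparison step, with short hypothetical vectors standing in for the much longer embeddings a real model would produce; the names and values are illustrative, not from any actual system.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical 5-dimensional embeddings standing in for the 128-dimensional
# vectors real face-recognition models produce; all values are made up.
known_face = [0.12, -0.40, 0.33, 0.08, -0.21]
candidate_a = [0.10, -0.38, 0.35, 0.06, -0.19]  # close to known_face
candidate_b = [0.90, 0.55, -0.70, 0.44, 0.62]   # far from known_face

def is_match(a, b, tolerance=0.6):
    """Declare a match when the embedding distance is below the tolerance."""
    return dist(a, b) < tolerance

print(is_match(known_face, candidate_a))  # True: nearby embedding
print(is_match(known_face, candidate_b))  # False: distant embedding
```

Because this comparison is just arithmetic, matching one probe face against millions of scraped images is cheap once the embeddings exist, which is exactly what makes publicly posted photos so exploitable.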

Understanding the risks an image can pose

On the corporate side, in one business email compromise (BEC) incident, an insurance company lost a great deal of money after a bad actor gained access to an executive-level account. A photo of that executive on social media that showed them on a ski trip gave the bad actor an indication of when to be active to avoid detection. The attacker was then able to alter a routing number and steal money from the company.

There are also humanitarian risks. A heartbreaking example is the situation in Afghanistan, where the Taliban have reportedly obtained facial recognition devices and databases that could allow them to identify Afghans who cooperated with coalition forces.

Steps to gain some control

Facial recognition software does bring a lot of opportunities for beneficial uses, such as digital and physical security, and it’s understandable why businesses and other organizations would want to use it. However, this technology needs to be used with safety and security in mind.

For individuals, the problem is a bit thorny. While you obviously can’t control how organizations are using this technology, there are things you can do to try to protect yourself or mitigate risk. This includes actions like:

  • Not using business email addresses to sign up for personal social accounts
  • Making sure your email addresses aren’t easily discoverable
  • Avoiding the use of common photographs across your different social media channels
  • Making your accounts private

All of these actions are within your realm of control, even if the rest of the technology isn’t.

The risk is real

Facial recognition technology, like any other, is a mixed bag. The "Big Brother is watching" privacy concerns are legitimate, at least in some parts of the world, yet FRT also helps people keep their smartphones safe and gain lawful access to secure facilities. Corporations, however, face greater risk from the wide availability of this software, as malicious actors have already demonstrated with BEC and other social engineering attacks. Matching faces in publicly available images can give cybercriminals the information they need to target their prey. Security and IT professionals can train those under their corporate care to consistently use the safeguards noted above as a first line of defense. We no longer have complete control over where our likeness appears, but we can exercise caution in every place where we do have control.
Contributors
Aaron Barr

Chief Technology Officer, PiiQ Media

Topic: Privacy

Tags: biometrics, big data analytics, privacy, security awareness

Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.

