As Usual, New Technology Means New Privacy Considerations


Posted by Sam Pfeifle

The financial services industry is certainly no stranger to privacy regulations. In fact, many point to the Fair Credit Reporting Act as the world's first privacy regulation with real teeth.

So, you’d think people would have sorted out how to comply by now, right?

Well, sometimes it's not so easy. New technological capabilities are changing how we think about "credit scoring" and how companies evaluate risk, especially in the financial world.

So much so that we now have "FinTech," a buzzword that describes not only a new suite of products and services for the consumer space, but also a new suite of tech products aimed at financial institutions themselves, helping them both evaluate their customers and comply with any number of regulations.

We’ve already seen one insurance company try — and fail — to use Facebook as a way to evaluate customers for potential discounts. And many consumer advocates have expressed concern about the increasingly complex algorithms that are being used to approve or deny customers for any number of offerings, without providing insight into how those algorithms work. 

In some cases, the companies using the algorithms don't themselves completely understand how decisions are being made. It can be a black box: information goes in, and a decision comes out.

Why is this a privacy issue? Because, at its core, privacy is about giving people control over how their information is collected and used. That's why the FCRA gives people the right to access and correct their information, and mandates that information about customers be demonstrably accurate if you're using it to make decisions about them.

What if there's a coding error in your algorithm? It's not unprecedented: even government agencies have been found to have made incorrect decisions about eligibility for public services because of algorithmic glitches. It's important that, as these new technologies come on board, there are auditing processes to make sure people aren't being disenfranchised by mistake. Or, maybe worse for the company in question, to make sure it isn't taking on risk by mistake.
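What might such an audit look like? Here is a minimal sketch in Python (purely illustrative; the scoring function, field names, and threshold are all hypothetical) of a spot check that compares a model's automated decisions against a sample of human-reviewed outcomes and flags the model when disagreement runs too high.

```python
def automated_decision(applicant: dict) -> bool:
    """Hypothetical stand-in for the black-box scoring model."""
    return applicant["score"] >= 620


def audit_decisions(applicants, reviewed_outcomes, max_disagreement=0.02):
    """Spot-check the model against human-reviewed outcomes.

    applicants: list of dicts with at least "id" and "score" keys.
    reviewed_outcomes: maps applicant IDs to the decision a human
    reviewer determined was correct.
    Returns (passed, disagreement_rate, flagged_ids).
    """
    sample = [a for a in applicants if a["id"] in reviewed_outcomes]
    flagged = [
        a["id"]
        for a in sample
        if automated_decision(a) != reviewed_outcomes[a["id"]]
    ]
    rate = len(flagged) / len(sample) if sample else 0.0
    return rate <= max_disagreement, rate, flagged


# Example: a reviewer says both applicants should be approved,
# but the model denies the second one.
applicants = [{"id": 1, "score": 700}, {"id": 2, "score": 600}]
reviewed = {1: True, 2: True}
passed, rate, flagged = audit_decisions(applicants, reviewed)
print(passed, rate, flagged)  # False 0.5 [2]
```

The point isn't this particular check; it's that someone, on some schedule, compares what the machine decided to what a human would have decided, and investigates when the two diverge.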

Now, take these automated decision-making powers even further. Already we're seeing the development of "financial advisors" that are actually just computer-based artificial intelligence. What happens if they give bad advice? What happens if they ask for information they don't need, if an input gets slotted into the wrong field, or if data gets sent to the wrong place?

Of course, humans make plenty of errors, too. We know that well enough. But with humans there is accountability: there is a person who did the wrong thing, and a system for prosecuting that person should the act have been done with malice. What's our system for addressing a grievous error made by a machine?

It's not just me asking these questions; regulators are asking them, too. For example, the Federal Trade Commission's next FinTech Forum, to be held March 9 at the University of California, Berkeley, will examine both artificial intelligence and the use of blockchain technology.

With a mission to protect consumers comes a mission to protect their privacy, and the FTC has been among the most active and most technologically progressive of the U.S. regulatory agencies. These kinds of forums help commissioners and staff understand both what the technology is capable of and where the likely perils for consumers lie.

In many cases, the benefits far outweigh the perils. Customers are excited about getting services faster and cheaper, about getting loans through their phones, or about having their bank automatically know that they couldn't possibly have used their debit cards in two faraway places at once.

But providing those services means collecting location data, accessing stored personal information, and engaging in many other potentially sensitive actions.
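That debit-card feature is a good illustration. The check is often called an impossible-travel or geo-velocity test, and even a minimal sketch of one (below, with every name and threshold hypothetical) makes clear how much location data it depends on: it flags two card transactions when the travel speed implied between them exceeds anything a commercial flight could cover.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_SPEED_KMH = 900.0  # roughly airliner cruising speed


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))


def impossible_travel(tx1, tx2):
    """Flag two transactions whose implied travel speed is implausible.

    Each transaction is (timestamp_hours, latitude, longitude).
    """
    t1, lat1, lon1 = tx1
    t2, lat2, lon2 = tx2
    hours = abs(t2 - t1)
    if hours == 0:
        # Same instant: only suspicious if the locations differ.
        return (lat1, lon1) != (lat2, lon2)
    return haversine_km(lat1, lon1, lat2, lon2) / hours > MAX_PLAUSIBLE_SPEED_KMH


# New York at noon, Los Angeles an hour later: ~3,900 km in one hour.
print(impossible_travel((12.0, 40.71, -74.01), (13.0, 34.05, -118.24)))  # True
```

Useful, clearly. But every call to a function like this means the bank knows, with timestamps, where its customer has been.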

Privacy compliance is getting harder, not easier, as data becomes ever more readily available. Having a plan of attack for compliance, along with a trained privacy professional, is vital for managing privacy risk as technology continues to evolve.

Contributors
Sam Pfeifle

Content Director, International Association of Privacy Professionals



