The Road to Smart Cities is Paved with Good AI Intentions


Posted in Podcasts

In a world where the terms “AI” and “machine learning” are used liberally to describe new products and technologies, creating an assessment framework for buyers (and sellers!) to evaluate these products is essential. In this session, we’ll follow the Cost and Vulnerability dimension of MITRE’s AI Relevance Competence Cost Score (ARCCS) Framework and consider the security and privacy implications of AI for smart cities and the humans that travel them.

Podcast Transcript

Introduction:
You're listening to the RSA Conference podcast, where the world talks security.


Kacy Zurkus:
Hello, listeners, and thank you for tuning in. We have a great podcast lined up for you today discussing an interesting angle of artificial intelligence, specifically the cybersecurity and privacy impact of AI-enabled technologies, with my guest Anne Townsend, department manager and cybersecurity engineer at the MITRE Corporation. Here at RSAC, we host podcasts twice a month, and I encourage you to subscribe, rate, and review us on your preferred podcast app so that you can be notified when new tracks are posted. And now it is my pleasure to ask Anne to take a moment to introduce herself so that we can dive into today's topic. Anne?


Anne Townsend:
Hi, thank you so much for this opportunity. As you said, I'm Anne Townsend. I am a department manager and cybersecurity engineer at the National Cybersecurity FFRDC, which is a part of the MITRE Corporation. I'm excited to talk to you today about some of the available technologies and frameworks that are out there to really help people understand AI-enabled technologies.


Kacy Zurkus:
And we are super excited to have you. And, Anne, I'm so curious to know the degree to which you and your team at the MITRE Corporation are looking at artificial intelligence and, even more than that, what prompted you to explore AI-enabled technologies more deeply?


Anne Townsend:
So I believe when some people hear the term artificial intelligence, to them it sounds exciting, fantastical, even futuristic. It includes things like robots that perform human functions, self-driving cars that have become our personal chauffeurs, or advanced technology that really changes the whole way the world works. But the truth is, it isn't that exotic or futuristic anymore. Just take a walk through my house and look at where AI might be in my very own home. I notice it the second I step into my living room.


Anne Townsend:
I can turn on my TV through a voice-enabled remote control and have movies recommended to me by my streaming service. If I do not like the recommendations, I could ask the voice-enabled assistant in the room to suggest another. When I'm comfortable on my couch and I've decided the temperature of my house is uncomfortable, I don't have to get back up again.


Anne Townsend:
I could just pull out my phone and adjust the thermostat. And while I'm at it, I could have the robot vacuum cleaner begin to clean the house. When I step outside of my home, it's even more pervasive. I walk past homes with camera-enabled doorbells, and I traverse streets where AI manages traffic at intersections, controls streetlights more efficiently, and on and on. All these little pieces of technology I mentioned are, in reality, AI-enabled. What prompted us to explore this domain is that, bit by bit, device by device, AI is becoming more present around all of us, and it's contributing to growing new concepts like the smart city.


Anne Townsend:
With this evolution occurring around us, it's important for everyone to really understand what type of impact AI and its contributions to technology will have on our lives. At the MITRE Corporation, we are looking at ways to establish justified confidence in AI systems, especially those in high-consequence applications.


Kacy Zurkus:
It's so interesting listening to you, especially thinking about the TV and the remote control. I remember when I was a kid... I grew up in the '70s and '80s and I remember that transition from having to get up and go and turn the channel, to then the little slider to pick your channel. And then we got a remote control, right? And we called it the remote and my kids now call the TV remote the talker because you just talk right into it.


Kacy Zurkus:
So for those who realize that AI is a part of one of their technologies, or perhaps decide to put AI in their own products or devices, what are the things that they should be thinking about? Right? Especially in this home environment that we're talking about, too, like the ways that that remote then interacts with our lives and things like that.


Anne Townsend:
Well, there are three questions to really ask yourself if you're going to go the AI route and put AI into a technology. The first question is: is this the right place for AI? Is it really adding any value to the technology? The second question is: is the AI going to do a good job, with demonstrated success that really aligns with and adapts to your needs? The third question is: is adding AI to the picture worth the effort? It's not cheap, and it's not without dangers. We can sum up these questions with three areas to consider: relevance of AI, competence of AI for the task, and cost of using AI in the task.


Anne Townsend:
At the MITRE Corporation, we formalized these questions, and what would be optimal answers, in a framework we call ARCCS, A-R-C-C-S, for AI Relevance, Competence, and Cost Score. These aren't always easy questions to answer, though, which is why the ARCCS framework provides a lot of guidance on how to answer the questions, worst-case to best-case answers, and how to score your answers.


Anne Townsend:
Also, there's a way to express the quantity and quality of information you use to answer those questions. This results in a set of scores that tell you the value of the AI tool. You can look at the numbers as absolute, but even better, use them to compare among the AI-enabled tools you're considering. But it's not just about numbers. The greatest value of the tool is the questions you know to ask and the answers you should be looking for. Ultimately, the framework is an evaluation methodology and set of metrics to address the degree and effectiveness of the AI component of an AI-enabled technology. The framework guides the assessor to organize the available evidence, evaluate its strength, and determine whether a product performs as advertised in a technically relevant manner.
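[Editor's note: as a rough illustration of scoring answers per dimension and comparing tools, the sketch below averages hypothetical 0-to-4 answer scores. The dimension names follow ARCCS, but the scale, questions, and aggregation here are illustrative assumptions, not MITRE's actual scoring rubric.]

```python
# Hypothetical ARCCS-style comparison sketch: each question's answer is
# scored 0 (worst case, e.g. never evaluated) to 4 (best case, fully
# evidenced); a dimension's score is the average of its answers.
# The scale and aggregation are illustrative, not MITRE's actual rubric.

def dimension_score(answers):
    """Average the 0-4 scores given to each question in a dimension."""
    return sum(answers) / len(answers)

def arccs_summary(tool):
    """Return per-dimension scores for one tool's answer sheet."""
    return {dim: dimension_score(scores) for dim, scores in tool.items()}

# Two candidate AI-enabled tools, scored on a few questions per dimension.
tool_a = {"relevance": [4, 3], "competence": [3, 2, 3], "cost": [1, 2]}
tool_b = {"relevance": [2, 2], "competence": [4, 4, 3], "cost": [3, 3]}

# Rather than treating the numbers as absolute, compare them side by side.
for name, tool in [("tool_a", tool_a), ("tool_b", tool_b)]:
    print(name, arccs_summary(tool))
```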


Kacy Zurkus:
I love that. So could you walk us through part of this ARCCS framework? Just to give our listeners an example.


Anne Townsend:
Sure. Let's start with the cost and vulnerability aspect and recognize that not only does it lead us to a score, it also surfaces other considerations you really need to understand when adopting technology that is AI-enabled. Let's take, for example, a piece of AI-enabled technology that involves people. Either it visually recognizes them, it diagnoses a medical condition, maybe it makes a creditworthiness decision, or it even renders a judgment.


Anne Townsend:
These types of technologies will be working with data that contains information about people. Maybe it includes their names, identifiable information, medical information, and so on. When we use ARCCS on this type of technology and consider it along the cost and vulnerability dimension, we analyze unaddressed or unmitigated vulnerabilities. We know that AI can be misused and it can make mistakes. When you include AI in a system, particularly a machine learning model, you have to be aware of how that model could be fooled or misused, either accidentally or deliberately.


Anne Townsend:
Anytime a malicious user can trick the AI model into an incorrect decision or classification, there's an opportunity for an ill-intended outcome. In this case, it could result in a violation of privacy or the release of false information, which may lead to a false conclusion. Recent research has shown that personally identifiable information, including names, emails, and conversations, can be extracted from some common language models, which were trained on public data sets scraped from the internet.


Anne Townsend:
In other cases, the models may be fooled into revealing data by mistakenly accepting false biometric credentials before revealing sensitive information. Using ARCCS to evaluate this type of AI-enabled technology, with the information available about it, leads one to question and investigate whether a tool has been developed in a way that identifies unaddressed or unmitigated vulnerabilities so that they can be addressed before they become a problem. The assessor would then be able to assign a score that could range from a really poor result, meaning that this consideration was never even evaluated, to a really good result, where the information explains exactly how the vulnerability is addressed and mitigated.


Kacy Zurkus:
So undoubtedly AI has raised some privacy concerns and you even touched upon that in your previous comments. Can you talk a little bit to our listeners about AI and its potential privacy implications?


Anne Townsend:
As we all know, cyber attacks can reveal private information in a database. However, not every privacy issue is caused by a cybersecurity issue. Smart devices, including those that are AI-enabled, have a deeper reach into our lives than is immediately evident, and data collected from them can paint a robust picture of a person's life.


Anne Townsend:
It could even create information that is used in ways that really surprise them. Let's go back to the walk through our lives and where we saw AI and how it's helping to enable smart cities. As I mentioned, cameras are everywhere now, and surveillance is a byproduct of cameras on the street and cameras everywhere. They can tell you when someone is in a crosswalk, but ML models could also do facial recognition and tracking of those people. Some of the challenges we see stem from how privacy risk assessments tend to be conducted.


Anne Townsend:
Often they are conducted as a compliance assessment based on laws and regulations, and conducted on a single system. As we continue to see, privacy laws and regulations don't always do a particularly good job of keeping up with technological innovations, and they tend to be focused on general privacy principles that, while critically important in their own right, tend to focus on organizational behaviors without much regard to how individuals experience the privacy risk. That, compounded with an assessment of a single system, or maybe just a subset of systems, and not a whole smart-city view, means our risk assessments could be flawed. That's a bit of a shaky foundation to start with.


Anne Townsend:
By taking a whole-city view, you're able to build a better picture of the data that is captured and what's happening with it throughout your smart city ecosystem. That includes understanding what kinds of processing are being done through AI and how AI results are used. A whole-city risk assessment will help you avoid surprises and better engineer your smart city solutions to address privacy risk before it becomes a major issue. This would help avoid issues like those faced by some cities that did try deploying smart city technologies, yet ended up facing a lack of transparency and concern over things like smart streetlights.


Anne Townsend:
To give a little background here: some cities have installed streetlights equipped with cameras that collected data in real time for cost-saving purposes. They also used the video in very limited circumstances, like violent crime, but the public was concerned due to the lack of transparency. A strong risk assessment would've helped them better understand whether and how their citizens might have gotten concerned, so that the city could have built in stronger protections and transparency practices from the very beginning. This isn't an AI issue per se, but it does demonstrate what happens when privacy aspects, especially transparency, are not addressed.


Anne Townsend:
AI tends to be viewed as somewhat of a black box by the general population, and this lack of transparency and understanding leads to real and perceived privacy issues. From an AI and smart city perspective, I think about how complete a picture we can paint of a person's life: where and how they drive, where they work, where they live, where they eat, where they get healthcare, who they spend time with, which transportation services they use, and when. Applying AI to this information will continue to result in making assumptions about these individuals and influencing how organizations interact with them. Sometimes for the better, with safety and convenience, but maybe sometimes not, with things like surveillance and loss of autonomy.


Kacy Zurkus:
I love what you said about the real and perceived privacy issues. And I want to take that singular issue of privacy and really broaden the lens to talk about risk, in general, not just privacy risks. So in what ways can the ARCCS framework be used to help understand risks?


Anne Townsend:
So let's continue with smart cities as an example. The wide range of smart city technologies offers many types of benefits for government, businesses, and individuals, ranging from operational efficiency to safety. Providing these capabilities means setting up an infrastructure that collects and generates new information about the people who interact with those technologies. These things can affect individuals' decisions regarding how and where they travel. They create issues around autonomy and power balances. These potential privacy issues do not mean that smart city technologies are bad, or even that we shouldn't use them.


Anne Townsend:
However, it does mean that we need to evaluate and address these privacy risks before they become real issues for people. Also, as a part of this, smart cities will have various levels of AI scattered all around. Smart devices need to be evaluated to see what vulnerabilities they have or introduce into the system, as well as what type of privacy risks they really introduce. A moment ago I mentioned we have to think of the entire view of the smart city, and ARCCS would help us understand the pieces that go into that entire view. As an example, ARCCS can help with the assessment of the technology that is part of the smart city. One example would be the ARCCS check of whether the AI usage is necessary or gratuitous.


Anne Townsend:
Unnecessary use of AI introduces unnecessary vulnerabilities. In the competence dimension, ARCCS is concerned with AI's alignment with the job needs, as well as its capability to detect model drift, among other things. If AI usage in a tool doesn't score well in these categories, the tool may provide unneeded or unintended capabilities, or it may allow undetected and inaccurate conclusions about individuals in a database. Another part of assessing competence is really understanding how much transparency and explainability an AI-enabled tool provides about its conclusions. As I mentioned, transparency helps individuals understand whether and how data about them is used, and provides ways to evaluate whether the AI model is behaving fairly and ethically.
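[Editor's note: one concrete form of the model-drift check mentioned above can be sketched as a simple statistical comparison between a baseline data window and a recent one. The function name, data, and threshold below are illustrative assumptions; real deployments would use richer drift statistics.]

```python
# Minimal sketch of a model-drift check: flag drift when the mean of a
# monitored feature in recent data shifts too far from the baseline mean,
# measured in baseline standard deviations. Threshold is an assumption.
import statistics

def drift_detected(baseline, recent, threshold=2.0):
    """Return True when the recent mean lies more than `threshold`
    baseline standard deviations away from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    shift = abs(statistics.mean(recent) - mean) / stdev
    return shift > threshold

baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2]  # training-time window
stable = [10.0, 10.1, 9.9]                      # recent data, no shift
shifted = [13.5, 14.0, 13.8]                    # recent data, large shift

print(drift_detected(baseline, stable))   # small shift: no drift flagged
print(drift_detected(baseline, shifted))  # large shift: drift flagged
```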


Anne Townsend:
Along those same lines, understanding how the model was created, as well as the data used to develop, train, and test the system, leads to a greater understanding of the privacy posture of that system.


Kacy Zurkus:
"So there's a framework for that" was one of the trends that we saw come through when we were looking at our submissions for RSA Conference 2022. There is clearly an abundance of frameworks, right? There are lots of frameworks out there with good reason, and we know that no single framework can address all the risks one should be considering when using AI-enabled technology or even developing a smart city. But what can the ARCCS framework help people with?


Anne Townsend:
ARCCS itself will help people understand the relevance of AI, the competence of AI for the task it's performing for the smart city, as well as the real cost of using AI. But what I'd like to also point out is that ARCCS is not the only framework that can help people with AI-enabled technology or when developing a smart city. It's a great example of the role and value of frameworks in general for addressing risk. Frameworks provide a consistent way to examine and implement concepts.


Anne Townsend:
When you pair frameworks, you can identify areas that intersect. For example, we've started looking at the touch points between MITRE's AI maturity model, which is a tool to assess and guide effective readiness, adoption, and use of AI or ML across an organization, and the NIST privacy framework, which NIST developed to help organizations identify and manage privacy risk so that they can build innovative products and services while protecting individuals' privacy.


Anne Townsend:
Each of these tools covers its specific domain. By looking at how they align, practitioners can see things like where AI activities can be strengthened by addressing privacy risk upfront, or where governance activities, such as defining policies, roles, or responsibilities, benefit from considering privacy needs. For the AI maturity model, we found that all of the capabilities from each of the pillars in the model map directly to at least one subcategory in the privacy framework core, with most areas having multiple direct mappings as well as additional subcategories that are in alignment. When you apply these in smart city contexts, these tools can help you think about where privacy risk may arise in the data that is collected and analyzed, and how that could impact individuals.


Kacy Zurkus:
So with all these different frameworks, where should practitioners start when evaluating whether and how an AI activity has privacy considerations?


Anne Townsend:
From a basic compliance perspective, privacy requirements can vary widely by industry and geography. As in many areas, such as cybersecurity, compliance regimes have an important role, but they are typically a minimum starting point. To more fully examine privacy risk, start by considering the context and the objective of your planned use of AI. What's the big picture and objective of what you're trying to accomplish? From there, start looking at data flows and how the data is changing as it moves through the model.


Anne Townsend:
Also keep in mind that end users can directly and tangibly suffer from privacy risk. Remember to put yourself in their shoes and not just think about the benefits of AI to your organization. And lastly, take advantage of the growing body of tools and lessons learned that are publicly available. You don't have to start from scratch. This is where tools like the NIST privacy framework and the NIST cybersecurity framework can help. Both frameworks provide a menu of privacy and cybersecurity activities and outcomes that can help organizations manage risk, and they define profiles to align the activities and outcomes to mission and business needs.


Anne Townsend:
We have found that these framework profiles are powerful tools for bringing organizations and industries together to understand how cybersecurity and privacy can help them achieve their objectives, how to prioritize their limited resources, and how to build trust with their stakeholders. That's also what's so great about ARCCS: it gives you a set of concepts and questions to get you started on your evaluation, starting with the decision to use AI, understanding what kind of metrics you need to consider in evaluating its performance, and ending with an analysis of the risk you may be introducing into your project or your environment.


Anne Townsend:
From there, you can leverage other tools, like the NIST cybersecurity framework and the NIST privacy framework, to really dig deeper into specifics and get a fuller picture of the role, benefits, and hazards of using AI to smarten up your city.


Kacy Zurkus:
That's all great advice. And you have mentioned several different resources. Is there anywhere where our listeners can go to find more information?


Anne Townsend:
Absolutely. So I would start off with arccs.mitre.org, that's A-R-C-C-S. This is where you can find all the information about the framework, its evaluation methodology, and the metrics to assess the degree and effectiveness of the AI component of an AI-enabled tool. I would also recommend NIST for their AI work. They have exciting projects that look to advance standards in AI as well as to cultivate trust in AI technology. Then there's the NIST privacy framework. Quoting from NIST, the privacy framework is intended to be widely used by organizations of all sizes and agnostic to any particular technology, sector, law, or jurisdiction. Using a common approach adaptable to any organization's role in data processing ecosystems, the privacy framework's purpose is to help organizations manage privacy risk.


Anne Townsend:
I would also recommend NIST's privacy engineering resources, which, quoting again, aim to explore crosswalks, common profiles, guidance, and tools to support implementation. MITRE has also put together privacy resources; the MITRE privacy program emphasizes strategy and policy as well as privacy engineering, and spans all aspects of a privacy program. Those can be found on MITRE's website at mitre.org/privacy, and there are also frameworks from MITRE for developing and implementing data-driven, actionable, equitable policy.


Kacy Zurkus:
And this has been a great conversation. Thank you so much for joining us. Listeners, thank you for tuning in. To find products and solutions related to artificial intelligence and machine learning, we invite you to visit rsaconference.com/marketplace. Here, you'll find an entire ecosystem of cybersecurity vendors and service providers who can assist with your specific needs. Please keep the conversation going on your social channels using the hashtag #RSAC, and be sure to visit rsaconference.com for new content posted year-round. Thank you all for joining us. Be well.


Participants
Anne Townsend

Department Manager and Cybersecurity Engineer, The MITRE Corporation

Machine Learning & Artificial Intelligence

Topics: artificial intelligence & machine learning, industrial control security, innovation, policy management, privacy, risk management, standards & frameworks
