Tackling Conversational AI's Trust Crisis!


Posted by Chetan Honnenahalli

Recent advancements in Artificial Intelligence (AI) have created many ways to improve the productivity of small and large businesses. Generative AI has taken the world by storm. Many AI assistants for specific domains have already sprung up and started working for business owners. But while this helps business owners save on salaries, does it erode trust with regular people? I think yes.

The Fox Is Already in the Henhouse

Over the weekend, I got a text from a number I didn't recognize, claiming to be my real estate agent's assistant. They were scheduling a meeting for the following week, which seemed normal since I was expecting it. After exchanging a few texts about my availability and house preferences, I asked for the assistant's name, but got vague responses. This raised my suspicions, so I questioned if it was a scam. Moments later, my realtor called to explain it was their AI assistant gathering info, which caught me off guard.

While no harm was done here, I was certainly taken aback that I had no idea I was talking to a bot. In the past, I had always been able to spot one easily.

This got me thinking: powerful AI agents that sound exactly like humans can be very dangerous in the hands of bad actors. The negative effects could be as straightforward as identity theft by a hacker or as subtle as manipulation by a political party.

My experience demonstrated that the effects of AI, positive or negative, are not something that will arrive in the future; they have already begun.

An Identity Thief’s Dream: Automated Social Engineering

Conversational AI facilitates sophisticated social engineering, a tactic identity thieves use to extract private information from unsuspecting victims. This data can unlock access to online accounts by exploiting security questions or by helping construct guessable passwords. For instance, a child's birth year might reveal an ATM PIN. This prevalent technique fuels the Account Takeover (ATO) threat, costing companies millions in damages and tarnishing their reputations. According to Hacker News, "A whopping 20 billion records were stolen in a single year, increasing 66% from 12 billion in 2019. Incredibly, this is a 9x increase from the comparatively 'small' amount of 2.3 billion records stolen in 2018."

However, social engineering has always been a time-consuming activity for identity thieves. They first have to learn something about the victim, then build trust through text-based conversations (online chat, text messages, etc.), and only then extract the private information that can be used to take over the victim's accounts. Until now.

With conversational AI, identity thieves can now automate the trust-building exercise. They can harvest private information while they sleep!

Here is how this could play out over text messages:

Step 1. Public Information Gathering: The identity thief writes a script that scrapes popular social media sites and news coverage to collect a single piece of recent information about each potential target, along with their phone number.

Step 2. Trust Building: The identity thief crafts a basic AI agent and a script to initiate conversations with targets based on the gathered public information. Using a text messaging Application Programming Interface (API) such as Twilio or Vonage, the generated messages are sent out. While not all targets respond, it is reasonable to assume that some will. The script relays responses back to the AI, which generates further replies. After around ten messages, the AI has likely built trust with the target.

Step 3. Private Information Gathering: At this point, the script prompts the AI to start eliciting personal information from the target. For example, "Oh, you are also from Chicago? Which school did you go to?"

Step 4. Exploitation: This information can then be used to attempt ATOs or be sold on the dark web without the victims' knowledge.

The same pattern can be repeated on social media messaging interfaces through a web browser. The identity thief can load a browser extension in developer mode and have a script initiate chats with several possible targets simultaneously, learning personal information about each of them.

Social Mistrust

As these attacks surface, users will start losing trust in online communication channels. In the near future, will people really trust messages sent over online platforms? How will they differentiate between legitimate interest and a social engineering attempt? Can a user really open up to anyone?

Several tools can already detect generated text (Copyleaks, AI or Not, Writer, Quillbot, and others). However, social media companies, cell phone networks, and mobile phone manufacturers are lagging in adopting this technology. Just as phones flag suspected spam calls and email filters catch phishing attempts, text messaging platforms should flag suspected "generated text." This simple measure could prevent countless individuals from falling victim to social engineering and identity theft.
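To make the idea concrete, here is a minimal sketch of how a messaging platform might label inbound messages before displaying them. The detection endpoint, request fields, and response shape below are illustrative assumptions, not the actual API of any of the tools named above.

```python
import requests

# Hypothetical detection endpoint. Real services such as Copyleaks or
# "AI or Not" expose similar classifiers, but this URL, the request
# body, and the response field are assumptions for illustration only.
DETECTOR_URL = "https://detector.example.com/v1/classify"
FLAG_THRESHOLD = 0.8  # probability above which we label a message

def label_incoming_message(text: str) -> str:
    """Return the message, annotated with a warning when the
    detector judges it likely to be machine-generated."""
    resp = requests.post(DETECTOR_URL, json={"text": text}, timeout=5)
    resp.raise_for_status()
    score = resp.json()["generated_probability"]  # assumed field name
    if score >= FLAG_THRESHOLD:
        return f"[Suspected Generated Text] {text}"
    return text

# A platform would run this server-side on every inbound message,
# so the warning label arrives with the message itself, much like
# spam-call labeling on phones today.
print(label_incoming_message("Hi! I noticed we went to the same school..."))
```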

Unlike in the physical world, where neighbors can easily warn each other about potential threats like car break-ins, this level of community cooperation is lacking among software companies. When security breaches occur in software applications, there is often no sharing of vital information such as the perpetrators' IP addresses or device details. Establishing a shared, public database of malicious actors and their devices would enable swift detection and containment of threats, much as the Domain Name System (DNS) provides a shared, public registry for website addresses. Such collaboration could significantly reduce harm to internet users.
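Email providers already cooperate in roughly this way through DNS-based blocklists (DNSBLs) such as Spamhaus: a sender's IP address is checked against a shared list with a single DNS query. Here is a minimal sketch of that lookup convention; note that real deployments typically require their own resolver or a data-feed agreement with the list operator rather than queries through public resolvers.

```python
import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Check an IPv4 address against a DNS-based blocklist (DNSBL).

    The DNSBL convention: reverse the IP's octets, append the
    blocklist zone, and resolve the resulting name. A successful
    lookup means the address is listed.
    """
    reversed_ip = ".".join(reversed(ip.split(".")))
    query = f"{reversed_ip}.{zone}"
    try:
        socket.gethostbyname(query)
        return True   # listed
    except socket.gaierror:
        return False  # not listed (or the lookup failed)

# A messaging platform could consult a shared list like this before
# accepting a new sender. 127.0.0.2 is the documented DNSBL test
# address, which is always listed.
print(is_listed("127.0.0.2"))
```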

Conclusion: It Is Not All Doom and Gloom

Small business owners in particular can greatly improve their quality of life by adopting AI. I hope more people find such innovative ways of using it so that they have more time to spend with their families. However, caution from individuals and responsibility from companies are of paramount importance at this point in the digital evolution.


Contributors
Chetan Honnenahalli

Software Engineer, Meta

Machine Learning & Artificial Intelligence

social engineering, ethics, exploit of vulnerability, identity theft, hackers & threats, risk & vulnerability assessment, risk management

Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.

