ChatGPT Risks and Remedies

Posted by Federico Charosky

It can’t have escaped anyone’s notice that the world appears to have recently woken up to the arrival of AI, and in particular generative AI services. The news is awash with stories predicting the end of life on earth as we know it, given the dramatic consequences that generative AI services will have for education, healthcare, dating, and any aspect of our lives that involves producing written content. Widespread news coverage has reported that ChatGPT, the most widely established AI-powered chatbot, has already passed the final MBA exam at Wharton, one of the USA’s finest business schools.


The consequence of this media coverage is that many of us have enthusiastically logged onto ChatGPT, or its more recently introduced competitors, to ‘have a go’ and see what it can do. The results are usually mixed, and most of this is quite benign, but there is no doubt the seeds are being sown for very rapid, widespread use of the technology. We are already aware that people working in companies and public sector organizations are using ChatGPT to make their working lives easier. Its ability to summarize reports, compare data, or write halfway decent copy in response to a specific brief is very enticing to overworked executives, just as it is to undergraduates in universities who are already using it to provide at least the backbone of their essays (complete with referencing). It’s very good at what it does, and sometimes the results are indistinguishable from human-generated content. Sometimes, they’re better.


A very significant security risk

Of course, the potential of this emerging technology is very exciting, but ChatGPT and its competitors are a cause for concern. Not because people will use the technology to make their working lives easier, but because AI-powered chatbots present a very significant security risk. Put simply, employees may be tempted to put confidential data into the chatbot, and the company they work for will have absolutely no idea what the chatbot will do with that data. AI works by harvesting everything it is shown and retaining it for later use. It becomes better informed, and therefore more useful, every time someone gives it data. So, if a competitor searches for your confidential sales forecasts, and one of your employees has put them into ChatGPT, there is no guarantee at this stage that they won’t be found. AI is no respecter of confidentiality.


Business and reputation risks

As we see competitor AI technologies being developed, launched and licensed, the issue will proliferate, and some of them will inevitably be controlled by powers who are less benign than OpenAI, the creators of ChatGPT, seem to be. There are very significant business and reputation risks presented by the entire world of AI. Businesses should act now to protect themselves against this very real problem.


Three rules to reduce risk now

Here are three things that organizations should put in place immediately:

  1. Adapt your staff handbook’s IT policy to ensure that it covers this problem – that you have in place a well-explained set of rules about what your employees can and can’t upload to generative AI solutions. It must be seen to be on a par with putting confidential information out on social media, for example.

  2. Identify the data in your organization that should be protected – and what should not be allowed to be shared with ChatGPT without raising a red flag. There is likely to be an awful lot of this.

  3. Put in place monitoring technology. At its most basic, you should leverage technologies that monitor uploads to ChatGPT’s website. This at least gives you visibility of what is going out of the digital door, and lets you raise that all-important red flag if your monitoring software doesn’t like what it is seeing. At the more sophisticated end, you can use machine learning algorithms to identify patterns in the uploaded data that flag potential risks and highlight any wrongdoing by an employee.
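At its simplest, the basic monitoring in rule 3 amounts to scanning outbound text for patterns your organization has classified as sensitive before it reaches a generative AI service. The sketch below is purely illustrative, not a real monitoring product: the pattern names, the `flag_upload` helper, and the sample text are all hypothetical, and a real deployment would use a far richer (and organization-specific) rule set.

```python
import re

# Hypothetical examples of patterns that should raise a red flag before
# text leaves the organization. Real DLP rule sets are far more extensive.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_marker": re.compile(
        r"\b(confidential|internal only|do not distribute)\b", re.IGNORECASE
    ),
}

def flag_upload(text: str) -> list[str]:
    """Return the names of sensitive patterns found in outbound text."""
    return [
        name
        for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

# Example: a draft an employee is about to paste into a chatbot.
draft = "CONFIDENTIAL: Q3 sales forecast attached. Contact jane@example.com"
print(flag_upload(draft))
```

Each flagged pattern name could then feed an alerting or blocking workflow; the machine-learning approaches mentioned above would replace these fixed regexes with models trained to spot sensitive content that simple patterns miss.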

The problem with ChatGPT is that it still all feels like a bit of fun – people at parties talking about how they asked it to produce a standard shopping list in the style of a Shakespeare sonnet. The time has come for the world to wise up to the risks.

Federico Charosky

Founder & CEO, Quorum Cyber

Machine Learning & Artificial Intelligence

risk management, data security, data sovereignty, data loss prevention, artificial intelligence & machine learning

Blogs posted to the website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.
