The Stronghold of AI - Top 5 Considerations for Adoption


Posted by Nicholas Kathmann

As I walked up and down the aisles at RSA Conference 2024, chatted with industry peers, and attended presentations, I noticed something. Just about every booth, almost all signage, and tons of session topics either prioritized or included generative artificial intelligence (GenAI). I’m not surprised – the current state of AI is generating some compelling reasons for adoption. In fact, experts forecast that AI will add $15.7 trillion to the economy within the next six years. The majority of companies using AI believe it simplifies operations, and 9 out of 10 businesses support the use of AI for a competitive advantage. AI isn’t niche technology anymore – it’s everywhere.

There’s no denying the AI wave is here to stay, but while the buzz around AI is reverberating into every boardroom and investor meeting, it’s important to proceed with caution. Organizations need to take the rapid evolution of AI seriously and avoid acting on impulse – it’s essential not just for the safety and security of the business, but for its customers as well. Yes, AI can create efficiencies, accelerate innovation, drive down costs, increase productivity, and so much more – but a poorly secured implementation can incur significant financial and reputational damage. Is the juice worth the squeeze? It should be, because AI is here to stay, and organizations that fail to embrace it will soon be left in the digital dust. To keep pace, organizations should empower their Governance, Risk, and Compliance, Privacy, and AI teams (GRCPAI) to collaborate and align on the considerations below when integrating AI into business operations and enhancing products and solutions.

AI is Pervasive

AI, in its current trajectory and hype cycle, is extremely pervasive and disruptive. Looking beyond the internal productivity use cases, such as Copilot or Google/MSFT Office suite additions – AI also spans finance, sales, security, marketing, engineering, and product. Evaluating an AI feature in Salesforce requires vastly different governance processes than evaluating how you might incorporate AI into a product you’re developing, or even developing your own AI models. Recent innovations in AI are much like the original Internet innovation curve, where every department, engineering team, product team, security team, executive, and board member is looking for ways to embed AI into the strategy and innovation of the company and avoid being left behind. Nobody wants to be the next Blockbuster and miss out on the Internet streaming disruption!

The Benefits are Massive

Leveraging AI is at the top of any Chief Financial Officer’s (CFO) list because the potential impact on the bottom line is huge. Not only will AI enhance the value of products and services to capture more market share and drive topline revenue, but it will also change how organizations operate. Efficiencies in day-to-day work become magnified up through the C-Suite, making the entire enterprise more productive and valuable. Netflix is a great example, reportedly saving $1 billion by utilizing machine learning. Arm Holdings, a British semiconductor designer, reached record revenue, attributing its booming business to the integration of AI in the smallest and most complex facets of the company. And recent research from Stanford University shows AI enables employees to complete tasks more quickly and improves the quality of their output.

The Pitfalls can be Just as Massive

AI is incredibly powerful and designed to continuously evolve. This means if you’re not keeping pace with the technology, it may grow beyond your understanding or control. Not having full oversight of AI usage can be detrimental, with financial and brand-reputation consequences. UnitedHealth has been in the news lately for a massive breach in February, and, if you remember, last year the organization was accused of “illegally denying elderly patients care owed to them under Medicare Advantage Plans” through a faulty AI model, nH Predict. Also in 2023, Samsung banned generative AI after its engineers accidentally leaked internal source code by uploading it into ChatGPT. And, sadly, an AI-powered autonomous vehicle from the driverless car startup Cruise was involved in an accident that left a pedestrian critically injured. I believe more corporate-AI failures will be disclosed as adoption increases without the appropriate protocols and protections. Make sure your company is not one of them.

AI is Evolving Really Fast

The goalposts are constantly moving with AI. Days ago, OpenAI released a new multimodal model, GPT-4o, enabling more natural human-computer interaction. As a reminder, the original ChatGPT launched less than two years ago. Forbes is on its sixth annual AI 50 report, and there has been a huge spike in AI startups since 2000. When adopting AI technology and large language models (LLMs), it’s important to embrace the benefits while protecting against known vulnerabilities – but it’s also critical to have a plan for the future. The business will grow, and its usage requirements will change. New and emerging regulatory demands will be implemented to govern the use and development of AI. Legal teams are already seeing changes in contract negotiations. Soon, AI addendums will be standard in customer and partner agreements, and company handbooks will include a section on AI governance. Signing acknowledges awareness, and awareness likely means accountability.

Governance and Transparency are Pivotal

If you don’t know about something, you can’t protect it. Implementing a holistic AI governance program allows your organization to have oversight and knowledge of all AI models, determine how they are used by different departments, and safely approve or deny adoption requests. Transparency builds trust, and an AI governance program is only effective when employees trust and follow the process.

Due to the pervasiveness of AI/LLMs in not just your organization, but also every vendor/partner you’re using, it’s important to also implement a comprehensive third-party risk management AI workflow. After determining that a specific AI model or company is too risky for internal use, how can you be sure one of your vendors or third parties isn’t using that same model – and that your sensitive data won’t end up there?

Every year, RSA Conference is a great opportunity to connect with colleagues in the security industry and get a feel for what is on their minds. It’s no surprise that AI was the prime topic this year, and it’s interesting to see so many businesses grappling with not just how to implement AI, but how to implement it in a way that doesn’t sacrifice the safety and security of their data, or that of their customers. The rapid evolution of AI (especially generative AI) capabilities makes this an exciting time to be in the tech industry, but it’s critical for businesses to avoid letting their excitement outpace their good sense. Fortunately, RSA Conference provided reasons to be hopeful, as the growing emphasis on AI governance and security shows that the industry is – at the very least – moving in the right direction.


Contributors
Nicholas Kathmann

CISO, LogicGate


Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.
