Imagine if the software solution driving a company's innovation also became its Achilles' heel. As companies explore the transformative power of artificial intelligence (AI), their reliance on third-party solutions, like ChatGPT for content creation and GitHub Copilot for software development, is rising. A recent MIT study found that 78% of companies now depend on these external AI solutions, yet such solutions are behind 55% of all AI system failures. This creates a paradox: while third-party AI solutions offer advanced capabilities without significant upfront investments, they can also introduce hidden risks that ripple through an organization’s AI supply chain.
What Is the AI Supply Chain Risk?
Unlike traditional software, where risk management is mostly about code vulnerabilities and system integrations, AI solutions introduce a whole new level of complexity. Their lifecycle goes beyond implementation to include data collection, model training, real-time decision-making, and ongoing updates. This is why it is critically important for practitioners to gain a deeper understanding of AI risk and governance. In an AI supply chain, various components, like training data, algorithms, third-party add-ons, and regulatory obligations, are often split among multiple vendors or decentralized teams. Every link in this chain can introduce distinct vulnerabilities, from biased data and security breaches to compliance oversights. If left unchecked, these risks can spread throughout the AI system, potentially undermining the technology’s performance, ethical standards, and overall value to a company.
Why Should You Care About Third-Party AI Risks?
The risks inherent in third-party AI extend far beyond the technology itself. They touch on data privacy, ethical decision-making, regulatory compliance, and overall operational reliability. This is why traditional risk management practices, designed primarily for static software, are no longer enough. Instead, companies must adopt a holistic, agile approach that includes continuous oversight and proactive vendor management.
Third-Party AI Procurement and Risk Management Best Practices
Procuring a third-party AI solution, such as an AI-enabled content generator, requires more than just a quick cost-benefit analysis. Below is a structured approach to help address potential challenges and ensure an organization gets the most out of their AI investment.
1. Clarify the Business Case and Conduct an AI Impact Assessment
- Focus on the ‘Why’: Start by defining why there is a need for a third-party AI solution: for example, an AI-enabled content generator. Does it streamline content workflows, reduce operational costs, or offer a competitive advantage in marketing campaigns? Articulating the strategic benefits and expected capabilities will help set realistic goals.
- AI Impact Assessment: Once an organization identifies the business case, perform an AI impact assessment to evaluate potential consequences. This includes assessing whether the solution’s generated content could inadvertently promote bias, damage the brand’s reputation, or conflict with internal ethics guidelines. Identifying these risks early will help the organization make informed decisions about vendor selection, implementation timelines, and ongoing oversight requirements.
2. Vendor Risk Assessment
- Go Beyond Cost and Functionality: After clarifying the ‘why,’ dig deeper into the ‘who.’ Evaluate the vendor’s responsible AI practices, data sourcing methods, and ethical guidelines. Determine whether they have a track record of addressing bias, misinformation, or potentially harmful outputs.
- Practical Tip: Request detailed documentation on how the vendor collects and curates training data, along with evidence of bias mitigation. For example, if procuring a content generator, ensure the vendor’s model doesn’t rely on outdated or ethically questionable data sets that could skew results.
3. Comprehensive Evaluation
- Use a Multidimensional Assessment Framework: A single evaluation method won’t cut it. Assess several dimensions, including dataset attributes (quality, relevance, and provenance), model methodology (training processes and performance metrics), and bias identification (monitoring for skewed outputs).
- Why It Matters: By examining these dimensions together, you’ll gain a more holistic view of the AI solution’s reliability, fairness, and overall risk profile.
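One way to operationalize a multidimensional assessment is to score each dimension separately and combine the scores with explicit weights, so trade-offs between vendors are visible rather than implicit. The sketch below is a minimal illustration in Python; the dimension names, weights, and score scale are assumptions for demonstration, not an industry standard, and a real program would tune them to its own risk appetite.

```python
# Hypothetical vendor-assessment rubric: each dimension is scored
# from 0.0 (low risk) to 1.0 (high risk), then combined with
# illustrative weights into a single comparable score.

DIMENSIONS = {
    "dataset_attributes": 0.40,   # quality, relevance, provenance
    "model_methodology": 0.35,    # training processes, performance metrics
    "bias_identification": 0.25,  # monitoring for skewed outputs
}

def weighted_risk_score(scores: dict) -> float:
    """Combine per-dimension risk scores into one weighted score."""
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"Missing dimension scores: {sorted(missing)}")
    return sum(DIMENSIONS[d] * scores[d] for d in DIMENSIONS)

# Example: a vendor with strong data practices but weak bias monitoring.
vendor = {
    "dataset_attributes": 0.2,
    "model_methodology": 0.3,
    "bias_identification": 0.6,
}
print(round(weighted_risk_score(vendor), 3))  # 0.335
```

Keeping the weights in one place forces the assessment team to agree on how much each dimension matters before comparing vendors, which is exactly the kind of holistic view this step is after.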
4. Integrate into Third-Party Risk Management
- Integrate AI into Existing TPRM Workflows: Don’t isolate AI risk management; integrate it into broader third-party risk management (TPRM) processes. This ensures consistent standards across all vendor relationships, whether it’s procuring a payroll system or an AI-enabled content generator.
5. Ongoing Governance and Building Trust
- Adopt a Culture of Continuous Oversight: AI models evolve over time, and so do their risks. Regularly review vendor performance, run audits, and refresh AI impact assessments to keep pace with new features, changing regulations, or shifting market expectations.
- Holistic Approach: A truly holistic governance framework builds trust across your company and with stakeholders. For example, host cross-departmental meetings that discuss AI tool performance and identified risks. This transparency encourages accountability and ensures ethical and legal obligations remain front and center.
The Bottom Line
Third-party AI solutions can be a game-changer for companies seeking quick wins in innovation and efficiency. Yet, as with any emerging technology, the risks are as real as the rewards. Proactively identifying the “why,” thoroughly vetting vendors, integrating AI into broader risk management processes, and adopting a culture of continuous oversight can help ensure AI remains a strategic asset rather than a hidden vulnerability.
Learn more about countering third-party AI risk at our upcoming session at RSAC™ 2025.