Charles Cresson Wood is back with another gem in Internal Policies for Artificial Intelligence Risk Management (available at https://www.internalpolicies.com). He notes that both the high-tech companies building AI and the organizations using it are often far too optimistic about the technology and plow ahead without adequate attention to risk management. His new book attempts to shift the conversation from an extreme focus on making money to a focus on respectful improvement in the circumstances of everyone affected by an AI deployment.
The book aims to help AI user organizations quickly implement real-world policies that reduce the risks associated with AI technology. Each policy includes enforcement details that encourage the relevant parties to comply.
Many organizations have an AI policy that is only a paragraph or two long. Some may have a detailed two-page policy.
But what becomes eminently clear when reading through the many policies in this valuable book is that there are countless areas where AI policies are needed. Covering them all in a one- or two-page policy is nearly impossible.
The author points out that your AI policies need to be approved by your Artificial Intelligence Governance Council (AIGC). That raises the question: if you don’t have an AIGC, how can you ensure that you are doing AI in a compliant manner?
Wood is many things: an author, researcher, attorney, and long-time information security professional. I have been a fan of his books for many years. His Information Security Policies Made Easy (ISPME) is an invaluable resource for information security professionals. I reviewed versions 11, 12, and 13 of ISPME but, alas, never got around to reviewing version 14.
Wood wrote another valuable guide, Corporate Directors’ & Officers’ Legal Duties for Information Security and Privacy, which helps professionals form an opinion on whether their organization’s directors and officers are meeting their information security, privacy, and legal duties. Its methodology is an independent audit that can be used to determine whether those directors and officers are in compliance.
Like ISPME, the material in this latest book is intended to save policy writers the significant amount of time and effort it takes to craft these policies. While he has done the initial work, it is imperative that policy writers understand their organization's specific requirements in order to customize these policies to meet those needs. As well-written as these policies are, cutting and pasting without due diligence and understanding does not serve anyone well.
Policy writers need to understand their organization’s industry, mission, products and services, risk appetite, corporate culture, and, most importantly, how the organization will deploy and use AI.
At over 500 pages, this book is a bear. But all those details show how much effort needs to go into ensuring AI is done in a manner that does not introduce unneeded risk to your organization.
Perhaps one of the greatest tricks ever pulled was convincing the world that AI is easy. There are indeed parts of AI that are easy. But if organizations don’t have effective AI policies in place, they risk the wrath of angry stakeholders, customers, and expensive lawyers.
When it comes to AI risk management, many different players in an organization need to be involved. The book has a valuable diagram that details those players and their component tasks, including the risk management committee, AI ethics committee, AIGC, IT governance council, and many more. The ease with which AI can be deployed belies how easy it is to fall out of compliance with various laws and regulations. The book helps ensure you stay on the right side of the law.
For those who want AI’s benefits while avoiding the embarrassment and mitigating the risks, Internal Policies for Artificial Intelligence Risk Management is an invaluable resource.