Traditionally, cyberthreat actors have come in one of three guises: criminals, nation-states, and hacktivists. Each had its own targets and its own methodologies. Criminals sought to steal money, payment card data, or the elements of personal data useful for identity theft. Nation-state actors wanted their adversaries’ secrets, whether intellectual property or national security information. Hacktivists aiming to make a political point engaged in denial-of-service attacks and website defacements.
That has long since changed. North Korea’s cyberwarriors engage in theft of both fiat currency and cryptocurrency to supplement their country’s budget. Other nation-states have prepositioned themselves in critical infrastructure, ready to cause disruptions if it suits their military and geopolitical goals. With the rise of ransomware, criminals have expanded their target sets to include all kinds of data that can be subject to extortion demands. Meanwhile, many hacktivists have become aligned with nation-states and have begun to target the operational technology that runs critical infrastructure.
Compounding these changes is the rise of artificial intelligence. The jury is still out on whether AI will be better for attackers or defenders, but it is clear that AI has now become part of the cybersecurity landscape. In its 2025 global threat report, CrowdStrike found that adversaries across all major categories—criminal, nation-state, and hacktivist—have become “early and avid” adopters of AI. On top of that, AI tools and features may themselves be vulnerable to adversarial attack, meaning that enterprises rapidly integrating AI into their operations should probably slow down a little to consider the supply chain risks.
Finally, the regulatory environment continues to evolve. While the course of cybersecurity law in the US may be hard to chart at the federal level given the new administration in Washington, attorneys general at the state level are likely to continue to actively enforce state laws requiring reasonable cybersecurity protection of personal data. At the same time, regulatory expectations with respect to the elements of a comprehensive cybersecurity program are rising. Ten years ago, the Federal Trade Commission’s (FTC) settlements in cybersecurity cases never mentioned multi-factor authentication (MFA). Now, the FTC is likely to specify not only that a company must use MFA for employee and vendor accounts that provide access to sensitive assets, but that it must be phishing-resistant MFA. Meanwhile, Europe has developed a complex array of cybersecurity measures.
These changes require enterprises of all kinds—businesses, non-profits, and governments—to reexamine and adjust their cybersecurity practices and governance structures.
The adjustment must start with an entity’s risk assessment. Cybersecurity is all about risk management, and every sound cybersecurity program must begin with a risk assessment, based on which the enterprise should build its controls. As adversaries change, so must the risk assessment.
One risk of the risk assessment process has always been that entities would underestimate their risk or ignore certain classes of risks, leading to an inadequate cybersecurity program. The FTC’s recent settlement with the web hosting company GoDaddy suggests that regulators may be growing less deferential. In its complaint, the FTC alleged that GoDaddy had failed to adequately consider the type and sensitivity of information customers stored in its shared hosting environment. The proposed settlement of the case would require the company to assume a high likelihood of unauthorized access to its hosting service, due to the number of websites hosted there, and to assume that customers operating websites on its hosting service are likely to maintain sensitive information there. In other words, risk assessments must assume the worst.
The changing threat environment also means changes in governance structures, with an increasing recognition that privacy, cybersecurity, and AI adoption are interconnected elements of any enterprise’s risk management and regulatory compliance strategy.
In some people’s minds, privacy, cybersecurity, and AI governance may still be considered separate domains, to be addressed in separate corporate silos. In fact, cybersecurity was always part of privacy. One of the earliest articulations of privacy principles, by a federal advisory committee in 1973, recommended that any organization creating, maintaining, using, or disseminating identifiable personal data must take precautions to prevent its misuse. The EU General Data Protection Regulation contains a data security obligation. So does the US’s Health Insurance Portability and Accountability Act (HIPAA), which requires privacy protections for personal health information, as does the federal statute mandating privacy protections in the financial services sector.
Now the rapid emergence of AI is forcing a further realignment of data governance structures. Organizations need to think holistically about digital governance. Since many chief privacy officers already have experience dealing broadly with the opportunities and risks of sensitive data, many are seeing their roles and titles expanded to include additional responsibility for AI governance, data ethics, or cybersecurity regulatory compliance. And even when job titles remain distinct, close and regular collaboration among privacy, cybersecurity, and AI teams has become essential. This is nowhere more apparent than in the face of a ransomware attack, where a successful response to a rapidly moving and often very public crisis requires close cooperation between the cybersecurity and privacy teams, along with IT, legal, PR, and affected business units. Such cooperation will only arise if those functions have a history of working together before the incident.