
AI Governance: 5 Hidden Security Risks Boards Miss and What to Do About Them


The Oversight Gap That Is Expanding Faster Than AI Itself

 

Boards have never been more engaged with AI. Nearly half of Fortune 100 companies now explicitly include AI risk within board oversight responsibilities, up from just 16% a year earlier.

 

Governance terminology is now widespread in boardrooms.


Substantive governance practice often is not.


A December 2025 McKinsey report draws attention to the disconnect. While more than 88% of organisations use AI in at least one business function, 66% of directors report limited or no understanding of AI. Nearly one in three say AI does not appear on their board agendas at all.


At the same time, the risk landscape is accelerating. The Stanford AI Index 2025 reports a 56.4% year-on-year increase in publicly disclosed AI-related security and privacy incidents.


Boards are approving AI strategies without clear visibility into associated risks.


This article outlines five critical risks that boards frequently overlook.

Risk 1: Shadow AI Is Already Operating Inside Your Organisation


Shadow AI is not an emerging risk. It is already embedded across business functions.


Nearly half of employees use AI tools that are not sanctioned by their organisations. A significant proportion admit to entering sensitive company data into these tools without understanding how that data is stored or reused.


This creates a silent and persistent exposure:

  • Proprietary data is shared with external models.

  • No audit trail exists.

  • Security teams have zero visibility.


Unlike traditional threats, this behaviour is not malicious. It is driven by productivity needs and occurs outside established governance.


The impact is measurable. Shadow AI incidents are more expensive, take longer to detect, and disproportionately expose customer data and intellectual property.


Board question:

Do we have a real-time inventory of all AI tools in use, including unsanctioned ones?
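
As a concrete illustration of where such an inventory can start, the minimal sketch below flags unsanctioned AI tool usage in web proxy logs. The domain list, sanctioned list, file name, and CSV column names are all assumptions for illustration, not an exhaustive or authoritative detection method.

    import csv
    from collections import Counter

    # Illustrative, non-exhaustive list of consumer AI tool domains.
    KNOWN_AI_DOMAINS = {
        "chat.openai.com", "claude.ai", "gemini.google.com",
        "copilot.microsoft.com", "perplexity.ai",
    }

    # Tools the organisation has formally approved (assumption).
    SANCTIONED = {"copilot.microsoft.com"}

    def shadow_ai_report(log_path: str) -> Counter:
        """Count requests to known AI domains that are not sanctioned.

        Assumes a CSV proxy-log export with a 'domain' column.
        """
        hits = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                domain = row["domain"].strip().lower()
                if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED:
                    hits[domain] += 1
        return hits

    if __name__ == "__main__":
        for domain, count in shadow_ai_report("proxy_log.csv").most_common():
            print(f"{domain}: {count} requests")

A report like this only surfaces browser-based usage; embedded AI features inside sanctioned SaaS products still need separate discovery.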

Risk 2: AI Agents Are Operating with Unchecked Privilege


AI is no longer limited to generating text. It is now taking action.


Agentic systems can access emails, query databases, execute workflows, and interact with enterprise systems. In many organisations, these agents are deployed with broad permissions and minimal oversight.


The principle of least privilege is often ignored.


Research from Lakera AI in late 2025 shows that even early-stage agents create exploitable pathways, particularly through indirect prompt injection. These attacks originate from external content, such as emails or documents, and require fewer attempts to succeed than direct attacks.


OWASP has identified overprivileged agents as a top risk category in its Agentic AI security framework.


This concern is not theoretical. Most organisations that experienced AI-related breaches lacked basic access controls.


Board question:

Are we applying Identity and Access Governance controls to AI agents at the same level as we do for privileged human users?
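
To make least privilege concrete, here is a minimal sketch of a deny-by-default tool dispatcher for agents. The agent names, tools, and scopes are hypothetical; in production this enforcement would sit in the identity and access platform, not in application code.

    from typing import Any, Callable

    # Illustrative tool registry; real tools would call enterprise systems.
    TOOLS: dict[str, Callable[..., Any]] = {
        "read_calendar": lambda user: f"calendar for {user}",
        "send_email": lambda to, body: f"email sent to {to}",
        "run_sql": lambda query: f"executed: {query}",
    }

    # Deny by default: each agent gets an explicit allowlist of tools.
    AGENT_SCOPES: dict[str, set[str]] = {
        "meeting-scheduler": {"read_calendar"},
        "support-triage": {"read_calendar", "send_email"},
        # No agent is granted "run_sql" unless explicitly scoped.
    }

    def invoke(agent: str, tool: str, **kwargs: Any) -> Any:
        """Execute a tool call only if the agent's scope allows it."""
        if tool not in AGENT_SCOPES.get(agent, set()):
            # Denials should also be logged and alerted on, not just raised.
            raise PermissionError(f"{agent} is not authorised to call {tool}")
        return TOOLS[tool](**kwargs)

    # Example: this succeeds...
    print(invoke("meeting-scheduler", "read_calendar", user="a.lee"))
    # ...while invoke("meeting-scheduler", "run_sql", query="...") raises.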

Risk 3: Data Integrity and Model Leakage Risks Are Underestimated


Boards often assume AI outputs are reliable and contained.


In reality, AI systems introduce two critical risks: data poisoning and data extraction.


If attackers gain access to training pipelines, they can introduce manipulations that alter model behaviour. This may result in biased decisions, bypassed controls, or targeted misinformation.


At the same time, models can leak information. Techniques such as model inversion and membership inference allow attackers to reconstruct elements of the training data through repeated queries.


This is especially dangerous when models are trained on proprietary or sensitive datasets.


The NIST AI Risk Management Framework clearly identifies adversarial machine learning and data integrity as core risk domains.


Board question:

How do we ensure the integrity of training data and prevent sensitive information from being reconstructed from model outputs?
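
One simple way to screen for leakage is to compare a model's confidence on training data with its confidence on held-out data; a large gap is the signal that membership inference attacks exploit. The sketch below assumes a scikit-learn-style predict_proba interface and is a crude first check, not a substitute for a proper adversarial audit.

    import numpy as np

    def membership_inference_gap(model, train_X, holdout_X) -> float:
        """Gap in mean top-class confidence between training and holdout data.

        A gap near zero is desirable; a large positive gap suggests the model
        is measurably more confident on data it was trained on, which is what
        membership inference attacks look for.
        """
        train_conf = np.max(model.predict_proba(train_X), axis=1).mean()
        holdout_conf = np.max(model.predict_proba(holdout_X), axis=1).mean()
        return float(train_conf - holdout_conf)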

Risk 4: The AI Supply Chain Is Expanding Without Oversight


Every AI system depends on a supply chain.


This includes:

  • Pre-trained models

  • Open-source libraries

  • Third-party APIs

  • External datasets


Each component creates additional risk.


Security researchers have already identified malware embedded in publicly available AI models, as well as trojanised packages masquerading as legitimate AI development tools. In 2026, large-scale exposure was identified in agent ecosystems, where insecure defaults left thousands of deployments publicly accessible.


Vendor risk now goes beyond contracts and is embedded in code, models, and dependencies.


Additionally, Shadow AI increases this exposure by introducing unvetted tools into the environment.


Gartner estimates that by 2026, 80% of enterprises will rely on generative AI APIs or applications, yet many do not have formal supply chain risk governance.


Board question:

Do we maintain a full bill of materials for our AI systems, and have we assessed the risks across the entire supply chain?
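
A bill of materials for an AI system can begin as a simple structured record per component. The sketch below is illustrative only: the field names are assumptions, and emerging standards such as CycloneDX are extending SBOM formats to cover models and datasets.

    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class AIBomEntry:
        """One component in an AI bill of materials (fields are illustrative)."""
        component: str                # e.g. a pre-trained model or library
        version: str
        source: str                   # registry or vendor URL
        sha256: str                   # integrity hash pinned at intake
        license: str
        datasets: list[str] = field(default_factory=list)
        third_party_apis: list[str] = field(default_factory=list)

    # Hypothetical entry for a vendored sentiment model.
    entry = AIBomEntry(
        component="example-sentiment-model",
        version="1.2.0",
        source="https://models.example.com/sentiment",
        sha256="<pinned-hash>",
        license="Apache-2.0",
        datasets=["internal-support-tickets-2024"],
        third_party_apis=["https://api.example.com/enrich"],
    )
    print(json.dumps(asdict(entry), indent=2))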

Risk 5: Regulatory and Disclosure Risk Is Increasing Rapidly


Regulation is rapidly catching up with AI advancements.


The EU AI Act introduces strict requirements for high-risk systems, including transparency, traceability, and human review. Non-compliance could result in penalties of up to 7% of global turnover.


Regulators are also focusing on AI-related disclosures. AI washing, which involves overstating capabilities or understating risks, is now a priority enforcement area.


This creates a new category of exposure. Boards may face legal and reputational consequences for both failures in AI systems and inaccurate representations of those systems.


At the same time, organisational readiness remains low. A large proportion of companies do not have formal governance frameworks, incident management plans for AI, or policies governing employee use.


There is a significant gap between stated readiness and actual capability.


Board question:

Are our disclosures about AI capabilities and risks validated by both legal and cybersecurity leadership?


The Real Risk: The Gap Between Governance and Reality


The most critical risk is not a single threat.


It is the gap between perceived implementation and actual practice.


Many organisations report:

  • AI policies in place

  • Governance committees established

  • Progress on AI deployment


Yet very few have:

  • A complete inventory of AI systems

  • Clear risk classifications

  • A defined accountability framework

  • Continuous monitoring mechanisms


Without this operational foundation, governance remains a statement of intent rather than a control.


A 2025 MIT study shows that organisations with AI-literate boards significantly outperform peers in financial performance, while those without fall behind.


The difference does not lie in technology, but in oversight.


Moving from AI Adoption to AI Accountability


AI is not simply a technology layer.


It is a decision layer. This shift fundamentally changes the nature of risk.


Boards must move from passive awareness to active oversight. This requires:

  • Establishing a formal AI governance framework aligned to NIST AI RMF or ISO 42001

  • Creating a controlled enterprise AI environment to eliminate Shadow AI

  • Embedding AI risks into enterprise risk registers

  • Enforcing human review for critical decisions

  • Continuously monitoring AI behaviour, access, and outcomes (a minimal logging sketch follows this list)
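
As one illustration of the monitoring point above, the sketch below wraps each agent tool call so that every invocation is written to an append-only audit log before it executes. The record fields and log destination are assumptions; real deployments would stream these events to a SIEM.

    import json
    import time
    from typing import Any, Callable

    def audited(agent: str, tool: str, fn: Callable[..., Any],
                log_path: str = "ai_audit.jsonl") -> Callable[..., Any]:
        """Wrap a tool call so every invocation is logged before it runs."""
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            record = {
                "ts": time.time(),
                "agent": agent,
                "tool": tool,
                "args": repr(args),
                "kwargs": repr(kwargs),
            }
            with open(log_path, "a") as log:
                log.write(json.dumps(record) + "\n")
            return fn(*args, **kwargs)
        return wrapper

    # Example: wrap a hypothetical email tool for a triage agent.
    send_email = audited("support-triage", "send_email",
                         lambda to, body: f"email sent to {to}")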


Final Thought


The question for boards is no longer whether AI is being used.


It is whether its use is defensible.


Organisations that view AI solely as an innovation opportunity will continue to increase their exposure.


Those that approach AI as a governed, measurable, and auditable risk category will be positioned to scale safely.

References


  • Stanford HAI, AI Index Report 2025

  • McKinsey & Company, The State of AI Governance 2025

  • NIST, AI Risk Management Framework 1.0

  • OWASP, Top 10 Risks for LLM and Agentic Applications 2025

  • Gartner, Generative AI Adoption Forecast 2026

  • Lakera AI, Threat Landscape Report Q4 2025

  • MIT Sloan Management Review, AI Governance and Performance 2025

  • IEEE Symposium on Security and Privacy, Model Inversion Research

  • European Commission, EU AI Act




 
 