AI Playbook 3 of 5

How to Manage AI Risk and Address Shadow AI

Employees are already using unapproved AI tools. Banning them drives usage underground, making the risk invisible rather than eliminated. Effective risk management starts with visibility into actual usage, then channels demand toward sanctioned alternatives, establishes clear data handling rules, and maintains human oversight for consequential decisions. This playbook covers the practical steps for managing AI risk without killing the adoption you need.

Developing Start here. Build the foundation.
  • Deploy AI usage monitoring to gain visibility into actual tool usage across the organization. Start with network-level analysis to identify traffic to known AI services, supplement with anonymous surveys asking employees which AI tools they use and for what purposes, and review expense reports for individual AI subscriptions. The goal is a clear picture of your current exposure, not enforcement. Communicate the audit as a step toward providing better tools, not catching violations.
  • Identify the top three to five shadow AI tools your employees use and evaluate enterprise alternatives with proper data controls. For each tool, assess whether an enterprise version exists, what data protection features it offers, and how quickly it could be deployed. Prioritize alternatives for the most widely used shadow tools. Every week you delay providing a sanctioned alternative is a week of uncontrolled data exposure.
  • Create a data classification framework for AI use with three to four tiers. Define which data categories can be used with which types of AI tools. For example, public data can go to any approved tool, internal data to enterprise tools with data processing agreements, confidential data only to on-premise or private cloud AI, and restricted data not to any AI tool. Publish this alongside existing data handling policies so employees can make quick decisions.
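A tiered classification framework like the one above can be encoded as a simple lookup so that tools and training materials give consistent answers. This is a minimal sketch; the tier names follow the example in the bullet, while the tool categories (`any_approved_tool`, `enterprise_tool`, `private_ai`) are hypothetical labels your own policy would define.

```python
# Hypothetical four-tier policy mapping data tiers to allowed AI tool categories.
TIER_POLICY = {
    "public":       {"any_approved_tool", "enterprise_tool", "private_ai"},
    "internal":     {"enterprise_tool", "private_ai"},  # enterprise tools with a DPA
    "confidential": {"private_ai"},                     # on-premise / private cloud only
    "restricted":   set(),                              # no AI tools at all
}

def is_allowed(data_tier, tool_category):
    """Return True if data of this tier may be sent to this category of AI tool."""
    return tool_category in TIER_POLICY.get(data_tier, set())
```

Publishing the policy in a machine-readable form like this also lets you wire the same rules into approval workflows or browser extensions later, so the published policy and the enforced policy cannot drift apart.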
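The network-level analysis in the first step above can start very simply: filter your proxy or DNS logs against a list of known AI service domains and count hits. This sketch assumes a CSV export with a `host` column; the domain list and file format are illustrative, not a complete inventory.

```python
import csv
from collections import Counter

# Hypothetical starter list of domains for known consumer AI services.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def summarize_ai_traffic(log_path):
    """Count requests per known AI domain in a proxy log with a 'host' column."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if host in AI_DOMAINS:
                hits[host] += 1
    return hits
```

Even a rough count like this tells you which shadow tools to prioritize for sanctioned alternatives; precision matters less than knowing where the bulk of the traffic goes.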
Proficient Build consistency and rhythm.
  • Build a fast-track approval process for new AI tools that takes days, not months. When employees discover a tool that solves a real problem, your approval process must be faster than their alternative of just using the consumer version. Create a lightweight evaluation checklist covering data handling, security posture, and compliance requirements. Give a single owner the authority to approve tier-one tools within 48 hours.
  • Implement human-in-the-loop requirements for consequential decisions. Define which decisions in your organization are consequential: hiring, credit, pricing, medical, legal, or safety-related. For each category, document the specific point where a human must review the AI recommendation, what review criteria they must apply, and how the review is documented. Make sure the human review is genuine evaluation, not a rubber stamp.
  • Run quarterly shadow AI audits to track whether sanctioned alternatives are displacing unsanctioned tools. If shadow AI prevalence is not declining, investigate why. Common reasons include the sanctioned alternative being slower, less capable, or harder to access. Fix the root cause rather than tightening enforcement, which simply drives usage further underground.
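The quarterly audit above boils down to one question: is shadow usage trending down? A minimal sketch of that check, assuming you record a shadow-tool user count per quarter (the threshold-free comparison here is illustrative, not a statistical test):

```python
def shadow_ai_trend(quarterly_counts):
    """Given shadow-tool user counts per quarter (oldest first),
    flag whether sanctioned alternatives are displacing shadow usage."""
    if len(quarterly_counts) < 2:
        return "insufficient data"
    first, last = quarterly_counts[0], quarterly_counts[-1]
    if last < first:
        return "declining"
    if last > first:
        return "rising"
    return "flat"
```

A "rising" or "flat" result is the trigger to investigate root causes such as a slower or less capable sanctioned alternative, not to tighten enforcement.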
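The human-in-the-loop requirement described above can be expressed as a hard gate in whatever system finalizes decisions: consequential categories cannot complete without a documented review. This is a sketch under assumed names; the category list follows the bullet, and the review record shape is hypothetical.

```python
# Decision categories the organization has defined as consequential.
CONSEQUENTIAL = {"hiring", "credit", "pricing", "medical", "legal", "safety"}

def finalize_decision(category, ai_recommendation, human_review=None):
    """Require a documented human review before a consequential decision is final."""
    if category in CONSEQUENTIAL:
        if human_review is None:
            raise ValueError(f"human review required for {category} decisions")
        # The human's decision, not the AI recommendation, is what takes effect.
        return {"decision": human_review["decision"],
                "reviewed_by": human_review["reviewer"],
                "ai_recommendation": ai_recommendation}
    return {"decision": ai_recommendation,
            "reviewed_by": None,
            "ai_recommendation": ai_recommendation}
```

Making the gate structural, rather than relying on process documentation alone, is what keeps the review from becoming a rubber stamp: the system records who reviewed what, and a consequential decision without a reviewer simply cannot go through.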
Mastered Operate at the highest level.
  • Track the regulatory landscape actively across jurisdictions your organization operates in. The EU AI Act, US state-level AI legislation, and industry-specific regulations are all evolving rapidly. Assign an owner for regulatory monitoring who reports to the governance committee quarterly. Classify every production AI system by risk tier under applicable regulations and build compliance into the development process rather than retrofitting before enforcement deadlines.
  • Build an AI risk register that catalogs every production AI system, its risk tier, the data it processes, the decisions it influences, the human oversight mechanisms in place, and the regulatory requirements that apply. Review the register quarterly and after any significant change to an AI system. This register becomes your primary tool for demonstrating regulatory compliance and for identifying systemic risks across your AI portfolio.
  • Develop incident response procedures specific to AI failures: biased outputs discovered in production, data exposure through AI tools, AI-influenced decisions that harmed customers or employees, and regulatory inquiries. Standard incident response procedures do not cover the unique aspects of AI failures such as the need to assess whether the issue is systemic or isolated, whether retraining is required, and how affected parties should be notified.
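The risk register described above is, at its core, a structured record per AI system plus a review cadence. A minimal sketch of that record and one useful query over it, with hypothetical field names mirroring the bullet:

```python
from dataclasses import dataclass, field

# Hypothetical register entry capturing the fields named in the bullet above.
@dataclass
class RiskRegisterEntry:
    system_name: str
    risk_tier: str                     # e.g. "high" under the EU AI Act
    data_processed: list               # data categories the system touches
    decisions_influenced: list         # decisions the system feeds into
    human_oversight: str               # where and how a human reviews outputs
    regulations: list = field(default_factory=list)
    last_reviewed: str = ""            # ISO date of the last quarterly review

def overdue_reviews(register, cutoff_date):
    """Entries not reviewed since cutoff_date (ISO date strings sort correctly)."""
    return [e for e in register if e.last_reviewed < cutoff_date]
```

A query like `overdue_reviews` is what turns the register from a compliance artifact into an operating tool: the quarterly review becomes a worklist rather than a calendar reminder.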
