How to Use Only Approved AI Tools and Recognize Shadow AI Risks
Shadow AI is the new shadow IT, but with data leakage consequences that are immediate and irreversible. Once your data enters an unapproved tool, you cannot retrieve or control it. This playbook gives you specific practices for knowing which tools are approved, understanding why restrictions exist, and channeling legitimate tool needs through proper evaluation processes rather than taking matters into your own hands.
This playbook covers the how. For the why and what, see the skill definition.
Developing: Start here. Build the foundation.
- This week, find and bookmark your organization's approved AI tools list. If you cannot find one, ask your manager or IT department directly. Write down every AI tool you currently use for work, including browser extensions, mobile apps, and integrations embedded in other software. Compare your list to the approved list. For any tool that is not approved, stop using it for work tasks immediately and identify the approved alternative that covers the same use case.
- For each approved AI tool you use, learn one key security requirement it meets that distinguishes it from its consumer or free version. This might be enterprise data handling agreements, no-training-on-input policies, geographic data residency, or SOC 2 compliance. Understanding one specific security feature per tool gives you a concrete reason to prefer approved tools rather than following the rule abstractly. Write these down and reference them when colleagues ask why the approved tool matters.
- Set up a weekly five-minute check: have you used any AI tool this week that is not on the approved list? This includes tools recommended by friends, free trials of new products, AI features embedded in productivity apps, and browser extensions that use AI. If you find one, stop using it and submit a tool evaluation request through your organization's process. The goal is to build awareness of how easily unapproved tools creep into your workflow.
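The inventory comparison in the first step above can be reduced to a simple set difference. This is a minimal sketch, not an official audit tool; the tool names below are hypothetical examples, and in practice you would maintain both lists yourself from your organization's approved-tools page and your own notes.

```python
# Hypothetical sketch of the weekly tool audit described above.
# Both lists are examples; replace them with your organization's
# approved list and your own inventory of tools in use.
approved = {
    "CorpGPT Enterprise",
    "Copilot (enterprise tenant)",
}
in_use = {
    "CorpGPT Enterprise",
    "Copilot (enterprise tenant)",
    "FreeSummarizer.ai",      # free trial found via a blog post
    "QuickTranscribe",        # browser extension with embedded AI
}

# Anything in use but not approved needs to stop being used for work
# tasks and go through the evaluation process instead.
unapproved = sorted(in_use - approved)
for tool in unapproved:
    print(f"Stop using for work tasks: {tool}")
```

Even this trivial check makes the "tools creep into your workflow" problem visible: the unapproved set is rarely empty the first time you run it.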
Proficient: Build consistency and rhythm.
- Learn why specific tools are restricted in your organization, not just which ones. Ask your IT or security team about the reasoning: is it a data residency issue, a training data concern, a lack of enterprise agreements, or a regulatory compliance gap? When you understand the reasoning, you can make better judgment calls in edge cases and explain restrictions to colleagues in practical terms rather than just saying it is not allowed.
- When you discover a new AI tool through a blog post, conference, or colleague recommendation, route it through your organization's evaluation process before trying it with any work data. Create a standard template for yourself: tool name, what it does, what business problem it solves, what data it would need access to, and why existing approved tools do not cover the use case. This makes evaluation requests faster to submit and more likely to receive a timely response.
- When you notice a colleague using an unapproved AI tool, share what you know about approved alternatives and the specific risks of unapproved tools rather than ignoring the situation or escalating punitively. Frame it as helping them protect themselves and the organization. Most shadow AI usage stems from not knowing what is approved rather than intentional policy violation.
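The evaluation-request template described above can be captured as a small structured record so every request carries the same fields. This is a hypothetical sketch, assuming your organization accepts free-form requests; the class name, field names, and example values are illustrative, not a real intake format.

```python
from dataclasses import dataclass, fields

@dataclass
class ToolEvaluationRequest:
    """Hypothetical template mirroring the fields suggested above."""
    tool_name: str
    what_it_does: str
    business_problem: str
    data_access_needed: str
    gap_in_approved_tools: str  # why existing approved tools fall short

    def is_complete(self) -> bool:
        # A request is ready to submit only when every field is filled in.
        return all(getattr(self, f.name).strip() for f in fields(self))

# Example request with illustrative values.
req = ToolEvaluationRequest(
    tool_name="QuickTranscribe",
    what_it_does="Transcribes and summarizes meeting audio",
    business_problem="Manual meeting minutes take ~2 hours per week",
    data_access_needed="Internal meeting audio (confidential)",
    gap_in_approved_tools="No approved tool currently handles audio input",
)
print(req.is_complete())
```

Keeping the gap_in_approved_tools field mandatory is the useful part of the design: it forces you to check the approved list before requesting something new, which is exactly the habit this level is building.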
Mastered: Operate at the highest level.
- When no approved AI tool meets a legitimate business need you have identified, advocate for evaluation through proper channels with a documented business case. Include the specific task, the volume of work involved, the data classification level required, the limitations of current approved tools, and any candidate tools you have identified. A well-documented request is far more likely to result in a timely evaluation than a vague complaint about tool limitations.
- Contribute to your organization's tool evaluation process by volunteering to pilot and provide structured feedback on new tools being considered for approval. Your practical perspective as a daily user complements the security team's technical assessment. Document your findings on usability, output quality, and workflow fit alongside any security observations.
- Help build organizational awareness of shadow AI risks by sharing real examples of how unapproved tool usage has created problems, whether from public case studies or anonymized internal incidents. Make the risk concrete rather than abstract. People change behavior when they can picture the consequences, not when they read a policy document.