How to Establish Shared Standards for AI-Assisted Work
Without shared standards, output quality from AI-assisted work varies wildly and managers spend increasing time on rework. This playbook walks you through building quality rubrics, operating agreements, and shared prompt libraries that give your team a common bar. Start with simple criteria for your most common deliverables, then progress to calibration exercises and failure mode tracking.
This playbook covers the how. For the why and what, see the skill definition.
Developing: Start here. Build the foundation.
- Pick your team's 3 most common AI-assisted deliverables (client emails, weekly reports, data analyses). For each one, write 4-5 specific quality criteria covering accuracy, formatting, evidence standards, and tone. Test the criteria by having two team members independently evaluate the same deliverable. If they score it differently, tighten the wording until they converge.
- Draft a one-page team AI operating agreement in a shared Google Doc with four sections: Approved Tools, Quality Checkpoints (when to review AI output before sending), Escalation Criteria (when to flag AI-generated work for a second opinion), and Review Cadence. Share it in a team meeting, collect feedback for one week, then finalize. Set a calendar reminder to revisit it every quarter.
- Create a shared Google Doc organized by task type (emails, reports, analysis, summaries) to serve as your team prompt library. For each section, add one proven prompt with three elements: the full prompt text, the context it needs to work well, and its known limitations. Assign one team member to review the library monthly, remove outdated entries, and add new submissions from the team.
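The two-reviewer convergence test in the first step above can be sketched as a small script. The criteria names, the 1-5 scale, and the agreement threshold are illustrative assumptions, not part of the playbook:

```python
# Hypothetical sketch: flag rubric criteria where two independent reviewers
# diverge, so the wording can be tightened until scores converge.
# The threshold and 1-5 scale are assumptions for illustration.

THRESHOLD = 1  # max allowed score gap on a 1-5 scale before rewording a criterion

def divergent_criteria(scores_a: dict, scores_b: dict, threshold: int = THRESHOLD) -> list:
    """Return criteria where the two reviewers differ by more than `threshold`."""
    return sorted(
        name for name in scores_a
        if abs(scores_a[name] - scores_b[name]) > threshold
    )

# Two team members score the same deliverable independently.
reviewer_a = {"accuracy": 4, "formatting": 5, "evidence": 2, "tone": 4}
reviewer_b = {"accuracy": 4, "formatting": 4, "evidence": 5, "tone": 3}

needs_rewording = divergent_criteria(reviewer_a, reviewer_b)
# "evidence" diverges by 3 points, so that criterion's wording gets tightened.
```

Running this on each test deliverable gives a concrete list of criteria to rewrite rather than a vague sense that "scores differed."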
Proficient: Build consistency and rhythm.
- Run a calibration exercise every 4-6 weeks: select one recent AI-assisted deliverable, distribute it to the whole team, and have everyone score it independently against your quality rubrics. In a 30-minute meeting, compare scores and discuss where evaluations diverged. Update the rubric language to close any gaps. Document the changes and the reasoning behind them.
- Start a failure modes registry in a shared spreadsheet with four columns: Date, What Happened, Why It Failed, and How to Avoid It. When anyone on the team catches AI producing incorrect or substandard output, add an entry. Review the registry in your monthly alignment check and look for patterns. Recurring failures in the same task type signal a rubric gap or a prompt that needs updating.
- Add a 'standards check' step to your team's existing review process for AI-assisted deliverables. Before submitting work for review, the author checks it against the relevant quality rubric and notes which criteria it meets and any areas of concern. This takes 2-3 minutes per deliverable and cuts review cycles by catching obvious issues before the reviewer sees them.
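The pattern check on the failure modes registry can also be scripted. This is a minimal sketch; the `task_type` field and the sample entries are hypothetical additions for illustration (the playbook's registry columns are Date, What Happened, Why It Failed, and How to Avoid It):

```python
# Hypothetical sketch: surface recurring failure patterns in the registry.
# Entries and the task_type tag are invented examples, not real incidents.
from collections import Counter

registry = [
    {"date": "2025-01-12", "task_type": "reports",
     "what_happened": "Invented a metric", "why": "No source data in prompt",
     "avoid": "Paste raw numbers into the prompt"},
    {"date": "2025-02-03", "task_type": "emails",
     "what_happened": "Wrong client name", "why": "Stale thread context",
     "avoid": "Include the latest thread"},
    {"date": "2025-02-19", "task_type": "reports",
     "what_happened": "Invented a trend", "why": "Sparse data",
     "avoid": "State data gaps explicitly"},
]

def recurring_task_types(entries: list, min_count: int = 2) -> list:
    """Task types with at least `min_count` failures, most frequent first."""
    counts = Counter(e["task_type"] for e in entries)
    return [t for t, n in counts.most_common() if n >= min_count]

flagged = recurring_task_types(registry)
# "reports" appears twice, signalling a rubric gap or a prompt to update.
```

Run this during the monthly alignment check; any task type it flags is a candidate for a rubric tightening or a prompt library update.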
Mastered: Operate at the highest level.
- Run a quarterly standards audit: pull 5-10 recent AI-assisted deliverables, score them against your rubrics, and compare quality trends over time. Present the results to the team with specific examples of improvement and areas that still need work. Use the data to decide which rubrics need tightening, which prompts need updating, and whether your operating agreement still reflects how the team actually works.
- Build a structured onboarding module for new team members that covers your standards in 90 minutes: 30 minutes on the operating agreement, 30 minutes working through the prompt library on a practice task, and 30 minutes reviewing 3 entries from the failure modes registry. Have them score a sample deliverable against your rubrics and compare their scores with a tenured team member to calibrate quickly.
- Transition rubric ownership from yourself to the team. Assign each rubric to the person who uses it most. That person is responsible for proposing updates based on calibration exercises and flagging when the rubric no longer matches the work. Review their proposed changes in your monthly alignment check. Your role shifts from standards author to standards reviewer.
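The quarterly trend comparison from the standards audit reduces to simple arithmetic. This sketch assumes a 1-5 rubric scale and invented scores for illustration:

```python
# Hypothetical sketch: compare average rubric scores across two audits.
# Scores are invented examples on an assumed 1-5 scale.

def average_score(deliverable_scores: list) -> float:
    """Mean rubric score across the deliverables pulled for one audit."""
    return sum(deliverable_scores) / len(deliverable_scores)

# Overall rubric scores for 5 deliverables pulled in each audit.
q1_audit = [3.2, 3.0, 2.8, 3.5, 3.1]
q2_audit = [3.6, 3.4, 3.8, 3.2, 3.5]

improvement = average_score(q2_audit) - average_score(q1_audit)
# A positive delta backs the "quality is trending up" claim with data.
```

Per-criterion versions of the same comparison show which rubrics need tightening versus which are already holding.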