Engineering · Advanced · AI Review · Code Review · Team Patterns

AI Code Review Tutor

Teach AI to review like your team: custom review checklists, tone calibration, org-specific patterns, and AI-assisted code review workflows.

A skill for engineering leads who want AI code review that reflects their team's standards — not generic best-practice lists. It helps you encode your review checklist, calibrate tone (learning-focused vs. blocking-critical), capture org-specific patterns, and design workflows where AI handles routine checks so humans can focus on architectural concerns.

Added Mar 20, 2026

$ npx skills add johnefemer/skillfish --skill ai-code-review-tutor

What This Skill Can Do

Concrete capabilities you get when you install this skill.

Build custom code review checklists from your team's historical review comments

Calibrate AI review tone: blocking vs. advisory, nitpick vs. critical, teaching vs. enforcing

Encode org-specific patterns that generic linters can't catch as AI review rules

Design AI-first review workflow: AI handles style/safety, humans handle design/context

Generate review templates for different PR types: feature, bugfix, refactor, migration

Run retrospectives on past reviews to identify patterns worth automating

Real Conversations

See how the AI Code Review Tutor skill responds to actual requests. These are representative examples of what your AI agent will produce.

You

We want AI to review PRs like our senior engineer does. How do I build a checklist that captures our specific standards?

Agent

Extract your team's implicit standards from the evidence — your closed PRs. Here's the process:

Step 1 — Mine past review comments

# GitHub CLI: inline review comments since a cutoff date (~90 days back; adjust as needed)
gh api '/repos/{owner}/{repo}/pulls/comments' \
  --paginate --jq '.[] | select(.created_at > "2025-12-01") | .body' \
  > review_comments.txt
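
If substantive feedback also lands as top-level PR conversation comments (not inline review comments), those live on the GitHub issues endpoint, since PRs are issues in the API. A sketch under that assumption; note the endpoint also returns comments on plain issues, so expect some noise:

# Top-level PR conversation comments (GitHub treats PRs as issues).
# Also returns comments on regular issues, so skim for unrelated noise.
gh api '/repos/{owner}/{repo}/issues/comments' \
  --paginate --jq '.[] | select(.created_at > "2025-12-01") | .body' \
  >> review_comments.txt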

Step 2 — Cluster into categories

Group ~100 comments by theme; a rough keyword pass (see the sketch after this list) can seed the buckets. Common clusters:

  • Security (SQL injection, auth bypass, secrets in logs)
  • Error handling (unhandled rejections, missing error boundaries)
  • Performance (N+1 queries, missing indexes, unbounded loops)
  • Testing (missing edge cases, brittle mocks, no integration coverage)
  • Architecture (business logic in routes, missing abstraction layers)
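
A quick keyword pass can seed these buckets before you read anything manually. A minimal sketch, assuming the review_comments.txt file from Step 1; the keyword lists are illustrative, not your team's actual themes:

# Rough first-pass clustering: count comment lines matching each theme's keywords.
# Counts only seed the buckets; still read the comments before writing rules.
for theme in "security:inject|sanitiz|auth|secret|token" \
             "error-handling:catch|throw|reject|error boundary" \
             "performance:N\+1|index|slow|loop|cache" \
             "testing:test|mock|coverage|flaky" \
             "architecture:layer|coupling|abstraction|route"; do
  label="${theme%%:*}"      # text before the first colon
  pattern="${theme#*:}"     # regex after the first colon
  printf '%-16s %s\n' "$label" "$(grep -icE "$pattern" review_comments.txt)"
done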

Step 3 — Write review rules from the clusters

## Our Review Checklist

### Security (always block on these)
- [ ] No user input used in SQL without parameterization
- [ ] JWT not logged anywhere (check logger calls and error messages)
- [ ] No secrets committed (scan with `trufflehog`; see the sketch after this checklist)

### Error Handling (block if missing)
- [ ] All async functions have explicit error handling or are in a try/catch boundary
- [ ] DB errors don't leak schema info to API responses

### Performance (advisory, not blocking)
- [ ] No queries inside loops — check for N+1 patterns
- [ ] New endpoints have response time budget documented
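
One way to run the secret scan the checklist references, assuming TruffleHog v3 is installed (flags and output differ across versions, so treat this as a sketch):

# Scan the current repo's full git history for committed secrets (TruffleHog v3).
trufflehog git file://.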

Step 4 — Calibrate blocking vs. advisory

Pass the checklist to the AI with an instruction like: 'Block only on Security and Error Handling items. Flag Performance items as advisory with suggested fixes.' Run it on three past PRs and compare the findings against the original human reviews to calibrate.
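
For the calibration runs, pulling the diff of an already-reviewed PR and handing it to the agent alongside the checklist is a simple loop. A sketch; the PR number and checklist filename are placeholders:

# Fetch the diff of a past, already-reviewed PR (1234 is a placeholder).
gh pr diff 1234 > pr_1234.diff

# Then prompt your agent, for example:
#   "Review pr_1234.diff against review-checklist.md. Block only on Security
#    and Error Handling items; flag Performance items as advisory."
# Compare the agent's findings with the original human review comments.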

Get Started

How to install and use this skill in your preferred environment.

Skills are designed for AI coding agents (Claude Code, Cursor, Windsurf) and IDE-based workflows where the agent can read files, run scripts, and act on your codebase.

Models & Context

Which AI models and context windows work best with this skill.

Recommended Models

Works with Claude Sonnet, GPT-4o, or Gemini 2.5 Pro. Tone calibration and checklist generation benefit from models with strong instruction-following.

Context Window

SKILL.md is ~8KB. Load your existing review checklist and sample past review comments in context for best calibration results.

Pro tips for best results

1. Be specific: include numbers (users, budget, RPS) so the skill can size the architecture.

2. Share constraints: compliance needs, team size, and existing stack all improve the output.

3. Iterate: start with a high-level design, then ask follow-ups for IaC, cost analysis, or security review.

4. Combine skills: pair with companion skills below for end-to-end coverage.

Works Great With

These skills complement AI Code Review Tutor for end-to-end coverage. Install them together for better results.

$ skillfish add johnefemer/skillfish --all # install all skills at once

Ready to try AI Code Review Tutor?

Install the skill and start getting expert-level guidance in your workflow — any agent, any IDE.

$ npx skills add johnefemer/skillfish --skill ai-code-review-tutor