
AI SEO

Optimize content to get cited by AI search engines and AI Overviews.

$skillfish add ~skills/ai-seo

Real Conversations

See how the AI SEO skill responds to actual requests. These are representative examples of what your AI agent will produce.

You

Our blog post on sprint planning tips ranks #4 on Google but we're not getting cited in AI Overviews. Competitors who rank #7 and #8 are being cited instead. What are we doing wrong and how do we fix it?

Agent

AI Overviews don't just pull from top-ranked pages — they pull from content formatted for extraction. Here's likely what's happening and how to fix it:

Why lower-ranked pages get cited:

  1. Their content has direct, extractable answers — clear question followed by a direct 1–2 sentence answer
  2. They use definition-first writing — the first sentence after an H2 answers the question completely
  3. They have structured lists and tables that AI can parse and quote cleanly

Fix 1 — Add a direct-answer intro block:

What is sprint planning? Sprint planning is a Scrum ceremony where engineering teams select backlog items for the upcoming sprint and estimate the work required. A typical session for a 5–10 person team should take 60–90 minutes.

AI Overviews heavily favor definitions in the first 100 words after a heading.

Fix 2 — Restructure sections as question-format headers:

  • Change: "Tips for Better Sprint Planning"
  • To: "How long should sprint planning take?"
  • To: "How do you run an effective sprint planning meeting?"

Fix 3 — Add a structured summary table:

| Sprint length | Recommended planning duration |
| --- | --- |
| 1-week sprint | 1–2 hours |
| 2-week sprint | 2–4 hours |

Fix 4 — Add FAQ schema markup — AI Overviews weight FAQ structured data heavily.

Fix 5 — Author E-E-A-T signals: Add a bio with real credentials ("Engineering manager for 8 years") — AI Overviews favor demonstrated expertise over anonymous content.

Implement these and check for AI Overview inclusion within 2–3 weeks.

Get Started

How to install and use this skill in your preferred environment.

Skills are designed for AI coding agents (Claude Code, Cursor, Windsurf) and IDE-based workflows where the agent can read files, run scripts, and act on your codebase. Web-based AI can use the knowledge and frameworks, but won't have tool access.

Models & Context

Which AI models and context windows work best with this skill.

Recommended Models

Best: Claude Opus 4, Claude Sonnet 4, GPT-4.1, Gemini 2.5 Pro, Grok 3, Kimi K2
Good: Claude Haiku 4.5, GPT-4.1 mini, Gemini 2.5 Flash, Grok 3 mini

Larger models produce more detailed, production-ready outputs.

Context Window

This skill's SKILL.md is typically 3–10 KB — fits in any modern context window.

  • 8K: Skill only
  • 32K+: Skill + conversation
  • 100K+: Skill + references + codebase

All current frontier models (Claude, GPT, Gemini) support 100K+ context. Use the full window for complex, multi-document content work.

Pro tips for best results

  1. Be specific: include concrete numbers (target queries, current rankings, monthly traffic) so the skill can prioritize fixes.
  2. Share constraints: CMS limitations, team size, and your existing content stack all improve the output.
  3. Iterate: start with a high-level content audit, then ask follow-ups for schema markup, section restructuring, or competitor analysis.
  4. Combine skills: pair with companion skills below for end-to-end coverage.

Good to Know

Advanced guide and reference material for AI SEO. Background, edge cases, and patterns worth understanding.


Classic SEO optimizes for ranking signals — PageRank, backlinks, keyword density — to win a position on a results page. AI search works differently: the model retrieves candidate sources, extracts content, synthesizes an answer, and selects citations from that extraction. A page ranked #7 can be cited over a page ranked #1 if its content is more extractable.

Key differences practitioners need to internalize:

  • Ranking is not citation. AI Overviews and Perplexity pull from their own retrieval and indexing pipelines, not directly from the live SERP position order.
  • Entity understanding matters more. AI systems recognize named entities, relationships, and topical authority — thin keyword-matching content is increasingly invisible.
  • Freshness thresholds vary. Perplexity indexes actively; Google AI Overviews pull from pages already in the search index, so indexation lag affects AI visibility.

E-E-A-T Signals for AI Retrieval

Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness) applies to AI retrieval with some platform-specific weighting:

| Signal | What it means | How to implement |
| --- | --- | --- |
| Experience | First-hand demonstration, not just description | Case studies, specific outcomes ("reduced churn by 22%"), dated examples |
| Expertise | Demonstrated domain knowledge | Author bios with credentials, linked profiles (LinkedIn, personal site) |
| Authoritativeness | Third-party recognition | Backlinks from domain-relevant sites, citations in other authoritative content |
| Trustworthiness | Factual accuracy, transparency | Citations with links, correction notes, clear publication and update dates |
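The Expertise and Trustworthiness rows map directly to machine-readable markup. A minimal Python sketch (all names, URLs, and dates here are hypothetical illustrations) that emits Article JSON-LD with explicit author-credential and date fields:

```python
import json

def article_schema(headline, author_name, author_title, author_url,
                   date_published, date_modified):
    """Build Article JSON-LD with explicit authorship and date signals."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {
            "@type": "Person",
            "name": author_name,
            "jobTitle": author_title,  # demonstrated expertise
            "url": author_url,         # linked profile
        },
        "datePublished": date_published,  # clear publication date
        "dateModified": date_modified,    # transparent update date
    }

# Hypothetical example values
schema = article_schema(
    "How to Run Sprint Planning",
    "Jane Doe", "Engineering Manager", "https://example.com/jane",
    "2024-06-01", "2025-01-15",
)
print(json.dumps(schema, indent=2))
```

The resulting JSON object would be embedded in the page inside a `<script type="application/ld+json">` tag.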

Perplexity and ChatGPT Browse weigh backlink profiles and external citations heavily — a page that authoritative domains reference is a stronger citation candidate. Google AI Overviews additionally weight Schema markup (especially FAQ and Article type) as a structured signal of content intent.

Content Structure for AI Retrieval

AI extraction favors content that is pre-digested — the answer is complete in the first 1–2 sentences after a heading, not buried three paragraphs in.

Definition-first writing: Place the direct answer in the opening sentence of each section. "Sprint planning is a ceremony where..." beats an intro paragraph building up to the definition.

Question-format headings: H2s phrased as questions ("How long should sprint planning take?") match query intent and are frequently pulled verbatim into AI answers.
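One way to audit this at scale, assuming your content lives in markdown: a small Python sketch that flags H2 headings not phrased as questions. The starter-word list is a rough heuristic, not an exhaustive classifier:

```python
import re

# Common English question starters (heuristic, not exhaustive)
QUESTION_STARTERS = ("how", "what", "why", "when", "where", "who",
                     "which", "should", "can", "is", "are", "do", "does")

def non_question_h2s(markdown_text):
    """Return H2 headings that aren't phrased as questions."""
    h2s = re.findall(r"^##\s+(.*)$", markdown_text, flags=re.M)
    return [h for h in h2s
            if not (h.rstrip().endswith("?")
                    or h.split()[0].lower() in QUESTION_STARTERS)]

doc = ("## Tips for Better Sprint Planning\n\n"
       "## How long should sprint planning take?\n")
print(non_question_h2s(doc))  # → ['Tips for Better Sprint Planning']
```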

Short, self-contained paragraphs: Each paragraph should make one complete point. AI extraction windows are typically 100–200 tokens; paragraphs that sprawl across multiple ideas lose coherence when extracted.
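This is checkable mechanically. A rough Python sketch that estimates tokens per paragraph at ~1.3 tokens per English word (an approximation; real counts vary by tokenizer) and flags paragraphs likely to exceed a 200-token extraction window:

```python
def flag_long_paragraphs(text, max_tokens=200, tokens_per_word=1.3):
    """Flag paragraphs likely to exceed a typical AI extraction window.

    Uses a rough heuristic of ~1.3 tokens per English word; actual
    token counts vary by model and tokenizer.
    """
    flagged = []
    for i, para in enumerate(text.split("\n\n")):
        est = int(len(para.split()) * tokens_per_word)
        if est > max_tokens:
            flagged.append((i, est))
    return flagged

# A short paragraph followed by a sprawling 220-word one
doc = "Short intro paragraph.\n\n" + ("word " * 220).strip()
print(flag_long_paragraphs(doc))  # → [(1, 286)]
```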

Structured lists and tables: Numbered lists and tables are among the most reliably extracted formats. If your content has comparative or sequential information, use them.

FAQ sections with Schema: A dedicated FAQ section with FAQPage JSON-LD markup gives AI systems an explicit, machine-readable signal of Q&A structure.
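As a sketch of what that markup looks like, here's a small Python helper (the question and answer text are illustrative) that builds FAQPage JSON-LD from question/answer pairs:

```python
import json

def faq_schema(pairs):
    """Build FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

markup = faq_schema([
    ("How long should sprint planning take?",
     "60–90 minutes for a 5–10 person team running one-week sprints."),
])
# Embed in the page as: <script type="application/ld+json"> ... </script>
print(json.dumps(markup, indent=2))
```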

Platform Comparison

| Platform | Source preference | Citation style | Freshness requirement | Schema influence |
| --- | --- | --- | --- | --- |
| Google AI Overviews | Google index (existing rankings inform but don't determine) | Inline source cards, 3–5 sources | Standard crawl cycle | FAQ, Article, HowTo weighted |
| Perplexity | Live web retrieval + Bing index | Numbered footnotes, typically 5–8 sources | Near real-time | Minimal direct schema use |
| ChatGPT Browse | Bing index, real-time retrieval when enabled | Inline links | Days to weeks | Minimal |
| Claude (claude.ai) | No live web in base mode; citations when tools enabled | Tool-dependent | N/A for base model | N/A |

Tracking AI Visibility

Current tooling for AI search monitoring is early-stage with significant gaps:

Google Search Console: AI Overview impressions and clicks are included in the Performance report under the Web search type, but Google does not expose a separate AI Overviews filter, so panel-level visibility can't be isolated. Available for GSC-verified properties only.

Semrush / Ahrefs AI Overview tracking: Both tools added AI Overview appearance data to keyword rank trackers in 2024. Useful for monitoring which keywords trigger AI Overviews and whether your domain appears.

Perplexity source tracking: No official API or analytics tool. Manual monitoring via searching target queries and checking Sources panel. Third-party tools like Brandwatch have added Perplexity monitoring but coverage is partial.

Key limitation: No platform exposes click data from AI citations — impressions and mentions are measurable but CTR from AI panels is largely unmeasurable as of early 2025. Attribution models for AI-driven traffic remain unsolved.

Ready to try AI SEO?

Install the skill and start getting expert-level guidance in your workflow — any agent, any IDE.

$skillfish add ~skills/ai-seo