
Agent Workflow Designer

Sequential, parallel, router, orchestrator, and evaluator agent patterns.

$skillfish add ~skills/agent-workflow-designer

Real Conversations

See how the Agent Workflow Designer skill responds to actual requests. These are representative examples of what your AI agent will produce.

You

I need to research 8 competitors simultaneously and produce a synthesis report. Right now I'm doing them sequentially and it takes 4 minutes. Can I parallelize this with Claude?

Agent

Yes — this is the canonical parallel fan-out/fan-in use case. Sequential ≈ 8 × 30s = 4 min. Parallel ≈ 30s total, bounded by the slowest single call.

import asyncio
import anthropic

async def research_competitor(client: anthropic.AsyncAnthropic, name: str) -> dict:
    # Fan-out: each competitor gets its own concurrent request.
    resp = await client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        system="Competitive intelligence analyst. Return: pricing, key features, weaknesses, target market.",
        messages=[{"role": "user", "content": f"Analyze {name} for a project management SaaS comparison."}],
    )
    return {"name": name, "analysis": resp.content[0].text,
            "tokens": resp.usage.input_tokens + resp.usage.output_tokens}

async def run_research(competitors: list[str]) -> str:
    client = anthropic.AsyncAnthropic()
    # return_exceptions=True keeps one failed call from cancelling the rest.
    results = await asyncio.gather(
        *[research_competitor(client, c) for c in competitors],
        return_exceptions=True,
    )
    successful = [r for r in results if not isinstance(r, Exception)]
    combined = "\n\n".join(f"## {r['name']}\n{r['analysis']}" for r in successful)
    # Fan-in: one synthesis call over all successful analyses.
    synthesis = await client.messages.create(
        model="claude-3-5-sonnet-20241022", max_tokens=2048,
        system="Strategic analyst. Produce a comparison matrix and top 3 strategic recommendations.",
        messages=[{"role": "user", "content": combined}],
    )
    return synthesis.content[0].text

Cost note: 8 parallel Sonnet calls ≈ $0.08 total. Each research agent gets max 1,024 output tokens to keep the fan-in prompt manageable.
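The fan-out/fan-in timing claim can be checked without making any API calls. A minimal sketch, assuming a simulated research task in place of the real SDK (the names fake_research, fan_out, and "FailsCo" are illustrative stand-ins, not part of the anthropic library): each task sleeps for the same latency, one deliberately fails, and asyncio.gather with return_exceptions=True filters the failure while total wall time stays close to a single call rather than the sum.

```python
import asyncio
import time

async def fake_research(name: str, latency: float = 0.05) -> dict:
    # Stand-in for a real API call: sleep, then return a result.
    await asyncio.sleep(latency)
    if name == "FailsCo":
        raise RuntimeError("simulated API error")
    return {"name": name, "analysis": f"analysis of {name}"}

async def fan_out(names: list[str]) -> list[dict]:
    # return_exceptions=True keeps one failure from cancelling the rest.
    results = await asyncio.gather(
        *[fake_research(n) for n in names], return_exceptions=True
    )
    return [r for r in results if not isinstance(r, Exception)]

names = ["AcmePM", "FailsCo", "TaskFlow", "BoardWorks"]
start = time.perf_counter()
successful = asyncio.run(fan_out(names))
elapsed = time.perf_counter() - start
print(len(successful))     # 3 — FailsCo's simulated error was filtered out
print(elapsed < 0.2)       # True — wall time ≈ one call, not four
```

The same structure carries over to the real code above: swap fake_research for the API-backed coroutine and the concurrency and error-isolation behavior is unchanged.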

Get Started

How to install and use this skill in your preferred environment.

Skills are designed for AI coding agents (Claude Code, Cursor, Windsurf) and IDE-based workflows where the agent can read files, run scripts, and act on your codebase. Web-based AI can use the knowledge and frameworks, but won't have tool access.

Models & Context

Which AI models and context windows work best with this skill.

Recommended Models

Best: Claude Opus 4, Claude Sonnet 4, GPT-4.1, Gemini 2.5 Pro, Grok 3, Kimi K2
Good: Claude Haiku 4.5, GPT-4.1 mini, Gemini 2.5 Flash, Grok 3 mini

Larger models produce more detailed, production-ready outputs.

Context Window

This skill's SKILL.md is typically 3–10 KB — fits in any modern context window.

8K: skill only
32K+: skill + conversation
100K+: skill + references + codebase

All current frontier models (Claude, GPT, Gemini) support 100K+ context. Use the full window for complex multi-service work.

Pro tips for best results

1. Be specific: include numbers (users, budget, RPS) so the skill can size the architecture.

2. Share constraints: compliance needs, team size, and existing stack all improve the output.

3. Iterate: start with a high-level design, then ask follow-ups for IaC, cost analysis, or security review.

4. Combine skills: pair with companion skills below for end-to-end coverage.

Ready to try Agent Workflow Designer?

Install the skill and start getting expert-level guidance in your workflow — any agent, any IDE.

$skillfish add ~skills/agent-workflow-designer