Engineering · Fullstack · Next.js · Django

Senior Fullstack

Fullstack development with Next.js, FastAPI, MERN, and Django stacks.

$skillfish add ~skills/senior-fullstack

Real Conversations

See how the Senior Fullstack skill responds to actual requests. These are representative examples of what your AI agent will produce.

You

We have a Next.js frontend and FastAPI backend. Need real-time notifications when background jobs complete. Currently polling every 5s, which is hammering the DB.

Agent

Replace polling with Server-Sent Events (SSE): the simplest architecture that fits one-way job notifications.

SSE is unidirectional (server → client), works over HTTP/1.1, and needs no WebSocket infra.

FastAPI SSE endpoint:

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from asyncio import Queue
import asyncio, json

app = FastAPI()

# One queue per connected user (last connection wins; use a set of queues
# per user if multiple tabs must receive events)
subscribers: dict[str, Queue] = {}

@app.get('/notifications/{user_id}')
async def notifications(user_id: str):
    queue: Queue = Queue()
    subscribers[user_id] = queue
    
    async def event_stream():
        try:
            while True:
                event = await queue.get()
                yield f"data: {json.dumps(event)}\n\n"
        except asyncio.CancelledError:
            subscribers.pop(user_id, None)
    
    return StreamingResponse(event_stream(), media_type='text/event-stream',
                             headers={'Cache-Control': 'no-cache', 'X-Accel-Buffering': 'no'})

# Call this when a job completes:
def notify_user(user_id: str, payload: dict):
    if q := subscribers.get(user_id):
        q.put_nowait(payload)
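The producer side can be exercised end-to-end without FastAPI. This is a self-contained sketch: `run_export_job` and the payload shape are illustrative stand-ins, not part of the skill, and `subscribers`/`notify_user` are redeclared so it runs on its own.

```python
import asyncio
from asyncio import Queue

# Stand-ins for the subscribers registry and notify_user above,
# redeclared so this sketch runs standalone.
subscribers: dict[str, Queue] = {}

def notify_user(user_id: str, payload: dict) -> None:
    if q := subscribers.get(user_id):
        q.put_nowait(payload)

# Hypothetical background job; the name and payload shape are illustrative.
async def run_export_job(user_id: str, job_id: str) -> None:
    await asyncio.sleep(0)  # stand-in for the real work
    notify_user(user_id, {"job_id": job_id, "status": "complete"})

async def demo() -> dict:
    subscribers["u1"] = Queue()  # what the SSE endpoint does on connect
    await run_export_job("u1", "job-42")
    return await subscribers["u1"].get()

received = asyncio.run(demo())
print(received)  # {'job_id': 'job-42', 'status': 'complete'}
```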

Next.js client hook:

import { useEffect, useState } from 'react';

// Payload shape is an assumption; match it to what the backend sends.
type Notification = { job_id: string; status: string };

function useNotifications(userId: string) {
  const [events, setEvents] = useState<Notification[]>([]);
  useEffect(() => {
    const es = new EventSource(`/api/notifications/${userId}`);
    es.onmessage = e => setEvents(prev => [...prev, JSON.parse(e.data)]);
    // EventSource reconnects automatically on transient errors;
    // only close once the browser has given up.
    es.onerror = () => { if (es.readyState === EventSource.CLOSED) es.close(); };
    return () => es.close();
  }, [userId]);
  return events;
}

Result: zero DB polling. The SSE connection stays open at roughly 1 KB/min per connected user. Beyond ~10k concurrent users, move to Redis Pub/Sub as the notification bus so any API instance can reach any connected client.

Get Started

How to install and use this skill in your preferred environment.

Skills are designed for AI coding agents (Claude Code, Cursor, Windsurf) and IDE-based workflows where the agent can read files, run scripts, and act on your codebase. Web-based AI can use the knowledge and frameworks, but won't have tool access.

Models & Context

Which AI models and context windows work best with this skill.

Recommended Models

Best
Claude Opus 4, Claude Sonnet 4, GPT-4.1, Gemini 2.5 Pro, Grok 3, Kimi K2
Good
Claude Haiku 4.5, GPT-4.1 mini, Gemini 2.5 Flash, Grok 3 mini

Larger models produce more detailed, production-ready outputs.

Context Window

This skill's SKILL.md is typically 3–10 KB — fits in any modern context window.

8K: Skill only
32K+: Skill + conversation
100K+: Skill + references + codebase

All current frontier models (Claude, GPT, Gemini) support 100K+ context. Use the full window for complex multi-service work.

Pro tips for best results

1

Be specific

Include numbers — users, budget, RPS — so the skill can size the architecture.

2

Share constraints

Compliance needs, team size, and existing stack all improve the output.

3

Iterate

Start with a high-level design, then ask follow-ups for IaC, cost analysis, or security review.

4

Combine skills

Pair with companion skills below for end-to-end coverage.

Ready to try Senior Fullstack?

Install the skill and start getting expert-level guidance in your workflow — any agent, any IDE.

$skillfish add ~skills/senior-fullstack