Senior Prompt Engineer
Prompt optimization, template design, agentic systems, and RAG implementation.
$skillfish add ~skills/senior-prompt-engineer

Real Conversations
See how the Senior Prompt Engineer skill responds to actual requests. These are representative examples of what your AI agent will produce.
Our RAG chatbot answers support questions using our docs. It confidently gives wrong answers when the retrieved context doesn't contain the answer. Using LangChain + GPT-4o.
Root Cause: The model is inferring answers not in context — classic RAG grounding failure.
Fix 1 — Strict grounding system prompt:
SYSTEM_PROMPT = """You are a support assistant. Answer questions using ONLY the provided context.
Rules:
- If the context does not contain enough information to answer, say exactly: "I don't have enough information in our documentation to answer that. Please contact support at support@company.com."
- Do NOT use your general knowledge to fill gaps.
- Quote the relevant documentation section when answering.
- Never say 'based on my knowledge' or similar phrases."""
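One way to wire this prompt into a chat call is to keep the rules in the system role and put the retrieved context plus the user question in a single user message. A minimal sketch, assuming an OpenAI-style message format; build_messages is a hypothetical helper and the shortened SYSTEM_PROMPT stands in for the full prompt above:

```python
SYSTEM_PROMPT = """You are a support assistant. Answer questions using ONLY the provided context."""

def build_messages(context: str, question: str) -> list[dict]:
    # Rules live in the system role; retrieved docs and the question go in
    # one user message so the model sees them together.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion:\n{question}"},
    ]

messages = build_messages(
    "Refunds are processed within 5 business days.",
    "How long do refunds take?",
)
```

Keeping the context out of the system prompt makes it easy to cache the system message across requests while the context changes per question.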
Fix 2 — Add a relevance gate before generation:
from langchain_core.output_parsers import JsonOutputParser
RELEVANCE_PROMPT = """Does this context contain enough information to answer the question?
Question: {question}
Context: {context}
Respond with JSON: {{"relevant": true/false, "reason": "one sentence"}}"""
def answer_with_gate(question: str, context: str) -> str:
    # `llm` is assumed to be an initialized chat model (e.g. ChatOpenAI)
    gate = llm.invoke(RELEVANCE_PROMPT.format(question=question, context=context))
    result = JsonOutputParser().parse(gate.content)
    if not result["relevant"]:
        return "I don't have documentation on that topic. Please contact support."
    # Context passed the gate: generate a grounded answer
    return llm.invoke(f"{SYSTEM_PROMPT}\n\nContext: {context}\n\nQuestion: {question}").content
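Note that JsonOutputParser will raise if the gate model wraps its JSON in prose. A defensive sketch that fails closed, treating unparseable output as "not relevant" rather than risking an ungrounded answer; parse_gate is an illustrative helper, not a LangChain API:

```python
import json

def parse_gate(raw: str) -> bool:
    # Extract the first {...} span and parse it; on any failure, fail closed
    # (treat the context as not relevant).
    try:
        start, end = raw.index("{"), raw.rindex("}") + 1
        return bool(json.loads(raw[start:end]).get("relevant", False))
    except (ValueError, json.JSONDecodeError):
        return False
```

Failing closed here is deliberate: a false "not relevant" costs one escalation to support, while a false "relevant" can produce exactly the confident wrong answer you're trying to eliminate.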
Fix 3 — Improve retrieval (garbage in, hallucinations out):
- Use top_k=6 with MMR reranking instead of top_k=3 cosine similarity to reduce the chance the relevant chunk is missed
- Add chunk metadata (doc title, section) so the model can reference its source
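In LangChain this is typically vectorstore.as_retriever(search_type="mmr", search_kwargs={"k": 6}). Under the hood, MMR trades query relevance against redundancy with already-selected chunks. A self-contained sketch of the scoring loop, assuming precomputed similarity scores:

```python
def mmr(query_sim, doc_sims, k=6, lam=0.7):
    # query_sim[i]: similarity of doc i to the query
    # doc_sims[i][j]: similarity between docs i and j
    # lam balances relevance (high lam) against diversity (low lam)
    selected = []
    candidates = list(range(len(query_sim)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((doc_sims[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Docs 0 and 1 are near-duplicates (similarity 0.95); doc 2 is unrelated.
ranked = mmr(
    [0.9, 0.85, 0.2],
    [[1.0, 0.95, 0.1], [0.95, 1.0, 0.1], [0.1, 0.1, 1.0]],
    k=2, lam=0.5,
)
```

With lam=0.5, the near-duplicate doc 1 is penalized for redundancy after doc 0 is selected, so the less similar but non-redundant doc 2 is picked second.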
Measurement: Sample 50 questions with known answers, score before/after. Grounding prompt alone typically cuts hallucination rate by 60-70%.
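The scoring step can be as simple as the sketch below, assuming each sampled question is labeled by hand or by an LLM judge as answered/grounded:

```python
def hallucination_rate(results):
    # results: list of (answered, grounded) booleans, one per sampled question.
    # A hallucination is an answer given without grounding in the docs;
    # refusals (answered=False) don't count either way.
    answered = [(a, g) for a, g in results if a]
    if not answered:
        return 0.0
    return sum(1 for a, g in answered if not g) / len(answered)
```

Run it on the same 50 questions before and after the prompt change so the two rates are directly comparable.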
Get Started
How to install and use this skill in your preferred environment.
Skills are designed for AI coding agents (Claude Code, Cursor, Windsurf) and IDE-based workflows where the agent can read files, run scripts, and act on your codebase. Web-based AI can use the knowledge and frameworks, but won't have tool access.
Models & Context
Which AI models and context windows work best with this skill.
Recommended Models
Larger models produce more detailed, production-ready outputs.
Context Window
This skill's SKILL.md is typically 3–10 KB — fits in any modern context window.
All current frontier models (Claude, GPT, Gemini) support 100K+ context. Use the full window for complex multi-service work.
Pro tips for best results
Be specific
Include numbers — users, budget, RPS — so the skill can size the architecture.
Share constraints
Compliance needs, team size, and existing stack all improve the output.
Iterate
Start with a high-level design, then ask follow-ups for IaC, cost analysis, or security review.
Combine skills
Pair with companion skills below for end-to-end coverage.
Ready to try Senior Prompt Engineer?
Install the skill and start getting expert-level guidance in your workflow — any agent, any IDE.
$skillfish add ~skills/senior-prompt-engineer