
Paid Ads

Paid advertising campaigns on Google, Meta, LinkedIn, and more.

$skillfish add ~skills/paid-ads

Real Conversations

See how the Paid Ads skill responds to actual requests. These are representative examples of what your AI agent will produce.

You

We're spending $8,000/month on Google Ads with a 4.2% CTR but only 1.1% conversion to trial. CPC is $12. We're targeting keywords like 'project management software' and 'team collaboration tool'. Something is off.

Agent

Paid Ads Audit: Google Ads Account

The diagnosis: Your CTR is strong (4.2% is above SaaS average of ~2.8%), which means ad copy is resonating. The 1.1% trial conversion is the problem — this is a landing page and keyword intent issue, not an ad issue.

Root cause 1: Keyword intent mismatch. "Project management software" and "team collaboration tool" are top-of-funnel research queries. These users are 60–90 days from buying. You're paying $12 CPC for browsers, not buyers.

Better keyword structure:

| Campaign | Keywords | Intent | Est. CPC |
| --- | --- | --- | --- |
| Brand | [your brand name] | Purchase | $2–4 |
| Competitor | [competitor] + alternative/vs/pricing | High | $8–15 |
| Bottom-funnel | "project management software pricing" / "best PM tool for [niche]" | High | $10–18 |
| Top-funnel | "project management software" | Low | $8–12 — reduce budget here |

Root cause 2: Landing page mismatch. If you're sending all traffic to your homepage, that's costing you 50–70% of potential conversions. Each campaign/intent tier needs a dedicated landing page.

Immediate actions:

  1. Add negative keywords: "free", "open source", "template", "examples", "tutorial"
  2. Shift 30% of budget from top-funnel to competitor + bottom-funnel campaigns
  3. Build a dedicated trial landing page with one CTA

Expected outcome: Trial CVR should move from 1.1% to 2.5–3.5% within 45 days.

Get Started

How to install and use this skill in your preferred environment.

Skills are designed for AI coding agents (Claude Code, Cursor, Windsurf) and IDE-based workflows where the agent can read files, run scripts, and act on your codebase. Web-based AI can use the knowledge and frameworks, but won't have tool access.

Models & Context

Which AI models and context windows work best with this skill.

Recommended Models

Best: Claude Opus 4, Claude Sonnet 4, GPT-4.1, Gemini 2.5 Pro, Grok 3, Kimi K2
Good: Claude Haiku 4.5, GPT-4.1 mini, Gemini 2.5 Flash, Grok 3 mini

Larger models produce more detailed, production-ready outputs.

Context Window

This skill's SKILL.md is typically 3–10 KB — fits in any modern context window.

  • 8K: Skill only
  • 32K+: Skill + conversation
  • 100K+: Skill + references + codebase

All current frontier models (Claude, GPT, Gemini) support 100K+ context. Use the full window for complex multi-platform campaign work.

Pro tips for best results

  1. Be specific: include numbers (budget, CPC, conversion rates) so the skill can size its recommendations.
  2. Share constraints: compliance needs, team size, and existing stack all improve the output.
  3. Iterate: start with a high-level audit, then ask follow-ups on keyword structure, budget allocation, or landing page review.
  4. Combine skills: pair with the companion skills below for end-to-end coverage.

Good to Know

Advanced guide and reference material for Paid Ads. Background, edge cases, and patterns worth understanding.


Quality Score and Ad Relevance by Platform

Google Quality Score is a 1–10 diagnostic score per keyword, comprising three components:

| Component | Weight (approximate) | What it measures |
| --- | --- | --- |
| Expected CTR | ~35% | Historical likelihood of clicks relative to ad position; benchmarked against similar ads |
| Ad Relevance | ~30% | How closely ad copy matches the keyword's intent; checks headline and description alignment |
| Landing Page Experience | ~35% | Page relevance to keyword, load speed, mobile usability, absence of excessive interstitials |

Quality Score affects your Ad Rank (position) and effective CPC — a QS of 8 can outrank a competitor bidding higher with a QS of 4.
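The leverage is easiest to see with the classic simplified auction model (Ad Rank ≈ bid × Quality Score, with actual CPC just enough to beat the rank below you). A minimal sketch; Google's real Ad Rank formula includes additional auction-time signals, so treat this as an illustration rather than the actual math:

```python
# Simplified Ad Rank model (illustrative only): rank = bid x QS.
def ad_rank(max_cpc_bid: float, quality_score: int) -> float:
    return max_cpc_bid * quality_score

competitor = ad_rank(max_cpc_bid=10.00, quality_score=4)  # 40.0
you = ad_rank(max_cpc_bid=6.00, quality_score=8)          # 48.0: higher bid loses

# Under the same simplified model, you pay just enough to beat the
# rank below you: competitor_rank / your_QS + $0.01.
effective_cpc = competitor / 8 + 0.01                     # 5.01, below your $6 bid
print(you > competitor, round(effective_cpc, 2))          # True 5.01
```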

Meta Relevance diagnostics (no single score — three separate ratings in Ads Manager):

  • Quality ranking: creative quality vs competing ads for the same audience
  • Engagement rate ranking: expected engagement vs competing ads
  • Conversion rate ranking: expected conversion rate vs competing ads with the same optimization goal

LinkedIn relevance: LinkedIn does not publish an equivalent score. Relevance is instead reflected in CPM/CPC volatility — highly relevant ads for an audience see stable or declining CPCs over time, while low-relevance ads see rising CPCs as LinkedIn optimizes delivery toward better-performing creative.

Bid Strategy Decision Tree

Automated bid strategies require conversion data to function correctly. Using smart bidding without sufficient conversion volume produces erratic results.

| Strategy | When to use | Conversion volume threshold |
| --- | --- | --- |
| Manual CPC | New campaigns, testing phase, very low volume | Any volume; you control bids directly |
| Maximize Clicks | Drive traffic when no conversion tracking exists | No conversions required |
| Maximize Conversions | Scale conversions within a fixed budget | 15–30 conversions/month per campaign minimum |
| Target CPA (tCPA) | Optimize to a specific cost per conversion | 30–50 conversions/month for stable performance |
| Target ROAS (tROAS) | E-commerce or high-value lead gen with revenue tracking | 50+ conversions/month; revenue values must be passed |

Common mistake: Switching to tCPA or tROAS before hitting the volume threshold. The algorithm enters a learning period with insufficient data, CPAs spike, and campaigns are paused prematurely. Use Maximize Conversions first to build the conversion history, then transition to tCPA once thresholds are met.
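For planning purposes, the decision tree reduces to a small lookup. A sketch that encodes the thresholds from the table above; these are rules of thumb, not platform-documented cutoffs:

```python
def recommend_bid_strategy(monthly_conversions: int,
                           has_conversion_tracking: bool,
                           has_revenue_tracking: bool) -> str:
    """Map conversion volume and tracking setup to a bid strategy."""
    if not has_conversion_tracking:
        return "Maximize Clicks"       # no conversion signal to optimize on
    if monthly_conversions < 15:
        return "Manual CPC"            # too little data for smart bidding
    if monthly_conversions < 30:
        return "Maximize Conversions"  # build conversion history first
    if has_revenue_tracking and monthly_conversions >= 50:
        return "Target ROAS"           # enough volume, revenue values passed
    return "Target CPA"                # 30+ conversions/month, stable enough

print(recommend_bid_strategy(20, True, False))  # Maximize Conversions
```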

Audience Match Rate Issues

Customer list uploads consistently underperform expected match rates. Typical platform match rates:

| Platform | Expected match rate | Common cause of low rates |
| --- | --- | --- |
| Google Customer Match | 40–60% | Unhashed emails, personal vs work email mismatch |
| Meta Custom Audiences | 50–70% | Old email list, incorrect hashing format |
| LinkedIn Matched Audiences | 15–40% | Work email required; personal emails don't match |

Hashing requirements: All platforms require SHA-256 hashed email addresses, and emails must be lowercased before hashing — John@Example.com must be normalized to john@example.com and then hashed. A common error is hashing before lowercasing, which produces non-matching hashes.
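The normalize-then-hash step takes two lines with Python's standard hashlib; a minimal sketch with an illustrative address:

```python
import hashlib

def hash_email(email: str) -> str:
    """Trim and lowercase first, then SHA-256: platforms match on the hash."""
    normalized = email.strip().lower()  # "John@Example.com" -> "john@example.com"
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Correct order: normalize, then hash. Both inputs produce the same hash.
print(hash_email("John@Example.com") == hash_email("john@example.com"))  # True

# The common error: hashing the raw string yields a non-matching hash.
raw = hashlib.sha256("John@Example.com".encode("utf-8")).hexdigest()
print(raw == hash_email("john@example.com"))  # False
```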

Improving match rates:

  • Upload multiple identifiers simultaneously (email + phone + name + ZIP) — platforms use probabilistic matching across all signals
  • Use work email addresses for B2B platforms (LinkedIn match rates improve significantly with work emails)
  • Refresh lists quarterly — email churn rates of 20–30%/year mean old lists match poorly
  • On Meta, enable Advanced Matching to capture hashed parameters client-side in addition to list uploads

Creative Fatigue Signals

Frequency thresholds vary by platform and campaign objective, but the underlying signal is the same: CTR declines as the same user sees the same creative repeatedly.

| Platform | Frequency warning threshold | Observation window |
| --- | --- | --- |
| Meta | 2.5–3.5x for cold audiences | 7 days |
| LinkedIn | 4–6x before significant CTR decline | Campaign lifetime |
| Google Display | 3–5x per week | 7-day rolling |

Reading a frequency-to-CTR decay curve: Pull a report with frequency as a dimension and CTR as a metric, segmented by week. A healthy creative shows flat or slowly declining CTR as frequency increases from 1x to 3x. Fatigue looks like a steep CTR drop between frequency buckets (e.g., CTR halves from 2x to 4x frequency). When you see this pattern, creative rotation is overdue.
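A sketch of that check, assuming a weekly export with frequency buckets, impressions, and clicks; the rows and the 40% drop threshold here are hypothetical:

```python
# (frequency bucket, impressions, clicks) from a hypothetical weekly report
report = [
    ("1x", 50_000, 1_250),  # 2.50% CTR
    ("2x", 30_000, 690),    # 2.30% CTR: flat-ish, healthy
    ("3x", 20_000, 260),    # 1.30% CTR: steep drop
    ("4x", 10_000, 110),    # 1.10% CTR
]

prev_ctr = None
for bucket, impressions, clicks in report:
    ctr = clicks / impressions
    if prev_ctr is not None and ctr < prev_ctr * 0.6:  # >40% drop between buckets
        print(f"Fatigue at {bucket}: CTR fell {prev_ctr:.2%} -> {ctr:.2%}; rotate creative")
    prev_ctr = ctr
```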

Response to fatigue: Refresh creative assets before rising frequency forces the performance decline. On Meta, maintaining 3–5 active ad variations per ad set with automatic creative optimization reduces fatigue systematically. On LinkedIn, plan a creative refresh every 4–6 weeks for evergreen campaigns.

Attribution Window Trade-offs

Attribution windows determine which ad interactions receive credit for a conversion. Different windows produce materially different ROAS numbers from the same underlying data.

| Window | What it counts | Best for |
| --- | --- | --- |
| 1-day click | Conversions within 24h of a click | Short purchase cycles; direct response |
| 7-day click | Conversions within 7 days of a click | SaaS trials, considered purchases |
| 28-day click | Conversions within 28 days of a click | High-consideration B2B, long sales cycles |
| 1-day view-through | Conversions within 24h of an impression (no click) | Brand awareness measurement; easily inflated |

Why windows produce different ROAS numbers: A user who clicks an ad on Monday and converts on Sunday is counted under 7-day click but not 1-day click. View-through conversions are the most inflationary — users who saw an ad and later converted organically get attributed to the ad.
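The Monday/Sunday case is easy to verify with timestamps. A minimal sketch of last-click window logic; the dates are hypothetical, and real platforms apply these rules server-side on their own interaction logs:

```python
from datetime import datetime, timedelta

def attributed(click_ts: datetime, conv_ts: datetime, window_days: int) -> bool:
    """Credit the click if the conversion lands inside the window."""
    return timedelta(0) <= conv_ts - click_ts <= timedelta(days=window_days)

click = datetime(2024, 3, 4, 10, 0)        # Monday: ad click
conversion = datetime(2024, 3, 10, 15, 0)  # Sunday: trial signup

print(attributed(click, conversion, 1))  # False: outside the 1-day click window
print(attributed(click, conversion, 7))  # True: inside the 7-day click window
# Same data, different report: the 7-day window credits the ad; 1-day does not.
```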

Choosing a window: Match the window to your actual sales cycle. If your trial-to-paid conversion happens within 24 hours, a 1-day click window is accurate. If users evaluate for two weeks before converting, a 7-day window undercounts. For B2B with 30–60 day sales cycles, 28-day click is more representative but still misses offline conversions — supplement with CRM-based attribution for deals over a threshold value.

Cross-platform double-counting: Each platform applies its own attribution window and takes credit independently. Meta, Google, and LinkedIn will each claim full credit for the same conversion. Use a single source of truth (GA4, your CRM, or a third-party MTA tool) for budget decisions rather than summing platform-reported conversions.

Ready to try Paid Ads?

Install the skill and start getting expert-level guidance in your workflow — any agent, any IDE.

$skillfish add ~skills/paid-ads