Lesson 2 of 10

Being specific: the single most important prompting skill

~20 min read

The Prompt That Wasted 40 Minutes

Picture this: It's Tuesday morning. You need a competitive analysis before a 2pm strategy meeting. You open ChatGPT, type "give me a competitive analysis of our market," and hit enter. The model returns five paragraphs of generic business-school prose about Porter's Five Forces, market dynamics, and the importance of differentiation. Nothing about your actual competitors. Nothing about your industry. Nothing you can use. You spend 40 minutes reprompting, clarifying, and reformatting — arriving at the meeting with a document that still feels half-baked. The model didn't fail you. Your prompt did.

Why Vague Prompts Produce Generic Outputs

Every large language model — ChatGPT, Claude, Gemini — is trained on billions of documents covering every topic imaginable. When you send a vague prompt, the model has no signal telling it which slice of that vast knowledge to draw from. It defaults to the statistical center: the most common, most average response that fits your words. "Competitive analysis" matches thousands of MBA textbooks, consulting frameworks, and business articles. So that's what you get — the average of all of them. Specificity is the mechanism by which you pull the model away from that generic center and toward the precise output your situation requires. Every detail you add is a constraint that narrows the possibility space the model is working within.

This isn't a quirk of one tool. It's structural to how transformer-based language models work. GPT-4, Claude 3, and Gemini 1.5 Pro all process your prompt as a sequence of tokens and generate the most probable continuation given everything in that sequence. A short, vague prompt gives the model a wide probability distribution to sample from — meaning outputs vary wildly and land far from your actual need. A specific, detailed prompt dramatically narrows that distribution. The model has more signal about your context, your constraints, your desired format, and your audience. The output gets sharper, faster. This is why experienced prompt writers consistently get better results not because they know secret tricks, but because they've internalized one habit: add more relevant detail before hitting send.

There's a useful mental model here: think of every AI model as an extremely capable contractor who just walked into your office for the first time. They have deep expertise across dozens of domains — strategy, writing, data analysis, code, design briefs. But they know nothing about your company, your industry, your audience, your constraints, or your definition of "good." If you hand them a three-word task, they'll do something competent and completely off-target. If you spend two minutes briefing them properly — here's the context, here's the goal, here's the format I need, here's what good looks like — they'll produce something genuinely useful. The AI doesn't need more intelligence. It needs more information from you.

The 5-Element Specificity Check

Before sending any professional prompt, run through these five elements: (1) Role — who should the AI be acting as? (2) Context — what's the situation, company, or industry? (3) Task — what exactly needs to be produced? (4) Format — how should the output be structured? (5) Constraints — word count, tone, what to exclude, deadline context. You don't need all five every time. But the more you include, the sharper the output. Most weak prompts are missing at least three of these.
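If you run prompts from scripts or notebooks rather than a chat box, the five-element check translates directly into a small helper. The sketch below is illustrative only; the function name, parameters, and layout are our own choices, not any vendor's API.

```python
# A minimal sketch of the 5-element check as a prompt builder. The
# function and parameter names are our own illustration, not a vendor API.

def build_prompt(role=None, context=None, task=None, fmt=None, constraints=None):
    """Assemble a prompt from whichever of the five elements you have."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    if task:
        parts.append(f"Task: {task}")
    if fmt:
        parts.append(f"Format: {fmt}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

print(build_prompt(
    role="a B2B SaaS copywriter",
    context="Fieldwork, a project management tool for architecture firms; "
            "audience is time-poor studio principals skeptical of software promises",
    task="write a product launch email",
    fmt="subject line, 3-paragraph body, single CTA",
    constraints=["max 180 words", "direct, professional tone", "no hype"],
))
```

Notice that leaving an element out simply drops it from the prompt, which mirrors the rule above: you don't need all five every time, but each one you supply narrows the output.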

What Specificity Actually Looks Like

The gap between a weak prompt and a strong one is rarely about length — it's about information density. A weak prompt is long on request and short on context. A strong prompt tells the model who it's writing for, what constraints apply, and what success looks like. The difference shows up immediately in output quality. Compare what happens when a marketing manager asks ChatGPT for help with a product launch email. The weak version produces a boilerplate message that could belong to any company selling anything. The strong version — with role, audience, product details, and tone specified — produces something you could send with light edits.

Weak Prompt — Marketing Email

Prompt

Write a product launch email for our new software tool.

AI Response

Subject: Introducing Our Exciting New Software Tool!

Dear Valued Customer,

We are thrilled to announce the launch of our new software tool, designed to help businesses like yours work more efficiently and effectively. Our innovative solution offers a range of powerful features that will transform the way you work:

• Increased productivity
• Streamlined workflows
• Better collaboration
• Real-time insights

We invite you to learn more about how our tool can benefit your organization. Click below to schedule a demo or start your free trial today.

[CTA Button]

Thank you for your continued support.

Best regards,
The Team

Strong Prompt — Marketing Email

Prompt

You are a B2B SaaS copywriter. Write a product launch email for Fieldwork, a project management tool built specifically for architecture firms. The audience is senior architects and studio principals at firms with 10-50 employees who currently use spreadsheets and email threads to manage projects. They are time-poor, skeptical of software promises, and care most about reducing admin overhead. Tone: direct, professional, no hype. Format: subject line, 3-paragraph email body, single CTA. Max 180 words total.

AI Response

Subject: Stop managing projects in your inbox

Hi [First Name],

Most architecture studios run projects across a tangle of spreadsheets, email threads, and shared drives. It works — until it doesn't. Missed fee changes, duplicated documents, and hours spent chasing status updates are the hidden cost.

Fieldwork is project management built for architecture practices. It tracks fees, milestones, consultants, and deliverables in one place — without the setup overhead of generic tools like Asana or Monday.com. Studios using Fieldwork report cutting project admin time by an average of 6 hours per week per principal.

We're offering a 30-day free trial with onboarding support included. No credit card required.

→ Start your trial at fieldwork.io/trial

If it doesn't fit how your studio works, you'll know within a week.

The second prompt took approximately 90 seconds to write. The output required minimal editing before being usable. The first prompt took 10 seconds and produced something that belongs in no one's inbox. That ratio — time spent prompting versus time spent fixing output — is the core economics of prompt quality. Professionals who write specific prompts spend less total time on AI-assisted work, not more. The upfront investment in detail pays back immediately in reduced revision cycles. This is the habit that separates people who say AI "saves me hours" from people who say AI "never quite gets it right."

How Specificity Varies Across AI Tools

Different AI tools respond to specificity in different ways — and understanding those differences helps you calibrate how much detail to include. Claude 3 Opus, for example, handles long, highly detailed prompts exceptionally well and tends to follow complex multi-part instructions more reliably than GPT-4 in many benchmark comparisons. ChatGPT with GPT-4o is faster and handles conversational back-and-forth well, making it forgiving of lighter initial prompts if you're willing to iterate. Gemini 1.5 Pro has a 1-million-token context window, meaning you can paste in enormous amounts of background material alongside your prompt — a form of specificity through context-loading rather than instruction-writing. Knowing your tool changes your prompting strategy.

| Tool | Model | Responds Best To | Specificity Strategy | Cost (approx.) |
|---|---|---|---|---|
| ChatGPT | GPT-4o | Conversational iteration, shorter prompts with follow-ups | Start focused, refine in follow-up turns | $20/mo (Plus) or API |
| Claude | Claude 3 Opus / Sonnet | Long, detailed instructions with nuanced constraints | Write the full brief upfront — it follows complex specs reliably | $20/mo (Pro) or API |
| Gemini | Gemini 1.5 Pro | Context-heavy tasks with large reference documents | Paste in source material; let context do the specifying | Free tier + $20/mo (Advanced) |
| Perplexity | Mixed (GPT-4, Claude, Mistral) | Research queries needing cited, current sources | Specify source type and recency requirements | Free + $20/mo (Pro) |
| Notion AI | GPT-4 based | Document editing, summarisation within existing content | Use existing document context; specify output format clearly | Included in Notion Plus ($10/mo) |
| GitHub Copilot | GPT-4 + Codex | Code generation with surrounding file context | Add inline comments explaining intent and constraints | $10/mo individual |

How major AI tools respond to specificity — and the right prompting strategy for each
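If you access these tools programmatically, the quickest calibration is to send one identical, specific brief to two models and compare the results. The sketch below uses the official openai and anthropic Python SDKs; the model names and the brief itself are illustrative and will date quickly.

```python
# The same specific brief sent to two providers for comparison. Both SDKs
# read their API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY) from the environment.

from openai import OpenAI
import anthropic

BRIEF = (
    "You are a B2B SaaS copywriter. Write a 3-sentence product blurb for a "
    "project management tool aimed at architecture firms. Tone: direct, no hype."
)

# ChatGPT-family model via the OpenAI API
gpt = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": BRIEF}],
)
print("GPT-4o:\n", gpt.choices[0].message.content)

# Claude via the Anthropic API (max_tokens is required here)
claude = anthropic.Anthropic().messages.create(
    model="claude-3-opus-20240229",
    max_tokens=300,
    messages=[{"role": "user", "content": BRIEF}],
)
print("Claude:\n", claude.content[0].text)
```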

Applying This at Work on Monday

The practical shift is this: before you open any AI tool, spend 60 seconds writing down the five specificity elements — role, context, task, format, constraints — on a sticky note or in a scratchpad. You won't include all five in every prompt, but the act of thinking through them forces you to clarify your own requirements before asking the model. Most professionals discover in this process that their request was vaguer than they realized. "Write a summary" becomes "Write a 150-word executive summary of this client proposal for a CFO who is skeptical of the ROI claims — focus on the financial projections in section 3." That extra specificity takes 45 seconds to add and can cut revision time dramatically.

For managers running teams, specificity in prompts also becomes a quality-control mechanism. When your team uses AI tools like ChatGPT or Claude to draft documents, reports, or communications, the consistency of output depends almost entirely on the quality of prompts being used. A team that shares well-specified prompt templates produces more consistent, on-brand outputs than one where every person improvises from scratch. Companies like Klarna, which famously reported AI handling the equivalent of 700 customer service agents' work, didn't achieve that by letting employees type whatever they wanted. They built specific prompt frameworks. Even without enterprise infrastructure, you can create a personal library of specific prompt templates for your five most common work tasks.

Analysts and consultants have a particular advantage here: they already think in structured frameworks. The discipline of defining scope, constraints, and output format before starting an analysis maps directly onto writing specific prompts. If you can write a proper terms of reference for a consulting engagement, you can write a highly specific AI prompt. The skills transfer almost one-to-one. What changes is the medium — instead of briefing a junior analyst, you're briefing a model. The briefing process is identical: context, objective, format, constraints, and what good looks like. Analysts who recognize this parallel adopt high-quality prompting habits faster than almost any other professional group.

Rewrite Three Real Prompts from Your Work Week

Goal: Build the habit of specifying all five elements before prompting, and create at least three reusable prompt templates from real work tasks.

  1. Open a blank document or notes app — not an AI tool yet.
  2. Think of three tasks you completed or attempted with an AI tool in the past two weeks. Write down the prompt you actually used (or would have used) for each.
  3. For each prompt, score it against the 5-element check: Role, Context, Task, Format, Constraints. Give each element a 0 (missing) or 1 (present). Note your total out of 5. (A rough automated version of this scoring appears after this list.)
  4. Pick the prompt with the lowest score. This is your rewrite target.
  5. Rewrite it using all five elements. Don't worry about length — write until you've covered every element clearly. Aim for at least 60 words in the rewritten prompt.
  6. Open ChatGPT or Claude and run both the original and the rewritten prompt back-to-back. Use the same model for both.
  7. Compare the two outputs side by side. Note specifically: which required fewer edits to be usable? Which better matched your actual need?
  8. Save your rewritten prompt as a template. Label it with the use case (e.g., "Client email — introducing new service") so you can reuse it.
  9. Repeat the rewrite process for the remaining two prompts over the next 48 hours.
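If you want a rough, repeatable version of the step-3 scoring, the sketch below flags which elements a prompt appears to mention using crude keyword cues. The cue lists are assumptions for illustration; your own reading of each prompt is still the real test.

```python
# A deliberately naive version of the step-3 scoring pass: flag which of
# the five elements a prompt appears to mention. The keyword cues are
# rough assumptions, not a validated rubric.

CUES = {
    "role": ["you are", "act as", "acting as"],
    "context": ["our company", "audience", "industry", "the situation"],
    "task": ["write", "summarize", "analyze", "draft", "list", "compare"],
    "format": ["format", "bullet", "table", "subject line", "sections"],
    "constraints": ["words", "tone", "do not", "exclude", "max"],
}

def score_prompt(prompt: str) -> int:
    """Print a 0/1 per element and return the total out of 5."""
    text = prompt.lower()
    total = 0
    for element, cues in CUES.items():
        present = any(cue in text for cue in cues)
        total += present
        print(f"{element:<12} {int(present)}")
    return total

score = score_prompt("Write a product launch email for our new software tool.")
print(f"total: {score} / 5")
```

Run against the weak marketing-email prompt from earlier, it scores 1 out of 5: only the task is present, which matches the generic output it produced.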

How Specificity Applies Differently by Role

  • Managers: Specify audience seniority and decision-making context — a briefing for your CEO needs different framing than one for a department head. Include the political or organizational context the model can't infer.
  • Marketers: Always specify brand voice, audience segment, and channel. "Email" and "LinkedIn post" for the same message require completely different tones — state both the channel and the audience persona explicitly.
  • Analysts: Define the analytical lens upfront. Do you want descriptive analysis, causal reasoning, or scenario modeling? State the data or assumptions the model should work from, even if you're pasting them in.
  • Consultants: Frame the client context and the stakeholder you're writing for. A deliverable for a risk-averse CFO in financial services reads differently than one for a growth-focused CMO in a startup — say so explicitly.
  • HR and People Leaders: Specify the employment context, jurisdiction, and tone carefully. Legal nuance matters — tell the model if you need language appropriate for a formal policy document versus a manager conversation guide.
  • Product Managers: Specify user personas, technical constraints, and which stage of the product lifecycle you're addressing. "Feature brief" means something different in discovery versus pre-launch.
  • Sales Professionals: Include the prospect's industry, deal stage, and the specific objection or opportunity you're addressing. Generic outreach prompts produce generic outreach — and buyers notice.

Specificity in Complex, Multi-Part Tasks

Single-output prompts are the easiest place to practice specificity. But the real productivity gains come when you apply the same discipline to complex, multi-part tasks — the kind that make up a significant portion of knowledge work. A strategy consultant preparing a board presentation, an analyst building a market entry assessment, a marketer planning a campaign — these aren't single prompts. They're sequences of prompts, each building on the last. Specificity at each step compounds. A vague first prompt produces vague material that constrains everything downstream. A precise first prompt creates a foundation that every subsequent prompt can reference and build on.

Complex Multi-Part Task — Analyst Preparing a Market Entry Brief

Prompt

You are a senior strategy analyst. I am preparing a market entry brief for a UK-based B2B logistics software company considering expansion into the German Mittelstand market (mid-sized manufacturing firms, 50-500 employees). Complete the following three tasks in sequence:

1. MARKET SIZING: Estimate the addressable market in Germany for logistics software targeting Mittelstand manufacturers. Use publicly available data where possible and state your assumptions clearly. Present as a short table.
2. KEY RISKS: List the top 5 market entry risks specific to this segment and geography. For each risk, rate it High/Medium/Low and suggest one mitigation action. Format as a table.
3. RECOMMENDED ENTRY MODE: Recommend one market entry approach (direct sales, partnership, acquisition, or other) with a 100-word rationale. Base your recommendation on the risks identified in task 2.

Do not include general market entry theory. Focus on this specific company profile and target segment.

AI Response

**1. MARKET SIZING**

| Segment | Estimate | Source/Assumption |
|---|---|---|
| Mittelstand manufacturers (50-500 employees) in Germany | ~18,000 firms | Destatis 2023; manufacturing sector filter |
| Estimated % without dedicated logistics software | ~55% | Industry analyst estimates; high ERP penetration but low specialist logistics tool adoption |
| Addressable market (no current solution) | ~9,900 firms | |
| Average contract value (B2B SaaS, mid-market logistics) | €18,000–€35,000 ARR | Comparable: Setlog, Slimstock pricing benchmarks |
| **Total Addressable Market (Germany, Mittelstand)** | **€178M–€347M ARR** | |

**2. KEY RISKS**

| Risk | Rating | Mitigation |
|---|---|---|
| Language and localisation barrier — German-language UI, contracts, and support expected as standard | High | Partner with a German reseller with existing Mittelstand relationships for Year 1 |
| Long B2B sales cycles — Mittelstand firms are relationship-driven; decisions involve multiple stakeholders | High | Hire a German-speaking country manager with existing sector contacts before launch |
| ERP lock-in — SAP dominates; buyers reluctant to add non-integrated tools | Medium | Build and certify SAP integration module pre-launch; lead with it in sales materials |
| Data residency requirements — GDPR and sector-specific data localisation expectations | Medium | Ensure EU-based cloud hosting (AWS Frankfurt or Azure Germany) from day one |
| Brand recognition deficit — UK brand unknown in German market | Low | Co-market with a recognised German logistics association (e.g., BVL) in Year 1 |

**3. RECOMMENDED ENTRY MODE: Channel Partnership**

A direct sales entry into the German Mittelstand is high-risk without local brand equity or established relationships. The two High-rated risks — language barriers and long sales cycles — both point to the same solution: partner with an established German logistics or ERP consultancy that already has Mittelstand clients and credibility. A revenue-share channel model reduces upfront cost, accelerates pipeline access, and provides localisation support. This approach accepts slower revenue ramp in Year 1 in exchange for substantially lower market entry risk and a shorter path to the first 20 reference customers.

That output isn't perfect — no AI output is. The market sizing figures need verification, and the analyst would pressure-test the entry mode recommendation against their client's specific capabilities and budget. But it's a working first draft of a structured brief that would have taken two to three hours to produce from scratch. The specificity of the prompt — role, company profile, target segment, task structure, format requirements, and explicit instruction to skip general theory — is what made the output immediately useful rather than generically plausible. The model didn't get smarter. The prompt got more specific.
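For readers who script their AI work rather than using the chat window, here is one way the same three-task brief could run as a sequence, assuming the OpenAI Python SDK. Each answer is appended to the conversation history so the final step can reference the risks from step 2, which is exactly how specificity compounds. The model name and the condensed prompts are illustrative.

```python
# Running a multi-part brief as a scripted sequence. Each assistant reply
# is appended to the history so later steps can build on it.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{
    "role": "system",
    "content": "You are a senior strategy analyst advising a UK-based B2B "
               "logistics software company on German Mittelstand entry. "
               "No general market-entry theory.",
}]

steps = [
    "MARKET SIZING: Estimate the addressable market. State your assumptions. "
    "Present as a short table.",
    "KEY RISKS: Top 5 entry risks, rated High/Medium/Low, one mitigation each. "
    "Format as a table.",
    "ENTRY MODE: Recommend one entry approach in 100 words, based on the risks "
    "you identified in the previous step.",
]

for step in steps:
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # compound the context
    print(answer, "\n---")
```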

The Specificity Stack: Layering Context, Constraints, and Format

Picture this: it's Tuesday morning, and your CMO wants a competitive positioning summary before the 2pm leadership meeting. You open ChatGPT, type 'write a competitive analysis,' and get back a generic five-paragraph essay that could apply to any company in any industry. You spend 40 minutes editing it into something usable. That's not an AI problem — that's a specificity problem. The model gave you exactly what you asked for: a competitive analysis. You just asked for the wrong thing. The fix isn't a better AI tool. It's a better-constructed prompt that layers three distinct elements — context, constraints, and format — in the right order.

Context: Telling the Model What It Needs to Know

Context is the background information that shapes everything the model produces. Without it, the AI defaults to the statistical average of every similar request it has seen — which means generic, bland, and probably useless. Effective context answers three questions before the model writes a single word: Who is asking? What situation are they in? What does success look like? A senior consultant preparing a board-level briefing needs very different output than a junior analyst drafting an internal Slack update, even if both ask about 'market trends in SaaS.' When you front-load context into your prompt, you collapse the model's probability space. Instead of drawing from millions of possible responses, it draws from a much narrower, more relevant slice. ChatGPT, Claude, and Gemini all respond dramatically better to context-rich prompts because they use that information to weight their token predictions toward your actual situation.

The 30-Second Context Rule

Before you type a prompt, spend 30 seconds asking: what would a smart human colleague need to know to help me well? Write that down first. Include your role, the audience for the output, the business situation, and any relevant constraints. Paste that as the opening of your prompt. This single habit cuts revision time by more than half for most professionals.

Constraints: The Creative Paradox That Improves AI Output

Constraints feel counterintuitive. You'd think giving the model more freedom produces better results, but the opposite is consistently true. Constraints force specificity in both directions — they tell the model what to include and what to exclude, which is equally valuable. Word limits, tone requirements, structural rules, and explicit exclusions all sharpen output dramatically. 'Write a 150-word executive summary, no jargon, no passive voice, for a CFO who is skeptical of this initiative' produces something radically more useful than 'summarize this for leadership.' The constraint on word count prevents padding. The tone requirement changes vocabulary choices. The audience specification shifts the framing toward financial impact. Each constraint acts as a filter, eliminating thousands of possible mediocre responses and steering the model toward the small subset that actually serves your purpose. Perplexity and Claude in particular respond well to multi-constraint prompts because their architectures handle instruction-following with high fidelity.

Format: Specifying the Shape of Your Output

Format is the most underused specificity lever. Most professionals ask for content but never specify its structure, and then complain that the AI 'doesn't understand what I need.' The model doesn't know whether you want bullet points or prose, a table or a timeline, headers or a single flowing document — unless you tell it. Specifying format does two things simultaneously: it makes the output immediately usable without restructuring, and it forces the model to organize its thinking before generating. When you ask Claude to 'produce a table with three columns — risk, likelihood (High/Medium/Low), and mitigation action' instead of 'list the risks,' you get something you can paste directly into a slide. Notion AI and GitHub Copilot have built-in format defaults for their contexts (documents and code, respectively), but in general-purpose tools like ChatGPT and Gemini, format is entirely your responsibility to specify.
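One more lever worth knowing if your output feeds another document or system: specify a machine-readable format and validate it on arrival. A minimal sketch, assuming the OpenAI Python SDK and a risk schema of our own devising:

```python
# Format specification taken to its limit: request machine-readable output
# and fail fast if the model drifts from the requested structure.

import json
from openai import OpenAI

prompt = (
    "List the top 3 launch risks for a mid-market HR software product. "
    "Respond with ONLY a JSON array of objects with keys "
    '"risk" (string), "likelihood" ("High", "Medium", or "Low"), and '
    '"mitigation" (string). No prose before or after the JSON.'
)

reply = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

try:
    risks = json.loads(reply.choices[0].message.content)
    for r in risks:
        print(f"[{r['likelihood']}] {r['risk']} -> {r['mitigation']}")
except json.JSONDecodeError:
    # The format constraint was ignored; tighten the prompt or retry.
    print("Non-JSON response:", reply.choices[0].message.content[:200])
```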

Competitive Positioning — Vague vs. Specific

Prompt

You are a strategy consultant preparing materials for a B2B SaaS company's leadership offsite. The company sells project management software to mid-market professional services firms (50–500 employees). Their two main competitors are Asana and Monday.com. Write a competitive positioning summary for the CEO — 200 words maximum, written in plain business English, no marketing jargon. Structure it as: one sentence on where we win, one sentence on where we're vulnerable, and three bullet points on the key messages we should reinforce with prospects. Assume the CEO is data-oriented and skeptical of vague claims.

AI Response

Where we win: Mid-market professional services firms choose us over Asana and Monday.com primarily because of our billing and resource-utilization integrations, which neither competitor offers natively at our price point.

Where we're vulnerable: Both Asana and Monday.com have significantly larger marketing budgets and brand recognition, meaning prospects often arrive with a preference for them before we get a conversation.

Key messages to reinforce with prospects:
• Our average implementation time is 3 weeks vs. 8–12 weeks for comparable Monday.com deployments — ask prospects what a delayed rollout costs them in billable hours.
• We're the only option that connects project status directly to invoicing workflows without a third-party integration.
• Our customer success model includes a dedicated account manager for all accounts over 75 seats — Asana and Monday.com both route these accounts through shared support queues.

Notice what the specific prompt produced: concrete competitive claims, a structure the CEO can act on, and messaging tied to real business outcomes. The vague version of this prompt — 'write a competitive analysis of our project management software' — would have produced generic paragraphs about feature comparisons. The specificity stack (context: who this is for and what company; constraints: 200 words, plain English, no jargon; format: prescribed structure with named sections) transformed a time-consuming editing task into a 90-second copy-paste.

How Different Tools Handle Specificity

Not all AI tools respond to specificity in the same way. The underlying model architecture, context window size, and instruction-tuning approach all affect how well a tool processes detailed, multi-part prompts. Understanding these differences helps you calibrate how much specificity to invest depending on which tool you're using — and where you'll get the highest return on prompt effort.

| Tool | Context Window | Instruction-Following | Best for Specific Prompts | Specificity ROI |
|---|---|---|---|---|
| ChatGPT (GPT-4o) | 128K tokens | Strong — handles multi-part instructions well | Long documents, structured analysis, role-playing personas | High — rewards detailed prompts consistently |
| Claude (Sonnet/Opus) | 200K tokens | Excellent — very high fidelity on complex instructions | Nuanced writing, long-form analysis, precise formatting | Very High — most responsive to specificity of any major model |
| Gemini 1.5 Pro | 1M tokens | Good — occasionally over-interprets constraints | Research synthesis, multimodal tasks, Google Workspace integration | High — especially strong on factual, structured tasks |
| Perplexity | Varies by model | Moderate — optimized for search, not instruction chains | Current events, sourced research, quick factual lookups | Medium — specificity helps but search context dominates |
| Notion AI | Document-scoped | Limited — works best with simple, single-instruction prompts | In-document editing, summarizing existing content | Low-Medium — context is auto-supplied by document |
| GitHub Copilot | Code-scoped | Strong within code context | Code generation, debugging, refactoring with specific requirements | High — inline comments and specific requirements drive quality sharply |

Specificity ROI across major AI tools — how much your prompt investment pays off by platform

Claude's Instruction Fidelity Advantage

In many head-to-head comparisons, Claude (Sonnet and Opus tiers) performs strongly on complex, multi-constraint prompts; Anthropic's training approach places particular weight on following nuanced instructions precisely. If you're writing prompts with five or more specific requirements, Claude is worth testing against your default tool. The Pro tier costs $20/month — the same as ChatGPT Plus.

Applying the Specificity Stack at Work

The specificity stack isn't just a prompting technique — it's a thinking discipline that pays off beyond AI. When you practice articulating context, constraints, and format before generating anything, you often discover you weren't entirely clear on what you needed in the first place. That clarity gap is exactly why so many AI outputs disappoint: the professional typing the prompt had a fuzzy picture of the output in their head, translated that fuzziness into a vague prompt, and then blamed the model when it couldn't read their mind. The specificity habit forces you to crystallize your thinking upstream, which makes the AI's output useful on the first try instead of the fifth.

In practice, this means building a brief mental checklist before any substantive prompt. What role should the AI take — analyst, copywriter, devil's advocate, interviewer? Who is the output for, and what do they already know? What length and format will they actually use? What should the output explicitly avoid? What's the business context that makes this request non-generic? Running through these questions takes under a minute for experienced practitioners, and the difference in output quality is immediately visible. Teams that adopt shared prompt templates — essentially pre-filled specificity stacks for their most common tasks — report that AI tools become genuinely reliable parts of their workflow rather than hit-or-miss experiments.

The compounding effect is significant. Once you've written a highly specific prompt that produces great output, you own a reusable asset. A well-crafted prompt for 'weekly project status update for a risk-averse executive audience' works every week with minimal modification. You're not just prompting better — you're building a library of professional templates that encode your standards and your audience's expectations. ChatGPT's custom instructions feature, Claude's Projects, and Notion AI's templates all support this kind of reuse. The professionals who get the most from AI tools in 2024 aren't the fastest typists — they're the ones who invest once in specificity and reuse that investment repeatedly.
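If you build these templates in code rather than a notes app, the pattern is a dictionary of pre-filled specificity stacks with placeholders. Everything in the sketch below (template names, wording, fields) is illustrative:

```python
# A personal prompt library as code: named templates with holes to fill.

TEMPLATES = {
    "weekly_status": (
        "You are a project manager reporting to a risk-averse executive. "
        "Context: {project}, week of {week}. "
        "Task: write the weekly status update covering progress, risks, and "
        "next steps. Format: three labelled sections. "
        "Constraints: max 150 words, plain language, flag anything that "
        "needs a decision this week."
    ),
    "client_intro_email": (
        "You are an account lead at {firm}. Write a short email introducing "
        "{service} to {client_role}. Tone: {tone}. Max 120 words, single CTA."
    ),
}

def fill(name: str, **details: str) -> str:
    """Return a ready-to-send prompt from a named template."""
    return TEMPLATES[name].format(**details)

print(fill("weekly_status", project="CRM migration", week="24 June"))
```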

Build Your First Specificity Stack Prompt

Goal: Produce one highly specific, reusable prompt template for a real professional task, and directly observe how the specificity stack changes output quality compared to your previous prompting approach.

  1. Choose a real task from your current workload — something you'd normally spend 30+ minutes writing, such as a stakeholder update, a briefing document, or a proposal section.
  2. Open a blank document or note. Write three headings: Context, Constraints, Format.
  3. Under Context: write 2–3 sentences describing your role, the recipient, and the business situation driving this task.
  4. Under Constraints: list at least four specific requirements — word count, tone, what to exclude, level of technical detail, or any mandatory elements.
  5. Under Format: describe the exact structure you want. Name sections, specify if you want bullets vs. prose, and indicate whether headers should be included.
  6. Combine all three sections into a single prompt. Put Context first, Constraints second, Format third, then add your actual request as the final line.
  7. Run this prompt in ChatGPT or Claude. Copy the output into a separate document without editing it.
  8. Score the output from 1–5 on how usable it is without revision. Note which constraints produced the biggest improvement.
  9. Save the prompt template with a descriptive filename. You now have a reusable asset for this task type.

Specificity Looks Different by Role

  • Managers: Specificity means naming the audience (your team vs. your board vs. a client), the decision the output should support, and the tone that matches your leadership style. Generic 'write a team update' prompts produce outputs that don't sound like you.
  • Marketers: Specificity means including brand voice guidelines, the funnel stage you're targeting, the specific customer segment, and what action you want the reader to take. Without these, AI copy sounds like every other brand.
  • Analysts: Specificity means defining the exact output format (table, chart description, structured data), the level of statistical detail required, and which variables to prioritize. Vague analysis requests produce narrative summaries when you needed numbers.
  • Consultants: Specificity means naming the client industry, the stakeholder's known concerns, the framework or methodology you're working within, and the decision that will be made with this information. Generic frameworks don't win client trust.
  • Product Managers: Specificity means including the user persona, the job-to-be-done, the technical constraints of your stack, and the sprint context. 'Write a user story' is nearly useless without these anchors.
  • HR and People Leaders: Specificity means stating the seniority level, the organizational culture, whether the output is internal or external, and the sensitivity level of the topic. A performance improvement communication needs very different framing than a team celebration post.

Analyst Use Case — Quarterly Business Review Narrative

Prompt

You are a senior business analyst preparing a written narrative for a Quarterly Business Review (QBR) at a 200-person B2B software company. The audience is the executive team (CEO, CFO, COO) — they are data-literate but time-poor. Using the following metrics, write a 250-word narrative that: (1) opens with the single most important trend, (2) explains the two metrics that most need executive attention, (3) ends with one recommended action. Use plain language. No passive voice. No phrases like 'it is evident that' or 'the data suggests.' Metrics: ARR grew 18% YoY to $24M. Net Revenue Retention is 104%, up from 98% last quarter. New logo acquisition is down 22% vs. last quarter. CAC increased from $8,400 to $11,200. Sales cycle length increased from 47 to 63 days.

AI Response

Q3's headline story is expansion revenue masking an acquisition problem — and the gap is widening. Net Revenue Retention hit 104%, up six points from last quarter. Existing customers are buying more, renewing reliably, and generating enough expansion to drive 18% ARR growth despite a weak new business quarter. That's the good news, and it's real.

The two metrics that need immediate attention are new logo acquisition (down 22% quarter-over-quarter) and Customer Acquisition Cost (up 33%, from $8,400 to $11,200). These are moving in opposite directions and compounding each other. Fewer new customers are coming in, and each one costs significantly more to land. The sales cycle extension — from 47 to 63 days — points to a mid-funnel problem, likely in the evaluation and procurement stage rather than top-of-funnel awareness. NRR at 104% buys time, but it doesn't fix the acquisition engine. A company that grows primarily through expansion eventually exhausts its installed base.

Recommended action: Commission a win/loss analysis on the last 30 closed-lost deals, focused specifically on deals that stalled or died between the demo stage and contract signature. The 16-day sales cycle extension is precise enough to indicate a process or competitive issue, not a market problem — and that's diagnosable within two weeks with the right analysis.

Over-Specification Has a Cost

There's a point where adding constraints starts to hurt output quality. If you specify too many conflicting requirements — 'be formal but conversational, comprehensive but under 100 words, include all five frameworks but keep it simple' — the model produces awkward compromises. As a rule, keep your constraint list to 4–6 clear, non-conflicting requirements. If you need more than that, consider splitting the task into two sequential prompts.

The Specificity Habits That Separate Good Prompters from Great Ones

  1. Always name the audience — not just 'write for leadership' but 'write for a CFO who prioritizes cash flow and is skeptical of technology investments.'
  2. Specify what the output will be used for — a slide, a Slack message, a client email, a printed handout — because medium shapes appropriate length and tone.
  3. Include one example of the style or tone you want, even a single sentence. 'Write in a tone similar to this: [example]' is one of the highest-ROI instructions you can give.
  4. State explicit exclusions. 'Do not include implementation details,' 'avoid academic language,' and 'do not recommend external consultants' all prevent specific failure modes.
  5. Tell the model what decision or action the output should enable. Outputs anchored to a decision are almost always sharper than outputs anchored to a topic.
  6. When the stakes are high, ask the model to state its assumptions before generating. This surfaces misalignments before you've read 400 words of the wrong thing. A scripted version of this habit follows below.
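Here is habit 6 as a scripted sketch, assuming the OpenAI Python SDK: one call to surface assumptions, a pause for your corrections, then the generation call. The task text and prompts are placeholders.

```python
# Surface assumptions first, correct them, then generate.

from openai import OpenAI

client = OpenAI()
task = "Draft a pricing-change announcement for our enterprise customers."

messages = [{"role": "user", "content":
             task + " Before writing anything, list the assumptions you are "
                    "making about audience, tone, and scope. Then stop."}]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
assumptions = first.choices[0].message.content
print(assumptions)  # review these before asking for a full draft

fixes = input("Corrections to those assumptions (blank if none): ")
messages += [
    {"role": "assistant", "content": assumptions},
    {"role": "user", "content": (fixes or "All assumptions hold.") +
                                " Now write the announcement."},
]
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```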

It's Tuesday morning. Your director asks for a competitive analysis before the 2pm leadership meeting. You open ChatGPT, type 'compare our competitors,' and get four paragraphs of generic business-school prose that could apply to any company in any industry. You paste it into a doc, stare at it, and start rewriting from scratch. The AI didn't fail you — your prompt did. The output was exactly as vague as the input. This is the moment where specificity stops being an abstract principle and becomes a productivity multiplier. The professionals who consistently get useful AI output aren't smarter or more technical; they've simply learned to front-load their prompts with the information the model needs to skip the generic and go straight to the relevant.

What 'Specific' Actually Looks Like Under Pressure

Specificity in prompting has four dimensions that work together: context (who you are and what situation you're in), scope (what's included and what's explicitly excluded), format (how the output should be structured), and constraint (length, tone, or technical depth). Most beginners nail one dimension and ignore the other three. A prompt that gives rich context but no format guidance produces a wall of text you'll spend 20 minutes reformatting. A prompt with perfect format instructions but no context produces beautifully structured nonsense. The Tuesday scenario above fails on all four: no industry, no company name, no framing of what 'competitive' means in your context, no output format, no indication of audience. Adding just two of those dimensions would have produced something usable.

The practical way to build this habit is to run a five-second 'specificity check' before you hit send. Ask yourself: does this prompt tell the AI what I actually know about this situation? Would a talented new hire reading only this prompt be able to produce what I need? If the answer is no, you're missing context. Would they know when to stop? If not, you're missing scope. Would they know how to present it? That's format. Would they know the audience and tone? That's constraint. This isn't a checklist you laboriously fill in — it's a mental posture that becomes automatic after a few dozen prompts. The goal is that your first-draft prompt already contains enough signal that the AI's first-draft response is 70-80% of the way there.

The 'Brilliant Briefing' Mental Model

Before prompting, imagine you're briefing a brilliant contractor who started today. They're highly capable but know nothing about your company, your audience, or your definition of 'good.' Write the prompt you'd give them. That single shift — from 'what do I want?' to 'what does someone capable need to know?' — closes most of the specificity gap immediately.

Two Prompts, Two Completely Different Outputs

Vague prompt — what most people send

Prompt

Write a summary of the risks of our new product launch.

AI Response

Launching a new product involves several key risks. Market risk includes the possibility that customers may not adopt the product as expected. Operational risk covers supply chain disruptions and production delays. Financial risk involves cost overruns and missed revenue targets. Reputational risk can arise if the product underperforms. Mitigation strategies include thorough market research, robust project management, and contingency budgeting. [Five more paragraphs of identical generality follow — none referencing your actual product, market, timeline, or stakeholder concerns.]

Specific prompt — what gets you something useful

Prompt

I'm a product manager at a B2B SaaS company launching a workflow automation tool for mid-market HR teams in Q3. Write a one-page executive risk summary for our VP of Product. Cover three categories: market timing risks (we're entering a crowded space with Rippling and Workato already established), integration risks (our tool must connect with Workday and BambooHR on day one), and internal capacity risks (our CS team is 60% staffed). For each risk, include a likelihood rating (High/Medium/Low), a one-sentence impact statement, and our current mitigation. Tone: direct, no jargon, written for a VP who reads fast.

AI Response

**Product Launch Risk Summary — Q3 Workflow Automation Tool**

**1. Market Timing Risk — HIGH**
Rippling and Workato hold established share with HR teams already in multi-year contracts; late entry reduces first-mover pricing power.
*Mitigation:* Positioning on implementation speed (14-day onboarding vs. industry avg. 60 days) and a 90-day ROI guarantee.

**2. Integration Risk — MEDIUM**
Day-one Workday and BambooHR dependencies create a hard launch blocker if API certification slips.
*Mitigation:* Parallel certification tracks underway; fallback is a CSV import bridge for launch with native sync in month 2.

**3. Internal Capacity Risk — HIGH**
60% CS staffing means each rep carries 140% of target account load; churn risk spikes in months 2-4 post-launch.
*Mitigation:* Contractor onboarding approved; two senior hires targeting start date two weeks pre-launch.

*Prepared for VP of Product review — [date]*

Choosing the Right Tool for Specific, Professional Prompts

| Tool | Strengths for Specific Prompts | Weaknesses | Best Professional Use Case |
|---|---|---|---|
| ChatGPT (GPT-4o) | Follows complex multi-part instructions reliably; strong formatting control | Can over-elaborate if length isn't constrained | Structured reports, risk summaries, client-facing docs |
| Claude (claude.ai) | Excels at long-context specificity; handles large documents as input | Slightly more conversational by default — needs format cues | Analysing lengthy contracts, policy docs, research papers |
| Gemini (Google) | Strong when specificity involves current data or Google Workspace context | Less consistent on highly structured output formats | Market data synthesis, Gmail/Docs-integrated drafting |
| Perplexity AI | Adds real-time source citations to specific research prompts | Not ideal for creative or structured document generation | Competitive intelligence, fact-checking specific claims |
| Notion AI | Operates inside your existing documents — context is already embedded | Limited to Notion's interface; can't handle complex multi-step prompts | Summarising meeting notes, drafting within existing project pages |

Tool selection matters less than prompt quality — but matching tool to task reduces friction.

What Changes at Work on Monday

The immediate behaviour change is simple: stop sending your first-draft prompt. Before you hit send, spend 30 seconds adding the one piece of context that's obvious to you but invisible to the model. Usually it's your role, your audience, or the specific decision this output needs to support. That single addition — 'I'm a senior consultant preparing this for a CFO who will use it to decide whether to approve a $2M budget' — transforms generic output into something you can actually send. You don't need to write a paragraph of preamble every time. One precise sentence of context beats three vague sentences of intent.

The medium-term change is building a personal prompt library. Every time you get an output you're proud of — a risk summary, a stakeholder email, a competitive brief — save the prompt that produced it. Strip out the company-specific details and keep the structure. Within four weeks of consistent use, you'll have 10-15 reusable prompt templates that cover 80% of your recurring work tasks. Teams that do this collectively — sharing a Notion page or Google Doc of 'prompts that work' — compound the benefit across everyone's workflow. This is how AI assistance scales from individual productivity hack to team capability.

The long-term shift is in how you think about your own expertise. Specificity in prompting forces you to articulate what you actually know: your audience, your constraints, your success criteria. Professionals who struggle to write specific prompts often discover they're unclear on those things themselves. The discipline of prompting well becomes a discipline of thinking well. When you can write a 60-word prompt that produces a first-draft deliverable in 30 seconds, you're not outsourcing your thinking — you're making your thinking machine-executable. That's a professional skill that compounds every time AI tools get more capable.

Build Your First Reusable Prompt Template

Goal: Produce one reusable, tested prompt template for a real work deliverable — something you'll actually use again next week.

  1. Choose one recurring deliverable from your actual work — a status update, a client brief, a risk summary, a meeting agenda, or a competitor snapshot.
  2. Open ChatGPT, Claude, or your preferred tool in a fresh session.
  3. Write a vague version of the prompt first (2-3 words or a single generic sentence) and run it. Screenshot or copy the output.
  4. Now write a specific version using all four dimensions: context (your role + situation), scope (what's in and what's out), format (structure, headers, length), and constraint (tone, audience, technical level).
  5. Run the specific prompt and compare the two outputs side by side.
  6. Identify the three most impactful words or phrases in your specific prompt — the ones that most changed the output quality.
  7. Strip your prompt of all company-specific details, replacing them with [BRACKETS] as placeholders (see the sketch after this list).
  8. Save this template in a document titled 'Prompt Templates — [Your Name]' with a one-line description of when to use it.
  9. Send the template to one colleague and ask them to test it with their own details — their feedback will show you where your placeholders need more specificity.
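Step 7's bracketed placeholders lend themselves to a small fill-in script. In the sketch below, the regex, the placeholder convention, and the example template are all our own illustration:

```python
# Reusing a [BRACKETED] prompt template: find the placeholders, ask for
# each value, substitute them in.

import re

template = (
    "You are a [ROLE] at a [COMPANY_TYPE]. Write a [DELIVERABLE] for "
    "[AUDIENCE]. Constraints: max [WORD_LIMIT] words, [TONE] tone."
)

def fill_template(template: str) -> str:
    """Prompt for each distinct placeholder, then substitute it in."""
    names = sorted(set(re.findall(r"\[([A-Z_]+)\]", template)))
    values = {name: input(f"{name}: ") for name in names}
    return re.sub(r"\[([A-Z_]+)\]", lambda m: values[m.group(1)], template)

if __name__ == "__main__":
    print(fill_template(template))
```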

Specificity by Role — Where It Pays Most

  • Managers: Specify the decision the output must support — 'my team of 8 needs to decide X by Friday' — and AI stops giving you analysis and starts giving you decision-ready framing.
  • Analysts: Name the exact metric, the time period, and the comparison baseline. 'Q3 vs Q2 churn rate for enterprise accounts' outperforms 'customer retention trends' by an order of magnitude.
  • Marketers: State the channel, the audience segment, and the single conversion action. A prompt for 'LinkedIn post' and one for 'LinkedIn post targeting CFOs of 200-person SaaS companies, driving demo sign-ups' produce completely different copy.
  • Consultants: Include the client's industry, the stakeholder you're writing for, and the narrative tension — what problem this document is solving. Generic frameworks become client-specific recommendations.
  • HR professionals: Specify employment jurisdiction, company size, and whether you need policy language vs. manager guidance vs. employee-facing communication — these require completely different registers.

Key Takeaways

  1. Specificity has four dimensions — context, scope, format, and constraint. Weak prompts miss at least two of these.
  2. The 'brilliant new hire' mental model is the fastest way to identify what's missing from your prompt before you send it.
  3. One precise sentence of context — your role, your audience, the decision at stake — does more work than three vague sentences of intent.
  4. Tool choice matters less than prompt quality, but Claude handles long-context specificity best, Perplexity adds citations, and ChatGPT follows complex formatting instructions most reliably.
  5. Saving your best-performing prompts as reusable templates is the highest-ROI habit you can build in the first month of consistent AI use.
  6. The discipline of writing specific prompts trains you to articulate your own expertise more precisely — a compound benefit that goes beyond AI productivity.

Knowledge Check

A marketing manager sends this prompt: 'Write a social media post about our sale.' Which addition would most improve the output quality?

According to the four dimensions of specificity, which dimension is missing from this prompt: 'Write a risk summary for our product launch. Include market, operational, and financial risks. Use bullet points.'

A consultant wants to analyse a 40-page client contract for liability clauses. Based on the comparison table, which tool is best suited for this task?

A senior analyst saves a prompt that produced an excellent competitive brief, strips out the company-specific details, and replaces them with bracketed placeholders. What is the primary professional benefit of this action?

A manager reads this lesson and concludes: 'I should write longer prompts to be more specific.' Is this conclusion correct?
