Controlling length and format: getting output that's ready to use
In 2023, the content team at HubSpot began embedding ChatGPT into their blog production workflow. The early results were frustrating in a very specific way. The writing wasn't bad — it was just wrong-shaped. Ask for a 150-word product description and you'd get 400 words with a header, three bullet points, and a closing paragraph that summarized what the bullet points just said. Ask for a quick email draft and you'd receive something formatted like a business memo, complete with a subject line suggestion and a sign-off recommendation nobody asked for. The model was being helpful in the way an overeager intern is helpful: doing more than requested, in a format that created more editing work than starting from scratch.
HubSpot's team eventually solved this — not by using a different model, but by changing how they wrote prompts. They started specifying format explicitly: word counts, structural requirements, what to omit. A copywriter named Sarah Bedrick documented the shift internally: prompts that took 20 seconds to write were being replaced by prompts that took 90 seconds but produced output that went straight into their CMS without editing. The time investment moved from output to input, and total time per piece dropped by roughly 40%. The model hadn't changed. The instructions had.
This is the core tension in AI-assisted writing work: models like GPT-4, Claude, and Gemini are trained to be thorough and to demonstrate capability. Left to their own defaults, they produce output optimized for appearing complete and impressive, not output optimized for your specific workflow. A response that looks great in a demo often requires significant reformatting before it's actually usable. The fix isn't a better model — it's a more precise prompt. Format and length control are not cosmetic preferences. They are functional requirements, and treating them that way changes everything about how you work with AI tools.
Why models default to longer output
The Default Problem: What Models Do When You Don't Specify
Consider what happens when a product manager at a SaaS company asks Claude: "Write a summary of our Q3 feature releases for the company newsletter." Without format guidance, Claude will make a series of silent decisions. It will choose a length — probably 300-500 words. It will decide whether to use bullet points or prose. It will likely add a header. It may group features by theme and create subheadings. None of these choices are wrong in the abstract. But the newsletter editor may need exactly 120 words to fit the layout. The marketing team may have a no-headers rule for that section. The PM's manager may have already decided on a prose format. The model's reasonable defaults collide with a dozen invisible organizational constraints, and the output becomes a first draft of a first draft rather than something ready to publish.
The deeper issue is that models don't know what "ready to use" means for your context. That definition varies by role, by tool, by audience, and by moment. A Slack message that's "ready to use" is four lines of plain text with no formatting. A Notion document that's "ready to use" has headers, toggle lists, and callout blocks. A client-facing PDF summary that's "ready to use" has a specific word count, a formal tone, and zero bullet points because the client's legal team finds them informal. The model cannot know any of this unless you say so. Your prompt is the only channel through which that context can flow.
There's a useful mental model here borrowed from software development: think of the AI's output as an API response. When you call an API, you specify the data format you want — JSON, XML, plain text. You specify fields. You specify limits. You don't accept whatever format the server feels like returning and then reformat it manually on your end. Prompting for format and length works the same way. You are specifying the contract for what comes back. The more precisely you define that contract, the less post-processing you do, and the more the tool actually accelerates your work rather than shifting the labor around.
Prompt
VAGUE:
Write a summary of our Q3 feature releases for the company newsletter.

PRECISE:
Write a summary of our Q3 feature releases for our internal company newsletter. Requirements: exactly 120 words, plain prose (no bullet points, no headers), past tense, written for a non-technical audience. The three features to cover are: AI-powered search (launched August), bulk export (launched September), and the redesigned dashboard (launched October). End with one sentence about what's coming in Q4.
AI Response
The precise prompt produces a 120-word prose paragraph that drops directly into the newsletter template. The vague prompt produces a 380-word structured piece with a header, three bullet sections, and a closing paragraph — requiring significant editing before it fits the layout. Same underlying request. The difference is entirely in the format specification.
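If you work with the API rather than the chat window, the contract framing becomes literal. Here is a minimal sketch using the OpenAI Python SDK: the format block travels with every request, and a client-side word count enforces the contract before the text reaches its destination. The model name, word limit, and tolerance are assumptions to adjust for your own setup.

```python
# A minimal sketch of the "output as contract" idea using the OpenAI
# Python SDK. Assumptions: OPENAI_API_KEY is set in the environment,
# the model name is a placeholder, and the 120-word target mirrors
# the precise prompt above.
from openai import OpenAI

client = OpenAI()

FORMAT_SPEC = (
    "Requirements: exactly 120 words, plain prose (no bullet points, "
    "no headers), past tense, written for a non-technical audience."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your team runs
    messages=[{
        "role": "user",
        "content": "Write a summary of our Q3 feature releases for "
                   "our internal company newsletter.\n\n" + FORMAT_SPEC,
    }],
)

draft = response.choices[0].message.content
words = len(draft.split())

# Models approximate word counts, so verify the contract client-side
# before the text lands in the newsletter template.
if words > 130:  # small tolerance over the 120-word target
    print(f"Spec violated: {words} words. Re-prompt or trim.")
else:
    print(draft)
```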
A Legal Team Learns to Specify
The legal operations team at a mid-sized UK law firm — Mishcon de Reya — began piloting Microsoft Copilot for drafting client update emails in early 2024. Their initial frustration mirrored HubSpot's: the tool kept producing emails that were too long, too formal even by legal standards, and structured with numbered lists that their style guide explicitly prohibited for client-facing correspondence. A senior associate described it as "getting back a document when we asked for an email." The format mismatch wasn't a minor inconvenience — it created compliance concerns, because overly structured client communications could imply a level of formality that carried legal implications about the nature of the advice being given.
Their solution was a prompt template, co-developed by the operations team and a senior partner, that embedded format requirements directly into every request. The template specified: maximum 200 words, two paragraphs, no numbered or bulleted lists, formal but not legalistic tone, no passive voice in the first sentence. Associates filled in the content details and left the format block unchanged. Output quality became consistent enough that a junior associate's first draft and a partner's first draft were structurally indistinguishable — which was exactly the point. The firm's brand voice became a function of prompt engineering rather than years of socialization into house style.
Format Controls: What You Can Actually Specify
| Format Dimension | What to Specify | Example Instruction | When It Matters |
|---|---|---|---|
| Length | Word count, sentence count, or paragraph count | "Exactly 150 words" or "three sentences maximum" | CMS character limits, slide decks, email templates |
| Structure | Headers on/off, bullets on/off, numbered lists on/off | "Plain prose only, no headers or lists" | Client comms, legal docs, brand-voice copy |
| Hierarchy | Number of heading levels, nesting depth | "One level of headers only, no subheaders" | Reports, briefs, internal wikis |
| Tone register | Formal/informal, technical/plain, first person/third person | "Conversational, first person, no jargon" | Slack messages, customer-facing support, social posts |
| Output medium | Markdown, plain text, HTML, JSON | "Return plain text with no markdown formatting" | Pasting into tools that don't render markdown |
| Sections to include/exclude | Named sections required or forbidden | "No introduction, no conclusion, start directly with the recommendations" | Tight-format deliverables, exec summaries |
| Voice constraints | Active/passive, sentence length, prohibited words | "Active voice throughout, sentences under 20 words" | Accessibility, legal precision, brand style guides |
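For teams that script their prompting, the table above can be encoded directly. The sketch below (plain Python, with illustrative constraint values rather than anything prescriptive) assembles a reusable format block from whichever dimensions matter for a given destination; the client-email values mirror the Mishcon de Reya template described earlier.

```python
# A sketch of turning the table's seven dimensions into a reusable
# format block. The specific constraint values are illustrative, not
# prescriptive: encode whatever your destination actually requires.

def format_block(length=None, structure=None, hierarchy=None,
                 tone=None, medium=None, sections=None, voice=None):
    """Assemble explicit format instructions from the chosen dimensions."""
    parts = [p for p in (length, structure, hierarchy, tone,
                         medium, sections, voice) if p]
    return "Format requirements:\n" + "\n".join(f"- {p}" for p in parts)

# Values modeled on the law-firm email template described above.
client_email_spec = format_block(
    length="Maximum 200 words, two paragraphs.",
    structure="No numbered or bulleted lists.",
    tone="Formal but not legalistic.",
    voice="No passive voice in the first sentence.",
)

prompt = ("Draft a client update email about the filing deadline.\n\n"
          + client_email_spec)
print(prompt)
```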
A Data Analyst's Formatting Problem
Priya Nair is a senior data analyst at a retail chain with 200 stores across Southeast Asia. Her job involves producing weekly summaries of sales performance data for regional managers — people who read these summaries on their phones during commutes and make decisions based on them within hours. When she started using ChatGPT to help draft these summaries, the default output was a 600-word prose narrative with an executive summary, a methodology note, and a "limitations" section. Exactly zero of those elements were wanted. Regional managers needed three numbers, a one-line interpretation, and a recommended action — formatted to fit a mobile screen without scrolling.
Priya's solution was to specify output as a strict template that she pasted into every prompt. The template defined four labeled fields: "This week vs. last week" (one number and a direction), "Key driver" (one sentence), "Watch out for" (one sentence), and "Suggested action" (one sentence). ChatGPT's job was to fill the template, not to decide the format. The output went directly into a WhatsApp message to the regional manager group — no editing, no reformatting. What had been a 25-minute task became a 6-minute task. The format constraint wasn't limiting the model's usefulness; it was the entire source of the model's usefulness in that workflow.
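Priya's pattern generalizes to a few lines of code if you want to automate the weekly run. In this sketch, the template string carries the entire format contract and the model's only job is to fill the labeled fields; the field names match the ones described above, and the sales-data variable is a hypothetical stand-in.

```python
# A sketch of the fill-the-template pattern: the template defines the
# format, and the model fills the labeled fields. The sales data here
# is a hypothetical stand-in for a real weekly extract.

WEEKLY_TEMPLATE = """Fill in this template using the sales data below.
Return ONLY the completed template, nothing else.

This week vs. last week: [one number and a direction]
Key driver: [one sentence]
Watch out for: [one sentence]
Suggested action: [one sentence]"""

sales_data = "Region A: $1.2M (prev $1.1M); promo lift in stores 12-30..."

prompt = WEEKLY_TEMPLATE + "\n\nSales data:\n" + sales_data
# Send `prompt` to ChatGPT; the response pastes straight into the
# regional managers' WhatsApp group with no reformatting.
```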
Use your actual destination as your format spec
What Happens When You Get Format Right
The HubSpot, Mishcon de Reya, and Priya Nair examples share a structural pattern worth naming explicitly. In each case, the professional's first instinct was to evaluate the model's capability — is this tool good enough? But the actual problem was never capability. GPT-4, Claude 3, and Copilot are all capable of producing a 120-word newsletter paragraph, a 200-word client email, or a four-field mobile summary. The gap between what the model could produce and what the workflow needed was a specification gap, not a capability gap. Every hour spent reformatting AI output is an hour that should have been spent writing a better prompt.
Format specification also has a compounding effect on quality beyond just structure. When you tell a model "three sentences maximum," it doesn't just truncate — it prioritizes. The most important information rises to the top because there's no room for anything else. When you say "no bullet points, plain prose," you force the model to create transitions and logical flow rather than hiding the absence of connection behind a dash and a line break. Constraints produce better thinking, not just better formatting. This is why the Mishcon de Reya partners found that format-constrained output was also substantively sharper — the 200-word limit forced the model to make editorial decisions it would otherwise avoid.
The practical implication is that format instructions belong in every prompt where the output has a destination. That's most professional prompts. If you're drafting something that will be sent, published, presented, or shared — specify the format. If you're exploring ideas or brainstorming, format matters less; let the model be expansive. But the moment output transitions from exploration to production, format specification is not optional polish. It's the difference between a tool that saves you time and a tool that moves your work around without reducing it. The prompts in the next examples will make this operational.
Goal: Produce three reusable prompt templates with embedded format specifications that generate output requiring zero reformatting for three recurring work tasks.
1. Identify three recurring AI-assisted writing tasks in your current role — these could be emails, reports, summaries, social posts, or internal updates. Write them down by name (e.g., "weekly status update to my manager", "LinkedIn post about company news", "client meeting recap email").
2. For each task, open the actual destination where the output lands — the email client, the CMS, the document template, the messaging app. Look at three existing examples of that content.
3. For each task, write down the observable format characteristics of those examples: approximate word count, whether they use bullets or prose, heading structure (or lack of it), tone register, and any structural elements that appear consistently.
4. Convert those observations into explicit format instructions written in plain imperative language. Example: "Exactly 80 words. Plain prose, no bullets or headers. Conversational tone, first person. End with a specific next step."
5. Write a full prompt for one of your three tasks that includes both the content request AND your format specification block. Keep the format block at the end of the prompt, separated by a blank line.
6. Run the prompt in ChatGPT or Claude. Paste the output into its actual destination — the email draft, the document, the post composer. Assess whether it fits without editing.
7. If it doesn't fit cleanly, identify exactly what's wrong (too long? wrong structure? wrong tone?) and revise only the format specification. Re-run the prompt. Repeat until the output drops in without editing.
8. Save your working format specification as a reusable block — a note in Notion, a snippet in your email client, or a saved ChatGPT prompt. This is the start of your format library.
9. Repeat steps 5-8 for the remaining two tasks. By the end, you have three ready-to-use prompt templates that produce output shaped for your actual workflow.
Key Principles from These Examples
- Models default to thorough and structured — not because it's better, but because they're trained to appear capable. Your prompt overrides that default.
- Format mismatch creates more editing work than a bad first draft. A wrong-shaped response requires restructuring, not just revision.
- Specifying format is specifying a contract for the output. The more precisely you define it, the less post-processing you do.
- Work backwards from the destination. The format spec already exists in the place where the output lands — your job is to describe it.
- Constraints improve substance, not just structure. Word limits force prioritization; prose requirements force logical connection.
- Format instructions belong in any prompt where output has a destination — emails, reports, posts, summaries, anything that gets sent or published.
- The gap between AI output and usable output is almost always a specification gap, not a capability gap. The model can do it; you just haven't told it how.
Takeaways
- ChatGPT, Claude, and Gemini each have different default format behaviors — knowing your tool's defaults tells you what to override.
- Seven dimensions of format you can control: length, structure, hierarchy, tone register, output medium, sections to include/exclude, and voice constraints.
- A prompt template with a fixed format block — like Mishcon de Reya's email template — makes output quality consistent across different users and skill levels.
- Format specification has compounding benefits: it produces better-structured output AND substantively sharper content by forcing the model to prioritize.
- "Ready to use" is context-specific. Define it explicitly for every workflow, then encode that definition into your prompts.
- Saving format specifications as reusable blocks — in Notion, email snippets, or saved prompts — converts one-time effort into permanent workflow acceleration.
When Format Becomes the Message
In 2023, McKinsey's digital team ran an internal experiment. Consultants were using early GPT-4 access to draft client-facing summaries, and the outputs kept coming back from partners with the same complaint: too long, wrong structure, reads like a blog post. The consultants weren't writing bad prompts — they were writing prompts that ignored format entirely. They'd ask for a 'summary of key risks' and get four flowing paragraphs when the client deck needed five bullet points, each under twelve words. The content was accurate. The format made it unusable. Someone had to manually reformat every output before it touched a slide.
The team's fix was deceptively simple. They built a shared prompt library where every template ended with an explicit format block — specifying bullets vs. prose, maximum length per item, whether headers were needed, and what to omit. Output quality didn't change. Usability transformed overnight. Partners stopped complaining. The lesson wasn't about AI capability; it was about the gap between 'good content' and 'ready-to-use content.' That gap is almost always a format problem, and it's almost always solvable in the prompt itself.
This is the core tension that runs through every professional use of ChatGPT, Claude, or Gemini: the model will produce something coherent by default, but 'coherent' and 'ready to paste into your deliverable' are different standards. Default outputs are optimized for readability in a chat window — not for slide decks, briefing docs, email threads, or client reports. Every format instruction you skip is a formatting task you're doing manually afterward. Once you internalize that, you stop thinking of format as optional polish and start treating it as a core part of the prompt.
What Models Do Without Format Instructions
The Newsroom That Stopped Editing AI Copy
The Associated Press started using AI to generate earnings report summaries back in 2014 — long before most organizations were paying attention. By the time they expanded to sports recaps and other structured content, their editors had developed a hard rule: if a piece needs more than two human edits before publication, the prompt is broken, not the output. This reframing changed everything about how their team worked with AI tools. The question shifted from 'is this output good?' to 'what instructions would produce output that needs zero edits?' Format was always the first variable they'd tune.
Their earnings summaries follow a rigid structure: company name and ticker, one-sentence headline result, three bullet points on key figures, one sentence on guidance, one sentence on analyst reaction. Every prompt specifies this structure explicitly, with character limits on the headline and word limits on each bullet. The AP's editors aren't fixing prose — they're verifying facts. That's the right division of labor between human and machine. Format instructions aren't a nicety; they're what makes that division of labor possible in the first place.
The principle the AP stumbled onto applies directly to your workflow, regardless of what you're producing. Every output type you regularly need — status updates, competitive summaries, meeting follow-ups, proposal sections — has an implicit structure you already know. You recognize good format when you see it. The only step most professionals skip is writing that structure down and putting it in the prompt. When you do, the model stops guessing and starts executing. The editing time drops to near zero.
Prompt
Summarize the following Q3 earnings call transcript using this exact structure:

1. HEADLINE: [Company name] [beats/misses/meets] expectations — one sentence, max 15 words
2. KEY FIGURES: Three bullet points. Each bullet: one metric, one number, one comparison to prior year. Max 20 words per bullet.
3. GUIDANCE: One sentence on forward guidance. Max 20 words.
4. ANALYST REACTION: One sentence summarizing analyst tone. Max 15 words.

Do not add commentary, headers beyond what's listed, or additional paragraphs.

[Transcript pasted here]
AI Response
HEADLINE: Salesforce beats Q3 expectations on strong cloud subscription growth

KEY FIGURES:
• Revenue hit $8.7B, up 11% year-over-year, ahead of $8.4B consensus
• Operating margin expanded to 17.2%, versus 14.1% in Q3 last year
• Remaining performance obligations rose to $26.4B, up 21% year-over-year

GUIDANCE: Company raised full-year revenue outlook to $34.8B from prior $34.5B guidance.

ANALYST REACTION: Analysts broadly positive, citing margin improvement as key upside surprise.
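The AP's division of labor (the model formats, editors verify facts) can be partially automated. The sketch below is a minimal structure checker, assuming output shaped like the response above: it confirms each labeled section exists and respects its word limit before a human looks at the facts. The limits mirror the prompt, and the function is ours for illustration, not anything the AP has published.

```python
# A sketch of a structure checker for the earnings-summary format
# above. It verifies labels, word limits, and bullet counts; fact
# checking stays with the human editor.
import re

LIMITS = {"HEADLINE": 15, "GUIDANCE": 20, "ANALYST REACTION": 15}

def check_summary(text):
    problems = []
    for label, max_words in LIMITS.items():
        match = re.search(rf"{label}:\s*(.+)", text)
        if not match:
            problems.append(f"Missing section: {label}")
        elif len(match.group(1).split()) > max_words:
            problems.append(f"{label} exceeds {max_words} words")
    bullets = re.findall(r"^\s*•\s*(.+)$", text, flags=re.MULTILINE)
    if len(bullets) != 3:
        problems.append(f"Expected 3 key-figure bullets, got {len(bullets)}")
    return problems or ["OK: structure verified; facts still need review"]
```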
Format Variables You Can Control
Most professionals think of format as a binary choice — bullet points or prose. The actual control surface is much richer than that, and understanding it lets you tune outputs with surgical precision. Length is one dimension: you can specify total word count, word count per section, number of sentences per paragraph, or maximum characters per line. Structure is another: numbered lists, nested bullets, headers and subheaders, tables, labeled sections, or pure flowing prose. Density is a third: how much explanation per point, whether examples are included, whether caveats appear. Each of these is independently adjustable in a single prompt.
| Format Variable | What You Control | Example Instruction | Best Used When |
|---|---|---|---|
| Length | Total output size | 'Max 150 words total' | Output goes into a fixed space (slide, email) |
| Structure | How content is organized | 'Three sections with H2 headers' | Reader will scan, not read linearly |
| Density | Detail per point | 'One sentence per bullet, no elaboration' | Busy audience, decision-support context |
| Tone markers | Formality signals | 'No hedging language, no caveats' | Executive or client-facing output |
| Inclusions | What must appear | 'Always include a recommended action' | Outputs used to drive decisions |
| Exclusions | What to omit | 'No background context, assume expert reader' | Audience already knows the domain |
| Sequence | Order of elements | 'Lead with the recommendation, then rationale' | Time-pressed reader, bottom-line-first culture |
How a Healthcare Team Cut Report Time by 60%
A clinical operations team at a mid-sized hospital network was using ChatGPT to help draft monthly quality reports — documents that go to department heads and the board. Their first attempts produced fluent, well-organized prose that their medical director still spent ninety minutes reformatting each month. The content was right; the format was academic when it needed to be executive. Headers were too granular. Paragraphs ran long. Recommendations were buried at the end instead of leading each section. The model wasn't wrong — it was producing the kind of document that dominated its training data.
Their solution came from reverse-engineering an existing report the board had praised. They identified the exact structure: a one-paragraph executive summary under 100 words, four sections each starting with a bolded single-sentence finding, two to three supporting bullets per section, and a final 'Actions Required' table with three columns — action, owner, deadline. They encoded this structure into a reusable prompt template. The next month's report took twenty minutes to produce and twelve minutes to verify. The medical director stopped touching the format entirely. That's what format specificity buys: not just better output, but reclaimed time.
Reverse-Engineer Your Best Existing Documents
The Density Problem Most Professionals Don't Notice
Length and density are not the same thing, and confusing them produces a subtle failure mode that's harder to diagnose than obvious length problems. A 200-word output can be dense with insight or diluted with hedges, qualifications, and restatements of the question. ChatGPT in particular has a strong default toward what you might call 'epistemic caution' — adding phrases like 'it's important to consider,' 'there are multiple perspectives,' and 'results may vary depending on context.' These phrases aren't wrong, but they consume words without adding information, and they can make outputs feel soft when you need something decisive.
Density instructions solve this. Phrases like 'be direct, no hedging,' 'state conclusions confidently,' 'omit caveats unless they change the recommendation,' or 'write for an expert audience who doesn't need context explained' dramatically change the texture of a response without changing its length. A strategy consultant who needs a crisp competitive positioning statement doesn't want 'Company X may be well-positioned in certain market segments depending on execution.' They want 'Company X owns the mid-market segment in APAC — their primary vulnerability is enterprise sales capacity.' Same information, different density, completely different usability.
The flip side matters too. Sometimes you want elaboration — when training junior staff, drafting explanatory memos for non-expert stakeholders, or producing content where the reasoning needs to be visible. In those cases, 'explain your reasoning,' 'include a brief example for each point,' or 'define any technical terms inline' shifts the model toward more expansive, pedagogical output. The model handles both registers well once you specify which one you need. The default is somewhere in the middle, which is rarely exactly right for any specific professional context.
Prompt
VERSION A (high density):
You are advising a CFO. In three bullet points, max 15 words each, state the top risks of acquiring a company with negative free cash flow. Be direct. No hedging.

VERSION B (explanatory):
You are briefing a junior analyst. Explain the top three risks of acquiring a company with negative free cash flow. For each risk, explain why it matters and give a real-world example. Assume they have basic finance knowledge but haven't seen an M&A deal.
AI Response
VERSION A OUTPUT:
• Integration costs compound cash burn before synergies materialize — runway shrinks fast
• Debt covenants may trigger on combined entity's deteriorated coverage ratios
• Valuation multiples assume cash flow improvement that management may not deliver

VERSION B OUTPUT:
[Longer, explained response with examples like WeWork's acquisition attempts, each risk developed over 3-4 sentences with context for why a junior analyst should care about it in due diligence]
What This Means in Practice
The format variables in the table above aren't meant to be used all at once. In practice, you identify the two or three that matter most for a given output type and encode them explicitly. For a slide deck bullet, length and density are critical — structure is already determined by the slide. For a client briefing memo, structure and sequence matter most — the reader's time is short and they need to find the recommendation fast. For a training document, density and inclusions drive quality — you want elaboration, examples, and inline definitions. Matching format instructions to output context is a skill that develops quickly with practice.
One underused technique is showing rather than telling. Instead of describing the format you want, you can paste an example of a previous output that hit the mark and say 'match this format exactly.' Claude handles this particularly well — it will extract the structural pattern from your example and apply it to new content. This works for tables, memos, executive summaries, and any document type where you have a strong existing template. The model becomes a format-replicator, not a format-inventor, which is often exactly what you need when outputs must conform to organizational standards.
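For API users, showing rather than telling is a one-message pattern. Here is a minimal sketch with the Anthropic Python SDK, assuming you have a past output worth copying; the model name and the exemplar contents are placeholders.

```python
# A sketch of "match this format exactly" via the Anthropic Python
# SDK. Assumptions: ANTHROPIC_API_KEY is set, the model name is a
# placeholder, and the exemplar is a real past output you supply.
import anthropic

client = anthropic.Anthropic()

exemplar = """SUMMARY: ...
KEY RISKS:
- ...
RECOMMENDED ACTION: ..."""  # paste an output that hit the mark

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            "Here is a previous briefing whose format was exactly right:\n\n"
            f"{exemplar}\n\n"
            "Match this format exactly for the new material below. "
            "Same sections, same lengths, same labels.\n\n"
            "[new content here]"
        ),
    }],
)
print(message.content[0].text)
```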
The most important shift is treating format instructions as permanent infrastructure in your prompt library, not something you add on the fly. Every time you get an output that needs reformatting, that's diagnostic information: your prompt is missing a format instruction. Add it, save the updated prompt, and you'll never fix that same format problem again. Over time, your prompt templates become precision instruments — each one producing output that drops directly into your workflow. That's the compounding return on investing in format control: the first good prompt takes five minutes to build and saves an hour a month forever.
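In code, "permanent infrastructure" can be as simple as a dictionary of named templates. The sketch below is illustrative (the template names and constraint values are assumptions); the point is the pattern: a task stem plus a fixed format block, looked up by name.

```python
# A sketch of a prompt library as infrastructure: each entry pairs a
# task stem with its format block. Names and constraints here are
# illustrative; fold every reformatting fix back into the template.

TEMPLATES = {
    "weekly_status": (
        "Summarize this week's progress for my manager.\n\n"
        "Format: exactly 80 words, plain prose, no bullets or headers, "
        "first person, end with one specific next step."
    ),
    "slide_bullets": (
        "Condense the following into slide bullets.\n\n"
        "Format: max 5 bullets, 8-12 words each, fragments not "
        "sentences, no sub-bullets."
    ),
}

def build_prompt(name, content):
    """Look up a saved template and attach the content to process."""
    return TEMPLATES[name] + "\n\nContent:\n" + content

print(build_prompt("weekly_status", "Shipped the export feature..."))
```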
Goal: Produce one reusable, format-controlled prompt template that generates output requiring no manual reformatting for a recurring document type in your actual workflow.
1. Choose one document type you produce at least twice a month — a status update, competitive brief, meeting summary, or similar recurring output.
2. Find the best existing example of that document type from your files — one that your audience received well.
3. Analyze its structure: count sections, measure approximate word counts per section, note whether findings lead or follow evidence, identify any tables or lists.
4. Write a format specification block based on that analysis — list every structural element with its length constraint and sequence position.
5. Open ChatGPT or Claude and write a prompt for that document type that includes your format specification block at the end.
6. Run the prompt on a real piece of content you need to process this week.
7. Compare the output to your original example — identify any format gaps and add one instruction to close each gap.
8. Save the refined prompt as a named template in a document or Notion page you'll actually return to.
9. Use the template for the next three instances of that document type and note how much reformatting time you spend — it should approach zero.
Principles Extracted from These Examples
- Format is not polish — it's infrastructure. Every format instruction you skip becomes manual work after the output is generated.
- The model's defaults are optimized for chat readability, not for your specific deliverable. Explicit instructions always override defaults.
- Length, structure, density, sequence, inclusions, and exclusions are independently controllable — use the variables that matter for your specific output context.
- Reverse-engineering a document your audience has praised is the fastest way to build a format specification that actually works.
- Showing the model an example output and saying 'match this format' is often faster and more precise than describing the format in words.
- Density instructions — 'be direct,' 'no hedging,' 'expert audience' — change the texture and usability of output without changing its length.
- Prompt templates with embedded format specifications are compounding assets: built once, they eliminate the same reformatting problem forever.
When Format Is the Difference Between Used and Ignored
In 2023, the content team at HubSpot ran an internal experiment. They were using early GPT-4 access to draft blog outlines, and two writers were getting wildly different results from nearly identical prompts. One writer's outputs landed directly in their editorial workflow. The other's required 20 minutes of cleanup before anyone could touch them. The difference wasn't the topic, the tone, or even the quality of the ideas. It was format. The first writer had started specifying exactly how she wanted the output structured — headers, word counts per section, bullet constraints. The second was still prompting like he was asking a colleague a question and hoping for the best.
HubSpot's content lead documented this internally as a workflow issue, not a quality issue. The AI was producing good thinking either way. What it wasn't doing was producing work-ready output unless explicitly told what work-ready looked like. This distinction matters enormously at scale. When your team runs 50 prompts a day, the difference between outputs that need reformatting and outputs that don't is measured in hours, not minutes. Format instructions aren't cosmetic. They are the scaffolding that turns AI output into professional deliverables.
The Principle Behind the Story
AI models default to a format that feels natural given the prompt — which usually means flowing prose, moderate length, and whatever structure seems conversationally appropriate. That default is rarely what professionals need. The model isn't being lazy or careless; it genuinely doesn't know whether you're reading on a phone, pasting into a slide deck, or filing a client report. When you specify format explicitly — length, structure, heading style, list constraints — you're not over-engineering your prompt. You're giving the model the context it needs to produce something you can actually use without touching it again.
What 'Format' Actually Covers
How a Law Firm Stopped Editing AI Output
Freshfields Bruckhaus Deringer, one of the UK's Magic Circle law firms, began piloting AI drafting tools in 2023 for internal knowledge summaries — short briefings that associates could read before client calls. Early drafts came back as dense paragraphs with no clear hierarchy, which associates were rewriting entirely. A senior knowledge lawyer on the project made one intervention: she added a rigid format block to every prompt. Each output had to follow a fixed template — a three-sentence summary, a bulleted list of key risks (maximum five), a single recommended action, and a word ceiling of 200. Editing time dropped by roughly 70%.
What Freshfields discovered is that format constraints don't reduce quality — they redirect the model's energy. Instead of deciding how to organize information, the model focuses on which information matters most. A 200-word ceiling with a fixed structure forces prioritization in a way that an open-ended prompt never does. The associates weren't getting less information; they were getting better-filtered information in a shape they could use immediately. This is the professional use case for format control in its purest form: not decoration, but cognitive triage.
Prompt
Summarize the key legal risks in the following contract clause for an associate preparing for a client call. Format your response exactly as:

SUMMARY: [3 sentences max]
KEY RISKS:
- [Risk 1, one sentence]
- [Risk 2, one sentence]
- [Risk 3, one sentence — add a 4th or 5th only if critical]
RECOMMENDED ACTION: [1 sentence]

Total response must not exceed 200 words.

[Paste clause here]
AI Response
SUMMARY: This indemnification clause places uncapped liability on the client for third-party IP claims arising from any use of the licensed software. The carve-out for gross negligence is narrowly drafted and unlikely to apply in most dispute scenarios. Clients in regulated industries face particular exposure under this wording.

KEY RISKS:
- Uncapped indemnity creates unlimited financial exposure for the client
- 'Use' is undefined, potentially extending liability to downstream customers
- No reciprocal indemnity from the licensor is included
- 14-day notification window for claims is unusually short and may be missed

RECOMMENDED ACTION: Negotiate a mutual indemnity cap tied to contract value before signing.
A Different Context: The Marketing Dashboard Problem
A growth marketing manager at Shopify's partner ecosystem team faced a different format challenge. She was using ChatGPT to analyze campaign performance data and generate summaries for weekly exec reviews. The AI kept producing narrative paragraphs — well-written, accurate, but useless for a leadership team that wanted numbers and verdicts at a glance. She wasn't getting bad analysis. She was getting the wrong container for it. Her fix was to prompt for a specific table structure: metric, last week's value, this week's value, percentage change, and a one-word status flag (Up / Down / Stable). Executives started reading the summaries instead of skipping them.
Her experience points to something the HubSpot and Freshfields examples also show: the right format isn't universal, it's audience-specific. A table serves an exec review. A bulleted risk list serves a time-pressed associate. A 200-word narrative serves a customer reading a product update email. The same underlying information needs a different container depending on who reads it and what decision they need to make. When you write prompts, you're not just directing the AI — you're designing for your reader.
| Context | Best Format | Length Target | Key Constraint |
|---|---|---|---|
| Executive summary | Table or 3-bullet verdict | Under 150 words | No prose paragraphs |
| Client-facing email | Short paragraphs, no headers | 200–300 words | Conversational tone |
| Internal legal briefing | Fixed template with labels | Under 200 words | Structured sections |
| Slide deck bullet points | Fragments, not full sentences | 8–12 words per bullet | No sub-bullets |
| API / developer output | JSON or structured data | Exact schema required | No explanatory text |
| Social media post | Single paragraph or line breaks | Under 280 characters | No hashtag lists |
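The "API / developer output" row deserves special care, because downstream code will consume the result. A minimal sketch, assuming a ChatGPT- or Claude-style completion returned as raw text: the prompt pins an exact schema, and a parser validates it before anything else runs. The field names echo the marketing dashboard example above and are illustrative.

```python
# A sketch of schema-pinned output: the prompt demands bare JSON, and
# the parser refuses anything that doesn't match. Field names mirror
# the exec-dashboard example above and are illustrative.
import json

SCHEMA_PROMPT = """Return ONLY valid JSON, no explanatory text, matching:
{
  "metric": string,
  "last_week": number,
  "this_week": number,
  "pct_change": number,
  "status": "Up" | "Down" | "Stable"
}"""

REQUIRED_KEYS = {"metric", "last_week", "this_week", "pct_change", "status"}

def parse_summary(raw):
    data = json.loads(raw)  # raises ValueError if the model added prose
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Schema violated, missing fields: {missing}")
    return data
```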
The Analyst Who Built a Prompt Library
A senior analyst at McKinsey's London office began saving every format-specific prompt that produced work-ready output on the first attempt. Within three months, she had a personal library of 22 prompt templates, each one tuned to a specific deliverable — slide bullets, client memos, data summaries, stakeholder emails. She shared it with her project team. The templates spread. What she'd built, without naming it as such, was a format specification library: a collection of prompts where the format instructions did as much work as the task instructions. New analysts on her team were producing senior-quality formatting from week one.
The lesson from her approach is practical and transferable. Format instructions compound in value over time. The first time you nail the exact prompt structure for a weekly status report, you save 10 minutes. The fiftieth time you run that same prompt, you've saved hours and trained everyone around you to expect a consistent output standard. Treating format as a first-class element of your prompting practice — not an afterthought — is how individual productivity gains become team-wide workflow improvements.
Build Your Format Library Now
Putting It Into Practice
The practical shift is straightforward: before you write a prompt, ask what the output needs to look like when you're done with it. Not what information you need — you probably already know that — but what shape it needs to be in. Will someone paste it into a slide? Read it on mobile? Use it to make a decision in 30 seconds? That answer determines whether you ask for a table, a bulleted list, a short paragraph, or a structured template. Write the format requirement into your prompt the same way you write the task. Both are instructions. Both are required.
Length control follows the same logic. Giving the AI an open-ended length invitation produces outputs optimized for completeness, not usability. A prompt asking for 'a summary of our Q3 results' will return something comprehensive. A prompt asking for 'a 3-sentence summary of our Q3 results for a board slide' will return something usable. The word 'summary' is doing almost no work in the first version. The word count, the audience, and the destination are doing all the work in the second. Specificity isn't pedantry — it's the mechanism that produces professional output.
Claude, ChatGPT, and Gemini all respect format instructions reliably when they're explicit and placed near the top of the prompt or immediately before the content to be processed. Vague instructions like 'keep it short' or 'make it professional' are interpreted loosely. Concrete instructions like 'maximum 4 bullet points, each under 12 words' or 'respond only in a two-column table with headers: Issue | Recommendation' are interpreted precisely. The gap between those two levels of specificity is the gap between output you edit and output you use.
Goal: Produce a reusable, format-specific prompt template that generates work-ready output for a recurring deliverable — something you keep and use again.
1. Choose one recurring work output you produce at least twice a month — a status update, client briefing, meeting summary, or performance report.
2. Write down exactly what that output looks like when it's finished and polished: how long is it, what sections does it have, what does your reader need from it?
3. Open ChatGPT, Claude, or Gemini and write a prompt that includes: (a) the task, (b) the exact format — headers, list style, word or bullet count, (c) the intended audience and where the output will be used.
4. Run the prompt using real content from your work — actual data, notes, or text you'd normally be summarizing.
5. Review the output against your polished benchmark from step 2. Mark anything that still needs editing.
6. Revise your format instructions to close the gaps — tighten word counts, add or remove structural elements, specify tone more precisely.
7. Run the revised prompt on a second real example to confirm it holds.
8. Save the final prompt — with all format instructions intact — in a dedicated doc, Notion page, or prompt manager. Label it with the output type and date.
9. Share it with one colleague who produces the same type of output and ask if it works for their version of the task.
- AI models default to whatever format feels natural for the prompt — that default is rarely work-ready without explicit format instructions.
- Format control includes length, structure, visual hierarchy, tone, and delivery container — changing any one of these changes how usable the output is.
- Hard constraints (exact word counts, fixed templates, specific table schemas) produce better-filtered, more usable output than open-ended prompts.
- The right format is audience-specific, not universal — the same information needs a different container for an exec, an associate, and a client.
- Format instructions compound in value: a well-specified prompt template pays dividends every time it's reused, especially across a team.
- Specificity is the mechanism — 'maximum 4 bullets under 12 words each' outperforms 'keep it short' every single time.
- Building a personal prompt library of format-controlled templates is one of the highest-leverage habits an AI-fluent professional can develop.
- Specify format in every prompt where the output has a destination — a slide, an email, a report, a tool.
- Use concrete constraints: word counts, bullet limits, table schemas, section labels — not vague descriptors like 'concise' or 'structured'.
- Place format instructions near the top of your prompt or immediately before the content block to be processed.
- Design for your reader, not for the AI — ask what shape the information needs to be in for the person who will act on it.
- Save every prompt that produces work-ready output on the first attempt; your prompt library is a professional asset.
- Test format-controlled prompts on real content, not hypotheticals — that's the only way to know if the template actually holds.
