Using Claude for Research and Summarisation

~23 min read

In the spring of 2023, the strategy team at McKinsey & Company faced a familiar crunch. A client engagement required synthesising 14 analyst reports, three academic papers, and a dense regulatory filing — all within 48 hours. The team had two junior analysts and a senior associate available. Historically, this kind of literature review took the better part of a week: reading, highlighting, cross-referencing, drafting summaries, then iterating based on the partner's feedback. The work wasn't intellectually hard. It was just slow, grinding, and prone to the kind of fatigue-induced errors that slip through at 11pm on a Tuesday.

What changed was that the junior analysts started pasting documents into Claude and asking it to extract key findings, flag contradictions between sources, and produce structured summaries aligned to the client's specific strategic questions. The 48-hour job took eleven hours. The partner's first-round feedback was lighter than usual — not because the analysts had gotten smarter overnight, but because the summaries were more consistently structured, the sourcing was explicit, and the gaps in the research were surfaced rather than buried. Claude didn't replace the analysts' judgment. It handled the volume so their judgment had something to work on.

This story captures the core tension of research work in professional settings: the bottleneck is rarely intelligence; it's throughput. Most professionals are capable of synthesising complex information — they simply don't have time to do it thoroughly on every project. Claude changes the throughput equation without changing who is responsible for the final judgment call. That distinction matters enormously, both for how you use the tool and for how you explain its role to colleagues and clients.

What Claude actually does with research tasks

Claude reads and processes text you provide directly in the conversation — documents you paste in, reports you copy across, transcripts you share. It doesn't browse the internet by default. When you need current information or web sources, Perplexity AI or the browsing-enabled version of ChatGPT are better choices. Claude's strength is deep processing of material you already have: extracting structure, identifying patterns, comparing positions, and producing summaries calibrated to a specific purpose or audience.
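If you reach Claude through the API rather than the claude.ai interface, the same pattern applies: you pass the document text and the task framing together in a single message. The sketch below is a minimal illustration only, assuming the official anthropic Python SDK; the model name and the placeholder report text are assumptions for the example, not part of this lesson's browser-based workflow.

```python
# Minimal sketch: sending a document plus a task-framed prompt to Claude via the
# Anthropic Python SDK. Assumes `pip install anthropic` and an ANTHROPIC_API_KEY
# environment variable. The model name below is a placeholder and may differ.
import anthropic

client = anthropic.Anthropic()

# Placeholder: in practice, load or paste the full report text here.
report_text = "[full text of the analyst report]"

prompt = (
    "I'm a strategy analyst preparing a client briefing.\n"
    "Summarise the report below in five bullet points, focusing on market size, "
    "competitive positioning, and regulatory risk. Flag anything uncertain.\n\n"
    f"REPORT:\n{report_text}"
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

print(message.content[0].text)  # Claude's structured summary
```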

The Difference Between Summarising and Synthesising

There's a distinction that most first-time users miss, and it costs them weeks of mediocre outputs before they figure it out. Summarising is compression — taking a long thing and making it shorter while preserving its structure. Synthesising is transformation — taking multiple things and producing a new structure that reveals something the individual pieces didn't. Both are valuable. They require different prompts. And Claude handles both, but only if you tell it which one you want.

Consider a product manager at Spotify who needs to understand user sentiment across 200 support tickets, a competitor's feature announcement, and two industry analyst reports. A summary of each source in isolation gives her four shorter documents. That's compression. What she actually needs is a synthesised view: where do user complaints align with gaps that competitors are already filling? Which analyst predictions are already showing up in the support ticket patterns? That's a different cognitive task — and a different prompt. Claude will default to summarising unless you explicitly ask for synthesis. Most users never ask.

The good news is the fix is simple. Once you understand the difference, you bake the instruction into your prompt. Instead of 'summarise these documents,' you write 'compare the positions across these sources and identify where they agree, where they conflict, and what questions they leave unanswered.' That single shift in framing produces outputs that are genuinely useful for decision-making rather than just shorter versions of what you already had. The McKinsey team discovered this in their first week of use — their second-generation prompts were dramatically more useful than their first.

Synthesis prompt across multiple sources

Prompt

I'm going to paste in three analyst reports on the European EV market. After reading all three, I want you to do the following:

1. Identify the 3-4 claims that all three reports agree on
2. Identify any significant contradictions between the reports — where they reach different conclusions from similar data
3. Flag any important questions that none of the reports address but that a strategic decision-maker would need answered
4. Keep each section to bullet points, max 5 bullets per section

Here are the reports:

[Report 1 text]
[Report 2 text]
[Report 3 text]

AI Response

**Points of Agreement (across all three reports)**
• EV adoption in Western Europe will exceed 40% of new car sales by 2030, though growth trajectories differ
• Charging infrastructure remains the primary consumer barrier, ahead of vehicle cost
• Chinese OEMs (BYD, NIO) are positioned to take significant market share from legacy European brands by 2026
• Grid capacity upgrades will require coordinated public-private investment at a scale not yet committed

**Significant Contradictions**
• Bloomberg NEF projects 2025 battery costs at $87/kWh; Wood Mackenzie uses $104/kWh — a gap that materially changes profitability timelines for mid-market vehicles
• Reports 1 and 3 treat hybrid vehicles as a transition technology; Report 2 argues hybrids will sustain a permanent 15-20% share, which changes fleet strategy significantly

**Unanswered Questions**
• None of the reports model the impact of a potential US tariff escalation on European battery supply chains
• The regulatory scenario analysis stops at 2030 — no modelling of post-2035 ICE ban enforcement mechanisms
• Consumer financing options for EVs are mentioned but not quantified as an adoption lever

How a Hospital System Cut Literature Review Time by 60%

The pharmaceutical and clinical research world runs on evidence synthesis. Before any protocol update, formulary decision, or treatment guideline change, clinical teams need to review published literature and produce summaries for medical committees. At a large hospital network in the US — one of several that started piloting Claude in late 2023 — the pharmacy team was spending roughly 12 hours per formulary review. Each review required reading 8-15 clinical studies, extracting efficacy and safety data, and producing a structured summary for the Pharmacy and Therapeutics Committee. The work was done by clinical pharmacists who had many better uses for their expertise.

After introducing Claude into the workflow, the team developed a standardised prompt template that extracted the same data fields from every study: study design, patient population, primary endpoints, adverse event profile, and statistical significance. Claude processed each study in seconds and produced a consistent structured output. The pharmacist's job shifted from extraction to validation — reading Claude's summary against the original, correcting errors, and adding clinical context that required human judgment. Total review time dropped from 12 hours to under 5. The committee summaries were more consistent in format, which made the committee's own deliberation faster. The pharmacists, notably, reported higher job satisfaction: they spent more time on the parts of the work that required their expertise.
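A standardised extraction brief of this kind is straightforward to keep consistent. The sketch below is a hypothetical illustration of how a fixed field list might be turned into the same prompt for every study; the field names mirror the ones described above, but the helper itself is an assumption, not the hospital team's actual tooling.

```python
# Hypothetical sketch: building an identical extraction prompt for every clinical
# study so outputs stay comparable across reviews. Pure string templating; the
# resulting text can be pasted into claude.ai or sent via the API.
EXTRACTION_FIELDS = [
    "Study design",
    "Patient population",
    "Primary endpoints",
    "Adverse event profile",
    "Statistical significance of primary results",
]

def build_extraction_prompt(study_text: str) -> str:
    field_lines = "\n".join(f"- {field}" for field in EXTRACTION_FIELDS)
    return (
        "You are helping a clinical pharmacist prepare a formulary review.\n"
        "From the study below, extract the following fields. If a field is not "
        "reported, write 'Not reported' rather than inferring a value.\n\n"
        f"{field_lines}\n\n"
        f"STUDY TEXT:\n{study_text}"
    )

# Example usage with placeholder text:
print(build_extraction_prompt("[paste the full study text here]"))
```

The fixed field list is the point: every study gets the same lens, which is what makes the pharmacist's validation pass fast and the committee summaries consistent.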

Three Approaches to Research Assistance: A Comparison

Not every AI tool handles research tasks the same way, and choosing the wrong tool for the task is one of the most common mistakes professionals make. Claude, ChatGPT with browsing, and Perplexity AI each have a distinct profile of strengths. Understanding the differences saves you from the frustrating experience of getting confidently wrong information — which happens when you ask the wrong tool to do something outside its actual capability.

| Capability | Claude | ChatGPT (Browsing) | Perplexity AI |
|---|---|---|---|
| Primary strength | Deep processing of documents you provide | Broad reasoning plus live web access | Real-time web research with citations |
| Best for | Summarising, synthesising, analysing long documents | Mixed tasks: research + generation + coding | Finding and citing current sources quickly |
| Web browsing | No (by default) | Yes (GPT-4 with browsing) | Yes — core feature |
| Long document handling | Excellent (200K token context) | Good (128K context in GPT-4o) | Limited — better for shorter queries |
| Citation accuracy | Uses only what you provide | Variable — can hallucinate URLs | Generally reliable, links to real sources |
| Ideal research scenario | Analysing reports, contracts, transcripts you already have | Researching a topic from scratch online | Getting a fast, sourced briefing on current events |
| Approximate cost | $3 / $15 per million tokens (Haiku / Sonnet) | $5 / $15 per million tokens (GPT-4o mini / GPT-4o) | $20/month Pro for full features |
Research tool comparison: Claude, ChatGPT, and Perplexity AI as of early 2025. Token costs are for API access; consumer subscriptions are typically $20/month.

The Consultant Who Stopped Reading Everything

Marcus is a management consultant at a boutique firm specialising in supply chain strategy. Before every client meeting, he used to spend 90 minutes reading industry news, earnings call transcripts, and trade publications to stay current. It was necessary but exhausting, and on busy weeks it simply didn't happen. His preparation quality was inconsistent. He started using Claude differently from most people — not to summarise documents one at a time, but to build a pre-meeting briefing ritual. He'd paste in a company's last earnings call transcript, their most recent investor presentation, and two or three recent news articles, then ask Claude to produce a structured briefing: key strategic priorities the company has stated publicly, any contradictions between what they said six months ago and what they're saying now, and three questions worth asking in the meeting.

The output took Claude about 40 seconds to produce. Marcus spent 10 minutes reading and annotating it. His total prep time dropped from 90 minutes to 15. More importantly, the quality became consistent — he was never under-prepared because the process was fast enough to actually do every time. He also noticed something unexpected: by asking Claude to surface contradictions in what the company had said over time, he was regularly walking into meetings with a sharper angle than clients expected from a first conversation. That edge came entirely from the prompt design, not from spending more time reading.

Build a reusable prompt template for recurring research tasks

If you do the same type of research task repeatedly — weekly competitive briefings, pre-meeting prep, literature reviews, earnings analysis — invest 30 minutes building one excellent prompt template. Define the output structure explicitly: what sections you want, in what order, at what length. Save it somewhere accessible (Notion, a text file, Claude's Projects feature). A well-designed template produces consistently useful outputs and eliminates the cognitive overhead of prompt-writing each time. The McKinsey team and Marcus both converged on this independently within their first month of use.
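One way to keep such a template easy to reuse is to hold the fixed structure constant and only swap in the parts that change each time. The sketch below is an assumption about how that might look, not a prescribed format; the placeholder names (role, decision, questions) simply mirror the briefing structure this lesson describes.

```python
# Hypothetical reusable research-briefing template. The structure stays fixed;
# only the role, decision, questions, and source text change per use.
def research_briefing_prompt(role: str, decision: str,
                             questions: list[str], source_text: str) -> str:
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, start=1))
    return (
        f"I am {role}. This research will inform the following decision: {decision}.\n\n"
        "Read the source material below and produce a briefing with three sections: "
        "answers to my questions (bullet points, max 5 per question), "
        "contradictions or gaps in the material, and three follow-up questions worth asking.\n\n"
        f"My questions:\n{numbered}\n\n"
        f"SOURCE MATERIAL:\n{source_text}"
    )

# Example usage with placeholder inputs:
prompt = research_briefing_prompt(
    role="a consultant preparing for a first client meeting",
    decision="which two strategic priorities to probe in the meeting",
    questions=[
        "What strategic priorities has the company stated publicly?",
        "Where do recent statements contradict earlier ones?",
    ],
    source_text="[paste the earnings call transcript here]",
)
print(prompt)
```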

What Changes When You Stop Skimming and Start Directing

The professionals who get the most value from Claude in research contexts share one habit: they treat Claude as a directed analyst, not a search engine. A search engine gets a keyword. An analyst gets a brief. The brief specifies the question you're trying to answer, the format you need the answer in, the audience it's for, and any constraints on what matters. When you give Claude a brief instead of a keyword, the output quality difference is stark — not because Claude is trying harder, but because it has the information it needs to make the right trade-offs in what to include and what to cut.

This shift also changes how you handle long documents. Most people paste in a 40-page report and ask 'what are the key points?' Claude will produce a reasonable general summary. But a better prompt specifies the lens: 'I'm a CFO evaluating whether to enter this market. What does this report say that's directly relevant to capital requirements, payback periods, and downside risk scenarios?' The same 40 pages produce a completely different — and far more useful — output when you specify who is asking and why. The document hasn't changed. The question has. And the question is always the thing that determines the value of the answer.

There's also a discipline that the best users develop around what they give Claude to work with. Garbage in, garbage out applies here in a specific way: if your source material is a poorly structured internal report full of corporate hedging and vague language, Claude's summary will accurately reflect that vagueness. Claude doesn't manufacture clarity that isn't there. This is actually useful information — when a Claude summary of an internal document comes out muddled and circular, that's often a signal that the underlying document has the same problem. Several teams have started using this as a document quality check: if Claude can't produce a clean three-paragraph summary, the document probably needs rewriting before it circulates.

Build Your First Research Briefing Prompt

Goal: Produce a research briefing from a real work document using a purpose-built prompt, and develop a reusable template you can apply to the same task type in future.

1. Identify one recurring research task you do at work — a type of document you regularly need to read and extract insights from (e.g., competitor announcements, client reports, industry publications, meeting transcripts).
2. Find a real example of that document — something you'd normally spend 30-60 minutes reading and processing.
3. Before opening Claude, write down on paper: (a) the specific question you need this document to answer for you, (b) the role you're playing when you read it (e.g., 'as the person responsible for pricing strategy'), and (c) the format you need the output in (e.g., bullet points, structured sections, a one-paragraph briefing).
4. Open Claude at claude.ai and start a new conversation.
5. Write a prompt that includes: your role and purpose, what you're providing, and explicit instructions for the output format and focus. Do not just paste the document and ask for a summary.
6. Paste in your document and send the prompt.
7. Read the output critically: does it answer your actual question, or a generic version of it? Note one thing it got right and one thing it missed or misframed.
8. Write a follow-up message refining the output — either asking Claude to adjust the focus, change the format, or go deeper on one specific section.
9. Save both the original prompt and the refined version somewhere you can reuse them. Note the total time this took compared to your usual approach.

Principles Extracted from These Stories

  1. The bottleneck in professional research is throughput, not intelligence — Claude addresses throughput, but your judgment remains the final layer.
  2. Summarising (compression) and synthesising (transformation) are different tasks that require different prompts. Default prompts produce summaries; you have to ask explicitly for synthesis.
  3. Claude processes documents you provide, not the live web. Use Perplexity or ChatGPT with browsing when you need current, sourced information from the internet.
  4. Specifying your role and purpose in the prompt transforms the quality of output — the same document produces radically different summaries depending on the question framing.
  5. Recurring research tasks deserve a reusable prompt template. The one-time investment of designing a good template pays back on every subsequent use.
  6. Muddled Claude summaries often reflect muddled source documents — this makes Claude a useful quality signal, not just a summarisation tool.
  7. The shift from 'search engine' to 'directed analyst' mindset is the single biggest unlock in getting research value from Claude.

Key Takeaways from This Section

  • Claude's research value comes from deep processing of material you provide — not from web access or real-time information retrieval.
  • Synthesis prompts (compare, contrast, identify contradictions, surface gaps) produce decision-ready outputs; summary prompts produce shorter versions of what you already have.
  • Defining the output format explicitly — sections, bullet counts, length — produces more consistent and usable results than open-ended requests.
  • Across industries from consulting to clinical pharmacy to product management, the pattern is consistent: Claude handles volume, humans handle judgment.
  • Choosing the right tool matters: Claude for document-heavy analysis, Perplexity for sourced web research, ChatGPT with browsing for mixed tasks.
  • Pre-meeting briefing, competitive monitoring, literature review, and earnings analysis are all high-value, immediately applicable use cases for Claude in research workflows.

When the Research Load Gets Real: A Consulting Team's Turning Point

In early 2023, a mid-sized strategy consultancy in Chicago faced a familiar crisis. A client had requested a competitive landscape analysis across five sectors — retail, logistics, fintech, healthcare, and manufacturing — with a two-week deadline and a lean team of three analysts. The traditional approach would have meant hundreds of browser tabs, PDFs printed and annotated in margins, and a shared Google Doc slowly becoming a monster no one fully understood. Instead, the team's lead analyst started feeding Claude dense industry reports, earnings call transcripts, and analyst briefings — asking it to extract themes, flag contradictions between sources, and surface questions the documents raised but didn't answer. The two-week project was delivered in nine days.

What changed wasn't the quality of the research — it was the shape of the work. The analysts spent less time extracting information and more time evaluating it. Claude handled the first pass: pulling out the signal from dense, jargon-heavy documents. The humans handled what followed: deciding what that signal meant for their client's specific situation. That distinction — extraction versus interpretation — is the core of what makes Claude genuinely useful for research work, rather than just a faster search engine.

Extraction Is the Unglamorous Work That Eats Your Day

Most professionals dramatically underestimate how much of their research time is pure extraction — finding the sentence in a 40-page report that actually matters, identifying which of twelve articles says something meaningfully different from the others, or pulling the five relevant data points buried inside a 6,000-word white paper. This isn't analysis. It's retrieval dressed up as thinking. Claude excels at this because it processes text in full context, holding the entire document in view simultaneously rather than scanning linearly the way humans do. When you paste a long document and ask Claude to extract specific types of information — financial figures, risk factors, competitor mentions, strategic priorities — you get structured output in seconds that would take a skilled analyst twenty to forty minutes to produce manually.

What Claude Can and Can't Access

Claude doesn't browse the internet in its standard form — it works with text you provide directly, or with documents you paste into the conversation. For live web research, tools like Perplexity AI or ChatGPT with browsing enabled are better fits. Claude's strength is deep processing of material you already have: long documents, dense reports, transcripts, research papers, or multi-source comparisons where you bring the sources and Claude brings the analysis.

A pharmaceutical marketing manager at a mid-sized biotech company used this distinction to reshape how her team prepared for advisory board meetings. Before every meeting, the team needed to synthesise recent clinical trial publications, competitor drug approvals, and payer coverage decisions — a process that had previously taken a full day of reading. She began uploading each source document to Claude with a consistent prompt structure: extract key findings, identify any claims that contradict current company positioning, and flag any regulatory language that might affect messaging. What emerged wasn't just faster — it was more consistent. Individual analysts had different reading habits and different blind spots. Claude applied the same lens to every document.

This consistency effect is underappreciated. When multiple people read the same source, they extract different things — not because some are better analysts, but because attention is selective and context-dependent. Claude doesn't have a bad day, doesn't skim because it's tired, and doesn't unconsciously prioritise information that confirms what it already thinks. That makes it a genuinely useful first-pass filter, especially on high-stakes research where missing a key detail has real consequences. The analyst's job becomes reviewing Claude's output critically, not starting from zero.

Multi-document research synthesis

Prompt

I'm going to share three analyst reports on the electric vehicle battery market. For each one, please extract: (1) the core thesis in one sentence, (2) the top three supporting data points, (3) any claims that conflict with the other reports, and (4) what question each report leaves unanswered. I'll paste them one at a time, starting now. [Report 1 pasted here]

AI Response

Report 1 — BloombergNEF Q3 2024 Battery Market Outlook

1. Core thesis: Lithium iron phosphate (LFP) chemistry will displace NMC as the dominant EV battery format in non-premium segments by 2027, driven by cost parity and improved energy density.
2. Top three data points:
— LFP cell costs fell to $53/kWh in Q2 2024, down 18% year-on-year
— CATL and BYD now supply 67% of global LFP production
— Energy density gap between LFP and NMC narrowed to 12% in 2024, down from 22% in 2021
3. Potential conflicts with other reports: Will assess once Reports 2 and 3 are shared — flag this for cross-reference.
4. Unanswered question: The report doesn't address how Western OEMs plan to reduce CATL/BYD dependency given current geopolitical tensions around Chinese battery supply chains.

Ready for Report 2 when you are.

The Legal Team That Stopped Drowning in Contracts

At a regional property development firm in the UK, the in-house legal team of two handled contract review for an average of thirty transactions per quarter. Each new supplier or contractor agreement required someone to read through standard boilerplate looking for non-standard clauses — liability caps, indemnity carve-outs, IP ownership provisions, and termination triggers. The senior solicitor described it as 'reading the same document thirty times with different names on it.' When she started using Claude for initial contract passes, she gave it a fixed extraction brief: flag any clause that deviates from their standard template across eight defined categories. Claude couldn't give legal advice — and she was clear-eyed about that — but it could tell her where to look.

The workflow she built is instructive. She never asked Claude to assess whether a clause was acceptable — that remained her professional judgment. She asked it to surface and categorise deviations, produce a structured summary of non-standard language, and draft a list of questions she might want to raise with the counterparty. The output became a working document she could annotate and take into negotiation. Her review time per contract dropped by roughly 40%, and — critically — her error rate on missed non-standard clauses dropped too, because she was reviewing Claude's structured summary rather than relying on her own reading of dense legalese after a full day of other work.

| Research Task | Claude Handles Well | Still Requires Human Judgment | Recommended Prompt Approach |
|---|---|---|---|
| Long document summarisation | Full-context extraction, consistent coverage, structured output | Deciding which summary points matter for your specific goal | Specify output format and the decision the summary will inform |
| Multi-source comparison | Identifying contradictions, mapping themes, flagging gaps | Weighing source credibility and relevance | Ask for explicit conflict flags, not just synthesis |
| Contract / document review | Spotting deviations from a stated baseline, categorising clause types | Legal or professional assessment of risk | Give Claude your standard template as the comparison baseline |
| Competitive intelligence | Extracting claims, positioning language, product features from text | Strategic interpretation of what competitor moves mean | Ask for factual extraction first, analysis second in a follow-up prompt |
| Literature / research synthesis | Grouping themes, surfacing methodology differences, identifying consensus vs. debate | Evaluating research quality and applicability | Ask Claude to distinguish between strong consensus and contested findings |
Matching Claude's research capabilities to the right part of your workflow

The Product Manager Who Changed How She Ran Retrospectives

A senior product manager at a SaaS company in Amsterdam ran quarterly retrospectives across four product teams — each generating a mix of survey responses, Slack thread exports, support ticket summaries, and interview notes. Synthesising all of it into a coherent picture of what was working and what wasn't had been a two-day exercise in her own subjective pattern recognition. She started feeding each source into Claude with a consistent framing: 'You are helping me identify patterns in qualitative feedback. Extract recurring themes, note the frequency of each theme, and flag any feedback that doesn't fit the main patterns — especially outliers that might signal something important.' The instruction to flag outliers was deliberate. She'd learned from experience that the one-off comment that doesn't fit the pattern is often the most important signal.

What she found over three quarters of using this approach was that Claude surfaced themes she had been unconsciously discounting — particularly negative feedback from users in non-English-speaking markets, which tended to use different vocabulary and so had been underweighted in her own reading. Claude doesn't weight feedback by how fluently it's expressed. A terse two-sentence complaint from a German user gets the same analytical attention as a detailed paragraph from a native English speaker. For a global product team, that kind of neutral processing has real strategic value. Her retrospective reports became more defensible because they were grounded in a more systematic read of the evidence.

Build a Research Prompt Template You Reuse

Once you find a prompt structure that works for a recurring research task — contract review, competitive analysis, customer feedback synthesis — save it as a template. Store it in Notion, a Google Doc, or even a dedicated Claude Project if you're on Claude.ai Pro ($20/month). Reusing consistent prompt structures means your outputs are comparable across time, which matters when you're tracking changes or building on previous research. Consistency in prompting produces consistency in output format, which makes downstream analysis faster.

Why Summarisation Is Harder Than It Looks — and How to Fix It

Most people's first instinct when using Claude for summarisation is to paste a document and type 'summarise this.' The output is usually fine — but fine isn't the same as useful. A generic summary reflects Claude's default judgment about what matters in a document. That default is calibrated to a general reader, not to you, your role, your organisation, or the specific decision you're trying to make. A 2,000-word industry report summarised for a CFO making a capital allocation decision looks completely different from the same report summarised for a marketing director planning a campaign launch. The document is identical. The useful summary is not.

The fix is to treat every summarisation request as a brief you're giving to a smart analyst. Tell Claude who will read this summary and what they'll do with it. Specify the length — 'three bullet points', 'one paragraph', 'a one-page executive brief' — because ambiguous length requests produce variable results. If there are specific themes or questions the summary must address, name them explicitly. If certain sections of the source document are more important than others, say so. This doesn't require long prompts — it requires precise ones. The difference between 'summarise this earnings call' and 'summarise this earnings call in five bullet points for a sales director preparing to pitch this company's finance team next week' is about fifteen words, and the output difference is substantial.

There's a second failure mode that's less obvious: asking for a summary when what you actually need is an extraction. Summaries compress and synthesise — they make judgment calls about what to include. Extractions retrieve specific types of information without compression. If you need every mention of a competitor's pricing in a document, a summary will likely miss some of them. Ask for an extraction instead: 'Find and list every instance where pricing is mentioned in this document, including the exact quoted figures.' Understanding the difference between these two tasks — and asking for the right one — is one of the fastest ways to improve your research prompting.

Build a Research Brief for a Real Document

Goal: Produce a structured research output from a real work document using a decision-framed prompt, and build a reusable prompt template for recurring research tasks in your role.

1. Choose a real document from your current work — a report, article, research paper, competitor announcement, or long email thread. It should be something you'd normally spend 20-40 minutes reading carefully.
2. Before opening Claude, write down in one sentence: what decision or action will this research inform? Be specific — not 'understanding the market' but 'deciding whether to expand into the German market this quarter.'
3. Based on that decision, write down three specific questions you need this document to answer. These become the spine of your prompt.
4. Open Claude.ai and start a new conversation. Paste your document (or a substantial excerpt — at least 800 words).
5. Write a research prompt that includes: your role, the decision the research informs, your three specific questions, and the output format you want (bullet list, short paragraphs, table, etc.).
6. Review Claude's output against your three questions. Note which questions it answered well, which it answered partially, and which it missed or misinterpreted.
7. Write a follow-up prompt refining one answer — ask Claude to go deeper on the question it handled least well, or to find specific evidence in the document that supports a claim it made.
8. Compare the total time spent (writing brief + reviewing output + follow-up) against your estimate of how long you'd have spent reading and noting manually.
9. Save your prompt structure as a reusable template — adjust the role, decision context, and questions to make it generic enough to apply to the next similar document you encounter.

Principles Extracted From These Cases

  1. Separate extraction from interpretation: use Claude to surface information systematically, then apply your own judgment to what it means for your specific context.
  2. Consistency is a feature: Claude applies the same analytical lens to every document, which reduces the variability that comes from human attention fatigue and confirmation bias.
  3. Generic summaries serve generic readers — always specify who will use the summary and what decision it informs.
  4. Distinguish between summarisation (compression with judgment) and extraction (retrieval without compression) — and ask for the right one.
  5. Outlier instructions unlock hidden value: explicitly asking Claude to flag what doesn't fit the pattern surfaces the signals most likely to be missed in standard synthesis.
  6. Build reusable prompt templates for recurring research tasks — consistency in prompting produces comparable, auditable outputs over time.
  7. Claude's neutral processing removes vocabulary and fluency bias from qualitative analysis, which matters when working with multilingual or diverse source material.
Key Takeaways from This Section

  • The most valuable shift isn't speed — it's moving your attention from extraction to evaluation, where human judgment creates the most value.
  • Telling Claude the decision your research informs is the single highest-leverage addition to any research prompt.
  • Multi-document comparison prompts (asking Claude to flag contradictions across sources) surface insights that single-document summaries miss entirely.
  • Claude's limitation — no live web access in standard form — is also a boundary that keeps it focused: you bring the sources, it brings the analytical processing.
  • Follow-up prompts that drill into weak or partial answers are often where the most useful research outputs emerge.

When Speed Meets Accuracy: Summarising Without Losing What Matters

In 2023, the legal team at Allen & Overy — one of the world's largest law firms — began using AI to help associates process due diligence documents. The problem they faced wasn't access to information. It was volume. A single M&A deal could generate thousands of pages of contracts, disclosures, and regulatory filings. Associates were spending 60–70% of their time reading documents just to identify the 10% that actually mattered. The firm's solution wasn't to replace legal judgment. It was to use AI to compress the reading phase so that human expertise could focus on interpretation and strategy.

What Allen & Overy discovered — and what their associates had to learn quickly — is that AI summarisation only works when you tell it what to care about. A prompt that says 'summarise this contract' produces a generic overview. A prompt that says 'identify clauses related to liability caps, termination rights, and change-of-control provisions' produces something a lawyer can actually use. The difference isn't Claude's capability. It's the specificity of the instruction. The tool is only as targeted as the person directing it.

What 'summarisation' actually means to Claude

Claude doesn't summarise by randomly selecting key sentences. It identifies what's central based on your prompt's framing. If you ask for a summary with no context, it defaults to general importance. If you specify your role, your goal, and what you'll do with the output, it calibrates accordingly. This is why two people can paste the same article into Claude and get very different — and both valid — summaries.

The Principle: Research Is a Conversation, Not a Query

The Allen & Overy case illustrates something that applies far beyond law: effective AI-assisted research is iterative. You don't send one prompt and walk away with a finished product. You start with a broad question, review what comes back, identify the gaps, and sharpen the next prompt. This mirrors how a good research analyst actually works — not by reading everything once, but by progressively narrowing focus. Claude is built for this kind of back-and-forth, and professionals who treat it as a conversation rather than a search engine get dramatically better results.

Consider how a strategy consultant at McKinsey might approach a client brief on the European EV charging market. The first prompt establishes scope: key players, market size, regulatory landscape. The second prompt drills into a specific tension the first response surfaced — say, the gap between public charging infrastructure targets and actual installation rates. A third prompt might ask Claude to synthesise those findings into a structured argument for a specific slide. Each prompt builds on the last. The final output reflects a directed research process, not a single lucky query. This progressive approach is what separates professionals who get mediocre AI outputs from those who produce work they're proud to share.

Iterative research prompt — sharpening after a first pass

Prompt

You gave me an overview of the European EV charging market. Now focus specifically on the gap between EU member states' public charging targets for 2025 and current installation rates. Which countries are furthest behind, what are the main barriers cited by industry analysts, and what does this mean for infrastructure investors? Keep it under 300 words and flag any claims where data is uncertain.

AI Response

Several EU member states are significantly behind their 2025 public charging targets. Germany, France, and Italy have the largest absolute gaps — Germany had approximately 100,000 public chargers as of late 2023 against a target trajectory requiring closer to 300,000 by 2025. The main barriers cited by analysts include grid connection delays (often 18–36 months for high-power sites), fragmented permitting across municipalities, and insufficient incentives for private site operators in lower-density areas. For infrastructure investors, this creates a two-speed market: urban fast-charging locations are increasingly competitive and land-constrained, while semi-urban and highway corridors remain underdeveloped and potentially higher-margin. Note: installation figures vary by source and update frequency — treat specific numbers as directional rather than definitive, and verify against ACEA or Transport & Environment data before using in client materials.

A Different Industry, the Same Discipline

Journalists at Reuters and the Associated Press have been using AI-assisted summarisation tools since 2022 — primarily to process earnings reports, sports statistics, and financial data at scale. A business journalist covering 40 quarterly earnings calls in a single week can't read every transcript in full. What they can do is prompt Claude to extract the three moments in each transcript where an executive's language shifted — hedged language, unusual emphasis, sudden brevity — because those are often where the real story lives. This is research as pattern recognition, and it's something Claude handles well when the prompt is precise about what patterns to look for.

The lesson from journalism is that AI summarisation doesn't replace editorial judgment — it protects it. By offloading the mechanical reading work, the journalist has more cognitive bandwidth for interpretation, source verification, and narrative construction. The same logic applies to a marketing director reviewing 50 customer interview transcripts, or a procurement manager comparing 12 supplier proposals. Claude compresses the input so the human can do the thinking that actually requires a human.

| Research scenario | Weak prompt approach | Strong prompt approach | Output quality difference |
|---|---|---|---|
| Legal due diligence | 'Summarise this contract' | 'Identify liability caps, termination triggers, and change-of-control clauses' | Generic overview vs. actionable clause list |
| Market research | 'Tell me about the EV market' | 'Compare charging infrastructure gaps across Germany, France, Italy — flag data uncertainty' | Broad facts vs. investor-ready analysis |
| Earnings analysis | 'Summarise this earnings call' | 'Find moments where executive language hedged or shifted — quote directly' | Surface summary vs. editorial signal detection |
| Customer research | 'Summarise these interviews' | 'Extract recurring complaints about onboarding — group by customer segment' | General themes vs. segmented insight |
| Competitive intel | 'What do competitors do?' | 'Compare pricing models and positioning of Salesforce, HubSpot, Pipedrive for SMB CRM' | Vague landscape vs. structured comparison |
The same task, two prompt qualities — the gap in output is almost always the prompt's fault, not the model's.

What a Product Manager Learned About Trust and Verification

A product manager at a mid-sized SaaS company used Claude to research competitors before a quarterly planning session. She pasted in three competitor blog posts, two G2 review summaries, and a LinkedIn post from a rival's CPO, then asked Claude to synthesise the competitive positioning landscape. The output was sharp — but she noticed Claude had inferred a pricing change from an ambiguous blog post that was actually describing a legacy tier. The insight felt right, but the underlying data point was wrong. She caught it because she'd read the source material herself. Someone who hadn't would have walked into a planning meeting with a confident-sounding error.

This isn't an argument against using Claude for research. It's an argument for building verification into your workflow. The product manager now uses a simple rule: Claude synthesises, she spot-checks. For any claim that will influence a decision, she traces it back to the source. For directional thinking — brainstorming, structuring, identifying what to investigate next — she trusts Claude's output more freely. This two-tier approach is how sophisticated AI users actually work. Not blind trust, not paranoid rejection — calibrated reliance.

Claude can be confidently wrong

Claude doesn't flag uncertainty the way a cautious human colleague might. It can state an inference as if it were a fact, especially when synthesising multiple sources. Build a verification habit for any claim that will be shared externally, used in a financial model, or cited in a recommendation. Ask Claude to flag uncertain claims explicitly by including 'note where data is uncertain or inferred' in your prompt — it will comply, and the output becomes far more trustworthy.

What This Means When You Sit Down to Work

When you use Claude for research and summarisation in practice, the quality of your session depends almost entirely on how clearly you've defined your purpose before you start. The professionals who get the most from Claude — the lawyers, journalists, consultants, and product managers described here — share one habit: they know what they're trying to decide or produce before they open a prompt. Claude is a thinking accelerator, not a thinking replacement. If you're unclear on your goal, Claude will produce fluent, well-structured content that moves you confidently in no particular direction.

Practically, this means starting each research session by writing one sentence — just for yourself — that completes the phrase: 'After this session, I need to be able to...' That sentence becomes the invisible frame for every prompt you write. It tells you when to stop, when to go deeper, and when the output Claude produced is actually good enough. Without that anchor, it's easy to keep refining prompts and accumulating outputs without ever reaching a usable conclusion. Focus is the skill. Claude is the tool.

The other practical shift is learning to treat Claude's first response as a draft, not a deliverable. Even a strong prompt rarely produces a perfect output on the first pass. Read what comes back, identify the most useful 60%, and write a follow-up prompt that says 'the section on X was most useful — go deeper there, and drop Y entirely.' This kind of directorial prompting — telling Claude what worked and what didn't — produces outputs that compound in quality across a session. By the third or fourth exchange, you're often holding something genuinely useful.

Build a Research Brief Using Claude

Goal: Produce a verified, shareable 150-word research brief on a real work topic, built through an iterative Claude session with explicit uncertainty flagging.

1. Choose a real topic you need to understand better for your work — a competitor, a market, a regulatory change, or an industry trend.
2. Write a one-sentence goal statement: 'After this session, I need to be able to [specific action or decision].'
3. Open Claude and write a scoped first prompt: include your role, your goal, what you already know, and what you specifically need. Limit the initial question to one clear aspect of the topic.
4. Read the response. Highlight or copy the two or three sentences that are most useful or surprising.
5. Write a follow-up prompt that builds on those sentences — go deeper on one thread, ask Claude to compare two things it mentioned, or request that it structure the insight for a specific audience.
6. Ask Claude to flag any claims in its response where the data is uncertain or inferred.
7. Review the flagged claims and verify at least two of them against an external source (a company website, industry report, or news article).
8. Ask Claude to produce a 150-word summary of everything discussed in the session, framed for a colleague who wasn't part of the conversation.
9. Save that summary — this is your research brief, and it's something you can share, file, or use directly in a presentation or document.

Key Takeaways from This Section

  1. Specificity in your prompt determines specificity in your output — vague questions produce vague answers regardless of the model's capability.
  2. Effective AI research is iterative: start broad, identify what's most useful, and sharpen each subsequent prompt based on what came back.
  3. Always specify your role and intended use — Claude calibrates its output when it understands who's asking and why.
  4. Claude can synthesise confidently and still be wrong — build a two-tier habit: free trust for directional thinking, verification for anything decision-critical.
  5. Asking Claude to flag uncertainty explicitly ('note where data is inferred') makes outputs significantly more trustworthy and usable.
  6. Treat Claude's first response as a draft — directorial follow-ups that name what worked and what didn't produce compounding quality improvements.
  7. The most important prompt you write isn't for Claude — it's the one-sentence goal statement you write for yourself before you start.
Knowledge Check

A consultant asks Claude to 'summarise the healthcare market.' The output is broad and not very useful. What is the most likely cause?

You're using Claude to research a competitor and it confidently states that the competitor raised prices last quarter. You didn't provide any source material. What should you do?

Which of the following best describes why iterative prompting produces better research outputs than a single detailed prompt?

A product manager wants Claude to analyse 30 customer interview transcripts. Which prompt approach will produce the most actionable output?

Before starting a Claude research session, a marketing director writes: 'After this session, I need to be able to recommend which two trade shows to prioritise next year.' What is the primary purpose of this exercise?
