Lesson 3 of 10

AI for summarising: cutting through information overload

~20 min read

The 47-Tab Problem

Picture a Tuesday morning. You have a 9am stakeholder call about a project you haven't touched since Thursday. Your inbox has 23 unread threads. Someone forwarded you a 34-page industry report at 11pm last night with the message "FYI — relevant to our Q3 strategy." Your browser has 47 tabs open, six of which are articles you flagged to read before the call. You have 22 minutes. This is not a crisis — this is just Tuesday. Information overload is the baseline condition of modern professional work, and it's getting worse as AI tools generate more content, more reports, and more updates faster than any human team can absorb. The bottleneck is no longer finding information. It's processing it fast enough to act.

This is exactly where AI summarisation earns its place in your workflow. Tools like ChatGPT, Claude, and Gemini can read a 34-page report in seconds and return a structured, decision-ready summary in under a minute. Perplexity can pull together the key points from multiple web sources and synthesise them into a coherent briefing. Notion AI can compress a week of meeting notes into three action items before your next standup. These aren't party tricks — they're genuine time multipliers when used correctly. The difference between professionals who find AI summarisation useful and those who find it underwhelming almost always comes down to one thing: how they construct the prompt. Paste a document and type "summarise this" and you'll get something generic. Give the AI context, constraints, and a clear output format, and you'll get something you can actually use.

What AI Summarisation Actually Does

When you paste text into ChatGPT or Claude and ask for a summary, the model doesn't skim it the way a tired human does — cherry-picking familiar words and filling in gaps with assumptions. It processes every token of your input, weighing relationships between ideas across the full document simultaneously. This means a well-prompted AI summary catches things buried on page 22 of a report that a human skim-reader would miss entirely. The model identifies which claims are supported by evidence, which are assertions, and how ideas connect — then reconstructs that structure in whatever format you specify. Claude, in particular, handles long documents well, with a context window of up to 200,000 tokens, meaning you can paste an entire book and ask questions about it. GPT-4o supports up to 128,000 tokens. For most business documents, both are more than sufficient.

Understanding what the model is doing helps you prompt it better. AI summarisation is fundamentally a compression task: the model must decide what to keep and what to discard. Without your guidance, it makes those decisions based on statistical prominence — what appears most frequently or with the most semantic weight in the text. That's often fine for a general overview. But in professional contexts, what matters most is rarely what appears most often. The single risk factor buried in paragraph 14 of a vendor contract matters more than the three pages of boilerplate that precede it. The one dissenting data point in a market research report might be the most strategically important sentence in the document. When you tell the AI what you care about — your role, your decision, your audience — you shift its compression logic toward your actual needs.

The Three-Part Prompt Formula for Summaries

Structure every summarisation prompt with: (1) Your role and context — "I'm a marketing manager preparing for a budget review"; (2) What the document is and what you need from it — "This is a 40-page competitor analysis. I need the three biggest threats to our positioning"; (3) The output format you want — "Return this as a bulleted executive brief, max 200 words." This three-part structure consistently produces more useful outputs than any single-sentence summary request.
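The three-part formula is mechanical enough to script. Here is a minimal sketch in Python that assembles a prompt in the order described above — the function name and parameters are illustrative, not part of any tool's API:

```python
def build_summary_prompt(role_context, need, output_format, document_text):
    """Assemble a three-part summarisation prompt:
    (1) your role and context, (2) what you need from the
    document, (3) the output format you want."""
    return (
        f"{role_context} "        # (1) role and context
        f"{need} "                # (2) the document and what you need from it
        f"{output_format}\n\n"    # (3) output format and constraints
        f"---\n{document_text}"   # the source material itself
    )

prompt = build_summary_prompt(
    role_context="I'm a marketing manager preparing for a budget review.",
    need=("This is a 40-page competitor analysis. I need the three "
          "biggest threats to our positioning."),
    output_format="Return this as a bulleted executive brief, max 200 words.",
    document_text="[PASTE REPORT TEXT HERE]",
)
print(prompt)
```

Whether you paste the result into ChatGPT, Claude, or Gemini, the point is the ordering: context first, task second, format last, source material at the end.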

From Raw Text to Decision-Ready Output

The most common professional use case is the document dump: a long report, a chain of emails, a transcript, or a set of meeting notes that you need to process quickly. The key habit to build is treating the AI as a smart analyst, not a search engine. You're not asking it to find something — you're asking it to think about something on your behalf and return structured insight. That means your prompt needs to carry enough context for it to make good editorial decisions. A prompt like "summarise this for a non-technical executive audience who needs to decide whether to approve the project" produces a fundamentally different output than "give me a summary" — even with identical input text. The AI adjusts vocabulary, emphasis, and level of detail based on the audience you describe. Use that.

Summarising a Long Industry Report for an Executive Briefing

Prompt

I'm a strategy consultant preparing a 5-minute briefing for a CFO who needs to decide whether to expand into the Southeast Asian market. Below is a 28-page market entry report. Summarise it into an executive brief with these sections: (1) Market opportunity in one sentence, (2) Top 3 risks, (3) Key assumptions the analysis depends on, (4) Recommended next step. Keep the entire brief under 250 words. Flag anything that seems like an unsupported assertion. [PASTE REPORT TEXT HERE]

AI Response

**Market Opportunity**
Southeast Asia represents a $4.2B addressable market in this segment by 2027, with Indonesia and Vietnam accounting for 61% of projected growth.

**Top 3 Risks**
1. Regulatory fragmentation — each of the six target markets has distinct licensing requirements, adding 8–14 months to typical market entry timelines.
2. Distribution dependency — the report assumes partnership with two regional distributors whose exclusivity agreements expire in 2025 and are not yet renewed.
3. Currency exposure — 73% of projected revenue is in USD-pegged currencies, but the model doesn't account for recent dong and rupiah volatility.

**Key Assumptions to Scrutinise**
- 12% annual growth rate is cited without a primary source (⚠ unsupported assertion)
- Competitor response is modelled as "minimal" with no justification given
- Cost projections assume a Singapore HQ, which may not reflect actual operational structure

**Recommended Next Step**
Commission a focused due diligence exercise on distributor contract status and regulatory timelines in Indonesia before committing capital. The opportunity is real, but the analysis has three load-bearing assumptions that haven't been stress-tested.

*Total: 198 words*

Notice what happened in that output. The AI didn't just compress the report — it applied editorial judgment shaped by the prompt's context. It flagged an unsupported assertion because you asked it to. It structured the output around a CFO's decision logic rather than the report's original chapter structure. It kept the word count within your constraint. None of that happens with a generic "summarise this" prompt. The quality of the output is almost entirely a function of the quality of the instruction. This is the core skill the rest of this lesson builds: writing prompts that extract maximum signal from any document, thread, or transcript you throw at an AI tool.

Choosing the Right Tool for the Job

Not all AI summarisation tools are equal, and the right choice depends on what you're summarising and where you need the output. ChatGPT with GPT-4o is the most flexible general-purpose option — strong at following complex formatting instructions and good at summarising documents you paste directly. Claude 3.5 Sonnet (from Anthropic) handles very long documents better than almost anything else, and its outputs tend to be more precise and less prone to padding. Notion AI is the right tool when your source material already lives in Notion and you want the summary embedded in your workspace without copying and pasting. Gemini 1.5 Pro integrates tightly with Google Workspace, making it the practical choice when your documents are in Google Docs or your emails are in Gmail. Each tool has a natural habitat.

| Tool | Best For | Context Window | Standout Feature | Approx. Cost |
|---|---|---|---|---|
| ChatGPT (GPT-4o) | General document summarisation, complex formatting | 128K tokens | Strong instruction-following, file upload support | $20/month (Plus) |
| Claude 3.5 Sonnet | Very long documents, nuanced analysis, contracts | 200K tokens | Handles book-length inputs, low hallucination rate | $20/month (Pro) |
| Gemini 1.5 Pro | Google Docs, Gmail threads, Drive files | 1M tokens | Native Google Workspace integration | Included in Google One AI Premium ($19.99/month) |
| Notion AI | Meeting notes, project docs already in Notion | Varies by plan | Summarises in-context without leaving your workspace | $10/member/month (add-on) |
| Perplexity Pro | Summarising across multiple web sources | Varies | Cites sources inline, good for research briefs | $20/month |
AI summarisation tools compared by use case, context capacity, and cost (2024 pricing)

The context window column matters more than most people realise when they're starting out. A token is roughly three-quarters of a word in English. GPT-4o's 128,000-token window fits approximately 96,000 words — enough for a long annual report or a substantial email chain. Claude's 200,000-token window handles around 150,000 words. Gemini 1.5 Pro's 1 million-token window is in a different category entirely — it can process the equivalent of several books at once, though this capacity is more relevant for large-scale data analysis than most daily summarisation tasks. For the typical professional use case — a 20-50 page report, a 30-email thread, a 90-minute meeting transcript — any of these tools has more than enough capacity. Your bottleneck is the prompt, not the window.
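The fit check in that paragraph is easy to automate. Here is a quick sketch using the window sizes quoted above — the model keys and the 20% headroom are illustrative assumptions, and real token counts vary by tokeniser and content:

```python
# Heuristic from the text: one token ≈ 0.75 English words,
# so estimated tokens ≈ word count / 0.75.
CONTEXT_WINDOWS = {            # sizes in tokens, as quoted above
    "gpt-4o": 128_000,
    "claude-3.5-sonnet": 200_000,
    "gemini-1.5-pro": 1_000_000,
}

def fits_in_window(word_count, model):
    """Estimate whether a document of `word_count` words fits in the
    model's context window, leaving ~20% headroom for your prompt
    and the model's response."""
    est_tokens = word_count / 0.75
    return est_tokens <= CONTEXT_WINDOWS[model] * 0.8

# A 90-minute transcript (~15,000 words) fits GPT-4o easily.
print(fits_in_window(15_000, "gpt-4o"))   # → True
```

Run the check before pasting anything book-length; for everyday reports and threads, the answer is almost always yes.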

Cost is worth calibrating against time saved. If you spend 90 minutes reading a report you could have summarised and interrogated in 8 minutes using Claude, that's 82 minutes recovered. At a fully-loaded professional hourly rate of $80–$150, a single well-executed summarisation session pays for a month of any of these subscriptions. The more useful frame isn't "is $20 a month worth it?" — it's "how many 90-minute reading sessions do I have this month that I could compress to 10?" For most managers, analysts, and consultants, the answer is easily five or more per week. The ROI calculation is straightforward. The real question is whether you're prompting well enough to get outputs you can trust — which is what this lesson is designed to fix.
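The break-even arithmetic is worth checking against your own numbers. A small sketch using the figures from this paragraph — the rates and cadence are placeholders, not benchmarks:

```python
def monthly_roi(sessions_per_month, minutes_saved_per_session,
                hourly_rate, subscription_cost):
    """Dollar value of time recovered per month, net of the
    subscription cost. All inputs are your own estimates."""
    hours_saved = sessions_per_month * minutes_saved_per_session / 60
    return hours_saved * hourly_rate - subscription_cost

# The example from the text: 90 minutes of reading compressed to 8
# (82 minutes recovered), five sessions a month, at the low end of
# the $80–$150 rate range, against a $20/month subscription.
print(round(monthly_roi(5, 82, 80, 20), 2))   # → 526.67
```

Even at one session a month, the recovered time comfortably exceeds the subscription cost at professional rates.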

Applying This at Work: Three Immediate Use Cases

The professionals who get the most from AI summarisation don't wait for a perfect use case — they apply it to the documents already on their desks. The three highest-ROI starting points for most roles are: long reports (analyst reports, strategy documents, research papers), email threads (especially cross-functional chains that have gone on for days), and meeting transcripts (Zoom, Teams, and Google Meet all offer transcript exports or integrations with tools like Otter.ai). Each of these has a slightly different prompt structure because the underlying content type is different. A meeting transcript needs speaker attribution and action item extraction. An email thread needs chronological compression and identification of the key decision or unresolved question. A report needs structured insight extraction aligned to your decision context.

The email thread use case is underrated and immediately practical. Most professionals spend 2–3 hours per day in email. A significant portion of that time is re-reading long threads to reconstruct context before replying or forwarding. Pasting a 20-email thread into Claude or ChatGPT and asking for a structured summary — who said what, what's been decided, what's still open, what action is needed from you — takes about 45 seconds and can save 15–20 minutes of re-reading and mental reconstruction. Do that three times a day and you've recovered close to an hour. The prompt structure for email threads differs slightly from document summarisation: you want the AI to track participants and preserve the thread's decision logic, not just compress the text volume.

Meeting transcripts are where many teams are still leaving significant value on the table. Most organisations using Zoom or Microsoft Teams have transcript features turned on but do nothing structured with the output. A raw 90-minute transcript is typically 10,000–15,000 words — far too long to re-read, but full of decisions, commitments, and context that gets lost within 48 hours. Running that transcript through a well-prompted AI summarisation produces a structured meeting record with decisions logged, action items assigned, and open questions flagged — in under two minutes. If your team standardises this practice, you also create a searchable knowledge base of what was actually said in meetings, which becomes increasingly valuable as projects mature and decisions need to be traced back to their origins.

Summarise a Real Work Document in Under 10 Minutes

Goal: Produce a usable, decision-ready summary of a real work document using a structured three-part prompt, and identify one specific refinement that improves the output — building the prompt iteration habit from your first session.

1. Choose a real document from your current work — an industry report, a long email thread, or a recent meeting transcript. It should be at least 5 pages or 20 emails long.
2. Open Claude (claude.ai) or ChatGPT (chat.openai.com) — either free or paid tier works for this exercise.
3. Write your role and context in one sentence: your job title, the project this relates to, and the decision or action you need to take after reading it.
4. Specify what you need from the document: the three most important findings, the key risks, the recommended actions, or whatever is most relevant to your situation.
5. Define your output format: bullet points, numbered list, executive brief, or table. Set a word limit (150–300 words is usually right for a first summary).
6. Paste your three-part prompt followed by the full document text into the AI tool and submit.
7. Read the output critically: does it capture what you actually needed? Identify one thing that's missing or wrong.
8. Write a follow-up prompt asking the AI to add the missing element or correct the error — note how the model responds to this refinement.
9. Save both the prompt and the final output in a document. This becomes the template you'll refine and reuse for this document type.

What Good Summarisation Looks Like by Role

  • Managers: Summarise weekly status updates from direct reports into a single team health snapshot; compress board prep materials into a 5-minute verbal brief; extract decisions and owners from meeting transcripts before sending follow-ups.
  • Analysts: Compress competitor earnings calls and analyst reports into structured comparison tables; summarise multiple data sources into a single narrative for stakeholder presentations; extract methodology caveats from research papers before citing them.
  • Consultants: Rapidly process client-provided background documents before discovery calls; summarise interview transcripts into thematic insight clusters; compress lengthy RFP documents into the 8–10 requirements that actually determine fit.
  • Marketers: Summarise customer research reports into audience insight briefs; compress social listening data exports into trend narratives; extract key messages from competitor content for positioning analysis.
  • Project Managers: Turn meeting transcripts into structured RAID logs (Risks, Assumptions, Issues, Dependencies); compress multi-stakeholder email chains into decision histories; summarise project documentation for onboarding new team members mid-project.

Summarising a Long Email Thread Before Replying

Prompt

I'm a project manager and I've been CC'd on this email thread but haven't been following it closely. I need to reply today with a decision on budget reallocation. Summarise this thread with: (1) A one-sentence description of what the thread is about, (2) The key positions each person has taken (list by name), (3) What decision is being asked of me specifically, (4) Any deadlines mentioned, (5) My recommended reply in 3 sentences that acknowledges the discussion and states a clear position. Assume I'll need to approve or decline a budget shift of some kind. [PASTE EMAIL THREAD HERE]

AI Response

**What This Thread Is About**
The team is debating whether to reallocate $15K from the Q3 events budget to paid social, following lower-than-expected event registrations.

**Key Positions**
- Sarah (Marketing Lead): Strongly in favour — argues paid social has 3x the conversion rate of events for this audience segment.
- David (Finance): Neutral but wants written sign-off before any reallocation; flagged that the events budget is already committed to a venue deposit of $4,200.
- Priya (Events): Opposed — says cancelling the October workshop damages an agency relationship built over two years.

**Decision Being Asked of You**
Approve or decline the reallocation of up to $10,800 (after the non-refundable deposit) from events to paid social for Q3.

**Deadlines Mentioned**
Sarah needs a decision by Thursday EOD to brief the paid social agency before the Q3 campaign window closes.

**Suggested Reply**
"Thanks for the thorough discussion — I've reviewed the thread and I'm approving the reallocation of up to $10,800 from events to paid social, subject to David's written sign-off process. Please protect the agency relationship by giving Priya's team a heads-up before any public changes to the October calendar. Sarah, please send David the formal reallocation request today so we hit the Thursday deadline."

  • AI summarisation is a compression task — the model decides what to keep based on your prompt, not just the document's own structure.
  • The three-part prompt formula (your role + what you need + output format) consistently outperforms generic summary requests.
  • Claude handles the longest documents; Gemini integrates best with Google Workspace; Notion AI works without leaving your workspace; ChatGPT offers the most formatting flexibility.
  • Email threads, meeting transcripts, and long reports are the three highest-ROI starting points for most professional roles.
  • Follow-up prompts — asking the AI to add what's missing or correct what's wrong — are how you train yourself to get better outputs faster.
  • A single well-executed summarisation session can recover 60–80 minutes of reading time, paying for any of these tools in a single use.

Summarising Different Content Types

Not all summarisation tasks are equal. A 40-page consultant report, a 90-minute Zoom transcript, a 200-email thread about a delayed product launch — each demands a different approach. The mistake most professionals make is treating AI summarisation as a single skill when it's actually a family of techniques. The prompt that works beautifully on a research paper will produce flat, generic output when pointed at a customer complaint thread. Understanding which technique fits which content type is what separates someone who occasionally uses AI from someone who genuinely processes information faster than their colleagues.

Summarising Long Documents

Long documents — reports, whitepapers, contracts, strategic plans — have structure that AI can exploit. Sections, headings, and logical flow give the model anchor points. Your job is to tell the AI which part of that structure matters to you. A legal team and a marketing team reading the same 60-page partnership agreement need entirely different summaries. The legal team wants risk clauses and termination conditions. Marketing wants brand usage rights and co-promotion commitments. Same document, same AI tool, radically different prompts. Claude handles up to 200,000 tokens and GPT-4o up to 128,000 (roughly 150,000 and 96,000 words respectively), so most professional documents fit comfortably in a single prompt. Gemini 1.5 Pro extends this to around one million tokens, making it the right choice when you're dealing with entire legal case files or multi-year audit trails.

Summarising a Strategic Report for a Senior Stakeholder

Prompt

I'm attaching our company's 2024 Annual Strategy Review (48 pages). I need to brief our CFO in 5 minutes tomorrow morning. She cares about: financial implications, resource allocation changes, and any risks flagged by the strategy team. Ignore sections on brand positioning and marketing campaigns — those aren't relevant to her. Give me: (1) a 3-sentence executive summary she can read in 30 seconds, (2) five bullet points covering the financial and risk highlights, (3) two questions she might ask that I should be prepared for.

AI Response

**Executive Summary (30-second read)**
The 2024 strategy review projects 14% revenue growth contingent on two major hires in data infrastructure and a £2.3M platform migration completing by Q3. Risk exposure is moderate-high, with the review flagging supply chain dependency on a single European vendor as the most critical unresolved issue. Resource reallocation shifts 22% of the innovation budget away from existing product lines toward two new market verticals.

**Financial & Risk Highlights**
• Platform migration budget set at £2.3M — no contingency buffer currently built in
• Two senior data infrastructure hires required before Q2; current talent market makes this timeline optimistic
• Single-vendor supply chain risk explicitly flagged but no mitigation plan documented
• Innovation budget reallocation creates short-term revenue risk on legacy product line (currently 38% of revenue)
• FX exposure increases as new market verticals are both USD-denominated

**Questions to Prepare For**
1. "What's the contingency plan if the platform migration runs over budget or timeline?" — The review doesn't answer this directly; you'll want to confirm with the project lead before the meeting.
2. "Why are we moving budget away from a product line that still generates 38% of revenue?" — The strategic rationale is in Section 4, framed around long-term margin improvement, but the short-term revenue risk isn't explicitly quantified.

Tell the AI who it's summarising for

The single most effective addition to any summarisation prompt is a stakeholder context line: 'This summary is for our CFO, who cares about X and not Y.' AI models adjust vocabulary, emphasis, and structure based on audience context. Without it, you get a generic summary that's accurate but not useful. With it, you get something you can almost paste directly into an email.

Summarising Meeting Transcripts and Call Recordings

Transcripts are messy by design. People interrupt, circle back, speak in incomplete sentences, and spend 20 minutes on a topic that produces one actual decision. AI handles this messiness well — but you need to ask for the right output. Raw meeting transcripts rarely benefit from a standard 'summarise this' prompt because the AI tends to reflect the meeting's structure back at you, including all the tangents. Instead, ask for decisions made, actions assigned, and open questions outstanding. That three-part structure maps to what actually needs to happen after a meeting: people need to know what was decided, what they're responsible for, and what's still unresolved. Tools like Otter.ai and Fireflies.ai generate transcripts and apply AI summarisation automatically, but their generic summaries miss context that only you have — like knowing that 'we'll revisit this next week' actually means it's been quietly dropped.
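The decisions/actions/open-questions structure is worth keeping as a saved template rather than retyping it each meeting. A minimal sketch — the wording is illustrative, not a canonical prompt:

```python
# A saved transcript-summary template built around the three outputs
# a meeting actually needs: decisions, actions, and open questions.
TRANSCRIPT_PROMPT = """\
Below is a meeting transcript. Do not summarise it chronologically.
Return exactly three sections:
1. Decisions made (one line each, with who made the call)
2. Actions assigned (owner, task, deadline if stated)
3. Open questions still unresolved at the end of the meeting

Transcript:
{transcript}"""

def transcript_prompt(transcript_text):
    """Fill the template with a raw transcript, ready to paste
    into ChatGPT or Claude."""
    return TRANSCRIPT_PROMPT.format(transcript=transcript_text)
```

The explicit "do not summarise chronologically" line is the important part: it stops the model from reflecting the meeting's tangents back at you.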

Extracting Actionable Output from a Sales Call Transcript

Prompt

Below is a transcript from a 45-minute sales discovery call with a potential enterprise client. Extract the following, using only information explicitly stated in the transcript — do not infer or add context:
1. Client's stated pain points (direct quotes where possible)
2. Decision-making process: who's involved, what's the timeline
3. Budget signals (explicit or implied)
4. Objections raised
5. Specific next steps agreed on the call
6. Red flags or risks I should flag to my manager
[TRANSCRIPT PASTED BELOW]

AI Response

**1. Stated Pain Points**
• "Our current reporting takes three days to produce — by the time leadership sees it, it's already stale" (direct quote, 8:42)
• Disconnected data between CRM and finance systems causing reconciliation errors at month-end
• Team of 6 analysts spending ~40% of time on data preparation rather than analysis

**2. Decision-Making Process**
• Primary contact: Sarah (Head of Analytics) — evaluating options, not the final decision-maker
• CFO approval required for any contract above £50K annually
• Procurement gets involved for contracts over £100K — adds 4-6 weeks to timeline
• Target go-live: before their financial year-end in March (approximately 4 months)

**3. Budget Signals**
• No explicit budget stated
• Currently paying "around £30K" for existing tool — implied ceiling unless clear ROI case made
• Mentioned a "digital transformation budget" approved for 2024 — worth probing on the call

**4. Objections Raised**
• Implementation risk: previous vendor took 9 months to deploy, disrupted operations
• Team adoption: "our analysts are resistant to new tools" — change management concern
• Integration complexity with their legacy ERP system (SAP, version not specified)

**5. Agreed Next Steps**
• Send case study from similar financial services client by Thursday
• Schedule technical call with their IT lead week of the 15th
• Sarah to share current data architecture documentation before technical call

**6. Red Flags for Manager**
• Sarah is not the budget holder — CFO relationship needs to be established early
• March deadline is aggressive if procurement gets involved (contract likely over £100K)
• Legacy SAP integration not on our standard implementation path — needs pre-sales technical review before we progress further

Choosing the Right Tool for the Job

The summarisation tool market has matured quickly. ChatGPT, Claude, Gemini, Perplexity, and Notion AI each have genuine strengths, and picking the wrong one for a specific task costs you time and quality. The differences matter most at the edges — very long documents, highly technical content, or workflows where summarisation needs to connect directly to other tasks like drafting or searching. Claude 3.5 Sonnet has developed a strong reputation for maintaining nuance in longer summaries, avoiding the flattening effect where everything sounds equally important. Perplexity is built around summarising information from live web sources, which makes it the right tool when your question is 'what's the current state of X' rather than 'summarise this document I have.' Notion AI wins when your notes, meeting records, and project documents already live in Notion — the integration removes the copy-paste step that breaks most AI workflows.

| Tool | Best For | Context Window | Weakness | Approx. Cost |
|---|---|---|---|---|
| ChatGPT-4o | General documents, mixed content types, team workflows via GPTs | 128K tokens (~96K words) | Can over-summarise, losing critical detail in dense documents | $20/month (Plus) |
| Claude 3.5 Sonnet | Long reports, nuanced analysis, maintaining tone and emphasis | 200K tokens (~150K words) | No live web access; document upload via Projects only | $20/month (Pro) |
| Gemini 1.5 Pro | Extremely long documents, audio/video files, Google Workspace integration | 1M tokens (~750K words) | Summary quality less consistent than Claude on analytical content | $19.99/month (Advanced) |
| Perplexity Pro | Summarising current information from the web, competitive research | Live web search | Not designed for private document summarisation | $20/month (Pro) |
| Notion AI | Summarising notes, meeting records, and project pages inside Notion | Varies by page length | Limited to content inside Notion; weaker on complex analytical docs | $10/month add-on |
| Otter.ai | Auto-transcription and summarisation of meetings and calls | Meeting length | Generic summaries; limited customisation of output format | $16.99/month (Pro) |
AI summarisation tools compared by use case, capability, and cost (2024 pricing)

Building Summarisation Into Your Weekly Workflow

The professionals who get the most value from AI summarisation aren't using it reactively — opening ChatGPT when something feels too long. They've identified the three or four recurring information bottlenecks in their week and built a consistent prompt for each one. A senior analyst might have a Monday morning ritual: paste the weekend's industry newsletter into Claude, run a saved prompt that extracts competitor signals relevant to their client portfolio, and arrive at their 9am meeting with a one-page brief. That's not a dramatic workflow overhaul. It's 12 minutes on Monday morning that replaces 90 minutes of reading, and it happens every week. The compounding effect over a quarter is significant — and it's the kind of efficiency that's invisible to colleagues but very visible in output quality and response speed.

The key to making summarisation habitual is prompt reusability. Most professionals write a new prompt every time they use AI, which means they're spending mental energy on prompt construction rather than on the actual work. The better approach: write a strong prompt once, test it on three or four real examples, refine it, and then save it somewhere accessible — a Notion page, a pinned note, a custom GPT instruction, or Claude's Projects feature. ChatGPT's custom GPTs and Claude's Projects both allow you to store system-level instructions that apply every time you use that workspace. Set up a 'Weekly Digest Summariser' project with your preferred format baked in, and from that point forward you're just pasting content and getting output. The upfront investment is 20 minutes. The payoff is every week after.
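A low-tech equivalent of a pinned note or Claude Project is a small template store you keep in one file. A sketch — the task names and prompt wording here are invented for illustration:

```python
# Saved prompts keyed by recurring task, each with a {content}
# placeholder filled at use time. Write once, refine, reuse.
SAVED_PROMPTS = {
    "weekly_digest": (
        "I'm an analyst briefing my team. Extract competitor signals "
        "from this newsletter as 5 bullets, max 150 words.\n\n{content}"
    ),
    "email_thread": (
        "Summarise this thread: who said what, what's been decided, "
        "what's still open, and what action is needed from me.\n\n{content}"
    ),
}

def run_saved_prompt(name, content):
    """Fill the saved template — paste the result into your AI tool,
    or send it via an API client if your organisation has one."""
    return SAVED_PROMPTS[name].format(content=content)
```

The point is not the code but the habit: the prompt construction happens once, and every subsequent use is paste-and-go.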

Accuracy management is the non-negotiable part of any summarisation workflow. AI models compress information, and compression always involves choices about what to keep and what to drop. Those choices are generally good — but 'generally good' isn't good enough when you're briefing an executive or handing a summary to a client. The practical fix is a two-pass approach: use AI to generate the first draft of a summary, then spend five minutes reading the source's section headings and conclusion yourself to verify nothing critical was omitted. This takes your total processing time from 90 minutes to roughly 20 minutes — still a dramatic improvement — while preserving the quality check that protects your professional credibility. Over time, as you build familiarity with how a particular AI tool handles specific content types, you'll calibrate how much verification each task actually needs.

Build Your Personal Summarisation Workflow

Goal: Create a tested, reusable summarisation prompt for one recurring information source in your actual work, and embed it into a workflow you'll sustain beyond this lesson.

1. Identify one recurring information source in your work week that consistently takes more than 30 minutes to process — this could be an industry newsletter, a weekly report, a recurring meeting, or a type of document you regularly receive.
2. Collect a real example of that content from the past two weeks — an actual newsletter issue, a real report, a genuine meeting transcript.
3. Open Claude or ChatGPT-4o and draft a summarisation prompt that specifies: your role and context, the audience for the summary, the three to five specific things that matter most to you in this content type, the output format you want (bullet points, numbered list, table, etc.), and anything the AI should explicitly ignore.
4. Run your prompt on the real example from step 2 and read the output carefully against the original source.
5. Identify one thing the summary got right that saved you time, and one thing it missed or got wrong that would have mattered professionally.
6. Revise your prompt to address the gap identified in step 5 — add a specific instruction that would have prevented that miss.
7. Run the revised prompt on a second real example of the same content type and evaluate whether the fix worked.
8. Save your final prompt in a place you'll actually access — a pinned note, a Claude Project, a custom GPT, or a shared team document if this workflow is relevant to colleagues.
9. Commit to using this prompt for the next four weeks and note how your processing time for this content type changes.

How Summarisation Needs Differ by Role

  • Managers: Need decision-relevant summaries — what requires a choice, what requires escalation, what can be delegated. The best prompt structure for managers ends with 'What decision, if any, does this require from me?'
  • Analysts: Need summaries that preserve data integrity — numbers, percentages, and source references must survive the compression. Always instruct the AI to flag when it's paraphrasing quantitative claims rather than quoting them directly.
  • Consultants: Need summaries structured around client implications — the same industry report means different things for different clients. Build client context into every summarisation prompt rather than summarising generically and interpreting separately.
  • Marketers: Need summaries that surface customer language and sentiment — when summarising customer feedback, reviews, or research, ask explicitly for direct quotes and recurring phrases rather than paraphrased themes.
  • Sales professionals: Need summaries oriented around opportunity signals — when reviewing call transcripts or account history, the prompt should specifically extract buying signals, objections, and relationship context rather than a neutral account of what was discussed.
  • HR and People leaders: Need summaries that flag risk and sentiment accurately — employee feedback, exit interviews, and engagement survey data require prompts that ask for range of opinion, not just majority themes, to avoid missing minority signals that matter.

Confidentiality before you paste

Before pasting any document into a consumer AI tool, check whether it contains personal data, client-identifying information, or anything covered by an NDA. ChatGPT's free and Plus tiers use conversation data for training by default — you need to disable this in settings, use the API, or use a tool your organisation has procured with a data processing agreement. Claude's consumer product and Microsoft Copilot (for organisations with M365 Copilot licences) offer clearer data protections. When in doubt, anonymise before you paste.
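If you summarise the same kinds of confidential material often, even a rough pre-paste redaction pass helps. The sketch below is a minimal illustration, assuming nothing beyond Python's standard library: it masks email addresses and long digit runs. It is not a compliance tool, and names or client-specific terms still need a manual pass or a proper PII-detection library.

```python
import re

# Minimal redaction sketch (an illustration, not a compliance tool):
# mask email addresses and long digit runs (phone/account numbers)
# before pasting text into a consumer AI tool.
def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{7,}\b", "[NUMBER]", text)
    return text

print(redact("Contact sarah.chen@client.co.uk or call 02079460123."))
# → Contact [EMAIL] or call [NUMBER].
```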

Summarising Across Multiple Sources

Single-document summarisation is useful. Multi-source synthesis is transformative. The real information overload problem in most professional roles isn't one long document — it's twelve documents, five email threads, two meeting transcripts, and a Slack conversation that together tell a story no single source contains. AI can synthesise across these sources if you give it the right structure. The technique is to paste multiple sources with clear labels — 'SOURCE 1: Client email thread (March 3-7)', 'SOURCE 2: Internal Slack discussion (March 8)' — and then ask for synthesis rather than summary. The prompt distinction matters: 'summarise each source' produces five separate summaries; 'synthesise these sources to give me a single coherent picture of where this project stands and what's unresolved' produces something genuinely useful that you couldn't easily produce yourself in the same time.
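If you assemble these briefings regularly, the labelling step is easy to automate before you paste. A minimal sketch, with function and label names of my own invention rather than anything from the tools themselves:

```python
# Sketch of the labelling technique above: each source gets an
# explicit numbered header so the model can attribute claims,
# then the request asks for synthesis, not per-source summaries.
def build_synthesis_prompt(sources: dict[str, str]) -> str:
    parts = []
    for i, (label, text) in enumerate(sources.items(), start=1):
        parts.append(f"SOURCE {i}: {label}\n{text}")
    parts.append(
        "Synthesise these sources to give me a single coherent picture "
        "of where this project stands and what's unresolved. "
        "Flag any contradictions or inconsistencies between sources."
    )
    return "\n\n".join(parts)

prompt = build_synthesis_prompt({
    "Client email thread (March 3-7)": "[paste thread here]",
    "Internal Slack discussion (March 8)": "[paste messages here]",
})
print(prompt)
```

The closing instruction is the part that does the work: it asks for one picture across sources, which is the distinction the paragraph above draws between synthesis and summary.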

The 'contradictions' prompt addition

When synthesising multiple sources, add this line to your prompt: 'Flag any contradictions or inconsistencies between sources.' This is where AI summarisation earns its keep on complex projects. Different stakeholders often tell different stories about the same situation — in emails, in meeting notes, in project updates. AI catches these discrepancies with no emotional investment in either version, which makes it more reliable than a human reader who might unconsciously favour one account.


Choosing the Right Tool for the Right Summary

Not every AI tool handles summarisation the same way. ChatGPT (GPT-4o) excels at long-form document analysis and structured output — paste in a contract or report and ask for a table of key clauses, and it delivers. Claude, built by Anthropic, accepts up to 200,000 tokens in its context window, making it the go-to for summarising genuinely massive documents: full research papers, lengthy transcripts, or multi-chapter reports in a single pass. Perplexity shines when you need summarisation paired with live web sources — it pulls current information and cites it, which matters when you're summarising a fast-moving topic like a competitor's recent moves. Notion AI works best when the content already lives in your Notion workspace, letting you summarise meeting notes or project pages without copy-pasting anything. The right choice depends on where your content lives and what output format you actually need.

| Tool | Best For | Context Limit | Standout Feature | Approx. Cost |
| --- | --- | --- | --- | --- |
| ChatGPT (GPT-4o) | Structured summaries, reports, emails | 128K tokens | Custom output formats (tables, bullets) | $20/mo (Plus) |
| Claude (claude.ai) | Very long documents, transcripts | 200K tokens | Handles book-length input in one pass | $20/mo (Pro) |
| Perplexity | Web-sourced topic summaries | Live web access | Cites sources inline | $20/mo (Pro) |
| Notion AI | In-workspace notes and pages | Per-page basis | Zero copy-paste friction | $10/mo add-on |
| Gemini (Google) | Gmail and Docs integration | 1M tokens (Gemini 1.5) | Native Google Workspace summarisation | Included in Workspace plans |
AI summarisation tools compared by use case, capability, and cost (2024 pricing)

Match the tool to where the content lives

If the document is already in Google Docs, Gemini saves you time. If it's a raw PDF paste, Claude handles the volume. If you need a formatted deliverable, ChatGPT gives you the most control over output structure. Switching tools mid-workflow adds friction — pick based on your starting point, not just the AI's reputation.

Prompts That Produce Summaries Worth Keeping

The quality of your summary is almost entirely determined by the specificity of your prompt. Vague instructions like "summarise this" produce generic output that strips context along with length. Effective summarisation prompts do three things: they specify the audience, define the output format, and name the decisions or actions the summary needs to support. A product manager summarising user research needs different output than a CFO summarising the same document — the AI doesn't know which you are unless you say so. When you build this context into your prompt, the model filters information through the lens that actually matters to you, cutting irrelevant detail and surfacing what drives decisions.

Summarising a vendor proposal for a leadership decision

Prompt

Here is a 30-page vendor proposal for a new CRM platform. I'm a Head of Operations presenting options to our CEO and CFO next week. Summarise this into: (1) a 3-sentence executive summary, (2) a bullet list of the top 5 commercial terms I should flag, (3) a list of 3 risks or red flags, and (4) two questions I should ask the vendor before signing. Use plain business language — no jargon.

AI Response

**Executive Summary:** TechVendor Pro proposes a 3-year SaaS contract for their CRM platform at £180,000 annually, with implementation estimated at 14 weeks. The proposal includes dedicated onboarding support and a 99.5% uptime SLA, but ties all data export rights to active subscription status. Pricing is fixed for Year 1 only, with Years 2–3 subject to index-linked increases.

**Key Commercial Terms to Flag:**
  • Annual fee: £180K (Year 1 fixed; Years 2–3 CPI-linked)
  • Minimum contract term: 36 months with 6-month exit notice
  • Data portability restricted during and post-contract without add-on fee
  • Implementation costs (£22K) billed upfront and non-refundable
  • User licence cap: 150 seats — £95/seat/month above that

**Risks / Red Flags:**
  • Data lock-in clause (Section 11.4) limits export unless you pay a £5K data release fee
  • SLA credits capped at 10% of monthly fee — low penalty for downtime
  • No reference customers in your industry listed

**Questions to Ask the Vendor:**
  1. Can the data portability clause be renegotiated before signing?
  2. What has the actual uptime been for UK-based clients in the past 12 months?

Notice what that prompt did differently. It named the audience (CEO and CFO), the purpose (a decision meeting), and requested four distinct output sections — each serving a specific professional need. The AI didn't need to guess what mattered. That prompt took 45 seconds to write and saved roughly two hours of document review. This is the pattern that separates professionals who use AI summarisation casually from those who use it to genuinely reclaim time. The output above is something you'd actually send to a colleague or walk into a meeting with — not something you'd rewrite from scratch.

Turning a long email thread into a clear action summary

Prompt

Below is an email thread between my team and a client over the past two weeks (22 messages). I'm the account manager rejoining after annual leave. Give me: (1) a 4-sentence catch-up summary of what happened, (2) any decisions that were made without my input, (3) open items that still need a response from our side, and (4) anything that sounds like an unhappy client signal I should address proactively. [Paste thread]

AI Response

**Catch-Up Summary:** The client raised concerns on 3 Nov about delayed delivery of the Q4 report, which your colleague Jamie addressed by extending the deadline to 15 Nov. A revised scope was agreed informally over email but has not been documented in the contract. The client approved the updated brand guidelines on 7 Nov with minor edits. As of the last message (10 Nov), they are waiting on your team to confirm the revised delivery date officially.

**Decisions Made Without You:**
  • Deadline extended to 15 Nov (Jamie, 4 Nov)
  • Scope informally expanded to include two additional data cuts (agreed 6 Nov, undocumented)

**Open Items from Your Side:**
  • Official confirmation of 15 Nov delivery date
  • Written scope change documentation
  • Invoice update reflecting expanded scope

**Unhappy Client Signals:**
  • Phrase 'we expected this to be resolved by now' (8 Nov email from Sarah Chen) suggests frustration
  • Three follow-up emails sent without response before Jamie replied — client may feel deprioritised

Applying Summarisation by Role

Summarisation isn't a single workflow — it adapts to what your job actually demands. The underlying technique is consistent: give the AI context about who you are, what decision you're supporting, and what format you need. But the inputs and outputs shift significantly depending on your function. Understanding how your peers in adjacent roles use these tools also helps you collaborate better — when a colleague sends you an AI-generated summary, you'll know how to read it and what to verify.

  • Managers: Summarise team meeting transcripts into decisions, owners, and deadlines — then share directly in Slack or email
  • Analysts: Condense research reports into methodology, key findings, and data limitations for faster stakeholder briefings
  • Consultants: Extract client document themes and flag contradictions between stated goals and actual data
  • Marketers: Summarise competitor content, campaign reports, and customer feedback into trend signals
  • Finance professionals: Pull key figures, covenants, and risk clauses from lengthy financial documents or board packs
  • HR and People teams: Summarise engagement survey verbatims into themes without losing individual sentiment signals
  • Project managers: Convert status update emails into RAG (Red/Amber/Green) dashboards using structured prompts

Build Your Personal Summarisation Prompt Library

Goal: Produce a personal prompt library of three tested, refined summarisation prompts you can use immediately — a practical asset you'll return to every week.

1. Identify three document types you regularly deal with at work — for example: long email threads, meeting transcripts, and industry reports.
2. For each document type, write a dedicated summarisation prompt that names your role, your audience, the decision the summary supports, and the output format you need (bullets, table, numbered list, etc.).
3. Open ChatGPT or Claude and test each prompt against a real document from your recent work — use something you've already read so you can judge the output quality.
4. Note where the AI missed something important or included irrelevant detail — refine the prompt by adding one specific instruction to fix each gap.
5. Run the refined prompt again and compare the two outputs side by side.
6. Save your three final prompts in a single document titled 'My Summarisation Prompts' — store it somewhere you'll actually open (Notion, Google Docs, a pinned note).
7. For your best-performing prompt, write a one-line note explaining what made it work — this becomes the principle you apply when writing future prompts.
8. Share one of your prompts with a colleague and ask them to test it on their own content — their feedback will surface assumptions you baked in unconsciously.
9. Schedule a 10-minute calendar block once a month to revisit and update your prompt library as your role and document types evolve.
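A prompt library can also live as code rather than a note, which makes the templates easy to reuse and version. A minimal sketch of such a library, with illustrative keys and wording of my own (nothing here is prescribed by the lesson):

```python
# Sketch of a personal prompt library: each entry is a template
# with named placeholders, so one file covers all three document
# types. Keys and wording are illustrative examples only.
PROMPTS = {
    "email_thread": (
        "I'm a {role} rejoining this thread. Give me: (1) a short "
        "catch-up summary, (2) decisions made without my input, "
        "(3) open items needing a response from our side.\n\n{content}"
    ),
    "meeting_transcript": (
        "Summarise this transcript for a {role} into decisions, "
        "owners, and deadlines, as a bullet list.\n\n{content}"
    ),
    "industry_report": (
        "I'm a {role}. Summarise this report into key findings, "
        "data limitations, and implications for {audience}.\n\n{content}"
    ),
}

def render(doc_type: str, **fields) -> str:
    return PROMPTS[doc_type].format(**fields)

print(render("meeting_transcript",
             role="project manager",
             content="[paste transcript here]"))
```

Refining a prompt (step 4) then becomes a small, visible edit to one template, and sharing with a colleague (step 8) is just sending the file.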

Key Takeaways

  1. The tool you choose matters less than matching it to where your content already lives — Gemini for Google Workspace, Claude for massive documents, ChatGPT for structured output control
  2. Vague prompts produce vague summaries — always specify your role, your audience, the decision at stake, and the exact output format you need
  3. AI summarisation saves the most time on documents you'd otherwise skim — proposals, long threads, and reports are the highest-value targets
  4. Role-specific prompting is the unlock: the same document summarised for a CFO and a project manager should produce completely different outputs
  5. A prompt library is a professional asset — three tested prompts you reuse weekly compound in value over time, cutting hours from your information processing load
  6. Always verify figures, named decisions, and attributed quotes in AI summaries — the model compresses meaning and occasionally compresses accuracy along with it
  7. The real productivity gain isn't faster reading — it's faster, better-informed decisions made with less cognitive drain

Knowledge Check

You need to summarise a 150-page legal contract in a single AI session. Which tool is best suited to handle this volume without splitting the document into chunks?

A marketer wants to summarise 30 customer feedback responses into themes for a product team. Which prompt element would most improve the output quality?

You return from two weeks of annual leave and face 200 unread emails. What is the most effective AI-assisted approach?

An analyst asks ChatGPT to summarise a financial report and the output states that revenue grew 12% in Q3. What should the analyst do before sharing this with their CFO?

A consultant builds a prompt that produces excellent summaries of client strategy documents. What is the highest-value next step to maximise the long-term benefit of this work?
