Using ChatGPT for research and summarisation
In 2023, a team of analysts at McKinsey's London office faced a familiar problem at an unfamiliar scale. They had six days to produce a market entry report on Southeast Asian fintech for a banking client. The research alone — regulatory frameworks across five countries, competitive landscape, consumer adoption data — would normally eat three of those days. Instead, the team used ChatGPT to process and summarise over 40 source documents, draft initial competitive analyses, and surface questions the team hadn't thought to ask. They finished the research phase in 14 hours. The final report still required human judgment, client knowledge, and strategic framing. But the grunt work — the reading, the synthesising, the first-pass structuring — happened at a speed that changed what was possible.
This story isn't about AI replacing analysts. The McKinsey team still checked every claim, added proprietary client context, and made the strategic calls themselves. What changed was the ratio of thinking time to reading time. Before ChatGPT, a significant portion of a knowledge worker's day was spent getting up to speed — ingesting information before they could do anything useful with it. ChatGPT compresses that ingestion phase dramatically. The tension at the heart of this lesson is exactly that: ChatGPT is a genuinely powerful research and summarisation tool, but only when you understand what it's actually doing with your information, and where it will confidently lead you astray.
The core principle extracted from the McKinsey example isn't 'use AI to go faster.' It's more precise than that: use ChatGPT to handle information volume so your brain can handle information judgment. These are different cognitive tasks. Processing volume — reading, extracting, comparing — is exhausting and time-consuming but doesn't require your expertise. Judgment — deciding what matters, what's missing, what the client actually needs — is where your expertise lives. Separating these two tasks, and assigning each to the right tool, is what makes professionals genuinely more effective rather than just busier with AI.
What ChatGPT Is Actually Doing When It 'Researches'
How Summarisation Actually Works — and Why It Fails
Sarah Chen is a product manager at a mid-sized SaaS company in Toronto. Every Monday, she receives a stack of inputs: customer support tickets from the previous week, NPS survey comments, a Slack thread from the sales team about a lost deal, and two competitor blog posts. Before ChatGPT, she'd spend 90 minutes reading everything before her 10am stand-up. Now she pastes each source into a separate ChatGPT conversation and asks for a structured summary with key themes and any action items that seem relevant to product. That 90 minutes is now 20. But the first time Sarah tried this, she got burned. She pasted a long customer complaint thread and asked for a summary. ChatGPT produced a clean, confident summary that missed the most important complaint entirely — a recurring billing error mentioned only twice, buried in casual language.
What happened to Sarah illustrates a fundamental property of how large language models summarise. They're trained to identify and surface what's statistically prominent — what appears frequently, what's expressed clearly, what matches patterns the model learned during training. They're not trained to identify what's strategically important to you specifically. A billing error mentioned twice in casual phrasing loses out to a UX complaint mentioned eight times in direct language. The model has no way of knowing that billing errors have three times the churn impact of UX complaints at Sarah's company. That context lives in her head, not in the text she pasted.
The fix is straightforward once you understand the problem. Sarah now opens her summaries with a context-setting sentence before her actual request: 'I'm a product manager at a B2B SaaS company. Billing and pricing issues are our highest-churn risk factors. When summarising the following customer feedback, flag any mentions of billing, pricing, or payment problems first, even if they appear infrequently.' This single change transformed the usefulness of her summaries. The principle: ChatGPT summarises according to surface prominence unless you tell it what prominence should mean in your specific context. Your job is to supply that context before the model starts working.
Prompt
I'm a product manager at a B2B SaaS company where billing and pricing issues are our highest churn risk. I'm going to paste in a batch of customer support tickets from last week. Please summarise the key themes, but flag any mentions of billing, pricing, payment failures, or subscription problems as a separate section at the top — regardless of how often they appear. For everything else, group by theme and note approximate frequency. [PASTE TICKETS HERE]
AI Response
**Priority Flags — Billing & Pricing Issues:**
- 2 tickets mention incorrect invoice amounts after the March plan migration (Tickets #4821, #4839). Customers used the phrases 'wrong charge' and 'didn't authorise this.' Potential billing logic error post-migration.
- 1 ticket asks how to downgrade before the next billing cycle — possible churn signal.

**Other Themes by Frequency:**
- **Onboarding confusion** (6 tickets): Users struggling to connect third-party integrations in the first week...
- **Export functionality** (4 tickets): Requests for CSV export on the reporting dashboard...
- **Positive feedback** (3 tickets): Praise for recent UI updates to the main dashboard...
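If you process a batch like this every week, the context prompt is worth capturing in code rather than retyping it. Below is a minimal Python sketch of Sarah's pattern, assuming the OpenAI chat-completions message format; the function name, CONTEXT wording, and ticket structure are illustrative, not part of the lesson.

```python
# Sketch of Sarah's context-led batch summary as an OpenAI-style
# message list. CONTEXT wording and function names are illustrative.

CONTEXT = (
    "You summarise customer support tickets for a product manager at a "
    "B2B SaaS company. Billing and pricing issues are the highest churn "
    "risk: flag any mention of billing, pricing, payment failures, or "
    "subscription problems in a separate section at the top, regardless "
    "of frequency. Group everything else by theme with rough counts."
)

def build_ticket_summary_messages(tickets: list[str]) -> list[dict]:
    """Return a chat message list: context first, pasted tickets second."""
    pasted = "\n\n".join(f"Ticket {i + 1}: {t}" for i, t in enumerate(tickets))
    return [
        {"role": "system", "content": CONTEXT},
        {"role": "user", "content": f"Summarise these tickets:\n\n{pasted}"},
    ]

# The list would then go to a chat API call, e.g.
# client.chat.completions.create(model="gpt-4o", messages=...)
```

Keeping the context in a system message means every batch is summarised against the same definition of "important", which is exactly the fix Sarah landed on.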
A Different Industry, the Same Principle
Shift industries entirely. Dr. Priya Nair is a hospital pharmacist at a large NHS trust in Birmingham. Her team regularly needs to get up to speed on new drug interaction guidelines, updated NICE recommendations, and clinical trial summaries — dense, technical documents that take significant time to parse. She started using ChatGPT to create plain-language summaries of clinical guidance documents for her junior pharmacists, with a critical constraint: she always pastes the source document directly into the conversation rather than asking ChatGPT to recall information from memory. This is not a small distinction. When you paste the source text, ChatGPT is summarising what's in front of it. When you ask it to recall, it's generating from training data — which may be outdated, incomplete, or simply wrong about specific clinical details.
Dr. Nair's protocol is worth studying. She never asks ChatGPT to summarise clinical guidance from memory. She pastes the document, asks for a summary structured by patient population and key contraindications, and then — crucially — she reads the summary against the original before distributing it to her team. The summary saves her 40 minutes of drafting time. The verification step takes 10 minutes. Net saving: 30 minutes, with no reduction in accuracy, because a human expert is still the last checkpoint. This is the model that works. ChatGPT handles the drafting and structuring; the domain expert handles the verification. Neither is doing the other's job.
| Use Case | Best Approach | Biggest Risk | Verification Needed |
|---|---|---|---|
| Summarising documents you paste in | Paste full text + context prompt | Missing low-frequency but important content | Skim original for anything critical |
| Researching topics from training data | Ask for structured overview + ask it to flag uncertainty | Outdated info, hallucinated specifics | Cross-check key facts with primary sources |
| Live web research (Perplexity / ChatGPT Browse) | Ask for summary with citations | Misrepresenting source content | Click through on any stat or claim you'll use |
| Comparing multiple sources | Paste each source separately, then ask for comparison | False equivalence between unequal sources | Check that sources are actually comparable |
| Summarising data (spreadsheets, tables) | Use Code Interpreter / Advanced Data Analysis | Misreading column headers or data types | Validate totals and category labels manually |
When Research Means Synthesis, Not Just Summary
Marcus Webb is a marketing consultant in Chicago who specialises in B2B technology companies. His clients frequently ask him to produce competitive landscape analyses — who the main players are, how they position themselves, what their pricing models look like, and where the gaps are. Before ChatGPT, this meant 2-3 days of website crawling, G2 and Capterra reviews, LinkedIn research, and press release archaeology. Now Marcus uses a two-stage process. In stage one, he uses Perplexity AI — not ChatGPT — to pull live, cited information about each competitor. Perplexity is built for this: it searches the web in real time and surfaces sources alongside every claim, making verification fast. In stage two, he pastes those Perplexity outputs into ChatGPT and asks it to synthesise across competitors, identify positioning patterns, and flag gaps in the market.
Marcus's two-tool approach is worth internalising as a pattern. Perplexity for live, cited facts. ChatGPT for synthesis, pattern recognition, and structured thinking across those facts. He's not asking ChatGPT to know things — he's asking it to think across things he's already verified. The competitive analyses he produces in this workflow take about 4 hours instead of 2-3 days. His clients don't know or care how he produced them. They care that the analysis is sharp, accurate, and delivered fast. The tools are infrastructure, not the product. Marcus's judgment about what the analysis should argue — that's still entirely his.
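The stage-two half of Marcus's workflow, synthesis across already-verified notes, can be sketched as a simple prompt builder. This is an illustrative sketch, not Marcus's actual tooling; the stage-one Perplexity outputs are assumed to have been collected and verified by hand.

```python
# Illustrative sketch of stage two of the two-tool workflow: verified,
# cited notes (gathered manually via Perplexity in stage one) are merged
# into a single synthesis request for ChatGPT. Names are hypothetical.

def build_synthesis_prompt(competitor_notes: dict[str, str]) -> str:
    """Combine per-competitor research notes into one synthesis request."""
    sections = "\n\n".join(
        f"### {name}\n{notes}" for name, notes in competitor_notes.items()
    )
    return (
        "The notes below were gathered separately and verified, with "
        "citations. Do not add facts that are not in the notes. Identify "
        "positioning patterns across these competitors and flag gaps in "
        "the market that none of them address.\n\n" + sections
    )
```

The key constraint is the instruction not to add facts: ChatGPT is being asked to think across verified material, not to know things.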
The Two-Tool Research Stack That Actually Works
What This Means When You Sit Down to Work
The practical implication of everything above is that your prompts for research and summarisation need to do more work than most people expect. A prompt like 'summarise this article' is technically valid but produces a generic output — the model's best guess at what a summary should contain, with no knowledge of why you need it, what you'll do with it, or what would make it useful versus useless to you. Compare that to: 'I'm preparing for a 30-minute briefing with a CFO who is skeptical about AI investment. Summarise the following article, focusing on ROI data points, implementation costs, and any risks mentioned. Flag any claims that seem to lack supporting evidence.' Same article, completely different output. The second prompt turns ChatGPT from a generic summariser into something that's actually working for your specific situation.
The three variables that transform a research or summarisation prompt are role, purpose, and priority. Role: who are you, and what's your context? Purpose: what will you do with this output? Priority: what matters most in this specific situation, even if it appears infrequently in the source material? You don't need all three in every prompt. A quick personal summary for your own notes doesn't need the same scaffolding as a summary you'll distribute to a leadership team. But the more consequential the output, the more these three variables earn their place in the prompt. Professionals who understand this write better prompts on their second attempt than most people write on their twentieth, simply because they're giving the model the right inputs to work with.
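As a concrete illustration, the three variables can be turned into a small template so that none of them gets dropped under deadline pressure. This is a hypothetical Python sketch; the function and field wording are ours, not the lesson's.

```python
# Illustrative sketch: the lesson's three prompt variables (role,
# purpose, priority) as a reusable template, so high-stakes prompts
# always carry all three.

def research_prompt(role: str, purpose: str, priority: str, request: str) -> str:
    """Assemble a context-led research prompt from the three variables."""
    return (
        f"Context: {role}\n"
        f"I will use your output to: {purpose}\n"
        f"What matters most here, even if it appears infrequently: {priority}\n\n"
        f"Task: {request}"
    )
```

For a quick personal note you might skip purpose and priority; for a leadership-facing summary, all three earn their place, which is the scaling rule described above.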
There's a subtler point worth making about research specifically. ChatGPT is excellent at helping you think about what you don't know yet. After you've asked for a summary or overview of a topic, ask a follow-up: 'What are the most important questions I haven't asked about this topic?' or 'What aspects of this are most contested or uncertain among experts?' These meta-research prompts are underused and genuinely valuable. They help you identify the gaps in your understanding before you go into a meeting, write a report, or make a recommendation. The McKinsey team mentioned at the start of this lesson used exactly this technique — after their initial research summaries, they asked ChatGPT what questions a skeptical client might ask that the current research didn't answer. Several of those questions became section headings in the final report.
Goal: Produce two context-led summaries that are demonstrably more useful than generic summaries of the same content, and identify the prompt variables that made the difference.
1. Find a long-form piece of content relevant to your work — an industry report, a dense email thread, a set of customer feedback responses, or a competitor's website copy. It should be at least 500 words.
2. Open ChatGPT (free or Plus) and start a new conversation. Before pasting any content, write a context sentence: who you are, what you're working on, and what makes certain information more important than other information in your situation.
3. Paste the full source text into the conversation after your context sentence, then ask for a structured summary. Specify the format you want: bullet points, sections, a table, or a brief narrative.
4. Read the summary against the original source. Note one thing the summary captured well and one thing it missed, minimised, or got wrong.
5. Write a follow-up prompt that corrects the miss: 'You didn't mention X. Please revise the summary to include this and explain its significance given the context I provided.'
6. After the revised summary, ask: 'What are the three most important questions about this topic that this document doesn't answer?' Note whether these questions are useful to you.
7. Save both the original prompt and the revised prompt in a document. Label them 'v1' and 'v2.' This becomes the start of your personal prompt library for this type of task.
8. Repeat the exercise with a second piece of content from a different context (e.g., if the first was customer feedback, make the second a market research article). Compare how much your v2 prompt approach improves the output versus your initial attempt.
9. Write two sentences summarising what you'd do differently next time — this becomes your personal summarisation prompt checklist.
What the Examples Teach Us
- ChatGPT compresses information volume, not information judgment — the McKinsey team still made every strategic call themselves, but reclaimed days of reading time.
- Summarisation defaults to surface prominence — what appears frequently and clearly wins out unless you explicitly tell the model what strategic importance means in your context.
- Pasting source text is fundamentally safer than asking ChatGPT to recall information from memory — Dr. Nair's protocol protects against hallucination by keeping ChatGPT in 'synthesis mode' rather than 'recall mode.'
- Verification time is a feature, not a failure — the 10 minutes Dr. Nair spends checking ChatGPT's clinical summaries is the professional standard, not a sign that the tool isn't working.
- Perplexity AI and ChatGPT are complements, not competitors — Perplexity finds and cites live facts; ChatGPT synthesises and structures them. Marcus Webb's two-stage workflow is a pattern worth stealing.
- The three prompt variables that transform research outputs are role, purpose, and priority — not all three are needed every time, but high-stakes outputs demand all three.
- Meta-research prompts ('what haven't I asked?') surface the gaps in your understanding before they become gaps in your work product.
Key Takeaways
- ChatGPT's training data has a cutoff date that varies by model — for anything time-sensitive, use ChatGPT's browsing tool or Perplexity AI, not ChatGPT's memory.
- Always paste source documents directly into the conversation when accuracy matters — never rely on ChatGPT to recall specific facts, figures, or guidance from training data alone.
- Your context shapes the output — a prompt without role, purpose, and priority produces a generic summary; a prompt with all three produces something you can actually use.
- Build verification into your workflow as a fixed time budget, not an optional step — 10 minutes of checking on a 40-minute AI-assisted task is a good ratio for high-stakes outputs.
- After any research summary, ask ChatGPT what questions the content doesn't answer — this is one of the most underused and immediately valuable research techniques available.
How a 12-Person Consulting Firm Cut Research Time by 60%
Redline Strategy, a boutique management consultancy in Toronto, faced a familiar problem. Every new client engagement started the same way: two or three days of a junior consultant grinding through industry reports, analyst notes, and news archives to build a briefing document that senior partners would skim in ten minutes. The work was necessary but brutally inefficient. In late 2023, their operations lead trialled a new workflow using ChatGPT to handle the initial synthesis layer — not to replace research, but to compress the time between raw sources and structured insight. The results surprised even the sceptics on the team.
The shift wasn't about dumping questions into ChatGPT and hoping for the best. Redline's team discovered that the quality of the output tracked almost exactly with the quality of the prompt structure. When consultants gave ChatGPT a clear frame — the client's industry, the business question being investigated, the format the partner expected — the summaries were tight, usable, and accurate. When they asked vague questions, they got vague answers. This is the pattern you'll see across every professional context: ChatGPT amplifies the clarity you bring to it.
What Redline's team had stumbled onto is something researchers call the 'framing effect' in AI prompting. The model doesn't just answer your question — it interprets the context you've signalled and adjusts tone, depth, and structure accordingly. A prompt that specifies 'I'm a strategy consultant preparing a client briefing for a CFO' produces fundamentally different output than 'tell me about the retail industry.' Same underlying model, radically different results. The principle: your prompt is the steering wheel, not just the ignition key.
What ChatGPT Actually Knows — and When
The Anatomy of a Research Prompt That Actually Works
Most people approach ChatGPT research prompts the way they approach a search engine — short, keyword-heavy, context-free. 'Summarise the pharmaceutical industry' is a search query. It produces a generic Wikipedia-level response that helps no one. A research prompt is closer to a briefing you'd give a smart analyst on their first day. You tell them who the output is for, what decision it needs to support, what format works best, and what you already know so they don't waste time covering ground you've already covered. That mental model — smart analyst, first day — transforms how you write prompts.
The four components that consistently produce strong research outputs are: role context (who you are and why you're researching), the specific question or topic (narrow, not broad), output format instructions (bullet points, numbered list, short paragraphs, table), and constraints (word count, depth level, what to exclude). None of these are complicated. But most users include one or two and wonder why the output feels generic. Including all four takes an extra thirty seconds of prompt writing and saves five minutes of output editing. That arithmetic compounds across hundreds of research tasks.
There's also a fifth element that separates intermediate from advanced users: the 'use case anchor.' This is a single sentence that tells ChatGPT what the output will be used for — a board presentation, a client email, a personal decision, a team training. The model adjusts vocabulary, assumed knowledge level, and emphasis based on this anchor. Tell it you're preparing a board presentation and it skews toward strategic framing. Tell it you're writing a training document for new hires and it adds more explanatory context. The use case anchor is the easiest upgrade most beginners haven't discovered yet.
Prompt
I'm a management consultant preparing a market overview briefing for a CFO client in the UK grocery retail sector. They're evaluating whether to expand into convenience store formats. Please summarise the key dynamics of the UK convenience retail market, covering:
- Market size and growth trends
- The 3-4 dominant players and their strategies
- Consumer behaviour shifts driving convenience demand
- Key risks or headwinds

Format this as four short sections with bullet points under each. Keep it factual and concise — this is background research, not a recommendation. Avoid jargon the CFO would find condescending.
AI Response
**UK Convenience Retail Market Overview**

**Market Size & Growth**
• The UK convenience sector is valued at approximately £47bn (IGD, 2023), growing at ~4% annually
• Convenience outpaces the broader grocery market, which grew at under 2% in the same period
• Post-pandemic normalisation has sustained rather than reversed convenience shopping habits

**Dominant Players & Strategies**
• Tesco Express / One Stop: scale-led, loyalty card integration, urban densification
• Co-op: community positioning, own-label premium push, franchise expansion
• BP/EG Group forecourt retail: fuel-to-food transition, premium food-to-go
• Booker/Londis/Budgens: symbol group model, independent retailer support

**Consumer Behaviour Drivers**
• Shift toward smaller, more frequent shops (average basket size down, visit frequency up)
• Demand for food-to-go and meal solutions grew 18% 2021–2023
• Younger urban demographics under-indexed for weekly supermarket trips

**Key Risks & Headwinds**
• Rental costs in high-footfall urban locations compressing margins
• Competition from dark kitchens and rapid delivery (Getir, Gopuff) for impulse missions
• Wage inflation affecting labour-intensive small-format operations
Inside a Hospital Trust's Content Summarisation Problem
NHS England's communications teams produce an extraordinary volume of guidance documents — clinical protocols, policy updates, procurement notices, public health briefings. A comms manager at a mid-sized hospital trust described her pre-ChatGPT workflow as 'reading everything twice and still missing things.' A 40-page infection control update would land in her inbox on a Tuesday and she'd need a plain-English summary for department heads by Thursday. She was the bottleneck. When she started using ChatGPT to summarise pasted document content, that bottleneck dissolved almost immediately.
Her technique was methodical. She would paste the full document text directly into ChatGPT (well within GPT-4o's 128,000-token context window, which accommodates tens of thousands of words) and ask for a structured summary tailored to a specific audience — in her case, department heads with clinical backgrounds but limited time. She learned quickly to specify what those department heads actually needed to act on: what had changed from previous guidance, what the compliance deadlines were, and what actions were required at ward level. ChatGPT consistently produced summaries that her colleagues described as clearer than the originals. That's not a criticism of the source documents — it's a demonstration of what targeted summarisation prompting can do.
Choosing the Right Summarisation Approach
Not all summarisation tasks are the same. A legal contract summary needs different emphasis than a market research report summary. A news article summary for personal awareness is structurally different from a competitor analysis summary for a sales team. ChatGPT handles all of these — but only if you tell it which one you need. The table below maps the most common professional summarisation tasks to the prompt approach that produces the best output.
| Summarisation Task | Best Prompt Approach | Key Instruction to Include | Watch Out For |
|---|---|---|---|
| Long policy or guidance document | Paste full text, ask for structured summary | Specify audience role and what they need to act on | Hallucinated details if you describe doc without pasting it |
| Industry research report | Paste executive summary + key sections | Ask for 'key findings relevant to [your context]' | Over-summarising — ask for 3-5 specific insights, not a synopsis |
| Competitor website or press release | Paste content, ask for positioning analysis | Ask what claims they make, what they emphasise, what's absent | ChatGPT may be diplomatic — ask it to be analytical, not neutral |
| Academic paper or technical study | Paste abstract + conclusion + key findings | Ask for plain-English explanation + practical implications | Jargon bleed — specify 'no technical terms without explanation' |
| Meeting transcript or notes | Paste transcript, ask for decisions + actions | Ask for decisions made, open questions, and owner-action pairs | Missing nuance — flag if a decision was contested or unclear |
| News article cluster (multiple articles) | Summarise each, then ask for synthesis | After individual summaries, ask 'what's the common thread?' | Recency bias — the model may over-weight the newest pieces; older articles can be more analytically useful |
The Marketing Director Who Stopped Reading Every Brief
At a mid-sized SaaS company, the marketing director received an average of eleven briefs, reports, and strategy documents per week from her team. She read perhaps four of them fully. The rest got skimmed or ignored, which created downstream problems when she'd miss a key assumption buried on page six of a campaign brief. Her solution, once she started using ChatGPT seriously, was to create a standing summarisation prompt she used for every document her team sent. She called it her 'director filter' — a prompt that extracted the three things she needed to make a decision, flagged any assumptions she should question, and listed any open items requiring her input.
The downstream effect was significant in ways she hadn't anticipated. Because she was now consistently engaging with the substance of every brief — even if via a ChatGPT summary — her feedback to her team became more specific. She was catching strategic gaps she'd previously missed. Her team noticed and started writing tighter briefs, because they knew she was actually reading them. The tool changed not just her workflow but the quality of input she received. This is a pattern worth naming: when you use AI to raise your floor of engagement, the people working with you often raise their floor too.
Build Your Own 'Standing Prompt' for Recurring Tasks
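The 'director filter' described above is a natural candidate for a standing prompt: fixed instructions stored once, with a slot for whatever document arrives. Here is a minimal sketch using Python's string.Template; the filter wording is paraphrased from the example, and all names are illustrative.

```python
# Sketch of a standing prompt: the "director filter" stored once and
# reused for every incoming brief. Names and wording are illustrative.
from string import Template

DIRECTOR_FILTER = Template(
    "I'm a marketing director reviewing a brief from my team. From the "
    "document below, give me:\n"
    "1. The three things I need to know to make a decision.\n"
    "2. Any assumptions I should question.\n"
    "3. Open items that require my input.\n\n"
    "Document:\n$document"
)

def apply_filter(document_text: str) -> str:
    """Drop a document into the standing prompt's slot."""
    # Template keeps the fixed instructions separate from the variable text
    return DIRECTOR_FILTER.substitute(document=document_text)
```

The payoff is consistency: every brief is interrogated with the same three questions, so improvements to the filter compound across every future document.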
What Changes When You Paste Your Own Content
There's an important distinction between asking ChatGPT to recall information from its training data and asking it to work with content you supply. When you paste a document, report, or transcript into the conversation, you're not testing its memory — you're giving it a working document and asking it to process that specific text. This is called 'grounded summarisation,' and it's far more reliable than asking ChatGPT to summarise topics from memory, because the model is constrained to what you've given it rather than drawing on potentially outdated or imprecise training knowledge.
Grounded summarisation also sidesteps the hallucination risk that makes some professionals wary of ChatGPT for research. Hallucination — where the model generates plausible-sounding but factually incorrect information — is most likely to occur when ChatGPT is recalling specific facts, statistics, citations, or names from its training data. When it's summarising text you've pasted, it's working from a fixed source. It can still misread emphasis or omit things, but it's not inventing. For professional research work, defaulting to paste-and-summarise rather than ask-and-recall is one of the most important reliability habits you can build.
The practical workflow that follows from this is straightforward. For background and conceptual research — understanding how a market works, what a framework means, how a technology functions — ask ChatGPT directly, knowing it's drawing on broad training knowledge. For specific documents, reports, transcripts, or articles — paste the content and ask ChatGPT to work from that. For anything where accuracy is high-stakes (legal, medical, financial specifics), treat ChatGPT output as a first draft that requires verification, not a final answer. These three modes cover 95% of professional research and summarisation scenarios.
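For the paste-and-summarise mode, it also helps to fence the pasted text with explicit delimiters so the model cannot confuse your instructions with the document's own content. Below is a small sketch of one reasonable convention; the <source> tag wording is an assumption, not an official format.

```python
def grounded_summary_prompt(source_text: str, audience: str) -> str:
    """Build a grounded summarisation prompt: the model is told to use
    only the text between the delimiters, and to say so when the
    document is silent on something."""
    return (
        f"Summarise the document between the <source> tags for this "
        f"audience: {audience}. Use only information that appears inside "
        f"the tags; if something is not covered there, say so rather "
        f"than filling the gap from memory.\n\n"
        f"<source>\n{source_text}\n</source>"
    )
```

The explicit "say so rather than filling the gap" instruction reinforces the grounding: it gives the model a sanctioned way to admit the document is silent instead of reverting to recall mode.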
Goal: Experience the practical difference between recalled and grounded research outputs, and produce a usable research brief that combines both modes effectively.
1. Identify a current work topic you need to understand better — a market, a competitor, a process, a regulation, or an industry trend.
2. Open ChatGPT (GPT-4o if available) and write a role-framed research prompt: include your job context, the specific question, the format you want (bullet points or short sections), and what you'll use the output for.
3. Read the output critically. Highlight two or three specific claims or data points that matter most to your work.
4. Find one real document related to your topic — a report, article, press release, or briefing (PDF or webpage text). Copy the full text.
5. Paste the document text into a new ChatGPT conversation. Write a grounded summarisation prompt asking for: key findings, what's changed or new, and any implications for your specific context.
6. Compare the two outputs — the recalled research from Step 2 and the grounded summary from Step 5. Note where they agree, where they differ, and which feels more reliable for your purpose.
7. Write a 3-bullet 'research brief' combining the most useful elements from both outputs — as if you were sending it to a colleague who needs to get up to speed in two minutes.
8. Reflect: which prompt structure (Step 2 or Step 5) produced output you'd be more confident acting on, and why?
What the Examples Above Have in Common
- Every effective user gave ChatGPT explicit context about who they are and why they're researching — the model performs better with role framing than without it.
- The most reliable summarisation work happened when users pasted source content rather than asking ChatGPT to recall specific facts independently.
- Output format instructions (bullet points, sections, tables) weren't cosmetic — they shaped how ChatGPT structured its thinking, not just how it presented it.
- The 'use case anchor' — telling ChatGPT what the output will be used for — consistently improved relevance and appropriate depth.
- Users who built standing prompts for recurring tasks compounded their time savings and produced more consistent output quality.
- The productivity gains weren't just personal — they changed the quality of interactions with colleagues, clients, and teams downstream.
Key Principles So Far
- Prompt clarity is the primary variable in output quality — vague in, vague out is a reliable law.
- Four prompt components drive strong research outputs: role context, specific question, format instructions, and constraints.
- The use case anchor (what the output will be used for) is the most underused prompt upgrade available to beginners.
- Grounded summarisation — working from pasted content — is more reliable than recall-based research for specific, high-stakes information.
- Hallucination risk is highest when ChatGPT recalls specific facts; lowest when it summarises content you've supplied.
- Standing prompts for recurring research tasks are a high-return investment — build them once, use them repeatedly.
- ChatGPT's knowledge cutoff means real-time research requires either pasted content or a tool like Perplexity AI.
When Research Gets Messy: Handling Conflicting Sources and Complex Topics
In 2023, the consulting firm McKinsey & Company published findings showing that knowledge workers spend nearly 20% of their working week searching for information or tracking down colleagues who have it. That's one full day, every week, lost to retrieval. When the BBC's digital journalism team began experimenting with AI-assisted research in late 2022, their goal wasn't to replace journalists — it was to compress that retrieval time so reporters could spend more hours on analysis and interviews. The tension they hit immediately was a familiar one: speed versus accuracy. ChatGPT could surface a summary in seconds, but how could reporters trust what it produced on fast-moving, contested topics?
The BBC team's solution was procedural. They used ChatGPT not as a final source but as a first-pass research assistant — generating structured overviews of topics, surfacing the key questions worth investigating, and drafting comparison frameworks that human researchers then verified. One senior editor described it as 'having a very well-read intern who works at superhuman speed but occasionally misremembers footnotes.' That framing is precise and useful. ChatGPT's knowledge has a training cutoff, it can confabulate citations, and it reflects whatever biases exist in its training data. But for building a research scaffold — understanding a topic's landscape before diving deep — it is exceptionally capable.
What the BBC example teaches is a principle that applies across industries: use ChatGPT to structure complexity, not to certify facts. When a topic has multiple stakeholders, competing interpretations, or a long history, ChatGPT excels at mapping that complexity into something navigable. Your job is then to verify the map against primary sources. This division of labour — AI for structure, human for verification — is the pattern that separates professionals who use AI well from those who either over-trust or under-use it.
Mapping a Complex Topic: The Pharma Example
A medical affairs manager at a mid-sized pharmaceutical company needed to brief her leadership team on the competitive landscape for GLP-1 receptor agonists — the class of drugs that includes Ozempic and Wegovy. The landscape was sprawling: multiple manufacturers, ongoing clinical trials, evolving regulatory positions across the EU and US, and intense media coverage mixing science with hype. Reading everything herself would take days. Instead, she opened ChatGPT and treated it like a knowledgeable colleague who had already done the reading. She asked it to map the competitive field, identify the key unresolved questions in the science, and flag where expert opinion diverged.
The output gave her a structured overview she could verify and build on in a few hours rather than a few days. She used the 'key unresolved questions' section as the actual agenda for her leadership briefing — framing the conversation around genuine uncertainty rather than false confidence. Her prompts weren't complex. She asked for one thing at a time, specified her audience, and explicitly requested that ChatGPT flag areas of scientific disagreement rather than smooth them over. That last instruction is important: by default, ChatGPT tends toward synthesis. Asking it to surface conflict and contradiction makes it far more useful for nuanced research.
Prompt
I'm preparing a briefing for a leadership team on GLP-1 receptor agonists — the drug class including Ozempic and Wegovy. Please give me: (1) a brief overview of the competitive landscape including the main manufacturers and their key products, (2) the 3-4 most important unresolved scientific or regulatory questions, and (3) areas where expert or stakeholder opinion is genuinely divided. Flag any area where your information may be outdated or uncertain. My audience is senior executives with a business background, not medical specialists.
AI Response
**Competitive Landscape Overview**

The GLP-1 market is currently dominated by Novo Nordisk (semaglutide — Ozempic for diabetes, Wegovy for obesity) and Eli Lilly (tirzepatide — Mounjaro/Zepbound). Roche, Pfizer, and AstraZeneca all have compounds in late-stage trials as of early 2024. The market is projected to exceed $100 billion annually by 2030, though supply constraints remain a significant near-term bottleneck.

**Key Unresolved Questions**

1. Long-term cardiovascular and metabolic outcomes beyond 3-5 year trial windows
2. Optimal patient selection — who benefits most, and at what cost-effectiveness threshold?
3. Regulatory appetite for broader obesity indications beyond current approvals
4. Whether weight regain post-discontinuation undermines the case for long-term use

**Areas of Genuine Expert Division**

- Health economists disagree sharply on cost-effectiveness at current pricing, particularly within public payer systems
- Endocrinologists and GPs differ on prescribing thresholds and monitoring requirements
- Debate continues over whether cardiovascular benefits are drug-class effects or specific to individual molecules

*Note: Regulatory positions and trial data move quickly in this space. I'd recommend verifying current FDA/EMA status directly before your briefing.*
Summarising for Different Audiences: The Same Content, Different Outputs
A strategy consultant at Deloitte working on a retail digitalisation project faced a common professional problem: she had a 60-page industry report, a technical team who needed the data, and a CFO who needed the business case — and two hours before back-to-back meetings. She pasted the executive summary and key findings sections into ChatGPT and asked for two separate summaries: one technical, one financial. The outputs weren't perfect first drafts, but they were excellent starting points. She spent 20 minutes editing rather than 90 minutes writing from scratch. The critical move was specifying the audience and the purpose of each summary explicitly in the prompt.
This illustrates something that becomes obvious once you see it: the same source material needs to be reframed entirely depending on who reads it and why. A technical audience wants mechanisms and data. A financial audience wants risks, costs, and returns. An operational audience wants process changes and timelines. ChatGPT can produce all three from the same input — but only if you tell it who is reading and what decision they need to make. Without that instruction, it defaults to a generic synthesis that serves no audience particularly well.
| Research/Summary Task | Best Approach | Key Prompt Element | Watch Out For |
|---|---|---|---|
| Overview of an unfamiliar topic | Ask for a structured map with key players, questions, and debates | Specify your existing knowledge level | Outdated information on fast-moving fields |
| Summarising a long document | Paste the text; ask for a summary with a specific audience and purpose | State the reader's role and what decision they face | Hallucinated details if document is truncated |
| Comparing options or products | Request a structured comparison with explicit criteria | List your evaluation criteria upfront | Generic criteria that don't fit your context |
| Surfacing expert disagreement | Explicitly ask ChatGPT to flag where opinion diverges | Add: 'show me where experts disagree' | Default tendency to smooth over conflict |
| Preparing a briefing or report | Use ChatGPT for first draft; you edit and verify | Specify format, length, and audience seniority | Unverified statistics presented as fact |
The Legal Sector: Precision Under Pressure
A paralegal at a London-based commercial law firm used ChatGPT to accelerate background research on contract disputes involving force majeure clauses — a topic that exploded in complexity after COVID-19 generated thousands of novel cases. Her task was to give the senior partner a concise briefing on how courts in England, the US, and Singapore had treated force majeure claims in supply chain disputes since 2020. She didn't ask ChatGPT to cite specific cases — she knew it might confabulate those — but she asked it to summarise the general judicial trends, the factors courts had weighed most heavily, and the areas where legal interpretation still varied significantly by jurisdiction.
The output gave her a framework she could take to Westlaw and LexisNexis to find actual case citations. She found the real cases in 40 minutes rather than half a day. Her senior partner got a verified briefing. The paralegal's instinct — use ChatGPT to understand the shape of a problem, use specialist databases to find the proof — is precisely the right mental model for high-stakes research. ChatGPT is a thinking partner and a scaffold builder. The verification layer is always yours.
What This Means When You Sit Down to Work
The professionals in these examples — the journalist, the pharma manager, the consultant, the paralegal — all share one habit: they arrive at ChatGPT with a clear picture of what they need and who will use it. They don't type a vague question and hope. They specify the audience, the format, the level of detail, and — critically — where they want uncertainty surfaced rather than hidden. That specificity is the difference between a generic output you discard and a structured draft you actually use. The investment in a well-constructed prompt pays back in the quality and usability of what you receive.
There's also a rhythm to effective AI-assisted research that becomes natural quickly. Start broad — ask for a landscape overview to understand what you're dealing with. Then go specific — ask targeted follow-up questions on the areas that matter most. Then shift format — ask for the same information restructured for your specific audience and purpose. This three-stage pattern (orient, interrogate, reformat) turns ChatGPT from a search engine substitute into a genuine research accelerator. Each stage builds on the last, and the context you've established in earlier exchanges makes later prompts more precise and more useful.
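The orient, interrogate, reformat rhythm works precisely because all three stages happen in one conversation whose history accumulates. The sketch below models that session as a growing message list. The message format only mirrors the shape of common chat APIs, and the `send()` stub is a placeholder standing in for a real ChatGPT call; nothing here is an official interface.

```python
def send(messages):
    """Placeholder for a real model call; returns a canned reply so
    the session structure can be shown end to end."""
    return f"[model reply to: {messages[-1]['content'][:40]}...]"

messages = []

def ask(prompt):
    # Each turn appends to the same history, so later stages inherit
    # the context built up by earlier ones.
    messages.append({"role": "user", "content": prompt})
    reply = send(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

# Stage 1 - orient: broad landscape overview.
ask("Give a structured overview of force majeure case law trends since 2020.")
# Stage 2 - interrogate: drill into the area that matters most.
ask("Go deeper on how courts have weighed foreseeability in supply disputes.")
# Stage 3 - reformat: restructure for the actual audience and decision.
ask("Reformat the key findings as a one-page briefing for a senior partner.")
```

The point of the structure is that the stage-three prompt can be short: by the time you ask for the briefing, the conversation already contains the overview and the depth, so you only need to specify the audience and the format.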
Finally, remember that the output is always a starting point, not a finishing point. Every professional in these examples edited, verified, and added their own expertise before anything reached a client, a senior stakeholder, or a publication. ChatGPT compresses the time between knowing nothing and having a working draft. What happens to that draft — the judgment calls, the verification, the framing for your specific context — is irreducibly human work. That division isn't a limitation of the technology. It's the right way to use it.
Goal: Produce a real, edited research brief on a live work topic — structured for a specific audience, verified for accuracy, and ready to use or share.
1. Identify a topic you're genuinely researching right now — a competitor, a market trend, a policy area, a technology, or an industry dynamic relevant to your work.
2. Open ChatGPT and write a prompt asking for a structured overview: key players, key questions, and areas of genuine debate or uncertainty. Specify your existing knowledge level (e.g. 'I know the basics but not the detail').
3. Read the output and highlight two or three areas where you want more depth or where something surprises you.
4. Write a follow-up prompt asking ChatGPT to go deeper on one of those areas — be specific about what you want to understand better.
5. Now identify who in your organisation needs to understand this topic and what decision they face. Write a third prompt asking ChatGPT to reformat the key findings as a concise briefing for that specific person and purpose.
6. Copy the briefing output into a document. Spend 10-15 minutes editing it — correcting anything you know to be wrong, adding context only you have, and adjusting the tone to match your organisation's style.
7. Flag any factual claims or statistics in the document that need independent verification before you share it, and note the source you'll use to check each one.
8. Save the document as your working research brief. You now have a verified, audience-ready summary produced in a fraction of the usual time.
- Use ChatGPT to map complexity first — understand the landscape before drilling into specifics.
- Always specify your audience and their purpose: what decision are they making, and what do they already know?
- Explicitly ask ChatGPT to surface disagreement and uncertainty rather than letting it default to smooth synthesis.
- Follow a three-stage research rhythm: orient (overview), interrogate (depth), reformat (audience-specific output).
- Treat every output as a first draft — your verification, judgment, and context are what make it professionally usable.
- End high-stakes research prompts by asking ChatGPT to flag what it doesn't know or where you should verify independently.
- The time saving is real and significant — but it lives in the drafting and structuring phase, not in replacing your expertise.
- ChatGPT excels at structuring complex topics — mapping key players, questions, and debates quickly and clearly.
- Summaries only serve their purpose when the prompt specifies the audience's role and the decision they need to make.
- Asking ChatGPT to flag uncertainty produces more honest, more useful outputs than accepting polished synthesis at face value.
- The AI handles structure and speed; you handle verification and judgment — that division produces the best results.
- A three-stage prompting rhythm (orient, interrogate, reformat) turns a single research session into a complete working brief.
A marketing analyst asks ChatGPT to summarise a 50-page trends report. The output is well-written but feels generic and doesn't match what her director needs. What is the most likely cause?
Which of the following best describes the right way to use ChatGPT for research on a complex, contested topic?
A paralegal needs to research how courts have treated a specific type of contract clause. She asks ChatGPT to list relevant case citations. Why is this a risky approach?
You want ChatGPT to give you a more honest, uncertainty-aware research summary rather than a confident-sounding synthesis. What should you add to your prompt?
A consultant uses ChatGPT to research a competitor and receives a detailed, well-structured overview. She shares it with her client immediately without editing or verification. What professional risk has she taken?
