Multi-step workflows: using AI across a complex project
~20 min read
It's Tuesday morning. Your VP has just asked you to produce a competitive analysis, a client-facing summary deck, and a set of recommendations — all by Friday. Three weeks ago, that request would have meant four late nights, a lot of copy-pasting between browser tabs, and a final document that felt rushed. Last week, a colleague on your team ran a nearly identical project in two days using ChatGPT and Claude in sequence, handing outputs from one stage directly into the next. She didn't work faster by typing faster. She worked faster by treating AI as a relay team, not a single sprinter. That shift — from one-off AI queries to connected, multi-step workflows — is what separates professionals who get marginal time savings from those who fundamentally change how complex work gets done.
What a Multi-Step AI Workflow Actually Is
A multi-step workflow means deliberately chaining AI interactions so that the output of one prompt becomes the structured input for the next. This is different from a long conversation with a single AI tool. You might use Perplexity to gather and cite current market data, paste that research into Claude to synthesize patterns and tensions, then move the synthesized analysis into ChatGPT to reframe it for a specific audience — say, a skeptical CFO. Each tool is doing what it does best. Perplexity's real-time web access makes it the right choice for live data. Claude handles long documents and nuanced reasoning exceptionally well. ChatGPT excels at tone-shifting and audience adaptation. The workflow is the strategy; the individual prompts are the tactics.
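The chaining idea can be sketched in code as a pipeline of stage functions. This is a minimal illustration, not a real integration: each stage body is a stub standing in for a call to a different AI tool, and every function name here is an assumption for the example.

```python
# Sketch of a multi-step workflow as a pipeline of stage functions.
# Stage bodies are stubs; in practice each would call a different AI tool
# (Perplexity, Claude, ChatGPT). All names are illustrative.

def research(topic: str) -> list[str]:
    """Stage 1: gather facts as structured bullets (stubbed)."""
    return [f"Fact about {topic} #{i}" for i in range(1, 4)]

def synthesize(findings: list[str]) -> str:
    """Stage 2: condense findings into a brief the next stage can consume."""
    return "Key tensions:\n" + "\n".join(f"- {f}" for f in findings)

def draft(brief: str, audience: str) -> str:
    """Stage 3: reframe the brief for a specific audience."""
    return f"[Draft for {audience}]\n{brief}"

def run_workflow(topic: str, audience: str) -> str:
    # Each stage's output is the structured input to the next:
    # the relay-team pattern rather than one long conversation.
    return draft(synthesize(research(topic)), audience)
```

The point of the sketch is the shape: output of one stage feeds the next, and each stage emits structured text (bullets, labelled sections) rather than flowing prose.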
Most professionals, when they first start using AI tools, treat each session as a fresh start. They open ChatGPT, ask a question, read the answer, close the tab. That approach captures maybe 20% of the value available. The real productivity gains come when you plan the stages of a project before you open any AI tool, identify which stage benefits most from which capability, and design your prompts so outputs are structured for handoff — not just readable, but usable. A bullet-point research summary is far more useful as input to the next stage than a flowing paragraph, because bullets are easier to reference, reorder, and selectively include. Thinking about the shape of your outputs — not just their content — is a skill that compounds across every project you run.
Design for Handoff, Not Just Readability
The Four Stages Most Complex Projects Share
Complex professional projects — competitive analyses, strategy decks, client proposals, research reports — almost always move through four recognizable stages: research and information gathering, synthesis and pattern recognition, drafting and structuring, and then refinement for a specific audience or format. AI tools can assist at every stage, but the type of assistance differs sharply. In the research stage, you need accurate, current, citable information — which is why Perplexity, with its live web search and source citations, outperforms a knowledge-cutoff model like GPT-4 Turbo for this job. In the synthesis stage, you need a model that can hold a large amount of text in context and identify non-obvious patterns — which is where Claude 3 Opus or GPT-4o, with their large context windows, earn their place. Knowing which tool fits which stage is not a technical skill. It's a project management skill.
The drafting stage is where most people default to AI assistance — they open ChatGPT and ask it to 'write a report.' That's the least efficient entry point. A draft built on unstructured thinking produces generic content that requires heavy editing. A draft built on a clean synthesis — a structured list of key findings, tensions, and recommended angles — produces targeted content that needs light editing. The difference in revision time between these two approaches is significant: professionals who front-load the research and synthesis stages consistently report spending 40–50% less time editing their AI-generated drafts. The final stage, audience refinement, is where tools like ChatGPT shine because of their flexibility in adjusting register, tone, and complexity level on command. That flexibility is worth almost nothing if the underlying content isn't solid.
Prompt
I'm preparing a competitive analysis of project management software for a mid-sized consulting firm (200 employees) currently using spreadsheets. Focus on Asana, Monday.com, and Notion. For each tool, give me: current pricing for teams of 50–200 users, the top 3 use cases it's specifically designed for, one publicly known weakness or limitation, and any major product updates in the last 6 months. Format as a structured table, then add a short paragraph of 3–4 sentences summarizing the key differentiator between these tools.
AI Response
| Tool | Pricing (50–200 users) | Top 3 Use Cases | Known Weakness | Recent Updates |
|---|---|---|---|---|
| Asana | $10.99–$24.99/user/month | Task tracking, cross-team projects, workflow automation | Steep learning curve for non-technical users | AI task prioritization launched Q1 2024 |
| Monday.com | $9–$19/user/month | CRM, project pipelines, resource management | Expensive at scale; costs rise sharply above 100 users | Monday AI added to all tiers, March 2024 |
| Notion | $8–$15/user/month | Documentation, wikis, lightweight project tracking | Weak native reporting and analytics | Notion AI included in paid plans from $16/user |

The sharpest differentiator is purpose versus flexibility. Asana is purpose-built for structured project execution and scales well for teams with defined processes. Monday.com is the strongest choice when project data needs to connect to sales or operational pipelines. Notion trades depth for breadth — it's the right choice when a team's primary need is a shared knowledge base with light project tracking layered on top, not the reverse.
Choosing the Right Tool for Each Stage
The AI tool landscape in 2024 is mature enough that meaningful differences exist between products — differences that matter when you're building a workflow rather than running a single query. Perplexity AI pulls live web data and cites sources, making it the most reliable starting point for any project that requires current market information, pricing, or recent events. Claude 3 Opus accepts up to 200,000 tokens of context — roughly 150,000 words — which means you can feed it an entire research report, a set of interview transcripts, and a competitor's annual report simultaneously, and ask it to synthesize across all of them. ChatGPT with GPT-4o is the most versatile drafting and communication tool, with strong performance on tone adaptation, structured formatting, and iterative revision. Gemini 1.5 Pro integrates natively with Google Workspace, making it the logical choice if your deliverable lives in Google Docs or Slides. Knowing these distinctions lets you route work intelligently.
| AI Tool | Best Workflow Stage | Key Strength | Limitation to Know | Typical Cost |
|---|---|---|---|---|
| Perplexity AI | Research & fact-gathering | Live web search with citations | Less capable at long-form synthesis | Free tier; Pro $20/month |
| Claude 3 Opus | Synthesis & analysis | 200K token context window | No live web access by default | $15 per million input tokens |
| ChatGPT (GPT-4o) | Drafting & audience adaptation | Versatile, strong tone control | Context window smaller than Claude | $20/month (Plus); API pricing varies |
| Gemini 1.5 Pro | Drafting inside Google Workspace | Native Docs/Slides integration | Weaker reasoning than GPT-4o on complex tasks | Included in Google Workspace Business |
| GitHub Copilot | Technical/code-heavy projects | In-editor code generation | Narrow use case — not for prose | $10–$19/month |
| Notion AI | Documentation & refinement | Edits within existing Notion pages | Limited to Notion ecosystem | Add-on at $8–$10/user/month |
Putting the Workflow Together in Practice
Here's what a real Tuesday-to-Thursday workflow looks like for the competitive analysis scenario. On Tuesday morning, you spend 20 minutes in Perplexity running three to four targeted research prompts — one per competitor, one for market trends. You're not reading for pleasure; you're building structured inputs. Each Perplexity output is formatted as a table or numbered list before you move on. By Tuesday afternoon, you open Claude and paste all four research outputs into a single prompt, asking it to identify the top tensions, unresolved questions, and strategic implications for a consulting firm your client's size. Claude's large context window means it holds all four documents simultaneously and synthesizes across them — something you'd spend hours doing manually. The synthesis prompt takes you 10 minutes to write and 3 minutes to run. You now have a structured analytical brief that would have taken a junior analyst a full day to produce.
Wednesday is drafting day. You take Claude's synthesis and open ChatGPT. Your first prompt establishes context: who the client is, what they care about, what decision this analysis is supposed to support. Your second prompt asks for the executive summary. Your third asks for the recommendations section. You're not asking ChatGPT to invent analysis — you're asking it to shape and communicate the analysis Claude already produced. This division of labor is intentional. Claude reasoned; ChatGPT articulates. The draft you get back from ChatGPT on Wednesday afternoon is 70–80% of the way to final, which leaves Thursday for human judgment: checking the logic, adding context only you have, and making sure the recommendations actually match what you know about your client's political situation inside their organization. AI handles the volume. You handle the insight.
Thursday's refinement step is where many professionals underuse AI. They've done the heavy lifting and default back to solo editing. But this is exactly where a targeted audience-adaptation prompt in ChatGPT — 'rewrite this recommendation section for a CFO who is skeptical of technology spending and prefers quantified ROI arguments' — produces a meaningfully better document in under two minutes. Or where pasting your Notion draft into Notion AI and asking it to 'tighten each paragraph to under 80 words without losing the core argument' saves 45 minutes of manual editing. The workflow isn't over when the draft exists. Refinement is a stage, and AI belongs in it.
Goal: Produce a structured first draft of a real work deliverable using at least three different AI tools in sequence, with each tool assigned to the stage it's best suited for.
1. Identify a real project you're currently working on or will start this week — it should have at least three distinct deliverables or stages (e.g., research, analysis, written output).
2. Write down the four stages for your project: what counts as 'research,' 'synthesis,' 'drafting,' and 'refinement' in this specific context.
3. For each stage, name the AI tool you'll use based on the comparison table above, and write one sentence explaining why that tool fits that stage.
4. Open Perplexity and run a research prompt for your project's first topic area. Ask for structured output — a table or numbered list — not flowing prose.
5. Copy the Perplexity output and open Claude (claude.ai). Write a synthesis prompt that includes the research output and asks Claude to identify the top 3 tensions or strategic questions your project needs to address.
6. Review Claude's synthesis and highlight the two or three points that feel most important for your audience. Add any context Claude couldn't know — internal politics, client history, budget constraints.
7. Open ChatGPT and write a context-setting prompt (3–4 sentences describing your audience, their priorities, and the decision this document needs to support). Then paste Claude's synthesis and ask for a structured first draft of your key section.
8. Read the ChatGPT draft against your highlighted points from step 6. Note which gaps require your human judgment and edit those sections directly.
9. Run one audience-adaptation prompt in ChatGPT — ask it to rewrite one section specifically for the most skeptical person in the room when this document gets presented.
How This Looks Different by Role
- Managers: Use the four-stage workflow to run team retrospectives faster — Perplexity for benchmarking, Claude for synthesizing team feedback documents, ChatGPT for drafting the action plan in a format your team will actually read.
- Analysts: Front-load with Perplexity for data sourcing and citations, move to Claude for cross-dataset pattern recognition on pasted outputs, and use ChatGPT to translate technical findings into executive language.
- Consultants: The synthesis stage is your highest-value add — use Claude's large context window to process client documents, interview notes, and market data simultaneously before any drafting begins.
- Marketers: Run competitor research in Perplexity, synthesize positioning gaps in Claude, draft campaign concepts in ChatGPT, and use Notion AI or Gemini to refine copy directly inside your content management workflow.
- Project managers: Use the workflow to produce status reports and stakeholder updates — research (pull project data), synthesis (identify risks and blockers), draft (create the update), refine (adapt tone for exec vs. team audience).
Prompt
Below is structured research on three project management tools (Asana, Monday.com, Notion) compiled for a consulting firm with 200 employees currently using spreadsheets. Your job is not to summarize this research — I already have the summary. Instead, identify: (1) the top 3 strategic tensions a firm this size would face when choosing between these tools, (2) the two assumptions embedded in the research that I should pressure-test before making a recommendation, and (3) one question this research does NOT answer that would significantly change the recommendation. Format each section with a bold heading and no more than 4 bullet points per section. [Paste Perplexity research table and summary paragraph here]
AI Response
**Strategic Tensions**

- Flexibility vs. adoption speed: Notion's breadth appeals to leadership but its lack of structure creates inconsistent usage across teams — high adoption risk for a firm without a dedicated operations role to manage it.
- Current cost vs. total cost: Monday.com appears mid-range at entry pricing but becomes the most expensive option above 100 users once premium features (required for client-facing reporting) are included.
- Tool consolidation vs. best-in-class: Choosing Asana for project execution while keeping a separate documentation tool (Confluence, Notion) may serve the firm better than forcing one tool to do both jobs — but increases integration complexity.

**Assumptions to Pressure-Test**

- The research assumes the firm's primary need is task tracking. If the real bottleneck is knowledge management and institutional memory, Notion moves from third choice to first.
- Pricing comparisons assume standard tier features meet the firm's needs. Consulting firms typically require SSO, audit logs, and advanced permissions — which push all three tools to higher tiers.

**The Unanswered Question**

- What does the firm's existing tech stack look like? If they're Microsoft-heavy (Teams, SharePoint), none of these three tools integrates as cleanly as Microsoft Planner or Loop — which aren't in this comparison at all. That gap could make the entire analysis moot.
What Changes on Monday
- Before opening any AI tool, write down the four stages of your project and which tool you'll use for each — this 5-minute planning step prevents the most common workflow mistake: using one tool for everything.
- Run your research prompts in Perplexity, not ChatGPT — live web access and citations make research outputs more reliable and easier to defend to stakeholders.
- Always request structured output (tables, numbered lists) at the end of any research or synthesis prompt, so your outputs are ready to paste into the next stage without reformatting.
- Use Claude for any synthesis task that requires holding multiple documents in context simultaneously — paste everything in at once rather than asking follow-up questions piecemeal.
- Reserve your own judgment for the gaps AI cannot fill: internal politics, client relationships, strategic context that isn't written down anywhere. That's where your expertise creates value that no workflow can replicate.
Handing Off Between Tools Without Losing Momentum
Picture this: it's Tuesday afternoon and you've just finished a client discovery call. You have 40 minutes of rough notes, a whiteboard photo, and a vague brief from your manager asking for a proposal by Friday. In Part 1, you mapped the project phases and identified where AI fits. Now comes the harder skill — actually moving work from one AI tool to another without the output degrading at each handoff. Most professionals hit a wall here. They get a solid first draft from ChatGPT, paste it into Notion AI for structuring, and somehow end up with something blander than what they started with. The problem isn't the tools. It's that each handoff strips context, and stripped context produces generic output.
The fix is what practitioners call a context packet — a short, dense block of text you carry forward into every new tool or conversation. It contains your project goal in one sentence, the audience, any constraints (budget, tone, deadline), and the decisions already made. Think of it as a briefing note you'd give a new team member joining mid-project. When you open a fresh Claude session to work on your proposal structure after using ChatGPT for research synthesis, you paste the context packet first. Suddenly the new session isn't starting cold — it already knows you're writing a 6-page consulting proposal for a logistics firm skeptical of AI investment, not a generic business document.
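A context packet is just disciplined text, so it can be templated once and reused. A minimal helper might look like this — the field names and ordering are illustrative assumptions, not a standard format:

```python
# Build a reusable 'context packet' string to paste at the top of every
# new AI session. Field names are illustrative, not a standard format.

def context_packet(goal: str, audience: str, constraints: list[str],
                   decisions: list[str]) -> str:
    lines = [
        f"GOAL: {goal}",
        f"AUDIENCE: {audience}",
        "CONSTRAINTS: " + "; ".join(constraints),
        "DECISIONS ALREADY MADE: " + "; ".join(decisions),
    ]
    return "\n".join(lines)

packet = context_packet(
    goal="6-page consulting proposal on AI route optimisation",
    audience="logistics firm leadership, skeptical of AI investment",
    constraints=["deadline Friday", "cost-focused tone"],
    decisions=["phased pilot recommended", "three vendors shortlisted"],
)
# Paste `packet` first in every fresh Claude or ChatGPT session.
```

Whether you keep it in a script, a Notion page, or a sticky note matters less than keeping it to one dense block you can paste in under five seconds.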
Build Your Context Packet Once, Use It Everywhere
Research and Synthesis: Where the Workflow Gets Interesting
Research is where multi-tool workflows earn their value. A single AI assistant gives you one perspective filtered through one training dataset. Perplexity AI searches live sources and returns cited answers — ideal for market sizing, recent statistics, or competitor moves. Claude handles long documents exceptionally well, processing PDFs up to 200,000 tokens, which means you can upload an entire industry report and ask specific questions about it. ChatGPT with browsing enabled sits somewhere between the two — good for general synthesis but less rigorous about citations than Perplexity. The professional move is to use these tools in sequence rather than picking one and hoping for the best.
Here's how that sequence works in practice. You start in Perplexity to get grounded in verified, current facts — market numbers, recent news, named competitors. You screenshot or copy the citations so you can verify them later. Then you move to Claude or ChatGPT with your context packet plus the Perplexity findings, and you ask for synthesis: patterns, implications, gaps, and angles your client hasn't considered. The AI isn't searching at this stage — it's reasoning over what you've fed it. This division of labour respects each tool's actual strength. Perplexity finds; Claude or ChatGPT thinks. Mixing those functions in one tool gives you mediocre results from both.
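The research-to-synthesis handoff can also be templated: packet first, verified findings second, then an instruction that explicitly forbids new fact-finding. A sketch, with all wording being an assumption you should adapt:

```python
# Combine the context packet with verified Perplexity findings into a
# single synthesis request. The instruction wording is illustrative.

def synthesis_prompt(packet: str, findings: list[str]) -> str:
    bullet_block = "\n".join(f"- {f}" for f in findings)
    return (
        f"{packet}\n\n"
        "Research findings (verified; citations kept separately):\n"
        f"{bullet_block}\n\n"
        "Identify patterns, implications, gaps, and angles the client "
        "hasn't considered. Reason only over the findings above; do not "
        "introduce new facts."
    )
```

The closing instruction is the important part: it keeps the synthesis tool in its reasoning role instead of letting it drift back into (uncited) fact generation.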
Prompt
Context: I'm building a proposal for a mid-size UK logistics firm (250 employees, £40M revenue) on whether to adopt AI-driven route optimisation. They're cost-focused and skeptical of tech vendors.

I've gathered these findings from Perplexity:
- Route optimisation AI typically reduces fuel costs 10–15% (McKinsey, 2023)
- Three main vendors: Optym, Wise Systems, Descartes — all targeting fleets over 100 vehicles
- Competitor DHL reported 12% reduction in last-mile costs after 18-month deployment
- Main implementation risk: driver adoption and legacy TMS integration

Using these findings, give me:
1. Three insight angles the client probably hasn't considered
2. The strongest objection they'll raise and a data-backed response
3. A recommended proposal narrative arc in 5 bullet points
AI Response
**Three insight angles the client probably hasn't considered:**

1. **The 18-month lag is the real risk, not the technology.** DHL's 12% saving took 18 months. For a £40M business, the question isn't 'does this work?' — it's 'can we sustain the transition cost while waiting for returns?' Frame this upfront so you control the conversation rather than the vendor.
2. **Driver adoption determines ROI more than algorithm quality.** All three vendors have comparable route logic. The differentiator is change management tooling — does the system give drivers override capability, and does it learn from their local knowledge? A skeptical workforce can nullify a 12% theoretical saving entirely.
3. **TMS integration cost is routinely underquoted.** Legacy transport management systems often require custom middleware. Ask each vendor for a reference customer with a comparable TMS age (8+ years) and actual integration timeline.

**Strongest objection and response:**

*Objection:* 'We tried a tech project like this before and it went over budget and under-delivered.'

*Response:* 'That's exactly why we're recommending a phased pilot on your 3 highest-cost routes before any full commitment. You'll have real fuel data from your own fleet within 90 days — not a vendor's case study.'

**Proposal narrative arc:**

- Open with their current fuel cost as a number they recognise (use their data if available, or industry benchmark)
- Establish that the technology is proven — the question is implementation approach
- Introduce the three vendor options with a clear comparison on the dimensions they care about: cost, integration risk, timeline
- Present a phased recommendation that limits upfront commitment
- Close with a decision framework, not a hard recommendation — let them feel ownership
Choosing the Right Tool for Each Project Phase
Not every AI tool is equally good at every stage of a complex project. The table below reflects real capability differences based on how these tools perform across professional workflows — not marketing claims. The goal isn't to use all of them. It's to know which one to reach for at each phase so you're not fighting the tool's limitations when you should be making progress.
| Project Phase | Best Tool | Why It Wins Here | Avoid Using |
|---|---|---|---|
| Initial scoping & brainstorming | ChatGPT (GPT-4o) | Fast, wide-ranging ideation; good at generating options and frameworks from minimal input | Perplexity — too literal for open-ended exploration |
| Live research & fact-finding | Perplexity AI | Real-time web search with citations; significantly more accurate on recent data than training-based models | Claude — training cutoff limits current data |
| Document analysis & long-read synthesis | Claude (claude-3-5-sonnet) | 200K token context window handles full reports, contracts, or research papers in one pass | ChatGPT free tier — 8K context too small for long docs |
| Structured writing & drafting | ChatGPT or Claude | Both strong; Claude produces more nuanced prose, ChatGPT faster at structured formats like decks and reports | Gemini — less consistent on professional tone |
| Data analysis & pattern spotting | ChatGPT with Code Interpreter | Runs actual Python analysis on uploaded CSVs; produces charts and statistical summaries | Perplexity — not designed for data manipulation |
| Content formatting & workspace integration | Notion AI / Microsoft Copilot | Works inside your existing documents; reformats, summarises, and generates within your workflow | Standalone LLMs — extra copy-paste friction |
| Code generation & technical tasks | GitHub Copilot / ChatGPT | Copilot integrates directly into IDE with project context; ChatGPT good for standalone scripts and explanations | Claude — slightly weaker on code completion in editor |
Structuring the Middle of a Project: Drafting Without Drift
The middle of any complex project is where AI workflows most commonly fall apart. You've done the research, you have a clear goal, and then you start drafting — and three prompts later, the document has drifted from your original intent. The AI isn't being difficult. It's being responsive. Every new prompt slightly reorients its output toward whatever you just asked, slowly pulling the document away from the strategic position you established in the first session. This is called prompt drift, and it's the primary reason professionals end up with AI output that feels like it was written by a committee.
The antidote is a structural anchor prompt — a single prompt you write before any drafting begins that defines the complete document architecture. You're not asking the AI to write yet. You're asking it to agree on a blueprint. This might be a section-by-section outline with word counts, the key argument each section must make, and the tone calibration for the audience. Once the AI confirms the structure, you draft section by section, always referencing back to the agreed blueprint. If the output drifts, you don't edit the content — you re-anchor: 'Section 3 should argue X, not Y. Revise with that intent.' This keeps the reasoning layer separate from the writing layer.
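Because the blueprint has to outlive any single chat session, it helps to keep it as data outside the conversation and generate re-anchor instructions from it. A sketch — the section names, arguments, and word counts are all illustrative:

```python
# Keep the agreed blueprint outside any one chat session, and generate
# re-anchor instructions from it. All section specs are illustrative.

BLUEPRINT = {
    "Executive summary": {
        "argument": "a phased pilot limits upfront risk", "words": 200},
    "Vendor comparison": {
        "argument": "integration cost is the real differentiator", "words": 400},
    "Recommendation": {
        "argument": "start with the 3 highest-cost routes", "words": 250},
}

def reanchor_prompt(section: str) -> str:
    """When output drifts, restate the section's intent, not the edits."""
    spec = BLUEPRINT[section]
    return (f"Section '{section}' should argue: {spec['argument']}. "
            f"Target length ~{spec['words']} words. Revise with that intent.")
```

The design choice here mirrors the article's advice: you re-anchor on intent ("should argue X") rather than hand-editing drifted content, which keeps the reasoning layer separate from the writing layer.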
For longer documents — anything over 1,500 words — consider breaking the draft into separate AI sessions, one per major section. This sounds counterintuitive, but it prevents context window dilution. When a conversation gets long, earlier instructions carry less weight than recent ones. A fresh session with your context packet and the specific section brief produces tighter, more focused output than asking a fatigued 40-message thread to write the conclusion. Claude's longer context window helps here, but even Claude benefits from fresh sessions on genuinely distinct document sections.
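The one-session-per-section tactic pairs naturally with the context packet: generate one self-contained brief per major section, each starting from the same packet. A sketch, with the blueprint mapping being an assumption for the example:

```python
# One fresh-session brief per major section. Each brief is self-contained:
# context packet first, then a single-section instruction. Illustrative only.

def section_briefs(packet: str, blueprint: dict[str, str]) -> list[str]:
    return [
        (f"{packet}\n\nDraft ONLY the '{name}' section. "
         f"It must argue: {argument}. Do not draft any other section.")
        for name, argument in blueprint.items()
    ]

briefs = section_briefs(
    "GOAL: 6-page proposal. AUDIENCE: skeptical operations director.",
    {"Executive summary": "a phased pilot limits upfront risk",
     "Recommendation": "start with the 3 highest-cost routes"},
)
```

Each brief then goes into its own fresh session, which is exactly what prevents the late-conversation dilution the paragraph above describes.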
Watch for Confident Hallucination in the Middle Phases
How Different Roles Use This Differently
- Consultants: Use the research-synthesis handoff (Perplexity → Claude) most heavily. The structural anchor prompt is critical for proposals where every section must reinforce a single strategic recommendation. Hallucination risk is highest in market sizing sections.
- Marketing managers: Tend to run parallel creative tracks — brief ChatGPT on three different messaging angles simultaneously, then bring the best elements into a single Notion AI document for refinement. Prompt drift is less dangerous here because creative variation is welcome.
- Analysts: Live in ChatGPT's Code Interpreter for data work, then switch to a writing-focused model for the narrative layer. The context packet is especially important because the analytical findings and the written interpretation must stay aligned across different tools.
- Project managers: Use Notion AI or Microsoft Copilot for status reports and stakeholder updates — the in-workflow integration saves significant time. Structural anchor prompts matter less here; the document types are shorter and more templated.
- HR and L&D professionals: Often use Claude for document analysis (policy reviews, survey synthesis) and ChatGPT for drafting employee-facing content. The tone calibration step is critical — AI defaults to a register that's either too formal or too casual for most HR communications.
Iteration Without Starting Over: The Refinement Loop
Once you have a draft, the instinct is to keep prompting the same AI session for revisions. Sometimes that works. More often, you end up in a loop where each revision fixes one thing and subtly breaks another — the AI is trying to satisfy your latest instruction without fully understanding why the previous version failed. The more productive approach is a structured critique prompt: ask the AI to evaluate the existing draft against specific criteria before asking it to revise. This separates diagnosis from treatment, and the diagnosis is usually more valuable than you expect.
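Because the critique criteria are stable across projects, they can live in a reusable rubric rather than being retyped each time. A sketch — the criteria mirror the four used in this section, and the instruction wording is an assumption:

```python
# Structured critique as a reusable rubric: diagnosis before treatment.
# Criteria match the four used in this section; wording is illustrative.

CRITERIA = [
    "Clarity of recommendation",
    "Audience calibration",
    "Evidence density",
    "Objection anticipation",
]

def critique_prompt(draft: str) -> str:
    numbered = "\n".join(
        f"{i}. {c} (score 1-5, cite one sentence from the draft)"
        for i, c in enumerate(CRITERIA, 1))
    return ("Evaluate the draft below against these criteria. "
            "Do not revise anything yet.\n"
            f"{numbered}\n"
            "Then list the three highest-priority revisions by impact.\n\n"
            f"DRAFT:\n{draft}")
```

The "do not revise anything yet" line is what separates diagnosis from treatment; without it, most models skip straight to rewriting.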
Prompt
Here is a draft executive summary for a proposal recommending AI route optimisation to a logistics client. Before revising anything, evaluate it against these four criteria:

1. **Clarity of recommendation** — Is the core recommendation unambiguous within the first 100 words?
2. **Audience calibration** — Does the tone and vocabulary suit a cost-focused operations director, not a tech audience?
3. **Evidence density** — Are claims supported with specific data, or are they assertions?
4. **Objection anticipation** — Does the summary acknowledge the client's likely skepticism about implementation risk?

For each criterion, give a score (1–5) and one specific sentence from the draft that most illustrates the problem or strength. Then list the three highest-priority revisions, in order of impact.

[DRAFT EXECUTIVE SUMMARY PASTED HERE]
AI Response
**Evaluation against your four criteria:**

**1. Clarity of recommendation — 3/5**
The recommendation doesn't appear until the fourth sentence: *'Based on the above, we believe a phased pilot may be worth exploring.'* 'May be worth exploring' is not a recommendation — it's a hedge. Rewrite the opening sentence to state the recommendation directly: what you're proposing, at what scale, and with what expected outcome.

**2. Audience calibration — 4/5**
Strong overall. The sentence *'ROI is modelled on your current fuel spend, not industry averages'* is exactly right for a cost-focused reader. One exception: *'algorithmic optimisation layer'* in paragraph two will cause an operations director to disengage. Replace with 'routing software.'

**3. Evidence density — 2/5**
Three of the five claims in this summary are unsubstantiated assertions: 'significant cost savings,' 'proven technology,' and 'manageable implementation risk.' Each needs a specific number or a named reference. Even a single data point per claim transforms the perceived credibility of the whole document.

**4. Objection anticipation — 2/5**
Implementation risk is mentioned once, in the final sentence, as a footnote. This client's primary concern is *'we've been burned before.'* That objection needs to be acknowledged in the second paragraph, not buried at the end.

**Three highest-priority revisions:**

1. Rewrite the opening sentence as a direct, specific recommendation with a number attached (cost saving, timeline, or investment figure)
2. Add one sentence in paragraph two that names the implementation risk and immediately explains how the phased approach limits it
3. Replace the three unsubstantiated claims with specific data points — use the Perplexity findings from your research phase
Building the Handoff Document
As you move through the middle phases of a project, you're generating outputs in multiple tools — a research synthesis in Claude, a draft structure from ChatGPT, a data analysis from Code Interpreter, maybe a competitor table from Perplexity. These outputs exist in separate windows, separate conversations, separate files. Without a deliberate consolidation step, you arrive at the final phase with scattered material and no clear throughline. The handoff document solves this. It's a single living document — a Google Doc, Notion page, or Word file — where you paste the key outputs from each AI session, clearly labelled by phase and tool.
The handoff document isn't just a filing system. It becomes the source of truth for the final AI session where you pull everything together. When you're ready to produce the final deliverable, you open a fresh Claude or ChatGPT session, paste your context packet, and then paste the relevant sections of the handoff document with a synthesis instruction. You're giving the AI a complete, curated brief rather than asking it to remember a 3-day conversation across multiple tools. The quality jump between this approach and ad-hoc prompting is immediately visible — the output is coherent, consistent, and actually reflects the work you've done.
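If you keep the handoff document as structured entries rather than loose pasted text, assembling the final brief becomes mechanical. A sketch — the entry fields and heading format are illustrative assumptions:

```python
# Assemble a handoff document from labelled stage outputs, then turn it
# into the final synthesis brief. Entry fields are illustrative.

from datetime import date

def add_entry(doc: list[dict], phase: str, tool: str, output: str) -> None:
    """Append one stage's output, labelled by phase, tool, and date."""
    doc.append({"phase": phase, "tool": tool,
                "date": date.today().isoformat(), "output": output})

def final_brief(packet: str, doc: list[dict]) -> str:
    """Curated brief for the final session: packet, then every labelled output."""
    sections = "\n\n".join(
        f"## {e['phase']} ({e['tool']}, {e['date']})\n{e['output']}"
        for e in doc)
    return (f"{packet}\n\n{sections}\n\n"
            "Synthesize the above into the final deliverable.")
```

This is the "complete, curated brief" move in miniature: the final session receives everything it needs in one paste, labelled by phase and tool, instead of being asked to remember a three-day, multi-tool conversation.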
Goal: Produce one polished, AI-assisted section of a real work deliverable using the research → synthesis → draft → critique loop — with a handoff document that captures every stage for use in the final assembly phase.
1. Identify a project you're currently working on that has at least two distinct phases remaining — research or analysis, plus a writing or presentation deliverable.
2. Write your context packet: project goal (one sentence), audience, tone, key constraints, and decisions already made. Keep it under 120 words and save it somewhere accessible.
3. Open Perplexity AI and enter 2–3 specific research questions your project needs answered. Copy the results, including the source citations, into a new section of a handoff document.
4. Open a fresh ChatGPT or Claude session. Paste your context packet first, then paste the Perplexity findings. Ask the tool to identify three insights or angles your audience is unlikely to have already considered.
5. Write a structural anchor prompt for your deliverable — define each section, the argument it must make, and the approximate length. Ask the AI to confirm or improve the structure before any drafting begins.
6. Draft one section using the agreed structure. If the output drifts from the section's stated purpose, use a re-anchor instruction ('This section should argue X — revise with that intent') rather than editing the content manually.
7. Paste the completed section into your handoff document, labelled with the tool used and the date.
8. Run the structured critique prompt on your drafted section using the four criteria from the prompt example above. Note the two highest-priority revisions.
9. Apply those revisions in a follow-up prompt and paste the final version into the handoff document alongside the original draft, so you can see the delta.
Closing the Loop: From Raw Output to Polished Deliverable
The most common mistake professionals make in multi-step AI workflows is treating each tool interaction as isolated. They ask ChatGPT to draft a market summary, then open a blank slide deck and start again from scratch, losing all the structured thinking the model just produced. The smarter move is explicit handoff prompting — ending each AI session with a prompt that packages the output specifically for the next stage. Ask Claude to summarize its own analysis as five bullet points suitable for a slide title. Ask ChatGPT to reformat its risk list as a table with columns for likelihood, impact, and owner. The model does the translation work; you do the judgment calls.
Explicit handoff prompting also forces you to stay in control of quality at every junction. When you instruct an AI to compress its own 800-word analysis into three executive-level sentences, you immediately see what it considers most important — and you can correct that framing before it propagates through your entire deliverable. This is where your domain expertise earns its keep. The AI handles volume and structure; you handle relevance and political awareness. A consultant who knows that the CFO cares about payback period, not IRR, will catch a misframed summary instantly. A manager who knows their team's actual capacity will flag an AI-generated timeline that looks clean on paper but is completely unrealistic.
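Because a handoff prompt always has the same shape — previous output, target stage, required format, low-confidence flag — it can be templated. This is a minimal sketch under that assumption; the `handoff_prompt` helper and its exact wording are hypothetical, and in practice you would paste the returned text into whichever tool runs the next stage.

```python
def handoff_prompt(prev_output: str, next_stage: str, fmt: str) -> str:
    """Package one AI session's output for the next stage.

    prev_output: the text the model just produced
    next_stage:  where it's going, e.g. "slide deck", "risk register"
    fmt:         the structure the next stage needs, e.g. "five bullets"
    """
    return (
        f"Here is the output you just produced:\n\n{prev_output}\n\n"
        f"Reformat it for the next stage ({next_stage}) as: {fmt}. "
        "Flag any point where you have low confidence."
    )
```

For example, `handoff_prompt(analysis, "slide deck", "five bullets of 12 words or fewer")` produces the kind of compression instruction discussed above, with the low-confidence flag baked in so errors stay visible before they compound.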
Build a Prompt Relay System
Choosing the Right Tool at Each Stage
Not all AI assistants perform equally across every stage of a complex project. Perplexity AI with its live web search is the right choice when you need current market data or recent competitor moves — it cites sources, which matters when you're handing a report to a skeptical CFO. Claude handles long-document analysis exceptionally well; its 200,000-token context window means you can feed it an entire RFP, an annual report, and your internal strategy brief simultaneously and ask it to identify conflicts. ChatGPT with Code Interpreter transforms raw spreadsheet data into visualizations without requiring you to write a single line of Python. Knowing which tool to reach for — and when — is the core skill of an AI-fluent professional.
| Project Stage | Best Tool | Why It Wins Here | Watch Out For |
|---|---|---|---|
| Initial research & sourcing | Perplexity AI | Live web search with citations | Verify stats — it can still hallucinate figures |
| Long-document synthesis | Claude (claude.ai) | 200K token context, nuanced summarization | Less strong on real-time data |
| Drafting & rewriting | ChatGPT (GPT-4o) | Versatile tone control, fast iteration | Generic output without strong system prompts |
| Data analysis & charts | ChatGPT + Code Interpreter | Runs Python on your files, produces visuals | Complex models need human validation |
| Slide narrative structure | Gemini in Google Slides | Drafts speaker notes and layout suggestions natively | Design output still needs manual polish |
| Code generation & review | GitHub Copilot | Context-aware, trained on code repositories | Always review before committing to production |
Prompt
Here is the competitive research summary you just produced [paste text]. Now reformat this as a structured project brief with these four sections: Situation (2 sentences), Key Findings (5 bullets, max 15 words each), Strategic Implications (3 bullets), and Recommended Next Steps (numbered list of 3 actions). Use direct, professional language suitable for a VP-level audience. Flag any finding where you have low confidence.
AI Response
SITUATION
The mid-market CRM segment is consolidating rapidly, with three acquisitions in Q1 2024 alone. Our primary competitor launched AI-native features in February, now cited by 34% of prospects as a purchase driver.

KEY FINDINGS
• Competitor X's AI tier priced 22% below our equivalent offering
• Customer churn in our 50-200 seat segment up 8% YoY
• Integration partnerships now influence 61% of final purchase decisions
• [LOW CONFIDENCE] Competitor Y reportedly planning enterprise push — unconfirmed
• Analyst consensus: market grows 14% CAGR through 2027

STRATEGIC IMPLICATIONS
• Pricing pressure will intensify before it eases — margin defense needed now
• Integration ecosystem is a defensible moat if we move in the next two quarters
• AI feature gap is closing but requires a credible roadmap to retain mid-market

RECOMMENDED NEXT STEPS
1. Commission pricing sensitivity study with current 50-200 seat customers
2. Map top 10 integration partners by prospect mention frequency
3. Schedule AI roadmap review with product team before next board cycle
What Changes on Monday
The professional who has absorbed this workflow doesn't open ChatGPT and start typing. They open their project brief, identify the five to seven distinct work stages the deliverable requires, and assign a tool and a prompt template to each stage before writing a single word. This front-loaded planning — which takes about fifteen minutes — eliminates the most expensive AI mistake: discovering halfway through that your outputs don't connect. It's the difference between a workflow and a collection of experiments.
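That fifteen-minute plan can be captured as data and sanity-checked before any tool is opened. The stage names and prompt text below are illustrative placeholders (tool names follow this lesson's comparison table), and `validate_plan` is a hypothetical helper that simply catches the expensive mistake just described: a stage with no assigned tool or prompt, whose outputs won't connect downstream.

```python
# Illustrative front-loaded stage plan; prompts are placeholders
# you would write in full before starting.
stage_plan = [
    {"stage": "Initial research", "tool": "Perplexity AI",
     "prompt": "Answer these 3 research questions with cited sources: ..."},
    {"stage": "Synthesis", "tool": "Claude",
     "prompt": "Identify 3 insights a CFO audience is unlikely to have considered: ..."},
    {"stage": "Drafting", "tool": "ChatGPT",
     "prompt": "Draft section 1 against the agreed structure: ..."},
]

def validate_plan(plan: list[dict]) -> list[str]:
    """Flag any stage missing a name, tool, or prompt -- the gap that
    causes outputs not to connect halfway through a project."""
    problems = []
    for i, stage in enumerate(plan, start=1):
        for key in ("stage", "tool", "prompt"):
            if not stage.get(key):
                problems.append(f"Stage {i}: missing {key}")
    return problems
```

An empty list from `validate_plan` means every stage has an owner tool and a prompt before the first word is written; anything else is a gap to fix in planning, not mid-project.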
Collaboration changes too. Instead of sharing final documents with colleagues, you share prompt relay documents. Your analyst can pick up your Perplexity research output and run it through Claude's synthesis stage without duplicating effort. Your manager can review the structured brief you've built and redirect the next AI stage before you've spent three hours drafting in the wrong direction. AI workflows, when documented properly, become team infrastructure — not individual productivity tricks. Organizations that figure this out early will have a compounding advantage as model capabilities continue to improve.
Prompt
Based on the structured brief above, write seven talking points for a 10-minute stakeholder presentation to a CFO and COO. Each talking point should be one sentence of 20 words or fewer. Lead with the business implication, not the data. End with one slide-title suggestion for each point. Format as a numbered list.
AI Response
1. Competitor pricing is eroding our mid-market margin faster than our current plan accounts for. → Slide: 'The Margin Pressure Is Already Here'
2. Eight percent churn increase in our core segment signals a retention problem, not an acquisition one. → Slide: 'We're Losing the Customers We Already Won'
3. Integrations now drive 61% of purchase decisions — our ecosystem strategy is a revenue lever. → Slide: 'Partnerships Are the New Sales Channel'
4. The AI feature gap is closable within two quarters with focused investment. → Slide: 'We Can Close the Gap — Here's the Timeline'
5. Market growth of 14% CAGR gives us room to act, but the window is not unlimited. → Slide: 'The Market Is Growing — Are We?'
6. A pricing sensitivity study will cost weeks, not months, and protect millions in ARR. → Slide: 'Low-Cost Research, High-Value Decision'
7. Our recommended path requires three decisions from this group before the next board cycle. → Slide: 'Three Decisions. One Quarter. Clear Ownership.'
How This Applies by Role
- Managers: Use the stage-mapping approach to delegate AI-assisted research to direct reports, then apply your own judgment at the synthesis and stakeholder stages — where context and relationships matter most.
- Analysts: Build your prompt relay library around your three most recurring deliverable types (market analysis, performance reports, scenario models) so setup time drops to near zero on repeat work.
- Consultants: Document your multi-step workflows as client-facing methodology — the structured brief and handoff prompts demonstrate rigor and become part of your value-add, not just internal scaffolding.
- Marketers: Chain Perplexity research → Claude synthesis → ChatGPT copy drafting → Gemini slide narrative for campaign briefs that go from briefing to board-ready in a single session.
- Project managers: Use the risk register and timeline stages specifically — AI is fast at generating comprehensive lists; you add the organizational knowledge about what's actually likely to blow up.
Key Takeaways
- Map your project stages before you open any AI tool — identify five to seven distinct work stages and assign the right tool to each one.
- Use explicit handoff prompting to package each AI output for the next stage; this keeps quality consistent and errors visible before they compound.
- Match tools to tasks: Perplexity for live research, Claude for long-document synthesis, ChatGPT for drafting and data, Gemini for slide narrative, GitHub Copilot for code.
- Your domain expertise is the quality gate at every junction — AI handles volume and structure, you handle relevance, accuracy, and stakeholder awareness.
- Document your prompt relay sequences and share them with your team; a workflow that lives only in one person's browser tabs is a point of fragility, not a capability.
- Lead with business implications, not data, at every stakeholder-facing stage — use AI to translate findings into decision-ready language before any human sees it.
Goal: Produce a fully documented multi-step AI workflow for a real work deliverable, including all prompts, outputs, and a reusable prompt relay record you can share with colleagues or repeat on future projects.
1. Choose a real deliverable you need to produce in the next two weeks — a report, analysis, presentation, or proposal. Write its name and deadline at the top of a blank document.
2. Break the deliverable into five to seven distinct work stages. Write each stage as a one-line description (e.g., 'Stage 1: Gather current market data on X').
3. For each stage, assign one AI tool from this lesson's comparison table and write one sentence explaining why that tool fits that stage.
4. Write your Stage 1 prompt in full — include context, format instructions, and a flag for low-confidence outputs. Run it and paste the output into your document.
5. Write a handoff prompt that packages the Stage 1 output for Stage 2 — compress, reformat, or translate it as needed. Run it and paste the result.
6. Continue through at least three stages, saving each prompt and output in sequence. Note where you had to correct or redirect the AI.
7. At the final stage, produce a stakeholder-facing output (talking points, executive summary, or slide titles) using the CFO/COO prompt structure from this lesson.
8. Add a 'Lessons Learned' section at the bottom: three things you'd do differently in the next workflow.
9. Save the completed document as your first Prompt Relay Playbook entry — it becomes the template for the next time you run a similar project.
You need to analyze a 150-page RFP alongside your company's internal strategy document to identify conflicts. Which tool is best suited for this stage?
What is 'explicit handoff prompting' and why does it matter in a multi-step workflow?
A marketer wants to produce a campaign brief efficiently. Which tool sequence from this lesson best fits that workflow?
Your CFO reviews a stakeholder presentation and says it 'reads like a data dump.' Which principle from this lesson would have prevented that problem?
A consultant documents their multi-step AI prompts and shares them with the client as part of the project methodology. What is the primary professional benefit of this approach?
