AI for complex analysis: working through hard problems
~19 min read

It's Tuesday afternoon. Your VP just forwarded a competitor's press release about a major product pivot, with a one-line message: 'What does this mean for us?' You have three hours before the leadership standup. You could spend 90 minutes pulling together a slide with surface-level observations, or you could use AI to run a structured competitive analysis that surfaces implications your competitors probably haven't mapped yet. The difference between those two outcomes isn't access to better data — it's knowing how to push an AI assistant through a genuinely hard analytical problem rather than asking it for a summary.
Why Complex Analysis Is Different
Most people use ChatGPT or Claude the way they'd use a search engine — one question, one answer, done. That works fine for simple lookups. But complex analysis isn't a single question. It's a chain of reasoning: gather evidence, identify patterns, stress-test assumptions, weigh competing interpretations, and land on a defensible position. Human analysts do this iteratively over hours or days. AI assistants can compress that cycle dramatically, but only if you structure the conversation to mirror that iterative process. A single broad prompt like 'analyze this market' produces a generic response. A structured sequence of focused prompts produces something you can actually use in a board meeting.
The underlying reason is how large language models process requests. Models like GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro generate responses token by token, each token influenced by what came before. When you front-load all your complexity into one prompt, the model has to make dozens of interpretive decisions upfront — what angle matters, what depth is appropriate, what the output format should be — and it often guesses wrong on at least one of them. When you break the problem into a deliberate sequence, you're steering those decisions explicitly at each stage. You get cumulative depth instead of averaged shallowness.
This also changes what 'a good prompt' means for complex work. In basic AI use, a good prompt is clear and specific. For complex analysis, a good prompt also assigns a role, establishes constraints, references earlier reasoning, and asks for explicit uncertainty flagging. You're not just asking a question — you're commissioning an analyst. The professionals who get the most value from AI tools in 2024 treat the AI as a thinking partner with particular strengths (tireless synthesis, wide knowledge, fast structuring) and particular weaknesses (no access to your proprietary data unless you share it, tendency to sound confident even when uncertain, knowledge cutoffs). Working around those weaknesses is a skill.
The 3-Stage Analysis Framework
Framing the Problem Before You Analyze
Framing is the step most professionals skip because it feels like delay. It isn't. When you give Claude or ChatGPT a well-framed problem — specific context, defined scope, stated constraints, and a clear output format — the analysis it produces is an order of magnitude more actionable. Think about the Tuesday afternoon scenario above. 'What does this competitor pivot mean for us?' is unanswerable without context. But 'We sell mid-market HR software. A competitor just announced they're moving upmarket to enterprise. Analyze the strategic implications for our pipeline, our product roadmap, and our positioning — and flag where your reasoning is speculative' is a prompt that produces usable analysis in one pass.
Framing has four components you should cover before any substantive analysis. First, context: who you are, what the business does, what the stakes are. Second, scope: what's in bounds and what isn't. Trying to analyze everything at once is how you get a 500-word essay that covers nothing. Third, constraints: time horizon, data available, decisions this analysis will inform. Fourth, output specification: what format serves you — a structured memo, a risk matrix, a set of ranked options with rationale? Specifying the format isn't just cosmetic. It forces the model to organize its reasoning into a structure that matches how decisions actually get made in your organization.
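For teams that script their AI workflows, the four framing components can be captured as a small reusable template. This is an illustrative sketch, not part of any SDK — the `FramePrompt` name and field layout are assumptions, but the structure mirrors the four components above:

```python
from dataclasses import dataclass

@dataclass
class FramePrompt:
    """The four framing components covered above."""
    context: str        # who you are, what the business does, the stakes
    scope: str          # what's in bounds and what isn't
    constraints: str    # time horizon, available data, decision this feeds
    output_format: str  # memo, risk matrix, ranked options with rationale

    def render(self) -> str:
        # Assemble the components in the order described in the text:
        # context first, then scope, constraints, and output format.
        return "\n\n".join([
            f"Context: {self.context}",
            f"Scope: {self.scope}",
            f"Constraints: {self.constraints}",
            f"Output format: {self.output_format}",
        ])

frame = FramePrompt(
    context="Head of Product at a 200-person B2B SaaS company.",
    scope="Strategic implications of a competitor's AI suite launch.",
    constraints="Next 90 days; no access to competitor win/loss data.",
    output_format="Structured memo with confidence levels per finding.",
)
prompt = frame.render()
```

Filling in the same four fields for each new analysis is what makes the template reusable: you refine the wording over time, but the structure stays constant.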
Prompt
You are a senior strategy consultant. I'm the Head of Product at a 200-person B2B SaaS company selling project management software to mid-market professional services firms (50-500 employees). Annual contract value averages $18,000. A major competitor (Asana) just announced a new AI-powered feature suite focused on automated project health scoring and resource forecasting, priced at a 40% premium to their current Pro tier.

Analyze the strategic implications for us across three dimensions:

1. Immediate pipeline risk (deals in the next 90 days)
2. Product roadmap pressure (what we need to build or accelerate)
3. Positioning response (how we talk about this to prospects and existing customers)

For each dimension, give me: the core implication, the evidence behind it, and your confidence level (high/medium/low). Flag anything where you're reasoning from limited information.
AI Response
**Strategic Implications Analysis: Asana AI Suite Launch**

**1. Immediate Pipeline Risk — Confidence: Medium**

Core implication: Deals with evaluation timelines beyond 45 days face moderate risk of scope creep, where prospects add 'AI features' to their requirements checklist mid-evaluation.

Evidence: Asana's announcement creates a reference point. Buyers who weren't asking about AI will now ask. The 40% premium positioning actually helps you — it signals Asana is moving upmarket, which may accelerate decisions among mid-market buyers who don't want enterprise pricing. Deals closing in under 30 days are largely insulated.

Limited information flag: I don't know your current win/loss patterns against Asana or which features prospects cite when they choose a competitor. That data would sharpen this significantly.

**2. Product Roadmap Pressure — Confidence: High**

Core implication: Resource forecasting is the higher-threat feature; project health scoring is table stakes within 18 months across all major players.

Evidence: Resource forecasting solves a pain point (utilization visibility) that professional services firms consistently rank in their top three operational problems. If you don't have a credible roadmap story for this capability within 6 months, it becomes a checkbox loss in competitive deals. Health scoring is more cosmetic — buyers respond to it in demos but rarely cite it as a decision driver.

**3. Positioning Response — Confidence: High**

Core implication: Reframe the conversation around total cost of adoption, not features.

Evidence: A 40% price premium for AI features gives you a concrete anchor. Your message to prospects: 'We're watching this space carefully, but we're not going to charge you 40% more for features that are still maturing. Here's what we're building and when.' To existing customers: proactive outreach before they read about it elsewhere, same message. Silence on a competitor announcement reads as weakness.
Choosing the Right Tool for the Analysis
Not every AI assistant handles complex analysis equally well. The differences aren't marketing — they're architectural and practical. Claude 3.5 Sonnet has a 200,000-token context window, which means you can paste in an entire 60-page report and ask it to reason across the whole document simultaneously. GPT-4o has stronger code execution capabilities through its built-in Python interpreter, making it better for analyses that involve numerical modeling or data transformation. Gemini 1.5 Pro integrates natively with Google Workspace, which matters if your evidence lives in Drive, Docs, or Sheets. Perplexity AI adds live web search to its reasoning, useful when your analysis requires current market data rather than just the reasoning your team can supply. Knowing which tool to reach for before you start saves significant time.
| Tool | Best For | Context Window | Key Limitation | Approx. Cost |
|---|---|---|---|---|
| ChatGPT (GPT-4o) | Numerical analysis, code, structured data | 128K tokens | No live web by default; knowledge cutoff | $20/mo (Plus) |
| Claude 3.5 Sonnet | Long document analysis, nuanced reasoning, writing | 200K tokens | No native web search without integrations | $20/mo (Pro) |
| Gemini 1.5 Pro | Google Workspace integration, multimodal inputs | 1M tokens | Reasoning depth lags Claude/GPT-4o on hard problems | $20/mo (Advanced) |
| Perplexity AI (Pro) | Market research requiring current data | ~32K tokens | Less effective for document-heavy or proprietary analysis | $20/mo (Pro) |
| Microsoft Copilot | Analysis inside M365 apps (Word, Excel, Teams) | Varies by app | Requires M365 Business license; less flexible prompting | From $30/user/mo |
The practical implication: match the tool to the evidence type, not just the question. If you're analyzing a long contract, a detailed RFP, or a multi-chapter report, Claude's context window gives it a structural advantage — it can hold the entire document in working memory and reason across sections without losing earlier context. If your analysis requires pulling current competitor pricing, recent news, or live market data, start with Perplexity to gather and synthesize current information, then bring those findings into Claude or ChatGPT for deeper structural reasoning. Professionals who use only one tool for all analysis work are leaving significant capability on the table.
One pattern that consistently outperforms single-tool approaches: use Perplexity or Gemini to gather and organize current external information, then transfer that structured summary into Claude or GPT-4o with your internal context added. This hybrid approach sidesteps the knowledge cutoff problem (both GPT-4o and Claude have training data cutoffs that mean they don't know what happened last month) while preserving the superior reasoning depth of those models for the hard interpretive work. It adds five minutes to your setup time and often transforms the quality of the output.
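The second stage of that hybrid handoff can be scripted. The sketch below is illustrative — the function name and prompt wording are assumptions, not a documented API — but it shows the assembly step: tool-gathered external findings plus proprietary context plus the analytical task, combined into one prompt for the deeper-reasoning model:

```python
def build_reasoning_prompt(external_findings: str,
                           internal_context: str,
                           question: str) -> str:
    """Merge externally gathered research with internal context into a
    single prompt for the deeper-reasoning model (e.g. Claude or GPT-4o)."""
    return (
        "You are a senior strategy analyst.\n\n"
        "Current external findings (gathered separately via a live-search "
        "tool; may be incomplete):\n"
        f"{external_findings}\n\n"
        "Internal context (proprietary -- not in your training data):\n"
        f"{internal_context}\n\n"
        f"Task: {question}\n"
        "Flag any conclusion that depends on the external findings being complete."
    )

# Hypothetical inputs for illustration only.
prompt = build_reasoning_prompt(
    external_findings="Competitor announced a premium AI tier this week.",
    internal_context="Our ACV is $18k; 60% of pipeline closes within 45 days.",
    question="Assess pipeline risk over the next 90 days.",
)
```

Note the explicit labeling of which material is live research and which is proprietary — that framing helps the reasoning model weight the two evidence sources appropriately and flag gaps in the external data.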
Putting It Into Practice: The Probe Stage
Once you've framed the problem, the Probe stage is where most of the analytical work happens. This is a sequence of targeted sub-prompts, each building on the output of the last. The key discipline here is restraint — ask for one thing at a time, in the order that makes logical sense. If you're doing a risk analysis, you establish the risk landscape before you assess probability. If you're evaluating strategic options, you enumerate options completely before ranking them. Jumping straight to 'what should we do?' without first mapping the option space is how you miss the best answer.
The Probe stage is also where you should explicitly ask the AI to challenge its own reasoning. Models like Claude and GPT-4o are capable of steelmanning counterarguments and identifying the weakest links in their own analysis — but they do this far more reliably when you ask for it directly than when you hope they'll volunteer it. A prompt as simple as 'Now identify the three biggest weaknesses in the analysis you just gave me' reliably produces substantive self-critique rather than perfunctory caveats. This is the analytical equivalent of a pre-mortem, and it's one of the highest-value techniques in this entire course.
Another high-value probe technique is perspective-switching. Ask the AI to analyze the same situation from the viewpoint of a customer, a competitor, a skeptical board member, or a regulator. Each lens surfaces considerations that a single-perspective analysis misses. A consultant running a market entry analysis might probe: current customers' perspective on switching costs, the target market's perspective on unmet needs, and a well-funded competitor's perspective on how they'd respond to your entry. Three prompts, maybe 15 minutes, and you've covered ground that would take a junior analyst team a full day to map manually.
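The perspective-switching technique above is mechanical enough to template. This is a minimal sketch under the assumption that each lens is sent as its own prompt; the function and lens names are illustrative:

```python
def perspective_prompts(situation: str, perspectives: list[str]) -> list[str]:
    # One focused prompt per lens, sent as separate turns rather than
    # bundled, so each analysis gets the model's full attention.
    return [
        f"Analyze the situation below strictly from the viewpoint of {p}. "
        "List the three considerations that would matter most to them, "
        "and what they would do next.\n\n"
        f"Situation: {situation}"
        for p in perspectives
    ]

lenses = [
    "a current customer weighing switching costs",
    "a well-funded competitor deciding how to respond",
    "a skeptical board member focused on downside risk",
]
prompts = perspective_prompts("A rival just launched a premium AI tier.", lenses)
```

Swapping the lens list per analysis type (regulator, churned customer, first-time buyer) gives you a reusable probe kit rather than a one-off prompt.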
Prompt
Based on the competitive analysis you just ran, I want to stress-test the conclusions.

1. Steelman the opposing view: make the strongest possible case that Asana's AI suite launch is actually a net positive for us in the next 12 months.
2. Identify the three assumptions in your previous analysis that, if wrong, would most significantly change the conclusions.
3. What is the single most important thing we could learn or verify in the next 2 weeks that would reduce the uncertainty most?
AI Response
**Stress-Test: Asana AI Suite Launch**

**1. Steelman — Why This Could Be Net Positive**

Asana is signaling upmarket ambition. Every vendor that moves upmarket creates a vacuum in the segment they're leaving. Mid-market buyers who were considering Asana may now see it as 'too enterprise' — too complex, too expensive, too much implementation overhead. If you position quickly and correctly, you capture that displaced consideration. Additionally, Asana validating AI as a category feature gives you permission to accelerate your own roadmap without having to educate the market on why AI matters. They're doing your awareness work for you.

**2. Three Critical Assumptions**

- *Assumption 1: The 40% premium deters mid-market buyers.* If Asana bundles the AI tier into existing contracts during renewals (common practice), the price signal disappears and competitive pressure increases.
- *Assumption 2: Resource forecasting is the higher-threat feature.* This is based on general professional services pain points. If your specific customers care more about health scoring, the roadmap priority is wrong.
- *Assumption 3: Your existing customers won't proactively research alternatives.* If Asana runs a targeted campaign to your customer base (they have the data to do this), passive customers become active evaluators.

**3. Highest-Value Thing to Verify**

Call five current customers in the next 10 business days and ask one question: 'Have you heard about Asana's new AI features, and has it changed how you're thinking about your project management stack?' Their answers either confirm or collapse the pipeline risk assessment. This is faster and more reliable than any market analysis.
How This Changes by Role
- Managers: Use the Frame → Probe → Synthesize structure for recurring decisions — weekly risk reviews, team capacity planning, vendor assessments. Build reusable prompt templates for your most common analysis types so you're not starting from scratch each time.
- Analysts: Focus on the Probe stage. Use AI to rapidly enumerate hypotheses before you commit to a data pull. A 10-minute Claude session identifying 8 plausible explanations for an anomaly beats 3 hours pulling data to test 2 hypotheses you guessed at.
- Marketers: Perspective-switching is your highest-value technique. Prompt AI to analyze your campaign brief, landing page, or messaging from the viewpoint of a skeptical first-time buyer, a churned customer, and a competitor's marketing team. Each lens improves the final output.
- Consultants: Use the steelman and assumption-testing prompts from the Probe stage as a structured pre-mortem before any client deliverable. Clients pay for the ability to defend a recommendation under pressure — AI can pressure-test it before you're in the room.
- Product Managers: Chain the analysis directly to decision outputs. After competitive analysis, ask the AI to translate findings into a prioritized 'build vs. buy vs. partner' recommendation with explicit rationale tied back to each analytical finding.
Goal: Produce one complete Frame + Probe + Assumption-test analysis sequence on a real work problem, and identify at least one concrete output you can use or share this week.
1. Identify a genuine analytical challenge from your current workweek — a competitive threat, a decision with unclear tradeoffs, a performance anomaly, or a strategic question that's been sitting on your list.
2. Open Claude (claude.ai) or ChatGPT (chat.openai.com) — use whichever you have access to.
3. Write a Frame prompt using all four components: your context (role, company type, stakes), the scope of the analysis, the constraints (time horizon, available data, decision this feeds), and the output format you want.
4. Send the Frame prompt and read the response critically. Note one place where the AI made an assumption you didn't intend.
5. Write and send a Probe prompt that targets the most important sub-question within the analysis — the one insight that would most change your thinking.
6. After receiving the Probe response, send this follow-up: 'Identify the three assumptions in your analysis that, if wrong, would most change the conclusions.'
7. Review the assumption-challenge response and identify one assumption you can verify with real data or a quick conversation with a colleague in the next 48 hours.
8. Write a one-paragraph summary of what the AI analysis produced that you wouldn't have reached as quickly through your normal analytical process.
9. Save your Frame prompt as a reusable template — you'll refine it over the next three uses until it reliably produces first-pass output you can work with directly.
- Complex analysis requires a structured sequence of prompts — Frame, Probe, Synthesize — not a single broad question.
- Framing covers four elements: context, scope, constraints, and output format. Missing any one of them degrades the analysis quality significantly.
- Tool choice matters: Claude 3.5 Sonnet for long documents, GPT-4o for numerical work, Perplexity for current data, Gemini for Google Workspace integration.
- The Probe stage produces the most analytical value — ask for one focused sub-question at a time, in logical sequence.
- Explicitly asking the AI to challenge its own reasoning ('what are the weakest assumptions here?') produces substantive critique that models don't reliably volunteer without prompting.
- Perspective-switching — asking the AI to analyze a situation from multiple stakeholder viewpoints — surfaces blind spots that single-perspective prompts miss every time.
- A hybrid approach using Perplexity or Gemini for current information plus Claude or GPT-4o for deep reasoning outperforms any single-tool approach for time-sensitive analysis.
Breaking Down the Irreducible: Multi-Step Analysis in Practice
Picture this: it's Wednesday afternoon, and your director has just forwarded a 47-page competitor report with a single line — 'thoughts before Friday?' You have three other deliverables, a team check-in tomorrow, and exactly no mental bandwidth to synthesise nearly fifty pages into a coherent strategic view. This is not an edge case. It's the default condition of professional life. The instinct is to skim, extract a few bullet points, and hope they hold up in the meeting. The better move — the one you'll use on Monday — is to hand the analytical scaffolding to an AI assistant and direct it precisely, layer by layer, rather than asking for a single sweeping answer that collapses nuance into mush.
Part 1 established why decomposition matters: complex problems resist single-prompt solutions because they contain multiple analytical layers, each requiring different reasoning. Now we go deeper into how that decomposition actually runs in professional practice. The key shift is moving from prompting as a one-shot interaction to prompting as a structured analytical conversation — where each exchange builds on the last and you're steering the reasoning, not just receiving it. Think of it as the difference between asking a junior analyst to 'write up something on the market' versus walking them through a proper brief with scoped questions and explicit deliverables at each stage.
The Analytical Conversation: How to Build on Each Response
When Claude or ChatGPT responds to your first prompt in a complex analysis thread, that response is not the endpoint — it's raw material. The professional move is to treat it as a first-draft memo from a capable but uninformed analyst. You read it critically, identify where the reasoning is thin, where an assumption has slipped through unchallenged, or where a crucial variable is missing. Then your next prompt does one of three things: it challenges a specific claim, it asks the model to go deeper on one strand, or it introduces a constraint the model didn't account for. This iterative method consistently produces sharper outputs than any single well-crafted prompt, because it mirrors how expert human analysis actually works — through progressive refinement and productive friction.
The 'Steel Man' Follow-Up
The mechanics of a productive analytical conversation follow a consistent pattern. Your opening prompt sets context and requests a structured first-pass analysis with explicit output format. Your second prompt narrows to the most uncertain or high-stakes element of that analysis. Your third prompt introduces real-world constraints — budget, timeline, org politics, data gaps — that the model couldn't have known without you providing them. By the fourth or fifth exchange, you're typically working with analysis that is genuinely differentiated: it reflects your specific situation, incorporates the friction of realistic constraints, and has been stress-tested against counterarguments. That's a different quality of output than anything a single prompt delivers, regardless of how carefully crafted.
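For anyone driving this pattern through an API rather than a chat window, the conversation state is just an accumulating message list. The `AnalysisThread` helper below is an illustrative sketch, not part of any SDK, but the role/content message shape it builds is the format the major chat APIs accept:

```python
class AnalysisThread:
    """Accumulates a multi-turn analytical conversation as a list of
    role/content messages, the shape the major chat APIs accept."""

    def __init__(self, opening_prompt: str):
        # Exchange 1: context plus a structured first-pass request.
        self.messages = [{"role": "user", "content": opening_prompt}]

    def record_response(self, text: str) -> None:
        self.messages.append({"role": "assistant", "content": text})

    def follow_up(self, prompt: str) -> list:
        # Every follow-up rides on the full history, so the model sees
        # the earlier reasoning it is being asked to narrow or challenge.
        self.messages.append({"role": "user", "content": prompt})
        return self.messages

# The four-exchange pattern from the text, with placeholder responses.
thread = AnalysisThread("Step 1 only: identify the three competitive dimensions...")
thread.record_response("(model's first-pass analysis)")
thread.follow_up("Go deeper on dimension 2 -- that's where our board is most anxious.")
thread.record_response("(deeper analysis of dimension 2)")
history = thread.follow_up("Now add a real constraint: no new hires this fiscal year.")
```

The point of keeping the full history in each request is exactly the point of the paragraph above: by the fourth or fifth exchange, the model is reasoning over your accumulated constraints and challenges, not starting fresh.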
Prompt
I'm a strategy manager at a mid-size B2B SaaS company (€40M ARR, 200 employees) competing in the project management space. Our main competitors are Asana, Monday.com, and a newer entrant called Linear. I need to brief our executive team on where we're most vulnerable to competitive pressure over the next 18 months.

Step 1 only: Identify the three competitive dimensions where B2B SaaS companies in this category most commonly lose ground. For each dimension, describe what early warning signals look like in practice. Keep it to 350 words.
AI Response
**Three High-Risk Competitive Dimensions in B2B Project Management SaaS**

**1. Pricing Architecture vs. Perceived Value**

Mid-market SaaS companies most often lose ground here when a competitor reframes the value metric — shifting from per-seat to per-project, or introducing a generous free tier that captures the bottom of your ICP. Early warnings: rising churn in the 10–50 seat segment, increased price objections in renewal calls, and sales cycles lengthening as prospects 'just check out one other option.'

**2. Integration Ecosystem Depth**

Enterprise buyers increasingly evaluate tools based on how cleanly they fit existing stacks (Salesforce, Slack, Jira, HubSpot). When a competitor deepens native integrations you rely on third-party connectors for, your stickiness erodes. Early warnings: support tickets citing integration friction, feature requests clustering around specific tools, and deals lost to competitors citing 'better fit with our stack.'

**3. Product-Led Growth Execution**

Linear has built a significant developer and engineering audience through PLG — individual users adopt the tool bottom-up before any enterprise conversation happens. If your product lacks a frictionless individual-use entry point, you're invisible to the next generation of buyers before they have budget authority. Early warnings: low organic trial-to-paid conversion, absence of your brand in practitioner communities (Reddit, Hacker News, LinkedIn), and new enterprise deals where your champion didn't use the product before the sales process began.

---

Ready for Step 2 when you are — I can map these against your specific competitive set or assess which dimension represents your sharpest near-term exposure.
Notice what that response does: it delivers exactly the scoped output requested, signals the analytical framework being used, and closes with an explicit invitation to continue the thread. That closing line isn't politeness — it's a structural cue you can use to direct the next step. Your follow-up can either accept the model's offer ('Yes, map these against Asana and Linear specifically') or redirect entirely ('Actually, skip the mapping — go deeper on the PLG dimension because that's where our board is most anxious'). You're the analyst-in-charge. The model is your fast, tireless research associate.
Choosing the Right Tool for Complex Analysis
Not all AI assistants handle complex analytical work equally. The differences matter when you're building a chain of reasoning across multiple prompts, working with long documents, or needing outputs that will be scrutinised in a senior meeting. The table below reflects real performance characteristics as of mid-2025 — these shift as models update, but the relative strengths are consistent enough to guide your tool selection for serious analytical work.
| Tool | Strengths for Complex Analysis | Watch Out For | Best Use Case | Approx. Cost |
|---|---|---|---|---|
| Claude (Anthropic) | Long-context reasoning (200K tokens), nuanced multi-step logic, careful hedging of uncertainty | Can over-qualify conclusions; sometimes buries the lead | Document-heavy analysis, strategy memos, anything requiring sustained reasoning across large inputs | $20/mo (Pro) |
| ChatGPT-4o (OpenAI) | Strong structured output, good at frameworks and tables, broad knowledge base | Confidence can exceed accuracy; watch for plausible-sounding fabrications | Framework generation, scenario modelling, cross-domain synthesis | $20/mo (Plus) |
| Gemini Advanced (Google) | Deep integration with Google Workspace, strong at data-adjacent reasoning, real-time web access | Less consistent on nuanced argument chains than Claude or GPT-4o | Analysing data in Sheets, summarising Drive docs, research with live sources | $19.99/mo |
| Perplexity Pro | Real-time sourced research with citations, good at competitive intelligence gathering | Analysis depth shallower than Claude/GPT-4o; better for research than reasoning | Gathering current competitor data, market sizing, sourcing recent reports | $20/mo |
| Microsoft Copilot (M365) | Native in Word, Excel, PowerPoint, Teams; works on your actual files | Reasoning quality below standalone models; context window limits | Summarising existing documents, drafting from your own data, meeting notes | Included in M365 Business plans |
Don't Chain Tools Blindly
Working with Ambiguity: When the Problem Itself Isn't Clear
Some of the hardest professional problems aren't hard because the analysis is complex — they're hard because the problem itself is poorly defined. Your stakeholder says 'we need to figure out our data strategy' or 'something is wrong with the sales process' and you're expected to both define the problem and solve it. This is actually where AI assistants provide disproportionate value, because a good model can help you structure an ambiguous brief into a set of answerable questions before any analysis begins. This is problem framing, and it's a skill that separates senior analysts from junior ones. You can now do it faster and more rigorously with AI support.
The technique is simple but powerful: describe the messy situation to the model and ask it to generate a set of clarifying questions that, if answered, would make the problem tractable. You're not asking the AI to solve anything yet — you're using it as a structured thinking partner to surface the assumptions, unknowns, and scope decisions buried inside a vague brief. Claude is particularly good at this because of how it handles uncertainty; it tends to flag what it doesn't know rather than paper over it. Once you've worked through that question set — either answering them yourself or taking them back to your stakeholder — you have a defined problem that can be analysed systematically. The ambiguity hasn't disappeared, but it's been made explicit and manageable.
Prompt
My VP of Sales has told me 'we're losing too many enterprise deals late in the sales cycle and I want to understand why.' I've been asked to lead the analysis. The problem is I don't know if this is a pricing issue, a product gap, a process issue, or something else entirely.

Don't analyse anything yet. Instead, generate the 8 most important clarifying questions I need to answer before I can design a proper analysis. Organise them by category (data questions, stakeholder questions, scope questions). Flag which two questions are most critical to answer first.
AI Response
**Clarifying Questions Before Analysis Begins**

**Data Questions**

1. What does 'late stage' mean in your CRM — is there a defined stage name, and how many deals are we talking about over what time period?
2. Do you have loss reason data captured at deal close, and how consistently is it filled in by reps?
3. Can you segment losses by deal size, industry vertical, and rep? (Patterns often cluster in ways that immediately point to root cause.)

**Stakeholder Questions**

4. ⭐ **(Critical first)** What does your VP of Sales believe the cause is? Their hypothesis shapes what evidence will actually change their mind — and what they'll dismiss.
5. Have the enterprise AEs been asked directly? Front-line reps often know exactly what's happening and haven't been asked.
6. Are there specific competitors showing up repeatedly in late-stage losses, or is it a spread?

**Scope Questions**

7. ⭐ **(Critical first)** What's the decision this analysis needs to support — a board presentation, a process change, a product roadmap input? The answer determines how rigorous and how fast you need to be.
8. Is 'too many' defined by a benchmark (e.g., win rate below X%) or is this a gut-feel concern from the VP? If there's no baseline, you'll spend time establishing one before you can show change.

---

Once you have answers to questions 4 and 7, the shape of the right analysis becomes much clearer. Want me to sketch out two or three possible analytical approaches based on different answers to those?
Applying This Differently by Role
- Strategy managers: Use the multi-step approach to build scenario analyses — prompt for base case first, then stress-test each assumption in separate exchanges rather than asking for all scenarios at once. Output quality is dramatically higher.
- Marketing analysts: When diagnosing campaign underperformance, use AI to generate competing hypotheses (audience mismatch vs. creative fatigue vs. channel saturation) before pulling data — it prevents you from only testing the hypothesis you already believe.
- Management consultants: Use problem-framing prompts in client discovery phases to generate sharp clarifying questions before stakeholder interviews. You'll ask better questions and appear more prepared.
- Financial analysts: Chain prompts to build assumption trees — start with the key drivers of a model, then interrogate each driver's sensitivity in separate prompts. Catch flawed assumptions before they're baked into a model.
- Operations managers: When mapping process failures, use AI to generate a structured fishbone of possible root causes across people, process, technology, and data — then use each branch as a separate investigation thread.
- HR and people leaders: For complex org design or talent problems, use AI to map second-order effects of proposed changes ('if we restructure this team, what are the likely downstream effects on these three adjacent functions?') before committing to a direction.
The pattern across all these roles is consistent: AI doesn't replace the domain expertise you bring to a problem. It accelerates the structural thinking that precedes and organises that expertise. A financial analyst who knows their business still needs to identify the right drivers and the right sensitivities — the AI just helps map that territory faster and more completely than working alone. Your judgment about what matters, what's realistic, and what will actually land with your stakeholders remains the scarce input. The AI handles the analytical scaffolding so you can spend your cognitive budget on the parts only you can do.
Goal: Experience the full multi-step analytical conversation workflow on a real professional problem, producing an output that is both more rigorous and more honest about its own limitations than single-prompt analysis delivers.
1. Identify one genuinely complex problem you're currently facing at work — something with multiple possible causes, uncertain data, or competing stakeholder views. Write it down in two or three sentences exactly as you'd describe it to a colleague.
2. Open Claude or ChatGPT (either works; use whichever you have access to). Paste your problem description and ask the model to generate 6–8 clarifying questions — categorised by type — that would need answering before a proper analysis could begin.
3. Read the questions critically. Identify any that are wrong, irrelevant, or missing something important. Note what that tells you about what context you failed to provide.
4. Answer the clarifying questions yourself, in writing. This forces you to make your assumptions explicit — which is valuable regardless of what comes next.
5. Now write a decomposed analytical prompt: break your problem into three distinct sub-questions, each requiring different reasoning or data. Frame each as a separate step, and ask the model to address Step 1 only.
6. When the Step 1 response arrives, identify the single weakest point in the reasoning — an assumption, a missing variable, or an overly confident claim. Write a follow-up prompt that directly challenges that point.
7. After the model responds to your challenge, ask it to produce a one-page structured summary of the analysis so far, with explicit flagging of what remains uncertain and what data would resolve those uncertainties.
8. Save the full conversation thread. Compare the final output to what you would have produced with a single prompt. Note specifically what the iterative approach surfaced that a single prompt would have missed.
9. Identify one insight from this analysis that you'll bring into a real meeting or decision this week.
It's Monday morning. Your VP has forwarded a competitor's press release with one line: 'Thoughts?' You have 90 minutes before your 1:1. The press release announces a pricing shift that could undercut your mid-tier product line, but you're not sure by how much, for which customers, or whether it's a real threat or a positioning move. This is exactly the situation where most professionals either panic-Google for 20 minutes or draft a half-baked response. With the right AI workflow, you can walk into that meeting with a structured threat assessment, three scenarios, and a recommended response — all built in under an hour.
Turning Raw Uncertainty Into Structured Analysis
Complex problems resist simple prompts. The competitor pricing scenario above has at least four moving parts: the competitor's intent, your customer segments, your cost structure, and the likely market response. If you paste the press release into ChatGPT and ask 'what should I do?', you'll get something generic. The unlock is decomposing the problem before you prompt — identifying the sub-questions that, once answered, make the big question tractable. AI is exceptionally good at answering bounded sub-questions. Your job is to define those boundaries clearly, then synthesise the outputs into a coherent position.
This is where the chain-of-analysis technique pays off. You run the AI through the problem in deliberate stages: first, extract and structure the facts; second, identify what's unknown or assumed; third, generate scenarios; fourth, stress-test the most likely scenario; fifth, recommend actions with explicit trade-offs. Each stage uses the output of the previous one as context. By stage four, the AI is working with a rich, problem-specific knowledge base that you've built together. The final recommendation is grounded in your actual situation — not a generic best-practice answer scraped from training data.
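The five stages above can be sketched as a simple loop. This is a minimal illustration, not a definitive implementation: `complete` is a hypothetical stand-in for whichever chat API you use (OpenAI, Anthropic, or another), and the stage prompts are condensed versions of the ones described in this lesson. The point it demonstrates is structural — every stage is sent with the full conversation history, so later stages reason over earlier outputs.

```python
# Chain-of-analysis sketch. `complete` is a hypothetical callable that takes
# a list of chat messages and returns the assistant's reply as a string.

STAGES = [
    "Stage 1 - Extract: list every concrete claim in the background material. "
    "Separate stated facts from implied ones.",
    "Stage 2 - Gaps: what are the three most important unknowns that would "
    "change our response?",
    "Stage 3 - Scenarios: give three scenarios, from least to most threatening.",
    "Stage 4 - Stress-test: challenge the weakest assumption behind the most "
    "likely scenario.",
    "Stage 5 - Recommend: three response options with explicit trade-offs.",
]

def run_chain(complete, background: str) -> list[str]:
    """Run each stage in order, feeding all prior turns back as context."""
    history = [{"role": "user", "content": f"Background material:\n{background}"}]
    outputs = []
    for stage_prompt in STAGES:
        history.append({"role": "user", "content": stage_prompt})
        reply = complete(history)          # one model call per stage
        history.append({"role": "assistant", "content": reply})
        outputs.append(reply)
    return outputs
```

Nothing here is clever — and that is the point. The value comes from forcing the stages to run in order with accumulated context, rather than compressing everything into one prompt.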
Claude 3.5 Sonnet and GPT-4o both handle long-context reasoning well, meaning you can paste substantial background documents — a competitor's full pricing page, your own product tiers, recent customer feedback — and the model holds all of it in working memory across a multi-turn conversation. Claude's 200,000-token context window is particularly useful when you're feeding in lengthy reports or transcripts. Perplexity AI adds real-time web search, making it the right tool when you need current market data alongside the reasoning. Knowing which tool fits which stage saves you from wrestling with limitations mid-analysis.
Always Name Your Constraints
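One way to make this habit automatic is to keep your constraints in a list and prepend them to every analysis prompt, so the model never has to guess what's off the table. The helper below is a hypothetical sketch of that idea — the function name and wording are illustrative, not from any library.

```python
# Hypothetical helper for the "name your constraints" habit: prepend an
# explicit constraints block so recommendations stay actionable.

def with_constraints(prompt: str, constraints: list[str]) -> str:
    """Return the prompt with a hard-constraints preamble, if any are given."""
    if not constraints:
        return prompt
    block = "\n".join(f"- {c}" for c in constraints)
    return (
        "Hard constraints (do not propose options that violate these):\n"
        f"{block}\n\n{prompt}"
    )
```

Stating constraints like 'no headcount changes' or 'no pricing moves before Q3' up front is what separates a recommendation you can act on from one you have to rework.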
The Competitive Threat Analysis in Practice
Prompt
Here's a competitor press release. Work through this in three stages.

Stage 1 — Extract: List every concrete claim they make about pricing, features, and target customers. Separate stated facts from implied ones.
Stage 2 — Identify gaps: What are the three most important things this announcement doesn't tell us that would change our response?
Stage 3 — Scenarios: Based on what's stated, give me three scenarios for what this move actually means strategically — from least to most threatening to our mid-tier product line.

[PRESS RELEASE TEXT PASTED HERE]

Our mid-tier product is priced at $149/month, targets SMBs with 10–50 employees, and our main differentiator is onboarding support.
AI Response
STAGE 1 — EXTRACTED CLAIMS
Stated facts: New 'Professional' tier at $99/month. Launch date Q2. Targets 'growing businesses.' Includes API access previously only in enterprise tier.
Implied: They're moving downmarket. The $99 price point suggests margin pressure or a land-and-expand play.

STAGE 2 — CRITICAL UNKNOWNS
1. Is $99 introductory pricing or permanent? Introductory pricing is an acquisition tactic, not a structural threat.
2. What's their churn rate at existing tiers? A desperate retention move looks like aggression from the outside.
3. Does 'growing businesses' mean they're targeting your current customers or pre-customers?

STAGE 3 — SCENARIOS
Least threatening: Defensive move to stop losing SMB trials to you. They're not winning new customers — they're trying to keep existing ones.
Moderately threatening: Deliberate downmarket expansion. They've decided SMB is a growth segment and are pricing to capture it before you consolidate.
Most threatening: Loss-leader strategy to build SMB installed base for upsell to enterprise. In 18 months, your customers become their enterprise pipeline.
Choosing the Right Tool for Each Analysis Layer
| Analysis Layer | Best Tool | Why | Watch Out For |
|---|---|---|---|
| Fact extraction from documents | Claude 3.5 Sonnet | 200K context, precise citation | Doesn't have real-time data |
| Current market/competitor data | Perplexity AI | Live web search + synthesis | Sources vary in quality — verify |
| Scenario generation | GPT-4o or Claude | Strong structured reasoning | May favour plausible over surprising scenarios |
| Financial modelling logic | GPT-4o with Code Interpreter | Can run actual calculations | Always sanity-check the maths |
| Executive communication draft | ChatGPT or Claude | Tone control, format flexibility | Generic without specific context |
| Real-time brainstorming | Gemini 1.5 Pro | Fast, Google Workspace integration | Less precise on nuanced reasoning |
The table above reflects a practical reality: professionals who get the most from AI analysis treat these tools like a specialist team, not a single generalist assistant. You might start in Perplexity to gather current data on a competitor's recent moves, shift to Claude to reason through a long internal report, then use GPT-4o's Code Interpreter to model out revenue scenarios numerically. The friction of switching tools is real but small compared to the cost of forcing one tool to do everything. Build a personal routing rule — a simple decision: does this task need fresh data, long documents, or calculation? — and the right tool becomes obvious.
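The routing rule described above is simple enough to write down once and reuse. The sketch below encodes it as three yes/no questions asked in the order the paragraph gives them — fresh data, long documents, calculation. The function name and tool mapping are illustrative; adjust them to whatever tools you actually have access to.

```python
# A personal routing rule, per the table above: three questions, asked in order.
# The tool names mirror this lesson's table and are assumptions, not endorsements.

def route(needs_fresh_data: bool, long_documents: bool, calculation: bool) -> str:
    """Pick a tool for an analysis task by asking three questions in order."""
    if needs_fresh_data:
        return "Perplexity"               # live web search + synthesis
    if long_documents:
        return "Claude 3.5 Sonnet"        # 200K-token context window
    if calculation:
        return "GPT-4o Code Interpreter"  # can actually run the maths
    return "GPT-4o or Claude"             # scenario reasoning, drafting
```

If a task hits more than one category, the earlier question wins — which matches the practical pattern of gathering fresh data first, then moving the material into a long-context or calculation tool.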
Once you have your scenarios and analysis, the final step is synthesis: turning the AI's structured output into a point of view you're willing to defend. This is non-negotiable. The AI generates options and surfaces considerations — you make the call. In the competitive threat example, you might conclude the 'moderately threatening' scenario is most likely given what you know about that competitor's recent hiring on LinkedIn, and recommend a 90-day monitoring period before any pricing response. That judgment is yours. The AI gave you the structure to reach it faster and with more rigour than you could alone.
Prompt
Based on our analysis, I think Scenario 2 (deliberate downmarket expansion) is most likely. Now do two things:

1. Steel-man the case AGAINST my conclusion — what's the strongest argument that I'm wrong and it's actually Scenario 1 (defensive move)?
2. Assuming Scenario 2 is correct, give me three response options ranked by risk level. For each option, state: what we do, what we're betting on, and what the failure mode is. Keep each option to 4 sentences maximum.
AI Response
STEEL-MAN FOR SCENARIO 1 (you're wrong): Their last two product launches both underperformed. Their enterprise NPS dropped 12 points last year per G2 reviews. Companies in defensive mode often mimic aggressive pricing to signal strength they don't have. The API inclusion could be a retention feature for at-risk accounts, not a new acquisition tool.

RESPONSE OPTIONS IF SCENARIO 2 IS CORRECT:
Low Risk — Reinforce differentiation: Double down on onboarding support messaging in all SMB touchpoints. We're betting price-sensitive buyers aren't our real customers. Failure mode: we're wrong and lose 15% of new trials over 6 months.
Medium Risk — Introduce a lower entry tier at $99 with limited onboarding: Match their price point but protect margin by gating our differentiator. We're betting some customers need the price anchor to convert. Failure mode: cannibalises existing $149 customers without adding net-new revenue.
High Risk — Accelerate enterprise move: Pull resources from SMB acquisition and push upmarket before they do. We're betting the SMB segment will commoditise and margin lives in enterprise. Failure mode: we abandon a profitable segment too early and have nothing to show upmarket for 12 months.
What Changes at Work on Monday
- Managers: Use the chain-of-analysis approach before any strategic discussion — walk in with scenarios and trade-offs, not just a summary of the problem
- Analysts: Replace single-prompt analysis with staged sessions; use Claude for document-heavy work, Perplexity when you need current data, GPT-4o Code Interpreter for anything numerical
- Consultants: Build the steel-man step into every client recommendation — it sharpens your argument and pre-empts the toughest pushback
- Marketers: Apply scenario generation to campaign planning — model the 'what if this doesn't work' case before launch, not after
- All roles: Start every complex session by stating your constraints explicitly — it's the single highest-leverage habit change from this lesson
Goal: Produce a real, defensible strategic recommendation on an actual work challenge — a 150-word summary you can use in a meeting, send to a stakeholder, or build into a longer report.
1. Identify a real challenge you're facing at work right now — a competitor move, a strategic decision, a market shift, or an internal problem with multiple stakeholders. Write one sentence describing it.
2. Open Claude 3.5 Sonnet or GPT-4o and paste in any relevant background: a document, a news item, an email thread, or your own written summary of the situation.
3. Run Stage 1: Ask the AI to extract all stated facts, separate them from assumptions, and list the three most important unknowns.
4. Run Stage 2: Ask it to generate three scenarios — from least to most serious — based only on what's actually known.
5. Choose the scenario you believe is most likely and tell the AI why in 2–3 sentences. Ask it to steel-man the case against your view.
6. Run Stage 3: Ask for three response options at different risk levels, each with a clear 'what we're betting on' and 'failure mode.'
7. Write a 150-word summary of your recommended response, citing which scenario you're acting on and why. This is the document you keep.
8. Share your summary with one colleague or your manager and note one piece of feedback that changes or sharpens your view.
9. Save the full AI conversation thread — it's a reusable template for the next time a similar problem lands on your desk.
- Decompose before you prompt — break hard problems into bounded sub-questions that the AI can answer with precision
- Use chain-of-analysis: extract facts, identify unknowns, generate scenarios, stress-test, then recommend — in that order
- State your constraints at the start of every complex session; it's what makes recommendations actually actionable
- Route by task: Perplexity for live data, Claude for long documents, GPT-4o Code Interpreter for calculations, any major model for scenario reasoning
- Always run the steel-man step — asking the AI to argue against your conclusion is the fastest way to find the holes in your thinking
- The AI builds the structure; you make the call — synthesis and judgment are yours, and that's what makes the output worth defending
- Save your analysis sessions as templates — a well-structured competitive threat conversation works again the next time a similar problem appears
You receive an urgent request to assess a potential market risk and have 90 minutes. What's the highest-leverage first step when using AI for this analysis?
A consultant is preparing a recommendation for a client and wants to pre-empt the strongest objection to their conclusion. Which AI technique is most directly designed for this?
You need to analyse a 150-page industry report alongside your company's internal strategy document and reason across both. Which tool is best suited for this task?
An analyst uses AI to generate three competitive scenarios and identifies the most likely one. What must happen before the output becomes a usable recommendation?
A manager tells the AI: 'We can't reduce headcount, can't change pricing before Q3, and our engineering team is at capacity.' What is this technique called and why does it matter?
