Lesson 1 of 10

Designing your personal AI stack

~22 min read

It's Tuesday afternoon. You have four browser tabs open: ChatGPT for drafting a stakeholder update, a separate window where you're asking Perplexity to research a competitor, your company's Notion workspace where someone installed Notion AI last month, and a half-finished Excel sheet you keep meaning to automate. You're switching between them every eight minutes, losing context each time, and the outputs feel disconnected — like four different assistants who've never met. This is not an AI workflow. This is AI chaos. And it's costing you more time than it saves.

What an AI Stack Actually Is

An AI stack is the specific combination of tools, models, and usage patterns you deliberately assemble to handle your professional workload. The word 'stack' comes from software engineering, where developers layer technologies that each do one thing well. The same logic applies here. ChatGPT is not the same tool as Perplexity, even though both answer questions. Claude handles long documents differently than Gemini handles spreadsheet data. GitHub Copilot exists for code; Midjourney exists for images. Each tool has a ceiling and a floor, and understanding both is what separates professionals who get 10x output from AI from those who get 10% gains and frustration.

Most professionals default to a single tool — usually ChatGPT, because it launched first and the name stuck. That's like running your entire business through email because it was the first digital communication tool you adopted. A deliberate stack means you've made conscious choices: this tool for research, this tool for writing, this tool for data. Those choices are based on what each model actually does well, what it costs, and how it fits your specific job. A marketing manager's stack looks different from a financial analyst's stack. Both look different from a management consultant's stack. One size does not fit any of them well.

The goal of this lesson is to help you design your stack intentionally — not collect tools randomly as they trend on LinkedIn. By the end, you'll have a clear picture of which AI tools belong in your workflow, what role each plays, and how to stop duplicating effort across platforms. You'll also understand the hidden costs of a poorly designed stack: context-switching friction, inconsistent outputs, and the cognitive overhead of remembering which tool is open in which tab. A well-designed stack eliminates those costs and makes AI feel like an integrated part of how you work, not an add-on you have to remember to use.

Start with your pain, not the product

Before choosing any tool, list the three tasks in your week that consume the most time relative to the value they produce. Repetitive drafting? Research synthesis? Data summarization? Your stack should attack those specific bottlenecks. Professionals who pick tools first and look for problems second end up with expensive subscriptions they barely use.

The Four Functional Layers of a Professional AI Stack

Think of your stack as having four distinct layers, each serving a different function. The first is the reasoning and drafting layer — this is where you do your heavy cognitive lifting: writing, analyzing, planning, and problem-solving. ChatGPT (GPT-4o) and Claude (claude-3-5-sonnet) dominate this layer for most professionals. The second is the research and retrieval layer, where tools like Perplexity AI pull live web data and cite sources, saving you from the hallucination risk that comes with asking a base language model for current facts. The third is the embedded productivity layer — tools like Notion AI, Microsoft Copilot, or Google Gemini in Workspace that live inside software you already use daily. The fourth is the specialist layer: tools like GitHub Copilot for code, Midjourney for visuals, or ElevenLabs for voice — purpose-built for specific output types.

Most professionals need tools from the first two layers at minimum. The third layer adds value if you're already paying for Notion, Microsoft 365, or Google Workspace — in which case Copilot or Gemini are often included at no extra cost, and ignoring them is leaving money on the table. The specialist layer is optional and role-dependent. A consultant who never produces code doesn't need GitHub Copilot. A brand manager who commissions visual content regularly should absolutely be using Midjourney. The mistake is paying for all four layers without being intentional about which tools in each layer actually match your output types.

Drafting a stakeholder communication with context (ChatGPT / Claude)

Prompt

I'm a senior operations manager at a logistics company. We missed our Q3 on-time delivery target by 6 percentage points (target: 94%, actual: 88%). The main causes were a port strike in August and a WMS software failure in week 37. I need to write a 200-word update for our executive team that's honest about the miss, explains the external vs internal causes clearly, and ends with three specific corrective actions we're already taking. Tone: direct, professional, no spin.

AI Response

Q3 On-Time Delivery Performance Update

Our Q3 on-time delivery rate came in at 88%, against a target of 94%. We're addressing this directly.

Two factors drove the gap. The August port strike — an industry-wide disruption — accounted for approximately 4 points of the shortfall, affecting inbound freight across all carriers operating through the affected terminals. The remaining 2-point gap traces to a WMS software failure in week 37, which caused misrouted picks across three distribution centers for nine days before full resolution.

We're not waiting for Q4 planning to respond. Three actions are already underway:

1. Carrier diversification: We've added two alternative routing partners effective October 1, reducing single-port dependency for inbound volumes by 40%.
2. WMS resilience: IT has implemented a manual override protocol and is completing a full audit of the week 37 failure by October 15.
3. Weekly cadence: Operations and logistics leadership now meet every Monday to review delivery variance before it compounds.

We expect Q4 to return above 93%. I'll provide a mid-quarter update in November.

Notice what made that prompt effective. It included the speaker's role, the specific numbers, the causal breakdown, the desired word count, and the tone. The AI didn't need to guess at any of those variables. This is the core discipline of the drafting layer: you bring the facts and judgment, the model brings the structure and language. The output above would take most managers 20-35 minutes to write from scratch. With a well-structured prompt, it takes under two minutes — and the editing pass is minimal because the context was complete upfront.

Comparing the Core Tools: What Each One Actually Does

| Tool | Best For | Key Strength | Key Limitation | Cost (2024) |
| --- | --- | --- | --- | --- |
| ChatGPT (GPT-4o) | Drafting, analysis, coding, image gen | Broadest capability set; huge plugin ecosystem | Can hallucinate facts; no live web by default in API | Free tier; $20/mo Plus; $25/mo Team |
| Claude (claude-3-5-sonnet) | Long documents, nuanced writing, instruction-following | 200K token context window; follows complex instructions precisely | No image generation; smaller ecosystem than OpenAI | Free tier; $20/mo Pro |
| Perplexity AI | Real-time research, fact-finding, source citation | Cites live web sources; reduces hallucination risk significantly | Not built for drafting or creative tasks | Free tier; $20/mo Pro |
| Gemini (Google Workspace) | Gmail, Docs, Sheets integration; multimodal tasks | Native integration with Google tools; strong with data in Sheets | Reasoning depth trails GPT-4o and Claude on complex tasks | Included in Google Workspace Business ($12+/user/mo) |
| Microsoft Copilot (M365) | Word, Excel, PowerPoint, Teams, Outlook | Deep integration with Office files and meeting transcripts | Quality varies by app; requires M365 Business subscription | $30/user/mo add-on to M365 |
| Notion AI | Meeting notes, project docs, knowledge base summarization | Lives inside your existing Notion workspace | Limited to Notion context; not a general-purpose model | $10/mo add-on per workspace |
| GitHub Copilot | Code completion, code review, debugging | Best-in-class for developers; integrates with VS Code and JetBrains | Irrelevant if you don't write code | $10/mo individual; $19/mo Business |
| Midjourney | Visual content, concept art, marketing imagery | Highest quality AI image generation available | Images only; no text, no analysis | $10-$60/mo depending on usage |
Core AI tools by function, strength, limitation, and cost — as of late 2024. Prices subject to change; verify on each provider's website.

The table above reveals a pattern that most AI newcomers miss: there is no single tool that leads across all categories. ChatGPT's breadth makes it the default starting point, but Claude's 200,000-token context window means it can process an entire 150-page report in a single conversation — something GPT-4o handles less reliably at extreme lengths. Perplexity's live web access makes it the right tool when you need a fact from last week, not a plausible-sounding fact from training data that may be 12-18 months old. The professionals who get the most from AI aren't using more tools — they're using the right tool for each job type.

Cost is a real variable here, not a footnote. If your organization already pays for Microsoft 365 Business, you may have access to Copilot features without an additional subscription. Google Workspace Business users often have Gemini capabilities already provisioned. Before spending $20/month on a personal ChatGPT Plus subscription, audit what's already available through your employer. Many professionals are paying out of pocket for tools their company already licenses — or they're using free tiers of personal tools when enterprise versions with better privacy protections are available through IT.

Data privacy before you type

Free tiers of ChatGPT, Claude, and Perplexity may use your conversations to train future models by default. Before pasting client names, financial figures, or confidential strategy documents into any AI tool, check the privacy settings. ChatGPT lets you disable training in Settings > Data Controls. Claude's free tier has similar opt-out options. Enterprise and Team plans typically include stronger data protection by default — confirm with your IT or legal team before handling sensitive material.

Building Your Stack: The Practical Decision

Designing your stack is a three-step decision process. First, audit your current tool usage honestly — write down every AI tool you've touched in the last month and what you used it for. Most professionals discover they've been using ChatGPT for everything, including tasks where Perplexity or Claude would produce meaningfully better outputs. Second, map your highest-frequency work tasks to the functional layers described earlier. If you spend 40% of your week writing — emails, reports, proposals, presentations — the drafting layer is your highest-value investment. If your role requires constant competitive intelligence or market research, the research layer earns its cost immediately.

Third, apply a ruthless constraint: start with a maximum of three tools. One from the drafting layer, one from the research layer, and one embedded tool if your productivity suite already includes it. Adding more tools before mastering three is how professionals end up with the tab chaos described at the opening of this lesson. Depth of skill with one tool compounds faster than surface familiarity with six. A manager who knows how to write precision prompts in Claude — with proper context, role framing, and output specifications — will outperform a manager who dabbles across five platforms without mastering any of them.

The three-tool constraint also forces you to make explicit choices about what you're not using — and why. If you decide Perplexity handles your research needs, you're deliberately not using ChatGPT's Browse feature for that purpose. That's a valid choice, not a gap. If you're using Claude for long-document analysis, you're accepting that it won't generate images — and you've decided that's fine for your role. These deliberate trade-offs are the signature of a professional who understands their tools rather than someone who's simply collecting them.
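The audit-and-constrain process described above can be sketched in a few lines of Python. Everything in this sketch is illustrative: the task names, the weekly hours, and the tool listed first in each layer are placeholder assumptions, not recommendations.

```python
# Hypothetical sketch of the stack-design exercise: map your top tasks to
# functional layers, then pick at most one tool per layer (max three tools).
# Tool orderings inside each layer are arbitrary placeholders.

LAYER_TOOLS = {
    "drafting": ["ChatGPT", "Claude"],
    "research": ["Perplexity"],
    "embedded": ["Notion AI", "Microsoft Copilot", "Gemini"],
    "specialist": ["GitHub Copilot", "Midjourney"],
}

def design_stack(tasks, max_tools=3):
    """tasks: list of (task_name, layer, hours_per_week) tuples."""
    # Rank layers by the weekly hours they would absorb.
    hours_by_layer = {}
    for _, layer, hours in tasks:
        hours_by_layer[layer] = hours_by_layer.get(layer, 0) + hours
    ranked = sorted(hours_by_layer, key=hours_by_layer.get, reverse=True)
    # One tool per layer, highest-value layers first, capped at max_tools.
    return [LAYER_TOOLS[layer][0] for layer in ranked[:max_tools]]

my_tasks = [
    ("stakeholder updates", "drafting", 6),
    ("competitor research", "research", 4),
    ("meeting summaries", "embedded", 3),
    ("image mockups", "specialist", 1),
]
print(design_stack(my_tasks))  # specialist layer is dropped by the cap
```

The point of the cap is visible in the example: the lowest-hours layer falls off the list, which is exactly the deliberate trade-off the three-tool constraint forces.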

Design Your Personal AI Stack

Goal: Produce a documented, intentional AI stack of 2-3 tools with a clear rationale for each choice, grounded in your actual work tasks rather than tool hype.

1. Open a blank document or note and write the heading 'My AI Stack Audit.' List every AI tool you've used in the past 30 days — include free tools, browser extensions, and anything embedded in software you already use (e.g., Notion AI, Copilot in Word).
2. Next to each tool, write one sentence describing what you actually used it for. Be specific: 'summarized meeting notes' is better than 'productivity.'
3. Draw a simple two-column table: 'Tasks I do most often' and 'Time spent per week (estimate).' List your top 5 recurring work tasks and estimate hours per week honestly.
4. Using the four-layer framework from this lesson (drafting, research, embedded productivity, specialist), assign each of your top 5 tasks to a layer. Some tasks may fit multiple layers — pick the primary one.
5. Cross-reference your task layers with the comparison table in this lesson. For each layer you need, identify one tool that fits your budget and access level. Note whether your employer already provides any of these tools.
6. Write a one-paragraph 'Stack Decision' that names your chosen tools (maximum three), states what each one will handle, and notes one tool you're deliberately leaving out and why.
7. Set a calendar reminder for three weeks from today titled 'Stack Review.' At that point, assess whether each tool is earning its place — replace any that haven't been used more than twice.

How Your Role Shapes Your Stack

  • Marketing managers: Prioritize the drafting layer (ChatGPT or Claude for copy and campaign briefs) plus Midjourney or Adobe Firefly if visual content is part of your output. Perplexity is valuable for real-time competitor and trend research before campaign planning.
  • Financial analysts: Microsoft Copilot in Excel handles formula generation and data summarization if you're on M365. Claude's long context window is ideal for processing lengthy financial reports or regulatory documents. Perplexity for live market data lookups.
  • Management consultants: Claude for synthesizing long RFPs, strategy documents, and client reports. ChatGPT for slide structure and executive summaries. Perplexity for rapid industry benchmarking. Avoid Midjourney unless you're producing visual deliverables — it's a distraction otherwise.
  • Operations managers: Microsoft Copilot or Gemini for meeting summaries and action-item extraction if you're in Teams or Google Meet. ChatGPT for drafting SOPs and process documentation. Perplexity for supplier research and logistics benchmarking.
  • HR and people managers: Claude for drafting sensitive communications (performance feedback, policy documents) where nuanced tone matters. Notion AI if your HR team uses Notion for documentation. ChatGPT for job description drafting and interview question generation.
  • Product managers: ChatGPT for PRD drafting, user story generation, and competitive analysis frameworks. Perplexity for real-time product and market research. GitHub Copilot only if you write or review code as part of your role — don't add it otherwise.
Research synthesis with source grounding (Perplexity AI)

Prompt

I'm preparing for a client meeting next week. They're a mid-sized retail chain considering whether to invest in AI-powered demand forecasting. Give me a current summary of: (1) the leading vendors in retail AI demand forecasting, (2) typical ROI ranges cited in recent case studies, and (3) the main implementation risks retailers have reported. Cite your sources.

AI Response

**Retail AI Demand Forecasting: Current Landscape (2024)**

**Leading Vendors**
The market is currently led by Blue Yonder (owned by Panasonic), o9 Solutions, and Relex Solutions for enterprise retail. For mid-market retailers, Toolsgroup and Lokad are frequently cited alternatives. Microsoft and Google both offer demand forecasting modules within their cloud platforms (Azure Machine Learning, Google Cloud Retail API) that retailers are increasingly building on top of.

**Typical ROI Ranges**
Recent implementations report inventory reduction of 15-30% and forecast accuracy improvements of 20-40% over baseline statistical models. A 2023 McKinsey analysis of AI in retail supply chain cited a median 10-15% reduction in stockouts for retailers in the first 18 months post-deployment. Payback periods vary widely: 12-24 months for retailers with clean historical data, 30+ months for those requiring significant data infrastructure investment upfront.

**Key Implementation Risks**
Retailers consistently report three failure modes: (1) insufficient historical transaction data — models need 2-3 years of clean SKU-level sales data to perform reliably; (2) integration complexity with legacy ERP systems, particularly older SAP and Oracle deployments; (3) change management — store-level planners resisting AI recommendations in favor of manual overrides, which degrades model learning over time.

*Sources: McKinsey & Company Retail AI Report 2023; Gartner Magic Quadrant for Supply Chain Planning 2024; Relex Solutions case study library; Blue Yonder ROI documentation.*

The Perplexity example above shows exactly why the research layer is distinct from the drafting layer. That response cites specific vendors, specific ROI numbers, and specific failure modes — grounded in sources the model can point to. If you asked ChatGPT the same question without web access enabled, you'd get plausible-sounding answers that may reflect market conditions from 12-18 months ago. For a client meeting happening next week, currency matters. Perplexity's design — building search and synthesis together — makes it the right tool for this job, not because it drafts better than ChatGPT, but because it retrieves more reliably.

What Changes on Monday

  1. You stop using ChatGPT as your default for everything and assign specific tools to specific job types based on their actual strengths.
  2. You audit whether your employer already provides AI tools through Microsoft 365 or Google Workspace — before spending personal money on duplicate capabilities.
  3. You apply the three-tool constraint: one drafting tool, one research tool, one embedded tool if available. No more than three until you've mastered what you have.
  4. You write your Stack Decision document from the task above and keep it somewhere visible — this is your reference point when a new AI tool trends and you feel the pull to add it.
  5. When preparing research-heavy deliverables, you route your fact-finding through Perplexity rather than a base language model, then use ChatGPT or Claude to synthesize and structure the output.
  6. You check privacy settings on any free-tier tool before pasting client or company data — and flag your IT team if you're unsure what your organization's policy is.

It's Tuesday afternoon. You have a client deliverable due Thursday, three Slack threads waiting for responses, and a strategy document that needs a complete rewrite before the weekly leadership meeting. You open ChatGPT, type a vague question, get a generic answer, close the tab, and go back to staring at your blank document. Sound familiar? The problem isn't the AI — it's that you're treating a precision instrument like a search engine. Once you map specific tools to specific workflow moments, that Tuesday scenario plays out completely differently.

Matching Tools to Workflow Moments

Part 1 established the core principle: your AI stack should be built around the actual shape of your work, not around hype or feature lists. Now the practical question is how to map tools to the specific friction points in your week. Every professional role has recurring task categories — synthesis, drafting, analysis, research, ideation, and communication. Each of these has a different cognitive profile, and different AI tools are optimized for different cognitive profiles. Synthesis tasks, where you need to compress large volumes of information into structured insight, play to Claude's strength with long documents. Fast iterative drafting, where you need a thinking partner that builds on previous exchanges, is where ChatGPT's conversational memory shines. Getting this mapping right is what separates people who get 20 minutes of value per week from AI from people who reclaim 90 minutes every single day.

Research tasks deserve special attention because professionals often default to ChatGPT for research, which is the wrong call for anything requiring current data. ChatGPT's training data has a knowledge cutoff, meaning anything that happened in the last 12-18 months may be missing or wrong. Perplexity AI is built specifically for research with live web access, source citations, and a focus interface that surfaces relevant links alongside synthesized answers. For a consultant building a competitive landscape or a marketer researching category trends, Perplexity returns cited, verifiable information in under 60 seconds. That's not a minor upgrade — it's the difference between a research task that takes 40 minutes and one that takes 8. The citation layer also matters professionally: you can see exactly which sources the answer draws from, which means you can verify claims before putting them in a client deck.

The 3-Tool Rule for New Stack Builders

Start with three tools, not seven. One for drafting and thinking (ChatGPT or Claude), one for research (Perplexity), and one embedded in software you already use daily (Notion AI, Copilot in Microsoft 365, or Gemini in Google Workspace). Three tools used deeply outperform seven tools used occasionally. Add a fourth only when you hit a clear ceiling with your current three.

The Core Professional Stack: A Tool-by-Tool Breakdown

ChatGPT Plus at $20/month gives you GPT-4o access, which handles complex reasoning, code interpretation, image analysis, and file uploads. The model processes up to 128,000 tokens in a single context window — roughly 96,000 words — meaning you can paste an entire strategy document and have a real conversation about it. Claude Pro, also $20/month from Anthropic, runs the Claude 3.5 Sonnet model and is widely regarded as producing more nuanced, carefully structured long-form writing. Many consultants and analysts keep both subscriptions active and route tasks accordingly: ChatGPT for structured analysis and iterative problem-solving, Claude for anything that ends up in front of a client or executive. The $40 combined monthly cost is recoverable in time savings within the first week of disciplined use.

Embedded AI tools — those living inside the software you already work in — carry a different kind of value. Notion AI at $10/month per user doesn't make you switch context. You're already in Notion writing a meeting summary; you highlight the rough notes and hit the AI button. Microsoft Copilot, a $30/user/month add-on to Microsoft 365 Business plans (on top of a base subscription such as Business Standard at $12.50/user/month), works directly inside Word, Excel, Outlook, and Teams. A financial analyst who spends 60% of their day in Excel gets disproportionate value from Copilot because the model can read their actual spreadsheet data and generate formulas, summaries, or charts without any copy-pasting. GitHub Copilot at $10/month is the same principle applied to code: the model sees your actual codebase, not just a description of it. Context-aware tools compound value because the AI has real information to work with.

| Tool | Best For | Context Window | Monthly Cost | Standout Feature |
| --- | --- | --- | --- | --- |
| ChatGPT Plus | Iterative reasoning, structured analysis, file uploads | 128K tokens | $20 | GPT-4o multimodal (text, image, voice, code) |
| Claude Pro | Long-form writing, nuanced drafts, document synthesis | 200K tokens | $20 | 200K context window; superior prose quality |
| Perplexity Pro | Current research, competitive intel, cited sources | Live web | $20 | Real-time web access with source citations |
| Notion AI | In-context drafting, meeting notes, knowledge bases | Per-document | $10/user | Zero context-switching; lives in your workspace |
| Microsoft Copilot | Excel, Word, Outlook, Teams workflows | Per-file | $30 add-on | Reads your actual files; works across M365 suite |
| Google Gemini | Docs, Sheets, Gmail, Meet workflows | 1M tokens (Gemini 1.5) | Included / $20 for Advanced | Deepest Google Workspace integration |
| GitHub Copilot | Code generation, debugging, documentation | Per-file | $10 | Codebase-aware autocomplete and chat |
Core professional AI tools as of late 2024. Costs shown are individual/monthly. Enterprise pricing differs significantly.

Designing Prompts That Fit Your Stack

Choosing the right tool is half the equation. The other half is learning to prompt with enough specificity that the tool actually performs the task you need, not a generic version of it. The professionals who get the most from their AI stack have internalized a simple prompt structure: Role + Context + Task + Format + Constraint. You tell the model who it's acting as, what background information it needs, what you actually want it to produce, what format the output should take, and any constraints that matter (length, tone, what to avoid). This structure works across tools. Whether you're in ChatGPT, Claude, or Notion AI, a well-structured prompt consistently outperforms a vague one by a measurable margin — both in quality and in the number of revision cycles needed before the output is usable.
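The five-part structure can be expressed as a small reusable template. This is a minimal sketch: the function name, field wording, and example values below are hypothetical, not part of any tool's API.

```python
# Minimal sketch of the Role + Context + Task + Format + Constraint structure.
# Each argument maps to one of the five parts described in the lesson.

def build_prompt(role, context, task, fmt, constraint):
    """Assemble the five parts into a single structured prompt."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {fmt}\n"
        f"Constraints: {constraint}"
    )

prompt = build_prompt(
    role="a senior operations manager at a logistics company",
    context="We missed our Q3 on-time delivery target (target 94%, actual 88%).",
    task="Draft a 200-word update for the executive team.",
    fmt="Short paragraphs, ending with three specific corrective actions.",
    constraint="Direct, professional tone; no spin.",
)
print(prompt)
```

Whether you paste the result into ChatGPT, Claude, or Notion AI, the discipline is the same: every field is filled in by you before the model sees a word, so the model never has to guess at role, audience, format, or tone.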

Analyst: Synthesizing Research Into an Executive Briefing

Prompt

You are a senior strategy analyst. I've pasted below 4 articles about the current state of the European EV battery market. Your task is to write a 300-word executive briefing for a CFO who needs to decide whether to greenlight a €2M supply chain audit. Structure it as: Situation (2 sentences), Key Findings (3 bullet points with numbers where present), Risk (1 sentence), Recommended Next Step (1 sentence). Write in direct, non-jargon language. Do not editorialize beyond the facts in the source material. [PASTE ARTICLES HERE]

AI Response

**Situation**
European EV battery demand is projected to grow 340% by 2030, but supply chain concentration risk has intensified sharply in 2024, with 68% of critical minerals still sourced from three countries.

**Key Findings**
• Lithium carbonate spot prices dropped 75% YoY through Q3 2024, improving short-term margin but masking structural sourcing fragility
• The EU Battery Regulation (effective February 2024) requires full supply chain due diligence documentation for batteries over 2kWh — non-compliance penalties reach 4% of EU turnover
• Three of the five major Tier 1 suppliers in your current vendor pool have unaudited sub-tier relationships in high-risk jurisdictions per the OECD conflict minerals framework

**Risk**
Without a formal audit, the company faces regulatory exposure and potential contract disqualification from OEM customers who are now requiring supplier compliance documentation.

**Recommended Next Step**
Commission a scoped supply chain audit focused on Tier 1 and Tier 2 suppliers before the Q1 2025 OEM contract renewal window.

Notice what made that prompt work. The role framing (senior strategy analyst) sets the model's register and expertise level before it writes a single word. The CFO audience definition means the model calibrates complexity and vocabulary without being told explicitly to simplify. The structural constraint — Situation, Key Findings, Risk, Next Step — eliminated the model's tendency to write discursive prose when a busy executive needs scannable structure. And the word count plus the 'do not editorialize' constraint prevented the output from bloating into a 900-word essay. That output, with your real source material, is 85% ready to send. You spend your time on the 15% — checking the numbers, adding your judgment — instead of writing from a blank page.

How Role Shapes Stack Priority

  • Managers: Prioritize embedded tools (Copilot, Notion AI) for meeting summaries, performance review drafts, and team communication. ChatGPT for thinking through difficult conversations or org design problems where you need a sounding board with no political stakes.
  • Consultants: Claude for client-facing deliverables where prose quality matters. Perplexity for live market research and competitor analysis. ChatGPT for structuring frameworks and slide logic. Budget for all three — the ROI at $60/month is measured in billable hours recovered.
  • Analysts: Copilot in Excel or Gemini in Sheets for formula generation and data pattern narration. ChatGPT Code Interpreter (built into ChatGPT Plus) for exploratory data analysis without writing Python from scratch. Perplexity for supplementing internal data with external market context.
  • Marketers: ChatGPT for campaign ideation, copy variants, and brief writing. Midjourney or DALL-E 3 (built into ChatGPT) for concept visuals and mood boards before briefing a designer. Perplexity for trend research and audience insight. Notion AI for content calendars and campaign documentation.
  • HR and People Leaders: Claude for sensitive communications — redundancy letters, performance improvement plans, policy documents — where tone and precision matter most. ChatGPT for job description drafts, interview question banks, and onboarding materials. Copilot for synthesizing engagement survey data in Excel.

The role differences above aren't arbitrary — they follow a logic. Tools with strong prose quality (Claude) go to tasks that end up in front of stakeholders. Tools with live data access (Perplexity) go to tasks that require current facts. Tools with deep integration (Copilot, Gemini) go to tasks that are already happening inside specific software. When you build your stack with this logic, you stop asking 'which AI should I use' as a general question and start knowing the answer before you've even opened your laptop. That shift from deliberation to habit is where the real productivity compound starts.
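That routing logic can be reduced, as a rough sketch, to a few yes/no questions about the task at hand. The question set and the tool returned in each branch are one possible reading of the guidance in this lesson, not a definitive rule, and the question order encodes an assumed priority (currency of facts first, then integration, then prose quality).

```python
# Hypothetical routing sketch: three yes/no questions about a task decide
# which tool from the lesson's stack to reach for first.

def route_task(needs_current_facts, inside_office_suite, stakeholder_facing):
    """Return the tool this lesson's logic would suggest first."""
    if needs_current_facts:
        return "Perplexity"      # live web access with cited sources
    if inside_office_suite:
        return "Copilot/Gemini"  # deep integration where the work already lives
    if stakeholder_facing:
        return "Claude"          # prose quality for client-facing output
    return "ChatGPT"             # general drafting and iterative thinking

print(route_task(True, False, False))   # research task -> Perplexity
print(route_task(False, False, True))   # client deliverable -> Claude
```

The value of writing the logic down, even informally, is that the decision stops being a per-task deliberation: the answer is known before the laptop is open, which is the habit shift the paragraph above describes.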

Manager: Preparing for a Difficult Performance Conversation

Prompt

You are an experienced HR business partner with 15 years of coaching managers through performance conversations. I need to prepare for a conversation with a direct report, Jordan, who has missed three project deadlines in the past six weeks. Jordan is technically strong but seems disengaged. I want the conversation to be direct and constructive — not a dressing-down, but not so gentle that the seriousness doesn't land. Give me: (1) an opening statement I can use verbatim, (2) three open questions to understand what's causing the pattern, (3) two phrases to use if Jordan becomes defensive, and (4) a closing statement that sets a clear expectation with a specific follow-up date. Keep the tone warm but unambiguous.

AI Response

**(1) Opening Statement**
"Jordan, I want to talk about the last six weeks — specifically the three deadline misses on the Henderson brief, the Q3 report, and the vendor review. I'm not here to put you on the spot. I want to understand what's going on and figure out what we need to change."

**(2) Open Questions**
• "When you think about the last six weeks, what's felt different compared to earlier in the year?"
• "Is there anything about how we're structuring these projects that's making it harder to hit the timelines?"
• "What would need to be true for you to feel like you're back in a groove by end of quarter?"

**(3) If Jordan Becomes Defensive**
• "I hear you — and I'm not trying to assign blame. I want to solve this together, but I do need us to be honest about what's happening."
• "The pattern is what I'm focused on, not any single miss. Three in six weeks tells me something has shifted, and I'd rather figure out what than let it continue."

**(4) Closing Statement**
"Here's where I land: I need to see the next two deliverables — the October 18th client summary and the October 25th team review — hit on time. I'll check in with you on October 11th, one week out, so we can remove any blockers early. Does that feel clear and fair?"

That second example demonstrates something important about where AI adds value in management work. The model isn't replacing your judgment about Jordan — you still have to read the room, adjust your tone in real time, and make the call about what comes next. What the model removed is the 25 minutes you'd otherwise spend staring at a blank notes page trying to pre-write an opening that doesn't come out as either an HR cliché or an accusation. You arrive at the conversation more prepared, with language you've already stress-tested in your head. The AI did the scaffolding; you do the human part.

Building Repeatable Workflows, Not One-Off Prompts

The professionals extracting the most value from their AI stacks aren't writing new prompts from scratch each time. They've built a personal prompt library — a small collection of 10 to 15 prompts that map to their most frequent, high-value tasks. In ChatGPT, you can save custom instructions that set your role, context, and preferences for every conversation without retyping them. In Claude, you can create Projects that hold persistent context about your work, your clients, and your preferred output formats. Notion AI users often keep a 'prompts' page in their workspace that they copy from rather than reinventing each time. This library approach transforms AI from a tool you have to think about using into a tool that's already loaded and ready when the task arrives.

A prompt library also solves the consistency problem that emerges when teams start using AI. If five analysts are each prompting ChatGPT differently to produce market summaries, you get five different formats landing in the same report. Shared prompt templates — even just a team Notion page with five standard prompts for the most common deliverable types — create output consistency without requiring everyone to become a prompt engineering expert. This is a lever that managers and team leads can pull immediately: spend 90 minutes building five shared prompts for your team's most repeated tasks, share them in your team's knowledge base, and watch the quality floor rise across the board within the first week.

Build Your First Prompt Library

Goal: Produce a working prompt library of five tested, refined prompts mapped to your real recurring tasks, ready to use and share with your team immediately.

1. Open a blank document in Notion, Google Docs, or wherever your team stores shared resources. Title it 'AI Prompt Library — [Your Name or Team Name].'
2. List your five most frequent, time-consuming professional writing or analysis tasks. Examples: weekly status update, client briefing, meeting summary, job description, data narrative for a report.
3. For each task, write a prompt using the Role + Context + Task + Format + Constraint structure. Leave placeholder brackets like [CLIENT NAME] or [PASTE DATA HERE] for the variable parts.
4. Test each prompt in ChatGPT or Claude using a real recent example from your work. Run the prompt, review the output, and note what worked and what needs adjusting.
5. Revise each prompt once based on what you observed. Pay attention to format instructions — if the output was too long, add a word count constraint. If the tone was wrong, add an explicit tone instruction.
6. Add a 'When to use' note to each prompt: one sentence describing the exact work situation that triggers this prompt.
7. Share the document with one colleague and ask them to test two of the prompts with their own real tasks. Collect their feedback on what the output got right and what it missed.
8. Update the prompts based on their feedback and note the version date. Commit to reviewing the library once a month as your work and the tools evolve.
9. Identify which prompt saves the most time per use and make that one your default starting point when introducing AI to someone new on your team.
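If your team keeps prompts in a notes app, a plain document works fine. For teams that prefer something scriptable, the same library idea can be sketched in a few lines of Python: each entry follows the Role + Context + Task + Format + Constraint structure, with bracketed placeholders filled at use time. All template text and task names below are illustrative, not taken from any real tool.

```python
# Minimal prompt-library sketch. Each entry follows the
# Role + Context + Task + Format + Constraint structure from this lesson.
# Template text and task names are illustrative examples only.

PROMPT_LIBRARY = {
    "weekly_status_update": {
        "when_to_use": "Every Friday before sending the VP update.",
        "template": (
            "You are a senior manager writing a weekly status update. "  # Role
            "Context: my team is [TEAM], reporting to [AUDIENCE]. "      # Context
            "Task: turn the raw notes below into a status update. "     # Task
            "Format: 3 sections - Progress, Blockers, Next Week. "      # Format
            "Constraint: under 150 words, no corporate fluff.\n\n"      # Constraint
            "Raw notes:\n[NOTES]"
        ),
    },
}

def fill_prompt(task: str, **values: str) -> str:
    """Look up a template and replace [PLACEHOLDER] brackets with real values."""
    prompt = PROMPT_LIBRARY[task]["template"]
    for key, value in values.items():
        prompt = prompt.replace(f"[{key}]", value)
    return prompt

prompt = fill_prompt(
    "weekly_status_update",
    TEAM="growth marketing",
    AUDIENCE="the VP of Marketing",
    NOTES="Campaign A launched; CTR 3.2% vs 2.5% target.",
)
print(prompt)
```

The point of the sketch is the shape, not the code: one tested template per recurring task, with the variable parts clearly marked, so every future run starts from a proven foundation.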

Don't Let Your Stack Drift Into Overlap

Every tool you add should cover a distinct job-to-be-done. If you're using both ChatGPT and Claude for the same drafting tasks, you're not building a stack — you're building a habit of indecision. Audit your stack every 90 days: for each tool, name the specific task category it owns. If two tools share the same category and you can't articulate why, cut one. Stack bloat is a real productivity drain — more tabs open, more context-switching, more subscription costs for marginal gains.

It's Tuesday morning. You have a client deck due by noon, three unread Slack threads asking for status updates, and a data export from last week's campaign sitting untouched in your downloads folder. A colleague asks how you're using AI tools. You say "ChatGPT sometimes" — and immediately feel like you're leaving something on the table. That feeling is accurate. The difference between someone who uses AI occasionally and someone who has a personal AI stack isn't access to better tools. It's having made deliberate decisions about which tool handles which job, and building that into how the week actually runs.

A personal AI stack is not a list of apps you've downloaded. It's a small, intentional set of AI tools assigned to specific workflow roles — research, drafting, analysis, code, or creative — where each tool earns its place by doing one thing faster or better than you could alone. Most professionals who build a working stack use three to five tools maximum. More than that and you spend more time choosing tools than using them. The goal is coverage without overlap: no two tools doing the same job, and no job left unassigned. Think of it the way a consultant thinks about a project team — everyone has a clear lane.

The three foundational roles in most professional stacks are: a primary language model for drafting and reasoning (ChatGPT, Claude, or Gemini), a research and synthesis tool (Perplexity AI for live-web answers, or ChatGPT with browsing), and a productivity integration that works inside tools you already use (Notion AI, Microsoft Copilot, or GitHub Copilot if you touch code). These three cover around 80% of daily knowledge-work tasks. From there, specialists like Midjourney for visual work or Claude for long-document analysis get added only if your role genuinely demands them. Stacks built around actual job tasks outperform stacks built around excitement about new tools.

Choosing between the main language models comes down to task fit, not brand loyalty. Claude 3 Opus handles long, complex documents — it processes up to 200,000 tokens, meaning a 150-page report fits in one prompt. GPT-4o is faster, cheaper per token, and integrates with the widest plugin and API ecosystem. Gemini 1.5 Pro connects natively to Google Workspace, making it the lowest-friction choice for teams already on Docs and Sheets. None of these is universally superior. A marketing manager drafting campaign briefs has different needs than a financial analyst summarizing earnings calls. Your stack reflects your actual work, not the current benchmark leaderboard.
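The 200,000-token figure above is concrete enough to check against your own documents. A minimal sketch, using the common rule of thumb of roughly four characters per token — that ratio is an assumption, not from this lesson, and real tokenizers vary — shows why a 150-page report comfortably fits:

```python
# Rough check of whether a document fits in a model's context window.
# Assumes ~4 characters per token, a common rule of thumb; real tokenizer
# counts vary, so treat this as an estimate, not a guarantee.

CONTEXT_LIMITS = {              # token limit quoted in this lesson
    "claude-3-opus": 200_000,
}

def estimated_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits_in_context(text: str, model: str, reserve_for_reply: int = 4_000) -> bool:
    """True if the text, plus room for the model's reply, fits under the limit."""
    return estimated_tokens(text) + reserve_for_reply <= CONTEXT_LIMITS[model]

# A 150-page report at roughly 3,000 characters per page:
report = "x" * (150 * 3_000)
print(fits_in_context(report, "claude-3-opus"))  # ~112,500 tokens: fits
```

Running the same check against a smaller context window is how you decide when a long-document task needs the large-context model rather than the fast one.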

Build your stack around verbs, not tools

List the five things you do most often at work — draft, analyze, research, present, code, summarize. Assign one AI tool to each verb. If two verbs point to the same tool, that's fine. If a verb has no tool, that's a gap worth filling. This verb-first approach stops you from accumulating tools you don't actually use.
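The verb-first audit above is simple enough to run on paper, but writing it down as a mapping makes the two failure modes explicit: verbs with no tool (gaps) and more tools than you can justify (bloat). A minimal sketch, with illustrative assignments drawn from this lesson's comparison table:

```python
# Verb-first stack audit: map each recurring work verb to one tool.
# Tool names come from this lesson's comparison table; the specific
# assignments below are illustrative, not a recommendation.

stack = {
    "draft": "ChatGPT",
    "research": "Perplexity",
    "analyze": "ChatGPT",   # two verbs sharing one tool is fine
    "summarize": "Claude",
    "present": None,        # a gap worth filling
}

gaps = [verb for verb, tool in stack.items() if tool is None]
tools_in_use = {tool for tool in stack.values() if tool is not None}

print("Gaps:", gaps)                          # verbs with no assigned tool
print("Tools in stack:", len(tools_in_use))   # aim for 3 to 5 total
```

If the tool count climbs past five, or a verb stays unassigned for months, the audit has told you exactly what to cut or add.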

Matching Tools to Professional Output

The real test of a stack is whether it changes what you produce on a specific Tuesday — not whether it impresses someone at a conference. A consultant running a competitive analysis used to spend two hours gathering sources, an hour reading, and another hour writing the summary. With Perplexity pulling sourced research in minutes and Claude synthesizing a structured brief from pasted content, that same output takes under 45 minutes. The quality often improves because more time goes into judgment and framing rather than mechanical gathering. That's the calculus worth making explicit when you design your own stack.

Competitive positioning brief — Claude or GPT-4o

Prompt

I'm a strategy consultant preparing a competitive brief for a mid-sized SaaS company entering the project management software market. Their differentiator is AI-native task automation. Summarize the positioning of Asana, Monday.com, and ClickUp across these dimensions: target customer, core value proposition, AI feature maturity, and pricing model. Then identify two positioning gaps our client could own.

AI Response

**Competitive Positioning Summary — Project Management Software**

**Asana** targets mid-to-large enterprises emphasizing workflow governance. Its AI features (launched 2023) focus on goal tracking and status summaries. Pricing starts at $10.99/user/month (Premium). Core pitch: organizational clarity at scale.

**Monday.com** targets ops-heavy teams across company sizes. AI features include formula generation and meeting summaries. Pricing from $9/user/month. Core pitch: visual flexibility with low technical overhead.

**ClickUp** targets SMBs and startups wanting one tool for everything. Most aggressive AI roadmap of the three, with AI writing and task generation. Free tier available; paid from $7/user/month. Core pitch: tool consolidation.

**Positioning Gaps for Your Client:**

1. *Proactive task automation* — all three tools summarize and assist; none autonomously re-prioritizes work based on project signals. An AI-native entrant that acts, not just reports, owns unclaimed ground.
2. *Mid-market professional services* — Asana skews enterprise, ClickUp skews startup. Consultancies and agencies (50–500 people) with billable-hour complexity are underserved by current AI features.

| Tool | Best Role in Stack | Standout Strength | Pricing (2024) | Watch Out For |
| --- | --- | --- | --- | --- |
| ChatGPT (GPT-4o) | Primary drafting & reasoning | Speed, plugin ecosystem, image input | $20/month (Plus) | Can confidently state outdated facts |
| Claude 3 Opus | Long-doc analysis, nuanced writing | 200K token context, careful reasoning | $20/month (Pro) | Slower than GPT-4o on simple tasks |
| Gemini 1.5 Pro | Google Workspace integration | Native Docs/Sheets/Gmail access | Included in Google One AI Premium ($19.99/mo) | Still maturing outside Google ecosystem |
| Perplexity AI | Live research with citations | Real-time web + source links | Free tier; $20/month (Pro) | Not ideal for long-form drafting |
| Notion AI | In-context writing & summarization | Works inside your existing Notion pages | $10/member/month add-on | Only useful if you already use Notion |
| GitHub Copilot | Code generation & review | IDE integration, code explanation | $10/month (Individual) | Narrow use case — code-adjacent roles only |

*Core tools for a professional AI stack — roles, strengths, and real 2024 pricing*

Putting the Stack Into Practice

A stack only works if it's frictionless enough to use under deadline pressure. The professionals who get the most from their AI tools aren't necessarily the most technically skilled — they're the ones who've made the tools accessible at the moment of need. That means browser bookmarks for your two or three primary tools, saved prompt templates for your most repeated tasks (weekly reports, meeting summaries, stakeholder emails), and a habit of defaulting to AI as a first draft engine rather than a blank-page companion. The blank page is where most people lose time. AI eliminates it.

Saved prompts are underused and disproportionately valuable. Most professionals repeat the same ten to fifteen task types every week: summarize this meeting, draft this update, analyze this data, respond to this complaint. Writing a strong prompt once and saving it in a notes app or Notion page means every future version of that task starts from a proven foundation. ChatGPT's custom instructions feature and Claude's Projects feature both let you store context about your role, your company, and your preferences so you don't re-explain yourself every session. Set these up once and every subsequent conversation starts smarter.
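The 'set it up once' idea behind custom instructions and Projects can be sketched without any vendor API: store your role, audience, and preferences in one place, then prepend them to every task prompt so no session starts cold. The field names and context text below are illustrative assumptions, not any tool's actual schema.

```python
# Sketch of persistent context, the idea behind ChatGPT custom
# instructions and Claude Projects. Store role/audience/preferences
# once; prepend them to every prompt. Field names are illustrative.

SAVED_CONTEXT = {
    "role": "Senior marketing manager at a B2B SaaS company",
    "audience": "VP of Marketing; prefers direct, numbers-first updates",
    "format_preference": "Short bullets, no corporate fluff",
}

def with_context(task_prompt: str) -> str:
    """Prepend the saved context so the task prompt never starts from zero."""
    header = "\n".join(f"{key}: {value}" for key, value in SAVED_CONTEXT.items())
    return f"{header}\n\n{task_prompt}"

full_prompt = with_context("Summarize this week's campaign results for my VP.")
print(full_prompt)
```

Whether the context lives in a tool's settings page or a text file you paste from, the mechanism is the same: write it once, and every subsequent conversation starts already knowing who you are and how you want output shaped.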

The last design decision in your stack is knowing what not to automate. AI tools hallucinate: every current model, GPT-4o and Claude included, can produce confident-sounding wrong answers, especially on factual recall and citations. Any output going to a client, a board, or a regulator needs human verification. Your stack should have an explicit rule: AI drafts, you verify and own. This isn't a limitation to work around; it's the professional standard that separates people who use AI well from people who get burned by it. Speed is the benefit. Judgment is still yours.

Weekly status update — any primary language model

Prompt

You are helping a senior marketing manager write a weekly status update for their VP. Tone: direct, confident, no corporate fluff. Format: 3 sections — Progress This Week (3 bullets), Blockers (1-2 bullets), Focus Next Week (2 bullets). Here's the raw input: Campaign A launched Monday, CTR is 3.2% against a 2.5% target. Campaign B copy is delayed because legal hasn't reviewed it yet — originally due Wednesday. Next week I need to finish Q3 budget reforecast and kick off the agency briefing for the product launch.

AI Response

**Marketing Update — Week of [Date]**

**Progress This Week**
- Campaign A launched on schedule; CTR hitting 3.2% vs. 2.5% target — 28% above benchmark
- Finalized creative assets for Campaign C; handed to design for production
- Completed channel performance review; findings shared with media team

**Blockers**
- Campaign B delayed: legal review outstanding since Wednesday; follow-up sent, awaiting response — risk to next week's launch window

**Focus Next Week**
- Complete Q3 budget reforecast; submit to Finance by Thursday
- Initiate agency briefing process for product launch campaign

How the Stack Differs by Role

  • Managers: Primary model (ChatGPT or Claude) for comms and decision documents + Notion AI or Copilot for meeting notes and action tracking
  • Analysts: GPT-4o with Code Interpreter for data exploration + Perplexity for market context + Claude for long-report synthesis
  • Consultants: Claude for document-heavy research + Perplexity for live competitive data + ChatGPT for client-facing drafts and slide structures
  • Marketers: ChatGPT for copy and campaign briefs + Midjourney or DALL-E 3 for visual concepting + Perplexity for trend research
  • Product managers: GitHub Copilot for understanding technical specs + ChatGPT for PRD drafting + Notion AI for roadmap documentation

Key Takeaways

  1. A personal AI stack is three to five tools with defined roles — not an app collection
  2. Assign tools to verbs (draft, research, analyze) rather than choosing tools by popularity
  3. Claude handles long documents best; GPT-4o wins on speed and ecosystem; Gemini fits Google-native teams
  4. Saved prompts for your ten most repeated tasks multiply your efficiency faster than finding new tools
  5. Custom instructions and Projects features in ChatGPT and Claude store your context permanently — set them up once
  6. AI drafts; you verify — any output going to clients or leadership requires human review before it leaves your hands
  7. The stack that works is the one you actually use under pressure, not the one that looks impressive on paper

Build Your Personal AI Stack Document

Goal: Produce a personal AI stack document you actively maintain — a living reference that maps your real work tasks to specific tools, includes at least one tested saved prompt, and defines clear boundaries for human oversight.

1. Open a blank document in Notion, Google Docs, or any notes app — title it 'My AI Stack'.
2. List the five tasks you perform most frequently at work. Be specific: not 'writing' but 'drafting client update emails'.
3. For each task, assign one tool from this lesson's comparison table. Write one sentence explaining why that tool fits that task.
4. Choose your primary language model (ChatGPT, Claude, or Gemini). Open it now and configure custom instructions or a Project with your role, company context, and preferred output style.
5. Write one saved prompt for your single most repeated task. Test it with real content from your current work and refine until the output is 80% usable without editing.
6. Add a 'Do Not Automate' section to your stack document. List two to three output types in your role that must always have human verification before sharing externally.
7. Set a browser bookmark folder called 'AI Stack' containing your two or three primary tools — make them one click away.
8. Share your stack document with one colleague and ask them what task they'd add. Update accordingly.
9. Revisit the document in 30 days — note which tools you actually used and remove any that didn't earn their place.

Knowledge Check

A financial analyst needs to summarize a 120-page earnings report in a single prompt. Which tool is best suited for this task and why?

You draft a competitive analysis using ChatGPT and plan to send it directly to a client. What does this lesson recommend you do first?

A marketing manager already uses Google Docs, Gmail, and Sheets daily. Which primary language model creates the least friction for their workflow?

What is the primary advantage of saving prompt templates for your ten most repeated work tasks?

A consultant builds an AI stack with ChatGPT, Claude, Perplexity, Notion AI, Midjourney, Gemini, and GitHub Copilot. What problem does this stack most likely create?
