Getting the Most from AI Assistants · Lesson 3 of 10


~16 min read

Using AI for Creative Work: Brainstorming, Writing, and Ideation

Creative work is where AI assistants earn their keep fastest. ChatGPT, Claude, and Gemini aren't just search engines with better grammar — they're thinking partners that can generate 20 angles on a problem in 30 seconds, draft a campaign brief while you drink your coffee, or break a two-week creative block in a single conversation. The catch: vague inputs produce vague outputs. Professionals who get the most from AI creative work treat these tools like a brilliant but literal-minded collaborator — one that needs clear context, constraints, and a defined role before the good stuff starts flowing.

7 Things You Need to Know Before You Start

  1. AI generates plausible output, not necessarily correct output — always verify facts, statistics, and claims it produces during creative sessions.
  2. The model has no memory between sessions by default — re-establishing context at the start of each chat saves you from generic outputs.
  3. Specificity multiplies quality: giving Claude a target audience, tone, word count, and format produces 3-4x more usable output than open-ended requests.
  4. You can iterate in the same conversation — follow-up prompts like 'make option 3 more provocative' build on what the model already knows about your project.
  5. ChatGPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro all handle long-form creative tasks well; each has a distinct default voice you'll learn to recognize.
  6. AI is strongest at divergence (generating many ideas) and weakest at taste — human judgment is still the filter that makes output publishable.
  7. Prompt length matters less than prompt clarity — a 20-word prompt with sharp constraints beats a 200-word prompt that wanders.

How AI Handles Creative Requests

When you send a creative prompt, the model predicts the most statistically useful next tokens based on your input and its training data. This means it defaults toward the center of what it has seen — competent, average, safe. That's why your first draft often feels generic. The fix isn't a better model; it's a more constrained prompt. Tell the model what genre, what audience, what emotional register, and what to avoid. Those constraints push it away from the statistical average and toward something distinctive.

This also explains why AI creative output improves dramatically mid-conversation. As you react, redirect, and refine, the model accumulates context about your preferences and project. A brainstorming session that starts with 'give me tagline ideas for a fintech app' and ends 20 exchanges later has far sharper outputs than the opening round — because every message you send trains the model on what 'good' looks like for your specific task. Treat the first response as a rough draft, not a deliverable.

  • Default outputs skew toward the average — constraints are your primary lever for quality.
  • Iterating in one conversation compounds context and improves results over time.
  • The model reflects your prompt's energy: flat input produces flat output.
  • Asking for multiple variants (e.g., 'give me 5 versions') surfaces range and helps you identify what direction you actually want.
  • Negative constraints work well: 'avoid corporate jargon' or 'don't use rhymes' sharpens outputs faster than describing what you do want.

The Role Assignment Trick

Start any creative session by assigning a role: 'You are a senior copywriter who specializes in B2B SaaS with a dry, witty tone.' This single sentence shifts the model's default register significantly. Claude and ChatGPT-4o both respond strongly to role framing — you'll notice the difference in the first response.
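If you ever move from the chat UI to scripting these tools, the role assignment typically maps to a dedicated system message that precedes the user's first request. The sketch below is illustrative only — the helper name and prompt text are made up for this example, and no real API is called:

```python
# Role assignment expressed as a chat message list: the system message
# carries the role, the user message carries the actual request.
# build_role_session is a hypothetical helper, not a library function.

def build_role_session(role_description: str, first_request: str) -> list[dict]:
    """Return a chat message list that opens with a role assignment."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": first_request},
    ]

messages = build_role_session(
    "You are a senior copywriter who specializes in B2B SaaS "
    "with a dry, witty tone.",
    "Draft three subject lines for our onboarding email.",
)
```

The point is structural: the role lives in its own slot ahead of every request, which is why it shifts the default register for the whole session rather than just one reply.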

Creative Task Reference: What AI Does Well vs. Poorly

| Creative Task | AI Performance | Best Tool | Human Role |
| --- | --- | --- | --- |
| Brainstorming / idea generation | Excellent | ChatGPT-4o, Claude 3.5 | Filter and select |
| First-draft blog posts | Good | Claude 3.5 Sonnet | Edit voice and facts |
| Taglines and headlines | Good with iteration | ChatGPT-4o | Judge and refine |
| Long-form narrative / storytelling | Moderate | Claude 3.5 Sonnet | Heavy structural editing |
| Creative briefs | Excellent | Gemini 1.5 Pro, Claude | Add strategic context |
| Poetry and lyric writing | Moderate | Claude 3.5 Sonnet | Taste and authenticity |
| Brand voice development | Good as starting point | ChatGPT-4o | Validate against brand |
| Social media copy (short-form) | Excellent | ChatGPT-4o, Gemini | Tone check, platform fit |
| Original research or data | Poor — do not rely on it | None | Full human ownership |
AI creative task performance guide — use as a quick reference when scoping projects

Brainstorming: Getting Volume and Variety

The highest-ROI use of AI in creative work is pure brainstorming. A skilled prompt can generate 20 distinct campaign concepts, 15 newsletter subject lines, or 10 product positioning angles in under a minute. The key discipline is resisting the urge to stop at the first good idea. Ask for more. Ask for weirder. Ask the model to approach the same brief from the perspective of a skeptic, a competitor, or a customer who almost didn't buy. Each reframe unlocks a different slice of possibility space.

Structured brainstorming prompts consistently outperform open-ended ones. Instead of 'give me ideas for a product launch,' try 'give me 10 product launch concepts for a $200 smart water bottle targeting endurance athletes aged 28-45, ranging from conservative to provocative.' The range instruction is particularly powerful — it forces the model to move beyond its default safe zone and produce contrast, which makes it far easier for you to identify where your instincts land.

  1. Define the deliverable type: concepts, headlines, angles, formats, names — be specific about what you're brainstorming.
  2. Set a quantity: asking for 10-15 items forces range and prevents the model from stopping at the first plausible answer.
  3. Add a spectrum instruction: 'from safe to provocative' or 'from rational to emotional' produces contrast that helps you navigate.
  4. Include audience specifics: demographics, psychographics, pain points, or a job-to-be-done statement.
  5. Use perspective shifts: ask the model to ideate as a skeptic, a competitor, or a specific archetype customer.
  6. Follow up on the best 2-3: 'expand on concept 4 with three execution variants' deepens the most promising directions.
  7. Ask for what's missing: 'what angles haven't we covered yet?' often surfaces the most original ideas.
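The first four steps above can be captured in a small reusable template. This is a sketch under stated assumptions: the function name and wording are hypothetical, and the output is just a prompt string you would paste into a chat:

```python
# Assemble a structured brainstorm prompt from the checklist:
# deliverable type, quantity, spectrum instruction, and audience.

def structured_brainstorm_prompt(
    deliverable: str,
    quantity: int,
    spectrum: tuple[str, str],
    audience: str,
) -> str:
    """Build a brainstorm prompt with quantity, range, and audience baked in."""
    low, high = spectrum
    return (
        f"Give me {quantity} {deliverable} for {audience}, "
        f"ranging from {low} to {high}. "
        "Number each item and add a one-sentence description."
    )

prompt = structured_brainstorm_prompt(
    deliverable="product launch concepts",
    quantity=10,
    spectrum=("conservative", "provocative"),
    audience="a $200 smart water bottle targeting endurance athletes aged 28-45",
)
```

Templating the structure this way makes it hard to slip back into open-ended requests: every prompt you generate carries a quantity, a spectrum, and an audience by construction.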

Brainstorming Prompt Structures

| Prompt Pattern | Example | Best For |
| --- | --- | --- |
| Quantity + Range | Give me 12 taglines from professional to irreverent | Headlines, names, concepts |
| Perspective Shift | Brainstorm from the POV of a skeptical CFO | Objection handling, positioning |
| Constraint Forcing | Generate 8 ideas using only social proof as the hook | Copy, campaigns, content |
| Reverse Brainstorm | List 10 ways this product launch could fail, then invert each | Risk identification, differentiation |
| Category Mapping | Give me one idea per channel: email, paid, organic, event, partnership | Campaign planning, GTM strategy |
| Analogical Thinking | How would Apple, Patagonia, and a challenger brand each position this? | Brand strategy, tone exploration |
Six reusable brainstorming prompt patterns with example applications

Claude vs. ChatGPT for Brainstorming

ChatGPT-4o tends to produce more varied, punchy brainstorm lists with sharper marketing instincts. Claude 3.5 Sonnet produces more nuanced, thoughtful options with better conceptual depth. For rapid-fire ideation in marketing, ChatGPT-4o edges ahead. For strategy-level brainstorming where you need ideas that hold up under scrutiny, Claude is the stronger pick. Run the same prompt in both when stakes are high.

Structured Brainstorm: Campaign Concepts

Prompt

You are a senior creative strategist. I'm launching a project management tool for architecture firms — teams of 5-20 people who currently track projects in spreadsheets and email. Give me 10 campaign concept directions, ranging from practical/rational to emotional/aspirational. For each, write a one-sentence concept description and a rough headline.

AI Response

1. **The Hidden Cost Angle** (Rational) — Target the billable hours lost to admin chaos. Headline: 'Your best architects are doing spreadsheet work.'
2. **The Deadline Miss** (Rational-Emotional) — Lead with the moment a project slips. Headline: 'The 2am email nobody wants to send.'
3. **The Craft Reclaimed** (Aspirational) — Position the tool as giving architects back their creative time. Headline: 'Less coordination. More architecture.'
4. **The Firm That Scaled** (Social Proof) — Feature a firm that grew from 8 to 30 people without chaos. Headline: 'How Meridian Studio took on twice the projects.'
5. **The Competitor Dig** (Provocative) — Directly call out spreadsheet culture as a bottleneck. Headline: 'Excel wasn't built for this. Neither were you.'

[...and 5 more concepts with headlines]

Writing with AI: Drafting, Editing, and Voice

AI writing assistants work best when you treat them as a first-draft engine, not a finished-copy machine. Claude 3.5 Sonnet is particularly strong at long-form drafting — blog posts, executive summaries, creative briefs — and maintains coherence across 1,500-2,000 words better than most alternatives. The practical workflow: prompt for a complete draft, read it end-to-end for structure and argument, then rewrite the sections that don't sound like you. This is almost always faster than writing from scratch, even when you rewrite 40% of the output.

Voice is the hardest thing to transfer to AI. Without examples, models default to a competent-but-bland register that feels like a corporate blog from 2019. The solution is few-shot prompting: paste 2-3 samples of writing you want to match — your own previous work, a publication's style, a specific author's tone — and ask the model to draft in that style. ChatGPT-4o and Claude both analyze stylistic patterns from examples with impressive accuracy. Three samples of 200-300 words each is usually enough to shift the output meaningfully.
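Few-shot voice prompting has a predictable shape: samples first, instruction second. A minimal sketch of that assembly, assuming a hypothetical helper name and wording of my own invention:

```python
# Wrap writing samples in a voice-matching instruction.
# few_shot_voice_prompt is an illustrative helper, not a library call.

def few_shot_voice_prompt(samples: list[str], task: str) -> str:
    """Build a few-shot prompt: labeled samples, then the task to complete."""
    blocks = "\n\n".join(
        f"Sample {i + 1}:\n{s.strip()}" for i, s in enumerate(samples)
    )
    return (
        "Here are samples of the voice I want you to match:\n\n"
        f"{blocks}\n\n"
        "Study the sentence rhythm, word choice, and tone, then "
        f"complete this task in that exact voice: {task}"
    )

prompt = few_shot_voice_prompt(
    ["First sample paragraph goes here.", "Second sample paragraph goes here."],
    "rewrite the introduction I paste next",
)
```

Whether you paste this by hand or script it, the ordering matters: showing the samples before stating the task lets the model infer the style before it starts generating.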

The AI Voice Drift Problem

In long documents, AI models gradually drift back toward their default voice — especially past the 800-word mark. If you're generating a 1,500-word article, check the second half separately. A fast fix: paste the second half back in and prompt 'rewrite this section to match the tone of the opening, which I'll paste below.' Don't assume tone consistency throughout long outputs without checking.

Quick Practice: Run a Structured Brainstorm

Goal: Experience the compounding effect of iterative prompting in one session, and identify which brainstorm pattern works best for your specific creative context.

1. Pick a real project you're currently working on — a presentation, campaign, product feature, or piece of content.
2. Open ChatGPT-4o or Claude 3.5 Sonnet in a new conversation.
3. Assign a role at the start: write 'You are a [relevant expert role] with deep experience in [your industry].'
4. Write a brainstorm prompt using the Quantity + Range pattern from the table above — ask for at least 10 items spanning a defined spectrum.
5. Review the output and mark your top 3 items.
6. Send a follow-up prompt: 'Expand on items [X, Y, Z] — give me 3 execution variants for each, keeping the same tone and audience in mind.'
7. Save the output in a doc and note which prompt pattern produced the most usable ideas.

Quick Reference: AI Creative Work Cheat Sheet

  • Assign a role before any creative task — it shifts the model's default register immediately.
  • Ask for 10-15 items minimum in brainstorms — volume forces range and surfaces contrast.
  • Use the Range instruction ('from safe to provocative') to generate navigable variety.
  • Paste 2-3 writing samples when voice matching — don't describe the style, show it.
  • Iterate in one conversation — context compounds and outputs improve with each exchange.
  • Treat AI output as a first draft — your edit is what makes it publishable.
  • Check tone consistency in long documents past the 800-word mark.
  • Use negative constraints ('avoid jargon,' 'no bullet points') to sharpen output fast.
  • Run high-stakes brainstorms in both ChatGPT-4o and Claude — different models surface different ideas.
  • Never use AI-generated statistics or data claims without independent verification.

Key Takeaways from This Section

  1. AI defaults to statistical averages — constraints are what push output from generic to useful.
  2. Brainstorming is the highest-ROI creative use case: volume, speed, and range are AI's strengths here.
  3. Structured prompt patterns (Quantity + Range, Perspective Shift, Reverse Brainstorm) reliably outperform open-ended requests.
  4. ChatGPT-4o and Claude 3.5 Sonnet have distinct strengths — matching model to task improves results.
  5. Voice transfer requires examples, not descriptions — few-shot prompting is the professional's method.
  6. Every conversation is cumulative: the best creative outputs come from iterative sessions, not single prompts.

Writing With AI: Structure, Voice, and Iteration

Brainstorming gets you raw material. Writing is where that material becomes something real. AI assistants handle the full writing stack — first drafts, rewrites, tone shifts, structural reorganization — but only when you direct them precisely. The professionals who get the most out of ChatGPT or Claude for writing aren't the ones who ask for a finished product upfront. They're the ones who treat the AI as a writing partner: drafting in passes, adjusting voice mid-session, and using targeted prompts to fix specific weaknesses rather than starting over.

Controlling Tone and Voice

Every piece of writing has a voice — and AI defaults to a generic professional tone unless you specify otherwise. That default is competent but forgettable. To get output that sounds like your brand, your team, or you personally, give the model concrete reference points. Paste in a sample of your past writing and say 'match this voice.' Describe the audience in detail: 'skeptical CFOs who hate jargon' produces sharper copy than 'business readers.' Adjectives help too — 'terse, confident, slightly provocative' gives Claude enough signal to diverge meaningfully from its defaults.

  • Paste 150–200 words of your own writing as a voice reference — more reliable than describing style abstractly
  • Specify the emotional register: 'reassuring,' 'urgent,' 'dry and precise' all produce distinct outputs
  • Name the audience explicitly: job title, seniority level, and what they already know
  • Use contrast: 'like The Economist, not like a press release' gives the model a quick calibration anchor
  • Ask for multiple tone variants of the same paragraph — then pick or blend the best elements
  • If output feels generic, add: 'avoid corporate language, no passive voice, no hedge words'

The 'Voice Sample' Trick

Open any piece you've written that you're proud of. Paste the first three paragraphs into your prompt, then write: 'Write in this exact voice and sentence rhythm.' This single addition eliminates about 80% of the 'sounds like AI' complaints. Claude is particularly good at voice matching; ChatGPT-4o is slightly stronger at maintaining it across longer documents.

| Writing Task | Best Model | Prompt Approach | Watch Out For |
| --- | --- | --- | --- |
| Long-form articles (1,000+ words) | Claude 3.5 Sonnet | Outline first, then draft section by section | Structural drift after ~800 words |
| Short punchy copy (ads, taglines) | ChatGPT-4o | Give 3 examples of copy you like, ask for 10 variants | Over-polished, loses edge |
| Email sequences | ChatGPT or Claude | Define the journey: awareness → decision; specify each email's single goal | Too salesy by default |
| Executive summaries | Claude 3.5 Sonnet | Paste source doc + 'summarize for a time-pressed VP in 150 words' | May include details the VP doesn't need |
| Social media posts | ChatGPT-4o | Specify platform, character limit, and 1 key message | Generic hashtags, weak hooks |
| Technical documentation | GitHub Copilot / ChatGPT | Provide existing doc structure as template | Assumes too much reader knowledge |
AI writing task routing guide — match the task to the right tool and approach

Iterating Instead of Regenerating

Most people hit 'regenerate' when AI output misses the mark. That's the slowest possible workflow. Targeted iteration — telling the model exactly what to change and why — gets you to a usable draft in two or three exchanges instead of ten. Think of each follow-up prompt as a surgical edit instruction. 'Make the opening line more surprising' is better than 'rewrite this.' 'Cut 30 words from paragraph two without losing the core argument' is better than 'make it shorter.' The model retains context within a session, so you're building on work already done rather than restarting.

The iteration loop that works best in practice: generate a rough draft, read it for structural problems first, fix structure before fixing prose, then refine language last. Trying to fix everything in one pass leads to prompts that are too vague to be useful. Separate concerns — structure, argument, tone, length — and address each in its own exchange. This mirrors how professional editors work, and it maps perfectly onto how context windows handle sequential instructions.

  1. Generate the full draft first — resist editing the prompt until you see the complete output
  2. Identify the single biggest structural problem before anything else
  3. Fix structure with one targeted prompt: 'Move the case study to after the problem statement'
  4. Read again for argument gaps — 'Add a concrete example to support the claim in paragraph 3'
  5. Address tone in a dedicated pass: 'Rewrite paragraphs 1 and 4 to sound less formal'
  6. Cut for length last: 'Remove 20% of the words without changing the meaning'
  7. Do a final polish pass: 'Check for repeated words and weak verbs — suggest replacements'
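The pass-by-pass loop above can be sketched as a conversation that accumulates history, so each edit instruction builds on the previous draft. This is a structural illustration only: `call_model` is a stub standing in for whichever client you actually use, and the function names are hypothetical:

```python
# Sequential editing passes over one conversation. call_model is a
# placeholder; in real use it would send `messages` to an AI assistant.

def call_model(messages: list[dict]) -> str:
    """Stub model call: returns a marker showing how much context exists."""
    return f"[draft after {len(messages)} messages]"

def editing_passes(initial_prompt: str, passes: list[str]) -> str:
    """Run structure, argument, tone, and length edits in order, keeping
    the full history so each pass builds on the last draft."""
    messages = [{"role": "user", "content": initial_prompt}]
    draft = call_model(messages)
    for instruction in passes:
        messages.append({"role": "assistant", "content": draft})
        messages.append({"role": "user", "content": instruction})
        draft = call_model(messages)
    return draft

final = editing_passes(
    "Draft a 600-word article from this outline: ...",
    [
        "Move the case study to after the problem statement.",
        "Add a concrete example to support the claim in paragraph 3.",
        "Rewrite paragraphs 1 and 4 to sound less formal.",
        "Remove 20% of the words without changing the meaning.",
    ],
)
```

Because each instruction is appended rather than sent fresh, later passes see every earlier decision, which is exactly why separating concerns works better than one catch-all rewrite prompt.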

Context Window Limits Matter Here

Claude 3.5 Sonnet has a 200,000-token context window — roughly 150,000 words. ChatGPT-4o runs at 128,000 tokens. For most writing tasks this is irrelevant, but for long documents or extended editing sessions, Claude maintains more of the conversation history without degrading. If you notice the model 'forgetting' earlier instructions mid-session, you've likely hit the practical working limit of the context window.

Iterative Editing — Tone Correction Pass

Prompt

Here's the current draft of my introduction: [paste text]. The tone is too cautious and hedge-y. Rewrite it to sound confident and direct — cut any phrase that qualifies or softens the main claim. Keep the same structure and word count.

AI Response

Revised introduction with hedges removed and active, declarative sentences substituted. For example, 'It may be possible to consider...' becomes 'The answer is...' and 'There are some indications that...' becomes 'The data shows...' Each change is marked in brackets so you can accept or reject individually.

Ideation at Scale: Going Beyond the First Ten Ideas

The first batch of AI-generated ideas is almost always the most obvious batch. That's not a flaw — it's a feature of how language models work. They draw on the statistical center of their training data, which means popular, well-represented ideas surface first. The professionals who use AI most effectively for ideation know to push past that first batch deliberately. Techniques like forcing constraints, demanding unusual combinations, or asking the model to argue against its own suggestions unlock the less-traveled territory where genuinely differentiated ideas live.

Constraint-based ideation is the most reliable method. Give the model a problem, then add an artificial constraint that forces it off the obvious path. 'Generate 10 campaign concepts with zero budget' produces different results than an open-ended request. 'What would this look like if we could only communicate through visuals?' changes the problem frame entirely. Each constraint acts as a filter that removes the generic answers and forces the model to draw on less-common training patterns. Stack two constraints for even stronger divergence from the default.

| Ideation Technique | Prompt Pattern | Best For | Example Output Type |
| --- | --- | --- | --- |
| Constraint forcing | 'Generate X ideas but [hard constraint]' | Breaking out of obvious solutions | Unconventional approaches to known problems |
| Role reversal | 'How would [competitor/enemy/skeptic] solve this?' | Stress-testing assumptions | Devil's advocate angles, attack vectors |
| Forced combination | 'Combine [concept A] with [unrelated concept B]' | Product/service innovation | Unexpected feature sets or positioning angles |
| Volume + cull | 'Give me 30 ideas, then identify the 5 most unusual' | Workshops, sprint sessions | A shortlist with genuine diversity |
| First principles | 'Ignore how this is currently done. What would you build from scratch?' | Strategy and product work | Zero-based redesigns |
| Anti-brainstorm | 'List every reason this idea will fail' | Risk identification, pitch prep | Objection map, pre-mortem material |
| Trend injection | 'Apply the logic of [current trend] to [your problem]' | Marketing and positioning | Timely angles on existing offers |
Ideation techniques — each one shifts where the model draws from to generate responses

The Confirmation Bias Trap

AI assistants are trained to be helpful, which means they default to agreeing with the frame you give them. If you say 'I think the real problem is X — brainstorm solutions,' the model generates solutions to X without questioning whether X is actually the problem. Build in a challenge step: ask 'What am I assuming here that might be wrong?' before committing to an ideation direction. Claude is slightly better than ChatGPT at pushing back on flawed premises, but neither model does it reliably without an explicit prompt.

Combining Brainstorming and Writing in One Session

The most efficient creative workflow uses a single AI session to move from raw ideation to structured draft without switching tools or losing context. Start with a wide ideation pass — 20 to 30 ideas, no filtering. Then use a culling prompt to narrow to the three strongest. Then ask the model to build a brief outline around the top choice. Then draft section by section. This end-to-end approach keeps all the context — your constraints, voice notes, rejected ideas — in one place, and the model uses earlier exchanges to inform later ones.

  • Start wide: ask for 20+ ideas before filtering anything — volume creates the raw material for selection
  • Use explicit culling criteria: 'Which 3 of these are most differentiated from what competitors currently say?'
  • Build the outline before drafting: a 5-point outline takes 30 seconds to generate and saves 20 minutes of structural revision
  • Reference earlier ideas mid-session: 'Use the angle from idea #7 as the hook for the introduction'
  • Keep a running 'parking lot' prompt: 'Hold these unused ideas — I may want them later' keeps rejected material accessible
  • End the session with a synthesis prompt: 'Summarize the key creative decisions we made and why'
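The single-session workflow above amounts to a fixed sequence of prompts sent into one conversation. A minimal sketch, assuming a made-up helper name and prompt wording; the output is just the ordered list of prompts you would send:

```python
# Ordered prompts for an ideation-to-draft sprint in a single chat.
# creative_sprint_prompts is an illustrative helper, not a library API.

def creative_sprint_prompts(project: str, constraint: str, voice: str) -> list[str]:
    """Return the prompt sequence: wide ideation, cull, outline, draft, synthesis."""
    return [
        f"Generate 25 ideas for {project}. Apply this constraint: {constraint}. "
        "Label each idea with a one-sentence description.",
        "Which 3 of these are most differentiated from what competitors "
        "currently say? Explain briefly.",
        "Build a 5-section outline around the strongest idea. "
        "Give each section a single clear purpose.",
        f"Draft Section 1 in this voice: {voice}. Target 150 words.",
        "Summarize the key creative decisions we made and why.",
    ]

prompts = creative_sprint_prompts(
    "a project management tool launch", "zero paid media budget", "dry and direct"
)
```

Sending these in order within one conversation is what preserves the context chain: the outline prompt sees the culled ideas, and the draft prompt sees the outline.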

Run a Full Creative Sprint in One Session

Goal: Produce a first-draft opening section of a real work project, generated through a structured ideation-to-writing sprint — with a saved outline you can use to complete the remaining sections independently.

1. Open a new chat in ChatGPT or Claude. Paste this context block: your project name, target audience (job title + what they care about), the core message you need to communicate, and one constraint (budget, format, or time limit).
2. Prompt: 'Generate 25 ideas for [project]. Apply the constraint I gave you. Label each idea with a one-sentence description.'
3. Read the full list without editing. Highlight the 4 or 5 that feel most differentiated — not most familiar.
4. Prompt: 'Of these 25 ideas, which 3 are least similar to what's commonly done in this space? Explain briefly why each stands out.'
5. Compare your highlights to the model's picks. Note any surprises — ideas you dismissed that the model flagged, or vice versa.
6. Choose one idea. Prompt: 'Build a 5-section outline for [chosen idea]. Each section should have a single clear purpose. Include a suggested hook for the opening.'
7. Review the outline. Adjust any section that doesn't serve the core message. Then prompt: 'Draft Section 1 in [voice description] for [audience description]. Target 150 words.'

Editing, Refining, and Making AI Output Actually Yours

Raw AI output is a starting point, not a finished product. The professionals who get the most from tools like ChatGPT and Claude treat the first response as a draft zero — something to interrogate, reshape, and push further. Knowing how to edit AI output efficiently, how to run multi-turn refinement loops, and how to inject your own voice back into the work separates people who save 20 minutes from people who save 3 hours. This section gives you the systematic approach to do exactly that.

The Refinement Loop

Most users send one prompt, get one response, and accept whatever arrives. That's leaving most of the value on the table. A refinement loop means treating AI conversation as iterative: you critique the output, give targeted feedback, and re-prompt. Each round narrows the gap between what the AI produced and what you actually need. Claude and ChatGPT both maintain context across a conversation, so your follow-up instructions build on prior output without restating everything.

  • Start broad, then constrain: get the skeleton first, then add specificity in follow-up prompts
  • Name what's wrong explicitly: 'the tone is too formal' beats 'make it better'
  • Ask for alternatives: 'give me three different opening lines for this' produces options to cherry-pick
  • Use 'keep X, change Y' framing to preserve what's working while fixing what isn't
  • Request a critique before a rewrite: 'what are the weakest parts of this draft?' surfaces issues you might miss
  • Set a constraint to force creativity: 'rewrite this in under 50 words' often produces sharper copy than any direct edit

The 'Critic First' Move

Before asking AI to rewrite anything, ask it to critique the current draft. Paste your text and prompt: 'List the three biggest weaknesses in this paragraph — clarity, tone, and structure.' The diagnosis is often more valuable than the rewrite itself, and it gives you precise targets for your next prompt.

| Weak Follow-Up | Strong Follow-Up | Why It Works Better |
| --- | --- | --- |
| Make it better | Make the opening sentence more direct and cut any hedging language | Specific target, specific action |
| Change the tone | Rewrite this for a skeptical CFO who values brevity over enthusiasm | Audience and attitude defined |
| It's too long | Cut this to 80 words without losing the three key benefits | Hard constraint + preservation rule |
| Add more detail | Add one concrete example for each claim, drawn from B2B SaaS contexts | Type of detail and domain specified |
| Make it sound like me | Here are two emails I've written [paste]. Match that voice in this draft | Model provided, not just described |
Refinement prompt upgrades — each right-column version gives the AI a solvable, specific task

Injecting Your Voice and Protecting Your Judgment

AI writing defaults to a confident, slightly generic register. It's readable but unmistakably averaged. To make output sound like you, feed it examples of your actual writing — emails, Slack messages, past reports. ChatGPT and Claude can extract stylistic patterns from 3-5 samples and apply them. This isn't about mimicry; it's calibration. The AI learns your sentence rhythm, your preferred transitions, your tolerance for humor.

Your judgment still owns the final product. AI hallucinates facts, misses organizational context, and can't know what your CEO reacted badly to in last quarter's all-hands. Always read the output as an editor, not a recipient. The goal is to use AI to handle the structural and linguistic heavy lifting while you apply the contextual intelligence only you have.

| Output Problem | Likely Cause | Fix |
| --- | --- | --- |
| Sounds generic and flat | No voice calibration provided | Paste 3 writing samples, ask AI to match the style |
| Wrong level of formality | No audience specified | Add an 'audience:' field to your prompt template |
| Factually confident but wrong | AI hallucination on specifics | Verify all statistics, names, and dates independently |
| Too long, hard to cut | No length constraint set upfront | Include word count in original prompt |
| Missing your organization's context | AI has no internal knowledge | Paste relevant background directly into the prompt |
| Ideas feel safe and obvious | Prompt didn't push for novelty | Add 'include one unconventional or counterintuitive angle' |
Diagnosing and fixing common AI output problems in creative and writing tasks

Don't Outsource Your Credibility

AI tools will confidently cite statistics that don't exist, attribute quotes to people who never said them, and describe industry trends with false precision. In creative brainstorming this rarely matters. In anything that goes to a client, exec, or public audience, every factual claim needs independent verification. Perplexity AI is better than most for sourced outputs, but even then, check the primary source.

Voice Calibration + Refinement in One Prompt

Prompt

Here are two examples of how I write professionally: [Example 1: paste a paragraph from a past email or report] [Example 2: paste another] Now rewrite the following draft in that same voice — same sentence length, same directness, same tolerance for a light touch of dry humor. Keep all three key points but cut the total length by 30%. [Paste your AI-generated draft here]

AI Response

The rewritten draft mirrors your sentence rhythm and cuts filler phrases. Key points remain intact. Length reduced from 210 words to 147. Hedging phrases like 'it could be argued' and 'one might consider' replaced with direct assertions matching your sample style.

Notion AI and In-Context Editing

If you draft directly in Notion, Notion AI lets you highlight any block and run commands like 'make shorter,' 'change tone to casual,' or 'continue writing.' This in-context editing removes the copy-paste loop entirely and keeps your refinement workflow inside the document where the work lives.

Build Your Personal AI Writing Toolkit

Goal: You end this task with a saved personal voice brief you can paste into any future AI writing prompt, plus one polished draft produced using the full refinement loop.

1. Open ChatGPT or Claude and start a new conversation.
2. Paste two or three paragraphs from emails or documents you've written — work you're proud of and that represents your real voice.
3. Prompt: 'Analyze the style of these samples. Describe my sentence length patterns, formality level, use of humor, and any distinctive phrasing habits.'
4. Save that style description in a notes document — this becomes your personal voice brief.
5. Now pick a real upcoming work task: a meeting summary, a proposal intro, or a brainstorm brief. Draft it using AI with your voice brief included in the prompt.
6. Run one refinement round using the 'critic first' method: ask AI to identify the two weakest elements before rewriting.

Quick-Reference Cheat Sheet

  • Treat first AI output as draft zero — always refine before using
  • Use 'keep X, change Y' to preserve good elements while targeting weak ones
  • Feed AI your own writing samples to calibrate voice before any important draft
  • Ask for a critique before a rewrite — diagnosis first, fix second
  • Set explicit word count constraints upfront to avoid bloated output
  • Verify every statistic, quote, and factual claim independently — always
  • Use Perplexity for sourced research outputs; use ChatGPT/Claude for creative and structural work
  • Notion AI handles in-context editing without breaking your document workflow
  • Specificity in follow-up prompts drives quality — vague feedback produces vague results
  • Your contextual judgment (org politics, audience history, brand nuance) is what AI cannot replace

Key Takeaways

  1. Refinement loops — not single prompts — are what produce professional-quality AI output
  2. Voice calibration using your own writing samples closes the gap between generic AI text and your actual style
  3. The 'critic first' technique surfaces structural problems before you invest time in rewrites
  4. Factual hallucination is a consistent risk across all major AI tools; verification is non-negotiable for external-facing work
  5. Your irreplaceable contribution is contextual judgment — AI handles structure and language, you supply organizational intelligence

Knowledge Check

You ask ChatGPT to rewrite a paragraph and the result still sounds generic. What is the most likely cause?

A colleague pastes an AI-generated market analysis into a client deck without checking the statistics. What is the primary risk?

Which follow-up prompt will most reliably improve an AI draft?

You want to refine an AI-generated proposal introduction but aren't sure what's wrong with it. What should you do first?

A manager uses AI to brainstorm campaign ideas, then edits the output heavily before presenting. She adds competitive context, adjusts ideas to reflect a past failed campaign, and cuts anything that wouldn't fit the brand. What does this scenario best illustrate?
