Using AI for creative work: brainstorming, writing, and ideation
Creative work is where AI assistants earn their keep fastest. ChatGPT, Claude, and Gemini aren't just search engines with better grammar — they're thinking partners that can generate 20 angles on a problem in 30 seconds, draft a campaign brief while you drink your coffee, or break a two-week creative block in a single conversation. The catch: vague inputs produce vague outputs. Professionals who get the most from AI creative work treat these tools like a brilliant but literal-minded collaborator — one that needs clear context, constraints, and a defined role before the good stuff starts flowing.
7 Things You Need to Know Before You Start
- AI generates plausible output, not necessarily correct output — always verify facts, statistics, and claims it produces during creative sessions.
- The model has no memory between sessions by default — re-establishing context at the start of each chat saves you from generic outputs.
- Specificity multiplies quality: giving Claude a target audience, tone, word count, and format produces 3-4x more usable output than open-ended requests.
- You can iterate in the same conversation — follow-up prompts like 'make option 3 more provocative' build on what the model already knows about your project.
- ChatGPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro all handle long-form creative tasks well; each has a distinct default voice you'll learn to recognize.
- AI is strongest at divergence (generating many ideas) and weakest at taste — human judgment is still the filter that makes output publishable.
- Prompt length matters less than prompt clarity — a 20-word prompt with sharp constraints beats a 200-word prompt that wanders.
How AI Handles Creative Requests
When you send a creative prompt, the model predicts the most statistically useful next tokens based on your input and its training data. This means it defaults toward the center of what it has seen — competent, average, safe. That's why your first draft often feels generic. The fix isn't a better model; it's a more constrained prompt. Tell the model what genre, what audience, what emotional register, and what to avoid. Those constraints push it away from the statistical average and toward something distinctive.
This also explains why AI creative output improves dramatically mid-conversation. As you react, redirect, and refine, the model accumulates context about your preferences and project. A brainstorming session that starts with 'give me tagline ideas for a fintech app' and ends 20 exchanges later has far sharper outputs than the opening round — because every message you send trains the model on what 'good' looks like for your specific task. Treat the first response as a rough draft, not a deliverable.
- Default outputs skew toward the average — constraints are your primary lever for quality.
- Iterating in one conversation compounds context and improves results over time.
- The model reflects your prompt's energy: flat input produces flat output.
- Asking for multiple variants (e.g., 'give me 5 versions') surfaces range and helps you identify what direction you actually want.
- Negative constraints work well: 'avoid corporate jargon' or 'don't use rhymes' sharpens outputs faster than describing what you do want.
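If you drive a model through an API rather than a chat window, these constraint levers can be assembled programmatically. A minimal Python sketch; `build_creative_prompt` and its field layout are illustrative assumptions, not part of any library:

```python
def build_creative_prompt(task, audience, tone, n_variants, avoid=None):
    """Assemble a constrained creative prompt from explicit levers:
    audience, tone, variant count, and negative constraints."""
    lines = [
        task,
        f"Audience: {audience}",
        f"Tone: {tone}",
        f"Give me {n_variants} distinct versions.",
    ]
    if avoid:
        # Negative constraints often sharpen output faster than positive ones
        lines.append("Avoid: " + "; ".join(avoid))
    return "\n".join(lines)

prompt = build_creative_prompt(
    task="Write tagline ideas for a fintech app",
    audience="first-time investors aged 25-35",
    tone="confident, plainspoken",
    n_variants=5,
    avoid=["corporate jargon", "rhymes"],
)
```

Making each lever an explicit argument forces you to fill in the fields that push the model away from the statistical average.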
The Role Assignment Trick
Open a creative prompt by assigning the model a role: 'You are a senior creative strategist with deep experience in consumer brands.' A role shifts the model's default register immediately, priming its vocabulary, assumptions, and judgment toward that expert's perspective before it generates a single idea.
Creative Task Reference: What AI Does Well vs. Poorly
| Creative Task | AI Performance | Best Tool | Human Role |
|---|---|---|---|
| Brainstorming / idea generation | Excellent | ChatGPT-4o, Claude 3.5 | Filter and select |
| First-draft blog posts | Good | Claude 3.5 Sonnet | Edit voice and facts |
| Taglines and headlines | Good with iteration | ChatGPT-4o | Judge and refine |
| Long-form narrative / storytelling | Moderate | Claude 3.5 Sonnet | Heavy structural editing |
| Creative briefs | Excellent | Gemini 1.5 Pro, Claude | Add strategic context |
| Poetry and lyric writing | Moderate | Claude 3.5 Sonnet | Taste and authenticity |
| Brand voice development | Good as starting point | ChatGPT-4o | Validate against brand |
| Social media copy (short-form) | Excellent | ChatGPT-4o, Gemini | Tone check, platform fit |
| Original research or data | Poor — do not rely on it | None | Full human ownership |
Brainstorming: Getting Volume and Variety
The highest-ROI use of AI in creative work is pure brainstorming. A skilled prompt can generate 20 distinct campaign concepts, 15 newsletter subject lines, or 10 product positioning angles in under a minute. The key discipline is resisting the urge to stop at the first good idea. Ask for more. Ask for weirder. Ask the model to approach the same brief from the perspective of a skeptic, a competitor, or a customer who almost didn't buy. Each reframe unlocks a different slice of possibility space.
Structured brainstorming prompts consistently outperform open-ended ones. Instead of 'give me ideas for a product launch,' try 'give me 10 product launch concepts for a $200 smart water bottle targeting endurance athletes aged 28-45, ranging from conservative to provocative.' The range instruction is particularly powerful — it forces the model to move beyond its default safe zone and produce contrast, which makes it far easier for you to identify where your instincts land.
- Define the deliverable type: concepts, headlines, angles, formats, names — be specific about what you're brainstorming.
- Set a quantity: asking for 10-15 items forces range and prevents the model from stopping at the first plausible answer.
- Add a spectrum instruction: 'from safe to provocative' or 'from rational to emotional' produces contrast that helps you navigate.
- Include audience specifics: demographics, psychographics, pain points, or a job-to-be-done statement.
- Use perspective shifts: ask the model to ideate as a skeptic, a competitor, or a specific archetype customer.
- Follow up on the best 2-3: 'expand on concept 4 with three execution variants' deepens the most promising directions.
- Ask for what's missing: 'what angles haven't we covered yet?' often surfaces the most original ideas.
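The perspective-shift step lends itself to a small helper that produces one reframed prompt per viewpoint. A sketch, with `perspective_prompts` as a hypothetical convenience function:

```python
# Each reframe unlocks a different slice of possibility space
PERSPECTIVES = ["a skeptic", "a competitor", "a customer who almost didn't buy"]

def perspective_prompts(brief, perspectives=PERSPECTIVES):
    """Generate one brainstorm prompt per viewpoint reframe."""
    return [
        f"From the point of view of {p}, brainstorm 10 ideas for this brief: {brief}"
        for p in perspectives
    ]

prompts = perspective_prompts("launch campaign for a $200 smart water bottle")
```

Running the same brief through each perspective in separate exchanges keeps the outputs distinct instead of letting them blur together in one response.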
Brainstorming Prompt Structures
| Prompt Pattern | Example | Best For |
|---|---|---|
| Quantity + Range | Give me 12 taglines from professional to irreverent | Headlines, names, concepts |
| Perspective Shift | Brainstorm from the POV of a skeptical CFO | Objection handling, positioning |
| Constraint Forcing | Generate 8 ideas using only social proof as the hook | Copy, campaigns, content |
| Reverse Brainstorm | List 10 ways this product launch could fail, then invert each | Risk identification, differentiation |
| Category Mapping | Give me one idea per channel: email, paid, organic, event, partnership | Campaign planning, GTM strategy |
| Analogical Thinking | How would Apple, Patagonia, and a challenger brand each position this? | Brand strategy, tone exploration |
Claude vs. ChatGPT for Brainstorming
Prompt
You are a senior creative strategist. I'm launching a project management tool for architecture firms — teams of 5-20 people who currently track projects in spreadsheets and email. Give me 10 campaign concept directions, ranging from practical/rational to emotional/aspirational. For each, write a one-sentence concept description and a rough headline.
AI Response
1. **The Hidden Cost Angle** (Rational) — Target the billable hours lost to admin chaos. Headline: 'Your best architects are doing spreadsheet work.'
2. **The Deadline Miss** (Rational-Emotional) — Lead with the moment a project slips. Headline: 'The 2am email nobody wants to send.'
3. **The Craft Reclaimed** (Aspirational) — Position the tool as giving architects back their creative time. Headline: 'Less coordination. More architecture.'
4. **The Firm That Scaled** (Social Proof) — Feature a firm that grew from 8 to 30 people without chaos. Headline: 'How Meridian Studio took on twice the projects.'
5. **The Competitor Dig** (Provocative) — Directly call out spreadsheet culture as a bottleneck. Headline: 'Excel wasn't built for this. Neither were you.'

[...and 5 more concepts with headlines]
Writing with AI: Drafting, Editing, and Voice
AI writing assistants work best when you treat them as a first-draft engine, not a finished-copy machine. Claude 3.5 Sonnet is particularly strong at long-form drafting — blog posts, executive summaries, creative briefs — and maintains coherence across 1,500-2,000 words better than most alternatives. The practical workflow: prompt for a complete draft, read it end-to-end for structure and argument, then rewrite the sections that don't sound like you. This is almost always faster than writing from scratch, even when you rewrite 40% of the output.
Voice is the hardest thing to transfer to AI. Without examples, models default to a competent-but-bland register that feels like a corporate blog from 2019. The solution is few-shot prompting: paste 2-3 samples of writing you want to match — your own previous work, a publication's style, a specific author's tone — and ask the model to draft in that style. ChatGPT-4o and Claude both analyze stylistic patterns from examples with impressive accuracy. Three samples of 200-300 words each is usually enough to shift the output meaningfully.
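Few-shot voice matching translates directly into the role/content message format used by the major chat APIs. This sketch assumes a hypothetical `voice_match_messages` helper; the dict shape shown is the common convention, but check your provider's documentation for exact field names:

```python
def voice_match_messages(samples, draft_request):
    """Build a chat-style message list for few-shot voice matching.

    samples: 2-3 excerpts of the writing to imitate (200-300 words
    each is usually enough, per the guidance above).
    """
    sample_block = "\n\n---\n\n".join(samples)
    return [
        {"role": "system",
         "content": "Match the voice of the writing samples the user provides."},
        {"role": "user",
         "content": (f"Here are samples of the voice to match:\n\n{sample_block}"
                     f"\n\nNow, in that voice: {draft_request}")},
    ]

messages = voice_match_messages(
    ["Sample one of my past writing.", "Sample two of my past writing."],
    "draft a 100-word introduction for our quarterly update",
)
```

Keeping the samples and the request in one user turn means the model sees the style evidence and the task together, which is the whole point of showing rather than describing.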
The AI Voice Drift Problem
In long drafts, AI output tends to drift back toward its generic default voice, and tone consistency typically degrades past the 800-word mark. Check long documents section by section, and when the voice slips, re-anchor it with a short reminder of the register you asked for.
Goal: Experience the compounding effect of iterative prompting in one session, and identify which brainstorm pattern works best for your specific creative context.
1. Pick a real project you're currently working on — a presentation, campaign, product feature, or piece of content.
2. Open ChatGPT-4o or Claude 3.5 Sonnet in a new conversation.
3. Assign a role at the start: write 'You are a [relevant expert role] with deep experience in [your industry].'
4. Write a brainstorm prompt using the Quantity + Range pattern from the table above — ask for at least 10 items spanning a defined spectrum.
5. Review the output and mark your top 3 items.
6. Send a follow-up prompt: 'Expand on items [X, Y, Z] — give me 3 execution variants for each, keeping the same tone and audience in mind.'
7. Save the output in a doc and note which prompt pattern produced the most usable ideas.
Quick Reference: AI Creative Work Cheat Sheet
- Assign a role before any creative task — it shifts the model's default register immediately.
- Ask for 10-15 items minimum in brainstorms — volume forces range and surfaces contrast.
- Use the Range instruction ('from safe to provocative') to generate navigable variety.
- Paste 2-3 writing samples when voice matching — don't describe the style, show it.
- Iterate in one conversation — context compounds and outputs improve with each exchange.
- Treat AI output as a first draft — your edit is what makes it publishable.
- Check tone consistency in long documents past the 800-word mark.
- Use negative constraints ('avoid jargon,' 'no bullet points') to sharpen output fast.
- Run high-stakes brainstorms in both ChatGPT-4o and Claude — different models surface different ideas.
- Never use AI-generated statistics or data claims without independent verification.
Key Takeaways from This Section
- AI defaults to statistical averages — constraints are what push output from generic to useful.
- Brainstorming is the highest-ROI creative use case: volume, speed, and range are AI's strengths here.
- Structured prompt patterns (Quantity + Range, Perspective Shift, Reverse Brainstorm) reliably outperform open-ended requests.
- ChatGPT-4o and Claude 3.5 Sonnet have distinct strengths — matching model to task improves results.
- Voice transfer requires examples, not descriptions — few-shot prompting is the professional's method.
- Every conversation is cumulative: the best creative outputs come from iterative sessions, not single prompts.
Writing With AI: Structure, Voice, and Iteration
Brainstorming gets you raw material. Writing is where that material becomes something real. AI assistants handle the full writing stack — first drafts, rewrites, tone shifts, structural reorganization — but only when you direct them precisely. The professionals who get the most out of ChatGPT or Claude for writing aren't the ones who ask for a finished product upfront. They're the ones who treat the AI as a writing partner: drafting in passes, adjusting voice mid-session, and using targeted prompts to fix specific weaknesses rather than starting over.
Controlling Tone and Voice
Every piece of writing has a voice — and AI defaults to a generic professional tone unless you specify otherwise. That default is competent but forgettable. To get output that sounds like your brand, your team, or you personally, give the model concrete reference points. Paste in a sample of your past writing and say 'match this voice.' Describe the audience in detail: 'skeptical CFOs who hate jargon' produces sharper copy than 'business readers.' Adjectives help too — 'terse, confident, slightly provocative' gives Claude enough signal to diverge meaningfully from its defaults.
- Paste 150–200 words of your own writing as a voice reference — more reliable than describing style abstractly
- Specify the emotional register: 'reassuring,' 'urgent,' 'dry and precise' all produce distinct outputs
- Name the audience explicitly: job title, seniority level, and what they already know
- Use contrast: 'like The Economist, not like a press release' gives the model a quick calibration anchor
- Ask for multiple tone variants of the same paragraph — then pick or blend the best elements
- If output feels generic, add: 'avoid corporate language, no passive voice, no hedge words'
The 'Voice Sample' Trick
Instead of describing the style you want, show it: paste 150-200 words of your own writing and say 'match this voice.' A concrete sample gives the model far more reliable calibration than any list of adjectives.
| Writing Task | Best Model | Prompt Approach | Watch Out For |
|---|---|---|---|
| Long-form articles (1000+ words) | Claude 3.5 Sonnet | Outline first, then draft section by section | Structural drift after ~800 words |
| Short punchy copy (ads, taglines) | ChatGPT-4o | Give 3 examples of copy you like, ask for 10 variants | Over-polished, loses edge |
| Email sequences | ChatGPT or Claude | Define the journey: awareness → decision, specify each email's single goal | Too salesy by default |
| Executive summaries | Claude 3.5 Sonnet | Paste source doc + 'summarize for a time-pressed VP in 150 words' | May include details the VP doesn't need |
| Social media posts | ChatGPT-4o | Specify platform, character limit, and 1 key message | Generic hashtags, weak hooks |
| Technical documentation | GitHub Copilot / ChatGPT | Provide existing doc structure as template | Assumes too much reader knowledge |
Iterating Instead of Regenerating
Most people hit 'regenerate' when AI output misses the mark. That's the slowest possible workflow. Targeted iteration — telling the model exactly what to change and why — gets you to a usable draft in two or three exchanges instead of ten. Think of each follow-up prompt as a surgical edit instruction. 'Make the opening line more surprising' is better than 'rewrite this.' 'Cut 30 words from paragraph two without losing the core argument' is better than 'make it shorter.' The model retains context within a session, so you're building on work already done rather than restarting.
The iteration loop that works best in practice: generate a rough draft, read it for structural problems first, fix structure before fixing prose, then refine language last. Trying to fix everything in one pass leads to prompts that are too vague to be useful. Separate concerns — structure, argument, tone, length — and address each in its own exchange. This mirrors how professional editors work, and it maps perfectly onto how context windows handle sequential instructions.
- Generate the full draft first — resist editing the prompt until you see the complete output
- Identify the single biggest structural problem before anything else
- Fix structure with one targeted prompt: 'Move the case study to after the problem statement'
- Read again for argument gaps — 'Add a concrete example to support the claim in paragraph 3'
- Address tone in a dedicated pass: 'Rewrite paragraphs 1 and 4 to sound less formal'
- Cut for length last: 'Remove 20% of the words without changing the meaning'
- Do a final polish pass: 'Check for repeated words and weak verbs — suggest replacements'
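The separate-concerns passes above map cleanly onto an append-only message history. This is a data-only sketch: the hypothetical `RefinementSession` class never calls an API, it just shows how each surgical instruction builds on the context already in the conversation:

```python
class RefinementSession:
    """Track a refinement loop as an append-only message history.

    In real use, each user turn would be posted to a chat API and the
    model's reply recorded with record_reply().
    """
    def __init__(self, draft_prompt):
        self.messages = [{"role": "user", "content": draft_prompt}]

    def record_reply(self, text):
        self.messages.append({"role": "assistant", "content": text})

    def refine(self, instruction):
        # Follow-ups build on context already in the conversation,
        # so each instruction can stay short and surgical.
        self.messages.append({"role": "user", "content": instruction})

session = RefinementSession("Draft a 600-word introduction about our new tool.")
session.record_reply("(first draft)")
for step in [
    "Move the case study to after the problem statement.",    # structure
    "Rewrite paragraphs 1 and 4 to sound less formal.",       # tone
    "Remove 20% of the words without changing the meaning.",  # length
]:
    session.refine(step)
    session.record_reply("(revised draft)")
```

One pass per concern keeps every follow-up prompt specific enough to act on, which is exactly why targeted iteration beats hitting regenerate.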
Context Window Limits Matter Here
Long sessions eventually push early exchanges out of the model's working context, so instructions you set at the start can stop applying late in a conversation. If a late output ignores your constraints, restate them, or paste the current draft back into the prompt so the model works from the text in front of it.
Prompt
Here's the current draft of my introduction: [paste text]. The tone is too cautious and hedge-y. Rewrite it to sound confident and direct — cut any phrase that qualifies or softens the main claim. Keep the same structure and word count.
AI Response
Revised introduction with hedges removed and active, declarative sentences substituted. For example, 'It may be possible to consider...' becomes 'The answer is...' and 'There are some indications that...' becomes 'The data shows...' Each change is marked in brackets so you can accept or reject individually.
Ideation at Scale: Going Beyond the First Ten Ideas
The first batch of AI-generated ideas is almost always the most obvious batch. That's not a flaw — it's a feature of how language models work. They draw on the statistical center of their training data, which means popular, well-represented ideas surface first. The professionals who use AI most effectively for ideation know to push past that first batch deliberately. Techniques like forcing constraints, demanding unusual combinations, or asking the model to argue against its own suggestions unlock the less-traveled territory where genuinely differentiated ideas live.
Constraint-based ideation is the most reliable method. Give the model a problem, then add an artificial constraint that forces it off the obvious path. 'Generate 10 campaign concepts with zero budget' produces different results than an open-ended request. 'What would this look like if we could only communicate through visuals?' changes the problem frame entirely. Each constraint acts as a filter that removes the generic answers and forces the model to draw on less-common training patterns. Stack two constraints for even stronger divergence from the default.
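Constraint stacking is mechanical enough to template. A sketch, with `stacked_constraint_prompt` as an illustrative helper rather than any library function:

```python
def stacked_constraint_prompt(problem, constraints, n_ideas=10):
    """Stack two or more artificial constraints onto one ideation prompt.

    Each constraint filters out generic answers; stacking two of them
    strengthens the divergence from the model's defaults.
    """
    constraint_text = " and ".join(constraints)
    return (f"Generate {n_ideas} ideas for: {problem}. "
            f"Hard constraints: {constraint_text}.")

prompt = stacked_constraint_prompt(
    "a product launch", ["zero budget", "communicate only through visuals"]
)
```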
| Ideation Technique | Prompt Pattern | Best For | Example Output Type |
|---|---|---|---|
| Constraint forcing | 'Generate X ideas but [hard constraint]' | Breaking out of obvious solutions | Unconventional approaches to known problems |
| Role reversal | 'How would [competitor/enemy/skeptic] solve this?' | Stress-testing assumptions | Devil's advocate angles, attack vectors |
| Forced combination | 'Combine [concept A] with [unrelated concept B]' | Product/service innovation | Unexpected feature sets or positioning angles |
| Volume + cull | 'Give me 30 ideas, then identify the 5 most unusual' | Workshops, sprint sessions | A shortlist with genuine diversity |
| First principles | 'Ignore how this is currently done. What would you build from scratch?' | Strategy and product work | Zero-based redesigns |
| Anti-brainstorm | 'List every reason this idea will fail' | Risk identification, pitch prep | Objection map, pre-mortem material |
| Trend injection | 'Apply the logic of [current trend] to [your problem]' | Marketing and positioning | Timely angles on existing offers |
The Confirmation Bias Trap
Present the model with your favorite idea and ask whether it's good, and it will usually agree: language models mirror the framing you give them. Counter this by asking the model to argue against its own suggestions, or by running an anti-brainstorm ('list every reason this idea will fail') before you ask for validation.
Combining Brainstorming and Writing in One Session
The most efficient creative workflow uses a single AI session to move from raw ideation to structured draft without switching tools or losing context. Start with a wide ideation pass — 20 to 30 ideas, no filtering. Then use a culling prompt to narrow to the three strongest. Then ask the model to build a brief outline around the top choice. Then draft section by section. This end-to-end approach keeps all the context — your constraints, voice notes, rejected ideas — in one place, and the model uses earlier exchanges to inform later ones.
- Start wide: ask for 20+ ideas before filtering anything — volume creates the raw material for selection
- Use explicit culling criteria: 'Which 3 of these are most differentiated from what competitors currently say?'
- Build the outline before drafting: a 5-point outline takes 30 seconds to generate and saves 20 minutes of structural revision
- Reference earlier ideas mid-session: 'Use the angle from idea #7 as the hook for the introduction'
- Keep a running 'parking lot' prompt: 'Hold these unused ideas — I may want them later' keeps rejected material accessible
- End the session with a synthesis prompt: 'Summarize the key creative decisions we made and why'
Goal: Produce a first-draft opening section of a real work project, generated through a structured ideation-to-writing sprint — with a saved outline you can use to complete the remaining sections independently.
1. Open a new chat in ChatGPT or Claude. Paste this context block: your project name, target audience (job title + what they care about), the core message you need to communicate, and one constraint (budget, format, or time limit).
2. Prompt: 'Generate 25 ideas for [project]. Apply the constraint I gave you. Label each idea with a one-sentence description.'
3. Read the full list without editing. Highlight the 4 or 5 that feel most differentiated — not most familiar.
4. Prompt: 'Of these 25 ideas, which 3 are least similar to what's commonly done in this space? Explain briefly why each stands out.'
5. Compare your highlights to the model's picks. Note any surprises — ideas you dismissed that the model flagged, or vice versa.
6. Choose one idea. Prompt: 'Build a 5-section outline for [chosen idea]. Each section should have a single clear purpose. Include a suggested hook for the opening.'
7. Review the outline. Adjust any section that doesn't serve the core message. Then prompt: 'Draft Section 1 in [voice description] for [audience description]. Target 150 words.'
Editing, Refining, and Making AI Output Actually Yours
Raw AI output is a starting point, not a finished product. The professionals who get the most from tools like ChatGPT and Claude treat the first response as a draft zero — something to interrogate, reshape, and push further. Knowing how to edit AI output efficiently, how to run multi-turn refinement loops, and how to inject your own voice back into the work separates people who save 20 minutes from people who save 3 hours. This section gives you the systematic approach to do exactly that.
The Refinement Loop
Most users send one prompt, get one response, and accept whatever arrives. That's leaving most of the value on the table. A refinement loop means treating AI conversation as iterative: you critique the output, give targeted feedback, and re-prompt. Each round narrows the gap between what the AI produced and what you actually need. Claude and ChatGPT both maintain context across a conversation, so your follow-up instructions build on prior output without restating everything.
- Start broad, then constrain: get the skeleton first, then add specificity in follow-up prompts
- Name what's wrong explicitly: 'the tone is too formal' beats 'make it better'
- Ask for alternatives: 'give me three different opening lines for this' produces options to cherry-pick
- Use 'keep X, change Y' framing to preserve what's working while fixing what isn't
- Request a critique before a rewrite: 'what are the weakest parts of this draft?' surfaces issues you might miss
- Set a constraint to force creativity: 'rewrite this in under 50 words' often produces sharper copy than any direct edit
The 'Critic First' Move
Before requesting any rewrite, ask: 'What are the weakest parts of this draft?' Getting a diagnosis first surfaces problems you might miss, and it converts your next prompt from a vague 'make it better' into a targeted fix.
| Weak Follow-Up | Strong Follow-Up | Why It Works Better |
|---|---|---|
| Make it better | Make the opening sentence more direct and cut any hedging language | Specific target, specific action |
| Change the tone | Rewrite this for a skeptical CFO who values brevity over enthusiasm | Audience and attitude defined |
| It's too long | Cut this to 80 words without losing the three key benefits | Hard constraint + preservation rule |
| Add more detail | Add one concrete example for each claim, drawn from B2B SaaS contexts | Type of detail and domain specified |
| Make it sound like me | Here are two emails I've written [paste]. Match that voice in this draft | Model provided, not just described |
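The critique-before-rewrite move is a fixed two-prompt sequence, which makes it easy to reuse. A sketch; `critic_first` is a hypothetical helper, not a library function:

```python
CRITIQUE_PROMPT = ("What are the two weakest parts of this draft? "
                   "Diagnose only; do not rewrite anything yet.")

def critic_first(draft):
    """Return the two-step prompt sequence: diagnosis first, fix second."""
    return [
        # Step 1: ask for a critique with the draft attached
        f"{CRITIQUE_PROMPT}\n\n{draft}",
        # Step 2: a targeted rewrite that preserves everything else
        "Now rewrite only the parts you flagged, keeping everything else unchanged.",
    ]

steps = critic_first("(paste your draft here)")
```

Sending the two prompts as separate turns matters: the critique in the first reply becomes context that the rewrite instruction builds on.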
Injecting Your Voice and Protecting Your Judgment
AI writing defaults to a confident, slightly generic register. It's readable but unmistakably averaged. To make output sound like you, feed it examples of your actual writing — emails, Slack messages, past reports. ChatGPT and Claude can extract stylistic patterns from 3-5 samples and apply them. This isn't about mimicry; it's calibration. The AI learns your sentence rhythm, your preferred transitions, your tolerance for humor.
Your judgment still owns the final product. AI hallucinates facts, misses organizational context, and can't know what your CEO reacted badly to in last quarter's all-hands. Always read the output as an editor, not a recipient. The goal is to use AI to handle the structural and linguistic heavy lifting while you apply the contextual intelligence only you have.
| Output Problem | Likely Cause | Fix |
|---|---|---|
| Sounds generic and flat | No voice calibration provided | Paste 3 writing samples, ask AI to match the style |
| Wrong level of formality | No audience specified | Add 'audience:' field to your prompt template |
| Factually confident but wrong | AI hallucination on specifics | Verify all statistics, names, and dates independently |
| Too long, hard to cut | No length constraint set upfront | Include word count in original prompt |
| Missing your organization's context | AI has no internal knowledge | Paste relevant background directly into the prompt |
| Ideas feel safe and obvious | Prompt didn't push for novelty | Add 'include one unconventional or counterintuitive angle' |
Don't Outsource Your Credibility
Whatever you publish goes out under your name, not the model's. AI output can be factually confident and still wrong, so verify every statistic, name, and date before anything external-facing ships. Hallucination is a consistent risk across all major tools, and catching it is your job, not the AI's.
Prompt
Here are two examples of how I write professionally: [Example 1: paste a paragraph from a past email or report] [Example 2: paste another] Now rewrite the following draft in that same voice — same sentence length, same directness, same tolerance for a light touch of dry humor. Keep all three key points but cut the total length by 30%. [Paste your AI-generated draft here]
AI Response
The rewritten draft mirrors your sentence rhythm and cuts filler phrases. Key points remain intact. Length reduced from 210 words to 147. Hedging phrases like 'it could be argued' and 'one might consider' replaced with direct assertions matching your sample style.
Notion AI and In-Context Editing
If your drafts live in Notion, its built-in AI can run these refinement moves in place: select a passage and ask it to rewrite, shorten, or change the tone without leaving the document. That keeps the refinement loop inside your existing workflow instead of bouncing text between a chat window and your doc.
Goal: You end this task with a saved personal voice brief you can paste into any future AI writing prompt, plus one polished draft produced using the full refinement loop.
1. Open ChatGPT or Claude and start a new conversation.
2. Paste two or three paragraphs from emails or documents you've written — work you're proud of and that represents your real voice.
3. Prompt: 'Analyze the style of these samples. Describe my sentence length patterns, formality level, use of humor, and any distinctive phrasing habits.'
4. Save that style description in a notes document — this becomes your personal voice brief.
5. Now pick a real upcoming work task: a meeting summary, a proposal intro, or a brainstorm brief. Draft it using AI with your voice brief included in the prompt.
6. Run one refinement round using the 'critic first' method: ask AI to identify the two weakest elements before rewriting.
Quick-Reference Cheat Sheet
- Treat first AI output as draft zero — always refine before using
- Use 'keep X, change Y' to preserve good elements while targeting weak ones
- Feed AI your own writing samples to calibrate voice before any important draft
- Ask for a critique before a rewrite — diagnosis first, fix second
- Set explicit word count constraints upfront to avoid bloated output
- Verify every statistic, quote, and factual claim independently — always
- Use Perplexity for sourced research outputs; use ChatGPT/Claude for creative and structural work
- Notion AI handles in-context editing without breaking your document workflow
- Specificity in follow-up prompts drives quality — vague feedback produces vague results
- Your contextual judgment (org politics, audience history, brand nuance) is what AI cannot replace
Key Takeaways
- Refinement loops — not single prompts — are what produce professional-quality AI output
- Voice calibration using your own writing samples closes the gap between generic AI text and your actual style
- The 'critic first' technique surfaces structural problems before you invest time in rewrites
- Factual hallucination is a consistent risk across all major AI tools; verification is non-negotiable for external-facing work
- Your irreplaceable contribution is contextual judgment — AI handles structure and language, you supply organizational intelligence
Check Your Understanding
You ask ChatGPT to rewrite a paragraph and the result still sounds generic. What is the most likely cause?
A colleague pastes an AI-generated market analysis into a client deck without checking the statistics. What is the primary risk?
Which follow-up prompt will most reliably improve an AI draft?
You want to refine an AI-generated proposal introduction but aren't sure what's wrong with it. What should you do first?
A manager uses AI to brainstorm campaign ideas, then edits the output heavily before presenting. She adds competitive context, adjusts ideas to reflect a past failed campaign, and cuts anything that wouldn't fit the brand. What does this scenario best illustrate?
