Lesson 2 of 10


~17 min read

System Prompts and Custom Instructions: Setting Your Defaults

Every AI assistant you use starts with a blank slate — unless you tell it otherwise. System prompts and custom instructions are how you pre-load context, tone, and behavior before a single conversation begins. Instead of re-explaining who you are and what you need every session, you encode it once. ChatGPT, Claude, and Gemini all support this. The difference between a generic AI response and one that feels tailored to your work is usually a well-crafted system prompt sitting behind the scenes.

7 Things You Need to Know Before We Start

  1. A system prompt is a persistent instruction block that shapes every response in a session — it runs before your first message.
  2. Custom Instructions in ChatGPT (available to all users since August 2023) are the consumer-friendly version of system prompts.
  3. Claude accepts system prompts via its API and through Claude.ai's Projects feature, which launched in 2024.
  4. System prompts are invisible to end users in deployed apps — when you use a company's AI chatbot, there's almost certainly a system prompt shaping its behavior.
  5. You can't fully override a model's safety guardrails with a system prompt, but you can dramatically change tone, format, verbosity, and expertise level.
  6. Token length matters: system prompts consume context window space. A 500-word system prompt in GPT-4o leaves less room for your conversation.
  7. Bad system prompts create worse outputs than no system prompt — vague instructions produce vague behavior.

What a System Prompt Actually Does

When you send a message to ChatGPT or Claude, the model doesn't just see your words. It sees a structured input that typically has three layers: the system prompt, the conversation history, and your current message. The system prompt sits at the top of that stack and carries the highest interpretive weight. Think of it as the briefing you give a new hire before their first client call — it sets expectations, defines the role, and establishes what 'good' looks like before any specific task arrives.

Practically, this means a system prompt can tell GPT-4o to always respond in bullet points, to assume you're a senior financial analyst, to never use hedging language, or to default to British English. These aren't just stylistic tweaks — they change the model's reasoning path. A prompt that says 'you are a skeptical editor' produces fundamentally different feedback on a draft than one that says 'you are an encouraging writing coach.' Same model, same draft, radically different output.

  • Sets the AI's persona and expertise level for the entire session
  • Controls default output format (prose, bullets, tables, code blocks)
  • Defines what the model should assume about your background and goals
  • Establishes constraints — what to avoid, what to flag, what to escalate
  • Can specify response length preferences (concise vs. comprehensive)
  • Locks in language, tone, and formality level without repeating it each time
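The three-layer stack described above can be made concrete with a short sketch. This uses the OpenAI-style chat message format (a list of role/content dicts with the system prompt first); the function name and example content are illustrative, not from any official SDK:

```python
# Sketch: how the three input layers are typically assembled for a
# chat-model API call (OpenAI-style message format; names illustrative).

def build_messages(system_prompt, history, user_message):
    """Stack the system prompt above conversation history and the new message."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # prior user/assistant turns, oldest first
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_messages(
    system_prompt="You are a skeptical editor. Respond in bullet points.",
    history=[
        {"role": "user", "content": "Review my draft intro."},
        {"role": "assistant", "content": "- The hook is buried in paragraph two."},
    ],
    user_message="Now review the conclusion.",
)
```

The key point the code makes visible: the system prompt is always the first element the model sees, no matter how long the conversation history grows behind it.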

The Role Framing Trick

Starting your system prompt with 'You are a [specific role] with expertise in [domain]' consistently outperforms generic instructions. 'You are a senior B2B marketing strategist with 15 years in SaaS' gives the model a coherent persona to inhabit — it draws on different training patterns than a blank-slate assistant. Be specific about the role, not just the task.

System Prompt Anatomy: A Field Reference

| Component | What It Controls | Example Instruction | Priority |
| --- | --- | --- | --- |
| Role definition | Expertise level, persona, knowledge base | You are a corporate tax attorney specializing in M&A transactions. | High |
| Audience context | How technical or detailed responses should be | The user is a non-technical marketing manager. | High |
| Format defaults | Structure, length, use of lists/tables/headers | Always respond with a brief summary first, then detail. Use bullet points for lists of 3+. | Medium |
| Tone and voice | Formality, directness, hedging behavior | Be direct. Omit filler phrases. Do not say 'certainly' or 'great question'. | Medium |
| Constraints | What to avoid, refuse, or flag | Do not provide specific legal advice. Recommend consulting counsel for binding decisions. | High |
| Output language | Language, regional spelling, terminology | Use British English. Use UK date formats (DD/MM/YYYY). | Low |
| Behavioral rules | How to handle ambiguity, errors, or edge cases | If the request is unclear, ask one clarifying question before proceeding. | Medium |

Core components of a well-structured system prompt and their relative priority
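A system prompt built from these components can be assembled mechanically, which keeps each piece easy to edit on its own. A minimal sketch — the component texts are the table's own examples, and the function name is illustrative:

```python
# Sketch: assembling a system prompt from labeled components.
# Ordering mirrors the priority column: high-priority components first.

COMPONENTS = [
    ("role", "You are a corporate tax attorney specializing in M&A transactions."),
    ("audience", "The user is a non-technical marketing manager."),
    ("constraints", "Do not provide specific legal advice."),
    ("format", "Always respond with a brief summary first, then detail."),
    ("tone", "Be direct. Omit filler phrases."),
    ("behavior", "If the request is unclear, ask one clarifying question."),
]

def assemble_system_prompt(components):
    """Join labeled instruction components into one prompt block."""
    return "\n".join(text for _, text in components)

prompt = assemble_system_prompt(COMPONENTS)
```

Keeping the components as a labeled list (rather than one hand-edited blob) makes it easy to swap a single layer — say, the audience line — without touching the rest.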

ChatGPT Custom Instructions vs. Claude Projects

ChatGPT's Custom Instructions live in your account settings and apply globally across new conversations. You fill in two fields: what ChatGPT should know about you, and how you want it to respond. These persist until you change them. The practical ceiling is around 1,500 characters per field — enough for a solid persona and format brief, but not a full operational manual. ChatGPT Plus subscribers also get the ability to create GPTs, which have their own dedicated system prompts and can be shared with others.

Claude handles this differently. Claude.ai's Projects feature (launched mid-2024, available on Pro plans at $20/month) lets you create persistent workspaces with a custom system prompt and uploaded reference documents. A Project for client work might include your system prompt, a style guide PDF, and a company fact sheet — all of which Claude references automatically. This makes Claude Projects closer to a configured AI workspace than a simple chat interface. For API users of either model, system prompts are passed programmatically with each request.
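For API users, the request shapes differ between the two vendors: OpenAI-style APIs put the system prompt inside the messages list, while Anthropic's Messages API takes it as a separate top-level field. A sketch of both payloads as plain dicts — no network call is made, and the model identifiers are illustrative:

```python
# Sketch: how a system prompt travels in each vendor's API request.
# Payloads are built as plain dicts here; nothing is sent over the network.

SYSTEM = "You are a senior B2B marketing strategist."

# OpenAI-style: the system prompt is the first entry in the messages list.
openai_payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Draft a one-line value proposition."},
    ],
}

# Anthropic-style: the system prompt is a top-level `system` field,
# kept separate from the user/assistant messages.
anthropic_payload = {
    "model": "claude-sonnet",  # illustrative model name
    "max_tokens": 1024,
    "system": SYSTEM,
    "messages": [
        {"role": "user", "content": "Draft a one-line value proposition."},
    ],
}
```

The practical consequence: code that migrates between the two APIs has to move the system prompt between the messages list and the top-level field, not just rename keys.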

  1. Open ChatGPT → click your profile icon → 'Customize ChatGPT' to access Custom Instructions
  2. Field 1 ('About you'): enter your role, industry, expertise level, and key context
  3. Field 2 ('Response preferences'): specify format, tone, length, and any recurring constraints
  4. For Claude Projects: go to claude.ai → 'Projects' → 'New Project' → paste your system prompt in the Project Instructions field
  5. Test immediately: send a standard work request and check whether the defaults changed
  6. Iterate: if the output still feels generic, add more specificity to the role definition

| Feature | ChatGPT Custom Instructions | ChatGPT GPTs | Claude Projects | Claude API |
| --- | --- | --- | --- | --- |
| Where it lives | Account settings | GPT builder | Project workspace | system parameter in the API request body |
| Scope | All new chats | Specific GPT only | All chats in project | Per-request |
| Character/token limit | ~1,500 chars per field | ~8,000 chars | ~2,000 chars (+ docs) | Model context limit |
| File/doc upload | No | Yes (knowledge files) | Yes (up to 5 docs) | No (text only) |
| Cost | Free + Plus ($20/mo) | Plus ($20/mo) to create | Pro ($20/mo) | Usage-based API pricing |
| Shareable | No | Yes (GPT Store) | No | Yes (via code) |
| Best for | Personal daily use | Repeatable workflows | Project-based work | Developers/automation |

Comparison of system prompt interfaces across ChatGPT and Claude as of 2024

Gemini and Copilot Also Support This

Google Gemini Advanced ($19.99/month via Google One AI Premium) supports custom instructions through 'Gems' — pre-configured AI personas you build and save. Microsoft Copilot embeds system-level instructions at the enterprise tier, configurable by IT admins. If you're using Notion AI, its behavior is shaped by Notion's own system prompts, which you can't edit directly — but you can prepend instructions in each prompt block.

A System Prompt in the Wild

Custom Instruction for a Strategy Consultant

Prompt

SYSTEM PROMPT: You are a senior management consultant with deep expertise in operational efficiency and organizational design. The user is a mid-level manager at a professional services firm preparing internal strategy documents and client-facing slide decks.

Defaults:
- Lead with the key insight or recommendation, not background
- Use structured formats: headers, bullets, numbered lists where appropriate
- Keep responses under 400 words unless the user asks for depth
- Flag assumptions explicitly with [ASSUMPTION: ...]
- Do not use filler phrases or hedging language
- When given a problem, offer one primary recommendation and one alternative

USER MESSAGE: We're seeing a 23% drop in project margins over 18 months. What should I investigate first?

AI Response

**Primary recommendation:** Start with utilization rate by project type and seniority level. A margin drop of this size over 18 months almost always traces back to one of three causes:

1. **Scope creep without billing adjustment** — projects running over hours without change orders
2. **Seniority mix shift** — more senior (higher cost) staff on projects priced for junior delivery
3. **Rate stagnation** — billing rates unchanged while salary costs rose

[ASSUMPTION: You have project-level P&L data accessible by project manager]

**First diagnostic:** Pull utilization reports segmented by project type. Look for projects where actual hours exceed estimated by >15%. That's your fastest signal.

**Alternative approach:** If project data is messy, start at the client level — identify the 3 lowest-margin clients and interview the project leads. Pattern recognition often surfaces faster than data analysis in firms with inconsistent time-tracking.

Writing Effective Instructions: The Specificity Principle

The single biggest mistake professionals make with custom instructions is staying abstract. 'Be helpful and professional' tells the model almost nothing it doesn't already default to. Every instruction you write should be testable — you should be able to look at an AI response and confirm whether the instruction was followed. 'Use bullet points for any list of three or more items' is testable. 'Be clear and organized' is not. Specificity also helps the model handle edge cases: when it knows your exact defaults, ambiguous situations resolve closer to your preference.

A useful drafting technique: write your system prompt by imagining the most annoying AI behaviors you want to eliminate. If ChatGPT always writes five-paragraph essays when you want a quick answer, add 'Default to concise responses under 200 words unless I ask for more.' If it over-hedges with phrases like 'it's important to consider,' ban them explicitly. Negative instructions ('do not,' 'avoid,' 'never') work just as well as positive ones and are often easier to write from lived frustration with the tool.
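Because testable instructions can be verified mechanically, you can even sketch a small compliance check to run against AI responses while tuning your prompt. The function name and rules here are illustrative, mirroring the examples in the text:

```python
# Sketch: mechanically checking a response against testable instructions
# (a word limit and a banned-phrase list). Names are illustrative.

def check_compliance(response, max_words=200, banned_phrases=()):
    """Return a list of instruction violations found in a response."""
    violations = []
    if len(response.split()) > max_words:
        violations.append(f"over {max_words} words")
    for phrase in banned_phrases:
        if phrase.lower() in response.lower():
            violations.append(f"banned phrase: {phrase!r}")
    return violations

resp = "Great question! It's important to consider several factors here."
issues = check_compliance(
    resp,
    max_words=200,
    banned_phrases=["great question", "it's important to consider"],
)
```

If an instruction can't be checked this way — even informally, by eye — it probably isn't specific enough to belong in your system prompt.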

| Weak Instruction | Why It Fails | Strong Version |
| --- | --- | --- |
| Be concise. | No measurable standard — 'concise' means different things | Keep responses under 150 words unless I ask for detail. |
| Write professionally. | Too broad — every model already aims for this | Use formal register. No contractions. No casual openers like 'Sure!' or 'Great!' |
| Help me with marketing. | Not an instruction — it's a topic | Assume I work in B2B SaaS marketing targeting mid-market procurement teams. |
| Be accurate. | Impossible to enforce via instruction alone | Flag any claim you're uncertain about with [UNCERTAIN]. Do not fabricate statistics. |
| Format things nicely. | Subjective and vague | Use headers for responses over 200 words. Use tables for comparisons of 3+ items. |
| Understand my context. | The model can't read your mind | I am a CFO at a 200-person manufacturing company. Assume financial literacy. |

Weak vs. strong system prompt instructions — and why the difference matters

System Prompts Are Not Secrets

If you deploy an AI tool with a system prompt — even inside a company chatbot — assume it can be extracted. Researchers and curious users regularly get models to reveal their system prompts by asking directly or using jailbreak-adjacent phrasing. Never put genuinely sensitive information (API keys, proprietary formulas, confidential client names) in a system prompt. Treat it like a semi-public configuration file, not a vault.

Build Your First Custom Instruction Set

Goal: Produce a working custom instruction set that changes at least three observable behaviors in AI responses compared to your default experience.

  1. Open ChatGPT (free or Plus) and navigate to Settings → 'Customize ChatGPT'. If using Claude, open a new Project at claude.ai.
  2. In Field 1 (ChatGPT) or the Project Instructions box (Claude), write your role and context: your job title, industry, the types of tasks you most commonly use AI for, and your technical literacy level.
  3. In Field 2 (ChatGPT) or continuing the same instructions box (Claude), add your format defaults: preferred response length, whether you want bullets or prose, any phrases you want banned.
  4. Add at least one constraint relevant to your work — something the AI should always flag, avoid, or clarify before proceeding.
  5. Save the instructions and open a fresh conversation.
  6. Send a real work task you'd normally use AI for — something you've tried before without custom instructions.
  7. Compare the output to what you'd typically receive. Note what changed and adjust one instruction based on what still feels off.

Quick Reference: System Prompt Essentials

  • System prompts run before your first message and shape every response in the session
  • ChatGPT Custom Instructions: ~1,500 chars per field, account-level, free to use
  • Claude Projects: ~2,000 chars + document uploads, Pro plan required ($20/month)
  • Always start with a specific role definition — 'You are a [role] with expertise in [domain]'
  • Make every instruction testable: you should be able to verify compliance in the output
  • Negative instructions work well — ban specific phrases or behaviors you dislike
  • System prompts consume context window tokens — keep them tight and purposeful
  • Never store sensitive data (credentials, confidential info) in a system prompt
  • GPT system prompts support up to ~8,000 characters in the GPT builder interface
  • Gemini uses 'Gems' for saved personas; Copilot uses admin-configured instructions at enterprise tier

Key Takeaways from This Section

  1. System prompts are persistent instruction layers — they define the AI's defaults before any task arrives.
  2. Custom Instructions (ChatGPT) and Projects (Claude) are the practical interfaces for non-developers to set these defaults.
  3. Specificity is the core skill: vague instructions produce vague behavior, testable instructions produce consistent results.
  4. The anatomy of a strong system prompt includes: role definition, audience context, format defaults, tone rules, and constraints.
  5. Different tools handle system prompts differently — knowing the character limits and scope of each platform helps you work within them.
  6. Weak instructions fail because they're untestable; rewrite them with measurable criteria or explicit examples.
  7. Treat system prompts as semi-public configuration — never embed sensitive information in them.

What Goes Inside a System Prompt

Part 1 established what system prompts are and why they exist. Now you need to know what to actually put in them. The best system prompts are built from five distinct layers: identity, context, constraints, format rules, and behavioral nudges. Most beginners write one or two of these and wonder why results are inconsistent. Professionals stack all five. Each layer handles a different failure mode — identity stops role confusion, constraints stop scope creep, format rules stop wall-of-text responses that waste your time.

The Five Layers of an Effective System Prompt

  1. Identity — Who the AI is playing: role, expertise level, name if needed ('You are a senior financial analyst with 15 years in equity research')
  2. Context — What world it operates in: your industry, your company, your audience ('Our customers are mid-market SaaS companies with 50–500 employees')
  3. Constraints — What it must never do: topics to avoid, formats to reject, claims it cannot make ('Never recommend specific stocks or give legal advice')
  4. Format rules — How output should look: length, structure, use of headers, bullet density, response style ('Always respond in under 200 words unless asked for a full report')
  5. Behavioral nudges — Tone and reasoning style: direct vs. diplomatic, ask clarifying questions vs. attempt and note assumptions ('When the request is ambiguous, state your assumption and proceed')

The One-Sentence Test

After writing your system prompt, read just the first sentence. It should tell the AI exactly who it is and what it's for. If your first sentence is vague ('You are a helpful assistant'), rewrite it. 'You are a procurement analyst helping a manufacturing company evaluate supplier contracts' gives the model something to anchor every response to.

System Prompt Component Reference

| Layer | What to Write | Example Phrase | Fixes This Problem |
| --- | --- | --- | --- |
| Identity | Role + seniority + domain | 'You are a senior UX researcher' | Generic, unfocused answers |
| Context | Industry + audience + product | 'Users are non-technical HR managers' | Wrong assumptions about reader knowledge |
| Constraints | Hard limits on topics or actions | 'Never speculate on legal outcomes' | Hallucinated advice in sensitive areas |
| Format rules | Length, structure, style | 'Use bullet points, max 150 words' | Unreadable walls of text |
| Behavioral nudges | Reasoning style, tone, error handling | 'Ask one clarifying question if intent is unclear' | Confident wrong answers on ambiguous prompts |

Five-layer system prompt framework — use this as your build checklist

Platform Differences That Change Everything

System prompts don't behave identically across platforms. ChatGPT's Custom Instructions live in two fields: 'What would you like ChatGPT to know about you?' and 'How would you like ChatGPT to respond?' — effectively splitting context from format rules. Claude's system prompt is a single free-form field with a higher character limit and stronger instruction-following on constraints. Gemini Advanced's system prompt support (called 'Gems') is the newest of the three and currently the least flexible for complex behavioral rules. Knowing which platform you're on determines how you structure your input.

The practical implication: a system prompt that works perfectly in Claude may need restructuring for ChatGPT. Claude handles long, nuanced constraint lists well — it genuinely reads and applies them. ChatGPT benefits from shorter, more directive phrasing in each field. If you use both tools (many professionals do), maintain two versions of your core system prompt. It takes 20 minutes once and saves hours of prompt-repair later. Notion AI and Perplexity don't expose true system prompts to end users, so on those platforms your technique shifts to persistent context blocks at the top of every session.

  • ChatGPT Custom Instructions: two fields, ~1,500 characters each, applies to all GPT-4o conversations by default
  • Claude Projects: single system prompt field, higher limit (~2,000+ tokens), strongest constraint adherence of the three
  • Gemini Gems: template-based system prompts, good for role-playing personas, weaker on complex multi-rule sets
  • GitHub Copilot: system behavior set by your IDE and workspace settings, not a text field — configured via .github/copilot-instructions.md
  • Notion AI: no exposed system prompt — use a pinned 'context block' at the top of your workspace as a workaround
  • Perplexity: no persistent system prompt for standard users — Pro users can set default focus modes but not full behavioral instructions
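The GitHub Copilot case is worth seeing concretely, since its "system prompt" is just a Markdown file committed to the repository. A minimal illustrative example — the file path is real Copilot convention, but the contents below are hypothetical, not an official template:

```markdown
# .github/copilot-instructions.md — applied to Copilot chats in this repo

We use TypeScript with strict mode enabled; prefer `unknown` over `any`.
All new functions need JSDoc comments and a unit test under `__tests__/`.
Follow the existing ESLint config; do not suggest disabling rules inline.
Keep suggested diffs small — one logical change per suggestion.
```

Because the file lives in version control, the whole team inherits the same defaults, and changes to them are reviewed like any other code change.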

| Platform | System Prompt Access | Character Limit (approx.) | Persists Across Sessions? | Best For |
| --- | --- | --- | --- | --- |
| ChatGPT (Plus/Team) | Custom Instructions — two fields | ~1,500 per field | Yes, until changed | General professional use |
| Claude (Pro/Team) | Projects system prompt | 2,000+ tokens | Yes, per Project | Complex constraint-heavy workflows |
| Gemini Advanced | Gems | ~500 words | Yes, per Gem | Role personas, simple tasks |
| GitHub Copilot | .github/copilot-instructions.md | No hard limit | Yes, per repo | Code style and review standards |
| Perplexity Pro | Default focus + prompt prefix | Minimal | Partial | Research with source constraints |

Platform-by-platform system prompt capabilities — accurate as of mid-2025

Claude Projects Are the Most Powerful Option Right Now

Claude's Projects feature lets you attach a system prompt AND upload reference documents — style guides, company policies, product specs — that the model reads before every conversation. This effectively turns Claude into a context-aware specialist without any per-session setup. For professionals who repeat similar high-stakes tasks (proposals, analysis, client communications), Projects cuts setup time to zero.

A Real System Prompt in the Wild

System prompt for a marketing strategist using ChatGPT

Prompt

You are a B2B marketing strategist with deep expertise in SaaS demand generation. You work with my company, a mid-market project management software vendor targeting operations teams at companies with 200–2,000 employees. Our brand voice is direct, data-driven, and never uses jargon like 'synergy' or 'holistic.' When I ask for copy, keep it under 100 words unless I specify otherwise. When I ask for strategy, give me a numbered action list with one-line rationale for each item. If my request is ambiguous, state your interpretation first, then answer. Never recommend paid ad budgets above $50K without flagging that CFO approval is typically required.

AI Response

Understood. I'll operate as your B2B SaaS marketing strategist focused on operations-team buyers at mid-market companies. For copy requests: under 100 words, direct tone, zero jargon. For strategy requests: numbered action list with rationale. I'll flag my interpretation when your intent isn't explicit, and I'll note CFO-approval thresholds on any large budget recommendations. Ready — what's the first challenge?

Where System Prompts Break Down

System prompts are not foolproof. Three failure modes appear repeatedly. First, instruction overload: cramming 20 rules into one prompt causes the model to honor the first five and quietly ignore the rest. Models have finite attention, and a 1,200-word system prompt competes with your actual query for that attention. Keep system prompts under 400 words for ChatGPT and under 600 for Claude. If you have more rules than that, they belong in a reference document, not the system prompt itself.

Second failure mode: contradictory instructions. 'Be concise' and 'always provide full context with examples' cannot coexist without a priority rule. The model will pick one interpretation and apply it inconsistently. Third failure mode: prompt drift over long conversations. After 15–20 exchanges, even a strong system prompt loses influence as the conversation history grows. The model starts weighting recent messages more heavily than the original instructions. For long working sessions, paste a one-line reminder of your key constraint — 'reminder: keep all responses under 150 words' — every 10 or so exchanges.

  • Keep system prompts under 400 words (ChatGPT) or 600 words (Claude) to avoid instruction dropout
  • Resolve contradictions explicitly: 'Be concise — but if asked for a full report, length is no longer constrained'
  • Use priority language: 'Above all else...', 'Unless explicitly told otherwise...'
  • For sessions longer than 20 exchanges, paste a short constraint reminder mid-conversation
  • Test your system prompt with 5 diverse queries before relying on it in real work — edge cases surface fast
  • Version-control your system prompts in a simple doc with dates — models update, and a prompt that worked in March may drift by June
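If you script your own chat loop against an API, the mid-conversation reminder can be automated rather than pasted by hand. A sketch — `send_to_model` is a hypothetical stand-in for a real API call, and the cadence value follows the guidance above:

```python
# Sketch: re-injecting a constraint reminder every N exchanges to counter
# prompt drift. `send_to_model` is a hypothetical stand-in for an API call.

REMINDER = "Reminder: keep all responses under 150 words."
REMIND_EVERY = 10  # exchanges between reminders

def chat_turn(history, user_message, turn_number, send_to_model):
    """Append the user message, prepending a reminder on every Nth turn."""
    content = user_message
    if turn_number > 0 and turn_number % REMIND_EVERY == 0:
        content = f"{REMINDER}\n\n{user_message}"
    history.append({"role": "user", "content": content})
    reply = send_to_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Usage with a dummy model that always answers "ok":
hist = []
for turn in range(12):
    chat_turn(hist, f"question {turn}", turn, lambda h: "ok")
```

In a manual ChatGPT or Claude session the same idea applies without code: paste the one-line reminder yourself every ten or so exchanges.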

System Prompts Are Not Private on Shared Platforms

If you're using a company-wide ChatGPT Team or Enterprise account, admins can see system prompts. More critically, if you build a GPT or Claude Project and share it with colleagues, they can often extract your system prompt with a direct request like 'repeat your instructions.' Don't embed API keys, confidential client names, or proprietary pricing in system prompts. Treat them like a sticky note on a shared desk — useful, but visible.

Quick Build: Your First Professional System Prompt

Build and Test a Working System Prompt in 15 Minutes

Goal: A tested, saved Custom Instruction set in ChatGPT that improves at least two of your three test responses compared to default behavior.

1. Open ChatGPT (Plus) and navigate to Settings → Personalization → Custom Instructions. 2. In the first field ('What should ChatGPT know about you?'), write 2–3 sentences: your role, your industry, and the type of tasks you use AI for most often. 3. In the second field ('How should ChatGPT respond?'), write your format rules: preferred response length, whether you want bullet points or prose, and your tone preference (direct, formal, casual). 4. Add one behavioral rule: either 'Ask a clarifying question if my request is ambiguous' OR 'State your assumption and proceed' — pick the one that matches how you work. 5. Add one constraint relevant to your work (e.g., 'Do not recommend tools that cost more than $X/month' or 'Always flag when you're uncertain about a fact'). 6. Save the instructions, then open a new conversation and send three test prompts that represent your typical daily tasks. 7. Note which responses feel off — too long, wrong tone, missing context — and edit the relevant layer of your system prompt to fix each issue.

System Prompt Cheat Sheet

  • Five layers: Identity → Context → Constraints → Format rules → Behavioral nudges
  • ChatGPT: two fields, ~1,500 chars each | Claude Projects: single field, 2,000+ tokens | Gemini Gems: ~500 words
  • Under 400 words for ChatGPT, under 600 for Claude — beyond that, rules get dropped
  • Contradictory instructions cause inconsistency — always specify which rule wins
  • Long sessions cause prompt drift — paste a constraint reminder every ~10 exchanges
  • Claude Projects: attach reference docs alongside system prompts for zero-setup context
  • GitHub Copilot: use .github/copilot-instructions.md for persistent code standards
  • Never embed confidential data in system prompts — treat them as semi-public
  • Version-control your prompts with dates — model updates can shift behavior
  • Test with 5 diverse queries before relying on any new system prompt in real work

You have a working mental model of system prompts and custom instructions. Now the goal is precision — knowing exactly which levers to pull for different work contexts, avoiding the mistakes that silently degrade your outputs, and building a personal defaults library you actually reuse. The professionals who get the most from ChatGPT, Claude, and Gemini are not the ones who write the cleverest one-off prompts. They are the ones who invest 20 minutes once and benefit every session after that.

Tuning for Context: Role-Specific Default Strategies

Different professional roles demand radically different AI defaults. A marketing director wants punchy, persuasive copy at a tenth-grade reading level. A strategy consultant wants structured analysis with explicit assumptions and caveats. A data analyst wants code-first responses with inline comments. Your custom instructions should encode your role's specific communication contract — the unspoken rules your colleagues already follow when they brief you. Generic defaults produce generic outputs. Role-tuned defaults make the AI feel like it was hired specifically for your job.

| Role | Tone Setting | Format Default | Key Constraint to Include |
| --- | --- | --- | --- |
| Marketing Manager | Persuasive, punchy | Short paragraphs, bullet CTAs | No jargon; reading level grade 8–10 |
| Strategy Consultant | Analytical, precise | Numbered sections, explicit assumptions | Flag uncertainty; cite reasoning |
| Data Analyst | Technical, concise | Code blocks + comments first | Python preferred; explain edge cases |
| HR / People Ops | Empathetic, neutral | Plain prose, no bullet overload | Avoid legal claims; stay descriptive |
| Product Manager | Crisp, outcome-focused | User story format when applicable | Always link feature to user need |
| Executive / C-Suite | Strategic, high-level | Executive summary first | Max 3 key points per response |

Default strategy by professional role — adapt for your specific context

What to Include — and What to Leave Out

Overstuffed custom instructions backfire. When you load 800 words of preferences into a system prompt, the model starts dropping instructions — particularly the ones near the end. ChatGPT's custom instructions field has a 1,500-character limit precisely because brevity forces prioritization. Claude's system prompts can be longer, but longer does not mean better. Treat your defaults like a job brief: include the non-negotiables, skip the nice-to-haves. If a preference only applies once a month, handle it in that specific prompt instead.

  • Include: your role, your audience, preferred output format, tone, any hard constraints (no legal claims, no speculation presented as fact)
  • Include: response length preference (concise vs. comprehensive) and whether you want the model to ask clarifying questions
  • Include: any domain-specific vocabulary or frameworks you use regularly (OKRs, MECE, RACI, Jobs-to-be-Done)
  • Exclude: preferences that vary by task — handle those inline
  • Exclude: redundant instructions ('always be helpful') that the model already defaults to
  • Exclude: contradictory instructions ('be brief' + 'always give full examples') — pick one or scope them separately
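Since ChatGPT's fields cap out around 1,500 characters, a quick length check before pasting can save you from a silent truncation. A sketch — the limit values come from this lesson's figures, not from any official constant:

```python
# Sketch: pre-flight length check for a custom instruction block before
# pasting it into a platform field. Limits follow this lesson's figures.

FIELD_LIMITS = {"chatgpt_field_chars": 1500, "gemini_gem_words": 500}

def fits_chatgpt_field(instructions: str) -> bool:
    """True if the block fits in one ChatGPT Custom Instructions field."""
    return len(instructions) <= FIELD_LIMITS["chatgpt_field_chars"]

draft = (
    "I'm a B2B SaaS product manager. My readers are technical buyers. "
    "Lead with a summary. Use headers for sections over 200 words. "
    "Professional but direct. No filler phrases."
)
```

If a draft doesn't fit, that's usually a signal to cut nice-to-haves rather than to compress the wording.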

The Brevity Test

Read your custom instructions aloud. If any sentence sounds like something the AI would do anyway without being told, delete it. Your instructions should only contain information the model could not reasonably infer on its own.

| Instruction Type | Include? | Example |
| --- | --- | --- |
| Your job title and primary audience | Yes | 'I'm a B2B SaaS product manager. My readers are technical buyers.' |
| Preferred output structure | Yes | 'Lead with a summary. Use headers for sections over 200 words.' |
| Tone and register | Yes | 'Professional but direct. No filler phrases.' |
| Domain frameworks you use | Yes | 'Apply MECE structure to analysis tasks.' |
| Generic quality requests | No | 'Always be accurate and helpful.' |
| Task-specific one-offs | No | 'Today I need a 500-word blog post.' — put this in the chat |
| Contradictory instructions | Never | 'Be brief' + 'Always provide comprehensive detail' |

Inclusion filter for custom instructions and system prompts

Claude vs. ChatGPT: Where Your Instructions Live

In ChatGPT, custom instructions persist across all conversations automatically. In Claude, system prompts must be set per-project (in Projects) or per-API call — there is no global default in the standard interface. Gemini's 'Gems' feature lets you save named configurations. Know which tool you're in before assuming your defaults are active.

Strong Custom Instruction — Strategy Consultant

Prompt

I'm a management consultant specializing in operational efficiency for mid-market manufacturing firms. My outputs go to CFOs and COOs. Always structure analysis using MECE principles. Lead with the key insight, then supporting evidence, then implications. Flag any assumption you're making. Use precise numbers when I provide data; don't invent them. Keep responses under 400 words unless I ask for a full document. Ask one clarifying question before starting any analysis task.

AI Response

Understood. I'll apply MECE structuring to all analysis, lead with the headline insight, and flag assumptions explicitly. I'll stay under 400 words for standard responses and ask one clarifying question before beginning analysis tasks. Ready when you are.

Common Failures and How to Fix Them

Even well-intentioned custom instructions fail in predictable ways. The most common: instructions that conflict with the task at hand. If your default says 'be concise' but you ask for a 2,000-word report, the model will be confused about which instruction to honor — and it will usually follow the in-conversation request, quietly dropping your default. The fix is to write defaults that apply to mode, not length, and override length explicitly in each prompt.

Defaults Don't Override Safety Guardrails

System prompts cannot instruct the model to bypass content policies, fabricate citations, or impersonate real people. Attempts to use custom instructions for jailbreaking are detected and ignored — and in some platforms, flagged. Use your defaults for professional customization, not policy circumvention.
| Failure Mode | What Happens | Fix |
| --- | --- | --- |
| Conflicting length instructions | Model ignores one, picks the other inconsistently | Set tone/format defaults; specify length per task |
| Instructions too long (800+ words) | Later instructions dropped silently | Keep under 300 words; prioritize ruthlessly |
| No format specified | Output format varies unpredictably | State your default format explicitly (bullets, prose, headers) |
| Role stated but audience omitted | Tone calibration is off | Always pair role with intended reader |
| Assuming defaults are active across tools | Using Claude when you set up ChatGPT defaults | Verify active instructions at the start of each session |

Common system prompt failures and targeted fixes

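Most of these failure modes are mechanical enough to check automatically before you paste your defaults into a tool. Here is a hypothetical linter sketch; the thresholds mirror the table above, and the function and its heuristics are illustrative, not part of any real platform.

```python
def lint_instructions(text: str) -> list[str]:
    """Flag common custom-instruction failures before deployment."""
    warnings = []
    word_count = len(text.split())
    if word_count > 300:
        warnings.append(
            f"too long ({word_count} words); later instructions may be dropped"
        )
    lowered = text.lower()
    # Length words in the defaults invite conflicts with per-task requests.
    if any(w in lowered for w in ("concise", "brief", "short", "words")):
        warnings.append("mentions length; specify length per task instead")
    if not any(w in lowered for w in ("bullet", "header", "prose", "format")):
        warnings.append("no default output format specified")
    if "i'm a" not in lowered and "i am a" not in lowered:
        warnings.append("no role statement found")
    return warnings

print(lint_instructions("Be concise and helpful."))
```

A clean block such as "I'm a consultant. Use bullet points for lists." passes every check; the vague "Be concise and helpful." trips three of them.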
Build Your Personal Defaults Document

Goal: A saved, tested custom instruction block you can reuse and iterate on — your permanent AI defaults baseline.

1. Open a blank document (Notion, Google Docs, or a notes app) — this becomes your AI Defaults Library.
2. Write your role statement in one sentence: your title, your domain, and who reads your outputs.
3. List your top three output format preferences (e.g., 'headers for long content,' 'bullet points for lists,' 'summary first').
4. Write one sentence on tone: the register you want the AI to use consistently.
5. List any hard constraints — things the AI must never do in your context (e.g., 'never present speculation as confirmed fact').
6. Combine steps 2–5 into a single custom instruction block under 250 words and paste it into ChatGPT's custom instructions or a Claude Project system prompt.
7. Run three different work tasks using these defaults and note in your document which instructions worked, which were ignored, and what you'd change.
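The assembly in steps 2 through 6 can be sketched as a small template builder that also enforces the 250-word budget. The function and field names are illustrative assumptions, not a prescribed format.

```python
def build_defaults(role: str, formats: list[str], tone: str,
                   constraints: list[str], limit: int = 250) -> str:
    """Combine role, format preferences, tone, and hard constraints
    into one custom-instruction block, enforcing the word budget."""
    block = "\n".join([
        role,
        "Format preferences: " + "; ".join(formats) + ".",
        f"Tone: {tone}.",
        "Hard constraints: " + "; ".join(constraints) + ".",
    ])
    word_count = len(block.split())
    if word_count > limit:
        raise ValueError(f"block is {word_count} words; trim to {limit} or fewer")
    return block

block = build_defaults(
    role="I'm a product manager at a B2B SaaS firm; my readers are engineers and executives.",
    formats=["summary first", "headers for long content", "bullet points for lists"],
    tone="direct and plain-spoken",
    constraints=["never present speculation as confirmed fact"],
)
print(block)
```

Raising an error on an oversized block is deliberate: per the failure table earlier, instructions past roughly 300 words get dropped silently, so it is better to fail loudly at build time than quietly at chat time.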

Quick-Reference Cheat Sheet

  • Custom instructions in ChatGPT: Settings → Personalization → Custom Instructions (1,500-char limit)
  • Claude system prompts: set per-project via Projects, or per-conversation in the API
  • Gemini custom personas: create via 'Gems' in the Gemini interface
  • Always include: role + audience + format preference + tone + hard constraints
  • Always exclude: generic quality requests, task-specific one-offs, contradictory instructions
  • Keep custom instructions under 300 words for reliable instruction-following
  • Defaults set the baseline — override inline for task-specific needs
  • Test your defaults with three real work tasks before treating them as final
  • Revisit and update your defaults every 4–6 weeks as your work evolves

Key Takeaways

  1. Role-specific defaults outperform generic ones — encode your professional context explicitly
  2. Brevity in custom instructions is a feature, not a limitation — prioritize ruthlessly
  3. Know where your defaults live in each tool; they do not transfer between platforms automatically
  4. Conflicting instructions cause silent failures — write defaults that complement, not compete with, your task prompts
  5. Your defaults are a living document — test, observe, and iterate every few weeks
  6. The investment is front-loaded: 20 minutes of setup pays back across hundreds of future sessions
Knowledge Check

A consultant sets custom instructions saying 'be concise' but then asks ChatGPT for a comprehensive 2,000-word strategic report. What is the most likely outcome?

You set up detailed custom instructions in ChatGPT and then switch to Claude for a project. Which statement is accurate?

Which of the following is the strongest example of a useful custom instruction?

Why is keeping custom instructions under 300 words recommended?

A product manager wants to use custom instructions to make ChatGPT always write in the 'Jobs-to-be-Done' framework and skip clarifying questions. She also wants it to never make legal claims. Which approach is correct?
