System prompts and custom instructions: setting your defaults
~17 min read
Every AI assistant you use starts with a blank slate — unless you tell it otherwise. System prompts and custom instructions are how you pre-load context, tone, and behavior before a single conversation begins. Instead of re-explaining who you are and what you need every session, you encode it once. ChatGPT, Claude, and Gemini all support this. The difference between a generic AI response and one that feels tailored to your work? Usually, it's a well-crafted system prompt sitting behind the scenes.
7 Things You Need to Know Before We Start
- A system prompt is a persistent instruction block that shapes every response in a session — it runs before your first message.
- Custom Instructions in ChatGPT (available to all users since August 2023) are the consumer-friendly version of system prompts.
- Claude accepts system prompts via its API and through Claude.ai's Projects feature, which launched in 2024.
- System prompts are invisible to end users in deployed apps — when you use a company's AI chatbot, there's almost certainly a system prompt shaping its behavior.
- You can't fully override a model's safety guardrails with a system prompt, but you can dramatically change tone, format, verbosity, and expertise level.
- Token length matters: system prompts consume context window space. A 500-word system prompt in GPT-4o leaves less room for your conversation.
- Bad system prompts create worse outputs than no system prompt — vague instructions produce vague behavior.
What a System Prompt Actually Does
When you send a message to ChatGPT or Claude, the model doesn't just see your words. It sees a structured input that typically has three layers: the system prompt, the conversation history, and your current message. The system prompt sits at the top of that stack and carries the highest interpretive weight. Think of it as the briefing you give a new hire before their first client call — it sets expectations, defines the role, and establishes what 'good' looks like before any specific task arrives.
Practically, this means a system prompt can tell GPT-4o to always respond in bullet points, to assume you're a senior financial analyst, to never use hedging language, or to default to British English. These aren't just stylistic tweaks — they change the model's reasoning path. A prompt that says 'you are a skeptical editor' produces fundamentally different feedback on a draft than one that says 'you are an encouraging writing coach.' Same model, same draft, radically different output.
- Sets the AI's persona and expertise level for the entire session
- Controls default output format (prose, bullets, tables, code blocks)
- Defines what the model should assume about your background and goals
- Establishes constraints — what to avoid, what to flag, what to escalate
- Can specify response length preferences (concise vs. comprehensive)
- Locks in language, tone, and formality level without repeating it each time
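The three-layer stack described above can be sketched as the message array most chat APIs accept. This is a minimal illustration using the OpenAI-style chat format (field names vary by provider); the prompt text is example content, not a recommended template.

```python
# Sketch of the three-layer input stack: system prompt, history, current message.
# Field names follow the OpenAI-style chat format; other providers differ.

system_prompt = (
    "You are a skeptical editor. Respond in bullet points. "
    "Assume the user is a senior financial analyst. Use British English."
)

conversation_history = [
    {"role": "user", "content": "Review this draft intro."},
    {"role": "assistant", "content": "- The opening claim is unsupported..."},
]

current_message = {"role": "user", "content": "Now tighten the second paragraph."}

# The system prompt sits at the top of the stack, ahead of all history.
messages = [{"role": "system", "content": system_prompt}]
messages += conversation_history
messages.append(current_message)
```

Every request sends this whole stack, which is why a long system prompt eats context window space on every single turn.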
The Role Framing Trick
System Prompt Anatomy: A Field Reference
| Component | What It Controls | Example Instruction | Priority |
|---|---|---|---|
| Role definition | Expertise level, persona, knowledge base | You are a corporate tax attorney specializing in M&A transactions. | High |
| Audience context | How technical or detailed responses should be | The user is a non-technical marketing manager. | High |
| Format defaults | Structure, length, use of lists/tables/headers | Always respond with a brief summary first, then detail. Use bullet points for lists of 3+. | Medium |
| Tone and voice | Formality, directness, hedging behavior | Be direct. Omit filler phrases. Do not say 'certainly' or 'great question'. | Medium |
| Constraints | What to avoid, refuse, or flag | Do not provide specific legal advice. Recommend consulting counsel for binding decisions. | High |
| Output language | Language, regional spelling, terminology | Use British English. Use UK date formats (DD/MM/YYYY). | Low |
| Behavioral rules | How to handle ambiguity, errors, or edge cases | If the request is unclear, ask one clarifying question before proceeding. | Medium |
ChatGPT Custom Instructions vs. Claude Projects
ChatGPT's Custom Instructions live in your account settings and apply globally across new conversations. You fill in two fields: what ChatGPT should know about you, and how you want it to respond. These persist until you change them. The practical ceiling is around 1,500 characters per field — enough for a solid persona and format brief, but not a full operational manual. ChatGPT Plus subscribers also get the ability to create GPTs, which have their own dedicated system prompts and can be shared with others.
Claude handles this differently. Claude.ai's Projects feature (launched mid-2024, available on Pro plans at $20/month) lets you create persistent workspaces with a custom system prompt and uploaded reference documents. A Project for client work might include your system prompt, a style guide PDF, and a company fact sheet — all of which Claude references automatically. This makes Claude Projects closer to a configured AI workspace than a simple chat interface. For API users of either model, system prompts are passed programmatically with each request.
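To make the programmatic path concrete, here is a sketch of how one instruction block maps onto the two request shapes. The payloads are built as plain dicts so the snippet runs without API keys or SDKs; the model names are illustrative placeholders, not current product identifiers.

```python
# The same instruction block, shaped for each provider's request format.
# Built as plain dicts so this runs without credentials; model names are
# placeholders.

instructions = "You are a corporate tax attorney specializing in M&A transactions."
user_msg = "Summarize the tax implications of an asset sale vs. a stock sale."

# OpenAI-style: the system prompt is the first message in the messages list.
openai_payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": instructions},
        {"role": "user", "content": user_msg},
    ],
}

# Anthropic-style: the system prompt is a top-level `system` parameter,
# kept separate from the user/assistant message list.
anthropic_payload = {
    "model": "claude-model-placeholder",
    "system": instructions,
    "messages": [{"role": "user", "content": user_msg}],
}
```

The structural difference matters when you maintain prompts for both platforms: one treats the system prompt as a message, the other as a dedicated request field.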
- Open ChatGPT → click your profile icon → 'Customize ChatGPT' to access Custom Instructions
- Field 1 ('About you'): enter your role, industry, expertise level, and key context
- Field 2 ('Response preferences'): specify format, tone, length, and any recurring constraints
- For Claude Projects: go to claude.ai → 'Projects' → 'New Project' → paste your system prompt in the Project Instructions field
- Test immediately: send a standard work request and check whether the defaults changed
- Iterate: if the output still feels generic, add more specificity to the role definition
| Feature | ChatGPT Custom Instructions | ChatGPT GPTs | Claude Projects | Claude API |
|---|---|---|---|---|
| Where it lives | Account settings | GPT builder | Project workspace | API request body (system parameter) |
| Scope | All new chats | Specific GPT only | All chats in project | Per-request |
| Character/token limit | ~1,500 chars per field | ~8,000 chars | ~2,000 chars (+ docs) | Model context limit |
| File/doc upload | No | Yes (knowledge files) | Yes (up to 5 docs) | No (text only) |
| Cost | Free + Plus ($20/mo) | Plus ($20/mo) to create | Pro ($20/mo) | Usage-based per-token pricing |
| Shareable | No | Yes (GPT Store) | No | Yes (via code) |
| Best for | Personal daily use | Repeatable workflows | Project-based work | Developers/automation |
Gemini and Copilot Also Support This
A System Prompt in the Wild
Prompt
SYSTEM PROMPT: You are a senior management consultant with deep expertise in operational efficiency and organizational design. The user is a mid-level manager at a professional services firm preparing internal strategy documents and client-facing slide decks.

Defaults:
- Lead with the key insight or recommendation, not background
- Use structured formats: headers, bullets, numbered lists where appropriate
- Keep responses under 400 words unless the user asks for depth
- Flag assumptions explicitly with [ASSUMPTION: ...]
- Do not use filler phrases or hedging language
- When given a problem, offer one primary recommendation and one alternative

USER MESSAGE: We're seeing a 23% drop in project margins over 18 months. What should I investigate first?
AI Response
**Primary recommendation:** Start with utilization rate by project type and seniority level. A margin drop of this size over 18 months almost always traces back to one of three causes:

1. **Scope creep without billing adjustment** — projects running over hours without change orders
2. **Seniority mix shift** — more senior (higher cost) staff on projects priced for junior delivery
3. **Rate stagnation** — billing rates unchanged while salary costs rose

[ASSUMPTION: You have project-level P&L data accessible by project manager]

**First diagnostic:** Pull utilization reports segmented by project type. Look for projects where actual hours exceed estimated by >15%. That's your fastest signal.

**Alternative approach:** If project data is messy, start at the client level — identify the 3 lowest-margin clients and interview the project leads. Pattern recognition often surfaces faster than data analysis in firms with inconsistent time-tracking.
Writing Effective Instructions: The Specificity Principle
The single biggest mistake professionals make with custom instructions is staying abstract. 'Be helpful and professional' tells the model almost nothing it doesn't already default to. Every instruction you write should be testable — you should be able to look at an AI response and confirm whether the instruction was followed. 'Use bullet points for any list of three or more items' is testable. 'Be clear and organized' is not. Specificity also helps the model handle edge cases: when it knows your exact defaults, ambiguous situations resolve closer to your preference.
A useful drafting technique: write your system prompt by imagining the most annoying AI behaviors you want to eliminate. If ChatGPT always writes five-paragraph essays when you want a quick answer, add 'Default to concise responses under 200 words unless I ask for more.' If it over-hedges with phrases like 'it's important to consider,' ban them explicitly. Negative instructions ('do not,' 'avoid,' 'never') work just as well as positive ones and are often easier to write from lived frustration with the tool.
| Weak Instruction | Why It Fails | Strong Version |
|---|---|---|
| Be concise. | No measurable standard — 'concise' means different things | Keep responses under 150 words unless I ask for detail. |
| Write professionally. | Too broad — every model already aims for this | Use formal register. No contractions. No casual openers like 'Sure!' or 'Great!' |
| Help me with marketing. | Not an instruction — it's a topic | Assume I work in B2B SaaS marketing targeting mid-market procurement teams. |
| Be accurate. | Impossible to enforce via instruction alone | Flag any claim you're uncertain about with [UNCERTAIN]. Do not fabricate statistics. |
| Format things nicely. | Subjective and vague | Use headers for responses over 200 words. Use tables for comparisons of 3+ items. |
| Understand my context. | The model can't read your mind | I am a CFO at a 200-person manufacturing company. Assume financial literacy. |
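"Testable" can be taken literally: an instruction like a word cap or a banned-phrase rule can be checked by a script. This is a small sketch of such a compliance checker; the specific rules passed in are examples drawn from the table above, not a standard.

```python
import re

def check_response(text, banned_phrases=(), max_words=None):
    """Return a list of instruction violations found in an AI response.

    Each rule mirrors a testable custom instruction, e.g.
    "Do not say 'great question'" or "Keep responses under 150 words".
    """
    violations = []
    lowered = text.lower()
    for phrase in banned_phrases:
        if phrase.lower() in lowered:
            violations.append(f"banned phrase used: {phrase!r}")
    if max_words is not None:
        word_count = len(re.findall(r"\S+", text))
        if word_count > max_words:
            violations.append(f"too long: {word_count} words (limit {max_words})")
    return violations

reply = "Great question! It's important to consider several factors..."
print(check_response(reply, banned_phrases=["great question"], max_words=150))
# flags the banned phrase
```

If you cannot write a check like this for an instruction, the instruction is probably too vague for the model to follow consistently either.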
System Prompts Are Not Secrets
Goal: Produce a working custom instruction set that changes at least three observable behaviors in AI responses compared to your default experience.
1. Open ChatGPT (free or Plus) and navigate to Settings → 'Customize ChatGPT'. If using Claude, open a new Project at claude.ai.
2. In Field 1 (ChatGPT) or the Project Instructions box (Claude), write your role and context: your job title, industry, the types of tasks you most commonly use AI for, and your technical literacy level.
3. In Field 2 (ChatGPT) or continuing the same instructions box (Claude), add your format defaults: preferred response length, whether you want bullets or prose, any phrases you want banned.
4. Add at least one constraint relevant to your work — something the AI should always flag, avoid, or clarify before proceeding.
5. Save the instructions and open a fresh conversation.
6. Send a real work task you'd normally use AI for — something you've tried before without custom instructions.
7. Compare the output to what you'd typically receive. Note what changed and adjust one instruction based on what still feels off.
Quick Reference: System Prompt Essentials
- System prompts run before your first message and shape every response in the session
- ChatGPT Custom Instructions: ~1,500 chars per field, account-level, free to use
- Claude Projects: ~2,000 chars + document uploads, Pro plan required ($20/month)
- Always start with a specific role definition — 'You are a [role] with expertise in [domain]'
- Make every instruction testable: you should be able to verify compliance in the output
- Negative instructions work well — ban specific phrases or behaviors you dislike
- System prompts consume context window tokens — keep them tight and purposeful
- Never store sensitive data (credentials, confidential info) in a system prompt
- GPT system prompts support up to ~8,000 characters in the GPT builder interface
- Gemini uses 'Gems' for saved personas; Copilot uses admin-configured instructions at enterprise tier
Key Takeaways from This Section
- System prompts are persistent instruction layers — they define the AI's defaults before any task arrives.
- Custom Instructions (ChatGPT) and Projects (Claude) are the practical interfaces for non-developers to set these defaults.
- Specificity is the core skill: vague instructions produce vague behavior, testable instructions produce consistent results.
- The anatomy of a strong system prompt includes: role definition, audience context, format defaults, tone rules, and constraints.
- Different tools handle system prompts differently — knowing the character limits and scope of each platform helps you work within them.
- Weak instructions fail because they're untestable; rewrite them with measurable criteria or explicit examples.
- Treat system prompts as semi-public configuration — never embed sensitive information in them.
What Goes Inside a System Prompt
Part 1 established what system prompts are and why they exist. Now you need to know what to actually put in them. The best system prompts are built from five distinct layers: identity, context, constraints, format rules, and behavioral nudges. Most beginners write one or two of these and wonder why results are inconsistent. Professionals stack all five. Each layer handles a different failure mode — identity stops role confusion, constraints stop scope creep, format rules stop wall-of-text responses that waste your time.
The Five Layers of an Effective System Prompt
- Identity — Who the AI is playing: role, expertise level, name if needed ('You are a senior financial analyst with 15 years in equity research')
- Context — What world it operates in: your industry, your company, your audience ('Our customers are mid-market SaaS companies with 50–500 employees')
- Constraints — What it must never do: topics to avoid, formats to reject, claims it cannot make ('Never recommend specific stocks or give legal advice')
- Format rules — How output should look: length, structure, use of headers, bullet density, response style ('Always respond in under 200 words unless asked for a full report')
- Behavioral nudges — Tone and reasoning style: direct vs. diplomatic, ask clarifying questions vs. attempt and note assumptions ('When the request is ambiguous, state your assumption and proceed')
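The five layers above stack naturally into a single prompt string. A minimal sketch, using the example phrases from this section as placeholder content:

```python
# Assembling the five layers into one system prompt.
# The layer content is illustrative, taken from the examples above.

layers = {
    "identity": "You are a senior financial analyst with 15 years in equity research.",
    "context": "Our customers are mid-market SaaS companies with 50-500 employees.",
    "constraints": "Never recommend specific stocks or give legal advice.",
    "format": "Always respond in under 200 words unless asked for a full report.",
    "behavior": "When the request is ambiguous, state your assumption and proceed.",
}

system_prompt = "\n".join(layers.values())
```

Keeping the layers as separate named pieces, rather than one blob of prose, makes it easy to see which layer is missing when results drift.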
The One-Sentence Test
System Prompt Component Reference
| Layer | What to Write | Example Phrase | Fixes This Problem |
|---|---|---|---|
| Identity | Role + seniority + domain | 'You are a senior UX researcher' | Generic, unfocused answers |
| Context | Industry + audience + product | 'Users are non-technical HR managers' | Wrong assumptions about reader knowledge |
| Constraints | Hard limits on topics or actions | 'Never speculate on legal outcomes' | Hallucinated advice in sensitive areas |
| Format rules | Length, structure, style | 'Use bullet points, max 150 words' | Unreadable walls of text |
| Behavioral nudges | Reasoning style, tone, error handling | 'Ask one clarifying question if intent is unclear' | Confident wrong answers on ambiguous prompts |
Platform Differences That Change Everything
System prompts don't behave identically across platforms. ChatGPT's Custom Instructions live in two fields: 'What would you like ChatGPT to know about you?' and 'How would you like ChatGPT to respond?' — effectively splitting context from format rules. Claude's system prompt is a single free-form field with a higher character limit and stronger instruction-following on constraints. Gemini Advanced's system prompt support (called 'Gems') is the newest of the three and currently the least flexible for complex behavioral rules. Knowing which platform you're on determines how you structure your input.
The practical implication: a system prompt that works perfectly in Claude may need restructuring for ChatGPT. Claude handles long, nuanced constraint lists well — it genuinely reads and applies them. ChatGPT benefits from shorter, more directive phrasing in each field. If you use both tools (many professionals do), maintain two versions of your core system prompt. It takes 20 minutes once and saves hours of prompt-repair later. Notion AI and Perplexity don't expose true system prompts to end users, so on those platforms your technique shifts to persistent context blocks at the top of every session.
- ChatGPT Custom Instructions: two fields, ~1,500 characters each, applies to all GPT-4o conversations by default
- Claude Projects: single system prompt field, higher limit (~2,000+ tokens), strongest constraint adherence of the three
- Gemini Gems: template-based system prompts, good for role-playing personas, weaker on complex multi-rule sets
- GitHub Copilot: system behavior set by your IDE and workspace settings, not a text field — configured via .github/copilot-instructions.md
- Notion AI: no exposed system prompt — use a pinned 'context block' at the top of your workspace as a workaround
- Perplexity: no persistent system prompt for standard users — Pro users can set default focus modes but not full behavioral instructions
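As an illustration of the Copilot approach, a repository instructions file is just natural-language markdown checked into the repo. The contents below are an invented example (the `handleErrors()` helper is hypothetical), not a template from GitHub's documentation:

```markdown
<!-- .github/copilot-instructions.md -->
We use TypeScript with strict mode enabled.
Prefer named exports over default exports.
All new functions need JSDoc comments.
Follow the existing error-handling pattern: wrap async route handlers
in `handleErrors()` rather than using bare try/catch.
```

Because the file lives in the repository, the standards travel with the codebase and apply to every contributor's Copilot session, not just yours.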
| Platform | System Prompt Access | Character Limit (approx.) | Persists Across Sessions? | Best For |
|---|---|---|---|---|
| ChatGPT (Plus/Team) | Custom Instructions — two fields | ~1,500 per field | Yes, until changed | General professional use |
| Claude (Pro/Team) | Projects system prompt | 2,000+ tokens | Yes, per Project | Complex constraint-heavy workflows |
| Gemini Advanced | Gems | ~500 words | Yes, per Gem | Role personas, simple tasks |
| GitHub Copilot | .github/copilot-instructions.md | No hard limit | Yes, per repo | Code style and review standards |
| Perplexity Pro | Default focus + prompt prefix | Minimal | Partial | Research with source constraints |
Claude Projects Are the Most Powerful Option Right Now
A Real System Prompt in the Wild
Prompt
You are a B2B marketing strategist with deep expertise in SaaS demand generation. You work with my company, a mid-market project management software vendor targeting operations teams at companies with 200–2,000 employees. Our brand voice is direct, data-driven, and never uses jargon like 'synergy' or 'holistic.' When I ask for copy, keep it under 100 words unless I specify otherwise. When I ask for strategy, give me a numbered action list with one-line rationale for each item. If my request is ambiguous, state your interpretation first, then answer. Never recommend paid ad budgets above $50K without flagging that CFO approval is typically required.
AI Response
Understood. I'll operate as your B2B SaaS marketing strategist focused on operations-team buyers at mid-market companies. For copy requests: under 100 words, direct tone, zero jargon. For strategy requests: numbered action list with one-line rationale. I'll flag my interpretation when your intent isn't explicit, and I'll note CFO-approval thresholds on any large budget recommendations. Ready — what's the first challenge?
Where System Prompts Break Down
System prompts are not foolproof. Three failure modes appear repeatedly. First, instruction overload: cramming 20 rules into one prompt causes the model to honor the first five and quietly ignore the rest. Models have finite attention, and a 1,200-word system prompt competes with your actual query for that attention. Keep system prompts under 400 words for ChatGPT and under 600 for Claude. If you have more rules than that, they belong in a reference document, not the system prompt itself.
Second failure mode: contradictory instructions. 'Be concise' and 'always provide full context with examples' cannot coexist without a priority rule. The model will pick one interpretation and apply it inconsistently. Third failure mode: prompt drift over long conversations. After 15–20 exchanges, even a strong system prompt loses influence as the conversation history grows. The model starts weighting recent messages more heavily than the original instructions. For long working sessions, paste a one-line reminder of your key constraint — 'reminder: keep all responses under 150 words' — every 10 or so exchanges.
- Keep system prompts under 400 words (ChatGPT) or 600 words (Claude) to avoid instruction dropout
- Resolve contradictions explicitly: 'Be concise — but if asked for a full report, length is no longer constrained'
- Use priority language: 'Above all else...', 'Unless explicitly told otherwise...'
- For sessions longer than 20 exchanges, paste a short constraint reminder mid-conversation
- Test your system prompt with 5 diverse queries before relying on it in real work — edge cases surface fast
- Version-control your system prompts in a simple doc with dates — models update, and a prompt that worked in March may drift by June
System Prompts Are Not Private on Shared Platforms
Quick Build: Your First Professional System Prompt
Goal: A tested, saved Custom Instruction set in ChatGPT that improves at least two of your three test responses compared to default behavior.
1. Open ChatGPT (Plus) and navigate to Settings → Personalization → Custom Instructions.
2. In the first field ('What should ChatGPT know about you?'), write 2–3 sentences: your role, your industry, and the type of tasks you use AI for most often.
3. In the second field ('How should ChatGPT respond?'), write your format rules: preferred response length, whether you want bullet points or prose, and your tone preference (direct, formal, casual).
4. Add one behavioral rule: either 'Ask a clarifying question if my request is ambiguous' OR 'State your assumption and proceed' — pick the one that matches how you work.
5. Add one constraint relevant to your work (e.g., 'Do not recommend tools that cost more than $X/month' or 'Always flag when you're uncertain about a fact').
6. Save the instructions, then open a new conversation and send three test prompts that represent your typical daily tasks.
7. Note which responses feel off — too long, wrong tone, missing context — and edit the relevant layer of your system prompt to fix each issue.
System Prompt Cheat Sheet
- Five layers: Identity → Context → Constraints → Format rules → Behavioral nudges
- ChatGPT: two fields, ~1,500 chars each | Claude Projects: single field, 2,000+ tokens | Gemini Gems: ~500 words
- Under 400 words for ChatGPT, under 600 for Claude — beyond that, rules get dropped
- Contradictory instructions cause inconsistency — always specify which rule wins
- Long sessions cause prompt drift — paste a constraint reminder every ~10 exchanges
- Claude Projects: attach reference docs alongside system prompts for zero-setup context
- GitHub Copilot: use .github/copilot-instructions.md for persistent code standards
- Never embed confidential data in system prompts — treat them as semi-public
- Version-control your prompts with dates — model updates can shift behavior
- Test with 5 diverse queries before relying on any new system prompt in real work
You have a working mental model of system prompts and custom instructions. Now the goal is precision — knowing exactly which levers to pull for different work contexts, avoiding the mistakes that silently degrade your outputs, and building a personal defaults library you actually reuse. The professionals who get the most from ChatGPT, Claude, and Gemini are not the ones who write the cleverest one-off prompts. They are the ones who invest 20 minutes once and benefit every session after that.
Tuning for Context: Role-Specific Default Strategies
Different professional roles demand radically different AI defaults. A marketing director wants punchy, persuasive copy at a tenth-grade reading level. A strategy consultant wants structured analysis with explicit assumptions and caveats. A data analyst wants code-first responses with inline comments. Your custom instructions should encode your role's specific communication contract — the unspoken rules your colleagues already follow when they brief you. Generic defaults produce generic outputs. Role-tuned defaults make the AI feel like it was hired specifically for your job.
| Role | Tone Setting | Format Default | Key Constraint to Include |
|---|---|---|---|
| Marketing Manager | Persuasive, punchy | Short paragraphs, bullet CTAs | No jargon; reading level grade 8–10 |
| Strategy Consultant | Analytical, precise | Numbered sections, explicit assumptions | Flag uncertainty; cite reasoning |
| Data Analyst | Technical, concise | Code blocks + comments first | Python preferred; explain edge cases |
| HR / People Ops | Empathetic, neutral | Plain prose, no bullet overload | Avoid legal claims; stay descriptive |
| Product Manager | Crisp, outcome-focused | User story format when applicable | Always link feature to user need |
| Executive / C-Suite | Strategic, high-level | Executive summary first | Max 3 key points per response |
What to Include — and What to Leave Out
Overstuffed custom instructions backfire. When you load 800 words of preferences into a system prompt, the model starts dropping instructions — particularly the ones near the end. ChatGPT's custom instructions field has a 1,500-character limit precisely because brevity forces prioritization. Claude's system prompts can be longer, but longer does not mean better. Treat your defaults like a job brief: include the non-negotiables, skip the nice-to-haves. If a preference only applies once a month, handle it in that specific prompt instead.
- Include: your role, your audience, preferred output format, tone, any hard constraints (no legal claims, no speculation presented as fact)
- Include: response length preference (concise vs. comprehensive) and whether you want the model to ask clarifying questions
- Include: any domain-specific vocabulary or frameworks you use regularly (OKRs, MECE, RACI, Jobs-to-be-Done)
- Exclude: preferences that vary by task — handle those inline
- Exclude: redundant instructions ('always be helpful') that the model already defaults to
- Exclude: contradictory instructions ('be brief' + 'always give full examples') — pick one or scope them separately
The Brevity Test
| Instruction Type | Include? | Example |
|---|---|---|
| Your job title and primary audience | Yes | 'I'm a B2B SaaS product manager. My readers are technical buyers.' |
| Preferred output structure | Yes | 'Lead with a summary. Use headers for sections over 200 words.' |
| Tone and register | Yes | 'Professional but direct. No filler phrases.' |
| Domain frameworks you use | Yes | 'Apply MECE structure to analysis tasks.' |
| Generic quality requests | No | 'Always be accurate and helpful.' |
| Task-specific one-offs | No | 'Today I need a 500-word blog post.' — put this in the chat |
| Contradictory instructions | Never | 'Be brief' + 'Always provide comprehensive detail' |
Claude vs. ChatGPT: Where Your Instructions Live
Prompt
I'm a management consultant specializing in operational efficiency for mid-market manufacturing firms. My outputs go to CFOs and COOs. Always structure analysis using MECE principles. Lead with the key insight, then supporting evidence, then implications. Flag any assumption you're making. Use precise numbers when I provide data; don't invent them. Keep responses under 400 words unless I ask for a full document. Ask one clarifying question before starting any analysis task.
AI Response
Understood. I'll apply MECE structuring to all analysis, lead with the headline insight, and flag assumptions explicitly. I'll stay under 400 words for standard responses and ask one clarifying question before beginning analysis tasks. Ready when you are.
Common Failures and How to Fix Them
Even well-intentioned custom instructions fail in predictable ways. The most common: instructions that conflict with the task at hand. If your default says 'be concise' but you ask for a 2,000-word report, the model will be confused about which instruction to honor — and it will usually follow the in-conversation request, quietly dropping your default. The fix is to write defaults that apply to mode, not length, and override length explicitly in each prompt.
Defaults Don't Override Safety Guardrails
| Failure Mode | What Happens | Fix |
|---|---|---|
| Conflicting length instructions | Model ignores one, picks the other inconsistently | Set tone/format defaults; specify length per task |
| Instructions too long (800+ words) | Later instructions dropped silently | Keep under 300 words; prioritize ruthlessly |
| No format specified | Output format varies unpredictably | State your default format explicitly (bullets, prose, headers) |
| Role stated but audience omitted | Tone calibration is off | Always pair role with intended reader |
| Assuming defaults are active across tools | Using Claude when you set up ChatGPT defaults | Verify active instructions at the start of each session |
Goal: A saved, tested custom instruction block you can reuse and iterate on — your permanent AI defaults baseline.
1. Open a blank document (Notion, Google Docs, or a notes app) — this becomes your AI Defaults Library.
2. Write your role statement in one sentence: your title, your domain, and who reads your outputs.
3. List your top three output format preferences (e.g., 'headers for long content,' 'bullet points for lists,' 'summary first').
4. Write one sentence on tone: the register you want the AI to use consistently.
5. List any hard constraints — things the AI must never do in your context (e.g., 'never present speculation as confirmed fact').
6. Combine steps 2–5 into a single custom instruction block under 250 words and paste it into ChatGPT's custom instructions or a Claude Project system prompt.
7. Run three different work tasks using these defaults and note in your document which instructions worked, which were ignored, and what you'd change.
Quick-Reference Cheat Sheet
- Custom instructions in ChatGPT: Settings → Personalization → Custom Instructions (1,500-char limit)
- Claude system prompts: set per-project via Projects, or per-conversation in the API
- Gemini custom personas: create via 'Gems' in the Gemini interface
- Always include: role + audience + format preference + tone + hard constraints
- Always exclude: generic quality requests, task-specific one-offs, contradictory instructions
- Keep custom instructions under 300 words for reliable instruction-following
- Defaults set the baseline — override inline for task-specific needs
- Test your defaults with three real work tasks before treating them as final
- Revisit and update your defaults every 4–6 weeks as your work evolves
Key Takeaways
- Role-specific defaults outperform generic ones — encode your professional context explicitly
- Brevity in custom instructions is a feature, not a limitation — prioritize ruthlessly
- Know where your defaults live in each tool; they do not transfer between platforms automatically
- Conflicting instructions cause silent failures — write defaults that complement, not compete with, your task prompts
- Your defaults are a living document — test, observe, and iterate every few weeks
- The investment is front-loaded: 20 minutes of setup pays back across hundreds of future sessions
Knowledge Check
1. A consultant sets custom instructions saying 'be concise' but then asks ChatGPT for a comprehensive 2,000-word strategic report. What is the most likely outcome?
2. You set up detailed custom instructions in ChatGPT and then switch to Claude for a project. Which statement is accurate?
3. Which of the following is the strongest example of a useful custom instruction?
4. Why is keeping custom instructions under 300 words recommended?
5. A product manager wants to use custom instructions to make ChatGPT always write in the 'Jobs-to-be-Done' framework and skip clarifying questions. She also wants it to never make legal claims. Which approach is correct?
