Lesson 2 of 10

Setting up your AI accounts and security

~18 min read

Most professionals waste their first week with AI tools fumbling through account setups, hitting unexpected paywalls, and accidentally sharing sensitive company data. This lesson cuts through that. You will walk away knowing exactly which accounts to create, what each tier actually costs, where your data goes when you hit send, and how to configure every major platform so your information — and your employer's — stays protected. Think of this as your security and setup reference sheet: keep it open the first time you configure each tool.

  1. The free tiers of ChatGPT, Claude, and Gemini are genuinely useful — but each has hard limits that will block you at the worst moment
  2. Your prompts are used for model training by default on most free plans — you must opt out manually
  3. Business email addresses sometimes unlock enterprise-grade data protections automatically, depending on the platform
  4. Two-factor authentication (2FA) is not optional if you store work-related prompts or use AI integrations connected to company systems
  5. API keys and consumer accounts are fundamentally different products with different pricing, data policies, and use cases
  6. Password managers are the practical prerequisite to managing five or more AI tool accounts securely
  7. Data residency — where your data is physically stored — matters for teams in the EU, healthcare, legal, and finance sectors

The Account Landscape: What You're Actually Signing Up For

The major AI platforms have split their products into consumer accounts and API access. When you sign up at chat.openai.com, you get a consumer ChatGPT account. When you sign up at platform.openai.com, you get API access — a completely separate product with separate billing, separate data policies, and no chat interface. The same split exists at Anthropic (claude.ai vs api.anthropic.com) and Google (gemini.google.com vs Google AI Studio). Most professionals only ever need the consumer account, but knowing the distinction prevents expensive confusion later.

Within consumer accounts, every major platform now offers three tiers: free, paid personal (typically $20/month), and enterprise or team plans (typically $25–$30 per user per month, billed annually). The jump from free to paid unlocks meaningfully more powerful models — GPT-4o instead of GPT-3.5, Claude Sonnet instead of Claude Haiku — plus higher usage limits and, critically, stronger default data protections. The jump to enterprise adds SSO, admin consoles, audit logs, and contractual data privacy guarantees. For solo professionals, the $20/month tier is almost always the right starting point.

  • ChatGPT Free: GPT-4o access (limited), memory off by default, data used for training unless opted out
  • ChatGPT Plus ($20/mo): Full GPT-4o access, higher rate limits, access to o1 and o3-mini models, training opt-out available
  • Claude Free: Claude Haiku + limited Sonnet access, 5 messages per day on Sonnet, training opt-out available
  • Claude Pro ($20/mo): Full Claude Sonnet and Opus access, 5x more usage, Projects feature for persistent context
  • Gemini Free: Gemini 1.5 Flash, integrated with Google Workspace on personal accounts
  • Gemini Advanced ($19.99/mo via Google One): Gemini 1.5 Pro, 1TB storage bundle, deeper Workspace integration
  • Perplexity Pro ($20/mo): Unlimited pro searches, choice of GPT-4o or Claude as the underlying model

Start With One Paid Account

Don't subscribe to everything at once. Pick one platform based on your primary use case — ChatGPT Plus for general writing and analysis, Claude Pro if you work with long documents (its 200K token context window is unmatched at this price), Gemini Advanced if you live in Google Workspace. Run it for 30 days, then decide whether to add a second tool.

Platform Comparison: Tiers, Costs, and Key Limits

| Platform | Free Tier Model | Paid Tier ($20/mo) | Context Window (Paid) | Key Free Limit |
|---|---|---|---|---|
| ChatGPT | GPT-4o (limited) | GPT-4o + o1/o3-mini | 128K tokens | ~10 GPT-4o msgs/3hrs |
| Claude | Haiku + limited Sonnet | Sonnet + Opus | 200K tokens | 5 Sonnet msgs/day |
| Gemini | Gemini 1.5 Flash | Gemini 1.5 Pro | 1M tokens | Rate limits vary |
| Perplexity | Limited Pro searches | Unlimited Pro + model choice | Varies by model | 5 Pro searches/day |
| Notion AI | 20 AI responses free | Included in Notion Plus ($16/mo) | Based on page content | 20 lifetime free responses |
| GitHub Copilot | Free for students/OSS | $10/mo individual | Code file context | Verified eligibility required |

Key specs as of mid-2025. Token limits and pricing change — verify at each platform's pricing page before purchasing.

Data Privacy: Where Your Prompts Actually Go

Every prompt you send travels to a third-party server. That is the fundamental reality of cloud-based AI tools. On free plans, OpenAI, Anthropic, and Google all reserve the right to use your conversations to improve their models — though all three allow you to opt out in settings. On paid personal plans, the default is the same, but opt-out is clearly available and widely recommended. On enterprise plans, all three providers contractually commit to not training on your data. The difference is meaningful: enterprise contracts include data processing agreements (DPAs) that satisfy GDPR and similar regulations.

The risk is not that someone at OpenAI will read your specific prompt about Q3 forecasts. The risk is that sensitive information — client names, unreleased product details, personal employee data, financial projections — enters a training pipeline and could theoretically surface in responses to other users in edge cases. Major enterprises have already banned specific AI tools for exactly this reason. Samsung famously restricted ChatGPT use internally after engineers accidentally pasted proprietary chip designs into prompts in 2023. The rule is simple: treat the prompt box like a public channel until you have a signed enterprise agreement.

  1. Go to ChatGPT Settings → Data Controls → turn off 'Improve the model for everyone'
  2. In Claude, go to Privacy Settings → disable 'Allow Anthropic to use my content to train models'
  3. In Gemini, go to My Activity → turn off Gemini Apps Activity to stop conversation storage
  4. Never paste client names, employee data, financial figures, or unreleased product details into any free-tier tool
  5. If your company has a Microsoft 365 Copilot or Google Workspace subscription, use those — they include enterprise-grade data protections by default
  6. When in doubt, anonymize: replace 'Acme Corp's Q3 revenue of $4.2M' with 'a company's quarterly revenue of $X'

| Plan Type | Training on Your Data? | Data Retention | GDPR/Compliance Ready? | Recommended For |
|---|---|---|---|---|
| Free (all platforms) | Yes, unless opted out | Up to 30 days (varies) | No formal DPA | Personal exploration only |
| Paid Personal ($20/mo) | Yes, unless opted out | Up to 30 days (varies) | No formal DPA | Solo professionals, non-sensitive work |
| Teams/Business ($25-30/user) | No (by default) | Configurable | DPA available | Small teams, client-facing work |
| Enterprise (custom pricing) | No (contractual) | Admin-controlled | Full DPA, SOC2, HIPAA options | Large orgs, regulated industries |
| Microsoft 365 Copilot ($30/user) | No | Tied to M365 retention | Full compliance suite | Orgs already on Microsoft stack |
| Google Workspace Gemini (add-on) | No | Tied to Workspace retention | Full compliance suite | Orgs already on Google Workspace |

Data handling summary by plan type. Always review the current privacy policy and DPA terms before processing sensitive data.

The Enterprise Boundary Is a Contract, Not a Feature

The difference between a paid personal plan and an enterprise plan isn't just data handling behavior — it's legal accountability. Enterprise agreements include signed data processing agreements that make the AI provider a 'data processor' under GDPR, creating contractual obligations. A paid personal plan has no such agreement. If your work involves EU personal data, healthcare records, or legal documents, you need the enterprise tier or an equivalent compliant solution before processing that data.

Account Security Configuration

Testing What an AI Tool Knows About Your Account Security

Prompt

I'm setting up ChatGPT Plus for professional use. What security settings should I configure right now, and what are the default settings I should be aware of that might affect my data privacy?

AI Response

Here are the key security and privacy settings to configure in ChatGPT Plus immediately:

**Security settings:**
- Enable two-factor authentication (Settings → Security)
- Review active sessions and log out of any unrecognized devices
- Confirm your account email is secure — it's your recovery method

**Privacy/data settings:**
- Settings → Data Controls → 'Improve the model for everyone' — OFF by default on Plus, but verify
- Memory: decide if you want ChatGPT to remember details across conversations (Settings → Personalization → Memory)
- Chat history: you can archive or delete individual conversations

**Default behaviors to know:**
- Conversations are stored and visible in your history unless you use Temporary Chat mode
- Shared links to conversations are accessible to anyone with the URL
- Custom GPTs you create are public by default unless you change visibility settings

Setting Up 2FA and Password Management

Two-factor authentication blocks the vast majority of account takeover attacks. All major AI platforms support it — ChatGPT, Claude, and Gemini all offer authenticator app-based 2FA (Google Authenticator, Authy, or a hardware key via TOTP). SMS-based 2FA is better than nothing but is vulnerable to SIM-swapping attacks; use an authenticator app where possible. Enable 2FA immediately after creating any AI account you plan to use for work. The setup takes under three minutes and the protection is substantial — especially if you store sensitive prompts, custom instructions, or integrated API keys within your account.

A password manager is not optional once you're managing five or more AI tools alongside your existing work accounts. 1Password ($3/month), Bitwarden (free tier is strong), and Dashlane ($5/month) all integrate with browsers and mobile apps. The practical reason: AI platforms update their interfaces and occasionally force password resets. Reusing passwords across AI tools and work systems creates a single point of failure. Generate unique 20+ character passwords for each AI account, store them in your manager, and you eliminate one of the most common vectors for account compromise. This is foundational digital hygiene, not advanced security.
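
Generating those unique passwords is a one-liner if you want to see what a manager does under the hood. A sketch using Python's standard `secrets` module — the 24-character default and the symbol set are arbitrary illustrative choices:

```python
import secrets
import string

# Illustrative character set; a password manager's generator does the same job.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 24) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(len(generate_password()))  # 24
```

The point is not to roll your own tooling but to see that uniqueness costs nothing: every account gets a fresh string, stored once in the manager.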

| Security Action | Platform | Where to Find It | Priority |
|---|---|---|---|
| Enable 2FA (authenticator app) | ChatGPT | Settings → Security → Two-factor authentication | Critical |
| Enable 2FA | Claude (Anthropic) | Account Settings → Security | Critical |
| Enable 2FA | Google (Gemini) | myaccount.google.com → Security | Critical |
| Disable model training | ChatGPT | Settings → Data Controls → Improve the model | High |
| Disable model training | Claude | Privacy Settings → Training preferences | High |
| Disable Gemini activity | Gemini | myactivity.google.com → Gemini Apps Activity | High |
| Review active sessions | All platforms | Security or Account settings on each | Medium |
| Set conversation visibility | ChatGPT | Settings → Data Controls → Chat history | Medium |
| Unique password via manager | All platforms | 1Password / Bitwarden / Dashlane | Critical |

Security configuration checklist. Complete 'Critical' items before your first work-related session on any platform.

API Keys Are Not Account Passwords — Treat Them Differently

If you ever generate an API key (at platform.openai.com or console.anthropic.com) to connect AI tools to other software, that key grants direct programmatic access to your account and billing. Never paste an API key into a chat, a shared document, or version control. Store it only in a password manager or a secrets manager like AWS Secrets Manager. A leaked API key can result in thousands of dollars in charges within hours — there are automated bots that scan GitHub repositories specifically for exposed API keys.
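
In practice, "never paste the key" means your code reads it from the environment at runtime. A minimal sketch — `OPENAI_API_KEY` is the conventional variable name for OpenAI tooling, but any provider's key works the same way:

```python
import os

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read an API key from an environment variable; fail loudly if unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; export it in your shell or load it from a secrets manager"
        )
    return key
```

The key lives in your shell profile, a `.env` file excluded from version control, or a secrets manager; the source code never contains it.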

Configure Your First AI Account for Professional Use

Goal: By the end of this task, your ChatGPT account has 2FA enabled, data training disabled, and credentials stored securely — meeting the baseline security standard for professional use.

  1. Create or log into your ChatGPT account at chat.openai.com — if starting fresh, use your professional email address.
  2. Navigate to Settings (click your profile icon) → Security → enable Two-Factor Authentication using an authenticator app like Google Authenticator or Authy.
  3. Navigate to Settings → Data Controls → locate 'Improve the model for everyone' and confirm it is turned off.
  4. Navigate to Settings → Personalization → Memory — decide whether to enable memory (useful for consistent work context) or leave it off (better for privacy on shared devices).
  5. Open a new chat and switch to 'Temporary Chat' mode (top of screen) — send one test prompt and confirm it does not appear in your chat history afterward.
  6. Log into your password manager (or create a free Bitwarden account at bitwarden.com) and save your ChatGPT credentials with a unique generated password.

Quick Reference: Account Setup Cheat Sheet

  • ChatGPT Plus ($20/mo) = GPT-4o + o1 models + higher limits; best for general writing, analysis, coding assistance
  • Claude Pro ($20/mo) = 200K context window + Sonnet/Opus; best for long documents, nuanced writing, research synthesis
  • Gemini Advanced ($19.99/mo) = Gemini 1.5 Pro + Workspace integration; best for Google-native workflows
  • Free tiers train on your data by default — opt out in settings on every platform
  • Enterprise/Teams plans = contractual data protection; required for client data, EU personal data, regulated industries
  • 2FA via authenticator app: enable on ChatGPT, Claude, and Google before first professional use
  • Password manager (Bitwarden free or 1Password $3/mo): generate unique passwords for each AI account
  • API keys ≠ account passwords: store separately, never share, monitor for unauthorized usage
  • Samsung rule: never paste proprietary data, client names, or unreleased financials into any free-tier or personal-plan tool
  • Temporary Chat mode in ChatGPT: conversations not saved — use for any session involving sensitive context

Key Takeaways

  1. Consumer accounts and API accounts are separate products — most professionals only need the consumer account
  2. The $20/month paid tier on ChatGPT or Claude is the practical minimum for serious professional use
  3. Free plans default to using your data for model training — opt out manually on every platform you use
  4. Enterprise plans provide contractual data protections; personal paid plans do not
  5. Enable 2FA with an authenticator app immediately on every AI account connected to your work
  6. Treat the prompt box as a public channel until you have an enterprise agreement in place
  7. API keys carry billing access — store them in a secrets manager, never in chat or shared documents

Managing API Keys, Billing, and Organizational Access

Once your accounts are live, the real security work begins. API keys are the silent attack surface most professionals ignore until something goes wrong. A leaked OpenAI key can rack up thousands of dollars in charges overnight — OpenAI has no automatic fraud protection equivalent to a credit card. Organizational access controls determine who on your team can see, edit, or export AI-generated content. Getting both of these right early saves you from expensive, embarrassing cleanup later. This section is your reference guide for doing both correctly.

API Keys: What They Are and Why They're Dangerous

An API key is a long alphanumeric string that authenticates your application or script to an AI service without requiring a username and password each time. When you paste an OpenAI key into a tool like Zapier, Make, or a custom script, that tool gains the full billing authority of your account. There are no built-in spending limits by default. If that key gets committed to a public GitHub repository — a shockingly common mistake — bots scan for it and begin generating requests at scale; compromised keys are routinely exploited within minutes of public exposure.

Anthropic (Claude), Google (Gemini), and OpenAI all issue API keys from their developer dashboards, separate from the consumer chat interfaces. A ChatGPT Plus subscription and an OpenAI API account are billed independently — many users don't realize this until they see two separate charges. Your API usage is metered by tokens: roughly 750 words equals 1,000 tokens. GPT-4o costs $5 per million input tokens and $15 per million output tokens as of mid-2024. Claude 3.5 Sonnet runs $3 per million input tokens. Small inefficiencies in prompts multiply across thousands of API calls.
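
Those per-token prices make cost estimation a one-line calculation. A sketch using the mid-2024 GPT-4o figures quoted above — plug in current prices before budgeting with this:

```python
def api_cost_usd(input_tokens: int, output_tokens: int,
                 in_price_per_m: float = 5.0,    # GPT-4o input, $/1M tokens (mid-2024)
                 out_price_per_m: float = 15.0   # GPT-4o output, $/1M tokens (mid-2024)
                 ) -> float:
    """Estimate the USD cost of one API call from its token counts."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# A ~1,500-word prompt (~2,000 tokens) with a ~1,000-token reply:
print(round(api_cost_usd(2_000, 1_000), 4))  # 0.025
```

Two and a half cents per call sounds trivial until an automation runs it ten thousand times a day, which is exactly how small inefficiencies multiply.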

  • Generate API keys from the platform dashboard, not from within third-party tools
  • Name every key descriptively: 'zapier-marketing-automation' not 'key1'
  • Set hard spending limits in OpenAI's billing settings before using any key in production
  • Never store a key in plain text in a document, Slack message, or email
  • Use environment variables or a secrets manager (like 1Password Secrets Automation) for code projects
  • Rotate keys every 90 days or immediately after any suspected exposure
  • Delete unused keys — dormant keys are still valid attack vectors
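
The "never store a key in plain text" rule can be backed by a quick scan before each commit. A rough sketch — the `sk-` pattern matches OpenAI-style keys only, and dedicated scanners such as gitleaks or trufflehog are far more thorough:

```python
import re
from pathlib import Path

# Illustrative pattern: the "sk-" prefix used by OpenAI-style secret keys.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{20,}")

def scan_file(path: Path) -> list[str]:
    """Return every key-shaped string found in the given file."""
    text = path.read_text(errors="ignore")
    return KEY_PATTERN.findall(text)
```

Run it over staged files in a pre-commit hook and a pasted key never reaches the repository in the first place.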

GitHub Will Not Save You

Making a GitHub repository private after accidentally committing an API key does not invalidate the key. Git history is permanent and can be accessed by anyone who cloned the repo before you made it private. The only safe response is to immediately revoke the key in the platform dashboard and generate a new one. Do this before doing anything else.

| Platform | Where to Create Keys | Spending Limit Option | Key Rotation Policy |
|---|---|---|---|
| OpenAI (GPT-4, GPT-4o) | platform.openai.com → API Keys | Yes — hard limits in Billing settings | Manual; no forced expiry |
| Anthropic (Claude) | console.anthropic.com → API Keys | Yes — usage limits per key | Manual; no forced expiry |
| Google (Gemini) | console.cloud.google.com → Credentials | Yes — via Cloud Billing budgets | Can set expiry dates |
| Mistral AI | console.mistral.ai → API Keys | Limited — account-level only | Manual |
| Cohere | dashboard.cohere.com → API Keys | Yes — per-key rate limits | Manual |

API key management options across major AI platforms (as of Q3 2024)

Team Access and Organizational Settings

ChatGPT Teams ($30/user/month) and ChatGPT Enterprise give administrators a dedicated workspace where conversations are isolated from OpenAI's training data by default. This is a critical distinction from free and Plus accounts, where your inputs may be used to improve models unless you manually opt out in settings. Anthropic's Claude for Teams and Google's Workspace integration with Gemini offer equivalent data isolation. If your company handles client data, contracts, or anything under NDA, you should be operating from an organizational account, not a personal one.

Role-based access control (RBAC) in ChatGPT Teams lets you assign Owner, Admin, or Member roles. Owners control billing and can add or remove users. Admins manage workspace settings and can restrict which GPTs members can access. Members use the tools but cannot change organizational settings. Claude and Gemini for Workspace follow similar patterns, with Gemini inheriting Google Workspace's existing admin console roles. Setting these up correctly from the start prevents the common scenario where a departing employee retains access to a shared AI workspace containing sensitive prompts and outputs.

  1. Designate one Owner per workspace — ideally an IT or operations lead, not just the person who signed up first
  2. Add a secondary Owner as backup before the primary Owner travels or changes roles
  3. Audit member list quarterly and remove accounts for anyone who has left the team
  4. Restrict custom GPT creation to Admins only if your team is larger than 10 people
  5. Enable SSO (Single Sign-On) if your organization uses Okta, Azure AD, or Google Workspace — this centralizes offboarding
  6. Document which API keys belong to which team projects in a shared internal wiki
  7. Review the Admin console's usage dashboard monthly to spot unusual consumption patterns

| Account Type | Data Training Opt-Out | Team Management | SSO Support | Monthly Cost (USD) |
|---|---|---|---|---|
| ChatGPT Free | Manual opt-out required | None | No | $0 |
| ChatGPT Plus | Manual opt-out required | None | No | $20/user |
| ChatGPT Teams | Off by default | Owner/Admin/Member roles | No | $30/user |
| ChatGPT Enterprise | Off by default | Full RBAC + audit logs | Yes | Custom pricing |
| Claude Pro | Manual opt-out required | None | No | $20/user |
| Claude for Teams | Off by default | Admin controls | Yes (Enterprise) | $30/user |
| Gemini for Workspace | Governed by Google Workspace admin | Inherits Workspace RBAC | Yes | $30/user (Business) |

Account tier comparison for data privacy, team management, and SSO across major platforms

The Training Data Default Is Not What You Think

On free and individual paid plans, ChatGPT and Claude can use your conversations to train future models unless you explicitly opt out. For ChatGPT, go to Settings → Data Controls → Improve the model for everyone and toggle it off. For Claude, submit a request via Anthropic's privacy portal. This setting is per-account and does not carry over to new accounts. Check it on every account you create.

Auditing Your Own AI Usage Footprint

Prompt

I manage a marketing team of 8 people. We use ChatGPT Teams, the OpenAI API via Zapier, and Claude Pro on individual accounts. Create a one-page audit checklist I can use to review our AI security setup. Include: data privacy settings, API key hygiene, access control, and offboarding steps. Format it as a table with columns for Item, Current Risk if Ignored, and Action Required.

AI Response

| Item | Current Risk if Ignored | Action Required |
|---|---|---|
| ChatGPT Teams training opt-out | Client data used in model training | Confirm off in Admin console → Settings → Data Controls |
| Individual Claude Pro accounts | No central visibility or control | Migrate to Claude for Teams; disable personal accounts |
| OpenAI API keys in Zapier | Key exposure if Zapier account is compromised | Rotate keys; enable IP restrictions in OpenAI dashboard |
| Offboarding process | Departed staff retain workspace access | Add 'revoke AI access' to HR offboarding checklist |
| API spending limits | Runaway costs from misconfigured automation | Set hard monthly limits in OpenAI billing settings |
| SSO integration | Manual deactivation required per tool | Connect ChatGPT Teams to existing Okta or Google SSO |

Protecting Sensitive Data Inside AI Conversations

Even with the right account tier, how you write prompts determines how much sensitive information enters an AI system. The model processes everything you type — names, revenue figures, client details, internal strategy — and that text is transmitted to and processed on external servers. No AI vendor offers the equivalent of end-to-end encryption where they cannot see your input. Enterprise agreements include contractual data protection, but the data still passes through their infrastructure. Knowing what to anonymize before prompting is a practical skill, not paranoia.

The standard professional practice is data minimization: include only what the model needs to produce a useful output. Replace client names with 'Client A' or 'a mid-size logistics company.' Substitute real revenue figures with representative numbers. Remove employee names from performance-related prompts. This isn't about distrusting vendors — it's about maintaining the same data hygiene you'd apply to any external tool. Notably, Microsoft's Copilot for Microsoft 365 processes data within your existing Microsoft 365 tenant boundary, which is why large enterprises often prefer it for sensitive workflows despite its higher cost ($30/user/month add-on).

| Data Type | Risk Level | Recommended Approach |
|---|---|---|
| Client names and company identifiers | High | Replace with 'Client A', 'Vendor B', or a generic descriptor |
| Revenue, margin, or pricing data | High | Use representative figures or percentage changes instead of absolutes |
| Employee names in HR contexts | High | Use role titles: 'a senior analyst on my team' |
| Internal project codenames | Medium | Use generic labels: 'Project X' or 'our Q4 initiative' |
| Personal contact information | High | Never include; describe the person by role only |
| Legal case details or contract terms | High | Consult legal before using AI; use sealed enterprise agreements |
| General industry data or public information | Low | Safe to include as context |

Data minimization guide for AI prompts in professional settings

Build a 'Safe Prompt' Template for Your Team

Create a shared document with pre-approved prompt structures that already have sensitive fields anonymized. When a team member needs to analyze a client proposal, they copy the template and fill in the placeholders. This takes 30 minutes to set up and eliminates the guesswork of deciding what to redact in the moment. Store it in the same place you keep your brand guidelines or style guide.
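
Such a template can be as simple as a parameterized string. A sketch using Python's `string.Template` — the field names and prompt wording here are illustrative placeholders for whatever your team standardizes on:

```python
from string import Template

# Hypothetical shared template: sensitive fields are placeholders the team
# fills in with pre-anonymized values.
PROPOSAL_REVIEW = Template(
    "Review this proposal summary for $client_alias, a $industry company. "
    "Identify weaknesses and missing terms:\n$summary"
)

prompt = PROPOSAL_REVIEW.substitute(
    client_alias="Client A",
    industry="mid-size logistics",
    summary="(paste anonymized summary here)",
)
print(prompt)
```

Because the placeholders are named, a reviewer can see at a glance which fields must be anonymized before the prompt is ever sent.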

Conduct a 15-Minute AI Account Security Audit

Goal: Produce a written record of your current AI account security status, with all training opt-outs confirmed, unused API keys deleted, a spending limit set, and your member list verified as current.

  1. Log into each AI platform you actively use (ChatGPT, Claude, Gemini, etc.) and navigate to Settings → Privacy or Data Controls. Screenshot or note the current training data opt-out status for each account.
  2. In ChatGPT, go to Settings → Data Controls and confirm 'Improve the model for everyone' is toggled off. If it was on, toggle it off now.
  3. Open the OpenAI platform dashboard at platform.openai.com. Under API Keys, review every key listed. Delete any named generically ('Secret key', 'key1') or that you don't recognize.
  4. In OpenAI's Billing settings, locate 'Usage limits' and set a hard monthly spending cap appropriate for your usage (start with $20 if unsure).
  5. If you use any third-party tools connected to an AI API (Zapier, Make, Notion AI, etc.), open each tool and verify which API key it is using. Note the key name in a private document.
  6. Check your AI platform workspace member lists. Remove any accounts belonging to people who no longer work on your team or in your organization.

Billing, Monitoring, and Ongoing Account Hygiene

You have your accounts created and your passwords secured. Now the operational layer matters: knowing which tool bills you how, catching unauthorized access before it costs you, and building habits that keep your AI stack clean as it grows. Most professionals ignore this until something breaks — a surprise $80 charge, a shared login that exposes client data, or a tool quietly auto-upgrading to a paid tier. This section gives you the reference material to stay ahead of all three. Treat it as a living cheat sheet you return to each time you add a new AI tool.

Billing Models Across Major AI Tools

AI tools use four distinct billing structures, and mixing them up leads to budget surprises. ChatGPT Plus charges a flat $20/month regardless of how much you use it. Claude Pro is also $20/month flat. These are predictable. API access is different — OpenAI's API bills per token consumed, with GPT-4o costing $5 per million input tokens and $15 per million output tokens as of mid-2024. Midjourney charges by subscription tier ($10–$120/month) based on GPU minutes. Perplexity Pro runs $20/month. GitHub Copilot is $10/month for individuals, $19/month per seat for business.

The danger zone is API access without a spending cap. A misconfigured script or an app that calls GPT-4 in a loop can generate a four-figure bill overnight. OpenAI lets you set hard usage limits in your API dashboard — use them. Set a monthly cap the moment you generate an API key. For flat-subscription tools like ChatGPT Plus and Claude Pro, the risk is different: you may be paying for tools you stopped using. Audit your active subscriptions every 90 days.
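
Alongside the platform-side cap, a small client-side guard in your own scripts catches a runaway loop before it ever reaches the dashboard limit. A minimal sketch — the class and method names are illustrative, and real costs would come from each API response's token counts:

```python
class BudgetGuard:
    """Track cumulative API spend and stop a script that exceeds its cap."""

    def __init__(self, monthly_cap_usd: float):
        self.cap = monthly_cap_usd
        self.spent = 0.0

    def record(self, cost_usd: float) -> None:
        """Add one call's cost; raise once the cap is crossed."""
        self.spent += cost_usd
        if self.spent > self.cap:
            raise RuntimeError(
                f"AI budget exceeded: ${self.spent:.2f} of ${self.cap:.2f} cap"
            )

guard = BudgetGuard(monthly_cap_usd=20.0)
guard.record(0.35)  # a normal call passes silently
```

A guard like this fails fast and loudly, which is exactly what you want from a misconfigured loop at 2 a.m.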

| Tool | Billing Type | Cost (2024) | Free Tier? | Spending Cap Available? |
|---|---|---|---|---|
| ChatGPT Plus | Flat monthly | $20/month | Yes (GPT-3.5) | No — fixed cost |
| Claude Pro | Flat monthly | $20/month | Yes (limited) | No — fixed cost |
| OpenAI API | Pay-per-token | Varies by model | $5 credit (new accounts) | Yes — set in dashboard |
| Gemini Advanced | Flat monthly | $19.99/month (Google One AI) | Yes (Gemini 1.0) | No — fixed cost |
| Midjourney | Subscription tiers | $10–$120/month | No (trial ended 2023) | No — tier-based |
| GitHub Copilot | Per seat | $10 individual / $19 business | Yes (students/OSS) | No — fixed cost |
| Perplexity Pro | Flat monthly | $20/month | Yes (limited searches) | No — fixed cost |

AI tool billing structures — verify current pricing at each provider's website

API Keys Are Not the Same as Account Passwords

An API key grants programmatic access to your account and can generate charges without any login. Store API keys in a password manager, never in a code file you might share or commit to GitHub. If you suspect a key is exposed, revoke it immediately in your provider dashboard and generate a new one. Rotating keys costs nothing. A leaked key can cost hundreds.

Monitoring Access and Spotting Unauthorized Activity

Every major AI platform logs your sessions. ChatGPT shows active sessions under Settings → Security. Claude shows connected apps and sessions in Account Settings. OpenAI's API dashboard logs every key's usage with timestamps and token counts. Check these logs monthly — not because breaches are common, but because the habit catches both external intrusions and internal mistakes, like a team member using your personal API key for a side project.

  1. Log into each AI tool's security or account settings page
  2. Review active sessions — revoke any device or location you don't recognize
  3. Check API key usage logs for unexpected spikes in token consumption
  4. Confirm your recovery email and phone number are current
  5. Verify that MFA is still active (settings sometimes reset after account changes)
  6. Review any connected third-party apps — remove ones you no longer use
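
Step 3's "unexpected spikes" check is easy to formalize: compare the latest period's token usage against the average of previous ones. A sketch with an arbitrary 2x threshold — the kind of jump a leaked key or a looping automation produces:

```python
def is_spike(history: list[int], latest: int, factor: float = 2.0) -> bool:
    """Flag the latest period if it exceeds `factor` times the historical average."""
    if not history:
        return False  # nothing to compare against yet
    return latest > factor * (sum(history) / len(history))

# Three months averaging ~1M tokens, then a sudden 4.5M-token month:
print(is_spike([900_000, 1_100_000, 1_000_000], 4_500_000))  # True
```

Paste your dashboard's monthly totals into a list like this and the monthly check takes seconds.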

| Platform | Where to Check Sessions | Where to Check API Usage | MFA Location |
|---|---|---|---|
| ChatGPT / OpenAI | Settings → Security → Active Sessions | platform.openai.com → Usage | Settings → Security → 2FA |
| Claude (Anthropic) | Account Settings → Security | console.anthropic.com → Usage | Account Settings → 2FA |
| Google (Gemini) | myaccount.google.com → Security | aistudio.google.com → API | Google Account → 2-Step Verification |
| GitHub (Copilot) | github.com → Settings → Sessions | N/A (flat billing) | Settings → Password & Authentication |
| Midjourney | Discord account security settings | N/A (flat billing) | Discord → Settings → Safety |

Security and usage check locations for major AI platforms

Set a Calendar Reminder Now

A 90-day security audit takes 15 minutes. Block it in your calendar today — title it 'AI Stack Audit.' Cover three things each time: active sessions, billing charges, and connected apps. Most professionals who discover unauthorized access find it during a routine check, not because they were alerted. The platforms won't always notify you of unusual activity in time.

Sharing Access Safely in a Team Context

Shared logins are the most common AI security mistake in small teams. When two people share one ChatGPT account, you lose audit trails, risk conversation history exposure, and violate most platforms' terms of service. ChatGPT Team ($25/user/month billed annually) and Claude for Teams ($25/user/month billed annually) both offer proper multi-user workspaces with separate logins and admin controls. If budget is tight, individual free tiers are safer than shared paid accounts.

For API keys used by a team, use environment variables rather than hardcoded credentials, and assign one key per project or application — not one key for everything. This way, revoking a compromised key for one project doesn't break your entire stack. If you use Notion AI or similar tools embedded in a workspace, audit who has workspace access, since AI features inherit those permissions and can surface information across the workspace.
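
The one-key-per-project convention pairs naturally with one environment variable per project. A sketch — the project and variable names are illustrative conventions, not anything the platforms require:

```python
import os

# Hypothetical mapping: each project reads its own environment variable,
# so revoking one project's key never breaks the others.
PROJECT_KEYS = {
    "chatbot": "OPENAI_KEY_CHATBOT",
    "analysis": "OPENAI_KEY_ANALYSIS",
    "zapier": "OPENAI_KEY_ZAPIER",
}

def key_for(project: str) -> str:
    """Fetch the API key assigned to one project; fail loudly if unset."""
    var = PROJECT_KEYS[project]
    key = os.environ.get(var)
    if key is None:
        raise RuntimeError(f"Set {var} before running the '{project}' project")
    return key
```

When a project ends, you revoke its key in the dashboard and delete one line here; nothing else in your stack notices.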

One Key Per Project Is a Professional Standard

Treat API keys like door keys. You wouldn't give every contractor the same master key to your office. Create separate API keys for separate use cases — one for your chatbot project, one for your data analysis scripts, one for any third-party app integration. Label them clearly in your dashboard. When a project ends, revoke its key. This costs nothing and keeps your security surface minimal.
Prompting an AI to generate your personal security checklist

Prompt

I use the following AI tools: ChatGPT Plus, Claude Pro, and the OpenAI API for a personal project. Create a concise monthly security checklist I can paste into Notion. Include: session review steps, billing checks, API key hygiene, and MFA verification. Format as a checklist with checkboxes.

AI Response

## Monthly AI Security Checklist

**ChatGPT Plus**
- [ ] Review active sessions (Settings → Security → Active Sessions)
- [ ] Confirm MFA is enabled
- [ ] Check billing statement matches $20 flat charge

**Claude Pro**
- [ ] Review connected apps and active sessions
- [ ] Confirm recovery email is current
- [ ] Verify MFA status

**OpenAI API**
- [ ] Check usage dashboard for unexpected token spikes
- [ ] Confirm monthly spending cap is set
- [ ] Review API key list — revoke unused keys
- [ ] Verify no keys are stored in shared documents or repos

**General**
- [ ] Remove AI tools from subscriptions no longer in use
- [ ] Check password manager for any weak or reused AI passwords
- [ ] Confirm no shared logins exist across team members

Build Your AI Account Security Reference Doc

Goal: A completed AI Stack Master Reference document listing all your AI tools, their billing details, MFA status, and an audit reminder — a practical asset you update every quarter.

1. Open a blank document in Notion, Google Docs, or your preferred tool — title it 'AI Stack Master Reference.'
2. Create a table with five columns: Tool Name, Account Email, Billing Type & Cost, MFA Status, Last Security Check Date.
3. Fill in every AI tool you currently have an account with — include free accounts, not just paid ones.
4. For each tool, log into the account settings and confirm MFA is active — update the MFA Status column accordingly.
5. Navigate to the billing or usage section of each tool and record the exact billing type and current cost in your table.
6. Set a recurring 90-day calendar event titled 'AI Stack Audit' and link it to this document in the event description.
7. Save and store the document somewhere you'll actually find it — bookmark it in your browser or pin it in your project management tool.
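If your reference doc lives in a markdown-based tool, the table skeleton from step 2 can be generated rather than typed. This is a small convenience sketch under our own naming; the column headers come straight from the exercise:

```python
# Column headers match the five columns described in step 2 of the exercise.
COLUMNS = [
    "Tool Name",
    "Account Email",
    "Billing Type & Cost",
    "MFA Status",
    "Last Security Check Date",
]

def reference_table_template(tools: list[str]) -> str:
    """Build a markdown table skeleton with one empty row per AI tool."""
    header = "| " + " | ".join(COLUMNS) + " |"
    divider = "|" + "|".join(["---"] * len(COLUMNS)) + "|"
    rows = ["| " + tool + " | | | | |" for tool in tools]
    return "\n".join([header, divider] + rows)
```

Paste the output into Notion or any markdown editor and fill in the blank cells by hand; the point is to make starting the document frictionless.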

Quick-Reference Cheat Sheet

  • Use a unique, strong password for every AI tool — store in 1Password, Bitwarden, or equivalent
  • Enable MFA on every platform that supports it; prefer an authenticator app over SMS
  • API keys ≠ passwords — treat them as sensitive credentials, store in a password manager, never in shared docs
  • Set spending caps on all API accounts before generating your first key
  • Flat-rate tools (ChatGPT Plus, Claude Pro, Perplexity Pro) cost ~$20/month each — audit quarterly to avoid paying for unused subscriptions
  • Never share a single login across team members — use team plans or individual free tiers
  • One API key per project; revoke keys when projects end
  • Check active sessions and usage logs monthly — 15 minutes prevents expensive surprises
  • Connected third-party apps inherit your account permissions — remove ones you no longer use
  • Recovery email and phone must be current — a locked account with outdated recovery info is effectively lost

Key Takeaways

  1. AI tools use four billing models — flat subscription, per-token API, tier-based, and per-seat — and each carries different financial risk profiles
  2. API keys are high-risk credentials: a single leaked key can generate large charges without any account login
  3. Every major platform provides session logs and usage dashboards — checking them monthly is the single highest-value security habit
  4. Shared logins violate terms of service, eliminate audit trails, and expose conversation history across users
  5. A 90-day audit of sessions, billing, and connected apps takes 15 minutes and catches most problems before they escalate
  6. Your AI Stack Master Reference document is a living asset — update it every time you add or remove a tool
Knowledge Check

You generate an OpenAI API key for a small automation project. What should you do immediately after generating it?

Your small marketing team wants to use ChatGPT daily. The budget-conscious manager suggests everyone share one ChatGPT Plus account at $20/month. What's the primary problem with this approach?

Which of the following AI tools ended its free trial in 2023, meaning new users must pay to access it?

A colleague tells you they haven't changed their AI account passwords or checked active sessions since creating the accounts 18 months ago. What's the most practical first step you'd recommend?

You're managing three separate AI-powered projects, each using the OpenAI API. What's the recommended approach to API key management across these projects?
