Setting up your AI accounts and security
Most professionals waste their first week with AI tools fumbling through account setups, hitting unexpected paywalls, and accidentally sharing sensitive company data. This lesson cuts through that. You will walk away knowing exactly which accounts to create, what each tier actually costs, where your data goes when you hit send, and how to configure every major platform so your information — and your employer's — stays protected. Think of this as your security and setup reference sheet: keep it open the first time you configure each tool.
- The free tiers of ChatGPT, Claude, and Gemini are genuinely useful — but each has hard limits that will block you at the worst moment
- Your prompts are used for model training by default on most free plans — you must opt out manually
- Business email addresses sometimes unlock enterprise-grade data protections automatically, depending on the platform
- Two-factor authentication (2FA) is not optional if you store work-related prompts or use AI integrations connected to company systems
- API keys and consumer accounts are fundamentally different products with different pricing, data policies, and use cases
- Password managers are the practical prerequisite to managing five or more AI tool accounts securely
- Data residency — where your data is physically stored — matters for teams in the EU, healthcare, legal, and finance sectors
The Account Landscape: What You're Actually Signing Up For
The major AI platforms have split their products into consumer accounts and API access. When you sign up at chat.openai.com, you get a consumer ChatGPT account. When you sign up at platform.openai.com, you get API access — a completely separate product with separate billing, separate data policies, and no chat interface. The same split exists at Anthropic (claude.ai vs api.anthropic.com) and Google (gemini.google.com vs Google AI Studio). Most professionals only ever need the consumer account, but knowing the distinction prevents expensive confusion later.
Within consumer accounts, every major platform now offers three tiers: free, paid personal (typically $20/month), and enterprise or team plans (typically $25–$30 per user per month, billed annually). The jump from free to paid unlocks meaningfully more powerful models — GPT-4o instead of GPT-3.5, Claude Sonnet instead of Claude Haiku — plus higher usage limits and, critically, stronger default data protections. The jump to enterprise adds SSO, admin consoles, audit logs, and contractual data privacy guarantees. For solo professionals, the $20/month tier is almost always the right starting point.
- ChatGPT Free: GPT-4o access (limited), memory off by default, data used for training unless opted out
- ChatGPT Plus ($20/mo): Full GPT-4o access, higher rate limits, access to o1 and o3-mini models, training opt-out available
- Claude Free: Claude Haiku + limited Sonnet access, 5 messages per day on Sonnet, training opt-out available
- Claude Pro ($20/mo): Full Claude Sonnet and Opus access, 5x more usage, Projects feature for persistent context
- Gemini Free: Gemini 1.5 Flash, integrated with Google Workspace on personal accounts
- Gemini Advanced ($19.99/mo via Google One): Gemini 1.5 Pro, 1TB storage bundle, deeper Workspace integration
- Perplexity Pro ($20/mo): Unlimited pro searches, choice of GPT-4o or Claude as the underlying model
Start With One Paid Account
Platform Comparison: Tiers, Costs, and Key Limits
| Platform | Free Tier Model | Paid Tier ($20/mo) | Context Window (Paid) | Key Free Limit |
|---|---|---|---|---|
| ChatGPT | GPT-4o (limited) | GPT-4o + o1/o3-mini | 128K tokens | ~10 GPT-4o msgs/3hrs |
| Claude | Haiku + limited Sonnet | Sonnet + Opus | 200K tokens | 5 Sonnet msgs/day |
| Gemini | Gemini 1.5 Flash | Gemini 1.5 Pro | 1M tokens | Rate limits vary |
| Perplexity | Limited Pro searches | Unlimited Pro + model choice | Varies by model | 5 Pro searches/day |
| Notion AI | 20 AI responses free | Included in Notion Plus ($16/mo) | Based on page content | 20 lifetime free responses |
| GitHub Copilot | Free for students/OSS | $10/mo individual | Code file context | Verified eligibility required |
Data Privacy: Where Your Prompts Actually Go
Every prompt you send travels to a third-party server. That is the fundamental reality of cloud-based AI tools. On free plans, OpenAI, Anthropic, and Google all reserve the right to use your conversations to improve their models — though all three allow you to opt out in settings. On paid personal plans, the default is the same, but opt-out is clearly available and widely recommended. On enterprise plans, all three providers contractually commit to not training on your data. The difference is meaningful: enterprise contracts include data processing agreements (DPAs) that satisfy GDPR and similar regulations.
The risk is not that someone at OpenAI will read your specific prompt about Q3 forecasts. The risk is that sensitive information — client names, unreleased product details, personal employee data, financial projections — enters a training pipeline and could theoretically surface in responses to other users in edge cases. Major enterprises have already banned specific AI tools for exactly this reason. Samsung famously restricted ChatGPT use internally after engineers accidentally pasted proprietary chip designs into prompts in 2023. The rule is simple: treat the prompt box like a public channel until you have a signed enterprise agreement.
- Go to ChatGPT Settings → Data Controls → turn off 'Improve the model for everyone'
- In Claude, go to Privacy Settings → disable 'Allow Anthropic to use my content to train models'
- In Gemini, go to My Activity → turn off Gemini Apps Activity to stop conversation storage
- Never paste client names, employee data, financial figures, or unreleased product details into any free-tier tool
- If your company has a Microsoft 365 Copilot or Google Workspace subscription, use those — they include enterprise-grade data protections by default
- When in doubt, anonymize: replace 'Acme Corp's Q3 revenue of $4.2M' with 'a company's quarterly revenue of $X'
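The anonymize-before-prompting habit is easy to automate. Below is a minimal sketch of a pre-prompt scrubber; the term list and replacements are illustrative placeholders, not a real client registry, and simple literal matching like this is a first pass rather than a guarantee.

```python
# Minimal pre-prompt scrubber: replace known sensitive terms before any
# text is sent to an AI tool. The entries below are illustrative only;
# a real version would hold your actual client and project names.
SENSITIVE_TERMS = {
    "Acme Corp": "Client A",
    "$4.2M": "$X",
    "Jane Smith": "a senior analyst",
}

def scrub(prompt: str) -> str:
    """Apply literal replacements; always review the output before sending."""
    for term, placeholder in SENSITIVE_TERMS.items():
        prompt = prompt.replace(term, placeholder)
    return prompt

print(scrub("Summarize Acme Corp's Q3 revenue of $4.2M for Jane Smith."))
# → Summarize Client A's Q3 revenue of $X for a senior analyst.
```

A shared team copy of the term map doubles as a living definition of what your team considers sensitive.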
| Plan Type | Training on Your Data? | Data Retention | GDPR/Compliance Ready? | Recommended For |
|---|---|---|---|---|
| Free (all platforms) | Yes, unless opted out | Up to 30 days (varies) | No formal DPA | Personal exploration only |
| Paid Personal ($20/mo) | Yes, unless opted out | Up to 30 days (varies) | No formal DPA | Solo professionals, non-sensitive work |
| Teams/Business ($25-30/user) | No (by default) | Configurable | DPA available | Small teams, client-facing work |
| Enterprise (custom pricing) | No (contractual) | Admin-controlled | Full DPA, SOC2, HIPAA options | Large orgs, regulated industries |
| Microsoft 365 Copilot ($30/user) | No | Tied to M365 retention | Full compliance suite | Orgs already on Microsoft stack |
| Google Workspace Gemini (add-on) | No | Tied to Workspace retention | Full compliance suite | Orgs already on Google Workspace |
The Enterprise Boundary Is a Contract, Not a Feature
Account Security Configuration
Prompt
I'm setting up ChatGPT Plus for professional use. What security settings should I configure right now, and what are the default settings I should be aware of that might affect my data privacy?
AI Response
Here are the key security and privacy settings to configure in ChatGPT Plus immediately:

**Security settings:**
- Enable two-factor authentication (Settings → Security)
- Review active sessions and log out of any unrecognized devices
- Confirm your account email is secure — it's your recovery method

**Privacy/data settings:**
- Settings → Data Controls → 'Improve the model for everyone' — verify this is off; do not assume the default
- Memory: decide if you want ChatGPT to remember details across conversations (Settings → Personalization → Memory)
- Chat history: you can archive or delete individual conversations

**Default behaviors to know:**
- Conversations are stored and visible in your history unless you use Temporary Chat mode
- Shared links to conversations are accessible to anyone with the URL
- Custom GPTs you create are public by default unless you change visibility settings
Setting Up 2FA and Password Management
Two-factor authentication blocks the vast majority of account takeover attacks. All major AI platforms support it — ChatGPT, Claude, and Gemini all offer authenticator app-based 2FA (Google Authenticator, Authy, or a hardware key via TOTP). SMS-based 2FA is better than nothing but is vulnerable to SIM-swapping attacks; use an authenticator app where possible. Enable 2FA immediately after creating any AI account you plan to use for work. The setup takes under three minutes and the protection is substantial — especially if you store sensitive prompts, custom instructions, or integrated API keys within your account.
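For the curious: the six-digit codes an authenticator app shows are not stored anywhere. They are derived on-device from a shared secret using the TOTP algorithm (RFC 6238), which is why the same secret enrolled in two apps produces identical codes. A stdlib-only sketch, demonstrated with the RFC's published test secret rather than any real account:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, digits=6, step=30):
    """Derive the current TOTP code from a base32 secret (RFC 6238)."""
    # Pad the base32 string to a multiple of 8 characters before decoding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # → 287082
```

The practical takeaway: the QR code you scan at setup *is* the secret, so treat the setup screen itself as sensitive and store recovery codes in your password manager.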
A password manager is not optional once you're managing five or more AI tools alongside your existing work accounts. 1Password ($3/month), Bitwarden (free tier is strong), and Dashlane ($5/month) all integrate with browsers and mobile apps. The practical reason: AI platforms update their interfaces and occasionally force password resets. Reusing passwords across AI tools and work systems creates a single point of failure. Generate unique 20+ character passwords for each AI account, store them in your manager, and you eliminate one of the most common vectors for account compromise. This is foundational digital hygiene, not advanced security.
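Your password manager handles generation for you, but if you want to see concretely what "unique 20+ character password" means, this stdlib sketch does the same thing; the character set is an arbitrary choice for illustration:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Cryptographically random password; store it in a manager, never reuse it."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```

The key property is the `secrets` module (cryptographic randomness) rather than `random` (predictable, never appropriate for credentials).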
| Security Action | Platform | Where to Find It | Priority |
|---|---|---|---|
| Enable 2FA (authenticator app) | ChatGPT | Settings → Security → Two-factor authentication | Critical |
| Enable 2FA | Claude (Anthropic) | Account Settings → Security | Critical |
| Enable 2FA | Google (Gemini) | myaccount.google.com → Security | Critical |
| Disable model training | ChatGPT | Settings → Data Controls → Improve the model | High |
| Disable model training | Claude | Privacy Settings → Training preferences | High |
| Disable Gemini activity | Gemini | myactivity.google.com → Gemini Apps Activity | High |
| Review active sessions | All platforms | Security or Account settings on each | Medium |
| Set conversation visibility | ChatGPT | Settings → Data Controls → Chat history | Medium |
| Unique password via manager | All platforms | 1Password / Bitwarden / Dashlane | Critical |
API Keys Are Not Account Passwords — Treat Them Differently
Goal: By the end of this task, your ChatGPT account has 2FA enabled, data training disabled, and credentials stored securely — meeting the baseline security standard for professional use.
1. Create or log into your ChatGPT account at chat.openai.com — if starting fresh, use your professional email address.
2. Navigate to Settings (click your profile icon) → Security → enable Two-Factor Authentication using an authenticator app like Google Authenticator or Authy.
3. Navigate to Settings → Data Controls → locate 'Improve the model for everyone' and confirm it is turned off.
4. Navigate to Settings → Personalization → Memory — decide whether to enable memory (useful for consistent work context) or leave it off (better for privacy on shared devices).
5. Open a new chat and switch to 'Temporary Chat' mode (top of screen) — send one test prompt and confirm it does not appear in your chat history afterward.
6. Log into your password manager (or create a free Bitwarden account at bitwarden.com) and save your ChatGPT credentials with a unique generated password.
Quick Reference: Account Setup Cheat Sheet
- ChatGPT Plus ($20/mo) = GPT-4o + o1 models + higher limits; best for general writing, analysis, coding assistance
- Claude Pro ($20/mo) = 200K context window + Sonnet/Opus; best for long documents, nuanced writing, research synthesis
- Gemini Advanced ($19.99/mo) = Gemini 1.5 Pro + Workspace integration; best for Google-native workflows
- Free tiers train on your data by default — opt out in settings on every platform
- Enterprise/Teams plans = contractual data protection; required for client data, EU personal data, regulated industries
- 2FA via authenticator app: enable on ChatGPT, Claude, and Google before first professional use
- Password manager (Bitwarden free or 1Password $3/mo): generate unique passwords for each AI account
- API keys ≠ account passwords: store separately, never share, monitor for unauthorized usage
- Samsung rule: never paste proprietary data, client names, or unreleased financials into any free-tier or personal-plan tool
- Temporary Chat mode in ChatGPT: conversations not saved — use for any session involving sensitive context
Key Takeaways
- Consumer accounts and API accounts are separate products — most professionals only need the consumer account
- The $20/month paid tier on ChatGPT or Claude is the practical minimum for serious professional use
- Free plans default to using your data for model training — opt out manually on every platform you use
- Enterprise plans provide contractual data protections; personal paid plans do not
- Enable 2FA with an authenticator app immediately on every AI account connected to your work
- Treat the prompt box as a public channel until you have an enterprise agreement in place
- API keys carry billing access — store them in a secrets manager, never in chat or shared documents
Managing API Keys, Billing, and Organizational Access
Once your accounts are live, the real security work begins. API keys are the silent attack surface most professionals ignore until something goes wrong. A leaked OpenAI key can rack up thousands of dollars in charges overnight — OpenAI has no automatic fraud protection equivalent to a credit card. Organizational access controls determine who on your team can see, edit, or export AI-generated content. Getting both of these right early saves you from expensive, embarrassing cleanup later. This section is your reference guide for doing both correctly.
API Keys: What They Are and Why They're Dangerous
An API key is a long alphanumeric string that authenticates your application or script to an AI service without requiring a username and password each time. When you paste an OpenAI key into a tool like Zapier, Make, or a custom script, that tool gains the full billing authority of your account. There are no built-in spending limits by default. If that key gets committed to a public GitHub repository — a shockingly common mistake — treat it as already compromised: bots continuously scan public repositories for credentials, and leaked keys are routinely exploited within minutes of exposure.
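A crude version of what those scanning bots do, and what a defensive pre-commit secret scanner does, is a regex sweep. The patterns below are illustrative only; real key formats vary by provider and change over time, which is why dedicated tools such as gitleaks or GitHub's built-in secret scanning maintain up-to-date rule sets:

```python
import re

# Illustrative patterns only. Real key formats differ by provider and
# change over time; use a maintained scanner for actual protection.
KEY_PATTERNS = {
    "openai-style": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "anthropic-style": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
}

def find_possible_keys(text):
    """Return (label, match) pairs for anything resembling an API key."""
    hits = []
    for label, pattern in KEY_PATTERNS.items():
        hits.extend((label, m) for m in pattern.findall(text))
    return hits

# A deliberately fake key for demonstration:
source = 'client = OpenAI(api_key="sk-FAKEexample1234567890abcd")'
print(find_possible_keys(source))
```

Running a sweep like this over a repository before the first push is cheap insurance; wiring a maintained scanner into pre-commit hooks is the durable fix.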
Anthropic (Claude), Google (Gemini), and OpenAI all issue API keys from their developer dashboards, separate from the consumer chat interfaces. A ChatGPT Plus subscription and an OpenAI API account are billed independently — many users don't realize this until they see two separate charges. Your API usage is metered by tokens: roughly 750 words equals 1,000 tokens. GPT-4o costs $5 per million input tokens and $15 per million output tokens as of mid-2024. Claude 3.5 Sonnet runs $3 per million input tokens. Small inefficiencies in prompts multiply across thousands of API calls.
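Those per-token prices are easier to reason about with a quick back-of-envelope calculator. This sketch uses the mid-2024 GPT-4o figures quoted above and the 750-words-per-1,000-tokens rule of thumb; actual token counts depend on the tokenizer, so treat results as estimates:

```python
# Mid-2024 GPT-4o prices quoted in the text: $5 / million input tokens,
# $15 / million output tokens. Rule of thumb: 750 words ≈ 1,000 tokens.
PRICE_PER_M_INPUT = 5.00
PRICE_PER_M_OUTPUT = 15.00

def estimate_cost(input_words, output_words, calls=1):
    """Rough dollar estimate for a batch of API calls (word counts per call)."""
    in_tokens = input_words / 750 * 1000 * calls
    out_tokens = output_words / 750 * 1000 * calls
    return in_tokens / 1e6 * PRICE_PER_M_INPUT + out_tokens / 1e6 * PRICE_PER_M_OUTPUT

# 10,000 calls with a 300-word prompt and a 600-word response:
print(round(estimate_cost(300, 600, calls=10_000), 2))  # → 140.0
```

The asymmetry matters: output tokens cost 3x input tokens here, so verbose responses dominate the bill at scale.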
- Generate API keys from the platform dashboard, not from within third-party tools
- Name every key descriptively: 'zapier-marketing-automation' not 'key1'
- Set hard spending limits in OpenAI's billing settings before using any key in production
- Never store a key in plain text in a document, Slack message, or email
- Use environment variables or a secrets manager (like 1Password Secrets Automation) for code projects
- Rotate keys every 90 days or immediately after any suspected exposure
- Delete unused keys — dormant keys are still valid attack vectors
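In code, "use environment variables" looks like this minimal pattern. The variable name shown is the conventional one for OpenAI's tooling, but any name your deployment injects works the same way:

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read the key from the environment so it never appears in source control."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set. Export it in your shell or inject it from "
            "a secrets manager; do not hardcode it in this file."
        )
    return key

# The key is set outside the code, e.g. in your shell:
#   export OPENAI_API_KEY="sk-..."
```

Failing loudly when the variable is missing beats a silent fallback: it prevents the classic fix of pasting the key directly into the script "just to get it working."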
GitHub Will Not Save You
| Platform | Where to Create Keys | Spending Limit Option | Key Rotation Policy |
|---|---|---|---|
| OpenAI (GPT-4, GPT-4o) | platform.openai.com → API Keys | Yes — hard limits in Billing settings | Manual; no forced expiry |
| Anthropic (Claude) | console.anthropic.com → API Keys | Yes — usage limits per key | Manual; no forced expiry |
| Google (Gemini) | console.cloud.google.com → Credentials | Yes — via Cloud Billing budgets | Can set expiry dates |
| Mistral AI | console.mistral.ai → API Keys | Limited — account-level only | Manual |
| Cohere | dashboard.cohere.com → API Keys | Yes — per-key rate limits | Manual |
Team Access and Organizational Settings
ChatGPT Teams ($30/user/month) and ChatGPT Enterprise give administrators a dedicated workspace where conversations are isolated from OpenAI's training data by default. This is a critical distinction from free and Plus accounts, where your inputs may be used to improve models unless you manually opt out in settings. Anthropic's Claude for Teams and Google's Workspace integration with Gemini offer equivalent data isolation. If your company handles client data, contracts, or anything under NDA, you should be operating from an organizational account, not a personal one.
Role-based access control (RBAC) in ChatGPT Teams lets you assign Owner, Admin, or Member roles. Owners control billing and can add or remove users. Admins manage workspace settings and can restrict which GPTs members can access. Members use the tools but cannot change organizational settings. Claude and Gemini for Workspace follow similar patterns, with Gemini inheriting Google Workspace's existing admin console roles. Setting these up correctly from the start prevents the common scenario where a departing employee retains access to a shared AI workspace containing sensitive prompts and outputs.
- Designate one Owner per workspace — ideally an IT or operations lead, not just the person who signed up first
- Add a secondary Owner as backup before the primary Owner travels or changes roles
- Audit member list quarterly and remove accounts for anyone who has left the team
- Restrict custom GPT creation to Admins only if your team is larger than 10 people
- Enable SSO (Single Sign-On) if your organization uses Okta, Azure AD, or Google Workspace — this centralizes offboarding
- Document which API keys belong to which team projects in a shared internal wiki
- Review the Admin console's usage dashboard monthly to spot unusual consumption patterns
| Account Type | Data Training Opt-Out | Team Management | SSO Support | Monthly Cost (USD) |
|---|---|---|---|---|
| ChatGPT Free | Manual opt-out required | None | No | $0 |
| ChatGPT Plus | Manual opt-out required | None | No | $20/user |
| ChatGPT Teams | Off by default | Owner/Admin/Member roles | No | $30/user |
| ChatGPT Enterprise | Off by default | Full RBAC + audit logs | Yes | Custom pricing |
| Claude Pro | Manual opt-out required | None | No | $20/user |
| Claude for Teams | Off by default | Admin controls | Yes (Enterprise) | $30/user |
| Gemini for Workspace | Governed by Google Workspace admin | Inherits Workspace RBAC | Yes | $30/user (Business) |
The Training Data Default Is Not What You Think
Prompt
I manage a marketing team of 8 people. We use ChatGPT Teams, the OpenAI API via Zapier, and Claude Pro on individual accounts. Create a one-page audit checklist I can use to review our AI security setup. Include: data privacy settings, API key hygiene, access control, and offboarding steps. Format it as a table with columns for Item, Current Risk if Ignored, and Action Required.
AI Response
| Item | Current Risk if Ignored | Action Required |
|---|---|---|
| ChatGPT Teams training opt-out | Client data used in model training | Confirm off in Admin console → Settings → Data Controls |
| Individual Claude Pro accounts | No central visibility or control | Migrate to Claude for Teams; disable personal accounts |
| OpenAI API keys in Zapier | Key exposure if Zapier account is compromised | Rotate keys; enable IP restrictions in OpenAI dashboard |
| Offboarding process | Departed staff retain workspace access | Add 'revoke AI access' to HR offboarding checklist |
| API spending limits | Runaway costs from misconfigured automation | Set hard monthly limits in OpenAI billing settings |
| SSO integration | Manual deactivation required per tool | Connect ChatGPT Teams to existing Okta or Google SSO |
Protecting Sensitive Data Inside AI Conversations
Even with the right account tier, how you write prompts determines how much sensitive information enters an AI system. The model processes everything you type — names, revenue figures, client details, internal strategy — and that text is transmitted to and processed on external servers. No AI vendor offers the equivalent of end-to-end encryption where they cannot see your input. Enterprise agreements include contractual data protection, but the data still passes through their infrastructure. Knowing what to anonymize before prompting is a practical skill, not paranoia.
The standard professional practice is data minimization: include only what the model needs to produce a useful output. Replace client names with 'Client A' or 'a mid-size logistics company.' Substitute real revenue figures with representative numbers. Remove employee names from performance-related prompts. This isn't about distrusting vendors — it's about maintaining the same data hygiene you'd apply to any external tool. Notably, Microsoft's Copilot for Microsoft 365 processes data within your existing Microsoft 365 tenant boundary, which is why large enterprises often prefer it for sensitive workflows despite its higher cost ($30/user/month add-on).
| Data Type | Risk Level | Recommended Approach |
|---|---|---|
| Client names and company identifiers | High | Replace with 'Client A', 'Vendor B', or a generic descriptor |
| Revenue, margin, or pricing data | High | Use representative figures or percentage changes instead of absolutes |
| Employee names in HR contexts | High | Use role titles: 'a senior analyst on my team' |
| Internal project codenames | Medium | Use generic labels: 'Project X' or 'our Q4 initiative' |
| Personal contact information | High | Never include; describe the person by role only |
| Legal case details or contract terms | High | Consult legal before using AI; use sealed enterprise agreements |
| General industry data or public information | Low | Safe to include as context |
Build a 'Safe Prompt' Template for Your Team
Goal: Produce a written record of your current AI account security status, with all training opt-outs confirmed, unused API keys deleted, a spending limit set, and your member list verified as current.
1. Log into each AI platform you actively use (ChatGPT, Claude, Gemini, etc.) and navigate to Settings → Privacy or Data Controls. Screenshot or note the current training data opt-out status for each account.
2. In ChatGPT, go to Settings → Data Controls and confirm 'Improve the model for everyone' is toggled off. If it was on, toggle it off now.
3. Open the OpenAI platform dashboard at platform.openai.com. Under API Keys, review every key listed. Delete any named generically ('Secret key', 'key1') or that you don't recognize.
4. In OpenAI's Billing settings, locate 'Usage limits' and set a hard monthly spending cap appropriate for your usage (start with $20 if unsure).
5. If you use any third-party tools connected to an AI API (Zapier, Make, Notion AI, etc.), open each tool and verify which API key it is using. Note the key name in a private document.
6. Check your AI platform workspace member lists. Remove any accounts belonging to people who no longer work on your team or in your organization.
You have your accounts created and your passwords secured. Now the operational layer matters: knowing which tool bills you how, catching unauthorized access before it costs you, and building habits that keep your AI stack clean as it grows. Most professionals ignore this until something breaks — a surprise $80 charge, a shared login that exposes client data, or a tool quietly auto-upgrading to a paid tier. This section gives you the reference material to stay ahead of all three. Treat it as a living cheat sheet you return to each time you add a new AI tool.
Billing Models Across Major AI Tools
AI tools use four distinct billing structures, and mixing them up leads to budget surprises. ChatGPT Plus charges a flat $20/month regardless of how much you use it. Claude Pro is also $20/month flat. These are predictable. API access is different — OpenAI's API bills per token consumed, with GPT-4o costing $5 per million input tokens and $15 per million output tokens as of mid-2024. Midjourney charges by subscription tier ($10–$120/month) based on GPU minutes. Perplexity Pro runs $20/month. GitHub Copilot is $10/month for individuals, $19/month per seat for business.
The danger zone is API access without a spending cap. A misconfigured script or an app that calls GPT-4 in a loop can generate a four-figure bill overnight. OpenAI lets you set hard usage limits in your API dashboard — use them. Set a monthly cap the moment you generate an API key. For flat-subscription tools like ChatGPT Plus and Claude Pro, the risk is different: you may be paying for tools you stopped using. Audit your active subscriptions every 90 days.
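The dashboard cap is the real protection, since it is enforced server-side. A belt-and-braces habit is to also cap spend inside your own scripts so a runaway loop stops early. A hedged sketch of that idea (the per-call cost estimate is yours to supply; whole cents avoid floating-point drift):

```python
# Client-side guard for scripted API loops. This does NOT replace the hard
# usage limit in the provider's billing dashboard; it just stops a runaway
# loop on your end before it burns through a budget.
class BudgetGuard:
    def __init__(self, max_cents):
        self.max_cents = max_cents   # cap in whole cents to avoid float drift
        self.spent_cents = 0

    def charge(self, cost_cents):
        """Record an estimated per-call cost; raise once the cap would be exceeded."""
        if self.spent_cents + cost_cents > self.max_cents:
            raise RuntimeError(f"Budget cap of {self.max_cents} cents reached")
        self.spent_cents += cost_cents

guard = BudgetGuard(max_cents=100)   # $1.00 for this demo
calls_made = 0
for _ in range(10_000):              # a loop that would otherwise run unchecked
    try:
        guard.charge(2)              # rough estimate: ~2 cents per call
        calls_made += 1              # ...the real API call would go here...
    except RuntimeError:
        break
print(calls_made)  # → 50
```

The same pattern works per day or per job; the point is that the script fails closed instead of billing open-endedly.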
| Tool | Billing Type | Cost (2024) | Free Tier? | Spending Cap Available? |
|---|---|---|---|---|
| ChatGPT Plus | Flat monthly | $20/month | Yes (GPT-4o, limited) | No — fixed cost |
| Claude Pro | Flat monthly | $20/month | Yes (limited) | No — fixed cost |
| OpenAI API | Pay-per-token | Varies by model | $5 credit (new accounts) | Yes — set in dashboard |
| Gemini Advanced | Flat monthly | $19.99/month (Google One AI) | Yes (Gemini 1.5 Flash) | No — fixed cost |
| Midjourney | Subscription tiers | $10–$120/month | No (trial ended 2023) | No — tier-based |
| GitHub Copilot | Per seat | $10 individual / $19 business | Yes (students/OSS) | No — fixed cost |
| Perplexity Pro | Flat monthly | $20/month | Yes (limited searches) | No — fixed cost |
API Keys Are Not the Same as Account Passwords
Monitoring Access and Spotting Unauthorized Activity
Every major AI platform logs your sessions. ChatGPT shows active sessions under Settings → Security. Claude shows connected apps and sessions in Account Settings. OpenAI's API dashboard logs every key's usage with timestamps and token counts. Check these logs monthly — not because breaches are common, but because the habit catches both external intrusions and internal mistakes, like a team member using your personal API key for a side project.
- Log into each AI tool's security or account settings page
- Review active sessions — revoke any device or location you don't recognize
- Check API key usage logs for unexpected spikes in token consumption
- Confirm your recovery email and phone number are current
- Verify that MFA is still active (settings sometimes reset after account changes)
- Review any connected third-party apps — remove ones you no longer use
| Platform | Where to Check Sessions | Where to Check API Usage | MFA Location |
|---|---|---|---|
| ChatGPT / OpenAI | Settings → Security → Active Sessions | platform.openai.com → Usage | Settings → Security → 2FA |
| Claude (Anthropic) | Account Settings → Security | console.anthropic.com → Usage | Account Settings → 2FA |
| Google (Gemini) | myaccount.google.com → Security | aistudio.google.com → API | Google Account → 2-Step Verification |
| GitHub (Copilot) | github.com → Settings → Sessions | N/A (flat billing) | Settings → Password & Authentication |
| Midjourney | Discord account security settings | N/A (flat billing) | Discord → Settings → Safety |
Set a Calendar Reminder Now
Sharing Access Safely in a Team Context
Shared logins are the most common AI security mistake in small teams. When two people share one ChatGPT account, you lose audit trails, risk conversation history exposure, and violate most platforms' terms of service. ChatGPT Team and Claude for Teams (each $25–$30/user/month depending on billing cycle) both offer proper multi-user workspaces with separate logins and admin controls. If budget is tight, individual free tiers are safer than shared paid accounts.
For API keys used by a team, use environment variables rather than hardcoded credentials, and assign one key per project or application — not one key for everything. This way, revoking a compromised key for one project doesn't break your entire stack. If you use Notion AI or similar tools embedded in a workspace, audit who has workspace access, since AI features inherit those permissions and can surface information across the workspace.
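One-key-per-project is easy to enforce by naming the environment variables per project. The project names and variable names below are illustrative, not a convention any provider mandates:

```python
import os

# One key per project, each under its own environment variable. Revoking
# the marketing key in the provider dashboard then breaks only the
# marketing automation, not every script the team runs.
PROJECT_KEY_VARS = {
    "marketing-automation": "OPENAI_API_KEY_MARKETING",
    "support-summarizer": "OPENAI_API_KEY_SUPPORT",
}

def key_for(project: str) -> str:
    """Look up the project's dedicated key from the environment."""
    var = PROJECT_KEY_VARS[project]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before running '{project}' scripts")
    return key
```

A mapping like this also doubles as the internal documentation recommended above: it records exactly which key belongs to which project.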
One Key Per Project Is a Professional Standard
Prompt
I use the following AI tools: ChatGPT Plus, Claude Pro, and the OpenAI API for a personal project. Create a concise monthly security checklist I can paste into Notion. Include: session review steps, billing checks, API key hygiene, and MFA verification. Format as a checklist with checkboxes.
AI Response
## Monthly AI Security Checklist

**ChatGPT Plus**
- [ ] Review active sessions (Settings → Security → Active Sessions)
- [ ] Confirm MFA is enabled
- [ ] Check billing statement matches $20 flat charge

**Claude Pro**
- [ ] Review connected apps and active sessions
- [ ] Confirm recovery email is current
- [ ] Verify MFA status

**OpenAI API**
- [ ] Check usage dashboard for unexpected token spikes
- [ ] Confirm monthly spending cap is set
- [ ] Review API key list — revoke unused keys
- [ ] Verify no keys are stored in shared documents or repos

**General**
- [ ] Remove AI tools from subscriptions no longer in use
- [ ] Check password manager for any weak or reused AI passwords
- [ ] Confirm no shared logins exist across team members
Goal: A completed AI Stack Master Reference document listing all your AI tools, their billing details, MFA status, and an audit reminder — a practical asset you update every quarter.
1. Open a blank document in Notion, Google Docs, or your preferred tool — title it 'AI Stack Master Reference.'
2. Create a table with five columns: Tool Name, Account Email, Billing Type & Cost, MFA Status, Last Security Check Date.
3. Fill in every AI tool you currently have an account with — include free accounts, not just paid ones.
4. For each tool, log into the account settings and confirm MFA is active — update the MFA Status column accordingly.
5. Navigate to the billing or usage section of each tool and record the exact billing type and current cost in your table.
6. Set a recurring 90-day calendar event titled 'AI Stack Audit' and link it to this document in the event description.
7. Save and store the document somewhere you'll actually find it — bookmark it in your browser or pin it in your project management tool.
Quick-Reference Cheat Sheet
- Use a unique, strong password for every AI tool — store in 1Password, Bitwarden, or equivalent
- Enable MFA on every platform that supports it; prefer an authenticator app over SMS
- API keys ≠ passwords — treat them as sensitive credentials, store in a password manager, never in shared docs
- Set spending caps on all API accounts before generating your first key
- Flat-rate tools (ChatGPT Plus, Claude Pro, Perplexity Pro) cost ~$20/month each — audit quarterly to avoid paying for unused subscriptions
- Never share a single login across team members — use team plans or individual free tiers
- One API key per project; revoke keys when projects end
- Check active sessions and usage logs monthly — 15 minutes prevents expensive surprises
- Connected third-party apps inherit your account permissions — remove ones you no longer use
- Recovery email and phone must be current — a locked account with outdated recovery info is effectively lost
Key Takeaways
- AI tools use four billing models — flat subscription, per-token API, tier-based, and per-seat — and each carries different financial risk profiles
- API keys are high-risk credentials: a single leaked key can generate large charges without any account login
- Every major platform provides session logs and usage dashboards — checking them monthly is the single highest-value security habit
- Shared logins violate terms of service, eliminate audit trails, and expose conversation history across users
- A 90-day audit of sessions, billing, and connected apps takes 15 minutes and catches most problems before they escalate
- Your AI Stack Master Reference document is a living asset — update it every time you add or remove a tool
Check Your Understanding
You generate an OpenAI API key for a small automation project. What should you do immediately after generating it?
Your small marketing team wants to use ChatGPT daily. The budget-conscious manager suggests everyone share one ChatGPT Plus account at $20/month. What's the primary problem with this approach?
Which of the following AI tools ended its free trial in 2023, meaning new users must pay to access it?
A colleague tells you they haven't changed their AI account passwords or checked active sessions since creating the accounts 18 months ago. What's the most practical first step you'd recommend?
You're managing three separate AI-powered projects, each using the OpenAI API. What's the recommended approach to API key management across these projects?
