Ethical use of AI at work: what you should and shouldn't do
AI tools are now embedded in everyday professional work — drafting emails in ChatGPT, analyzing data with Gemini, generating visuals in Midjourney, writing code with GitHub Copilot. Most professionals using these tools have received zero formal guidance on where the ethical lines are. That gap creates real risk: reputational damage, legal liability, breached confidentiality, and biased decisions that affect real people. This lesson gives you a clear, practical framework for what responsible AI use actually looks like — not abstract principles, but specific behaviors you can apply starting today.
7 Things You Need to Know Before We Go Further
- AI tools can expose confidential data — anything you type into ChatGPT or Claude may be used for model training unless you explicitly disable that setting or use an enterprise plan.
- AI outputs are not neutral — they reflect the biases present in training data, which skews toward English-language, Western, and majority-group perspectives.
- You are legally responsible for AI-generated content you publish or act on — the AI is not a legal entity and carries no liability.
- Copyright on AI-generated content is unsettled — the U.S. Copyright Office, and at least one U.S. federal court, have held that purely AI-generated work is not copyrightable, and other jurisdictions are weighing similar questions.
- Disclosure norms are emerging fast — some employers, publishers, and regulators now require you to declare when AI was used in producing work.
- Automation bias is a documented cognitive trap — people systematically over-trust AI outputs, even when those outputs are factually wrong.
- The EU AI Act (in force since August 2024, with obligations phasing in through 2026-2027) classifies some workplace AI applications as 'high-risk,' triggering legal obligations for the companies using them.
Data Privacy: The Risk You're Probably Already Taking
When you paste a client contract into ChatGPT to get a summary, you've just sent confidential business information to a third-party server. OpenAI's default settings for free and Plus accounts allow conversations to be used to improve future models, and other consumer tools have their own data-use defaults that change over time, so check each vendor's current policy rather than assuming. This isn't a hypothetical risk — Samsung engineers famously leaked proprietary semiconductor code by pasting it into ChatGPT in 2023, prompting the company to ban the tool internally. The fix isn't to stop using AI; it's to know which data categories should never enter a consumer AI product.
Enterprise plans change this equation significantly. ChatGPT Team and Enterprise, Claude for Enterprise, and Microsoft Copilot for Microsoft 365 all offer contractual guarantees that your data won't be used for training. If your organization hasn't purchased one of these plans, treat every AI prompt as a potentially public document. That framing sounds extreme, but it's the safest default. The practical rule: strip identifying details before prompting, or use only anonymized, aggregated, or publicly available information.
- Never input: client names, contract terms, employee personal data, financial projections, health information, or unreleased product details into consumer AI tools.
- Always check: whether your company has an enterprise AI agreement before assuming your inputs are private.
- Safe to input: publicly available information, anonymized scenarios, your own writing for editing, general industry questions.
- When in doubt: replace real names and figures with placeholders (e.g., 'Client A', '$X million') before prompting.
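To make the placeholder rule concrete, here is a minimal Python sketch of a pre-prompt anonymizer. The term dictionary, the regex, and the example strings are illustrative assumptions, not a production redaction tool — real client data needs a maintained term list and human review.

```python
import re

# Minimal anonymization sketch: replace known sensitive terms with
# placeholders before a prompt leaves your machine. The term list and
# patterns here are illustrative -- maintain your own for real use.
SENSITIVE_TERMS = {
    "Acme Corp": "Client A",
    "Jane Doe": "Employee 1",
}
MONEY_PATTERN = re.compile(r"\$\d[\d,.]*\s*(million|billion|[kKmM])?")

def anonymize(prompt: str) -> str:
    for term, placeholder in SENSITIVE_TERMS.items():
        prompt = prompt.replace(term, placeholder)
    # Replace dollar figures with a generic placeholder.
    return MONEY_PATTERN.sub("$X", prompt)

if __name__ == "__main__":
    raw = "Summarize the Acme Corp contract: Jane Doe negotiated $2.4 million."
    print(anonymize(raw))
    # -> "Summarize the Client A contract: Employee 1 negotiated $X."
```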
Quick Privacy Check Before Every Prompt
Before you hit enter, ask one question: would I be comfortable seeing this prompt in a data breach notification? If the answer is no, anonymize it or don't send it.
Data Risk Reference: What's Safe to Input vs. What Isn't
| Data Type | Consumer AI (e.g. free ChatGPT) | Enterprise AI (e.g. Copilot M365) | Recommended Action |
|---|---|---|---|
| Client names & contact details | ❌ High risk | ⚠️ Check contract | Anonymize before prompting |
| Internal financial data | ❌ High risk | ⚠️ Check contract | Use aggregate or dummy figures |
| Employee personal data | ❌ High risk | ⚠️ GDPR/CCPA applies | Never input; use HR-approved tools only |
| Unreleased product info | ❌ High risk | ⚠️ Check NDA scope | Treat as trade secret; avoid AI entirely |
| Health / medical records | ❌ High risk | ❌ HIPAA risk | Never input into general AI tools |
| Published industry reports | ✅ Safe | ✅ Safe | Use freely |
| Your own draft documents | ✅ Generally safe | ✅ Safe | Remove any embedded client data first |
| Anonymized case scenarios | ✅ Safe | ✅ Safe | Good practice for all AI use |
Accuracy and Verification: AI Lies Confidently
AI language models hallucinate — they generate plausible-sounding but factually incorrect information — and they do it without signaling any uncertainty. ChatGPT has cited fake legal cases, fabricated academic papers with real-sounding authors, and invented statistics that are entirely plausible but completely wrong. A New York lawyer was sanctioned in 2023 after submitting an AI-generated brief containing six non-existent case citations. The model didn't malfunction; it functioned exactly as designed — predicting the next most likely token, not retrieving verified facts.
The practical implication is straightforward: AI output is a first draft, not a final source. Every factual claim, statistic, citation, or legal reference that comes from an AI tool must be independently verified before you act on it or share it. Perplexity AI reduces this problem by grounding answers in cited web sources, but even then, the citations can be misrepresented. Tools like ChatGPT with web browsing enabled are more reliable for recent facts than the base model, but verification remains your responsibility. Build verification into your workflow, not as an afterthought.
- Treat all AI-generated statistics as unverified until you've found the primary source.
- Never cite an AI-generated reference in a client deliverable, legal document, or published article without locating the original source yourself.
- Use Perplexity AI or ChatGPT with web browsing for fact-sensitive queries — but still check the linked sources directly.
- When an AI answer sounds surprisingly compelling, apply extra skepticism — fluency is not accuracy.
- For regulated industries (legal, medical, financial), assume AI output requires professional review before any use.
Verification Standards by Output Type
| AI Output Type | Hallucination Risk | Verification Required | Verification Method |
|---|---|---|---|
| Statistics and data points | High | Always | Find the primary source (report, database, study) |
| Legal citations and case names | Very High | Always | Check Westlaw, LexisNexis, or official court records |
| Medical or clinical information | High | Always | Cross-reference peer-reviewed literature or clinical guidelines |
| Summaries of documents you provided | Low-Medium | Spot-check | Re-read the source document for key claims |
| Creative or persuasive writing | N/A | Judgment call | Review for tone, accuracy of any embedded facts |
| Code generation (GitHub Copilot) | Medium | Test and review | Run tests; check for security vulnerabilities |
| Historical facts (pre-2021) | Medium | Spot-check | Verify key claims via encyclopedic or academic sources |
| Current events (post-training cutoff) | Very High | Always | Use web-enabled tools and verify via news sources |
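The code-generation row deserves a concrete illustration. Below is a minimal sketch of test-first review of an AI-suggested function; `slugify` and its test cases are hypothetical stand-ins for whatever Copilot actually produced. The point is the habit: write the edge cases yourself before trusting the suggestion.

```python
# Minimal sketch: treat AI-generated code as untrusted until it passes
# your own tests. `slugify` stands in for any function a tool like
# Copilot suggested -- the implementation and cases are illustrative.
import re

def slugify(title: str) -> str:  # AI-suggested implementation (example)
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify():
    # Happy path
    assert slugify("Ethical AI at Work") == "ethical-ai-at-work"
    # Edge cases the AI may not have considered
    assert slugify("") == ""
    assert slugify("---") == ""
    assert slugify("Café & Crème") == "caf-cr-me"  # non-ASCII is dropped

if __name__ == "__main__":
    test_slugify()
    print("all checks passed")
```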
Training Cutoffs Matter More Than You Think
Every model only knows the world up to its training cutoff, and it will answer questions about later events with the same confident fluency it applies to everything else. For anything time-sensitive, use a web-enabled tool and verify against current sources.
What Good AI Verification Looks Like in Practice
Prompt
What percentage of Fortune 500 companies are currently using generative AI in their operations?
AI Response
According to a 2024 McKinsey Global Survey, approximately 65% of organizations report regularly using generative AI — nearly double the figure from the previous year. Fortune 500 adoption specifically tracks close to this figure, with many companies piloting tools across marketing, legal, and software development functions.
Verification step: The user then searches McKinsey's website directly, locates the 2024 'State of AI' report, confirms the 65% figure applies to 'organizations surveyed' (not specifically Fortune 500), and adjusts their language accordingly before including it in a client presentation. The stat is real; the framing the AI used was subtly misleading.
Transparency and Disclosure: The Honesty Layer
Using AI to help you work faster is not inherently dishonest. Passing off AI-generated work as fully your own — when the context implies otherwise — can be. The distinction matters because professional relationships run on trust, and undisclosed AI use can damage that trust if discovered. A consultant who submits a 40-page strategy report generated almost entirely by ChatGPT, billed at senior consultant rates, is misrepresenting the value being delivered. The same consultant using ChatGPT to accelerate research and structuring, while applying genuine expertise to the analysis, is using AI responsibly.
Disclosure norms vary by context. Academic institutions typically require explicit declaration of AI assistance. Many publishers — including the Financial Times, Nature, and major legal journals — have formal AI disclosure policies. Some employers now include AI use clauses in their work product policies. Even where no formal policy exists, proactive transparency is the professionally safer choice. Saying 'I used Claude to draft this initial structure, which I then reviewed and revised' demonstrates competence and honesty simultaneously. It positions you as someone who uses tools skillfully, not someone who outsources judgment.
| Context | Disclosure Expectation | Recommended Practice |
|---|---|---|
| Academic submissions | Mandatory at most institutions | Declare AI tools used; check institution policy |
| Client deliverables | Varies; increasingly expected | Disclose if AI generated substantial content or analysis |
| Internal reports and memos | Low formal requirement | Mention AI assistance if it materially shaped the output |
| Published articles or thought leadership | Required by many publishers | Check publication's AI policy before submitting |
| Code submitted in professional projects | Emerging expectation | Disclose Copilot or similar tools used in production code |
| Job applications and cover letters | No formal rule; ethically gray | Avoid submitting purely AI-generated applications |
| Marketing copy | Minimal formal requirement | Disclose if required by advertising standards in your sector |
| Legal documents and filings | Court-specific rules emerging | Check jurisdiction rules; disclose AI assistance to supervising attorney |
The 'No Policy Means No Problem' Trap
The absence of a formal AI policy at your organization is not permission to do anything. You still carry professional and legal responsibility for your outputs, and the norms you set for yourself now will protect you when formal policy arrives.
Quick-Start Task: Audit Your Current AI Use
Goal: Produce a one-page personal AI use snapshot that identifies your current exposure to data privacy risk and clarifies your disclosure obligations.
1. Open a blank document and list every AI tool you've used at work in the past 30 days (e.g., ChatGPT, Copilot, Grammarly, Notion AI, Midjourney).
2. For each tool, note whether your organization has an enterprise agreement or whether you're using a personal/consumer account.
3. Recall one specific prompt you sent that contained client, employee, or company-specific information — write down what data was included.
4. Using the Data Risk Reference table in this lesson, classify that data type and assess whether the input was appropriate.
5. Check whether your organization has a published AI use policy (search your intranet, ask your manager, or check your employment contract addenda).
6. Write one sentence describing how you will handle disclosure of AI use in your next client or stakeholder deliverable.
Part 1 Cheat Sheet: Core Rules for Ethical AI Use at Work
- Consumer AI tools (free ChatGPT, Claude.ai) may use your inputs for training — never paste confidential client, employee, or financial data.
- Enterprise AI plans (ChatGPT Enterprise, Copilot M365, Claude for Enterprise) offer data protection guarantees — verify your organization has one before assuming privacy.
- AI hallucinations are systematic, not rare — every factual claim from an AI tool needs independent verification before professional use.
- Training cutoffs mean AI models don't know recent events — cutoff dates vary by model and version (GPT-4-era models range from late 2021 to late 2023; Claude 3.5 Sonnet's cutoff is April 2024), so check your tool's documentation.
- You hold legal liability for AI-generated content you publish or act on — the AI carries none.
- Disclosure expectations vary by context but are tightening — academic, legal, and publishing contexts often now require explicit AI disclosure.
- Automation bias causes people to over-trust AI outputs — stay actively skeptical, especially when answers sound fluent and authoritative.
- Anonymize data before prompting: replace names, figures, and identifiers with placeholders to reduce privacy risk without sacrificing usefulness.
- The EU AI Act introduces legal obligations for high-risk AI applications in the workplace — your company may already be affected.
Key Takeaways from Part 1
- Privacy risk starts the moment you type into a consumer AI tool — the prompt itself is the exposure, not just the response.
- Verification is a professional obligation, not optional due diligence — AI fluency and AI accuracy are completely different things.
- Disclosure is increasingly a formal requirement in academic, legal, and publishing contexts — and an ethical expectation everywhere else.
- Your personal liability for AI-generated work is unchanged by the fact that AI produced it — you signed off on it.
- Enterprise AI agreements are the primary mechanism for managing organizational data risk — know whether your company has one.
Bias in AI: What It Looks Like at Work
Bias in AI isn't abstract — it shows up in hiring tools that downrank female candidates, credit models that penalize certain zip codes, and content generators that default to stereotyped imagery. These aren't edge cases. Amazon scrapped an internal recruiting tool in 2018 after discovering it systematically penalized resumes containing the word 'women's.' The model had trained on a decade of hiring data from a male-dominated industry. The AI learned the pattern, then amplified it. Understanding where bias enters the pipeline helps you catch it before it causes real damage.
Where Bias Enters AI Systems
- Training data bias: The dataset reflects historical inequalities — the model learns and perpetuates them
- Label bias: Human annotators bring their own assumptions when tagging data used to train models
- Measurement bias: Proxies used to represent a concept (e.g., 'zip code' for creditworthiness) encode structural inequities
- Deployment bias: A model trained on one population is applied to a different one with different characteristics
- Feedback loop bias: User behavior shapes model outputs over time — if biased users interact more, the model drifts further
As a professional using AI tools, you're unlikely to be training models from scratch. But you are making decisions about which outputs to trust, which to act on, and which to pass upstream. That makes you a checkpoint in the bias pipeline. When ChatGPT or Gemini returns a recommendation, ask: what population was this likely optimized for? Does the output reflect a narrow worldview? If you're using AI to screen, rank, or evaluate people — candidates, customers, employees — the stakes of unchecked bias are legal, not just ethical.
Bias in High-Stakes Decisions
| Use Case | Bias Risk Level | What to Watch For | Mitigation |
|---|---|---|---|
| Resume screening with AI | High | Gender, name, or school prestige bias | Blind review + human final decision |
| Customer sentiment analysis | Medium | Dialect and cultural expression misread as negative | Test across demographic groups |
| AI-generated marketing copy | Medium | Stereotyped assumptions about audience | Review for implicit assumptions before publishing |
| Meeting summarization | Low | Speaker misattribution, dominant voice bias | Cross-check against your own notes |
| Code review with Copilot | Low | Suggestions reflect dominant coding cultures | Apply your own style guide and standards |
Transparency and Disclosure at Work
Knowing when to disclose AI use is one of the murkiest areas of workplace ethics right now. Norms are forming in real time. Some organizations have explicit policies; most don't. The default principle is straightforward: if AI materially shaped the output someone else is evaluating or relying on, they deserve to know. A client reading your strategic analysis has a legitimate interest in knowing whether it was written by you or generated by Claude and lightly edited. A colleague using your research summary to make a business decision needs to know its source.
Disclosure doesn't mean apologizing. It means being accurate about provenance. 'I used ChatGPT to draft this and then reviewed and edited it' is a professional statement, not a confession. What erodes trust is the gap between what people assume and what actually happened. The higher the stakes — legal advice, medical guidance, financial projections, published content — the more explicit the disclosure should be. When in doubt, disclose. The professional cost of transparency is almost always lower than the reputational cost of concealment discovered later.
- Check your organization's AI use policy before deploying any tool on work tasks
- Disclose AI use to clients, managers, or collaborators when the output is being evaluated or acted upon
- Don't submit AI-generated work as entirely your own in contexts where original authorship is assumed (academic, legal, creative briefs)
- When AI assists with data analysis, note the tool and any known limitations in your methodology section
- In regulated industries (finance, healthcare, law), check whether AI-assisted outputs require specific disclaimers
- Keep records of significant AI interactions when working on high-stakes deliverables — these are your audit trail
| Scenario | Disclosure Required? | Recommended Action |
|---|---|---|
| Internal brainstorm doc drafted with ChatGPT | Usually no | Follow team norms; note if asked directly |
| Client deliverable substantially written by AI | Yes | State the tool used and your editorial role |
| AI-generated images in a client presentation | Yes | Label as AI-generated; confirm client accepts this |
| Code written with GitHub Copilot | Context-dependent | Check IP clauses in contracts; disclose in open-source projects |
| AI used to summarize research for your own use | No | No external disclosure needed |
| AI-assisted legal or compliance document | Yes | Flag for human expert review and note AI involvement |
| Social media post drafted by Notion AI | Platform-dependent | Check platform rules; brand voice review essential |
The Policy Gap Is Real
Most organizations still have no formal AI use policy. Until yours does, apply the disclosure defaults above and document your own practice; a considered norm is far easier to defend than an improvised one.
Prompt
I'm going to use this market analysis in a client presentation. Before I finalize it, tell me: what are the three most significant limitations or potential inaccuracies in what you've written? What should I verify independently?
AI Response
Three limitations worth flagging: 1) Market size figures are based on my training data (cutoff early 2024) — verify against current industry reports from IBISWorld or Statista. 2) The competitive landscape section reflects publicly available information only; recent M&A activity or private funding rounds may have changed it. 3) Regional breakdowns assume global trends apply uniformly — your specific geography may diverge significantly. I'd recommend verifying the top-line numbers and checking for any major industry news from the past six months before presenting.
Intellectual Property and Ownership
AI tools generate text, images, code, and data — and the ownership of that output is legally unsettled in most jurisdictions. The U.S. Copyright Office has stated that purely AI-generated content without meaningful human creative input is not eligible for copyright protection. That creates a real gap: if you publish AI-generated content, competitors can copy it freely. More practically, it means the value you create with AI tools is in your curation, editing, strategy, and judgment — the human layer — not in the raw output itself.
There's a second IP issue that runs in the opposite direction: what you feed into AI tools. When you paste a client contract, internal financial projections, or proprietary product specs into ChatGPT or Claude, you may be exposing confidential information to third-party servers. OpenAI's consumer-tier default settings allow conversations to be used to train future models (users can opt out), and other vendors' data handling policies vary, with enterprise tiers offering stronger protections. The rule is simple: never put information into a consumer AI tool that you wouldn't be comfortable seeing in a data breach or a competitor's hands.
- AI-generated content has weak or no copyright protection — your editorial additions are what you can protect
- Check your vendor's data retention policy before inputting any confidential or client data
- Enterprise tiers of ChatGPT (Team/Enterprise), Claude for Work, and Gemini for Workspace offer data isolation
- Code generated by GitHub Copilot may contain fragments of open-source code — check licensing implications for commercial projects
- Some jurisdictions (EU, UK) are actively legislating on AI-generated content ownership — monitor for updates
- Your employment contract may assign AI-assisted work product to your employer — review it
Use Enterprise Tiers for Sensitive Work
| Information Type | Safe for Consumer AI Tools? | Safer Alternative |
|---|---|---|
| Public market data or news | Yes | Any tier fine |
| Internal strategy documents | No | Enterprise tier with DPA, or keep offline |
| Client names and contact data | No | Anonymize before inputting, or use enterprise tier |
| Financial projections or M&A details | No | Enterprise tier only, or avoid AI entirely |
| Personal employee information | No | Check GDPR/local law; enterprise tier minimum |
| Product specs under NDA | No | Enterprise tier with legal review |
| Your own draft writing or ideas | Generally yes | Standard caution applies |
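Where the anonymizer shown earlier substitutes placeholders, a complementary habit is a "refuse before you paste" gate: scan a draft prompt for obvious identifier patterns and block it if any are found. This is a minimal sketch with illustrative, far-from-exhaustive patterns — real PII detection needs a dedicated tool and policy.

```python
import re

# Minimal "refuse before you paste" gate. The patterns below are
# illustrative examples only -- they catch a few obvious identifiers,
# not everything your policy would classify as sensitive.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "dollar figure": re.compile(r"\$\d[\d,.]*"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of all identifier patterns found in the prompt."""
    return [name for name, rx in PII_PATTERNS.items() if rx.search(prompt)]

draft = "Email jane.doe@acme.com about the $2,400,000 projection."
hits = check_prompt(draft)
if hits:
    print("Do not send. Found:", ", ".join(hits))
else:
    print("No obvious identifiers found; apply judgment anyway.")
```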
Quick-Start Task: Map Your AI Data Exposure
Goal: Identify gaps between your current AI tool usage and appropriate data handling practices, producing a concrete risk summary you can act on immediately.
1. List every AI tool you've used for work in the past two weeks — include ChatGPT, Copilot, Grammarly, Notion AI, and any embedded AI features in tools like Salesforce or LinkedIn.
2. For each tool, identify the most sensitive type of information you've entered — use the data sensitivity table above as your reference.
3. Check whether each tool's default tier is consumer or enterprise — look at your account settings or your organization's software inventory.
4. Flag any tool where you've entered medium- or high-sensitivity data on a consumer tier — these are your risk items.
5. For your top two risk items, look up the tool's data handling policy and note whether an enterprise tier is available and at what cost.
6. Draft a one-paragraph summary of your findings to share with your manager or IT team — include which tools you're using, what data has been entered, and what policy or tier upgrade you recommend.
You now understand AI limitations and bias risks. This section closes the loop with the third pillar of ethical AI use at work: accountability. Who owns the output? Who gets blamed when it goes wrong? How do you build habits that protect you, your team, and your organization? These aren't philosophical questions — they have real answers, and ignoring them has real consequences. Professionals who use AI without accountability frameworks are one bad output away from a compliance issue, a client complaint, or a reputational hit.
Accountability: Who Owns AI Output at Work
When ChatGPT drafts a report and that report contains an error, the model doesn't get called into a meeting. You do. AI tools produce outputs — humans remain responsible for them. This principle is called human-in-the-loop accountability, and it applies whether you're using Copilot to write code, Gemini to summarize a contract, or Claude to draft a client email. The output carries your name the moment you send, publish, or act on it. No vendor terms of service shift that liability to the model provider.
Accountability also runs upward. If you're a manager whose team uses AI tools, you're responsible for establishing clear usage norms — even if your organization hasn't issued formal policy yet. Silence isn't permission, and 'the AI did it' is not a defensible position with clients, auditors, or regulators. Document which tools your team uses, for what tasks, and how outputs are reviewed before they leave the building. That documentation protects everyone.
- You are the author of any AI-assisted output you submit or publish
- Errors in AI output are your professional responsibility to catch
- Managers inherit accountability for their team's AI usage patterns
- Undisclosed AI use in regulated contexts (legal, medical, financial) creates compliance exposure
- Audit trails matter — keep records of what tools generated what outputs for high-stakes work (a minimal logging sketch follows this list)
- Vendor terms do not indemnify you for misuse or negligent review
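If your tooling doesn't provide an audit trail, a lightweight one is easy to keep yourself. This is a minimal sketch assuming a simple JSON-lines file; the field names and the `ai_audit_log.jsonl` path are illustrative, and a real deployment would follow whatever record-keeping your organization mandates.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # illustrative location

def log_ai_interaction(tool: str, task: str, prompt: str,
                       output_summary: str, reviewed_by: str) -> None:
    """Append one audit record per significant AI interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "task": task,
        "prompt": prompt,
        "output_summary": output_summary,
        "reviewed_by": reviewed_by,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example entry (all values hypothetical)
log_ai_interaction(
    tool="ChatGPT (enterprise)",
    task="Draft Q3 client report, section 2",
    prompt="Summarize the attached anonymized market data...",
    output_summary="First draft of market overview; reviewed before use",
    reviewed_by="your.name@example.com",
)
```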
The Attribution Trap
'The AI wrote it' does not transfer responsibility. The moment you send, publish, or act on an output, the attribution question is settled: it's yours.
Disclosure: When to Tell People AI Was Involved
Disclosure norms are evolving fast, but a practical framework already exists. The core question is whether the person receiving your work would consider AI involvement material to their decision to trust or use it. A client paying for original strategic analysis has a different expectation than a colleague receiving a meeting summary. When in doubt, disclose — it costs you almost nothing and protects your credibility. Many organizations are now building disclosure requirements directly into project workflows and contracts.
| Scenario | Disclosure Required? | Recommended Action |
|---|---|---|
| Internal meeting notes drafted with Notion AI | No | Optional note in footer |
| Client-facing report with AI-generated sections | Yes | State AI-assisted drafting in methodology |
| Marketing copy written by ChatGPT | Context-dependent | Follow platform/brand guidelines |
| Academic or certification submission | Yes — often mandatory | Check institution policy first |
| Legal document with AI-drafted clauses | Yes | Disclose to all parties; attorney review required |
| Job application materials | Context-dependent | Follow employer or platform rules |
| Code generated by GitHub Copilot | Check license terms | Review for open-source license conflicts |
Disclosure ≠ Weakness
Disclosing AI use is a statement of method, not an admission of inadequacy. It signals that you use tools skillfully and review their output, which is exactly what clients and colleagues should want.
Building Your Personal AI Ethics Checklist
Policy documents are useful. Habits are better. The professionals who use AI most safely don't stop before each task to consult a rulebook — they've internalized a short mental checklist that runs automatically. The checklist below is designed to become second nature within two to three weeks of consistent use. It covers the four failure modes that cause the most real-world harm: data exposure, hallucination, bias, and undisclosed use.
| Checkpoint | Question to Ask | If the Answer Is No... |
|---|---|---|
| Data Safety | Is this input free of personal, confidential, or proprietary data? | Anonymize or use a private/enterprise-tier tool |
| Accuracy | Have I verified key facts, figures, and citations independently? | Do not publish or submit — verify first |
| Bias Check | Have I reviewed output for unfair assumptions about people or groups? | Revise or regenerate with more explicit instructions |
| Disclosure | Does the recipient need to know AI was involved? | Add a disclosure note before sending |
| Ownership | Do I understand and accept responsibility for this output? | Do not send — escalate or review further |
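For readers who live in a terminal, the same five checkpoints can be encoded as a throwaway script. A minimal sketch follows; the wording paraphrases the table above, and nothing about it substitutes for actual judgment.

```python
# Minimal sketch: the five checkpoints from the table above as a
# pre-send gate. This is a memory aid, not a compliance control.
CHECKLIST = [
    "Data safety: is this input free of personal, confidential, or proprietary data?",
    "Accuracy: have I verified key facts, figures, and citations independently?",
    "Bias: have I reviewed the output for unfair assumptions about people or groups?",
    "Disclosure: if AI involvement is material to the recipient, have I disclosed it?",
    "Ownership: do I understand and accept responsibility for this output?",
]

def pre_send_check() -> bool:
    """Ask each question; return True only if every answer is 'y'."""
    for question in CHECKLIST:
        if input(f"{question} [y/n] ").strip().lower() != "y":
            print("Stop: resolve this item before sending.")
            return False
    print("All checks passed. Send when ready.")
    return True

if __name__ == "__main__":
    pre_send_check()
```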
Make It a Habit, Not a Hurdle
Prompt
Draft a performance improvement plan for an underperforming team member. Do not include any assumptions about the employee's age, background, or personal circumstances. Focus only on observable behaviors and measurable outcomes. Flag any language that could be interpreted as discriminatory.
AI Response
Performance Improvement Plan — Behavioral Focus
Objective: Support [Employee Name] in meeting defined performance standards within 60 days.
Observed Behaviors (past 30 days):
• Missed 3 of 5 weekly reporting deadlines
• Deliverables reviewed contained an average of 4 factual errors per document
• Two client escalations linked to delayed responses
Expected Standards:
• 100% on-time submission of weekly reports
• Deliverable error rate below 1 per document after peer review
• Client response time within 24 business hours
Support Provided:
• Weekly 1:1 check-ins with manager
• Access to writing and time-management resources
• Mid-point review at day 30
[No language flagged as potentially discriminatory — all criteria are behavior- and outcome-based.]
Quick-Start Task: Build Your AI Ethics Reference Card
Goal: Produce a personalized, one-page AI ethics reference card you can use immediately and update as your organization's policy evolves.
1. Open a blank document in your preferred tool (Word, Notion, Google Docs).
2. Create a header: 'My AI Ethics Reference Card' with today's date.
3. List the three AI tools you use most frequently at work and note their data privacy tier (free consumer / enterprise / self-hosted).
4. Copy the Pre-Submission AI Ethics Checklist table from this lesson into your document.
5. Add a 'My Sensitive Data Rules' section and write 2-3 sentences defining what data you will never input into a consumer AI tool.
6. Add a 'Disclosure Defaults' section: write one sentence each for internal use, client-facing use, and regulated/legal use, stating your default disclosure behavior.
7. Save the document and pin or bookmark it somewhere you'll see it daily.
Ethical AI at Work — Quick Reference Cheat Sheet
- Never input names, IDs, health data, financials, or trade secrets into consumer-tier AI tools
- You are legally and professionally responsible for every output you submit — AI is not
- Verify all facts, citations, and numbers before any AI output leaves your hands
- Review outputs for bias before they're used in decisions affecting people
- Disclose AI involvement whenever the recipient would consider it material to their trust
- Use enterprise tiers (ChatGPT Team, Claude for Work, Copilot for Microsoft 365) for sensitive work
- Document which tools your team uses and how outputs are reviewed
- Run the five-point ethics checklist before submitting any AI-assisted work
- Silence from your organization isn't permission — establish your own usage norms
- Bias in = bias out: models trained on biased data produce biased outputs, and careless prompts make it worse
Key Takeaways
- Human accountability for AI output is non-negotiable — tools don't carry professional responsibility, people do
- Data privacy is your first ethical obligation: know what tier your tool operates at before you type anything sensitive
- Hallucinations are a structural feature of LLMs, not a bug to be patched — always verify high-stakes claims independently
- Bias enters AI systems at the training data level and surfaces in outputs affecting hiring, performance, and customer decisions
- Disclosure of AI involvement is increasingly expected and, in regulated contexts, legally required
- A short, repeatable personal checklist is more effective than a long policy document you'll never reread
- Enterprise-tier tools (ChatGPT Team, Claude for Work, Copilot) exist specifically to close the privacy gap for professional use
A lawyer submits a brief that contains AI-generated case citations without verifying them. The citations turn out to be fabricated. Who bears professional responsibility?
You're preparing a client-facing strategic analysis that used Claude to draft three sections. The client is paying for original consulting work. What is the most professionally sound approach?
Which of the following inputs is safest to use in a free consumer-tier AI tool like the standard ChatGPT free plan?
An AI tool trained primarily on historical hiring data recommends shortlisting fewer candidates from a particular university. What is the most likely cause?
Your organization has no formal AI usage policy. A colleague says, 'There's no rule against it, so anything goes.' What is the most accurate response?
