
Ethical Use of AI at Work: What You Should and Shouldn't Do

AI tools are now embedded in everyday professional work — drafting emails in ChatGPT, analyzing data with Gemini, generating visuals in Midjourney, writing code with GitHub Copilot. Most professionals using these tools have received zero formal guidance on where the ethical lines are. That gap creates real risk: reputational damage, legal liability, breached confidentiality, and biased decisions that affect real people. This lesson gives you a clear, practical framework for what responsible AI use actually looks like — not abstract principles, but specific behaviors you can apply starting today.

7 Things You Need to Know Before We Go Further

  1. AI tools can expose confidential data — anything you type into ChatGPT or Claude may be used for model training unless you explicitly disable that setting or use an enterprise plan.
  2. AI outputs are not neutral — they reflect the biases present in training data, which skews toward English-language, Western, and majority-group perspectives.
  3. You are legally responsible for AI-generated content you publish or act on — the AI is not a legal entity and carries no liability.
  4. Copyright on AI-generated content is unsettled — the U.S. Copyright Office and courts have held that purely AI-generated work is not copyrightable, and rules in the EU and elsewhere are still taking shape.
  5. Disclosure norms are emerging fast — some employers, publishers, and regulators now require you to declare when AI was used in producing work.
  6. Automation bias is a documented cognitive trap — people systematically over-trust AI outputs, even when those outputs are factually wrong.
  7. The EU AI Act (entering into force in 2024, with obligations phasing in through 2026) classifies some workplace AI applications as 'high-risk,' triggering legal obligations for the companies using them.

Data Privacy: The Risk You're Probably Already Taking

When you paste a client contract into ChatGPT to get a summary, you've just sent confidential business information to a third-party server. OpenAI's default settings for free and Plus accounts use conversations to improve future models. Claude by Anthropic has similar defaults. This isn't a hypothetical risk — Samsung engineers famously leaked proprietary semiconductor code by pasting it into ChatGPT in 2023, prompting the company to ban the tool internally. The fix isn't to stop using AI; it's to know which data categories should never enter a consumer AI product.

Enterprise plans change this equation significantly. ChatGPT Team and Enterprise, Claude for Enterprise, and Microsoft Copilot for Microsoft 365 all offer contractual guarantees that your data won't be used for training. If your organization hasn't purchased one of these plans, treat every AI prompt as a potentially public document. That framing sounds extreme, but it's the safest default. The practical rule: strip identifying details before prompting, or use only anonymized, aggregated, or publicly available information.

  • Never input: client names, contract terms, employee personal data, financial projections, health information, or unreleased product details into consumer AI tools.
  • Always check: whether your company has an enterprise AI agreement before assuming your inputs are private.
  • Safe to input: publicly available information, anonymized scenarios, your own writing for editing, general industry questions.
  • When in doubt: replace real names and figures with placeholders (e.g., 'Client A', '$X million') before prompting; a minimal scripted approach is sketched just below.
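To make the placeholder habit repeatable, here is a minimal sketch in Python of a pre-prompt redaction pass. It is an illustration, not a vetted privacy tool: the term list, the patterns, and the placeholder labels are assumptions you would adapt to your own clients and data, and simple pattern matching will not catch every identifier.

```python
import re

# Hypothetical mapping of sensitive terms to neutral placeholders.
# In practice this list would come from your own client and project records.
SENSITIVE_TERMS = {
    "Acme Corporation": "Client A",
    "Jane Doe": "Employee 1",
}

# Rough pattern for dollar amounts such as "$4.2 million" or "$750,000".
MONEY_PATTERN = re.compile(
    r"\$\s?\d[\d,\.]*(\s*(million|billion|thousand))?", re.IGNORECASE
)


def redact(prompt: str) -> str:
    """Replace known sensitive terms and dollar figures with placeholders
    before the text is pasted into a consumer AI tool."""
    for term, placeholder in SENSITIVE_TERMS.items():
        prompt = prompt.replace(term, placeholder)
    return MONEY_PATTERN.sub("$X", prompt)


if __name__ == "__main__":
    raw = "Summarize the renewal terms: Acme Corporation owes $4.2 million by Q3."
    print(redact(raw))  # Summarize the renewal terms: Client A owes $X by Q3.
```

Even a rough pass like this makes the 'Client A' and '$X million' habit automatic; the point is to strip identifiers before the text ever leaves your machine rather than relying on the vendor's settings.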

Quick Privacy Check Before Every Prompt

Ask yourself: 'Would I be comfortable if my client, employer, or regulator could read this exact prompt?' If the answer is no, anonymize the sensitive details before you paste anything. This 5-second habit prevents the majority of accidental data exposure incidents.

Data Risk Reference: What's Safe to Input vs. What Isn't

| Data Type | Consumer AI (e.g., free ChatGPT) | Enterprise AI (e.g., Copilot for Microsoft 365) | Recommended Action |
| --- | --- | --- | --- |
| Client names & contact details | ❌ High risk | ⚠️ Check contract | Anonymize before prompting |
| Internal financial data | ❌ High risk | ⚠️ Check contract | Use aggregate or dummy figures |
| Employee personal data | ❌ High risk | ⚠️ GDPR/CCPA applies | Never input; use HR-approved tools only |
| Unreleased product info | ❌ High risk | ⚠️ Check NDA scope | Treat as trade secret; avoid AI entirely |
| Health / medical records | ❌ High risk | ❌ HIPAA risk | Never input into general AI tools |
| Published industry reports | ✅ Safe | ✅ Safe | Use freely |
| Your own draft documents | ✅ Generally safe | ✅ Safe | Remove any embedded client data first |
| Anonymized case scenarios | ✅ Safe | ✅ Safe | Good practice for all AI use |

AI Input Risk Matrix — use this before every work-related AI session

Accuracy and Verification: AI Lies Confidently

AI language models hallucinate — they generate plausible-sounding but factually incorrect information — and they do it without signaling any uncertainty. ChatGPT has cited fake legal cases, fabricated academic papers with real-sounding authors, and invented statistics that are entirely plausible but completely wrong. A New York lawyer was sanctioned in 2023 after submitting an AI-generated brief containing six non-existent case citations. The model didn't malfunction; it functioned exactly as designed — predicting the next most likely token, not retrieving verified facts.

The practical implication is straightforward: AI output is a first draft, not a final source. Every factual claim, statistic, citation, or legal reference that comes from an AI tool must be independently verified before you act on it or share it. Perplexity AI reduces this problem by grounding answers in cited web sources, but even then, the citations can be misrepresented. Tools like ChatGPT with web browsing enabled are more reliable for recent facts than the base model, but verification remains your responsibility. Build verification into your workflow, not as an afterthought; a lightweight flagging sketch follows the numbered list below.

  1. Treat all AI-generated statistics as unverified until you've found the primary source.
  2. Never cite an AI-generated reference in a client deliverable, legal document, or published article without locating the original source yourself.
  3. Use Perplexity AI or ChatGPT with web browsing for fact-sensitive queries — but still check the linked sources directly.
  4. When an AI answer sounds surprisingly compelling, apply extra skepticism — fluency is not accuracy.
  5. For regulated industries (legal, medical, financial), assume AI output requires professional review before any use.
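One way to make steps 1 and 2 less dependent on memory is to scan a draft for the kinds of claims that most often need checking. The sketch below is a rough, assumption-laden illustration: it flags percentages, years, dollar figures, and citation-style phrases with simple patterns, and it cannot tell true claims from false ones; a human still has to locate the primary source.

```python
import re

# Assumed patterns for claims that usually warrant verification.
CLAIM_PATTERNS = {
    "percentage": re.compile(r"\b\d{1,3}(\.\d+)?\s?%"),
    "year": re.compile(r"\b(19|20)\d{2}\b"),
    "dollar figure": re.compile(r"\$\s?\d[\d,\.]*"),
    "citation-style phrase": re.compile(
        r"\b(according to|study|survey)\b|et al\.|\bv\.", re.IGNORECASE
    ),
}


def flag_claims(text: str) -> list[tuple[str, str]]:
    """Return (claim type, sentence) pairs for sentences containing
    statistics, years, figures, or citation-style language."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for label, pattern in CLAIM_PATTERNS.items():
            if pattern.search(sentence):
                flagged.append((label, sentence.strip()))
                break  # one flag per sentence is enough to prompt a check
    return flagged


if __name__ == "__main__":
    draft = ("According to a 2024 survey, 65% of organizations use generative AI. "
             "Adoption is growing fastest in marketing teams.")
    for label, sentence in flag_claims(draft):
        print(f"[verify: {label}] {sentence}")
```

A flagged sentence is not necessarily wrong; it is simply one you have not yet traced to a source, which is exactly the distinction the verification table below formalizes.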

Verification Standards by Output Type

| AI Output Type | Hallucination Risk | Verification Required | Verification Method |
| --- | --- | --- | --- |
| Statistics and data points | High | Always | Find the primary source (report, database, study) |
| Legal citations and case names | Very High | Always | Check Westlaw, LexisNexis, or official court records |
| Medical or clinical information | High | Always | Cross-reference peer-reviewed literature or clinical guidelines |
| Summaries of documents you provided | Low-Medium | Spot-check | Re-read the source document for key claims |
| Creative or persuasive writing | N/A | Judgment call | Review for tone, accuracy of any embedded facts |
| Code generation (GitHub Copilot) | Medium | Test and review | Run tests; check for security vulnerabilities |
| Historical facts (well before the training cutoff) | Medium | Spot-check | Verify key claims via encyclopedic or academic sources |
| Current events (post-training cutoff) | Very High | Always | Use web-enabled tools and verify via news sources |

Verification Standards Matrix — apply before using AI output in professional contexts

Training Cutoffs Matter More Than You Think

Widely used GPT-4 variants have training cutoffs in late 2023, and Claude 3.5 Sonnet's cutoff is April 2024; check your tool's documentation for the exact date, since it varies by model version. Any question involving recent regulations, market data, personnel changes, or current events falls outside what these models reliably know. Asking ChatGPT about a regulation that changed six months ago may produce a confident, outdated, and professionally damaging answer.

What Good AI Verification Looks Like in Practice

Scenario: Verifying an AI-generated market statistic

Prompt

What percentage of Fortune 500 companies are currently using generative AI in their operations?

AI Response

According to a 2024 McKinsey Global Survey, approximately 65% of organizations report regularly using generative AI — nearly double the figure from the previous year. Fortune 500 adoption specifically tracks close to this figure, with many companies piloting tools across marketing, legal, and software development functions.

Verification Step

The user then searches McKinsey's website directly, locates the 2024 'State of AI' report, confirms the 65% figure applies to 'organizations surveyed' (not specifically Fortune 500), and adjusts their language accordingly before including it in a client presentation. The stat is real; the framing the AI used was subtly misleading.

Transparency and Disclosure: The Honesty Layer

Using AI to help you work faster is not inherently dishonest. Passing off AI-generated work as fully your own — when the context implies otherwise — can be. The distinction matters because professional relationships run on trust, and undisclosed AI use can damage that trust if discovered. A consultant who submits a 40-page strategy report generated almost entirely by ChatGPT, billed at senior consultant rates, is misrepresenting the value being delivered. The same consultant using ChatGPT to accelerate research and structuring, while applying genuine expertise to the analysis, is using AI responsibly.

Disclosure norms vary by context. Academic institutions typically require explicit declaration of AI assistance. Many publishers — including the Financial Times, Nature, and major legal journals — have formal AI disclosure policies. Some employers now include AI use clauses in their work product policies. Even where no formal policy exists, proactive transparency is the professionally safer choice. Saying 'I used Claude to draft this initial structure, which I then reviewed and revised' demonstrates competence and honesty simultaneously. It positions you as someone who uses tools skillfully, not someone who outsources judgment.

| Context | Disclosure Expectation | Recommended Practice |
| --- | --- | --- |
| Academic submissions | Mandatory at most institutions | Declare AI tools used; check institution policy |
| Client deliverables | Varies; increasingly expected | Disclose if AI generated substantial content or analysis |
| Internal reports and memos | Low formal requirement | Mention AI assistance if it materially shaped the output |
| Published articles or thought leadership | Required by many publishers | Check publication's AI policy before submitting |
| Code submitted in professional projects | Emerging expectation | Disclose Copilot or similar tools used in production code |
| Job applications and cover letters | No formal rule; ethically gray | Avoid submitting purely AI-generated applications |
| Marketing copy | Minimal formal requirement | Disclose if required by advertising standards in your sector |
| Legal documents and filings | Court-specific rules emerging | Check jurisdiction rules; disclose AI assistance to supervising attorney |

AI Disclosure Expectations by Professional Context

The 'No Policy Means No Problem' Trap

Absence of a formal AI disclosure policy at your organization does not mean undisclosed AI use is risk-free. If a client or employer later discovers that a deliverable was substantially AI-generated without disclosure, the reputational and contractual consequences fall entirely on you. Policies are catching up to practice — often with retroactive effect on professional standards.

Quick-Start Task: Audit Your Current AI Use

Personal AI Ethics Audit

Goal: Produce a one-page personal AI use snapshot that identifies your current exposure to data privacy risk and clarifies your disclosure obligations.

  1. Open a blank document and list every AI tool you've used at work in the past 30 days (e.g., ChatGPT, Copilot, Grammarly, Notion AI, Midjourney).
  2. For each tool, note whether your organization has an enterprise agreement or whether you're using a personal/consumer account.
  3. Recall one specific prompt you sent that contained client, employee, or company-specific information — write down what data was included.
  4. Using the Data Risk Matrix in this lesson, classify that data type and assess whether the input was appropriate.
  5. Check whether your organization has a published AI use policy (search your intranet, ask your manager, or check your employment contract addenda).
  6. Write one sentence describing how you will handle disclosure of AI use in your next client or stakeholder deliverable.

Part 1 Cheat Sheet: Core Rules for Ethical AI Use at Work

  • Consumer AI tools (free ChatGPT, Claude.ai) may use your inputs for training — never paste confidential client, employee, or financial data.
  • Enterprise AI plans (ChatGPT Enterprise, Copilot M365, Claude for Enterprise) offer data protection guarantees — verify your organization has one before assuming privacy.
  • AI hallucinations are systematic, not rare — every factual claim from an AI tool needs independent verification before professional use.
  • Training cutoffs mean AI models don't know recent events — recent GPT-4 variants cut off in late 2023, and Claude 3.5 Sonnet cuts off in April 2024.
  • You hold legal liability for AI-generated content you publish or act on — the AI carries none.
  • Disclosure expectations vary by context but are tightening — academic, legal, and publishing contexts often now require explicit AI disclosure.
  • Automation bias causes people to over-trust AI outputs — stay actively skeptical, especially when answers sound fluent and authoritative.
  • Anonymize data before prompting: replace names, figures, and identifiers with placeholders to reduce privacy risk without sacrificing usefulness.
  • The EU AI Act introduces legal obligations for high-risk AI applications in the workplace — your company may already be affected.

Key Takeaways from Part 1

  1. Privacy risk starts the moment you type into a consumer AI tool — the prompt itself is the exposure, not just the response.
  2. Verification is a professional obligation, not optional due diligence — AI fluency and AI accuracy are completely different things.
  3. Disclosure is increasingly a formal requirement in academic, legal, and publishing contexts — and an ethical expectation everywhere else.
  4. Your personal liability for AI-generated work is unchanged by the fact that AI produced it — you signed off on it.
  5. Enterprise AI agreements are the primary mechanism for managing organizational data risk — know whether your company has one.

Bias in AI: What It Looks Like at Work

Bias in AI isn't abstract — it shows up in hiring tools that downrank female candidates, credit models that penalize certain zip codes, and content generators that default to stereotyped imagery. These aren't edge cases. Amazon scrapped an internal recruiting tool in 2018 after discovering it systematically penalized resumes containing the word 'women's.' The model had trained on a decade of hiring data from a male-dominated industry. The AI learned the pattern, then amplified it. Understanding where bias enters the pipeline helps you catch it before it causes real damage.

Where Bias Enters AI Systems

  • Training data bias: The dataset reflects historical inequalities — the model learns and perpetuates them
  • Label bias: Human annotators bring their own assumptions when tagging data used to train models
  • Measurement bias: Proxies used to represent a concept (e.g., 'zip code' for creditworthiness) encode structural inequities
  • Deployment bias: A model trained on one population is applied to a different one with different characteristics
  • Feedback loop bias: User behavior shapes model outputs over time — if biased users interact more, the model drifts further

As a professional using AI tools, you're unlikely to be training models from scratch. But you are making decisions about which outputs to trust, which to act on, and which to pass upstream. That makes you a checkpoint in the bias pipeline. When ChatGPT or Gemini returns a recommendation, ask: what population was this likely optimized for? Does the output reflect a narrow worldview? If you're using AI to screen, rank, or evaluate people — candidates, customers, employees — the stakes of unchecked bias are legal, not just ethical.

Bias in High-Stakes Decisions

Never use AI-generated rankings or scores as the sole basis for decisions affecting people's employment, credit, housing, or healthcare. In many jurisdictions, this triggers legal obligations under anti-discrimination law (e.g., the EU AI Act classifies these as 'high-risk' AI use cases requiring human oversight and audit trails). Always apply human judgment as a final layer.

| Use Case | Bias Risk Level | What to Watch For | Mitigation |
| --- | --- | --- | --- |
| Resume screening with AI | High | Gender, name, or school prestige bias | Blind review + human final decision |
| Customer sentiment analysis | Medium | Dialect and cultural expression misread as negative | Test across demographic groups |
| AI-generated marketing copy | Medium | Stereotyped assumptions about audience | Review for implicit assumptions before publishing |
| Meeting summarization | Low | Speaker misattribution, dominant voice bias | Cross-check against your own notes |
| Code review with Copilot | Low | Suggestions reflect dominant coding cultures | Apply your own style guide and standards |

Bias risk levels by common workplace AI use case

Transparency and Disclosure at Work

Knowing when to disclose AI use is one of the murkiest areas of workplace ethics right now. Norms are forming in real time. Some organizations have explicit policies; most don't. The default principle is straightforward: if AI materially shaped the output someone else is evaluating or relying on, they deserve to know. A client reading your strategic analysis has a legitimate interest in knowing whether it was written by you or generated by Claude and lightly edited. A colleague using your research summary to make a business decision needs to know its source.

Disclosure doesn't mean apologizing. It means being accurate about provenance. 'I used ChatGPT to draft this and then reviewed and edited it' is a professional statement, not a confession. What erodes trust is the gap between what people assume and what actually happened. The higher the stakes — legal advice, medical guidance, financial projections, published content — the more explicit the disclosure should be. When in doubt, disclose. The professional cost of transparency is almost always lower than the reputational cost of concealment discovered later.

  1. Check your organization's AI use policy before deploying any tool on work tasks
  2. Disclose AI use to clients, managers, or collaborators when the output is being evaluated or acted upon
  3. Don't submit AI-generated work as entirely your own in contexts where original authorship is assumed (academic, legal, creative briefs)
  4. When AI assists with data analysis, note the tool and any known limitations in your methodology section
  5. In regulated industries (finance, healthcare, law), check whether AI-assisted outputs require specific disclaimers
  6. Keep records of significant AI interactions when working on high-stakes deliverables — these are your audit trail (a minimal logging sketch follows this list)
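Item 6 above is easier to sustain if the record-keeping is frictionless. Here is a minimal sketch of one way to do it: one JSON line per significant AI interaction, appended to a local file. The field names and the file path are assumptions; your team or compliance function may require a different format or storage location.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Assumed location for the audit log; adjust to wherever your team keeps records.
LOG_FILE = Path("ai_audit_log.jsonl")


def log_ai_interaction(tool: str, task: str, data_sensitivity: str,
                       reviewed_by: str, disclosed: bool) -> None:
    """Append one JSON line describing a significant AI interaction,
    so high-stakes deliverables carry a traceable record of AI involvement."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "task": task,
        "data_sensitivity": data_sensitivity,  # e.g. "public", "anonymized", "confidential"
        "reviewed_by": reviewed_by,
        "disclosed_to_recipient": disclosed,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    log_ai_interaction(
        tool="ChatGPT (enterprise tier)",
        task="First draft of Q3 market summary",
        data_sensitivity="anonymized",
        reviewed_by="A. Analyst",
        disclosed=True,
    )
```

A plain spreadsheet serves the same purpose; what matters is that the record exists before anyone asks for it.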
| Scenario | Disclosure Required? | Recommended Action |
| --- | --- | --- |
| Internal brainstorm doc drafted with ChatGPT | Usually no | Follow team norms; note if asked directly |
| Client deliverable substantially written by AI | Yes | State the tool used and your editorial role |
| AI-generated images in a client presentation | Yes | Label as AI-generated; confirm client accepts this |
| Code written with GitHub Copilot | Context-dependent | Check IP clauses in contracts; disclose in open-source projects |
| AI used to summarize research for your own use | No | No external disclosure needed |
| AI-assisted legal or compliance document | Yes | Flag for human expert review and note AI involvement |
| Social media post drafted by Notion AI | Platform-dependent | Check platform rules; brand voice review essential |

Disclosure decisions by workplace scenario

The Policy Gap Is Real

A 2023 survey by the Society for Human Resource Management found that fewer than 25% of organizations had a formal AI use policy for employees. That means most professionals are making judgment calls without institutional guidance. Until your organization catches up, use this rule: treat AI disclosure the same way you'd treat disclosure of any significant outside assistance on work product.

Asking AI to flag its own limitations

Prompt

I'm going to use this market analysis in a client presentation. Before I finalize it, tell me: what are the three most significant limitations or potential inaccuracies in what you've written? What should I verify independently?

AI Response

Three limitations worth flagging: 1) Market size figures are based on my training data (cutoff early 2024) — verify against current industry reports from IBISWorld or Statista. 2) The competitive landscape section reflects publicly available information only; recent M&A activity or private funding rounds may have changed it. 3) Regional breakdowns assume global trends apply uniformly — your specific geography may diverge significantly. I'd recommend verifying the top-line numbers and checking for any major industry news from the past six months before presenting.

Intellectual Property and Ownership

AI tools generate text, images, code, and data — and the ownership of that output is legally unsettled in most jurisdictions. The U.S. Copyright Office has stated that purely AI-generated content without meaningful human creative input is not eligible for copyright protection. That creates a real gap: if you publish AI-generated content, competitors can copy it freely. More practically, it means the value you create with AI tools is in your curation, editing, strategy, and judgment — the human layer — not in the raw output itself.

There's a second IP issue that runs in the opposite direction: what you feed into AI tools. When you paste a client contract, internal financial projections, or proprietary product specs into ChatGPT or Claude, you may be exposing confidential information to third-party servers. OpenAI's default settings for consumer accounts use conversations to train future models unless you opt out, and Anthropic and Google have their own data handling policies, with enterprise tiers offering stronger protections. The rule is simple: never put information into a consumer AI tool that you wouldn't be comfortable seeing in a data breach or a competitor's hands.

  • AI-generated content has weak or no copyright protection — your editorial additions are what you can protect
  • Check your vendor's data retention policy before inputting any confidential or client data
  • Enterprise tiers of ChatGPT (Team/Enterprise), Claude for Work, and Gemini for Workspace offer data isolation
  • Code generated by GitHub Copilot may contain fragments of open-source code — check licensing implications for commercial projects
  • Some jurisdictions (EU, UK) are actively legislating on AI-generated content ownership — monitor for updates
  • Your employment contract may assign AI-assisted work product to your employer — review it

Use Enterprise Tiers for Sensitive Work

If your organization processes client data, financial information, or anything under NDA, use enterprise-grade AI tools with data processing agreements in place. ChatGPT Team (roughly $25-30 per user per month, depending on billing) and Claude for Enterprise both offer contracts that prevent your data from being used in model training. Consumer free tiers provide no such guarantee.

| Information Type | Safe for Consumer AI Tools? | Safer Alternative |
| --- | --- | --- |
| Public market data or news | Yes | Any tier fine |
| Internal strategy documents | No | Enterprise tier with DPA, or keep offline |
| Client names and contact data | No | Anonymize before inputting, or use enterprise tier |
| Financial projections or M&A details | No | Enterprise tier only, or avoid AI entirely |
| Personal employee information | No | Check GDPR/local law; enterprise tier minimum |
| Product specs under NDA | No | Enterprise tier with legal review |
| Your own draft writing or ideas | Generally yes | Standard caution applies |

Data sensitivity guide for AI tool inputs

Audit Your Current AI Tool Usage

Goal: Identify gaps between your current AI tool usage and appropriate data handling practices, producing a concrete risk summary you can act on immediately.

  1. List every AI tool you've used for work in the past two weeks — include ChatGPT, Copilot, Grammarly, Notion AI, and any embedded AI features in tools like Salesforce or LinkedIn.
  2. For each tool, identify the most sensitive type of information you've entered — use the Data Sensitivity table above as your reference.
  3. Check whether each tool's default tier is consumer or enterprise — look at your account settings or your organization's software inventory.
  4. Flag any tool where you've entered Medium or High sensitivity data on a consumer tier — these are your risk items.
  5. For your top two risk items, look up the tool's data handling policy and note whether an enterprise tier is available and at what cost.
  6. Draft a one-paragraph summary of your findings to share with your manager or IT team — include which tools you're using, what data has been entered, and what policy or tier upgrade you recommend.

You now understand AI limitations and bias risks. This section closes the loop with the third pillar of ethical AI use at work: accountability. Who owns the output? Who gets blamed when it goes wrong? How do you build habits that protect you, your team, and your organization? These aren't philosophical questions — they have real answers, and ignoring them has real consequences. Professionals who use AI without accountability frameworks are one bad output away from a compliance issue, a client complaint, or a reputational hit.

Accountability: Who Owns AI Output at Work

When ChatGPT drafts a report and that report contains an error, the model doesn't get called into a meeting. You do. AI tools produce outputs — humans remain responsible for them. This principle is called human-in-the-loop accountability, and it applies whether you're using Copilot to write code, Gemini to summarize a contract, or Claude to draft a client email. The output carries your name the moment you send, publish, or act on it. No vendor terms of service shift that liability to the model provider.

Accountability also runs upward. If you're a manager whose team uses AI tools, you're responsible for establishing clear usage norms — even if your organization hasn't issued formal policy yet. Silence isn't permission, and 'the AI did it' is not a defensible position with clients, auditors, or regulators. Document which tools your team uses, for what tasks, and how outputs are reviewed before they leave the building. That documentation protects everyone.

  • You are the author of any AI-assisted output you submit or publish
  • Errors in AI output are your professional responsibility to catch
  • Managers inherit accountability for their team's AI usage patterns
  • Undisclosed AI use in regulated contexts (legal, medical, financial) creates compliance exposure
  • Audit trails matter — keep records of what tools generated what outputs for high-stakes work
  • Vendor terms do not indemnify you for misuse or negligent review

The Attribution Trap

Several high-profile cases involved lawyers submitting AI-generated briefs with fabricated case citations. The AI hallucinated the citations; the lawyers signed the documents. Courts sanctioned the lawyers — not OpenAI. Before any AI output leaves your hands in a professional or regulated context, verify every specific claim, name, number, and reference independently.

Disclosure: When to Tell People AI Was Involved

Disclosure norms are evolving fast, but a practical framework already exists. The core question is whether the person receiving your work would consider AI involvement material to their decision to trust or use it. A client paying for original strategic analysis has a different expectation than a colleague receiving a meeting summary. When in doubt, disclose — it costs you almost nothing and protects your credibility. Many organizations are now building disclosure requirements directly into project workflows and contracts.

| Scenario | Disclosure Required? | Recommended Action |
| --- | --- | --- |
| Internal meeting notes drafted with Notion AI | No | Optional note in footer |
| Client-facing report with AI-generated sections | Yes | State AI-assisted drafting in methodology |
| Marketing copy written by ChatGPT | Context-dependent | Follow platform/brand guidelines |
| Academic or certification submission | Yes — often mandatory | Check institution policy first |
| Legal document with AI-drafted clauses | Yes | Disclose to all parties; attorney review required |
| Job application materials | Context-dependent | Follow employer or platform rules |
| Code generated by GitHub Copilot | Check license terms | Review for open-source license conflicts |

AI Disclosure Decision Guide by Scenario

Disclosure ≠ Weakness

Disclosing AI involvement doesn't undermine your credibility — hiding it and getting caught does. Forward-thinking professionals treat AI disclosure the same way they treat citing sources: it signals rigor, not laziness. Clients and employers increasingly respect teams that have a clear, honest AI usage policy.

Building Your Personal AI Ethics Checklist

Policy documents are useful. Habits are better. The professionals who use AI most safely don't stop before each task to consult a rulebook — they've internalized a short mental checklist that runs automatically. The checklist below is designed to become second nature within two to three weeks of consistent use. It covers the four failure modes that cause the most real-world harm: data exposure, hallucination, bias, and undisclosed use.

| Checkpoint | Question to Ask | If the Answer Is No... |
| --- | --- | --- |
| Data Safety | Is this input free of personal, confidential, or proprietary data? | Anonymize or use a private/enterprise-tier tool |
| Accuracy | Have I verified key facts, figures, and citations independently? | Do not publish or submit — verify first |
| Bias Check | Have I reviewed output for unfair assumptions about people or groups? | Revise or regenerate with more explicit instructions |
| Disclosure | Have I disclosed AI involvement where the recipient would expect to know? | Add a disclosure note before sending |
| Ownership | Do I understand and accept responsibility for this output? | Do not send — escalate or review further |

Pre-Submission AI Ethics Checklist
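For readers who would rather run a script than pin a table, here is a rough sketch that encodes the five checkpoints as a single pre-submission gate. The checkpoint wording mirrors the table above; the function and the sample answers are illustrative assumptions, not an official tool.

```python
# The five checkpoints from the Pre-Submission AI Ethics Checklist above.
CHECKPOINTS = [
    ("Data Safety", "Is this input free of personal, confidential, or proprietary data?"),
    ("Accuracy", "Have I verified key facts, figures, and citations independently?"),
    ("Bias Check", "Have I reviewed output for unfair assumptions about people or groups?"),
    ("Disclosure", "Have I disclosed AI involvement where the recipient would expect to know?"),
    ("Ownership", "Do I understand and accept responsibility for this output?"),
]


def ready_to_submit(answers: dict[str, bool]) -> bool:
    """Return True only if every checkpoint is answered 'yes';
    otherwise print which checks failed."""
    failed = [name for name, _question in CHECKPOINTS if not answers.get(name, False)]
    for name in failed:
        print(f"Hold the deliverable: '{name}' check not satisfied.")
    return not failed


if __name__ == "__main__":
    draft_review = {
        "Data Safety": True,
        "Accuracy": False,  # statistics not yet traced to a primary source
        "Bias Check": True,
        "Disclosure": True,
        "Ownership": True,
    }
    if ready_to_submit(draft_review):
        print("All checks passed; the output can leave your hands.")
```

The habit described in the next tip matters more than the tooling; the script just makes the five questions harder to skip.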

Make It a Habit, Not a Hurdle

Pin the checklist above to your desktop or add it as a template in Notion. Run through it in under 60 seconds before any AI-assisted output leaves your hands. After two weeks, you won't need the list — the questions will fire automatically.

Requesting an Ethically Framed Output

Prompt

Draft a performance improvement plan for an underperforming team member. Do not include any assumptions about the employee's age, background, or personal circumstances. Focus only on observable behaviors and measurable outcomes. Flag any language that could be interpreted as discriminatory.

AI Response

Performance Improvement Plan — Behavioral Focus

Objective: Support [Employee Name] in meeting defined performance standards within 60 days.

Observed Behaviors (past 30 days):
  • Missed 3 of 5 weekly reporting deadlines
  • Deliverables reviewed contained an average of 4 factual errors per document
  • Two client escalations linked to delayed responses

Expected Standards:
  • 100% on-time submission of weekly reports
  • Deliverable error rate below 1 per document after peer review
  • Client response time within 24 business hours

Support Provided:
  • Weekly 1:1 check-ins with manager
  • Access to writing and time-management resources
  • Mid-point review at day 30

[No language flagged as potentially discriminatory — all criteria are behavior- and outcome-based.]

Build Your Personal AI Ethics Reference Card

Goal: Produce a personalized, one-page AI ethics reference card you can use immediately and update as your organization's policy evolves.

  1. Open a blank document in your preferred tool (Word, Notion, Google Docs).
  2. Create a header: 'My AI Ethics Reference Card' with today's date.
  3. List the three AI tools you use most frequently at work and note their data privacy tier (free consumer / enterprise / self-hosted).
  4. Copy the Pre-Submission AI Ethics Checklist table from this lesson into your document.
  5. Add a 'My Sensitive Data Rules' section and write 2-3 sentences defining what data you will never input into a consumer AI tool.
  6. Add a 'Disclosure Defaults' section: write one sentence each for internal use, client-facing use, and regulated/legal use, stating your default disclosure behavior.
  7. Save the document and pin or bookmark it somewhere you'll see it daily.

Ethical AI at Work — Quick Reference Cheat Sheet

  • Never input names, IDs, health data, financials, or trade secrets into consumer-tier AI tools
  • You are legally and professionally responsible for every output you submit — AI is not
  • Verify all facts, citations, and numbers before any AI output leaves your hands
  • Review outputs for bias before they're used in decisions affecting people
  • Disclose AI involvement whenever the recipient would consider it material to their trust
  • Use enterprise tiers (ChatGPT Team, Claude for Work, Copilot for Microsoft 365) for sensitive work
  • Document which tools your team uses and how outputs are reviewed
  • Run the five-point ethics checklist before submitting any AI-assisted work
  • Silence from your organization isn't permission — establish your own usage norms
  • Bias in = bias out: skewed training data and careless prompts both surface as discriminatory outputs

Key Takeaways

  1. Human accountability for AI output is non-negotiable — tools don't carry professional responsibility, people do
  2. Data privacy is your first ethical obligation: know what tier your tool operates at before you type anything sensitive
  3. Hallucinations are a structural feature of LLMs, not a bug to be patched — always verify high-stakes claims independently
  4. Bias enters AI systems at the training data level and surfaces in outputs affecting hiring, performance, and customer decisions
  5. Disclosure of AI involvement is increasingly expected and, in regulated contexts, legally required
  6. A short, repeatable personal checklist is more effective than a long policy document you'll never reread
  7. Enterprise-tier tools (ChatGPT Team, Claude for Work, Copilot) exist specifically to close the privacy gap for professional use

Knowledge Check

A lawyer submits a brief that contains AI-generated case citations without verifying them. The citations turn out to be fabricated. Who bears professional responsibility?

You're preparing a client-facing strategic analysis that used Claude to draft three sections. The client is paying for original consulting work. What is the most professionally sound approach?

Which of the following inputs is safest to use in a free consumer-tier AI tool like the standard ChatGPT free plan?

An AI tool trained primarily on historical hiring data recommends shortlisting fewer candidates from a particular university. What is the most likely cause?

Your organization has no formal AI usage policy. A colleague says, 'There's no rule against it, so anything goes.' What is the most accurate response?
