
Avoid being biased about bias




At ChangeKind we teach practical, ethical AI and data literacy so people lead with confidence and competence, not assumptions. During a recent enterprise-wide Copilot Chat rollout we noticed a pattern. We fixate on bias in historical data while missing a far more common source of distortion: the unconscious bias encoded in our very own prompts.


The larger problem beyond historical data


Most conversations about AI bias default to datasets: skewed samples, legacy discrimination, label errors. Those are real and important. But in practice, when humans interact with GenAI, the immediate and repeated source of distortion is the prompt.


A prompt carries your frame, priorities, blind spots, and unstated constraints. When those are biased, the model reliably amplifies them. That’s how a harmless-looking instruction can produce exclusionary, stereotyped, or risky outcomes faster than you can audit training data.


The anatomy of a biased prompt


A biased prompt isn’t necessarily rude or explicit. It tends to share a few predictable elements:

  • Unstated assumptions: Missing qualifiers that force the model to fill gaps with stereotypes or dominant-culture norms.

  • Leading language: Words that steer the model toward a preferred answer (e.g., “obviously”, “competent”, “typical”).

  • Narrow scope: Instructions that limit perspectives (demographics, geographies, languages) without reason.

  • Missing counterfactuals: No instruction to check alternative viewpoints, edge cases, or fairness constraints.

  • Overreliance on defaults: Assuming the model’s first output is neutral and sufficient.

  • Single-metric focus: Optimising for speed or cost alone, not proportionality, safety, or inclusivity.


These elements combine to produce outputs that look precise but are brittle and exclusionary.
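To make these elements easier to catch in practice, here is a minimal sketch of a heuristic prompt check in Python. It is illustrative only: the keyword lists and the lint_prompt helper are our own assumptions for this post, not a vetted taxonomy or a feature of Copilot Chat or any other tool.

# Illustrative sketch: rough keyword heuristics for spotting bias signals in a prompt.
# The word lists below are assumptions for demonstration, not a validated lexicon.

LEADING_WORDS = {"obviously", "usual", "typical", "normal", "standard"}
COUNTERFACTUAL_CUES = {"counterexample", "edge case", "alternative", "disconfirming"}
FAIRNESS_CUES = {"fairness", "inclusive", "inclusivity", "bias", "diverse"}

def lint_prompt(prompt: str) -> list[str]:
    """Return plain-language warnings about possible bias signals in a prompt."""
    text = prompt.lower()
    warnings = []
    leading = sorted(word for word in LEADING_WORDS if word in text)
    if leading:
        warnings.append("Leading language detected: " + ", ".join(leading))
    if not any(cue in text for cue in COUNTERFACTUAL_CUES):
        warnings.append("No request for counterexamples, edge cases, or alternatives.")
    if not any(cue in text for cue in FAIRNESS_CUES):
        warnings.append("No explicit fairness or inclusivity criteria.")
    return warnings

# Example: the biased sales-role prompt from the next section trips all three checks.
for warning in lint_prompt(
    "List the top candidates for this sales role; prioritise those with "
    "the usual experience and leadership style common in our industry."
):
    print("-", warning)

A check like this is no substitute for human review, but it makes the invisible defaults in a prompt something you can point at and discuss.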


Example: biased prompt versus unbiased prompt


Below is a concise example that shows how small changes shift outcomes.


Biased prompt


“List the top candidates for this sales role; prioritise those with the usual experience and leadership style common in our industry.”


What makes this prompt biased?


“Usual” and “common” invoke the incumbent profile; “prioritise” comes with no fairness constraints; and the reference to a leadership style “common in our industry” favours the dominant group’s norms.


Unbiased prompt


“List top candidates for this sales role with a one-paragraph justification for each. Include candidates with non-traditional but relevant experience, note any potential biases in selection criteria, and provide at least two interview questions that test inclusive leadership competencies.”


Why is this prompt better?


The second prompt specifies inclusivity, asks for explicit bias checks, and forces alternative evidence. That changes what the model surfaces and how it reasons.
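One practical way to apply this shift consistently is to scaffold any base request with the same kinds of clauses before it is sent. The sketch below is our own illustration, not part of any product: the add_bias_checks helper and its wording are assumptions you would adapt to your task.

# Illustrative sketch: wrap a base request with inclusivity and bias-check clauses,
# mirroring what the unbiased prompt above adds by hand.

def add_bias_checks(base_task: str) -> str:
    """Append bias-check and inclusivity instructions to a base request."""
    clauses = [
        "Give a one-paragraph justification for each item.",
        "Include options with non-traditional but relevant experience or evidence.",
        "Note any potential biases in the selection criteria you used.",
        "State your assumptions, confidence, and any limitations in the available information.",
    ]
    return base_task.rstrip(". ") + ". " + " ".join(clauses)

print(add_bias_checks("List top candidates for this sales role"))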


Practical checklist to spot and fix biased prompts


Use this short checklist every time you draft instructions for Copilot Chat or any generative agent:

  1. State scope and exceptions: Define populations, geographies, and cases you want considered.

  2. Require disconfirming evidence: Ask the model to list counterexamples or edge cases.

  3. Ask for transparency: Request assumptions, confidence, and data limitations in the output.

  4. Include fairness criteria: Define what fairness or inclusivity means for this task.

  5. Produce audit artifacts: Ask for the reasoning steps, key terms used, and sources applied.

  6. Iterate prompts: Run at least two prompt variants and compare outputs for drift or stereotyping; a brief sketch of this step follows below.
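Step 6 is the easiest to make concrete. Below is a minimal sketch, assuming you already have a function that sends a prompt to your organisation’s Copilot Chat or other GenAI client and returns the reply text; the flag words are illustrative, not a validated lexicon.

# Illustrative sketch: run two prompt variants through the same model and compare
# the replies. Pass in whatever client function your organisation already uses.
from difflib import SequenceMatcher
from typing import Callable

FLAG_WORDS = {"usual", "typical", "culture fit", "natural leader"}

def compare_variants(prompt_a: str, prompt_b: str,
                     call_model: Callable[[str], str]) -> dict:
    """Compare two prompt variants' outputs for drift and stereotyped language."""
    out_a, out_b = call_model(prompt_a), call_model(prompt_b)
    return {
        "similarity": SequenceMatcher(None, out_a, out_b).ratio(),  # crude drift signal
        "flags_a": sorted(w for w in FLAG_WORDS if w in out_a.lower()),
        "flags_b": sorted(w for w in FLAG_WORDS if w in out_b.lower()),
    }

A low similarity score, or flag words appearing in only one variant, is a cue to review both prompts before trusting either output.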


Closing prescription


Bias in AI is not just a dataset problem. It’s a human problem that shows up first in the way we ask. If you’re rolling out Copilot Chat or any GenAI tool, build prompt hygiene into governance: train people to write inclusive prompts, require bias-check prompts as part of workflows, and treat the prompt like a design artifact that must be reviewed.


That’s the quickest, most practical way to reduce harm and improve outcomes, and it’s exactly the kind of capability we build at ChangeKind.


 
 
 
