Human Command in Employment Decisions

Workplace | playbook | Updated 2026-03-14

Tags

ai, workplace, governance, human-command

Use this as a plain-language rider for procurement, HR policy, internal governance, or vendor contracts when AI tools affect worker opportunity, pay, scheduling, discipline, promotion, evaluation, or hiring.

This is a minimum floor, not a full governance program.

What problem this playbook solves

Employment systems are often low-exit systems: workers and applicants cannot always walk away cheaply, and a bad call can quickly damage income, reputation, or future access.

That means AI governance here cannot rely on “the market will sort it out.” It needs a due-process floor. This playbook also shows how a short-term safeguard becomes a durable rule: procurement language, HR policy, and audit rights are how a voluntary minimum floor turns into normal practice and, eventually, a binding baseline.

Failure pattern to prevent

  • the tool shapes a consequential decision
  • notice is vague or absent
  • reasons are too generic to use
  • logs exist internally but the affected person cannot access enough to challenge the result
  • the human reviewer rubber-stamps because they lack time, authority, or information

Minimum requirements

1. Notice

The organization must tell affected workers or applicants when AI is used in:

  • hiring or screening
  • performance evaluation
  • scheduling or work assignment
  • discipline or termination
  • promotion or compensation review

Notice must be timely, easy to find, and written in plain language.

2. Reason

The organization must be able to provide a plain-language explanation of how the tool affected a consequential decision.

This does not require publishing trade secrets. It does require enough information for a person to understand what happened and why.

3. Appeal

Workers and applicants must have a real path to challenge a consequential AI-mediated decision.

That path must include:

  • a named contact or office
  • a response timeline
  • human review by someone with authority to change the result
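
Where appeals are tracked in software, the three elements above map naturally onto a case record. The sketch below is a minimal example, assuming a Python-based tracking layer; the names (`Appeal`, `open_appeal`) and the 14-day deadline are illustrative assumptions, not requirements of this playbook.

```python
# Sketch of an appeal-tracking record and intake helper. All names and the
# default 14-day SLA are illustrative assumptions, not prescribed values.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Appeal:
    contested_decision_id: str        # identifier of the contested decision
    contact_office: str               # the named contact or office
    received: date                    # when the challenge was filed
    response_due: date                # the committed response timeline
    reviewer_id: str                  # the human reviewer assigned
    reviewer_can_change_result: bool  # must be True for the review to be real


def open_appeal(contested_decision_id: str, contact_office: str,
                reviewer_id: str, sla_days: int = 14) -> Appeal:
    """Open an appeal with an explicit response deadline."""
    today = date.today()
    return Appeal(
        contested_decision_id=contested_decision_id,
        contact_office=contact_office,
        received=today,
        response_due=today + timedelta(days=sla_days),
        reviewer_id=reviewer_id,
        reviewer_can_change_result=True,
    )
```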

4. Records

The organization must keep records sufficient to reconstruct:

  • when the tool was used
  • what decision it affected
  • what inputs were relied on
  • what output or score was produced
  • whether a human overrode, affirmed, or escalated the result

Logs must be retained long enough for audit, appeal, and investigation. The affected person must be able to access enough of the record to use the appeal path meaningfully.
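
Where record-keeping is implemented in code, each consequential use of the tool can be captured as one structured entry covering the five items above. A minimal sketch follows, assuming a Python logging layer; the field names and enum values are illustrative, not a prescribed schema, and should map to whatever the vendor or HRIS actually emits.

```python
# Sketch of a per-decision log record. All names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class DecisionType(Enum):
    HIRING_SCREENING = "hiring_screening"
    PERFORMANCE_EVALUATION = "performance_evaluation"
    SCHEDULING_ASSIGNMENT = "scheduling_assignment"
    DISCIPLINE_TERMINATION = "discipline_termination"
    PROMOTION_COMPENSATION = "promotion_compensation"


class HumanAction(Enum):
    AFFIRMED = "affirmed"
    OVERRODE = "overrode"
    ESCALATED = "escalated"


@dataclass
class DecisionRecord:
    record_id: str               # stable id so an appeal can reference it
    tool_name: str               # which AI tool was used
    tool_version: str            # version identifier, for update history
    decision_type: DecisionType  # what decision it affected
    inputs_summary: dict         # what inputs were relied on
    output: str                  # what output or score was produced
    human_action: HumanAction    # whether a human overrode, affirmed, or escalated
    reviewer_id: str             # who took that action
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )                            # when the tool was used
```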

5. Override

Qualified humans must be able to stop, reverse, or escalate consequential outputs.

No one should be penalized for making a good-faith override where the tool appears wrong, unsafe, discriminatory, or out of scope.

6. Audit

The organization must make high-stakes systems auditable.

That means:

  • internal audit access
  • access for an independent reviewer, regulator, or authorized worker representative where applicable
  • documentation of known limits, failure modes, and update history
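
The documentation item can be kept as a structured, versioned record alongside the tool, in the spirit of a model card. A minimal sketch with illustrative names, to be aligned with whatever audit framework applies:

```python
# Sketch of the documentation an auditable high-stakes system should carry.
# Field names are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass, field


@dataclass
class UpdateEntry:
    version: str  # vendor or internal version identifier
    date: str     # ISO date the update took effect
    summary: str  # what changed and how it could affect decisions


@dataclass
class SystemDocumentation:
    system_name: str
    approved_scope: str       # decisions the tool is approved for
    known_limits: list[str]   # documented limits and confidence caveats
    failure_modes: list[str]  # known failure modes and bias risks
    update_history: list[UpdateEntry] = field(default_factory=list)
```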

7. Retaliation protection

Workers and applicants must be able to question, challenge, or report AI-mediated harms without retaliation.

Enforcement

If the system fails these requirements, the organization must:

  1. pause or narrow use of the tool
  2. review affected decisions
  3. notify affected people where harm may have occurred
  4. remediate the policy, workflow, or vendor configuration before reuse

Repeated failure should trigger contract review, narrowing of use, or termination.

Questions to ask before buying or using a tool

The sections above define the rights floor in practice. The questions below are for buyers, policy owners, and review teams deciding whether a tool should be purchased, renewed, or kept in scope.

Before purchase, rollout, or renewal, ask:

  1. In what decisions will this tool be used?
  2. What records are produced automatically, and what records must the buyer maintain?
  3. How can a human override or disable the system?
  4. What known failure modes, bias risks, or confidence limits exist?
  5. How are model updates documented?
  6. What information can be shared with affected workers or applicants after a contested decision?
  7. How portable are logs, workflows, and policy settings if the buyer switches systems?
  8. Who owns remediation if the tool contributes to wrongful hiring, evaluation, or discipline outcomes?

Minimal stance

This is not anti-AI.

It is a minimum rule for keeping speed from replacing accountability and for keeping employment decisions contestable when exit is weak.
