When Watching Becomes Control

core-model | 2026-04-21 | economyforeveryone

The surveillance problem isn't only privacy or accuracy. It's what happens when monitoring gets cheap and contesting the flag stays weak.

One small action: The next time a monitoring tool comes up, ask what happens after the alert and what the flagged person can actually do.

A rideshare driver gets a notification: account deactivated. No explanation. No real person to call. He tries anyway. Then emails. Then in-app messages. Weeks go by.

A few miles away, an alert fires in the middle of the night. Officers arrive faster than they would have for a 911 call. No confirmed crime. The people on the block aren’t charged. They are also never told they were flagged.

Two systems from two industries with the same basic structure:

  1. an automated flag
  2. nominal human review
  3. no functional appeal
  4. the friction cost absorbed by the person at the bottom

This isn’t mainly a story about punishment; it’s a story about pressure.

What’s happening

Surveillance has gotten cheap enough to run all the time. That changes what it does.

A system no longer needs to end in arrest, firing, or formal punishment every time to shape behavior. When monitoring becomes cheap enough to operate continuously, and too opaque to contest effectively, the result is coercion without punishment.

That’s why this post isn’t really about privacy in the narrow sense. Privacy matters, but the deeper issue is power:

  • who gets watched, scored, queried, or flagged?
  • who can challenge it?
  • who can realistically leave?

Why it’s happening

The shift is plain: monitoring got cheaper and adjudication didn’t.

When a system can watch, score, query, and flag at very low cost, it can operate at a scale where real hearings, real review, and real appeals would be expensive, slow, or politically inconvenient.

That means the real power moves upstream to the alert, query, score, deactivation, or dispatch. Not to the courtroom or the hearing.

And once that happens, a lot of the pressure lands before any formal punishment ever arrives.

That’s coercion without adjudication.

Accuracy isn’t the whole argument

Accuracy failures and rights failures are different problems. Both matter. Neither substitutes for the other.

A system can be inaccurate and harmful. It can also be relatively accurate and still be unacceptable if people can’t know they were flagged, can’t inspect the record, and can’t contest the action.

Institutions like to defend these systems on accuracy grounds alone because it sounds clean and technical. But a system that correctly identifies a worker, driver, tenant, or resident can still be abusive if the action lands without notice, explanation, or recourse. “It usually gets the right person” is not the same as due process.

That is why these cases are bigger than error rates. The real question is whether the system is allowed to watch, infer, and trigger consequences faster than the person affected can respond.

The deeper issue isn’t being watched. It’s being shaped.

The system doesn’t need to punish everyone. It only needs to make enough examples, and create enough uncertainty, that people start self-adjusting.

That’s why coercion is the right word here. The person changes behavior because the system can watch, infer, and act with weak notice and weak recourse.

Exit matters here too. If workers, drivers, immigrants, tenants, or residents can’t meaningfully escape the monitored system, then formal choice is fake and power concentrates.

Who benefits, and who carries the risk

Who benefits?

Institutions that can act faster, cover more ground, and lower the cost of watching benefit first.

Who carries the risk?

  • gig workers living one flag away from lost income
  • people in heavily monitored neighborhoods
  • workers under productivity surveillance
  • immigrants facing targeting systems with weak contest rights
  • communities with the least political voice and the least room to absorb extra friction

And once you picture the person at the bottom of that list, the whole thing gets less abstract. It is not “a surveillance system.” It is a driver trying to get a human being on the line before rent is due, or a family finding out after the fact that a system had already marked them as suspicious.

What good looks like

The better question is: what minimum floor should exist if a system can materially affect your income, movement, freedom, or legal exposure?

In this domain, notice and inspectability matter as much as appeal:

  • Use cases should be narrow and named.
  • High-stakes actions should require independent corroboration first.
  • Audit logs should be inspectable.
  • People should be told when they were flagged, in any case where notice would not defeat the purpose of the system.
  • If a system can trigger a life-changing action, people need records access, limits on how monitoring outputs can be used, and an appeal path that works in practice.

That’s the healthier standard: a system where the watching doesn’t outrun rights.

What to do

Ask one concrete question any time a monitoring or flagging system shows up:

What happens after the flag?

  • Who sees it?
  • What record gets kept?
  • Can I inspect it?
  • Can I challenge it?
  • Can anyone independent reverse it?

That question changes the conversation fast.

How to talk about it

The issue isn’t whether a system can watch. The issue is whether it can flag and act without giving normal people a real way to know, challenge, or escape it.

Or even shorter:

The problem isn’t just being watched. It’s being watched by systems that can shape your life without meaningful notice or appeal.

One steady action to take this week

The next time you hear about a monitoring tool at work, in a school, in housing, or in public safety, ask one follow-up question:

What happens after the alert, and what can the flagged person actually do?

Action ladder

Short term

  • Workers, residents, and parents: Ask what happens after the flag, what record gets kept, and what the flagged person can actually do.
  • Journalists and organizers: Save examples where a system flagged someone without clear notice, explanation, or a working appeal path.

Medium term

  • Schools, employers, housing providers, and local officials: Require notice, records access, and real reversal paths before deploying high-stakes monitoring systems.
  • Community groups: Push for narrow use, independent corroboration, and human review with actual authority rather than symbolic sign-off.

Long term

  • Policymakers and civil-liberties advocates: Build enforceable rights floors for monitoring systems that can shape income, mobility, legal exposure, or housing.
  • Institutions: Stop defending these systems on accuracy alone. If people cannot inspect, challenge, or escape them, the system is not governed well enough to use.
