When AI Steers Systems You Can't Avoid
core-model | 2026-04-23 | economyforeveryone
The physical-world AI problem starts when routing, pacing, access, or infrastructure control can act first while the affected person has little practical recourse.
One small action: Pick one physical system you rely on and ask who holds the logs, who can override it, and what you can do if it gets something important wrong.
A warehouse worker gets a route she did not choose.
Her handheld tells her where to go next, how fast to move, and which task takes priority. The pace changes before she can ask why.
Across town, a resident gets home to a building door that will not open because an access rule has fired. On another street, a rider sits in a vehicle that keeps moving past the moment when a human driver would have hesitated.
In each case, the important shift is the same: AI is no longer just recommending. It is steering the system that ordinary people have to live inside.
That’s what makes this case different from a lot of knowledge-work AI discussions.
The issue isn’t whether the model sounds impressive; it’s whether the system can act first while the person affected has almost no practical path to contest it.
What’s happening
AI is moving from analysis into systems that can take action across logistics, transport, utilities, and building systems.
That means the decision is no longer just advice on a dashboard. It is the route, the pace, the dispatch, the access rule, or the load adjustment itself.
The human may still exist in the loop on paper.
But if the person in the loop has no time, no records, no authority, or no safe way to override, the system is effectively in charge.
That is the shift to watch: recommendation becomes system behavior, and nominal oversight starts to look a lot weaker. Most people will never describe it that way, of course. They will describe it as the door not opening, the route making no sense, or the vehicle doing something unsettling before anyone can explain it.
Why it’s happening
The basic shift is this: it got cheaper to let the system run, and contestability didn’t keep up.
The pitch is always faster, cheaper, more reliable. That’s usually accurate - for the operator. The worker gets the route. The operator gets the margin.
Once an operator can automate the route, pace, or access decision, the burden shifts to whoever is downstream from it. The operator holds the logs. The affected person gets a notification, a denial, or nothing.
That is why this case sits downstream from surveillance and coercion. Monitoring is one form of control. Taking action in the physical world is the next step.
And the stakes rise because the consequence window is often short. If the route is bad, if the access is blocked, if the pacing is unsafe, or if the vehicle acts unpredictably, the person affected may not have time to navigate an appeal at all.
Why this matters
For most people, these systems are harder to leave than an app or a feed.
You may not be able to choose a different grid operator. You may not be able to opt out of AV traffic on your street. You may not be able to escape building automation or employer routing systems without major cost.
When exit is weak, the governance bar has to go up.
It’s also not random who absorbs the cost when these systems get something wrong. Workers with the least legal protection, tenants in affordable housing, riders in low-income corridors - these are the people most dependent on these systems and least positioned to contest them.
That’s what makes this a civic and systemic AI story. These aren’t only workplace tools or consumer features. They’re systems that shape basic conditions of movement, access, safety, and daily life.
The deeper issue is asymmetry
The operator has the logs.
The affected person has the consequence.
That asymmetry is the core problem. If the system acts first and the institution controls the records, then accountability becomes something the public or the worker has to fight to obtain after the fact.
That is too late for a lot of physical harms.
What good looks like
If a system can route you, pace you, lock you out, or expose you to danger, then notice, records, override, fail-safe behavior, and liability can’t be optional.
What good looks like is simple enough to name:
- the affected person can find out what happened
- the operator has to preserve the relevant logs
- a qualified human can override the system in time
- the system has a safe fallback when it is uncertain or fails
- liability is assigned clearly when harm occurs
The cases where this has worked - Seattle’s gig worker protections, the UK’s AV liability law, Denmark’s grid AI - all share the same pattern: a floor built before deployment scaled, not an investigation after something went wrong.
If those conditions are missing, the system is asking for trust it hasn’t earned.
What to do
Ask three blunt questions about any AI-controlled physical system:
- Who holds the logs?
- Who can override the system in time?
- Who is legally responsible if it causes harm?
If none of those answers are clear, the system is asking the public for a lot of trust without giving much back.
How to talk about it
“The issue isn’t just that AI can recommend. It’s that it can now steer systems people can’t realistically avoid.”
Or:
“If the system can route you, pace you, lock you out, or put you in danger, then notice, records, override, and liability can’t be optional.”
One steady action to take this week
Pick one physical system you rely on and ask who holds the logs, who can override it, and what you can do if it gets something important wrong.
If nobody can answer that clearly before something goes wrong, they are asking you to live inside the experiment.
Action ladder
Short term
- Residents, workers, and riders: Ask who holds the logs, who can override the system in time, and who is liable if it causes harm.
- Journalists and community watchdogs: Treat routing, pacing, access, and dispatch systems like real governance systems, not just background tech.
Medium term
- Cities, employers, and building operators: Require notice, log retention, override authority, and safe fallback behavior before scaling automated control systems.
- Workers and community groups: Push for override rights and records access in the systems you cannot realistically avoid.
Long term
- Policymakers and regulators: Build clear floors for inspectable logs, real contestability, fail-safe behavior, and assigned liability in AI-controlled physical infrastructure.
- Public institutions: Stop approving systems that can act first while leaving the people affected with no practical path to challenge what happened.