
What Young People Should Build When the Job Ladder Is Moving

perspectives | 2026-05-12 | economyforeveryone

A lot of young people are not afraid of AI hype. They are afraid the door will narrow before they reach it. That fear is grounded. Here is what still makes sense to build.

Young people have every right to be angry.

A lot of the world we are handing them feels harder to enter, more expensive to live in, and less stable than the one we were promised. Housing is brutal. School is expensive. The entry-level ladder is wobbling. And now on top of all that, they are getting told some version of: “Just learn AI.”

I get why that lands badly. You are not crazy for feeling like the rules changed in the middle of the game.

What is happening

Some of the fear here is real. A lot of the old starter work is changing fast. The easy proofs of skill are weaker than they used to be. A polished essay, a clean deck, a decent first draft, a block of code that looks right at first glance - those things do not mean what they used to mean.

Entry-level job postings in AI-exposed fields have contracted meaningfully since their 2022 peak. The roles that remain are asking for more. And the credentials that used to signal effort - a clean first draft, a well-formatted report - are harder to distinguish from AI-assisted work done in minutes.

That does not mean young people are stuck. And it does not mean there is nothing worth building.

It means the thing to build changed.

Why it is happening

When polished output gets cheap, what it signals changes.

What does not get cheaper: genuine understanding of when the output is wrong.

A lot of companies are learning that the hard way right now. They bought speed. They did not buy judgment. They bought summaries, drafts, rankings, recommendations, and shortcuts. Then they left a tired human at the end of the process and called that oversight.

That is how you get nonsense decisions, angry customers, bad press, and legal exposure.

There are active lawsuits against insurers where physicians signed off on algorithmic claim denials at a pace that made individual review impossible. Courts have sanctioned attorneys for filing AI-generated briefs full of fabricated citations. In both cases, the defense was some version of: “the AI made the mistake.” That defense has not held.

Two-second approval? That is not some futuristic breakthrough. That is a lawsuit in waiting.

The accountability is staying with the human. That means the human’s ability to actually evaluate the output is load-bearing.

The same pattern runs at the individual level. Using AI to produce work you don’t fully understand feels like efficiency - until it doesn’t.

Early career is when the foundation is supposed to form. That is the part of the job where you do the work badly, slowly, and learn it. That formation is what lets you catch errors, ask better questions, and take on harder problems next year.

Skipping it for polished output now is borrowing against a foundation you have not built yet. The output looks fine. Nobody flags the gap. Until the gap shows up somewhere it actually matters.

What good looks like

The people who are going to matter most are not the people who can just get output from a tool. That part is getting cheaper every day. The people who are going to matter are the ones who can look at the output and say: this part is solid, this part is wrong, here is why, and here is what we do next.

That is judgment and trust. It’s the part companies cannot fake for long.

Good looks like a person who can answer, specifically and honestly: here is what the AI got wrong and how I caught it. Not: I used AI to do this faster. The question is whether you understand the work well enough to catch the errors - not just produce output faster.

The workers who will matter are the ones who entered the right loop early. They used AI to go deeper into their domain - to find things faster, ask better questions, get to the hard part sooner. Their skills compound because they are doing the judgment work. The workers in the other loop produce good-looking work with a growing gap underneath. The output masks the problem until it can’t.

This is not equally available to everyone. Workers who start with stronger domain foundations, better mentorship, and more institutional support are better positioned to use the tools well from the start. Workers still building those foundations - which is most entry-level workers, especially those without strong networks - are the most exposed. The tool amplifies what you bring to it. If you don’t have much yet, that can work against you.

The rules are moving in that direction too. California now prohibits health insurance denials based solely on AI. The EU requires that humans overseeing high-risk AI systems have actual competence in the relevant domain, not just a human presence. The broader point is simple: even the systems adopting AI fastest still need someone who can actually judge the work.

What to do

So if I were talking to a high school senior, a college student, or someone early in their career, I would not tell them to panic. I would not tell them to worship the tools either.

I would say: build the part that still matters after the tool changes again.

Build one area of real depth. Learn one workflow where mistakes matter. Show your work. Keep the rough draft, the revision, the change in thinking, the reason you chose one path over another. Learn how to check the machine, not just use it.

The companies that are using AI well are going to need people with judgment. And the companies using it badly are going to need them even more.

People are more adaptable than the panic makes it sound. Young people already learn new tools constantly. That part I am not worried about.

What I would build now is not loyalty to one interface.

I would build:

  • judgment
  • proof of process
  • the ability to use the tools without leaning on them blindly
  • the habit of checking
  • enough depth in one real area to know when something is off

That is the kind of thing that travels.

One honest limit: none of this is equally available to everyone. Access to the tools, the mentorship, and the time to build real depth is not evenly distributed. The advice is sound - but it lands differently depending on where you start. And none of this resolves the structural problem. If the entry ladder is collapsing, individual merit does not reopen it. That requires a different kind of fix.

How to talk about it

The ladder is changing. Some of that is unfair. Some of it is going to hurt. I am not going to pretend otherwise.

But “harder” does not mean “hopeless.”

It means the cheap signals are weaker now.

So build the stronger ones.

You do not have to be anti-AI to care about this. The question is not whether the tools are powerful. The question is whether a person - especially one without strong networks or institutional cushioning - can still build something real, get it seen fairly, and develop the kind of judgment that compounds over time.

A useful frame for those conversations: when output gets cheap, judgment gets expensive. Build toward judgment, not toward any specific interface. The tools will change. What does not change is whether you understand the work well enough to know when something is wrong - and whether you can show your work.

One steady action to take this week

Pick one piece of work this week and document the process as you go - first attempt, what was wrong, what you changed, why. Keep that record. That is the kind of proof that holds.

Action ladder

Short term

  • Students and young workers: Pick one piece of work this week and keep the process - first attempt, what was wrong, what you changed, why. Start building proof that you can think, not just output.
  • Parents and mentors: Ask to see the draft, the revision, and the explanation. Help young people build habits of process, checking, and honest reflection instead of rewarding polished output alone.

Medium term

  • Schools and colleges: Build more assignments that make process visible - revision history, short defenses, tool-removal checks, and work that requires explanation rather than just a clean final artifact.
  • Managers and employers: Protect some apprenticeship work instead of stripping all first-pass tasks out of the ladder. If AI removes starter tasks, create a deliberate training path or you will starve your own future pipeline.

Long term

  • Institutions and policymakers: Rebuild the rung itself - more apprenticeships, paid early-career training, stronger entry ramps, and public support for the parts of formation that the market is now quietly cutting out.
  • Parents, educators, and employers together: Stop pretending individual grit alone can solve a structural ladder problem. Push for systems that still give young people a real way to learn slowly, make mistakes safely, and build judgment before the stakes get high.
