Human Choices Will Decide What Happens To Jobs
The binary options of apocalypse or utopia miss the real force that produces the outcome: human choices.
Almost daily, new headlines claim that artificial intelligence will either destroy work as we know it or usher us into a golden age of human creativity. Both stories contain a grain of truth. But the binary options of apocalypse or utopia miss the real force that produces the outcome: human choices. How employers redesign jobs, how policymakers steer adoption, and how quickly people learn new skills will matter more than any technical milestone.
To cut through the noise, here are five AI-and-jobs story lines on a spectrum from “machines take over” to “humans level up.” Thinking in terms of a spectrum may help us communicate with more nuance rather than argue past each other.
1) Displacement: The Job Apocalypse Story
This is the starkest view. As AI becomes cheaper and more capable, it replaces people across a widening set of tasks, eliminating roles faster than new ones appear. Advocates point to junior white-collar jobs—coders, writers, analysts—where entry-level work overlaps with what AI can now draft or debug. The fear isn’t just layoffs; it’s a robot future that breeds fatalism, structural unemployment, and widening gaps between those who own or orchestrate AI technology and those who don’t. This perspective is helpful as a warning label. It pushes us to build sturdier safety nets and ask who captures the gains.
2) Reallocation: The Tasks Not Jobs Story
Another approach centers on tasks. Work is a bundle of subtasks. AI will automate some routine tasks, augment others, and hand the rest back to people. Job titles may stay the same, but what it means to be a paralegal or a financial analyst will change. Employers can use this moment to redesign roles that elevate human judgment, quality assurance, and client interaction. Or they can quietly squeeze headcount by letting one person do the work of three. This is where early pain concentrates. If firms offload routine tasks to AI without replacing them with structured learning, the first rung of the ladder breaks. That’s the experience gap problem: employers want two or three years of experience for what were once considered entry-level jobs, though few are willing to provide that experience themselves.
3) Augmentation: The Copilot Dividend Story
A third approach sees AI primarily as a complement. These AI digital assistants or copilots speed up drafting, analysis, and routine administration while often improving quality—especially for novices. Doctors, teachers, lawyers, case managers, and small-business owners report that AI reduces drudgery, allowing them to spend more time on judgment, relationships, and design. Augmentation is not magic. It requires guardrails, new workflows, and training. But done well, it expands human capacity. The key to achieving this is ensuring broad access across income levels, regions, and systems, so we don’t hard-wire a productivity divide between those who have copilots and those who don’t.
4) Demand Expansion: The Second Order Story
This approach looks past first-order substitution to second-order demand. When the monetary cost of a capability falls—such as drafting code, designing collateral, or composing briefings—new products, services, and markets usually appear. Disruptive innovation theory describes how startups spring up to exploit cost reductions. Incumbents launch offerings that previously seemed uneconomical. Some argue that we saw this with the internet and mobile: cheap digital distribution created entire industries. Timing is uncertain. Demand expansion typically lags the headline breakthrough. Entrepreneurship needs quarters or years to metabolize new possibilities. But if history is a guide, the net-jobs story depends on this long tail of new work, not just the initial automation wave.
5) Human Renaissance: The Human Advantage Story
At the optimistic but demanding end of the spectrum is a labor market where distinctly human advantages—trust, empathy, leadership, moral reasoning, hands-on craft—become scarcer and more valuable. In this view, AI is an accelerant. Value still hinges on human judgment and relationships. The best systems combine features of humans and machines. This creates an explicit quality-and-safety layer—editors, reviewers, auditors, and product managers—and elevates durable skills like communication, collaboration, and creativity that travel across sectors. This isn’t wishful thinking, but it does create a design challenge. You only get a renaissance if institutions teach and reward skills that complement AI, and if companies redesign jobs to surface them.
Which story wins? All of them, in patches. Displacement is real in some firms and functions. The augmentation dividend is real in others. Task reallocation is nearly universal. Demand expansion usually takes longer than boosters expect. And human-centered value is a choice, not an inevitability. That’s why debating the “true” effect of AI on jobs is less valuable than asking what it would take, in the places where you have control, for the better stories to beat the worse ones.
A Two-Step Way To Use This Continuum
Step one: Take the five options along the continuum:
–Displacement: jobs eliminated.
–Reallocation: tasks reshuffled; roles redesigned.
–Augmentation: copilots lift productivity and quality.
–Demand expansion: new products and markets.
–Human-centered renaissance: trust, problem-solving, and leadership rise in value.
Step two: Map them against supply and demand questions. For example:
–Is AI substituting for the core tasks an individual supplies or complementing them?
–Is AI adoption shrinking demand for what individuals do or expanding it?
If you’re in a substituting-supply, shrinking-demand environment, your job may be to create a stronger safety net, a slower rollout, and a redesign plan before you scale. If you’re in a complementing-supply, expanding-demand environment, your job may be to ensure access, training, and standards for all so that the gains spread. Most organizations straddle quadrants, with different functions sitting in different boxes, which is precisely why one-size-fits-all responses fail.
Moving To Action: Five Design Principles
Here are five design principles to help you work through these issues.
1) Redesign the first rung of the career ladder, not just the top rung. If AI absorbs much of the work that used to be taught to entry-level workers, build new ways to learn on the job. Create apprenticeship-like roles across professional services, finance, media, health care, and government. Make “earn-and-learn” a default, not a niche. Use portfolios and skills trials to prove ability, not just diplomas. When early career tasks collapse, you either replace them with structured practice or eliminate mobility for a generation. A credible next step is to tie financial support for these new ways to learn on the job to outcomes. Fund apprenticeships and short programs that lead to stable employment, not just seats filled. Reward providers and employers who show wage gains and progression, not just completion. If AI accelerates entry-level productivity, the return on these investments improves.
2) Give everyone an AI copilot and teach them how to use it. If augmentation is real, education and training are paramount. Workers and students should learn AI-assisted workflows, just as previous cohorts learned spreadsheets and search. That takes two things: tools and evidence-based training. The first is a procurement and access problem. The second is a curriculum and coaching problem. Both are solvable. Firms can publish internal AI playbooks that define safe uses, privacy constraints, and good prompts for everyday tasks. Schools can embed AI-assisted writing, coding, data analysis, and feedback loops into core courses, without outsourcing thinking. The goal isn’t to replace teachers or managers. It’s to give them leverage.
3) Make hiring truly skills-first. If roles change faster than credentials, employers must learn to see and validate skills. Showcase real work (projects, portfolios, performance tasks), recognize prior learning, and use brief, observable work trials to judge early performance. Degrees still matter, but they’re no longer a sufficient signal. Hiring for potential, with structured on-ramps and clear progression, can widen the talent pool and speed adoption. The uncomfortable truth is that many companies talk skills-first but still screen by pedigree because it’s easy. AI removes excuses. If you can assess ability cheaply and fairly—and you’ve redesigned early roles to teach—then “no experience” is no longer a dead end.
4) Build the human-in-the-loop layer on purpose. Trust is the next economy’s currency. Bake it into job architecture, including clear accountability for AI-assisted work, auditable workflows, and fundamental decision rights for the humans overseeing machines. In regulated sectors—finance, health, public services—this is non-negotiable. In creative and service work, the best outcomes still come from a person who can say, “I checked this. It meets our standard. I’m accountable.” This layer also creates jobs—editors, reviewers, evaluators, and safety and compliance teams. Underfund it, and brittle deployments trigger backlash.
5) Don’t wait for the market to fix gaps. Demand expansion is real but uneven. Left alone, it concentrates in hubs and leaves other places behind. Policymakers and philanthropies can alleviate this situation by providing early funding for startups in overlooked regions. Public-private training compacts with employers should be part of this approach. Data systems should track outcomes, so money follows what works. The goal isn’t to pick winners. It’s to ensure every region has a shot at building complementary systems around cheaper intelligence.
The Culture Change Underneath AI Change
We also need a mental shift. In the pre-AI world, early careers were mostly “learn by doing.” In the AI world, early careers become “earn-and-learn by doing—with a coach at your elbow.” That coach might be a person, a copilot, or both. Employers that treat entry-level workers as learners—setting up practice repetitions, feedback loops, and human oversight—will outrun competitors who outsource novice work to a model and hope for the best.
Education must shift, too. If we want a human-centered economy, schools and colleges should double down on knowledge and durable skills that AI can’t cheapen: reasoning, communication, team leadership, ethical judgment, and hands-on craft. They should also weave real work into learning—projects with external partners, apprenticeships, and simulated practice—so graduates show up with more than a transcript.
Don’t Predict The Future Of Work—Build It
The AI-and-jobs debate isn’t about fate. It’s about design. The labor market is not weather to be endured but architecture to be drawn. These five stories aren’t prophecies. They’re the products of choices—how we deploy tools, train people, and redesign jobs.
Treat AI as a way to thin payrolls and you’ll get a thinner future: a broken first rung, displacement, and widening inequality. Use AI to teach as well as to do—to rebuild early roles with structured practice, widen on-ramps, and keep humans firmly in the loop—and “skills-first” becomes real. Entry-level roles start higher. Careers climb faster. Institutions finally see skill.
The story is more nuanced than the headlines or the technical milestones suggest. Models won’t write that future. Our choices decide what will prevail. Don’t wait to see which story wins. Pick the mix you want, and build it with others.