AI at Work: Displacement, Power, and What Comes Next

Welcome to the latest edition of The Sunday Prompt, where we explore the AI-driven shifts redefining work.

This week’s prompt takes a deeper look at AI and jobs: what is happening now, what is coming over the next decade, and what we can do about it.

I have spent 15 years working in AI. I use it every day. It is embedded in nearly every tool we touch—email, search, social, shopping, design, CRM, ERP, HCM, marketing automation, research, you name it. Most of the time, it makes those tools better. Used well, it can improve your work product and double your output. It has certainly done both for me.

And the upside does not end there. AI holds real promise for science, medicine, education, and forms of innovation we are only beginning to understand.

But that is only part of the story.

As the technology advances, the implications move beyond productivity and decision-making. They become existential. It may be hard to imagine something like ChatGPT coming for your job, but it is. And so are the people building it.

So the real question is: what are you going to do about it?

Let’s dig in. 👇

PROMPT: HOW IS AI REALLY IMPACTING JOBS? SHOULD I BE CONCERNED?

Every week in my Friday digest, I report on AI and its impact on jobs.

AI is going to take your job. Or maybe not. You might just end up on a hybrid team with AI as your tireless coworker. Then again, some companies have already replaced 5% of their workforce with AI. Only to reverse course. It didn't work as expected. They regret it. You can have your job back. For now. Until the next update rolls out.

These contradictory headlines reflect the current reality. And they make it difficult for anyone, whether policymakers, executives, workers, or students, to form a clear picture of what is happening in the labor market.

So I decided it was time to dig deeper. Behind the headlines. Into the data. Into the earnings call transcripts, industry reports, policy studies, and academic research. And here is what I found.

The Current State of Play

Artificial Intelligence is already altering the structure of the workforce. The early story is one of disruption, but not always in the way the headlines suggest.

AI-driven job displacement has been most pronounced in roles defined by routine, repetitive tasks—think manufacturing (45% impact rate), retail (35%), and office support. Meanwhile, sectors like education and healthcare are seeing major investment and job creation thanks to AI augmentation. Healthcare, for example, is projected to add 50% more AI-supported roles, with human practitioners relying on tools to enhance diagnosis, reduce errors, and manage data.

But job creation does not mean job stability. The most exposed jobs are not necessarily low-skilled. Brookings Institution research finds that workers with a bachelor’s degree face five times more exposure to AI than those with only a high school diploma. The rise of generative AI has hit white-collar roles hardest, from legal document review to financial analysis to technical writing.

And it is not just exposure. It is volatility. Klarna replaced 700 customer service agents with AI, then walked it back. McDonald’s tested AI in its drive-thrus, then reversed course. More than half of the companies that implemented AI-driven layoffs have since reconsidered their approach.

What is emerging is a complex rebalancing. Skills gaps are growing. Some jobs are vanishing. Others are transforming. Companies are shedding and hiring simultaneously. Microsoft laid off 7,000 workers this year while allocating $80 billion to AI initiatives. Meta, Amazon, and Apple have followed similar patterns.

The current wave of AI is not replacing human labor at scale, but has begun to reconfigure the nature of work. The buzzword out of boardrooms is augmentation. But in practice, we are seeing a churn of roles, rising pressure on employees to adapt quickly, and a sharp divide between those who can effectively use these tools and those who cannot.

The shift is already playing out in unexpected places. UBS, one of the world’s top investment banks, has begun digitally cloning its analysts, replicating their voices and likenesses to generate AI-powered video summaries of research reports. These synthetic personas are marketed to clients while the real analysts fade into the background. In parallel, JPMorgan is discouraging headcount growth in favor of AI-driven efficiency. The message is clear: productivity is welcome. Presence is optional. And human identity is increasingly negotiable.

We’ve Reached an Inflection Point

Optimists argue this disruption follows a familiar pattern. The steam engine, electricity, and the internet eliminated jobs in their time, then created many more. Productivity increased. Entirely new industries emerged. Living standards rose. From this perspective, AI will do the same. Displacement may occur, but the economy will absorb it through adaptation and innovation. However, the scale, speed, and broad capabilities of today’s AI systems suggest this transition could be fundamentally different.

For every optimistic projection, there are stories already unfolding that challenge the idea that this transition will be smooth. For some, the disruption is already here. Shawn K., a veteran software engineer, lost his $150,000 job last year and has since applied to more than 800 roles—landing fewer than 10 interviews, some with AI agents instead of people. Now living in a trailer and delivering DoorDash to get by, he describes the current shift as “a social and economic disaster tidal wave.”

He is not bitter about AI. He is frustrated that companies are using it to cut costs instead of multiplying output. “The Great Displacement,” he writes, “is already well underway.”

The software industry is merely the first wave. The very companies building AI are testing its limits on their own workforce before turning it outward. Career paths for software engineers in these firms are already being rerouted—sometimes to dead ends, as entry-level hiring is drying up, promotions are slowing, and some firms are outsourcing entire development pipelines to generative tools. IBM recently confirmed it replaced hundreds of HR staffers with AI. AskHR, the company’s internal bot, now automates 94% of routine personnel tasks. Klarna, Salesforce, and others are following suit. The coders and HR reps are only the beginning.

My friend Steven Baker, an infrastructure leader at Greenfly and an optimist by nature, recently posted a thought experiment: in an easily imaginable near future, feed an AI tool a product idea at night and wake up to a working SaaS app built, tested, optimized, and launched with conversion data and tiered pricing. Traditional engineering fades into the background. One person does the work of five. The implications are staggering, on both ends: productivity and displacement.

What to Expect: The Next Five Years (and Beyond)

Projections for the future vary, but most models agree on one thing: massive occupational churn.

Goldman Sachs estimates that up to 300 million full-time jobs worldwide could be affected by generative AI. McKinsey predicts that nearly 30% of current hours worked in the U.S. and Europe could be automated by 2030, requiring tens of millions of job transitions.

The World Economic Forum forecasts that while 85 million jobs may be displaced, 97 million new ones will be created, many in areas that do not yet exist. Roles in AI development, cybersecurity, and human-machine collaboration are expected to grow. Soft skills like critical thinking, ethical judgment, and adaptability are rising in value. Entry-level jobs may vanish, while coaching, mentoring, and hands-on training will become harder to access. Workers will be expected to start their careers "in the middle."

And looming over all of this is the question of Artificial General Intelligence, or AGI. AGI refers to an AI system with the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or greater than that of a human. Unlike current AI tools that excel at narrow, specific functions, AGI would be capable of general reasoning, problem-solving, and adaptation across domains.

Some researchers believe AGI is imminent. OpenAI's Sam Altman says it could arrive within a few years. Others, like Meta’s Yann LeCun, argue it may never materialize at all. But should AGI arrive, its implications for the labor market would dwarf anything we are experiencing now. A system that can match or exceed human cognitive abilities across most tasks could collapse entire categories of employment. Knowledge work, management, even creative and strategic roles could all come under pressure. The effect would not be a reshuffling of roles but a wholesale redefinition of what work even is.

Even the federal government now acknowledges this possibility. In a recent interview, former White House AI advisor Ben Buchanan told Ezra Klein that AGI is likely to emerge in the next few years and that the United States is fundamentally unprepared for the consequences. “This will have implications for the way we organize society,” he said. “And what we saw inside the White House made the trend lines unmistakable.”

Further out, experts are now debating the timeline for what comes after AGI: Artificial Superintelligence, or ASI. ASI refers to an AI system that exceeds human capabilities across every cognitive domain, including strategic thinking, creativity, and emotional intelligence. Predictions vary, with some forecasts placing its arrival as early as the 2040s. At that point, the conversation moves from workforce disruption to a deeper reckoning. If machines become more capable than humans in every intellectual pursuit, the meaning of work itself begins to dissolve. So does our traditional sense of purpose. The implications reach into the core of human identity and the value we place on being the ones who think, create, decide, and lead.

If most jobs are displaced by AI, the economic consequences will be matched by equally profound social ones. What happens to the middle and lower classes when stable employment becomes scarce? How do people live in a system where income is decoupled from contribution?

The American ideal is built on effort, advancement, and work as a source of dignity. That foundation begins to crack in a world where machines outperform people at nearly everything of value. The loss of upward mobility is especially destabilizing. When work no longer provides a pathway to a better life, ambition fades, inequality calcifies, and aspiration is replaced by hopelessness and resentment.

A society organized around productivity must face a hard question: what happens when human labor is no longer essential?

The trajectory is clear. AI is changing the nature of work. The more urgent concern is how society adapts. Will reskilling efforts match the speed of technological advancement? Will companies and institutions treat workforce development as a core responsibility or as a cost to be minimized? And where does government fit in? Policymakers have a role to play in shaping the future of work through regulation, investment, and civic leadership. So far, many have chosen hesitation or deference over action. 

Meanwhile, the companies building these technologies have amassed unprecedented influence with little accountability. They claim to empower workers, yet their products increasingly displace them. Their rhetoric has shifted from caution to competition. Beating China has become the justification for every acceleration, every deregulation, every cut to oversight. Buchanan called it the central priority in U.S. AI policy: reaching AGI before China does. As one official put it, we must lead, even if we do not yet understand what we are leading toward. But if winning the AI race means sacrificing the very fabric of our economy, our civic institutions, and our sense of shared purpose, what exactly are we winning? 

The assumption that this is a zero-sum game, that robust international competition in AI necessitates abandoning societal safeguards and values, must be challenged. Surely, prudent stewardship would involve pioneering AI responsibly, proving that innovation and societal well-being can, and indeed must, advance together.

And that is where the second half of our story begins.

Listen to what they are telling you.

The tech elite shaping this future are speaking plainly. Sam Altman predicted that AI agents would “join the workforce” by 2025 and “destroy a lot of people’s jobs.” Mark Zuckerberg plans to hand all software engineering over to AI. NVIDIA’s Jensen Huang says 80% of jobs are ripe for automation.


Marc Andreessen openly embraces a future without work. In his Techno-Optimist Manifesto, he dismisses fears of AI-driven unemployment as “luddite lies” and writes, “We should be teaching young people that a life of pure leisure, a life of art and science and games, is a perfectly respectable choice.”

There is no mention of how people will pay rent in this leisure economy, or how wealth and resources will be distributed. He does not view mass unemployment as a danger to be managed, but as an ideal to be realized. It is a vision entirely untethered from economic reality or human need.

These are not hypotheticals. They are operating philosophies. They are investment theses. And they are writing human labor out of the story without public input or accountability.

This should not sit right with anyone.

The people executing this vision live far from its consequences. They do not just hold unimaginable wealth. They live in another world entirely. They fly private, skip public spaces, outsource daily life, and insulate themselves from the very society their technologies are reshaping. Former OpenAI chief scientist Ilya Sutskever reportedly spoke of needing a doomsday bunker for the day AGI arrives—an outcome he was actively working toward at the time. This is not just socioeconomic distance. It is philosophical detachment.

Today’s most powerful AI systems are being built by a handful of engineers, many of them young, without children, some experimenting with altered states of mind (ketamine, anyone?), working on tools with generational consequences. Detached from the public they affect, untested by the responsibilities that ground long-term thinking, they treat world-altering AI as a personal legacy project. Their companies are led by men who have shamelessly evaded responsibility for their platforms' real-world harms for over a decade.

This is who we are entrusting with the future of labor and humanity.

Newsflash: the tech billionaires and AI gurus are not coming to save us. Quite the contrary.

We go to extraordinary lengths to secure nuclear weapons: hardened facilities, oversight layers, international treaties. One mistake can ripple for generations. With AI, we have handed the controls to a small circle of engineers and venture capitalists with virtually no guardrails.

In 2023, Sam Altman stood before Congress to advocate for AI oversight. Two years later, he warned that oversight would be “disastrous” for American competitiveness. Innovation first. Everything else second.

Critics, including former government officials and AI researchers, have raised alarms about this deregulation-first strategy. They cite documented harms: deepfakes, synthetic exploitation, algorithmic bias, and economic disruption. As MIT’s Max Tegmark put it, we now live in a world where restaurants require health inspections, but frontier AI can launch without oversight.

We Can No Longer Afford to Be Passive Observers

None of this demands we reject progress. I support innovation. I have spent my career building companies, launching technology products, and creating jobs. I have worked in AI since 2009. I am an advocate. But there comes a point when the scale and influence of a few firms grow so large that oversight is no longer optional. It becomes a civic responsibility.

Look at the past decade. Has the growing concentration of power in the tech industry produced broad societal benefits? The evidence suggests otherwise. What we have seen instead is election interference, the unchecked spread of misinformation, online environments harmful to children, and a steep rise in youth isolation and mental health crises—all unfolding faster than regulators have been able to respond.

None of those harms was the price of progress. We would still have powerful smartphones. We would still have cool apps. What we would not have is a generation at risk, enslaved by their devices.

Has this taught us nothing?

The time to intervene is now.

I do not pretend to have the solution. I am out of my depth on this one. But I do know this: we need a plan. And we need to get cracking on it now.

"Wait and see" is not a strategy. The tools being built today are too powerful, too far-reaching, and moving too quickly for us to respond after the fact. By then, the decisions will already be made by people who were never elected, never asked, and never told no.

Any credible plan must place humanity first. It must serve society, not just shareholders. It must foster innovation, but not at the expense of the public good. Progress matters. So do dignity, stability, and self-determination.

There will be no silver bullet. The fix will be messy, multi-faceted, and iterative. It will take a blend of regulation, public policy, education, and principled resistance. It will require the best minds in government, civil society, business, labor, law, and technology. The builders of AI should have a seat at the table, but they cannot set the agenda or steer the process.

I plan to get smarter in this space, quickly. And I will use this newsletter to share what I find—ideas, frameworks, and questions worth wrestling with.

If your response is, “That’s naive. We have to win the race against China, or else,” then fine. Let us win it. But let us compete on our terms. Let us protect our values, our institutions, and our commitment to human rights. We competed with the Soviets under tighter rules and tougher constraints, and we prevailed. And we’ve been out-innovating China on an uneven playing field for decades. There is no reason to believe we cannot do it again.

We do not need to abandon who we are to shape what comes next. So let's get started.

The tools being built today will shape everything—markets, jobs, institutions, and identity. They will either reflect our values or erase them. The outcome depends on whether we show up.

Want to chat about AI, work, and where it’s all headed? Let’s connect. Find me on LinkedIn and drop me a message.

If this email was forwarded to you and you’d like it delivered directly to your inbox each week, subscribe below.