- The Big Shift: AI @ Work
- AI fumbles in high places, OpenAI delivers a roadmap for real adoption, and avatars raise new questions about who’s actually in charge.

Your go-to rundown on AI’s impact on the future of work—delivered every Friday. Each edition highlights three to five must-read stories on everything from job disruption and upskilling to cultural shifts and emerging AI tools—all in a crisp, Axios-style format.
Was this newsletter forwarded to you? Sign up to get it in your inbox.
In the wake of the week…
Leadership credibility took a hit on both sides of the aisle as high-profile politicians (and lawyers) were caught relying on AI without understanding it. At the same time, OpenAI released a playbook for real-world deployment that shows what thoughtful implementation can look like. Co-governance emerged as a serious alternative to top-down AI regulation, and CEO avatars made headlines for raising questions about what leadership actually means when it is filtered through prompts and presets.
Meanwhile, this week’s Extra Credit is a smorgasbord. We’ve got OpenAI quietly hatching a social network and dropping its most detailed playbook yet, billionaires dreaming of AI empires in space, the latest data on which AI tools enterprises are actually using, a darkly brilliant essay on performative work, and sure signs that the global AI arms race is heating up.
Let’s dive in. 👇
Is It the Bots Leading the Blind or the Blind Leading the Bots?
In a rare show of unity from both sides of the aisle, Donald Trump and Andrew Cuomo managed to demonstrate exactly how not to use AI. Each released major policy proposals this month that carried unmistakable signs of chatbot assistance. Trump’s tariff plan matched a formula generated by multiple chatbots when asked for “a simple way” to level trade imbalances. The approach was economically unsound, but aligned with responses from ChatGPT, Claude, Grok, and Gemini. In New York, Cuomo’s housing plan closed with grammatical confusion, contradictory logic, and a direct citation to ChatGPT, raising questions about whether the campaign ran out of time, expertise, or both.
Not the Lawyers Too
And these AI indiscretions are not limited to the politicians. At Morgan & Morgan, one of the largest law firms in the country, three attorneys were recently sanctioned for filing legal briefs containing fictitious case citations produced by the firm’s internal AI tool. The lawyers failed to verify the output before submission. Despite swift damage control—withdrawn motions, fee reimbursement, and mandatory training—the reputational hit was real. The firm reminded more than 1,000 attorneys that every AI-assisted citation still requires human verification.
Why It Matters
AI is already shaping government policy, legal proceedings, and campaign platforms. What matters now is how these tools are used, who is accountable, and whether the systems guiding decisions are equipped to handle real-world consequences.
Leadership Insight
AI can boost productivity and sharpen output but cannot replace diligence. Leaders must stay engaged in the process and hold the line on quality and ethics. Treating AI as a co-pilot means staying in the cockpit. Delegating judgment to a machine is both lazy and dangerous. It compromises a leader’s credibility, undermines the people they serve, and weakens the institutions they are trusted to uphold.
The Bottom Line: AI is entering institutions without the guardrails to match. What we saw these past few weeks is not a fluke. It is a preview. If our leaders cannot exercise basic judgment or demonstrate elementary competence, how can we trust them to lead us through the most consequential technological shift in a generation?
Sources: The Verge, Hell Gate, The New York Times [dynamic paywall], Newsweek, Futurism, National Law Review

Cartoon generated with ChatGPT 4o
OpenAI Report Provides Blueprint for Corporate AI Adoption
OpenAI released a tactical guide showing how real organizations are using generative AI to drive productivity, reduce friction, and deliver measurable results. Based on insights from 300 enterprise implementations and more than 2 million users, the report lays out six foundational “use case primitives” that apply across departments: content creation, research, coding, data analysis, ideation, and automation. These are not moonshot ideas. They are simple, accessible, repeatable workflows that scale.
Executives from firms like Fanatics, Tinder, Poshmark, and Estée Lauder shared how they’re embedding AI across workflows to free up time, unlock creativity, and reduce organizational drag. One key: empowering employees to find high-impact use cases themselves, while leadership supports with structure, prioritization, and momentum.
Real Business Impact
OpenAI’s report highlights how companies are turning AI from a novelty into a performance driver. Promega saved 135 hours in six months by automating first-draft content creation. Poshmark accelerated business performance reporting by generating Python code with ChatGPT. Tinder’s engineers are reclaiming low-priority tickets thanks to faster code prototyping. Across industries, teams are reducing manual work, eliminating bottlenecks, and reallocating time toward higher-value work. The playbook works because it is built around structured use cases, mapped workflows, and department-led experimentation—scalable, not speculative.
Why It Matters
Most companies have yet to realize meaningful returns from AI. OpenAI’s report reveals that success depends less on the technology itself and more on how organizations define use cases, train teams, and scale solutions that solve real problems.
Leadership Insight
Structured adoption beats passive experimentation. Leaders who take the time to map workflows, teach core use cases, and prioritize what matters will create real leverage. AI is not magic. It is a force multiplier. Without clarity and ownership, the opportunity gets lost in noise.
The Bottom Line: This is what real AI adoption looks like: targeted, practical, and led with intent. The organizations winning with AI are not improvising. They are executing a plan. Everyone else is just playing with prompts.
Sources: OpenAI [report]
Executive Presence (Kind Of)
Last week, we looked at California’s proposed “No Robo Bosses” Act, which aims to ensure human oversight when AI systems make decisions about hiring, firing, and promotions. That was about AI making decisions for the boss. This week, we’re looking at something different—AI standing in for the boss entirely.
Otter.ai CEO Sam Liang has built an AI-powered avatar of himself trained on thousands of meetings, emails, and internal docs. Dubbed the “Sam-bot,” the virtual CEO can attend meetings, answer questions, and even mimic Liang’s speaking style. He’s not alone. CEOs at Zoom, Klarna, and even venture titan Reid Hoffman are experimenting with digital twins to handle everything from financial presentations to client updates. The idea is to reclaim time lost to non-critical meetings, which have increased 51% since 2019, and let executives focus on strategy, not status checks.
Researchers recently tested the tech at Zapier, having an LLM generate CEO-style responses to employee questions. The bot’s replies were indistinguishable from the real CEO 41% of the time. In short: the AI didn’t need to be perfect; it just needed to be close enough.
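As a back-of-the-envelope illustration (not Zapier’s actual methodology), an “indistinguishability” test like the one described boils down to a simple calculation over reader guesses: the share of trials where readers failed to pick out the real CEO. The function and trial data below are toy examples built to mirror the reported 59%/41% split.

```python
# Toy sketch of the indistinguishability measure described above.
# Each trial records whether a reader correctly identified which
# answer came from the real CEO. The trial data is made up.

def indistinguishability_rate(correct_guesses: list[bool]) -> float:
    """Fraction of trials where readers could NOT tell bot from CEO."""
    misses = sum(1 for guess in correct_guesses if not guess)
    return misses / len(correct_guesses)

# 100 toy trials: 59 correct identifications, 41 misses,
# mirroring the 59% / 41% figures reported in the story.
trials = [True] * 59 + [False] * 41
print(indistinguishability_rate(trials))  # 0.41
```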
I dug in to see how workers might respond to this trend and was surprised to learn that one in five say they would actually prefer to be managed by an AI boss.
By the Numbers
51% increase in time executives spend in non-essential meetings since 2019 (Asana)
$100 million in potential annual costs from low-value meetings at large companies
59% of Zapier employees could correctly distinguish the CEO from his AI twin
2/3 of workers say a digital boss would at least avoid showing favoritism
1 in 5 workers say they would prefer to be managed by an AI boss
Why It Matters
AI is moving beyond productivity tools and into the domain of leadership. The rise of executive avatars forces a deeper examination of authenticity, authority, and trust in a world where presence can be simulated and tone can be tuned.
Why Stop With the CEO?
Zoom CEO Eric Yuan envisions a future where every employee—not just the C-suite—has a digital twin. These avatars would attend meetings, handle emails, and even make decisions based on personalized LLMs tuned to reflect individual behavior and preferences. Yuan imagines a world where your AI tells you which meetings to skip and which to attend, while the avatar handles the rest. The promise is increased productivity. The cost is a potential collapse of direct human interaction and a workplace experience filtered through settings, scripts, and simulated presence.
If everyone has an avatar, who is really present? Bueller? Bueller? Anyone? Anyone? Bueller?
Business Impact
AI avatars could save companies millions by streamlining executive communication and cutting down on meeting fatigue. But the real value lies in how AI reshapes executive function: parsing strategy decks, prepping for investor calls, or representing the company in low-stakes settings. Still, early tests show real limitations—empathy, emotional nuance, and improvisation remain elusive. Bots may deliver the message, but they struggle to read the room.
Leadership Insight
Delegating decisions to AI may boost efficiency, but it also risks eroding the human core of leadership. Trust is not built by presence alone. It is earned through tone, timing, and empathy—traits bots can mimic but rarely embody. Effective leaders do not just communicate. They connect.
The Bottom Line: AI is coming to the corner office, not with takeover ambitions but with an invitation to outsource leadership presence. The temptation is real. The risk is just as real. As executives begin to rely on avatars to represent them, their influence starts to look like a performance managed by settings and scripts. Which raises a deeper question: how will we define leadership in an age when authority, communication, and presence can all be delegated to a bot?
Editor’s Note: In full disclosure, I, too, have created an avatar of myself. Two, actually. I had to delete the first one. I was deeply conflicted, but it knew too much. While I would never go so far as to manage people or send the avatar to meetings on my behalf, I have achieved new levels of productivity and improved output thanks to my surviving AI doppelgänger. I plan to share more about this experience in a future post.

Cartoon generated with ChatGPT 4o
A Seat at the Table: Co-Governance as the Future of AI Regulation
Our lead story this week revealed just how ill-prepared today’s leaders are to guide us into the AI era. A landmark article in the latest Harvard Law Review makes the case that top-down regulation is a poor fit for a technology as far-reaching as AI. The authors propose a co-governance model, one that gives citizens, communities, and public interest groups real decision-making power.
Drawing on examples from participatory budgeting, rural development, and elder care, the paper outlines a path toward regulation that is accessible, responsive, and rooted in the same democratic values that shape open-source debates: transparency, accountability, and shared authority.
By the Numbers
80% of Americans believe elected officials are out of touch with people like them
70% believe they have too little influence over congressional decisions
Why It Matters
When few control how AI is governed, public trust erodes. Co-governance offers a way to expand participation, embed transparency, and ensure that regulation reflects the needs and experiences of the people most affected.
Leadership Insight
Leadership in the AI era means making space for others. Executives, policymakers, and technologists must design structures that include more voices early and often. Co-governance relies on sharing authority by distributing responsibility in ways that reflect the complexity and reach of the technology. That is how trust is earned and sustained.
The Bottom Line: Co-governance creates conditions for better decisions, deeper participation, and more resilient institutions. AI has already begun shaping public life. The people affected by it should have power over how it is built, deployed, and managed.
Sources: Harvard Law Review
Extra Credit
For the overachievers: These are the stories that didn’t crack the top three but are too important to ignore—quick hits on what’s happening and why it matters.
The AI Race Is Global, and the U.S. May Not Stay Ahead
Key Takeaway: Stanford’s 2025 AI Index shows a sharp rise in international competition, with China’s DeepSeek-R1 model delivering performance on par with OpenAI and Google while using far less compute. The race for frontier AI is now crowded and global.
Why It Matters: China leads in AI papers and patents, and new models are emerging across Europe, Latin America, the Middle East, and Southeast Asia. The performance gap between open and closed models narrowed from 8% to just 1.7% in 2024. As AI efficiency improves and open-weight models gain traction, innovation is expanding beyond a handful of U.S. companies and into a wider global field with major economic and strategic consequences.
Source: Wired [paywall]
Silicon Valley Billionaires’ AI Fantasy Misses the Point
Key Takeaway: Science journalist Adam Becker’s new book More Everything Forever dismantles Silicon Valley’s utopian dream of a future ruled by AI and powered by limitless growth in space. He argues the fantasy is flawed at its core and diverts attention from urgent crises here on Earth.
Why It Matters: Becker delivers a sharply researched critique of the tech elite’s transhumanist vision, exposing its scientific limits and moral blind spots. Large language models still hallucinate basic facts and absorb the same cultural biases embedded in the internet, and now increasingly from themselves. As some billionaires imagine AI as a tool to escape human fragility, More Everything Forever offers a sobering reminder: society’s greatest problems cannot be solved with more compute. The real work of building a livable future cannot be outsourced to machines.
Source: ScienceNews

Cartoon generated with ChatGPT 4o
OpenAI Releases Its Most Detailed Prompt Engineering Guide Yet
Key Takeaway: All sorts of goodies from OpenAI this week. They also released an internal guide outlining best practices for prompting GPT-4.1. The document covers advanced use cases including agentic workflows, long-context prompting, chain-of-thought reasoning, instruction tuning, and code patching via diff formats, suggesting just how far prompting has evolved from simple chat queries to full-blown system design.
Why It Matters: This guide demonstrates a major shift in how serious builders are approaching prompt design. It is no longer viewed as copywriting. It is treated as architecture. GPT-4.1 is optimized for literal interpretation, persistent agent behavior, tool orchestration, and long-context retrieval. Developers are encouraged to approach prompt engineering as a rigorous discipline, closer to systems thinking than scripting. For organizations building internal copilots or automating decision flows, this guide offers a masterclass in moving generative AI from clever responses to reliable systems.
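To make the “prompting as architecture” idea concrete, here is a minimal sketch of what a structured, agent-style prompt can look like when assembled from named sections rather than typed as a single chat message. The section names, wording, and function below are illustrative assumptions, not taken from OpenAI’s guide.

```python
# Illustrative sketch of a structured, agent-style prompt: the prompt
# is assembled from named sections (role, persistence reminder, tools,
# reasoning instructions, task) instead of a one-line chat query.
# Section names and wording are illustrative, not from OpenAI's guide.

def build_agent_prompt(task: str, tools: list[str]) -> str:
    sections = {
        "Role": "You are a coding agent working inside a CI pipeline.",
        "Persistence": (
            "Keep working until the task is fully resolved; "
            "do not hand control back with a partial answer."
        ),
        "Tools": "Available tools: " + ", ".join(tools),
        "Reasoning": "Think step by step and plan before each tool call.",
        "Task": task,
    }
    # Render each section as a markdown-style heading plus body.
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())

prompt = build_agent_prompt(
    "Fix the failing unit test in utils.py",
    ["read_file", "apply_patch", "run_tests"],
)
print(prompt.splitlines()[0])  # ## Role
```

The point of the sketch is the design choice, not the specific wording: once prompts are built from explicit, named parts, they can be versioned, reviewed, and tested like any other system component.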
Source: OpenAI [pdf]
The 10 Most-Used GenAI Tools in the Enterprise Right Now
Key Takeaway: A new Wharton-backed survey confirms that ChatGPT, Microsoft Copilot, and Google Gemini are leading adoption inside large enterprises. But even among the top 10 tools, most companies are still testing and evaluating rather than fully scaling generative AI.
Why It Matters: Despite billions in investment, only a small percentage of gen AI use cases have moved beyond pilot stage. ChatGPT leads with 62% usage, but tools like Perplexity, Claude, and Midjourney also show meaningful traction. The big theme? Fragmentation. No single tool covers every use case, and most require human oversight or post-processing. For CIOs and business leaders, the stack is growing more complex, and the gap between promise and production remains wide.
Source: CIO
OpenAI Is Quietly Building a Social Network to Rival X and Meta
Key Takeaway: OpenAI is developing an early-stage social media prototype with a real-time feed and image generation tools, potentially positioning ChatGPT as a direct competitor to platforms like X and Meta’s upcoming AI-integrated assistant app.
Why It Matters: A social layer would give OpenAI access to the one thing it currently lacks—fresh, dynamic, user-generated data at scale. That kind of input is essential for training more responsive, personalized models. It also intensifies Sam Altman's rivalry with Elon Musk and Mark Zuckerberg, both of whom are moving aggressively to fuse AI with social content. Whether OpenAI rolls this out as a standalone product or builds it into ChatGPT, the move marks a strategic shift: from LLM provider to full-stack consumer platform.
Source: The Verge
Essay: When Work Persists but Meaning Disappears
Key Takeaway: A long-form exploration of “occupational downgrading” argues that the future of work is not mass unemployment, but something quieter and more unsettling: jobs that continue without substance, performed to preserve structure, not productivity.
Why It Matters: While many fear job loss, this essay predicts something more insidious: the persistence of work as performance. AI handles the core tasks, and humans remain for optics, oversight, and emotional scaffolding. As productivity decouples from presence, the true challenge becomes how to reclaim purpose in a workplace that no longer needs you, but still expects you to show up.
Source: State of the Future [Substack]
This edition of The Big Shift: AI @ Work may have been edited with the assistance of ChatGPT, Claude, Copilot, Gemini, Perplexity, or none of the above.
Want to chat about AI, work, and where it’s all headed? Let’s connect. Find me on LinkedIn and drop me a message.