Another fresh news cycle has been triggered by high-profile people in AI commenting on the potentially very disruptive economic impacts of AI. Of particular note, Dario Amodei, the CEO of Anthropic, recently spoke very bluntly about his views on the potential impact of AI on jobs and the economy, inviting responses from many high-profile figures.
The Sedan Plowshare Crater. Wikimedia Commons.
Part 1: Links Round-up
First, let’s just review some links. This may be handy if you want to catch up on various interrelated new essays and news coverage (and I will update this, with a change log, if anything else comes up!)
Amodei quoted in Axios: “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years” and “possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions”
Kevin Roose covered the topic in the New York Times
There have also been several widely circulated essays that are highly related:
Another piece in Time published on June 4 (after the first draft of this post was written): “AI Will Devastate the Future of Work — But Only If We Let It” from Gary Rivlin, quoting Brynjolfsson and others
See also “Addressing the U.S. Labor Market Impacts of Advanced AI” from Sam Manning
And importantly, there has also been pushback against the narrative of inevitability
The “AI as Normal Technology” essay from Narayanan and Kapoor also provides a set of counterpoints to some of the above claims.
Part 2: Adding Collective Bargaining for Information into the Conversation
What do I want to add to this conversation?
First, while I think the totality of above essays/articles/posts already provides a very comprehensive set of perspectives, considerations, and possible interventions, I of course want to reiterate the potential role of data leverage, and more specifically, “collective bargaining for information”:
Matt Prewitt, Hanlin Li, and I have a preprint paper out that specifically lays out this vision for Collective Bargaining for Information (which we abbreviate as CBI), tying the CBI argument both to classical information economics and to more modern concerns about power concentration and AI safety
Our arguments very much resonate with the above, with particular focus on concrete policy and research actions needed to actually achieve real countervailing force through collective bargaining
I’d like to think that CBI can help with coalition building, with opportunities for both (a) people very concerned about near-term economic impacts and (b) those more worried about AI hype and other risks to contribute to a shared cause (though of course this needs battle testing!)
Second, I want to provide another stab at a bullet point level analysis of the current evidence and the “mechanistic argument” for why economic power concentration is possible and even likely. I’ll also touch on how this prediction is compatible with “hype concerns” and the possibility that in some domains AI will face some data-related challenges.
I certainly think that both theoretical and empirical work will be critical for understanding the potential impact of AI on jobs and power. It’s also important to keep in mind that most people making predictions have wide bounds on their estimates right now (note Amodei’s “one to five years” qualifier), and that there’s a split between people who want to focus more heavily on the data (the “it’s not something to freak out about until we actually see sectoral unemployment spiking” stance) vs. the theory (the “it’s something to freak out about because of the nature of information, cognitive labour, compute, and power-accumulation feedback loops” stance).
Part 3: Another simple model for thinking about AI impacts
One way to think about labour substitution — a very simple model in which “workers output information sequences at each turn”
More on this model
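To make the toy model concrete, here is a minimal sketch (my illustration, not from the paper): a worker's job is treated as mapping a context to a sequence of information, and an AI system "trained" on logged records of that work can substitute for the worker on any context it has seen, but not on contexts with no data. The `worker`, `train`, and context names are all hypothetical.

```python
# Toy version of "workers output information sequences at each turn".
# An AI trained on logged (context, sequence) records substitutes for the
# worker exactly where work records exist -- and nowhere else.
from collections import Counter, defaultdict

def worker(context: str) -> tuple[str, ...]:
    """Ground-truth mapping from a work context to an information sequence."""
    return tuple(f"{context}-step{i}" for i in range(3))

def train(logs: list[tuple[str, tuple[str, ...]]]) -> dict:
    """'Compress' logged work records into a lookup model (majority vote)."""
    by_context: dict[str, Counter] = defaultdict(Counter)
    for context, seq in logs:
        by_context[context][seq] += 1
    return {c: seqs.most_common(1)[0][0] for c, seqs in by_context.items()}

# Log some work, train on the records, then check substitutability.
logs = [(c, worker(c)) for c in ["invoice", "memo", "invoice"]]
model = train(logs)
substitutable = model.get("invoice") == worker("invoice")  # data exists
novel_covered = "lawsuit" in model  # no records, so no substitution
```

The point of the sketch is the asymmetry: substitution tracks data availability task by task, which is why task-level data coverage shows up below as the thing to forecast.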
Part 4: We should keep in mind the goal of the AI field and the plausibility of “augmenting AI”
The whole goal of much of the AI field is to be able to replicate or surpass human-level capabilities on “produce the right sequence of information in response to some context”
“Just make augmenting AI”
So, large-scale economic disruption from AI is possible. This disruption should be roughly forecastable based on task-specific and job-specific data availability (and it’s great to see more research on these topics, e.g. work from Labaschin et al. extending the “GPTs are GPTs” paper from Eloundou et al.), and there are some levers (see also a recent AI Now report that discusses, among other things, labour organizing for this purpose).
Part 5: In conclusion
Much of the AI field is focused on taking records of human work and creating compressed artifacts that can replicate the “output a sequence of information” actions workers must perform regularly, the very actions that give workers the leverage needed to keep their jobs.
One of the main goals of the field is to get better at replicating these sequences!
So if the field is successful, this will disrupt the economy.
AI might be limited in certain domains because of data availability (and in particular, will be limited in where it can be deployed because of eval data leverage).
But the core challenges in designing markets for information create conditions where powerful actors with existing capital needed to operate AI systems can create feedback loops to accumulate more information and build more powerful AI systems.
We should work to prevent this.
Recap of all links above:
Acemoglu, D. (2024). The Simple Macroeconomics of AI (NBER Working Paper 32487). National Bureau of Economic Research. https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf
AI Now Institute. (2025). Artificial Power: 2025 Landscape Report. https://ainowinstitute.org/2025-landscape
VandeHei, J. & Allen, M. (2025, May 28). Behind the Curtain: A White-Collar Bloodbath. Axios. https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic
Hard Fork (2025, May 30). The A.I. Jobpocalypse + Building at Anthropic with Mike Krieger [podcast episode]. https://www.nytimes.com/2025/05/30/podcasts/hardfork-ai-jobpocalypse.html
De Cremer, D., & Kasparov, G. (2021, March 18). AI Should Augment Human Intelligence, Not Replace It. Harvard Business Review. https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it
Drago, L. (2025, January). The Intelligence Curse [Substack essay]. https://lukedrago.substack.com/p/the-intelligence-curse
Drago, L., & Laine, R. (2025, May 30). What Happens When AI Replaces Workers? TIME. https://time.com/7289692/when-ai-replaces-workers
Narayanan, A. & Kapoor, S. (2025, April 15). AI as Normal Technology. Knight First Amendment Institute. https://knightcolumbia.org/content/ai-as-normal-technology
Kulveit, J., Douglas, R., Ammann, N., Turan, D., Krueger, D., & Duvenaud, D. (2025). Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development (arXiv:2501.16946). https://arxiv.org/abs/2501.16946
Labaschin, B., Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2025). Extending “GPTs Are GPTs” to Firms. AEA Papers and Proceedings, 115, 51–55. https://doi.org/10.1257/pandp.20251045
Manning, S. (2025, March). Addressing the U.S. Labor Market Impacts of Advanced AI. https://cdn.governance.ai/RFI_Labor_Impacts_March-2025_Sam_Manning.pdf
Merchant, B. (2025, May 31). The “AI Jobs Apocalypse” Is for the Bosses. https://www.bloodinthemachine.com/p/the-ai-jobs-apocalypse-is-for-the
OpenAI. (2018). OpenAI Charter. https://openai.com/charter
Rivlin, G. (2025, June 4). AI Will Devastate the Future of Work — But Only If We Let It. TIME. https://time.com/7290751/ai-future-of-work-essay
Roose, K. (2025, May 30). For Some Recent Graduates, the A.I. Job Apocalypse May Already Be Here. The New York Times. https://www.nytimes.com/2025/05/30/technology/ai-jobs-college-graduates.html
Sun, J. (2025, April 26). Deconstructing “The Aesthetic Genealogy of the Beige Tech Microsite” (Macrodoses #7). Reboot. https://joinreboot.org/p/macrodoses-7
Vincent, N., Prewitt, M., & Li, H. (2025). Collective Bargaining in the Information Economy Can Address AI-Driven Power Concentration. https://nickmvincent.com/static/cbi_paper.pdf
Whitfill, P., & Wu, C. (2025, June 1). Estimating the Substitutability Between Compute and Cognitive Labor in AI Research. Effective Altruism Forum. https://forum.effectivealtruism.org/posts/xoX936hEvpxToeuLw
Change log
June 5, 2025: published. Made minor tweaks to bibliography.
While putting this together, I was repeatedly reminded of Jasmine Sun’s post on “The Aesthetic Genealogy of the Beige Tech Microsite”, which touches on all of the above, and felt a bit called out, having recently reworked my personal site to use the Crimson Pro font…
Note this is simpler than the conceptual frameworks in other, more elaborate work:
Simple Macroeconomics of AI - “The production of a unique final good takes place by combining a set of tasks” defined via a function that accounts for elasticity of substitution, dependence between tasks, etc.
Whitfill and Wu focus on a “theoretical model of researching better algorithms”.
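For concreteness, the kind of task-based production function the Acemoglu quote alludes to is typically a CES aggregator over a continuum of tasks (notation mine, a standard textbook form rather than a formula copied from the paper):

```latex
Y = \left( \int_0^1 y(i)^{\frac{\sigma - 1}{\sigma}} \, di \right)^{\frac{\sigma}{\sigma - 1}},
\qquad
y(i) =
\begin{cases}
A_L \, \ell(i) & \text{if task } i \text{ is performed by labour} \\
A_K \, k(i) & \text{if task } i \text{ is automated}
\end{cases}
```

Here $\sigma$ is the elasticity of substitution between tasks, and automation shifts the boundary between the two cases; the “information sequences” model above deliberately abstracts all of this away.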