Dear politicians, we need you now!
The biggest economic transformation since the industrial revolution is about to hit our economies. We are not prepared.
A major problem: the theory- and policy-oriented crowd lacks knowledge about the current state of AI, while the technically minded rarely engage with economic and political theory.
I often envision a scenario where an alien lands on Earth in the near future. The alien approaches a human and asks, “What is humanity’s greatest challenge?” The human responds, “We’re running out of jobs!” Perplexed, the alien replies, “How could that possibly be a problem?”
In my view, the coming months — or, if we’re lucky, years — will inevitably produce the problem the human describes in this exchange: AI will replace much of human labour. I am not sure whether these jobs will eventually be regained, but I am absolutely convinced that we will face a massive and extremely fast increase in unemployment rates within the next 6 to 24 months.
The reason I am writing this post is that the public, as well as most people in charge, seem largely unaware of this imminent upheaval of our established economic systems. Their attention — and action — is, however, crucial to shaping the trajectory and long-term outcomes of this coming crisis. If the next few months are used for preparation, our economies may be able to absorb this massive shock to their labour markets, distribute the wealth generated by AI, and provide perspectives for those who will no longer have a source of income. If not, we will very quickly face social upheaval comparable only to that of the industrial revolution.
This post puts forward three major arguments: i) the AI transformation of the economy will be much more disruptive than previous economic transformations; ii) this disruption can lead to an economy providing abundance or poverty for the vast majority of humans; and iii) which of these realities materializes is entirely dependent on political decisions made within the next twelve months. Politicians need to wake up to the severe impact AI will have on our labour markets and prepare for it. We need a normative public debate about desirable economic models of the future where individuals’ survival is not dependent on their ability to generate income through work.
This Time is Different
The most common argument raised against the idea that AI will result in massive job losses is that past economic transformations increased productivity and produced more and better jobs for everyone. I think this argument is usually the result of a reflex rather than a rational assessment of the technology and its impact on labour markets in the foreseeable future. For this reason, I will briefly describe the technology and how I believe it will affect many industries in the coming months.
General Purpose Technology
I presume that the reader is familiar with ChatGPT. GPT stands for ‘Generative Pre-trained Transformer’, a fairly technical description of the architecture underlying this technology. However, in economics, GPT also stands for ‘General Purpose Technology’, and this meaning of the acronym applies just as well. Large Language Models like GPT-4o have crossed the boundary from task-specific machine learning models — designed to classify text, diagnose MRI images, or predict stock markets — to a technology able to tackle a large variety of tasks.
To someone familiar with the fundamentals of this technology, this ability is rather puzzling. If you had talked to me following the release of GPT-3, you might have heard me explain that these models merely predict which words are statistically likely to follow a given text sequence and are therefore unable to do much beyond that. They just look smart because humans are prone to reading more into the responses of these models than is actually there.
The surprising fact that changed my mind is that models trained in this fashion (plus some additional fine-tuning) are on par with human — and increasingly expert — performance on tasks ranging from expert knowledge across many fields [U,V,W] to math and logic problems [X,Y,Z], long considered safe from solution by machine learning. In fact, one of the major challenges in current AI development is designing new benchmarks that can properly assess where these models do not yet outperform humans [ß].
I maintain that this is not an indicator of machine super-intelligence but rather an indicator that human cognition is not as complex and special as we’d often like to think. But this is entirely irrelevant for my greater point: their versatility enables such models to fulfill a variety of tasks using the same technology. This feature is the reason this technology can reasonably be described as ‘general purpose’ (and potentially more so than any other technology preceding it).
LLM Agents will Replace, not Improve Human Productivity
A second — and entirely novel — feature of this technology is that it is not dependent on human input. Any reader familiar with ‘agentic frameworks’1 is aware that generative models can be used to supervise the output of other generative models: testing code, checking answers, or translating output into tasks for other agents are all tasks readily performed by LLMs. In addition, these models can transparently contract with each other using blockchain technology [TYOUP]. This means that the often-cited argument that these models will merely increase the productivity of human workers does not hold: LLMs are not dependent on human supervision (or at least won’t be in the very near future). These models will therefore not only eliminate specific tasks but entire industries.
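To make the self-supervision point concrete, here is a minimal, purely illustrative sketch of the generate-then-verify loop at the heart of such frameworks. The `call_llm` function is a hypothetical stand-in for a real model API (stubbed here so the control flow runs on its own); the point is that the reviewer role, normally filled by a human, is itself a model.

```python
# Minimal sketch of an agentic "worker/reviewer" loop.
# call_llm is a stub standing in for a real LLM API call; the canned
# responses simulate a buggy first draft and an accepted revision.

def call_llm(role: str, prompt: str) -> str:
    """Stand-in for an LLM API call; returns canned responses."""
    if role == "worker":
        # Pretend the first draft fails review and the revision passes.
        return "draft v2" if "feedback" in prompt else "draft v1"
    if role == "reviewer":
        return "PASS" if "v2" in prompt else "FAIL: fix the bug"
    raise ValueError(f"unknown role: {role}")

def agent_loop(task: str, max_rounds: int = 3) -> str:
    """One model produces output, a second model checks it; iterate."""
    prompt = task
    draft = ""
    for _ in range(max_rounds):
        draft = call_llm("worker", prompt)
        verdict = call_llm("reviewer", draft)
        if verdict.startswith("PASS"):
            return draft  # reviewer accepted — no human in the loop
        # Feed the reviewer's criticism back to the worker and retry.
        prompt = f"{task}\nfeedback: {verdict}"
    return draft  # give up after max_rounds

print(agent_loop("write a sorting function"))  # → draft v2
```

The design choice worth noting is that nothing in `agent_loop` references a human: acceptance criteria, error feedback, and retries are all delegated to models.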
While matching human performance on some tasks might require another generation or two of models, the major obstacle to adoption is no longer the development of the technology per se, but its implementation in any given industry. Major AI companies are putting increasing focus on developing infrastructure that simplifies the adoption of AI agents in industry2.
Once a single company has figured out how to replace its workers with LLMs, a given sector will be transformed very rapidly: LLMs’ ability to produce the daily output of a single human in a matter of minutes will reduce the value of human labour in affected sectors to the cost of a few API calls. This means the company will be able to provide its services at a fraction of the price other companies can offer3.
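A back-of-the-envelope calculation illustrates the scale of that cost gap. All figures below are illustrative assumptions of my own, not measured data; the conclusion is robust to changing them by an order of magnitude in either direction.

```python
# Illustrative comparison: cost of one day's output, human vs. LLM.
# Every number here is an assumption chosen for illustration only.

HUMAN_DAILY_WAGE = 240.0    # assumed: 30/hour * 8 hours
TASKS_PER_DAY = 20          # assumed human daily throughput
TOKENS_PER_TASK = 4_000     # assumed model output per task
PRICE_PER_1K_TOKENS = 0.01  # assumed API price per 1,000 tokens

llm_daily_cost = TASKS_PER_DAY * TOKENS_PER_TASK / 1_000 * PRICE_PER_1K_TOKENS
print(f"human: {HUMAN_DAILY_WAGE:.2f} vs. LLM: {llm_daily_cost:.2f}")
# → human: 240.00 vs. LLM: 0.80
```

Under these assumptions the machine does the same day's work for well under one percent of the wage bill, which is the margin driving the dynamic described above.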
The immense scalability of these models also means that LLM pipelines can be rolled out on a large scale almost instantly. The first companies to develop working agentic frameworks will quickly dominate entire industries: they simply need to activate a few more GPUs on a remote server instead of hiring and training more workers to meet increased demand. The oligopolization resulting from this scalability will destroy many competitive markets. More importantly, the value of human labour will drop almost instantly across entire industries. Companies’ inertia will not save their employees either: the immense competitive advantage gained from virtually free labour means that late adopters are not going to survive this transformation.
Who Will be Affected?
Many reports by major economic institutions have addressed the question of which industries will be affected most by the upcoming wave of AI. Their methodologies vary, but most follow a framework that defines a set of tasks for each occupation, assesses how likely it is that a given task can be executed by an AI, and then reports which occupations are most exposed. A few simply ask an AI to make the assessment.
I fear this approach does not sufficiently capture how AI will develop in the next few years, particularly with regard to self-supervision without human intervention. None of the reports I read considered agentic frameworks. Even in occupations that will merely be ‘augmented’ by AI, most people might lose their jobs, given that a single person might now do the job of twenty. We simply don’t need twenty times as many lawyers, consultants, or policy experts.
Even if we take these reports at face value, the future looks bleak: the IMF estimates that “in advanced economies, about 60 percent of jobs are exposed to AI, due to prevalence of cognitive-task-oriented jobs”, though they estimate that half of those “could benefit from enhanced productivity through AI integration” (p. 2), leaving us with around thirty percent negatively affected. Even if these jobs are gained back elsewhere over time, the disruptions are going to be enormous. And we are not even considering automation of manual labour through developments in robotics yet.
But the biggest issue I have with these estimates is that they assume that productivity gains from AI will result in growing demand for human labour and higher wages. “If AI strongly complements human labor in certain occupations and the productivity gains are sufficiently large, higher growth and labor demand could more than compensate for the partial replacement of labor tasks by AI, and incomes could increase along most of the income distribution” (IMF, p.2). But, as established earlier, AI no longer complements human labour once it self-supervises. Labour demand will be demand for machine labour, and much of the human workforce becomes superfluous. Those with the capital to hire automated labour will no longer require human workers.
At a Crossroads
When I share my dystopian vision, I am sometimes met with disbelief: we will have AI do everything, and we will work very little! Essentially, this echoes the alien’s sentiment in the introduction: there will be a world of plenty. I do believe that there are paths leading in this direction, but they are not the default outcome. The main issue is that, in order to use such services, consumers need income to spend. In general, most economies depend on citizens’ consumption. If you don’t earn anything, how can you get the robots to do your bidding?
This question essentially asks who will hold control over AI ‘labour’ in the future. While others worry that AI will take control and turn against humanity, my worry is less sci-fi-inspired: if those with capital are no longer dependent on human labour and workers can simply be produced by investing more capital, those without capital no longer have any source of income.
We therefore need to debate how to organize a future with less or no work in which all members of society can still live with dignity. How do we face large-scale unemployment? What if the job losses are not temporary? Do we need UBI? Can we ensure that everyone gets a share of the profits from AI revenues? How can the state provide public services when the major source of taxation (labour income) is suddenly unavailable?
The extent of ignorance of these questions among policymakers and in public discourse baffles me. The Economist’s website does not feature a single story about AI today. The incoming German coalition partners have not devoted a single sentence of their programmes to the labour-market risks posed by AI. If we don’t manage to discuss at least some of these issues in the coming months, I fear we will be unable to react to this historic transformation in a constructive manner. Fear and unemployment will likely produce more polarized, less problem-oriented politics, incapable of producing solutions. Instead of abundance for the many, we will live in a world where a few capital owners can impose their will on everyone else, while the rest compete for the few remaining jobs.
The opposite vision is, however, just as realistic. If states manage to safeguard individuals from the economic consequences of imminent job losses, if progressive parties can provide a vision of where things are going, and if the fruits of machine labour are distributed, running out of jobs might indeed turn out to be a blessing.
The Embarrassing Inaction of Policymakers
The German coalition agreement barely mentions AI’s impact on the labour market. Especially from the Social Democrats, whose very ideology is a product of the last major societal transformation brought about by technology, this silence is surprisingly ignorant. There is much talk of investment, but little about regulation or about preventing and coping with massive layoffs.
Footnotes
To all other readers, I recommend familiarising themselves with the concept.↩︎
Examples include Anthropic’s model context protocol, which standardises the interaction of LLMs with external data sources, recently also adopted by OpenAI.↩︎
I acknowledge that this scenario requires that many rather similar jobs exist within an industry. However, I also believe that many sectors contain a large number of such similar roles, which can reasonably be standardized sufficiently to be replaced by LLMs.↩︎