Welcome back. This week I interview Erik Brynjolfsson, a professor, author and senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence.
Lately, I’ve been thinking about how path dependency shapes the way technology develops. The internet and nuclear power, for example, trace their origins to defence projects. Silicon Valley itself was born from the electronics boom of the cold war era. Even today, network effects continue to guide app and hardware development around dominant iOS and Android ecosystems.
This line of thought brought me back to an FT Economists Exchange I did with Brynjolfsson last year, where he discussed the “Turing Trap” — his theory of why AI development became fixated on imitating humans, and why that may be a trap.
The concept stems from the “Turing Test”, proposed by mathematician and computer scientist Alan Turing in 1950, which judges a machine intelligent if its conversation is indistinguishable from a human’s.
Brynjolfsson argues that it inspired generations of researchers to replicate our thinking in machines. But as generative AI, underpinned by large language models, begins to achieve that, he says this might have been the wrong objective all along.
He suggests that we’d be better off if AI — and technology in general — focused on augmenting human specialisms, and doing things we can’t. (This echoes David Ricardo’s theory of comparative advantage in international trade.) Instead, imitation risks creating a trap.
Some gen AI applications are the result of our tendency to reward systems for mimicking and replacing humans at existing tasks, rather than unlocking new, complementary capabilities.
This tilts the technology towards labour substitution, raising productivity but concentrating gains among those who control the technology and capital. The “trap” lies in this concentration of economic power, which in turn creates political power and can leave everyone else with no way to change the system.
It can also set too low a ceiling on growth. Imagine if Henry Ford had set out simply to match humans, building a vehicle that could walk or run only as fast as a person.
LLMs are already complementing humans in areas such as search and analysis. But recent advances, such as applications that create adverts and films from simple text prompts, have raised concerns among some professionals that the technology will be used to replace them entirely.
It is important to ponder whether the emulation of human comparative advantages, such as in creative fields, is a worthwhile use and goal of technology, not least because we’ve created structures that reinforce the drive towards mimicry.
Imitation is attractive not just culturally, but because our institutions encourage automation. The fact that a human can do a task provides an “existence proof” that it is achievable, making imitation a more natural benchmark, a safer thesis topic and a clearer grant proposal. Coming up with an entirely new capability requires far more creativity.
In business, managers favour cutting headcount because it is easy to measure and it shifts rents from workers to capital owners. That gives capital owners excessive incentives to automate and weak ones to complement: the wider benefits of innovation spill over to others, and augmenting workers strengthens their bargaining power.
Policymakers amplify the tilt: in most countries taxes are lower on capital than labour. While both automation and augmentation can be valuable and profitable, the net result is a systematic pull towards human mimicry — even when superhuman, complementary systems might create more total economic value and less inequality.
This raises a fascinating counterfactual. Where might technology be today if the Turing Test wasn’t so alluring? Here are some suggestions I came across: neural prosthetics and mind-machine symbiosis might be decades ahead; instead of chatbots, we might have “thought tools” such as visual ideation spaces; sensory interfaces rather than conversations; and AI that can reason, rather than probabilistic LLMs.
Of course, the technology may have evolved towards more nefarious ends, or even beyond human intelligence, as fans of the 2016 film Arrival might imagine.
But back to reality, a more useful exercise now is to visualise how generative AI and humans can evolve together.
This isn’t just about limiting job losses (some may be necessary), but also ensuring the technology generates greater economic and social value (which creates new opportunities for humans). Brynjolfsson has been leading work in this area:
If our north star had been capability expansion with humans rather than imitating humans, we would have steered earlier towards systems that improve human decision-making, accelerate learning and create new design spaces. We would have been benchmarking AI with people, not instead of them. This is something my company, Workhelix, focuses on.
In my Stanford classroom, we created an AI-powered “Erik avatar” that interactively discussed each student’s homework with them individually. That helped them understand the key concepts more deeply and required students to defend their submission in real time, removing the option of mindlessly submitting an AI-generated assignment.
If we scale up this mindset to the whole economy, we would be aiming for quality, innovation and welfare improvements, not just cost substitution.
This is already happening to an extent. In software, code co-pilots expand exploration and testing. In healthcare, structured note generation frees clinicians’ time for empathy and judgment.
The common thread is task reallocation: machines handle high-frequency activities where there is plenty of training data, and humans focus on the “long tail of exceptions”, as well as defining the goals, says Brynjolfsson. But optimising this on a macro scale requires reorganising work and our economies.
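To make that reallocation concrete, here is a minimal sketch in Python of how a human-machine router might work. It assumes a hypothetical classify function and an arbitrary 0.9 confidence cut-off, neither of which comes from Brynjolfsson’s work: the model keeps high-confidence, high-frequency cases, and escalates long-tail exceptions to a person.

```python
# Minimal, hypothetical sketch of human-AI task reallocation:
# the model handles high-confidence, high-frequency cases, and
# low-confidence "long tail" exceptions are escalated to a person.
# `classify` and the 0.9 threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Decision:
    label: str
    confidence: float  # model's estimate, between 0 and 1
    handled_by: str    # "machine" or "human"


def classify(task: str) -> tuple[str, float]:
    """Stand-in for a real model: returns (label, confidence)."""
    # Toy heuristic so the sketch runs end to end.
    if "refund" in task.lower():
        return "refund_request", 0.97
    return "unknown", 0.40


def route(task: str, threshold: float = 0.9) -> Decision:
    label, confidence = classify(task)
    if confidence >= threshold:
        # Well-covered, high-frequency case: automate it.
        return Decision(label, confidence, handled_by="machine")
    # Long-tail exception: a human decides and defines the goal.
    return Decision(label, confidence, handled_by="human")


if __name__ == "__main__":
    for t in ["Please process my refund",
              "My invoice shows a currency I never used"]:
        print(t, "->", route(t))
```

In practice the threshold would be tuned against the relative costs of machine errors and human review time, and would shift as the model learns more of the tail.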
History’s big productivity jumps came from restructuring work around new general-purpose technologies. Ford reorganised factories to complement people with interchangeable parts, moving assembly lines and power tools, multiplying each worker’s output.
We need to do that too, by building systems — combinations of humans and machines — that do things no human and no machine could do alone.
This is possible, even for creative tasks. AI can automate expensive sub-tasks (such as search and rough drafts), while raising the returns to distinctively human ones (taste, storytelling, authenticity and relationships). But that requires diffusion: affordable tools, training and platforms so that more creators benefit, not just the companies that use AI to replace creative work outright.
So, even if the Turing Test set tech development down the path of imitation, there is still a way to create a more productive and socially optimal division of labour between humans and the machines we have now created. Brynjolfsson has three recommendations:
First, improve firm-level metrics for what good AI adoption looks like. We need to identify, and focus on, how well humans and machines together solve practical problems, such as improving patient outcomes, customer satisfaction and software quality.
Second, market rules need to promote competition and diffusion, including interoperability, data portability and procurement that rewards augmentation outcomes rather than headcount reduction. That means rebalancing tax and accounting incentives that currently favour capital deepening over human capital.
Third, we need public goods and guardrails. That starts with measuring the right things, such as quality-adjusted output, and developing secure data infrastructures that enable experimentation while protecting privacy and intellectual property. Targeted liability and audit regimes might focus on use cases with externalities, such as health and finance, without freezing experimentation elsewhere.
Ultimately, if we align incentives with augmentation, the technology frontier and the opportunity frontier can advance together.
I’d be keen to hear your reflections. Send them to freelunch@ft.com or on X @tejparikh90.
Food for thought
Just how rational are investors and equity analysts? This paper compares their stock market performance with that of a machine-learning algorithm to find out.
Free Lunch on Sunday is edited by Harvey Nriapia