At first Lola thought the job ads were fake. Promising $90 per hour for remote work in fields from consultancy to philosophy, the roles circulating in her former classmates’ WhatsApp groups seemed too good to be true.
When the business graduate started at Silicon Valley start-up Mercor, the paychecks were real. But the job had one stark difference from her usual management consultancy projects. Rather than corporations, her clients were the AI models of companies such as OpenAI and Anthropic. Her role was to train the AI to do the consulting work in which she is qualified.
“It’s training the LLM [large language models] to do the job,” says Lola, who asked to use a pseudonym due to terms in her contract. In a tough labour market, she finds the work interesting and preferable to none. But the thought of future job displacement worries her. “At the start [my graduating class and I] didn’t really think about it, but working with this kind of model we now have the sense that it could be scary . . . in terms of [future] unemployment.”
Founded by three school friends, Mercor recruits and runs teams of people to train AI to in effect do their jobs. It is part of a growing subsector of AI training that includes Scale AI, in which Meta holds a 49 per cent stake, and Turing, which says it will “accelerate superintelligence” by using AI in simulated environments. Surge AI, founded by Edwin Chen, was in July reported to be in funding talks seeking a valuation of $25bn.
Until recently, the freelance labour force training AI models was predominantly low-paid, low-status workers in the global south, labelling basic data or identifying toxic or traumatising material. Now, it is increasingly made up of highly qualified contractors paid by the hour. They guide and assess the AI’s output, training models in how to do often sophisticated and economically valuable knowledge work. Current roles on Mercor’s platform span a dizzying array of professions, from journalism to real estate, but also include beauty therapy and social work.
Less than three years after it was founded, Mercor raised finance at a valuation of $10bn. It now pays out about $2mn a day to some 30,000 experts. Chief executive Brendan Foody — who at 22 became one of San Francisco’s youngest ever billionaires, according to Forbes — says Mercor is building a new “category of work”: training AI agents. Over the coming years, he says, a growing number of human workers will need to fine-tune and advance AI, training it to do much of the work they currently perform.
Foody paints a rosy future, distant from the often clunky, inconsistent and error-prone AI tools familiar to many workers trying to get to grips with the new technology today. He says AI will take over mundane activities and function as an assistant, allowing people to move to higher-level tasks with the potential to do “incredible things that otherwise wouldn’t have been possible”. In the meantime, the company says it is supporting workers by creating these new jobs and enabling them to gain exposure to advances in AI.
Yet to some observers, companies such as Mercor look like a short-sighted gamble, in which skilled workers accept lucrative temporary gigs training the technology that could replace them.
Already, concerns about the effects that AI will have on jobs are rising. Last week London mayor Sadiq Khan warned AI risks creating “mass unemployment” in the capital unless measures are taken to protect white-collar jobs in areas such as finance, professional services and creative industries.
Anton Korinek, director of the University of Virginia’s Economics of Transformative AI initiative, says there is still much uncertainty about whether AI will become good enough to significantly replace humans. But current trajectories suggest it will be able to perform “a lot of the functions” of knowledge work in a way that “might become very disruptive for the labour market”, including taking over some jobs. Training AI still happens within a “human master, AI apprentice model”, he says. “But at some level, the teacher always gets replaced by her student [and] we don’t know yet how powerful AI will become.”
At Mercor, project teams range from a few contractors to hundreds, who pose questions to AI, critique its responses and walk through how to approach problems.
One, Jay Katoch, spends between 40 and 80 hours a week on Mercor work. After decades in consulting, he describes working with AI as fulfilling. “I give [the AI] a problem, [such as] how a company like Boeing could handle the challenge post-crash — damage control, how would it impact the business, how would they handle stakeholders . . . and look at how the AI responds,” he says. “You’re challenging [the models] and correcting them.”
Mercor says its average hourly wage is more than $95, but workers in the most sought-after roles — radiologists, for example — can earn up to $375 an hour. Projects may last a number of weeks, with no guarantee of ongoing work.
Zoe Cullen, a labour economist at Harvard Business School, says the short-term, gig nature of the work means workers are not protected if they help build models that ultimately threaten their jobs.
She suggests AI trainers could retain a stake in revenue produced by the models that use their skills and knowledge. “Workers could be part of the productivity enhancements and reap the rewards — you could have a collective agreement of data trainers,” she says. “If what you’re teaching the model to do is your core expertise, by definition you’re reducing your labour power.”
Foody acknowledges there is “definitely [going to] be some job displacement”. “There’s also going to be all sorts of new job categories, so we want to help people navigate that displacement to help humanity and society accomplish far more.”
He says that over time AI training will become a bigger part of more jobs. He sees a future in which humans will do things once, creating a rubric for AI to do the same task many times over. “We’ll move towards this world where everyone is managing dozens or hundreds of agents and . . . their day-to-day is interacting with and training these agents to do economically valuable work.”
Many of Mercor’s contractors, speaking anonymously, shared some of Foody’s confidence in the benefits of AI, at least to them personally.
A study into the AI training industry by Oxford Economics, commissioned by Scale AI, found most workers in the field were highly educated — 41 per cent held a master’s or doctoral degree — and 94 per cent were also doing other kinds of work or studying.
It found that the US data annotation industry contributed $5.7bn to US GDP in 2024, and that this would rise to $19.2bn by 2030.
“Me and the people who work on these products have the experience not to lose their job — people who don’t have that opportunity, I believe they will,” says one 18-year-old contractor, praising the high salary, flexibility and stimulating work environment.
Another contractor says he enjoys being in a “front-row position” in the development of AI. AI will “assist not replace”, and “if I’m not the one doing it and earning some money from it, there are a lot of other people who are up for doing the same thing.”
“Societally there’s concerns but not within the cohort doing it,” says Amjad Hamza, one of Mercor’s more than 350 permanent employees, about job displacement. “I just take a very long view — the pattern of history is people work less, but they can do more with that time.”
Sundeep Peechu, managing partner at Felicis, which led Mercor’s $10bn funding round, says the company has become “really good at identifying the experts that are required in order to go train these models”.
He regards this as positive for the future of work. Mercor’s contractors, he explains, “coax the model and teach it” in areas with labour scarcity, such as nursing or law.
The University of Virginia’s Korinek is more pessimistic. If the most ambitious projections of AI’s ability come to pass, he says, Mercor’s trainers will not just be doing themselves out of a job.
“If the technology is much more transformative [than conservative estimates] the big issue is not necessarily whether the specific workers who train those models are fairly compensated,” he says. “The big issue is what do we do with everybody, essentially.”