Once the domain of speculative science-fiction writers, the new realities of automation and artificial intelligence are rapidly insinuating their way into how we think and act. They're changing how the organizations we work in operate, and blowing up old assumptions about how a career, and indeed a life, might be constructed. It can all feel a little overwhelming. Am I a doomed handloom weaver at the outset of the Industrial Revolution? Or will my skills find a new niche in the coming world?
To try to get a better idea, VICE phoned up Matt Beane, professor of technology management at the University of California, Santa Barbara, and author of The Skill Code: How to Save Human Ability in an Age of Intelligent Machines.
VICE: First of all, let's get a handle on the terms we're using here. What's the difference between automation and AI?
Matt Beane: Automation is the process of applying technology to increase throughput or quality in an organization. It's agnostic to the tool used: fewer calories spent to get similar or better outcomes. And so that's different in kind from AI, which is a tool that can be used to automate things, but can be used for other things too.
The cocktail party question I most often get is: "What about AI and jobs?" And I think this is profoundly the wrong question. There will be jobs lost because of AI, just as there have been with many other previous technologies. But the number of jobs lost is going to be a rounding error compared to the number of jobs changed. So, you keep your job, but you have to perform between 10 and 40, or even 70, percent of your tasks differently. Or you have new tasks and old tasks drop away.
We still have dentists, but they mostly don't pull teeth anymore; they mostly coat teeth with substances to prevent that, right? But they're still there, and in fact, we need more of them. That job change is a far more existential threat to our prosperity than job loss or gain, actually. It just gets limited attention.
So, in your view, the issue is people needing to adapt to new tasks in their jobs rather than jobs disappearing?
Yeah. Or the task itself has the same name, but you must perform it differently to get the job done now that you have this new tool. Cutting wood to size was a task for a carpenter before the circular saw, and after, you perform it differently with a circular saw than with a hand saw. That's in most cases the kind of change that we'll see.
But that is an existential threat, because in order to do your job differently with these new tools, you have to learn how to do it differently. You have to build new skills.
By orders of magnitude, the research is very clear: we learn new skills best by doing, by doing alongside an expert (somebody who knows more) and working on a real problem. That's been true for millennia. And the trouble now is that these tools allow an expert to solve more of a problem independently. A coder or a lawyer can make astounding progress in a small amount of time, alone, with one of these models. Whereas before they would've had to rely on, say, a paralegal to help them along, just as a surgeon relies on a medical resident.
When you make novices optional, you break that natural learning chain. The senior person races ahead; they're delighted. The organization that hires them is delighted. This is what return on investment looks like, in practice. There are no coordination costs. Novices, by definition, introduce errors and slowdowns into work. And so, this technology will unleash great productivity boosts for us all... but at the expense of the involvement of the next generation of talent.
What are the costs when we lose that expert-novice relationship, for an organization and for society in general?
That novice will not build skill as well or as quickly as they used to. And I have good data on the longer-term effects of this in surgery. A typical robotic prostatectomy in the United States takes around four hours in a very well-run surgical institution or teaching hospital. But I have sat through nine-hour procedures by surgeons who have left their institution, are now legally empowered to use this [AI-powered, robotic tool], have landed in a hospital that has this system and expects them to use it, and they don't know, really, how to do it confidently.
We don't have increased injury rates, but we also don't study how long the patients are under anesthesia in these procedures, and that's stroke risk and so on. I think in the end we may see these kinds of costs showing up. But certainly, the receiving organization has to pay for it in some way, typically by taking a senior surgeon offline and matching them up with this new surgeon and giving them remedial training.
For the last two years, I've been studying the use and deployment of generative AI in organizations and, in particular, in software development, because software development has gone first with these tools. About a year ago, senior software engineers started publishing blog posts and essays speculating about the negative effects for junior developers. There's a wonderful one from June of last year called The Death of the Junior Developer by Steve Yegge. Software is really ground zero for this moment, at incredible scale and velocity.
I'm interested in your idea that it's not just the novice or the organization that's losing out here, but also the expert, because they're now not getting any input from a younger person who's more clued up about changing developments, whatever they might be.
Exactly. I've done research on this phenomenon and called it "inverted apprenticeships": senior people learning about new tech from junior people, because senior people don't have the time to devote to it themselves. So they set up these arrangements where they reach out to a junior person and learn a bit, and the junior person gets access to better opportunities as they collaboratively figure out how to use the tech. So, experts will lose a learning opportunity as well.
There is another, more insidious and difficult-to-predict aspect, which has to do with human relationships: with trust, care, empathy. One of the things I show in my research is that you cannot build skill [well] without a bond of trust and respect with another human being. Autodidacts exist, but they're the exception that proves the rule. If you have someone who can push you further than you could go on your own and guide you, yes, you build your skill, but you're also doing that through this mutual bond of trust and respect that's part of what makes work meaningful.
We want to earn the trust and respect of someone we look up to. And every senior manager I talked to is eternally grateful for the chances and guidance they got as a junior person. And it is part of what they take to be the value of their work: to turn around to the next generation and help someone surpass their own greatest expectations for themselves and do incredible work just by being that good enabler. It's part of what people reflect on at the end of their life. But the more you can use these tools to be distanced and self-sufficient, the more that human relationship becomes optional. It's hard to imagine that having positive knock-on effects.
Are these changes going to apply equally to white-collar jobs and blue-collar jobs?
No, decisively not. There's a stark divide in the economy with respect to both AI and automation right now. AI is racing ahead. So, when it comes to automating and transforming digital work, the transmission, transformation, and processing of information (knowledge work, so to speak), the effects are dramatic compared to those associated with automating anything physical. We are making progress in robotics, but at best it's perhaps 6 percent growth per year in terms of deployed capability. The latest estimate I saw is 280 percent growth per year in AI deployed capability.
Most blue-collar jobs have an aspect to them that involves physical interaction, whether that's customer service in a store or working in a factory. There are extraordinary AI-enabled robots being deployed in warehouse facilities, and per facility they need fewer people of some types, but they have many more facilities, so their net employment increases. Those jobs that still involve moving atoms in some way are far less exposed to these technologies.
"Those jobs that still involve moving atoms in some way are far less exposed to these technologies."
So, would you say that non-digital jobs which require some form of manual dexterity are safe?
It depends on what you mean by "safe." They're going to be less subject to this job change that I was talking about, because AI is relevant to a smaller percentage of the bundle of things you have to do, so you'll have to adapt less. The amount of physical work that the world demands seems likely only to increase, and you can't automate it away with AI, and only weakly with robotics. We also have a demographic inversion going on, with many fewer young people over time, certainly in most First World countries. So the demand for physical work is going to spike while the population of available workers goes down, and that'll create more and more pressure and need for advanced robotic systems. But at least for the next decade, maybe even 20 years, those systems will become increasingly prevalent yet still remain marginal to the global economy.
So, there'll be an upward pressure then on wages for those physical jobs?
Yeah, the more specialized, the better. I live in Santa Barbara, California. This is a strange place economically, but a plumber here does better than quite a few technical workers. And plumbing is a very sophisticated, physical job, but even large industrial-scale construction is the same: pouring concrete into form-fitted beams and so on.
Last week, I gave a keynote at a multi-billion-dollar construction firm's annual leadership gathering, and they asked me to come talk to them about robotics and what they can anticipate. I basically just said: you see this trend line for robotic capability, it's basically flat compared to AI. Look at these two over just the last few years, and if you project them out with varying levels of uncertainty, that robotics one is not going to bend upwards; it's just not going to happen. I mean, they must continue to invest in those ways, but if anything, that's going to expand the need for construction and for hiring people to do those physical tasks.
Won't that supply imbalance be counteracted to some extent by people shifting out of certain jobs and into these roles that are needed more and being paid better?
It's possible. Although there's a lovely paradox that comes out of economics, which is the Jevons paradox. The basic idea there is that the more inexpensive you make it to produce something, the more of it is demanded. So, for instance, in the case of computer code, say it's a thousand times cheaper to create computer code right now than it was four years ago. Is it high-quality code? Absolutely not. It's 80 percent or 85 percent. It's very, very good, but it needs some quality testing and fitting.
One view is that all this will mean many fewer software developers. The reality of the consuming world is that the minute that software becomes a thousand times cheaper, a giant new demand pocket is unleashed, and people need code for many, many things that it simply wasn't cost-effective for before. And so, demand just keeps pace with the expanded capacity. And if history's any guide, we'll need many more software developers than we did before, and the volume of software produced will just boggle the imagination.
So, computer programming remains a sensible career choice?
Well, yes and no. The character of the demand is liable to change rather extensively. I don't need a .NET developer, because I can ask AI to generate that code for me. What I do need is a .NET developer who also understands café operations. Or emotional support animals. Or physical network infrastructure. In other words, creating the code is no longer the relevant thing; what matters is using expertise to make it suited to purpose and allowing it to reach across the divide into the real world.
"As with the internet, I think the unintended effects of all of this are going to be the biggest ones, not the intended things."
What do you see as the most powerful forces that are going to shape how the AI revolution plays out?
Bottom line: this is a genie. It's been let out of the bottle. People are using it for many things, in a wide variety of contexts, at high velocity, and it's getting better rapidly. I don't think we have much control over what's happening or liable to happen. Plus, we have the geopolitical thing between China and the States. And the volume of spend on data centers and infrastructure is boosting the US economy. It's extraordinary. So, I don't think much is going to bend that factor.
It's sort of like the arrival of the internet. There will be many voices in the choir, complainers and supporters and so on, and it'll not play out in a way that anyone quite predicts. And as with the internet, I think the unintended effects of all of this are going to be the biggest ones, not the intended things, not the things that people can point to now.
What role do you think government regulation has?
It's a race for AGI [Artificial General Intelligence] at this point between China and the States, and I see the reins coming off on both sides of the Pacific, basically: decreased regulation, increased governmental support, but in very different ways. And it is therefore of some consequence what happens in Europe and Southeast Asia and South America and so on. I think there are opportunities to make a counter-statement in terms of protecting aspects of culture and human interaction and society and work that these two competing parties may just let slide. But I think they're just going to drive the agenda, and regulation be damned. It's basically what both seem to be saying.
Lastly, what's your biggest piece of advice to young people thinking about what they're going to do with their working lives?
The only class that I teach now to master's students (which I would teach to grade schoolers, too) is called Learning to Work with Intelligent Machines. And yes, we learn about AI, and I make them practically apply it and build skills, because I think that's important. But it's much more important to understand how you learn personally, under what conditions you learn best. Because learning is the thing we are definitely all going to need to do over the next five to 55 years, better than ever before. To know not just what I personally need to do, but what kind of settings, what kind of situations, help me learn best. It's a skill to build skills. So, I focus mostly with my students on getting ridiculously good at building skill in whatever domain you choose. And I feel like that's the only set of skills I can give them that will be eternally relevant.
Follow Joe Banks on X @joepbanks