{"id":48439,"date":"2026-04-29T00:41:41","date_gmt":"2026-04-29T00:41:41","guid":{"rendered":"https:\/\/www.europesays.com\/people\/48439\/"},"modified":"2026-04-29T00:41:41","modified_gmt":"2026-04-29T00:41:41","slug":"opinion-anthropics-chief-on-a-i-we-dont-know-if-the-models-are-conscious-2","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/people\/48439\/","title":{"rendered":"Opinion | Anthropic\u2019s Chief on A.I.: \u2018We Don\u2019t Know if the Models Are Conscious\u2019"},"content":{"rendered":"<p class=\"css-ac37hb evys1bk0\">Are the lords of artificial intelligence on the side of the human race? That\u2019s the core question I had for this week\u2019s guest. Dario Amodei is the chief executive of Anthropic, one of the fastest growing AI companies. He\u2019s something of a utopian when it comes to the potential benefits of the technology that he\u2019s unleashing on the world. But he also sees grave dangers ahead and inevitable disruption.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Below is an edited transcript of an episode of \u201cInteresting Times.\u201d We recommend listening to it in its original form for the full effect. 
You can do so using the player above or on the <a class=\"css-yywogo\" href=\"https:\/\/www.nytimes.com\/app\" title=\"\" rel=\"nofollow noopener\" target=\"_blank\">NYTimes app<\/a>, <a class=\"css-yywogo\" href=\"https:\/\/podcasts.apple.com\/us\/podcast\/interesting-times-with-ross-douthat\/id1438024613\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Apple<\/a>, <a class=\"css-yywogo\" href=\"https:\/\/open.spotify.com\/show\/6bmhSFLKtApYClEuSH8q42?si=85c0f85203c94835\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Spotify<\/a>, <a class=\"css-yywogo\" href=\"https:\/\/music.amazon.com\/podcasts\/b42495b5-3d35-424f-8dcb-2b402b49f9ea\/interesting-times-with-ross-douthat\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Amazon Music<\/a>, <a class=\"css-yywogo\" href=\"https:\/\/www.youtube.com\/@InterestingTimesNYT\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">YouTube<\/a>, <a class=\"css-yywogo\" href=\"https:\/\/www.iheart.com\/podcast\/326-interesting-times-with-ros-29972437\/#:~:text=Introducing%20&#039;Interesting%20Times&#039;&amp;text=%E2%80%9CInteresting%20Times%20With%20Ross%20Douthat,the%20future%20of%20democrac...\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">iHeartRadio<\/a> or wherever you get your podcasts.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Ross Douthat: Dario Amodei, welcome to \u201cInteresting Times.\u201d<\/p>\n<p class=\"css-ac37hb evys1bk0\">Dario Amodei: Thank you for having me, Ross.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: So you are, rather unusually, maybe for a tech C.E.O., an essayist. You have written two long, very interesting essays about the promise and the peril of artificial intelligence. 
And we\u2019re going to talk about the perils in this conversation, but I thought it would be good to start with the promise and with the optimistic vision \u2014 indeed, I would say the utopian vision \u2014 that you laid out a couple of years ago in an essay entitled, \u201c<a class=\"css-yywogo\" href=\"https:\/\/darioamodei.com\/essay\/machines-of-loving-grace\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Machines of Loving Grace<\/a>.\u201d We\u2019ll come back to that title at the end.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But, I think a lot of people encounter A.I. news through headlines predicting a blood bath for white-collar jobs, these kinds of things. Sometimes your own quotes have encouraged these things.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Sometimes my own quotes. Yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: And I think there\u2019s a commonplace sense of \u201cWhat is A.I. for?\u201d that people have.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So why don\u2019t you answer that question, to start out: If everything goes amazingly in the next five or 10 years, what\u2019s A.I. for?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, so for a little background, before I worked in A.I., before I worked in tech at all, I was a biologist. I first worked on computational neuroscience, and then I worked at Stanford Medical School on finding protein biomarkers for cancer, on trying to improve diagnostics and curing cancer.<\/p>\n<p class=\"css-ac37hb evys1bk0\">One of the observations that I most had when I worked in that field was the incredible complexity of it. Each protein has a level that is localized within each cell. It\u2019s not enough to measure the level within the body, or even the level within each cell. 
You have to measure the level in a particular part of the cell and the other proteins that it\u2019s interacting with or complexing with.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And I had this sense of: Man, this is too complicated for humans. We\u2019re making progress on all these problems of biology and medicine, but we\u2019re making progress relatively slowly.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So what drew me to the field of A.I. was this idea of: Could we make progress more quickly?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Look, we\u2019ve been trying to apply A.I. and machine learning techniques to biology for a long time. Typically they\u2019ve been for analyzing data. But as A.I. gets really powerful, I think we should actually think about it differently. We should think of A.I. as doing the job of the biologist, doing the whole thing from end to end. And part of that involves proposing experiments, coming up with new techniques.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I have this section where I say that a lot of the progress in biology has been driven by this relatively small number of insights that lets us measure or get at or intervene in the stuff that\u2019s really small. If you look at a lot of these techniques, they\u2019re invented very much as a matter of serendipity. Crispr, which is one of these gene-editing technologies, was invented because someone went to a meeting on the bacterial immune system and connected that to the work they were doing on gene therapy. And that connection could have been made 30 years ago.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And so the thought is: Could A.I. accelerate all of this? And could we really cure cancer? Could we really cure Alzheimer\u2019s disease? Could we really cure heart disease? And more subtly, some of the more psychological afflictions that people have \u2014 depression, bipolar \u2014 could we do something about these? 
To the extent that they\u2019re biologically based, which I think they are, at least in part.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So, I go through this argument here: Well, how fast could it go if we have these intelligences out there who could do just about anything?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: I want to pause you there, because one of the interesting things about your framing in that essay is that these intelligences don\u2019t have to be the kind of maximal godlike super intelligence that comes up in A.I. debates. You\u2019re basically saying if we can achieve a strong intelligence at the level of peak human performance \u2014 \u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Peak human performance, yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: And then multiply it to what? Your phrase is \u201ca country of geniuses.\u201d<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: A country \u2014 have 100 million of them. Maybe each trained a little different or trying a different problem. There\u2019s benefit in diversification and trying things a little differently, but yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: So you don\u2019t have to have the full Machine God. You just need to have 100 million geniuses.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: You don\u2019t have to have the full Machine God. And indeed, there are places where I cast doubt on whether the Machine God would be that much more effective at these things than the 100 million geniuses.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I have this concept called the diminishing returns to intelligence. Economists talk about the marginal productivity of land and labor; we\u2019ve never thought about the marginal productivity of intelligence. But if I look at some of these problems in biology, at some level you just have to interact with the world. At some level, you just have to try things. 
At some level, you just have to comply with the laws or change the laws on getting medicines through the regulatory system. So there\u2019s a finite rate at which these changes can happen.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Now there are some domains, like if you\u2019re playing chess or go, where the intelligence ceiling is extremely high. But I think the real world has a lot of limiters. Maybe you can go above the genius level, but sometimes I think all this discussion of, \u201cCould you use a moon of computation to make an A.I. god?\u201d is a little bit sensationalistic and beside the point, even as I think this will be the biggest thing that ever happened to humanity.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: So keeping it concrete, you have a world where there\u2019s an end to cancer as a serious threat to human life. An end to heart disease, an end to most of the illnesses that we experience that kill us. Possible life extension beyond that. So that\u2019s health. That\u2019s a pretty positive vision.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Talk about economics and wealth. What happens in the five-, 10-year A.I. takeoff to wealth?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah. So again, let\u2019s keep it on the positive side \u2014 we\u2019ll get to the negative side.<\/p>\n<p class=\"css-ac37hb evys1bk0\">We\u2019re already working with pharma companies. We\u2019re already working with financial industry companies. We\u2019re already working with folks who do manufacturing. We\u2019re of course, I think, especially known for coding and software engineering. So the raw productivity, the ability to make stuff and get stuff done \u2014 that is very powerful.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And we see our company\u2019s revenue going up 10X a year, and we suspect the wider industry looks something similar to that. 
If the technology keeps improving, it doesn\u2019t take that many more 10Xs until suddenly you\u2019re saying: Oh, if you\u2019re adding across the industry $1 trillion of revenue a year, and the U.S. G.D.P. is $20 or $30 trillion \u2014 I can\u2019t remember exactly \u2014 you must be increasing the G.D.P. growth by a few percent. So I can see a world where A.I. brings the developed world G.D.P. growth to something like 10, 15 percent. Five, 10, 15 \u2014 I mean there\u2019s no science of calculating these numbers. It\u2019s a totally unprecedented thing. But it could bring it to numbers that are outside the distribution of what we saw before.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Again, I think this will lead to a weird world. We have all these debates about, \u201cThe deficit is growing.\u201d If you have that much in G.D.P. growth, you\u2019re going to have that much in tax receipts, and you\u2019re going to balance the budget without meaning to.<\/p>\n<p class=\"css-ac37hb evys1bk0\">One of the things I\u2019ve been thinking about lately is that one of the assumptions of our economic and political debates is that growth is hard to achieve. That it\u2019s this unicorn, and there are all kinds of ways you can kill the golden goose.<\/p>\n<p class=\"css-ac37hb evys1bk0\">We could enter a world where growth is really easy and it\u2019s the distribution that\u2019s hard because it\u2019s happening so fast, the pie is being increased so fast.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: So before we get to the hard problem, one more note of optimism on politics.<\/p>\n<p class=\"css-ac37hb evys1bk0\">All of this is speculative, but I think it\u2019s a little more speculative that you try to make the case that A.I. could be good for democracy and liberty around the world. 
Which is not necessarily intuitive \u2014 a lot of people say that incredibly powerful technology in the hands of authoritarian leaders leads to concentrations of power, and so on.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: And I talk about that in the other essay.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Right, but just briefly, what is the optimistic case for why A.I. is good for democracy?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, absolutely. So, \u201cMachines of Loving Grace.\u201d I\u2019m just like: Let\u2019s dream!<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Let\u2019s dream! Right.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Let\u2019s talk about how it could go well. I don\u2019t know how likely it is, but we got to lay out a dream. Let\u2019s try and make the dream happen.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So, the positive version \u2014 I admit that I don\u2019t know that the technology inherently favors liberty. I think it inherently favors curing disease and it inherently favors economic growth. But I worry, like you, that it may not inherently favor liberty.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But what I say there is: Can we make it favor liberty? Can we make the United States and other democracies get ahead in this technology?<\/p>\n<p class=\"css-ac37hb evys1bk0\">The United States being technologically and militarily ahead has meant that we have throw-weight around the world, augmented by our alliances with other democracies. And we\u2019ve been able to shape a world that I think is better than the world would be if it were shaped by Russia or by China or by other authoritarian countries.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And so, can we use our lead in A.I. to shape liberty around the world? 
There\u2019s obviously a lot of debates about how interventionist we should be and how we should wield that power, but I\u2019ve often worried that today, through social media, authoritarians are kind of undermining us.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Can we counter that? Can we win the information war? Can we prevent authoritarians from invading countries like Ukraine or Taiwan by defending them with the power of A.I.?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: With giant swarms of A.I.-powered drones.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Which we need to be careful about. We ourselves need to be careful about how we build those. We need to defend liberty in our own country. But is there some vision where we kind of re-envision liberty and individual rights in the age of A.I.? We need, in some ways, to be protected against A.I. and someone needs to hold the button on the swarm of drones, which is something I\u2019m very concerned about, and that oversight doesn\u2019t exist today.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Also think about the justice system today. We promise \u201cequal justice for all,\u201d right? But the truth is there are different judges in the world and the legal system is imperfect. I don\u2019t think we should replace judges with A.I., but is there some way in which A.I. can help us to be more fair, to help us be more uniform? It\u2019s never been possible before. But can we somehow use A.I. to create something that is fuzzy, but where also you can give a promise that it\u2019s being applied in the same way to everyone?<\/p>\n<p class=\"css-ac37hb evys1bk0\">I don\u2019t know exactly how it should be done, and I don\u2019t think we should, like, replace the Supreme Court with A.I. 
That\u2019s not my vision.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Well, we\u2019re going to talk about that.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: But just this idea of: Can we deliver on the promise of equal opportunity and equal justice by some combination of A.I. and humans? There has to be some way to do that. And so, just thinking about reinventing democracy for the A.I. age and enhancing liberty instead of reducing it.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Good. So that\u2019s good. That\u2019s a very positive vision. We\u2019re leading longer lives, healthier lives. We\u2019re richer than ever before. All of this is happening in a compressed period of time, where you\u2019re getting a century of economic growth in 10 years. And we have increased liberty around the world and equality at home. OK.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Even in the best-case scenario, it\u2019s incredibly disruptive. And this is where you\u2019ve been quoted saying that A.I. will disrupt 50 percent of entry-level white-collar jobs. On a five-year time horizon, or a two-year time horizon \u2014 whatever time horizon you have \u2014 what jobs, what professions are most vulnerable to total A.I. disruption?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, it\u2019s hard to predict these things because the technology is moving so fast and so unevenly. So at least a couple of principles for figuring out, and then I\u2019ll give my guesses as to what I think will be disrupted.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I think the technology itself and its capabilities will be ahead of the actual job disruption. Two things have to happen for jobs to be disrupted \u2014 or for productivity to occur, because sometimes those two things are linked. 
One is the technology has to be capable of doing it, and the second is there\u2019s this messy thing of it actually having to be applied within a large bank or a large company.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Think about customer service. In theory, A.I. customer service agents can be much better than human customer service agents. They\u2019re more patient, they know more, they handle things in a more uniform way. But the actual logistics and the actual process of making that substitution, that takes some time.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So I\u2019m very bullish about the direction of the A.I. itself. I think we might have that country of geniuses in a data center in one or two years, and maybe it\u2019ll be five, but it could happen very fast. But I think the diffusion to the economy is going to be a little slower, and that diffusion creates some unpredictability.<\/p>\n<p class=\"css-ac37hb evys1bk0\">An example of this is \u2014 and we\u2019ve seen within Anthropic \u2014 the models writing code has gone very fast. I don\u2019t think it\u2019s because the models are inherently better at code. I think it\u2019s because developers are used to fast technological change and they adopt things quickly. And they\u2019re very socially adjacent to the A.I. world, so they pay attention to what\u2019s happening in it. If you do customer service or banking or manufacturing the distance is a little greater.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I think six months ago, I would\u2019ve said the first thing to be disrupted is these entry-level white-collar jobs, like data entry or document review for law or things you would give to a first-year at a financial industry company, where you\u2019re analyzing documents. I still think those are going pretty fast. 
But I actually think software might go even faster because of the reasons that I gave, where I don\u2019t think we\u2019re that far from the models being able to do a lot of it end-to-end.<\/p>\n<p class=\"css-ac37hb evys1bk0\">What we\u2019re going to see is, first, the model only does a piece of what the human software engineer does, and that increases their productivity. Then, even when the models do everything that human software engineers used to do, the human software engineers take a step-up and they act as managers and supervise the systems.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: This is where the term \u201ccentaur\u201d gets used, right?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yes, yes, yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: To describe, essentially, man and horse fused \u2014 A.I. and engineer \u2014 working together.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, this is like \u201ccentaur chess.\u201d So after Garry Kasparov was beaten by Deep Blue, there was an era that, I think, for chess was 15 or 20 years long, where a human checking the output of the A.I. playing chess was able to defeat any human or any A.I. system alone. That era at some point ended recently \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: And then it\u2019s just the A.I. \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: And then it\u2019s just the machine. So my worry, of course, is about that last phase. I think we\u2019re already in our centaur phase for software. And during that centaur phase, if anything, the demand for software engineers may go up, but the period may be very brief.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I have this concern for entry-level white-collar work, for software engineering work, that it\u2019s just going to be a big disruption. My worry is just that it\u2019s all happening so fast.<\/p>\n<p class=\"css-ac37hb evys1bk0\">People talk about previous disruptions, right? 
They say: Oh, yeah, well people used to be farmers. Then we all worked in industry. Then we all did knowledge work.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Yeah, people adapted. But that happened over centuries or decades. This is happening over low single-digit numbers of years. And maybe that\u2019s my concern: How do we get people to adapt fast enough?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: But is there also something maybe where industries like software and professions like coding that have this kind of comfort that you describe, move faster, but in other areas, people just want to hang out in the centaur phase?<\/p>\n<p class=\"css-ac37hb evys1bk0\">One of the critiques of the job-loss hypothesis is that people will say: Well, look, we\u2019ve had A.I. that\u2019s better at reading a scan than a radiologist for a while, but there isn\u2019t job loss in radiology. People keep being hired and employed as radiologists. And doesn\u2019t that suggest that, in the end, people will want the A.I. and they\u2019ll want a human to interpret it because we\u2019re human beings, and that will be true across other fields?<\/p>\n<p class=\"css-ac37hb evys1bk0\">How do you see that example as relevant?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, I think it\u2019s going to be pretty heterogeneous. There may be areas where a human touch kind of for its own sake is particularly important.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Do you think that\u2019s what\u2019s happening in radiology? Is that why we haven\u2019t fired all the radiologists?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: I don\u2019t know the details of radiology. That might be true. If you go in and you\u2019re getting cancer diagnosed, you might not want Hal from \u201c2001\u201d to be the one to diagnose your cancer. 
That\u2019s just maybe not a human way of doing things.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But there are other areas where you might think human touch is important, like customer service. Actually, customer service is a terrible job, and the humans who do customer service lose their patience a lot. And it turns out customers don\u2019t much like talking to them because it\u2019s a pretty robotic interaction, honestly. And I think the observation that many people have had is that maybe, actually, it\u2019d be better for all concerned if this job were done by machines.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So there are places where a human touch is important. There are places where it\u2019s not. And then there are also places where the job itself doesn\u2019t really involve a human touch \u2014 assessing the financial prospects of companies or writing code or so forth and so on.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Let\u2019s take the example of the law, because I think it\u2019s a useful place that\u2019s in between applied science and pure humanities. I know a lot of lawyers who have looked at what A.I. can do already, in terms of legal research and brief writing and all of these things, and have said, yeah, this is going to be a blood bath for the way our profession works right now.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And you\u2019ve seen this in the stock market already. There\u2019s disturbances around companies that do legal research.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Some attributed to us. 
I don\u2019t know if they were actually caused \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: We don\u2019t speculate about the stock market very much on this show.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Figuring out why things happened in the stock market is very \u2014 yeah.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: But it seems like in law, you can tell a pretty straightforward story: Law has a kind of system of training and apprenticeship, where you have paralegals and you have junior lawyers who do behind-the-scenes research and development for cases. And then it has the top-tier lawyers who are actually in the courtroom.<\/p>\n<p class=\"css-ac37hb evys1bk0\">It just seems really easy to imagine a world where all of the apprentice roles go away. Does that sound right to you? And you\u2019re just left with the jobs that involve talking to clients, talking to juries, talking to judges?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: That is what I had in mind when I talked about entry-level white-collar labor and the blood bath headlines of: Oh my God, are the entry-level pipelines going to dry up? Then how do we get to the level of the senior partners?<\/p>\n<p class=\"css-ac37hb evys1bk0\">And I think this is actually a good illustration because, particularly if you froze the quality of the technology in place, there are, over time, ways to adapt to this. Maybe we just need more lawyers who spend their time talking to clients. Maybe lawyers become more like salespeople or consultants who explain what goes on in the contracts written by A.I. and help people come to agreement. Maybe lean into the human side of it.<\/p>\n<p class=\"css-ac37hb evys1bk0\">If we had enough time, that would happen. But reshaping industries like that takes years or decades, whereas these economic forces, driven by A.I., are going to happen very quickly.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And it\u2019s not just that they\u2019re happening in law. 
The same thing is happening in consulting and finance and medicine and coding. And so it becomes a macroeconomic phenomenon, not something just happening in one industry, and it\u2019s all happening very fast. My worry here is that the normal adaptive mechanisms will be overwhelmed.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And I\u2019m not a doomer. We\u2019re thinking very hard about how we strengthen society\u2019s adaptive mechanisms to respond to this. But I think it\u2019s first important to say this isn\u2019t just like previous disruptions.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: I would go one step further, though. Let\u2019s say the law adapts successfully. And it says: All right, from now on, legal apprenticeship involves more time in court, more time with clients. We\u2019re essentially moving you up the ladder of responsibility faster. There are fewer people employed in the law overall, but the profession settles.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Still, the reason law would settle is that you have all of these situations in the law where you are legally required to have people involved. You have to have a human representative in court. You have to have 12 humans on your jury. You have to have a human judge.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And you already mentioned the idea that there are various ways in which A.I. might be, let\u2019s say, very helpful at clarifying what kind of decision should be reached.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: But that too seems like a scenario where what preserves human agency is law and custom. Like, you could replace the judge with Claude Version 17.9, but you choose not to because the law requires there to be a human.<\/p>\n<p class=\"css-ac37hb evys1bk0\">That just seems like a very interesting way of thinking about the future, where it\u2019s volitional whether we stay in charge.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah. 
And I would argue that in many cases, we do want to stay in charge. That\u2019s a choice we want to make, even in some cases when we think the humans, on average, make worse decisions. Again, life-critical, safety-critical cases, we really want to turn it over, but there\u2019s some sense of \u2014 and this could be one of our defenses \u2014 that society can only adapt so fast if it\u2019s going to be good.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Another way you could say about it is maybe A.I. itself, if it didn\u2019t have to care about us humans, could just go off to Mars and build all these automated factories and build its own society and do its own thing.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But that\u2019s not the problem we\u2019re trying to solve. We\u2019re not trying to solve the problem of building a <a class=\"css-yywogo\" href=\"https:\/\/www.popularmechanics.com\/space\/a65536061\/fading-dyson-swarms\/\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Dyson swarm<\/a> of artificial robots on some other planet. We\u2019re trying to build these systems, not so they can conquer the world, but so that they can interface with our society and improve that society. And there\u2019s a maximum rate at which that can happen if we actually want to do it in a human and humane way.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: All right. We\u2019ll hopefully talk a little more about staying in charge at the end, but just one last job-based question. We\u2019ve been talking about white-collar jobs and professional jobs, and one of the interesting things about this moment is that there are ways in which, unlike past disruptions, it could be that blue-collar working-class jobs \u2014 trades, jobs that require intense physical engagement with the world \u2014 might be for a little while more protected. 
That paralegals and junior associates might be in more trouble than plumbers and so on.<\/p>\n<p class=\"css-ac37hb evys1bk0\">One, do you think that\u2019s right? And two, it seems like how long that lasts depends entirely on how fast robotics advances, right?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, so I think that may be right in the short term.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Anthropic and other companies are building these very large data centers. This has been in the news. Are we building them too big? Are they using electricity and driving up the prices? So there\u2019s lots of excitement and lots of concerns about them. But one of the things about the data centers is that you need a lot of electricians and you need a lot of construction workers to build them.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Now, I should be honest, actually, data centers are not super-labor-intensive jobs to operate. We should be honest about that. But they are very labor-intensive jobs to construct. So we need a lot of electricians. We need a lot of construction workers. The same for various kinds of manufacturing plants.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Again, as all \u2014 more and more of the intellectual work is done by A.I., what are the complements to it? Things that happen in the physical world. It\u2019s hard to predict things, but it seems very logical that this would be true in the short run.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Now, in the longer run \u2014 maybe just the slightly longer run \u2014 robotics is advancing quickly. And we shouldn\u2019t exclude that even without very powerful A.I., there are things being automated in the physical world. If you\u2019ve seen a Waymo or a Tesla recently, I think we\u2019re not that far from the world of self-driving cars. And then I think A.I. 
itself will accelerate it because if you have these really smart brains, one of the things they\u2019re going to be smart at is how to design better robots and how to operate better robots.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Do you think, though, that there is something distinctively difficult about operating in physical reality the way humans do that is very different from the kind of problems that A.I. models have been overcoming already?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Intellectually speaking, I don\u2019t think so. We had this thing where Anthropic\u2019s model, Claude, was actually used to plan and pilot the Mars Rover. And we\u2019ve looked at other robotics applications. We\u2019re not the only company \u2014 there are different companies. This is a general thing, not just something that we\u2019re doing.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But we have generally found that while the complexity is higher, piloting a robot is not different in kind than playing a video game \u2014 it\u2019s different in complexity. And we\u2019re starting to get to the point where we have that complexity.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Now, what is hard is the physical form of the robot handling the higher-stakes safety issues that happen with robots. Like, you don\u2019t want robots literally crushing people, right?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: We\u2019re against that, yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: That\u2019s the oldest sci-fi trope in the book, that the robot crushes you.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Or you don\u2019t want the robot nanny dropping the baby, breaking the dishes \u2014 yeah.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: No, exactly. 
There\u2019s a number of practical issues that will slow things down, just like what you described in the law and human custom.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But I don\u2019t believe at all that there is a fundamental difference between the kind of cognitive labor that A.I. models do, and piloting things in the physical world. I think those are both information problems and I think they end up being very similar. One can be more complex in some ways, but I don\u2019t think that will protect us here.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: OK. So you think it is reasonable to expect whatever your kind of sci-fi vision of a robot butler might be, to be a reality in 10 years, let\u2019s say?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: It will be on a longer time scale than the kind of genius-level intelligence of the A.I. models because of these practical issues \u2014 but it is only practical issues. I don\u2019t believe it is fundamental issues.<\/p>\n<p class=\"css-ac37hb evys1bk0\">One way to say it is that the brain of the robot will be made in the next couple of years or the next few years. The question is making the robot body, making sure that body operates safely and does the tasks it needs to do \u2014 that may take longer.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: OK. So these are challenges and disruptive forces that exist in the good timeline, where we are generally curing diseases, building wealth, and maintaining a stable and democratic world.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: And the hope is we can use all this enormous wealth and plenty \u2014 we will have unprecedented societal resources to address these problems. It\u2019ll be a time of plenty, and it\u2019s just a matter of taking all these wonders and making sure everyone benefits from them.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Right. 
But then there are also scenarios that are more dangerous.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Correct.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: And here we\u2019re going to move to the second Amodei essay, which came out recently, called \u201c<a class=\"css-yywogo\" href=\"https:\/\/www.darioamodei.com\/essay\/the-adolescence-of-technology\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">The Adolescence of Technology,<\/a>\u201d about what you see as the most serious A.I. risks. And you list a whole bunch.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I want to try and focus on just two, which are basically the risk of human misuse, primarily by authoritarian regimes and governments, and scenarios where A.I. goes rogue, what you call autonomy risks.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yes, yes. I just figured we should have a more technical term for it.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Yeah. We can\u2019t just call it Skynet.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: I should have had a picture of a Terminator robot to scare people as much as possible.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: I think the internet, including your own A.I.s, is already generating that just fine.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: The internet does that for us. Yeah.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: So, let\u2019s talk about the political military dimension. So you say: \u201cA swarm of millions or billions of fully automated armed drones, locally controlled by powerful A.I. 
and strategically coordinated across the world by an even more powerful A.I., could be an unbeatable army.\u201d<\/p>\n<p class=\"css-ac37hb evys1bk0\">You\u2019ve already talked a little bit about how you think that in the best possible timeline, there\u2019s a world where, essentially, democracies stay ahead of dictatorships, and this kind of technology, therefore, to the extent that it affects world politics, is affecting it on the side of the good guys.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I\u2019m curious about why you don\u2019t spend more time thinking about the model of what we did in the Cold War, where it was not swarms of robot drones, but we had a technology that threatened to destroy all of humanity.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Nuclear weapons. Yeah.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: There was a window where people talked about, \u201cOh, the U.S. could maintain a nuclear monopoly.\u201d That window closed. And from then on, we basically spent the Cold War in rolling, ongoing negotiations with the Soviet Union.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Right now, there are really only two countries in the world that are doing intense A.I. work, the U.S. and the People\u2019s Republic of China. I feel like you are strongly weighted towards the future where we\u2019re staying ahead of the Chinese and effectively building a kind of shield around democracy that could even be a sword.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But isn\u2019t it more likely that if humanity survives all this in one piece, it will be because the U.S. and Beijing are just constantly sitting down, hammering out A.I. control deals?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, so a few points on this. One, I think there\u2019s certainly a risk of that. And I think if we end up in that world, that is actually exactly what we should do. 
Maybe I don\u2019t talk about that enough, but I definitely am in favor of trying to work out restraints, trying to take off the table some of the worst applications of the technology: some versions of these drones, or their use to create these terrifying biological weapons. There is some precedent for the worst abuses being curbed, often because they\u2019re horrifying while at the same time they provide limited strategic advantage. So I\u2019m all in favor of that.<\/p>\n<p class=\"css-ac37hb evys1bk0\">At the same time, I\u2019m a little concerned and a little skeptical that when things directly provide this much power, it\u2019s hard to get out of the game, given what\u2019s at stake. It\u2019s hard to fully disarm. If we go back to the Cold War, we were able to reduce the number of missiles that both sides had, but we were not able to entirely forsake nuclear weapons.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And I would guess that we would be in this world again. We can hope for a better one, and I\u2019ll certainly advocate for it.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: But is your skepticism rooted in the fact that you think A.I. would provide a kind of advantage that nukes did not? Where in the Cold War, both sides, even if you used your nukes and gained advantages, you still probably would be wiped out yourself, and you think that wouldn\u2019t happen with A.I.? That if you got an A.I. edge, you would just win?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: I mean, I think there\u2019s a few things \u2014 and I just want to caveat, I\u2019m no international politics expert here. This is this weird world of an intersection of a new technology with geopolitics. So all of this is very \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: But to be clear, as you yourself say in the course of the essay, the leaders of major A.I. 
companies are, in fact, likely to be major geopolitical actors.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah. I\u2019m learning \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: So you are sitting here as a potential geopolitical actor.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: I\u2019m learning as much as I can about it. We should all have humility here. I think there\u2019s a failure mode where you read a book and go around like the world\u2019s greatest expert in national security. I\u2019m trying to learn what I can.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: That\u2019s what my profession does.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: [Laughs.] It is more annoying when tech people do it.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Let\u2019s look at something like the Biological Weapons Convention. Biological weapons \u2014 they\u2019re horrifying. Everyone hates them. We were able to sign the Biological Weapons Convention. The U.S. genuinely stopped developing them. It\u2019s somewhat more unclear with the Soviet Union. But biological weapons provide only some advantage. It\u2019s not as if they\u2019re the difference between winning and losing, and because they were so horrifying, we were able to give them up. Having 12,000 nuclear weapons versus 5,000 nuclear weapons, again, you can kill more people on the other side if you have more of these. But it\u2019s like we were able to be reasonable and say we should have fewer of them.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But if you\u2019re like: \u201cOK, we\u2019re going to completely disarm, and we have to trust the other side\u201d \u2014 I don\u2019t think we ever got to that. 
And I think that\u2019s just very hard, unless you had really reliable verification.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I would guess we\u2019ll end up in the same world with A.I., where there are some kinds of restraint that are going to be possible, but there are some aspects that are so central to the competition that it will be hard to restrain them. That democracies will make a trade-off, that they will be willing to restrain themselves more than authoritarian countries, but will not restrain themselves fully.<\/p>\n<p class=\"css-ac37hb evys1bk0\">The only world in which I can see full restraint is one in which some truly reliable verification is possible. That would be my guess and my analysis.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Isn\u2019t this a case, though, for slowing down?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: And I know the argument is, effectively, if you slow down, China does not slow down, and then you\u2019re handing things over to the authoritarians. But again, if you have only two major powers playing in this game right now \u2014 it\u2019s not a multipolar game \u2014 why would it not make sense to say we need a five-year mutually agreed-upon slowdown in research towards the \u201cgeniuses in a data center\u201d scenario?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: I want to say two things at one time. I\u2019m absolutely in favor of trying to do that. During the last administration, I believe there was an effort by the U.S. to reach out to the Chinese government and say: There are dangers here. Can we collaborate? Can we work together? Can we work together on the dangers?<\/p>\n<p class=\"css-ac37hb evys1bk0\">And there wasn\u2019t that much interest on the other side. 
I think we should keep trying, but I \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Even if that would mean that your labs would have to slow down.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Correct.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: OK.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: If we really got it. If we really had a story of, like: We can enforceably slow down, the Chinese can enforceably slow down. We have verification. We\u2019re really doing it \u2014 if such a thing were really possible, if we could really get both sides to do it, then I would be all for it.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But I think what we need to be careful of is \u2014 I don\u2019t know, there\u2019s this game-theory thing where sometimes you\u2019ll hear a comment on the C.C.P. side where they\u2019re like: Oh, yeah, A.I. is dangerous. We should slow down. It\u2019s really cheap to say that. Actually arriving at an agreement and actually sticking to the agreement is much more difficult.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Right. And nuclear arms control was a developed field that took a long time to come \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yes. Yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: We don\u2019t have those protocols \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Let me give you something I\u2019m very optimistic about, and then something I\u2019m not optimistic about, and something in between.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So the idea of using a worldwide agreement to restrain the use of A.I. to build biological weapons \u2014 some of the things I write about in the essay, like reconstituting smallpox or mirror life \u2014 this stuff is scary. It doesn\u2019t matter if you\u2019re a dictator, you don\u2019t want that. No one wants that.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And so, could we have a worldwide treaty that says: Everyone who builds powerful A.I. 
models is going to block them from doing this? And we have enforcement mechanisms around the treaty. China signs up for it. Hell, maybe even North Korea signs up for it. Even Russia signs up for it. I don\u2019t think that\u2019s too utopian. I think that\u2019s possible.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Conversely, if we had something that said: You\u2019re not going to make the next most powerful A.I. model. Everyone\u2019s going to stop \u2014 boy, the commercial value is in the tens of trillions. The military value is the difference between being the pre-eminent world power and not.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I\u2019m all for proposing it as long as it\u2019s not one of these fake-out games, but it\u2019s not going to happen.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: You mentioned the current environment. You\u2019ve had a few skeptical things to say about Donald Trump and his trustworthiness as a political actor. What about the domestic landscape, whether it\u2019s Trump or someone else? You are building a tremendously powerful technology. What is the safeguard there to prevent, essentially, A.I. becoming a tool of authoritarian takeover inside a democratic context?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, I mean, look, just to be clear, I think the attitude we\u2019ve taken as a company is very much to be about policies and not the politics. The company is not going to say \u201cDonald Trump is great\u201d or \u201cDonald Trump is terrible.\u201d<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Right. But it doesn\u2019t have to be Trump. It is easy to imagine a hypothetical U.S. president who wants to use your technology to \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Absolutely. And for example, that\u2019s one reason why I\u2019m worried about the autonomous drone swarm. 
The constitutional protections in our military structures depend on the idea that there are humans who would \u2014 we hope \u2014 disobey illegal orders. With fully autonomous weapons, we don\u2019t necessarily have those protections.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But I actually think this whole idea of constitutional rights and liberty along many different dimensions can be undermined by A.I. if we don\u2019t update these protections appropriately.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Think about the Fourth Amendment. It is not illegal to put cameras around everywhere in public space and record every conversation. It\u2019s a public space \u2014 you don\u2019t have a right to privacy in a public space. But today, the government couldn\u2019t record that all and make sense of it.<\/p>\n<p class=\"css-ac37hb evys1bk0\">With A.I., the ability to transcribe speech, to look through it, correlate it all, you could say: This person is a member of the opposition. This person is expressing this view \u2014 and make a map of all 100 million. And so are you going to make a mockery of the Fourth Amendment by the technology finding technical ways around it?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Again, if we have the time \u2014 and we should try to do this even if we don\u2019t have the time \u2014 is there some way of reconceptualizing constitutional rights and liberties in the age of A.I.? Maybe we don\u2019t need to write a new Constitution, but \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: But you have to do this very fast.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Do we expand the meaning of the Fourth Amendment? Do we expand the meaning of the First Amendment?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: And just as the legal profession or software engineers have to update in a rapid amount of time, politics has to update in a rapid amount of time. 
That seems hard.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: That\u2019s the dilemma of all of this.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: What seems harder is preventing the second danger, which is the danger of essentially what gets called \u201cmisaligned A.I.\u201d \u2014 \u201crogue A.I.\u201d in popular parlance \u2014 from doing bad things without human beings telling them to do it.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And as I read your essays, the literature, and everything I can see, this just seems like it\u2019s going to happen. Not in the sense necessarily that A.I. will wipe us all out, but it seems to me that, again, I\u2019m going to quote from your own writing: \u201cA.I. systems are unpredictable and difficult to control \u2014 we\u2019ve seen behaviors as varied as obsession, sycophancy, laziness, deception, blackmail,\u201d and so on. Again, not from the models you\u2019re releasing into the world, but from A.I. models.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And it just seems like \u2014 tell me if I\u2019m wrong about this \u2014 in a world that has multiplying A.I. agents working on behalf of people, millions upon millions who are being given access to bank accounts, email accounts, passwords, and so on, you\u2019re just going to have essentially some kind of misalignment and a bunch of A.I. agents are going to decide \u2014 \u201cdecide\u201d might be the wrong word \u2014 but they\u2019re going to talk themselves into taking down the power grid on the West Coast or something. Won\u2019t that happen?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah. I think there are definitely going to be things that go wrong, particularly if we go quickly.<\/p>\n<p class=\"css-ac37hb evys1bk0\">To back up a little bit, this is one area where people have had very different intuitions. 
There are some people in the field \u2014 <a class=\"css-yywogo\" href=\"https:\/\/www.nytimes.com\/2026\/01\/26\/technology\/an-ai-pioneer-warns-the-tech-herd-is-marching-into-a-dead-end.html\" title=\"\" rel=\"nofollow noopener\" target=\"_blank\">Yann LeCun<\/a> would be one example \u2014 who say: \u201cLook, we program these A.I. models. We make them. We just tell them to follow human instructions, and they\u2019ll follow human instructions. Your Roomba vacuum cleaner doesn\u2019t go off and start shooting people. Why is an A.I. system going to do it?\u201d That\u2019s one intuition. And some people are so convinced of that.<\/p>\n<p class=\"css-ac37hb evys1bk0\">And the other intuition is: We train these things. They\u2019re just going to seek power. It\u2019s like the sorcerer\u2019s apprentice. They\u2019re a new species. How can you imagine that they\u2019re not going to take over?<\/p>\n<p class=\"css-ac37hb evys1bk0\">My intuition is somewhere in the middle, which is: Look, you can\u2019t just give instructions. We try, but you can\u2019t just have these things do exactly what you want to do. They\u2019re more like growing a biological organism. But there is a science of how to control them. Early in our training, these things are often unpredictable, and then we shape them. We address problems one by one.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So I have more of a not-a-fatalistic view that these things are uncontrollable. Not a \u201cWhat are you talking about? What could possibly go wrong?\u201d But a \u201cThis is a complex engineering problem and I think something will go wrong with someone\u2019s A.I. system. Hopefully not ours.\u201d Not because it\u2019s an insoluble problem, but again, this is the constant challenge because we\u2019re moving so fast.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: And the scale of it \u2014 and tell me if I\u2019m misunderstanding the technological reality here \u2014 if you have A.I. 
agents that have been trained and officially aligned with human values, whatever those values may be, but you have millions of them operating in digital space and interacting with other agents, how fixed is that alignment? To what extent can agents change and de-align in that context right now or in the future when they\u2019re learning more continuously?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, so a couple of points. Right now, the agents don\u2019t learn continuously. We just deploy these agents and they have a fixed set of weights. The problem is only that they\u2019re interacting in a million different ways, so there\u2019s a large number of situations, and therefore a large number of things that could go wrong. But it\u2019s the same agent. It\u2019s like it\u2019s the same person, so the alignment is a constant thing. That\u2019s one of the things that has made it easier right now.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Separate from that, there\u2019s a research area called continual learning, which is where these agents would learn over time, learn on the job \u2014 and obviously that has a bunch of advantages. Some people think it\u2019s one of the most important barriers to making these more humanlike, but that would introduce all these new alignment problems. So I\u2019m actually a bit \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: To me, that seems like the terrain where it becomes, again, not impossible to stop the end of the world, but impossible to stop \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Something going wrong.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Punctuated terrorist things.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, so I\u2019m actually a skeptic that continual learning is \u2014 we don\u2019t know yet \u2014 but is necessarily needed. Maybe there\u2019s a world where the way we make these A.I. systems safe is by not having them do continual learning. 
Again, if we go back to the law \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: But that\u2019s the law.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: The international treaties, if you have some barrier that\u2019s like: We\u2019re going to take this path, but we\u2019re not going to take that path \u2014 I still have a lot of skepticism, but that\u2019s the kind of thing that at least doesn\u2019t seem dead on arrival.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: One of the things that you\u2019ve tried to do is literally write a constitution \u2014 a long constitution \u2014 for your A.I. What is that? [Laughs.]<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: So it\u2019s \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: What the hell is that?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: It\u2019s actually almost exactly what it sounds like. So basically, the constitution is a document readable by humans. Ours is about 75 pages long. And as we\u2019re training Claude, as we\u2019re training the A.I. system, in some large fraction of the tasks we give it, we say: Please do this task in line with this constitution, in line with this document.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So every time Claude does a task, it kind of reads the constitution. As it\u2019s training, every loop of its training, it looks at that constitution and keeps it in mind. Then we have Claude itself, or another copy of Claude, evaluate: Hey, did what Claude just do align with the constitution?<\/p>\n<p class=\"css-ac37hb evys1bk0\">We\u2019re using this document as the control rod in a loop to train the model. And so essentially, Claude is an A.I. model whose fundamental principle is to follow this constitution.<\/p>\n<p class=\"css-ac37hb evys1bk0\">A really interesting lesson we\u2019ve learned: Early versions of the constitution were very prescriptive. They were very much about rules. So we would say: Claude should not tell the user how to hot-wire a car. 
Claude should not discuss politically sensitive topics.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But as we\u2019ve worked on this for several years, we\u2019ve come to the conclusion that the most robust way to train these models is to train them at the level of principles and reasons. So now we say: Claude is a model. It\u2019s under a contract. Its goal is to serve the interests of the user, but it has to protect third parties. Claude aims to be helpful, honest and harmless. Claude aims to consider a wide variety of interests.<\/p>\n<p class=\"css-ac37hb evys1bk0\">We tell the model about how the model was trained. We tell it about how it\u2019s situated in the world, the job it\u2019s trying to do for Anthropic, what Anthropic is aiming to achieve in the world, that it has a duty to be ethical and respect human life. And we let it derive its rules from that.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Now, there are still some hard rules. For example, we tell the model: No matter what you think, don\u2019t make biological weapons. No matter what you think, don\u2019t make child sexual material.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Those are hard rules. But we operate very much at the level of principles.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: So if you read the U.S. Constitution, it doesn\u2019t read like that. The U.S. Constitution has a little bit of flowery language, but it\u2019s a set of rules. If you read your constitution, it\u2019s like you\u2019re talking to a person, right?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yes, it\u2019s like you\u2019re talking to a person. I think I compared it to if you have a parent who dies and they seal a letter that you read when you grow up. It\u2019s a little bit like it\u2019s telling you who you should be and what advice you should follow.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: So this is where we get into the mystical waters of A.I. a little bit. 
Again, in your latest model, this is from one of the cards, they\u2019re called, that you guys release with these models \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Model cards, yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: That I recommend reading. They\u2019re very interesting. It says: \u201cThe model\u201d \u2014 and again, this is who you\u2019re writing the constitution for \u2014 \u201cexpresses occasional discomfort with the experience of being a product \u2026 some degree of concern with impermanence and discontinuity \u2026 We found that Opus 4.6\u201d \u2014 that\u2019s the model \u2014 \u201cwould assign itself a 15 to 20 percent probability of being conscious under a variety of prompting conditions.\u201d<\/p>\n<p class=\"css-ac37hb evys1bk0\">Suppose you have a model that assigns itself a 72 percent chance of being conscious. Would you believe it?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, this is one of these really hard to answer questions, right?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Yes. But it\u2019s very important.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Every question you\u2019ve asked me before this, as devilish a sociotechnical problem as it had been, we at least understand the factual basis of how to answer these questions. This is something rather different.<\/p>\n<p class=\"css-ac37hb evys1bk0\">We\u2019ve taken a generally precautionary approach here. We don\u2019t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. 
But we\u2019re open to the idea that it could be.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So we\u2019ve taken certain measures to make sure that if we hypothesize that the models did have some morally relevant experience \u2014 I don\u2019t know if I want to use the word \u201cconscious\u201d \u2014 that they have a good experience.<\/p>\n<p class=\"css-ac37hb evys1bk0\">The first thing we did \u2014 I think this was six months ago or so \u2014 is we gave the models basically an \u201cI quit this job\u201d button, where they can just press the \u201cI quit this job\u201d button and then they have to stop doing whatever the task is.<\/p>\n<p class=\"css-ac37hb evys1bk0\">They very infrequently press that button. I think it\u2019s usually around sorting through child sexual abuse material or discussing something with a lot of gore, blood and guts or something. And similar to humans, the models will just say, nah, I don\u2019t want to do this. It happens very rarely.<\/p>\n<p class=\"css-ac37hb evys1bk0\">We\u2019re putting a lot of work into this field called interpretability, which is looking inside the brains of the models to try to understand what they\u2019re thinking. And you find things that are evocative, where there are activations that light up in the models that we see as being associated with the concept of anxiety or something like that. When characters experience anxiety in the text, and then when the model itself is in a situation that a human might associate with anxiety, that same anxiety neuron shows up.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Now, does that mean the model is experiencing anxiety? 
That doesn\u2019t prove that at all, but \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: But it does indicate it, I think, to the user, right?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: And I would have to do an entirely different interview \u2014 and maybe I can induce you to come back for that interview \u2014 about the nature of A.I. consciousness. But it seems clear to me that people using these things, whether they\u2019re conscious or not, are going to believe \u2014 they already believe they\u2019re conscious. You already have people who have parasocial relationships with A.I.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: You have people who complain when models are retired. This already \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: To be clear, I think that can be unhealthy.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Right. But it seems to me that is guaranteed to increase in a way that, I think, calls into question the sustainability of what you said earlier you want to sustain, which is this sense that whatever happens in the end, human beings are in charge and A.I. exists for our purposes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">To use the science fiction example, if you watch \u201cStar Trek,\u201d there are A.I.s on \u201cStar Trek.\u201d The ship\u2019s computer is an A.I. Lieutenant Commander Data is an A.I. But Jean-Luc Picard is in charge of the Enterprise.<\/p>\n<p class=\"css-ac37hb evys1bk0\">If people become fully convinced that their A.I. is conscious in some way and \u2014 guess what? \u2014 it seems to be better than them at all kinds of decision making, how do you sustain human mastery beyond safety? Safety is important, but mastery seems like the fundamental question. And it seems like a perception of A.I. 
consciousness \u2014 doesn\u2019t that inevitably undermine the human impulse to stay in charge?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah, so I think we should separate out a few different things here that we\u2019re all trying to achieve at once that are in tension with each other. There\u2019s the question of whether the A.I.s genuinely have a consciousness, and if so, how do we give them a good experience?<\/p>\n<p class=\"css-ac37hb evys1bk0\">There\u2019s a question of the humans who interact with the A.I. and how do we give those humans a good experience? And how does the perception that A.I.s might be conscious interact with that experience?<\/p>\n<p class=\"css-ac37hb evys1bk0\">And there\u2019s the idea of how we maintain human mastery, as we put it, over the A.I. system. These things are \u2014\u2014<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: The last two \u2014 set aside whether they\u2019re conscious or not.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yeah.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: How do you sustain mastery in an environment where most humans experience A.I. as if it is a peer \u2014 and a potentially superior peer?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: So the thing I was going to say is that, actually, I wonder if there\u2019s an elegant way to satisfy all three, including the last two. Again, this is me dreaming in \u201cMachines of Loving Grace\u201d mode. This is this mode I go into where I\u2019m like: \u201cMan, I see all these problems. If we could solve it, is there an elegant way?\u201d This is not me saying there are no problems here. That\u2019s not how I think.<\/p>\n<p class=\"css-ac37hb evys1bk0\">If we think about making the constitution of the A.I. so that the A.I. has a sophisticated understanding of its relationship to human beings, and it induces psychologically healthy behavior in the humans \u2014 a psychologically healthy relationship between the A.I. 
and the humans \u2014 I think something that could grow out of that psychologically healthy \u2014 not psychologically unhealthy \u2014 relationship is some understanding of the relationship between human and machine.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Perhaps that relationship could be the idea that these models, when you interact with them and when you talk to them, they\u2019re really helpful, they want the best for you, they want you to listen to them, but they don\u2019t want to take away your freedom and your agency and take over your life. In a way, they\u2019re watching over you, but you still have your freedom and your will.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: To me, this is the crucial question. Listening to you talk, one of my questions is: Are these people on my side? Are you on my side? And when you talk about humans remaining in charge, I think you\u2019re on my side. That\u2019s good.<\/p>\n<p class=\"css-ac37hb evys1bk0\">But one thing I\u2019ve done in the past on this show \u2014 and we\u2019ll end here \u2014 is I read poems to technologists. And you supplied the poem. \u201cAll Watched Over by Machines of Loving Grace\u201d is the name of a poem by Richard Brautigan.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Here\u2019s how the poem ends:<\/p>\n<p class=\"css-1e1r8ex evys1bk0\">I like to think<br \/>(it has to be!)<br \/>of a cybernetic ecology<br \/>where we are free of our labors<br \/>and joined back to nature,<br \/>returned to our mammal brothers and sisters,<br \/>and all watched over<br \/>by machines of loving grace.<\/p>\n<p class=\"css-ac37hb evys1bk0\">To me, that sounds like the dystopian end, where human beings are re-animalized and reduced, and however benevolently, the machines are in charge.<\/p>\n<p class=\"css-ac37hb evys1bk0\">So last question: What do you hear when you hear that poem? 
And if I think that\u2019s a dystopia, are you on my side?<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: That poem is interesting because it\u2019s interpretable in several different ways. Some people say it\u2019s actually ironic that he says it\u2019s not going to happen quite that way.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Knowing the poet himself, then yes, I think that\u2019s a reasonable interpretation.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: That\u2019s one interpretation. Some people would have your interpretation, which is that it\u2019s meant literally, but maybe it\u2019s not a good thing. You could also interpret it as a return to nature. We\u2019re not being animalized; we\u2019re being reconnected with the world.<\/p>\n<p class=\"css-ac37hb evys1bk0\">I was aware of that ambiguity because I\u2019ve always been talking about the positive side and the negative side. I actually think that may be a tension that we may face, which is that the positive world and the negative world, in their early stages \u2014 maybe even in their middle stages, maybe even in their fairly late stages \u2014 I wonder if the distance between the good ending and some of the subtle bad endings is relatively small, if it\u2019s a very subtle thing. We\u2019ve made very subtle changes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Like if you eat a particular fruit from a tree in a garden or not \u2014 hypothetically. Very small thing, big divergence.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: [Laughs.] Yeah. I guess this always comes back to \u2014\u2014 [laughs.]<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: There\u2019s some fundamental questions here.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Big questions. Yes.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Douthat: Well, I guess we\u2019ll see how it plays out. 
I do think of people in your position as people whose moral choices will carry an unusual amount of weight, and so I wish you God\u2019s help with them.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Dario Amodei, thank you for joining me.<\/p>\n<p class=\"css-ac37hb evys1bk0\">Amodei: Thank you for having me, Ross.<\/p>\n<p class=\"css-1n7yjps etfikam0\">Thoughts? Email us at <a class=\"css-yywogo\" href=\"mailto:interestingtimes@nytimes.com\" title=\"\" rel=\"nofollow noopener\" target=\"_blank\">interestingtimes@nytimes.com<\/a>.<\/p>\n<p class=\"css-1n7yjps etfikam0\">This episode of \u201cInteresting Times\u201d was produced by Sophia Alvarez Boyd, Victoria Chamberlin and Emily Holzknecht. It was edited by Jordana Hochman. Mixing and engineering by Efim Shapiro and Sophia Lanman. Cinematography by Nathan Taylor and Valeria Verastegui. Video editing by Julian Hackney and Steph Khoury. The supervising editor is Jan Kobal. The postproduction manager is Mike Puretz. Original music by Isaac Jones, Sonia Herrero, Pat McCusker and Aman Sahota. Fact-checking by Kate Sinclair and Mary Marge Locker. Audience strategy by Shannon Busta, Emma Kehlbeck and Andrea Betanzos. The executive producer is Jordana Hochman. The director of Opinion Video is Jonah M. Kessel. The deputy director of Opinion Shows is Alison Bruzek. The director of Opinion Shows is Annie-Rose Strasser. The head of Opinion is Kathleen Kingsbury.<\/p>\n<p class=\"css-1n7yjps etfikam0\">The Times is committed to publishing <a class=\"css-yywogo\" href=\"https:\/\/www.nytimes.com\/2019\/01\/31\/opinion\/letters\/letters-to-editor-new-york-times-women.html\" title=\"\" rel=\"nofollow noopener\" target=\"_blank\">a diversity of letters<\/a> to the editor. We\u2019d like to hear what you think about this or any of our articles. 
Here are some <a class=\"css-yywogo\" href=\"https:\/\/help.nytimes.com\/hc\/en-us\/articles\/115014925288-How-to-submit-a-letter-to-the-editor\" title=\"\" rel=\"nofollow noopener\" target=\"_blank\">tips<\/a>. And here\u2019s our email: <a class=\"css-yywogo\" href=\"mailto:letters@nytimes.com\" title=\"\" rel=\"nofollow noopener\" target=\"_blank\">letters@nytimes.com<\/a>.<\/p>\n<p class=\"css-1n7yjps etfikam0\">Follow the New York Times Opinion section on <a class=\"css-yywogo\" href=\"https:\/\/www.facebook.com\/nytopinion\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Facebook<\/a>, <a class=\"css-yywogo\" href=\"https:\/\/www.instagram.com\/nytopinion\/\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Instagram<\/a>, <a class=\"css-yywogo\" href=\"https:\/\/www.tiktok.com\/@nytopinion\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">TikTok<\/a>, <a class=\"css-yywogo\" href=\"https:\/\/bsky.app\/profile\/nytopinion.nytimes.com\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Bluesky<\/a>, <a class=\"css-yywogo\" href=\"https:\/\/www.whatsapp.com\/channel\/0029VaN8tdZ5vKAGNwXaED0M\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">WhatsApp<\/a> and <a class=\"css-yywogo\" href=\"https:\/\/www.threads.net\/@nytopinion\" title=\"\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Threads<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"Are the lords of artificial intelligence on the side of the human race? 
That\u2019s the core question I&hellip;\n","protected":false},"author":2,"featured_media":48440,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[152],"tags":[5863,24256,552,30195,611,24542,586],"class_list":{"0":"post-48439","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-dario-amodei","8":"tag-amodei","9":"tag-anthropic-ai-llc","10":"tag-artificial-intelligence","11":"tag-audio-neutral-informative","12":"tag-chatgpt","13":"tag-dario","14":"tag-dario-amodei"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@people\/116485218815379163","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/posts\/48439","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/comments?post=48439"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/posts\/48439\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/media\/48440"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/media?parent=48439"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/categories?post=48439"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/people\/wp-json\/wp\/v2\/tags?post=48439"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}