Stephen Grenville’s excellent column on the American critical reaction to Dan Wang’s bestselling book Breakneck: China’s Quest to Engineer the Future prompts me to recommend Wang’s 2025 China letter, published in late December. These annual reflections are always worthwhile, but the 2025 letter is particularly good. Wang’s book compares China and the United States; the 2025 letter compares China and Silicon Valley.
In doing so, it addresses a question I had for most of 2025 about how the development of artificial intelligence in the US and China is framed: it is often described as a race. That metaphor has never seemed entirely satisfactory, because it suggests there is a finish line, that there will be a winner and a loser, and even that the winner will gain a decisive edge over the loser. Wang’s letter takes up this very issue; I’ll quote him at length (emphasis mine):
It’s easy for conversations in San Francisco to collapse into AI. At a party, someone told me that we no longer have to worry about the future of manufacturing. Why not? “Because AI will solve it for us.” At another, I heard someone say the same thing about climate change. One of the questions I receive most frequently anywhere is when Beijing intends to seize Taiwan. But only in San Francisco do people insist that Beijing wants Taiwan for its production of AI chips. In vain do I protest that there are historical and geopolitical reasons motivating the desire, that chip fabs cannot be violently seized, and anyway that Beijing has coveted Taiwan for approximately seven decades before people were talking about AI.
Silicon Valley’s views on AI made more sense to me after I learned the term “decisive strategic advantage.” It was first used by Nick Bostrom’s 2014 book Superintelligence, which defined it as a technology sufficient to achieve “complete world domination.” How might anyone gain a DSA? A superintelligence might develop cyber advantages that cripple the adversary’s command-and-control capabilities. Or the superintelligence could self-recursively improve such that the lab or state that controls it gains an insurmountable scientific advantage. Once an AI reaches a certain capability threshold, it might need only weeks or hours to evolve into a superintelligence. And if an American lab builds it, it might help to lock in the dominance of another American century…
I am skeptical of the decisive strategic advantage when I filter it through my main preoccupation: understanding China’s technology trajectories. On AI, China is behind the US, but not by years. There’s no question that American reasoning models are more sophisticated than the likes of DeepSeek and Qwen. But the Chinese efforts are doggedly in pursuit, sometimes a bit closer to US models, sometimes a bit further. By virtue of being open-source (or at least open-weight), the Chinese models have found receptive customers overseas, sometimes with American tech companies.
If US labs achieve superintelligence, the Chinese labs are probably on a good footing to follow closely. Unless the DSA is decisive immediately, it’s not obvious that the US will have a monopoly on this technology, **just as it could not keep it over the bomb**…
I am not a skeptic of AI. I am a skeptic only of the decisive strategic advantage, which treats awakening the superintelligence as the final goal. Rather than “winning the AI race,” I prefer to say that the US and China need to “win the AI future.” There is no race with a clear end point or a shiny medal for first place. Winning the future is the more appropriately capacious term that incorporates the agenda to build good reasoning models as well as the effort to diffuse it across society. For the US to come ahead on AI, it should build more power, revive its manufacturing base, and figure out how to make companies and workers make use of this technology. Otherwise China might do better when compute is no longer the main bottleneck.
Wang’s point about nuclear weapons (bolded) raises a critical question about seeing AI as a race. The advent of nuclear weapons was a true revolution in military affairs, a genuinely “decisive strategic technology” in ending World War II. Scholars still debate whether the US needed to use the bomb against Japan, but the war would surely have ended differently had the Soviet Union, Japan or Nazi Germany developed it first. The race really did matter.
Then again, Wang’s point appears to be about the Cold War, and it’s less clear that coming second in developing nuclear weapons, or even third or fourth, made as much of a difference in that era.
So whether you choose to see the development of AI as a race depends, first, on whether you think it will be a “super weapon” eclipsing even nuclear weapons in importance, and second, on whether you think the US and China are already in a World War II-like existential struggle or merely a Cold War-like wrestling match for advantage.
Of course, Wang’s voice is only one in what is already a crowded field of commentary on the strategic implications of AI. Here’s a reading list.