The US and China are, by many measures, archrivals in the field of artificial intelligence, with companies racing to outdo each other on algorithms, models, and specialized silicon. And yet, the world’s AI superpowers still collaborate to a surprising degree when it comes to cutting-edge research.
A WIRED analysis of more than 5,000 AI research papers presented last month at the industry’s premier conference, Neural Information Processing Systems (NeurIPS), reveals a significant amount of collaboration between US and Chinese labs.
The analysis found that 141 of the 5,290 total papers (roughly 3 percent) involve collaboration between authors affiliated with US institutions and those affiliated with Chinese ones. That rate has held fairly steady: in 2024, 134 of 4,497 total papers involved authors from institutions in both countries.
WIRED also looked at how algorithms and models developed in one country are shared and adapted across the Pacific. The transformer architecture, developed by a team of researchers at Google and now widely used across the industry, is featured in 292 papers with authors from Chinese institutions. Meta’s Llama family of models was a key element of the research presented in 106 of these papers. Meanwhile, the increasingly popular large language model Qwen from Chinese tech giant Alibaba appears in 63 papers that include authors from US organizations.
Jeffrey Ding, an assistant professor at George Washington University who tracks China’s AI landscape, says he is not surprised to see this level of teamwork. “Whether policymakers on both sides like it or not, the US and Chinese AI ecosystems are inextricably enmeshed—and both benefit from the arrangement,” Ding says.
The analysis no doubt simplifies the degree to which the US and China share i...