Newswise — What is the language model of the crow?
That was the question that motivated Ali Farhadi, a professor at the University of Washington, when he joined the Allen Institute for Artificial Intelligence (Ai2) in 2014.
It was also the question he posed to the audience at the most recent installment of Columbia Engineering’s Lecture Series in AI, held Feb. 27 at Columbia’s Morningside campus. Farhadi began his presentation by showing a short video of a crow watching a person dig a hole in the ice to catch fish. At first glance, the scene seems ordinary. For Farhadi, it invites a deeper question: What does the crow actually understand about what it’s seeing? Is it simply observing movement, or is it predicting what might happen next?
That moment set the tone for the entire talk. Instead of beginning with answers, Farhadi began with curiosity: observation leads to a question, and the question leads to an experiment. It’s a small example of what it means to think like a scientist, he said.
In science, the way a problem is framed often matters as much as the solution itself. Throughout the talk, Farhadi returned to this idea, showing how careful observation and thoughtful questions drive advances in artificial intelligence. What followed was a series of examples, from a crow observing a fisherman to AI agents learning in simulated worlds, that illustrated how scientific thinking continues to shape the future of the field.
Learning from the success of language models
From there, he reflected on the rise of large language models and asked what made them successful in the first place. Their progress came from a few key ingredients: enormous datasets gathered from the web, a learning objective based on predicting the next word, and the discovery that scaling these systems often produces new capabilities.
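The next-word objective is simple enough to show in miniature. The toy sketch below (not anything presented in the talk) trains the most basic possible version of it — a bigram model that counts, over a tiny hypothetical corpus, which word tends to follow which — and then predicts the likeliest continuation. Large language models replace the counts with a neural network and the toy corpus with web-scale text, but the objective is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real models train on web-scale text.
corpus = "the crow watched the man dig a hole the crow waited".split()

# Count which word follows each word: a bigram model, the simplest
# form of the next-word-prediction objective.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "crow" follows "the" most often here
```

Scaling this idea up — richer models, vastly more data — is where the surprising new capabilities Farhadi mentions tend to emerge.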
But scientific thinking didn’t stop there. Instead, he asked the next logical question: What would these ingredients look like outside of language? If language models learn by crawling the web, then an intelligent agent interacting with the physical world might need to “crawl the world.” That means moving through environments, he said, observing what happens, and learning from experience.
Because deploying millions of robots in the real world isn’t practical, his team developed Thor, an open-source simulation framework. Within it, researchers built custom 3D worlds where robots could interact with their surroundings and be trained at scale.
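Thor itself renders interactive 3D scenes, but the training loop it supports can be sketched in miniature. The hypothetical stand-in below (not Thor’s actual API) shows the basic contract simulation-based training relies on: an agent issues an action, the environment enforces its rules, and the agent observes the result.

```python
# A hypothetical, minimal stand-in for a simulated environment.
# Real frameworks like Thor render 3D scenes; this sketch only shows
# the step/observe loop that simulation-based training relies on.
class GridWorld:
    def __init__(self, width=5, height=5):
        self.width, self.height = width, height
        self.agent = (0, 0)  # the agent starts in a corner

    def step(self, action):
        """Apply an action and return the new observation (agent position)."""
        x, y = self.agent
        moves = {"MoveAhead": (x, y + 1), "MoveBack": (x, y - 1),
                 "MoveRight": (x + 1, y), "MoveLeft": (x - 1, y)}
        nx, ny = moves.get(action, (x, y))
        # The environment enforces its physics: the agent cannot leave the grid.
        if 0 <= nx < self.width and 0 <= ny < self.height:
            self.agent = (nx, ny)
        return self.agent

env = GridWorld()
for action in ["MoveAhead", "MoveAhead", "MoveRight"]:
    obs = env.step(action)
print(obs)  # (1, 2)
```

Because the loop is cheap and repeatable, an agent can "crawl the world" millions of times in simulation — the scale that is impossible with physical robots.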
Rethinking what “reasoning” really means
Another part of the lecture challenged the way the field discusses reasoning in AI.
Today, reasoning is often associated with solving math problems or explaining answers step by step in language. But Farhadi argued that this may be too narrow. In the real world, reasoning often involves actions—moving through space, manipulating objects, and interacting with environments.
To explore this idea, researchers began collecting a new kind of data: trajectories through space. Instead of representing reasoning as sentences, these trajectories captured how an agent moved through an environment to complete a task.
In a sense, they function like a physical version of a “chain of thought,” he said, where reasoning unfolds through actions rather than words.
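The analogy can be made concrete with a small sketch. The record types below are hypothetical (not the data format Farhadi’s team uses); they simply show how a "physical chain of thought" is an ordered trajectory of actions and resulting states, inspectable step by step the way a textual chain of thought is read line by line.

```python
from dataclasses import dataclass, field

# Hypothetical record types for a "physical chain of thought":
# an ordered trajectory of actions and the states they produce.
@dataclass
class Step:
    action: str
    position: tuple  # where the agent ended up after the action

@dataclass
class Trajectory:
    task: str
    steps: list = field(default_factory=list)

    def log(self, action, position):
        """Append one action and its resulting state to the trajectory."""
        self.steps.append(Step(action, position))

traj = Trajectory(task="reach the table")
traj.log("MoveAhead", (0, 1))
traj.log("TurnRight", (0, 1))
traj.log("MoveAhead", (1, 1))

# Replaying the trajectory step by step, like reading a reasoning trace.
print([s.action for s in traj.steps])  # ['MoveAhead', 'TurnRight', 'MoveAhead']
```

Collected at scale, such trajectories play the role that sentences play for language models: data from which an agent can learn how tasks are actually carried out.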
Why AI needs scientists, not just hackers
Farhadi reflected on the rapid pace of progress in AI and what it means for the future of the field.
“AI has come a long way; it’s been phenomenal to watch it, to be part of it, and I think it still has a long way to go,” he said. But he also warned that breakthroughs in AI “do not come from shortcuts or quick hacks, they come from systematic thinking.”
“I really hope that the scientists in this room… do not subscribe to this hacker mentality,” he added. “It requires scientists and systematic thinking.”
For students interested in the field, his advice is practical: Learn the tools that are shaping AI today and evolve with them as they change. People who know how to use these systems will be more productive than those who ignore them. And one skill remains especially important—learning how to code.
While exceptional people sometimes succeed outside formal education, he noted that most researchers develop their skills through structured study. “What matters most is developing a principled approach to solving problems, whether someone becomes a scientist or an engineer.”
At the time of this talk, Ali Farhadi was serving as CEO of Ai2. He has since transitioned to a role at Microsoft.