{"id":14869,"date":"2026-04-24T00:44:09","date_gmt":"2026-04-24T00:44:09","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/14869\/"},"modified":"2026-04-24T00:44:09","modified_gmt":"2026-04-24T00:44:09","slug":"approaching-ai-like-a-scientist","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/14869\/","title":{"rendered":"Approaching AI Like a Scientist"},"content":{"rendered":"<p>Newswise \u2014 What is the language model of the crow?<\/p>\n<p>That was the question that motivated Ali Farhadi, a professor at the University of Washington, when he joined the Allen Institute for Artificial Intelligence (Ai2) at its founding in 2014; he later led the institute as CEO.\u00a0<\/p>\n<p>It was also the question he posed to the audience at the most recent installment of Columbia Engineering\u2019s\u00a0<a href=\"https:\/\/www.engineering.columbia.edu\/research-innovation\/strategic-areas\/ai-columbia-engineering\/lecture-series-ai\" rel=\"nofollow noopener\" target=\"_blank\">Lecture Series in AI<\/a>, held Feb. 27 at Columbia\u2019s Morningside campus. Farhadi began his presentation by showing a short video of a crow watching a person dig a hole in the ice to catch fish. At first glance, the scene seems ordinary. For Farhadi, it invites a deeper question: What does the crow actually understand about what it\u2019s seeing? Is it simply observing movement, or is it predicting what might happen next?<\/p>\n<p>That moment set the tone for the entire talk. Instead of beginning with answers, the scientific process begins with curiosity: observation leads to a question, which leads to an experiment. It\u2019s a small example of what it means to think like a scientist, he said.<\/p>\n<p>In science, the way a problem is framed often matters as much as the solution itself. Throughout the talk, Farhadi returned to this idea, showing how careful observation and thoughtful questions drive advances in artificial intelligence. 
What followed was a series of examples, from a crow observing a fisherman to AI agents learning in simulated worlds, that illustrated how scientific thinking continues to shape the future of the field.<\/p>\n<p>Learning from the success of language models<\/p>\n<p>From there, he reflected on the rise of large language models and asked what made them successful in the first place. Their progress came from a few key ingredients: enormous datasets gathered from the web, a learning objective based on predicting the next word, and the discovery that scaling these systems often produces new capabilities.<\/p>\n<p>But scientific thinking didn\u2019t stop there. Instead, he asked the next logical question: What would these ingredients look like outside of language? If language models learn by crawling the web, then an intelligent agent interacting with the physical world might need to \u201ccrawl the world.\u201d That means moving through environments, he said, observing what happens, and learning from experience.<\/p>\n<p>Because deploying millions of robots in the real world isn\u2019t practical, his team developed\u00a0<a href=\"https:\/\/ai2thor.allenai.org\/\" rel=\"nofollow noopener\" target=\"_blank\">AI2-THOR,<\/a>\u00a0an open-source framework for environment simulation. Through simulation, researchers built custom 3D worlds where robots could interact with their environments and be trained at scale.<\/p>\n<p>Rethinking what \u201creasoning\u201d really means<\/p>\n<p>Another part of the lecture challenged the way the field discusses reasoning in AI.<\/p>\n<p>Today, reasoning is often associated with solving math problems or explaining answers step by step in language. But Farhadi argued that this may be too narrow. In the real world, reasoning often involves actions\u2014moving through space, manipulating objects, and interacting with environments.<\/p>\n<p>To explore this idea, researchers began collecting a new kind of data: trajectories through space. 
Instead of representing reasoning as sentences, these trajectories captured how an agent moved through an environment to complete a task.<\/p>\n<p>In a sense, they function like a physical version of a \u201cchain of thought,\u201d he said, where reasoning unfolds through actions rather than words.<\/p>\n<p>Why AI needs scientists, not just hackers<\/p>\n<p>Farhadi reflected on the rapid pace of progress in AI and what it means for the future of the field.<\/p>\n<p>\u201cAI has come a long way; it\u2019s been phenomenal to watch it, to be part of it, and I think it still has a long way to go,\u201d he said. But he also warned that breakthroughs in AI \u201cdo not come from shortcuts or quick hacks, they come from systematic thinking.\u201d<\/p>\n<p>\u201cI really hope that the scientists in this room\u2026 do not subscribe to this hacker mentality,\u201d he added. \u201cIt requires scientists and systematic thinking.\u201d<\/p>\n<p>For students interested in the field, his advice is practical: Learn the tools that are shaping AI today and evolve with them as they change. People who know how to use these systems will be more productive than those who ignore them. And one skill remains especially important\u2014learning how to code.<\/p>\n<p>While exceptional people sometimes succeed outside formal education, he noted that most researchers develop their skills through structured study. \u201cWhat matters most is developing a principled approach to solving problems, whether someone becomes a scientist or an engineer.\u201d<\/p>\n<p>At the time of this talk, Ali Farhadi was serving as CEO of Ai2. He has since transitioned to a role at Microsoft.<\/p>\n","protected":false},"excerpt":{"rendered":"Newswise \u2014 What is the language model of the crow? 
That was the question that motivated Ali Farhadi,&hellip;\n","protected":false},"author":2,"featured_media":14870,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,11209,11210,372,865,373,134],"class_list":{"0":"post-14869","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificial-intelligencelanguage-modelsmachine-learning","11":"tag-columbia-university-school-of-engineering-and-applied-science","12":"tag-engineering","13":"tag-ethics-and-research-methods","14":"tag-newswise","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/14869","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=14869"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/14869\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/14870"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=14869"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=14869"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=14869"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}