{"id":15923,"date":"2025-04-13T07:38:09","date_gmt":"2025-04-13T07:38:09","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/15923\/"},"modified":"2025-04-13T07:38:09","modified_gmt":"2025-04-13T07:38:09","slug":"small-language-models-are-the-new-rage-researchers-say","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/15923\/","title":{"rendered":"Small Language Models Are the New Rage, Researchers Say"},"content":{"rendered":"<p>The original version of <a href=\"https:\/\/www.quantamagazine.org\/why-do-researchers-care-about-small-language-models-20250310\/\" target=\"_blank\" rel=\"noopener\">this story<\/a> appeared in <a href=\"https:\/\/www.quantamagazine.org\" target=\"_blank\" rel=\"noopener\">Quanta Magazine<\/a>.<\/p>\n<p class=\"paywall\">Large language models work well because they\u2019re so large. The latest models from OpenAI, Meta, and DeepSeek use hundreds of billions of \u201cparameters\u201d\u2014the adjustable knobs that determine connections among data and get tweaked during the training process. With more parameters, the models are better able to identify patterns and connections, which in turn makes them more powerful and accurate.<\/p>\n<p class=\"paywall\">But this power comes at a cost. Training a model with hundreds of billions of parameters takes huge computational resources. To train its Gemini 1.0 Ultra model, for example, Google reportedly spent <a href=\"https:\/\/aiindex.stanford.edu\/wp-content\/uploads\/2024\/05\/HAI_AI-Index-Report-2024.pdf\" target=\"_blank\" rel=\"noopener\">$191 million<\/a>. Large language models (LLMs) also require considerable computational power each time they answer a request, which makes them notorious energy hogs. A single query to ChatGPT <a data-offer-url=\"https:\/\/www.epri.com\/research\/products\/000000003002028905\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.epri.com\/research\/products\/000000003002028905&quot;}\" href=\"https:\/\/www.epri.com\/research\/products\/000000003002028905\" rel=\"nofollow noopener\" target=\"_blank\">consumes about 10 times<\/a> as much energy as a single Google search, according to the Electric Power Research Institute.<\/p>\n<p class=\"paywall\">In response, some researchers are now thinking small. IBM, Google, Microsoft, and OpenAI have all recently released small language models (SLMs) that use a few billion parameters\u2014a fraction of their LLM counterparts.<\/p>\n<p class=\"paywall\">Small models are not used as general-purpose tools like their larger cousins. But they can excel on specific, more narrowly defined tasks, such as summarizing conversations, answering patient questions as a health care chatbot, and gathering data in smart devices. \u201cFor a lot of tasks, an 8 billion\u2013parameter model is actually pretty good,\u201d said <a data-offer-url=\"https:\/\/zicokolter.com\/\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/zicokolter.com\/&quot;}\" href=\"https:\/\/zicokolter.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Zico Kolter<\/a>, a computer scientist at Carnegie Mellon University. They can also run on a laptop or cell phone, instead of a huge data center. 
Researchers have also explored ways to create small models by starting with large ones and trimming them down. One method, known as pruning, entails removing unnecessary or inefficient parts of a neural network, the sprawling web of connected data points that underlies a large model.

Pruning was inspired by a real-life neural network, the human brain, which gains efficiency by snipping away synaptic connections as a person ages. Today's pruning approaches trace back to a 1989 paper (https://www.researchgate.net/publication/221618539_Optimal_Brain_Damage) in which the computer scientist Yann LeCun, now at Meta, argued that up to 90 percent of the parameters in a trained neural network could be removed without sacrificing efficiency. He called the method "optimal brain damage." Pruning can help researchers fine-tune a small language model for a particular task or environment.
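For a concrete sense of what pruning does, here is a minimal PyTorch sketch. LeCun's optimal brain damage ranks parameters by a second-derivative saliency estimate; this sketch instead uses PyTorch's simpler built-in magnitude (L1) pruning to zero out 90 percent of one layer's weights, a figure chosen only to echo the article. The single toy layer and the numbers are illustrative assumptions, not a production recipe.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)
layer = nn.Linear(512, 512)   # stand-in for one layer of a trained network

# Zero out the 90 percent of weights with the smallest magnitudes.
prune.l1_unstructured(layer, name="weight", amount=0.9)

sparsity = (layer.weight == 0).float().mean().item()
print(f"fraction of weights now zero: {sparsity:.2f}")

# Fold the pruning mask into the weight tensor; the slimmed-down model
# could then be fine-tuned for a specific task or environment.
prune.remove(layer, "weight")
```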
For researchers interested in how language models do the things they do, smaller models offer an inexpensive way to test novel ideas. And because they have fewer parameters than large models, their reasoning might be more transparent. "If you want to make a new model, you need to try things," said Leshem Choshen, a research scientist at the MIT-IBM Watson AI Lab. "Small models allow researchers to experiment with lower stakes."

The big, expensive models, with their ever-increasing parameters, will remain useful for applications like generalized chatbots, image generators, and drug discovery (https://arxiv.org/html/2409.04481). But for many users, a small, targeted model will work just as well, while being easier for researchers to train and build. "These efficient models can save money, time, and compute," Choshen said.

Original story (https://www.quantamagazine.org/why-do-researchers-care-about-small-language-models-20250310/) reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.