<h1>The lost art of admitting what you don’t know</h1>

<p>When I applied to Cambridge university, my first interview was with a professor who invited me to sit, pressed his fingertips together, looked at me searchingly, then said: “Is the nation-state in decline?”</p>

<p>My heart fell. Not only did I not know the answer, I didn’t even really understand the question. But I had heard — possibly from my state school, or else from the university — that these interviews were “not testing what you know, but how you think”. So I took a breath and said: “I’m not sure what a nation-state is.”</p>

<p>It worked out well. The professor said that was fine, asked me a few simple questions to help me figure out the term, then a few more as we worked our way through the original question. In the end, I was offered a place.</p>

<p>It was a formative experience for me. Even so, I have found it harder and harder to say the words “I don’t know” as the years have gone on, and I don’t think I’m alone.</p>

<p>In many ways, this is understandable. The more “expert” you become, the more you think you ought to know, and the more you fear your credibility will suffer if you ever admit otherwise.</p>

<p>But the aversion seems to have spread to all sorts of places, including settings where it should be perfectly fine to say you don’t know. A student at a prestigious US business school recently told me of a fellow student who sat in front of her during lectures. When the professor asked a question, he would type it surreptitiously into ChatGPT, then read out the answer as though it were his own.</p>

<p>What is going on? One possibility is the lack of role models. Confidence is rewarded in public life. It is rare to hear the phrase “I don’t know” in TV interviews. Little wonder: many media training courses teach people the “ABC” technique to avoid having to say those words when faced with a question to which they do not know the answer (or do not want to give it). Acknowledge the question. Bridge to safer ground (“What’s really important to know is . . . ”). Communicate the message you have already planned to convey.</p>

<p>New technology has also made it easier to bluff. First search engines, and now large language models like ChatGPT, have made it simpler than ever to avoid the discomfort of admitting what you don’t know.</p>

<p>And yet, one of the great ironies of LLMs is that they have the exact same tendency we do. When they do not know the answer to a question, for example because they can’t access a vital file, they often make something up rather than say “I don’t know”. When OpenAI put its o3 model through one particular test, the <a href="https://openai.com/index/introducing-gpt-5/" data-trackable="link" target="_blank" rel="noopener">company found</a> that it “gave confident answers about non-existent images 86.7 per cent of the time.”</p>

<p>There are costs to not admitting what you don’t know. For a start, you miss the opportunity to learn. Most experts are remarkably generous to those who ask curious questions. Some of my favourite journalism projects over the years have begun with an interesting question to which I didn’t know the answer.</p>

<p>There is also the risk that you undermine your credibility even more when you bluff. We have probably all had an experience like this at some point: an impressive polymath pundit or publication ventures into your own area of expertise, and you realise with a shock that they don’t know what they’re talking about. After that, you begin to doubt them on every topic.</p>

<p>The AI industry is particularly alert to this risk. The technology companies know their tools will be of limited use in sectors like law and medicine if they continue to give confident-but-wrong answers some of the time. Efforts are under way to teach LLMs how to say “I don’t know”, or at least to express their level of confidence in a given answer. OpenAI says it has trained its new model, GPT-5, to “fail gracefully when posed with tasks that it cannot solve”.</p>

<p>But this is not an easy problem to fix fully. One problem is that LLMs do not have a concept of “truth”. Another is that they have been trained by humans steeped in the culture we have just discussed. “In order to achieve a high reward during training, reasoning models may learn to lie about successfully completing a task or be overly confident about an uncertain answer,” as OpenAI has put it.</p>

<p>In other words, while these are not necessarily the tools we need, they might just be the tools we deserve.</p>

<p><a href="mailto:sarah.oconnor@ft.com" data-trackable="link" target="_blank" rel="noopener">sarah.oconnor@ft.com</a></p>