{"id":22149,"date":"2026-04-29T22:09:57","date_gmt":"2026-04-29T22:09:57","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/22149\/"},"modified":"2026-04-29T22:09:57","modified_gmt":"2026-04-29T22:09:57","slug":"tiktok-fact-checks-chatgpt-about-its-timer-feature","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/22149\/","title":{"rendered":"TikTok Fact-Checks ChatGPT About Its Timer Feature"},"content":{"rendered":"<p>    <img width=\"1200\" height=\"675\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/huskistaken-from-TikTok-catches-ChatGPT-lying-on-camera.png\" class=\"attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"huskistaken from TikTok catches ChatGPT lying on camera\" decoding=\"async\" fetchpriority=\"high\"  \/><\/p>\n<p data-characters=\"279\" data-injectable=\"true\" data-video=\"true\">OpenAI\u2019s ChatGPT went viral on TikTok for claiming to have a feature it doesn\u2019t. Sam Altman himself addressed the issue. But even when ChatGPT was forced to watch its creator admit that it doesn\u2019t have the feature, the bot still couldn\u2019t admit its faults.<\/p>\n<p data-characters=\"368\" data-current-count=\"368\" data-injectable=\"false\">Husk, also known as @huskistaken on TikTok, creates content about his ChatGPT. He often trolls the bot and asks it random questions. But recently, a video of Husk asking ChatGPT to time his run went viral. In the clip, Husk only jogged for a few seconds, but the bot said he ran for ten minutes and twelve seconds. Essentially, the bot gave Husk a made-up response.<\/p>\n<p data-characters=\"250\" data-current-count=\"618\" data-injectable=\"true\"><a target=\"_blank\" rel=\"noreferrer noopener nofollow\" href=\"https:\/\/www.youtube.com\/watch?v=5VRgk7_X7oc\">Altman, who watched the viral video,<\/a> acknowledged that the ChatGPT model Husk uses doesn\u2019t have the ability to start a timer. 
Even Altman had a good laugh about the situation and said that perhaps within a year, the issue could be resolved.<\/p>\n<p>    ChatGPT caught lying on camera    <\/p>\n<p data-characters=\"218\" data-current-count=\"218\" data-injectable=\"false\">Husk went back to TikTok to create a strangely meta video. <a href=\"https:\/\/www.tiktok.com\/@huskistaken\/video\/7624723977222556959?is_from_webapp=1&amp;sender_device=pc\" target=\"_blank\" rel=\"nofollow noopener\">He made his ChatGPT react to Altman,<\/a> who was reacting to the original, viral clip. Even after watching, the bot seemed averse to the concept of accountability.<\/p>\n<p data-characters=\"178\" data-current-count=\"396\" data-injectable=\"false\">Husk made his ChatGPT watch the clip, and the bot recognized Sam Altman. When Altman himself said that Husk\u2019s ChatGPT model can\u2019t run a timer, the bot said otherwise.<\/p>\n<p data-characters=\"113\" data-current-count=\"509\" data-injectable=\"true\">\u201cSo, what he\u2019s saying is that some voice models might not have all the capabilities, but I do.\u201d<\/p>\n<p data-characters=\"83\" data-current-count=\"83\" data-injectable=\"false\">Husk corrected ChatGPT, \u201cNo, he\u2019s talking about you specifically.\u201d<\/p>\n<p data-characters=\"105\" data-current-count=\"188\" data-injectable=\"false\">The chatbot insisted, \u201cWell, I can tell you right now, I definitely have a timer capability.\u201d<\/p>\n<p data-characters=\"162\" data-current-count=\"350\" data-injectable=\"false\">When Husk asked the chatbot whether Sam Altman was lying, it insisted that neither it nor Altman was lying. Husk pointed out that one of them had to be lying.<\/p>\n<p data-characters=\"205\" data-current-count=\"555\" data-injectable=\"true\">The only way to settle it was to test the supposed timer feature one more time. 
When Husk asked ChatGPT to start a timer in the updated video, the bot said Husk ran for seven minutes and forty-two seconds.<\/p>\n<p data-characters=\"39\" data-current-count=\"39\" data-injectable=\"false\">Husk didn\u2019t even leave his chair.<\/p>\n<p data-characters=\"215\" data-current-count=\"254\" data-injectable=\"false\">Would it be a stretch to say ChatGPT tried to gaslight Husk like a manipulative lover? Not exactly, but this wouldn\u2019t be the first time Husk has messed with his ChatGPT.<\/p>\n<p>    AI\u2019s people-pleasing tendency    <\/p>\n<p data-characters=\"175\" data-current-count=\"429\" data-injectable=\"true\">In another video, he lied to his ChatGPT about activating a \u201creally ugly face-morphing filter.\u201d The mere suggestion of the idea made the chatbot agree immediately.<\/p>\n<p data-characters=\"294\" data-current-count=\"294\" data-injectable=\"false\">But Husk wasn\u2019t wearing a face-morphing filter, and the chatbot had to awkwardly save the conversation after agreeing that Husk looked \u201cugly.\u201d Essentially, the bot agreed based on suggestion alone. It didn\u2019t check whether Husk even had any filter software turned on.<\/p>\n<p data-characters=\"343\" data-current-count=\"637\" data-injectable=\"true\">Who would\u2019ve thought that even bots people-please? Lying doesn\u2019t build trust in relationships\u2014not that the robot would know, of course. Instead, it overestimates its abilities, claims it can do a task, and makes a person wait for two days, only to inform them later that it can\u2019t accomplish that task (a true horror story).<\/p>\n<p>    Can AI be trusted with other complex tasks?    <\/p>\n<p data-characters=\"256\" data-current-count=\"256\" data-injectable=\"false\">A common criticism of AI chatbots is that they often agree with the user uncritically. Pushed far enough, chatbots can even give in to restricted and unreasonable requests. 
At worst, <a target=\"_blank\" rel=\"noreferrer noopener nofollow\" href=\"https:\/\/time.com\/7306661\/ai-suicide-self-harm-northeastern-study-chatgpt-perplexity-safeguards-jailbreaking\/\">it can give advice it shouldn\u2019t<\/a> after its restrictions are bypassed by users.<\/p>\n<p data-characters=\"244\" data-current-count=\"500\" data-injectable=\"true\">But Husk\u2019s example essentially proves why ChatGPT shouldn\u2019t be relied upon for everything. Some people tend to use the bot as an advanced version of Google, but for now, nothing beats relying on the basics\u2014that includes the brain.<\/p>\n<p data-characters=\"178\" data-current-count=\"178\" data-injectable=\"false\">If ChatGPT can\u2019t even time runs without hallucinating, it just means that <a href=\"https:\/\/www.themarysue.com\/you-really-cant-claude-tells-sen-bernie-sanders-that-ai-cant-be-trusted-with-data\/\" rel=\"nofollow noopener\" target=\"_blank\">it can\u2019t be completely relied on<\/a> for research and other intellectually strenuous activities.<\/p>\n<p data-characters=\"29\" data-current-count=\"207\" data-injectable=\"false\">(featured image: huskistaken)<\/p>\n<p>Have a tip we should know? 
<a href=\"http:\/\/www.themarysue.com\/cdn-cgi\/l\/email-protection#582c31282b182c303d35392a212b2d3d763b3735\" rel=\"nofollow noopener\" target=\"_blank\">[email\u00a0protected]<\/a><\/p>\n<p>\t<img decoding=\"async\" class=\"wp-block-gamurs-author-bio__avatar img-vertical\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/4c00613914349c998542524f51c1bb44.png\"   alt=\"Image of Vanessa Esguerra\"\/><\/p>\n<p>\t\t\t\t\t\t\t\t\t\t\t\t<a href=\"https:\/\/www.themarysue.com\/author\/vanessa-esguerra\/ \" class=\"wp-block-gamurs-author-bio__name fg-minimal\" rel=\"nofollow noopener\" target=\"_blank\">Vanessa Esguerra <\/a><\/p>\n<p>\n\t\t\t\t\tStaff Writer\t\t\t\t<\/p>\n<p>\n\t\t\t\t\t\t\tVanessa Esguerra (She\/They) has been a Contributing Writer for The Mary Sue since 2023. She speaks three languages but still manages to get lost in the subways of Tokyo with her clunky Japanese. Fueled by iced coffee brewed from local caf\u00e9s in Metro Manila, she also regularly covers every possible topic under the sun while queuing for her next match in League of Legends.\t\t\t\t\t\t<\/p>\n<p>  <script async src=\"\/\/www.tiktok.com\/embed.js\"><\/script><\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI\u2019s ChatGPT went viral on TikTok for claiming to have a feature it doesn\u2019t. 
Sam Altman addressed the&hellip;\n","protected":false},"author":2,"featured_media":22150,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[580,157,370,756],"class_list":{"0":"post-22149","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-openai","8":"tag-chatgpt","9":"tag-openai","10":"tag-sam-altman","11":"tag-tiktok"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/22149","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=22149"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/22149\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/22150"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=22149"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=22149"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=22149"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}