{"id":155925,"date":"2025-08-18T14:33:11","date_gmt":"2025-08-18T14:33:11","guid":{"rendered":"https:\/\/www.europesays.com\/us\/155925\/"},"modified":"2025-08-18T14:33:11","modified_gmt":"2025-08-18T14:33:11","slug":"expert-warns-dangerous-ai-deepfakes-can-fool-anyone","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/155925\/","title":{"rendered":"Expert warns dangerous AI deepfakes can fool anyone"},"content":{"rendered":"<p>WASHINGTON (7News) \u2014 We live in a world where we trust what we see and hear. But what if the person on the other end of the line\u2014the voice of your boss or the face of a family member\u2014isn&#8217;t a person at all?<\/p>\n<p>This is the reality of an increasingly sophisticated form of fraud that targets the most human of instincts: our trust. It&#8217;s a technology so advanced that it can turn your own identity against you.<\/p>\n<p>In a recent demonstration for this story, I was shown a video of me talking on our news set. It was not me at all &#8211; but entirely AI-generated. <\/p>\n<p>That video was created in less than three minutes using a single photo and just three seconds of my voice. The ease of this technology is startling.<\/p>\n<p>&#8220;Anyone from the 8-year-old to the 80-year-old can create a realistic deepfake attack in just a few minutes,&#8221; says Brian Long, CEO of Adaptive Security. His company helps corporations build defenses against these AI attacks.<\/p>\n<p>I asked Long if we were to the point where anyone could create a deepfake good enough to fool a government agency. He was blunt.  &#8220;Oh, definitely to fool a government agency,&#8221; said Long. &#8220;I think more troubling is good enough to fool the parent of a child, good enough to fool someone who works every day with a coworker.&#8221;<\/p>\n<p>Long said that with just one image of a person, he can create a fully interactive video of that individual. 
The same applies to a voice\u2014even a short voicemail greeting is enough to create a deepfake that can understand and respond in real time.<\/p>\n<p>Long demonstrated this during our interview. A deepfake of my voice called an unsuspecting co-worker, pretending to be in a panic. <\/p>\n<p>The fake voice said:<\/p>\n<p>&#8220;It&#8217;s Lisa Fletcher. I&#8217;m about to go on air and I can&#8217;t get into my computer to reset my password. Can you please help me reset it and text me the code? I need it urgently.&#8221;<\/p>\n<p>When the person on the other end hesitated, asking for proof of identity, the deepfake responded:<\/p>\n<p>&#8220;Okay. I understand your concern. I need you to do this quickly. Can you verify my identity through company records or check my employee ID once you&#8217;ve confirmed it&#8217;s me? Please reset my password and text me the code immediately.&#8221;<\/p>\n<p>Long said that a request like this, layered with a few specific details, is all it takes to trick someone. The most unsettling part is that the deepfake wasn&#8217;t using pre-programmed sentences; it was interacting and responding to the live conversation. &#8220;It will respond and remember the conversation,&#8221; said Long.<\/p>\n<p><strong>A Growing Crisis<\/strong><\/p>\n<p>The threat is costing corporations millions. In 2024, police in Hong Kong reported that a finance worker at a multinational company was tricked into paying out $25 million after criminals used deepfake technology to impersonate the company&#8217;s chief financial officer in a video conference call.<\/p>\n<p>Sam Altman, the CEO of OpenAI, has warned that the world is on the cusp of an &#8220;AI fraud crisis.&#8221;<\/p>\n<p>&#8220;There&#8217;s no way it can be controlled because you saw earlier this year that China came out with a model called DeepSeek, which is an open-source model,&#8221; said Long. &#8220;And it has little to no moderation. 
So if you ask it to do bad stuff, it&#8217;ll probably do those things.&#8221;<\/p>\n<p>Deepfake scams are also being used for political and social engineering. In 2024, Senator Ted Cruz was targeted in a deepfake scam where a robocall used audio of his voice to spread misinformation. In a separate incident, a deepfake impersonating Ukraine&#8217;s foreign minister was able to get on a call with the chairman of the U.S. Senate Foreign Relations Committee and demand sensitive information.<\/p>\n<p><strong>How to Protect Yourself<\/strong><\/p>\n<p>With deepfake technology becoming increasingly difficult to distinguish from authentic content, Long says we are entering a new reality where people must think twice before trusting a video or audio of someone they know.<\/p>\n<p>Long, whose company deals with thousands of businesses, says most don&#8217;t report these crimes publicly. <\/p>\n<p>He recommends that if you are even the slightest bit suspicious of a call\u2014whether it&#8217;s a phone or video call\u2014do nothing and follow up separately with the individual.<\/p>\n<p>He also suggests a couple of other things you can do to protect yourself and your family:<\/p>\n<ul>\n<li><strong>Create a shared password: <\/strong>Establish a password that only you and your family members know. If you receive a call that sounds or looks like a family member who is in an urgent situation or needs money, ask them for the password to verify their identity<\/li>\n<li><strong>Use a generic voicemail greeting: <\/strong>Long recommends using the default robotic voice for your voicemail message. The number of words in a typical outgoing message is enough to create a deepfake of your voice for an entire conversation<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"WASHINGTON (7News) \u2014 We live in a world where we trust what we see and hear. 
But what&hellip;\n","protected":false},"author":3,"featured_media":155926,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[691,738,115,72851,820,18769,28052,4995,158,67,132,68],"class_list":{"0":"post-155925","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-crisis","11":"tag-deepfake","12":"tag-fraud","13":"tag-identity","14":"tag-misinformation","15":"tag-security","16":"tag-technology","17":"tag-united-states","18":"tag-unitedstates","19":"tag-us"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@us\/115050263555020201","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/155925","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=155925"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/155925\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/155926"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=155925"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=155925"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=155925"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}