{"id":295581,"date":"2025-07-27T09:56:17","date_gmt":"2025-07-27T09:56:17","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/295581\/"},"modified":"2025-07-27T09:56:17","modified_gmt":"2025-07-27T09:56:17","slug":"what-happened-when-i-asked-ai-to-do-my-job","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/295581\/","title":{"rendered":"What happened when I asked AI to do my job"},"content":{"rendered":"<p>On 22 November, 2022, I asked AI to write the introduction to The Independent\u2019s weekly IndyTech newsletter. It was eight days before the release of <a href=\"https:\/\/www.independent.co.uk\/topic\/chatgpt\" target=\"_blank\" rel=\"noopener\">ChatGPT<\/a>, and the tool I used was built on the precursor to the hugely popular chatbot, OpenAI\u2019s GPT-3.<\/p>\n<p>It did an okay job (you can read it <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/link.e.independent.co.uk\/view\/6112cb7814d391370561f210hq882.c4o\/aa58037c\">here<\/a>), and even came up with some unexpected \u2013 perhaps unintended \u2013 wordplay. 
\u201cFor now, it\u2019s better to have a human behind the keyboard,\u201d the AI concluded, \u201chands down\u201d.<\/p>\n<p>I also asked the latest AI image generator from OpenAI, DALL-E 2, to create a picture for the email using the prompt: \u201cThe journalist Anthony Cuthbertson dressed up as a robot.\u201d<\/p>\n<p>Both the picture and the text from 2022 already seem antiquated by today\u2019s standards, so on the eve of the launch of GPT-5 \u2013 which OpenAI boss Sam Altman claims is \u201csmarter than us in almost every way\u201d \u2013 I decided to try the experiment again.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/Dall-e-2-vs-Dall-e-3.png\"  loading=\"lazy\" alt=\"Both images were created with the same prompt \u2013 'the journalist Anthony Cuthbertson dressed up as a robot' \u2013 using DALL-E 2 in November 2022 (left) and DALL-E 3 in July 2025\" class=\"sc-1mc30lb-0 ggpMaE inline-gallery-btn\"\/><\/p>\n<p>Both images were created with the same prompt \u2013 &#8216;the journalist Anthony Cuthbertson dressed up as a robot&#8217; \u2013 using DALL-E 2 in November 2022 (left) and DALL-E 3 in July 2025 (Anthony Cuthbertson\/OpenAI)<\/p>\n<p>One of the biggest differences between GPT-3 and GPT-4.5, which currently powers ChatGPT, is that the AI is now trained on everything I\u2019ve ever written online. That means it can write in my style, or at least imitate it.<\/p>\n<p>So I again asked ChatGPT to \u201cwrite a newsletter in the style of Anthony Cuthbertson about AI writing a newsletter in the style of Anthony Cuthbertson\u201d. This is what it came up with:<\/p>\n<p>In a small yet telling twist of irony, I recently came across an AI-generated newsletter written in my style, discussing \u2013 rather confidently \u2013 how AI can write newsletters in my style. It was good. A little too good. <\/p>\n<p>I found myself thinking: Did I write this? No, I didn\u2019t. 
But maybe I did. Or maybe it\u2019s just the machine again.<\/p>\n<p>When an AI is trained on articles I\u2019ve written, then asked to write about AIs trained on articles I\u2019ve written, writing in the way I\u2019d write about them \u2013 it becomes a kind of linguistic feedback loop. One where authorship blurs, and the boundaries between originator and emulator start to dissolve.<\/p>\n<p><strong>A Brief Word From the Real Me (I Think)<\/strong><\/p>\n<p>Let me be clear: I\u2019m not worried about being replaced. Not exactly. But there is something deeply strange about watching a machine climb into your stylistic skin and start walking around. Is it flattery? Is it theft? Is it progress? I don\u2019t know. But I\u2019ll keep writing about it \u2013 until the next version of me does it better.<\/p>\n<p>That last bit \u2013 that \u2018brief word from the real me\u2019 \u2013 wasn\u2019t actually written by me. But while editing this, I did begin to question who wrote which bit.<\/p>\n<p>This is becoming a problem with text online. AI has become so good at writing like a human that it can sometimes be hard to tell whether a piece of text was actually written by a human. I know a journalist (not a colleague) who already uses AI to cut their workload by asking it to write basic news reports for them in their style.<\/p>\n<p>Once online, these AI-generated articles are then being fed back into the AI models to train them, creating the \u201clinguistic feedback loop\u201d that ChatGPT mentioned above. 
The outcome is an <a href=\"https:\/\/www.independent.co.uk\/topic\/internet\" target=\"_blank\" rel=\"noopener\">internet<\/a> full of factual errors and unoriginal content.<\/p>\n<p>It\u2019s reached the point that I now enjoy seeing spelling mistakes in an article, because at least then I know a human wrote it.<\/p>\n<p><strong>\u2018Peak Data\u2019 theory<\/strong><\/p>\n<p>A recent study found that AI-generated content is also plaguing academia, with millions of scientific papers in 2024 featuring the fingerprints of <a href=\"https:\/\/www.independent.co.uk\/topic\/artificial-intelligence\" target=\"_blank\" rel=\"noopener\">artificial intelligence<\/a>. The researchers from Germany\u2019s University of T\u00fcbingen discovered that large language models (LLMs) like ChatGPT frequently use the same 454 words, which include \u2018crucial\u2019, \u2018delves\u2019 and \u2018encompassing\u2019.<\/p>\n<p>The <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.adt3813\">study<\/a>, published in the journal Science Advances this month, described it as a \u201crevolution\u201d in science that is \u201cunprecedented in both quality and quantity\u201d. But the researchers warned that it is impacting the accuracy and integrity of research.<\/p>\n<p>The researchers noted that if LLMs continue to be trained on these AI-written papers, it will have an ouroboros effect, whereby the AI will consume itself to the detriment of discovery.<\/p>\n<p>\u201cSuch homogenisation can degrade the quality of scientific writing,\u201d the paper concluded. 
\u201cFor instance, all LLM-generated introductions on a certain topic might sound the same and would contain the same set of ideas and references, thereby missing out on innovations and exacerbating citation injustice.\u201d<\/p>\n<p>The difficulty of actually identifying AI-generated content means the issue may be far more prevalent than the study suggests.<\/p>\n<p>The lack of new human-generated content means AI firms are also running out of data to train their models on, with some warning that we have already reached \u201cpeak data\u201d. An article in the journal Nature in December predicted that a \u201ccrisis point\u201d would be reached by 2028. \u201cThe internet is a vast ocean of human knowledge, but it isn\u2019t infinite,\u201d the <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.nature.com\/articles\/d41586-024-03990-2\">article<\/a> stated. \u201c<a href=\"https:\/\/www.independent.co.uk\/topic\/artificial-intelligence\" target=\"_blank\" rel=\"noopener\">Artificial intelligence<\/a> researchers have nearly sucked it dry.\u201d<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/peak-data-ai-training-chart.png\"  loading=\"lazy\" alt=\"\" class=\"sc-1mc30lb-0 ggpMaE inline-gallery-btn\"\/><\/p>\n<p>Leading generative AI systems created by Google, Meta and OpenAI have been built using massive datasets created by humans since the early days of computing. With that data now running out, there are two possible outcomes. <\/p>\n<p>The first is stagnation, where these models no longer improve exponentially and instead stay roughly at the level they are today. The other is to use AI-generated content, or synthetic data, to train new models.<\/p>\n<p>This second option is the one being adopted by AI companies, who fear being left behind by their rivals. 
While it can lead to improvements, it could also cause AI systems to feed off their own errors and biases, resulting in more hallucinations and issues.<\/p>\n<p>One of the most vocal proponents of the \u2018peak data\u2019 theory is <a href=\"https:\/\/www.independent.co.uk\/topic\/elon-musk\" target=\"_blank\" rel=\"noopener\">Elon Musk<\/a>, whose own Grok chatbot has recently been making headlines for <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.independent.co.uk\/tech\/elon-musk-ai-grok-hitler-b2785452.html\">endorsing Adolf Hitler and calling for a second Holocaust<\/a>. \u201cThe cumulative sum of human knowledge has been exhausted in AI training,\u201d he said in an interview earlier this year \u2013 knowledge that includes the worst moments in humanity\u2019s history.<\/p>\n<p><strong>\u2018AI cultural replacement\u2019 <\/strong><\/p>\n<p>By the time we reach the \u201ccrisis point\u201d mentioned in the Nature article, AI may already be advanced enough to take over most jobs. Prominent tech investor Vinod Khosla predicts that AI will automate 80 per cent of high-value jobs by 2030, leading to a \u201ccrazy and frenetic\u201d period of disruption.<\/p>\n<p>His is not even the worst projection. The chief executive of AI chip maker Nvidia, which just became the first ever company to reach a $4 trillion market cap, recently told CNN that he believed AI would replace or change every single job.<\/p>\n<p>A <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2303.10130\">2023 study<\/a> by OpenAI indicated that around 80 per cent of the US workforce will be impacted by LLMs, with their influence spanning \u201call wage levels\u201d. 
Occupations that are safe include bartenders, mechanics and plumbers, while those most impacted will be reporters, writers and news analysts \u2013 each with a 100 per cent risk score.<\/p>\n<p>OpenAI boss Sam Altman claims this will be a good thing, increasing productivity while giving people more time to pursue leisure activities. But others are not so sure. MIT economist David Autor believes the ensuing mass unemployment could create a \u201cMad Max\u201d scenario, where people\u2019s skills become worthless and they are left scrambling to survive.<\/p>\n<p>Referencing the dystopian film series set in a post-collapse world, Professor Autor told the <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=MGKUTVyqJlI\">Possible podcast<\/a> earlier this month that he thought the most likely scenario would be \u201ceverybody competing over a few remaining resources\u201d in a world that\u2019s very wealthy, \u201cyet most people don\u2019t have anything\u201d.<\/p>\n<p>These changes could happen quickly. If the progress between 2022\u2019s technology and today\u2019s seems like a big jump, the rate of progress is apparently still increasing. Former OpenAI researcher Logan Kilpatrick, who now leads Google\u2019s AI Studio, <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/x.com\/OfficialLoganK\/status\/1942007689074102552\">said this month<\/a> that \u201cthe next six months of AI are likely to be the most wild we have seen so far\u201d.<\/p>\n<p>Even without my instructing ChatGPT to do my work, AI is already actively trying to do my job for me. When I write in Microsoft Word, the Copilot tool lights up with offers to generate more words based on what\u2019s already been written. <\/p>\n<p>Sometimes, it tries to finish my sentences before I\u2019ve had the chance to think them through. 
It suggests headlines, rewrites paragraphs, and occasionally has the audacity to recommend synonyms for words I meant to use \u2013 as if it knows better than me what I\u2019m trying to say.<\/p>\n<p>At first, I found myself dismissing its suggestions. Then I started accepting the small ones \u2013 a phrase here, a fix for clunky syntax there. Now, I sometimes wonder whether I\u2019m editing the AI, or it\u2019s editing me.<\/p>\n<p>The strange truth is that even this sentence \u2013 the one you&#8217;re reading now \u2013 could have been written by an algorithm trained on everything I\u2019ve ever published. And maybe, one day, it will be.<\/p>\n<p>I let AI write those last three paragraphs. What\u2019s equally concerning is that it\u2019s not just the human writers being replaced, but also the readers. According to the 2025 Bad Bot Report by the cyber security firm Imperva, <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.independent.co.uk\/tech\/bots-internet-traffic-ai-chatgpt-b2733450.html\">more than half of all web traffic is now made up of bots<\/a>.<\/p>\n<p>Online publishers are experiencing a huge amount of automated traffic, and the real \u201cstrange truth\u201d that AI mentioned above is that if you\u2019re reading this sentence, there\u2019s a good chance you\u2019re a bot.<\/p>\n<p>Author Ewan Morrison refers to this phenomenon as \u201chuman cultural replacement\u201d, with Spotify recently accused of profiting from fake listeners to AI-made songs. \u201cWho needs humans when bots can click on links and trick advertisers into paying for fake engagement,\u201d he wrote in a recent <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/x.com\/MrEwanMorrison\/status\/1940742002670223633\">post<\/a> to X.<\/p>\n<p>It feels inevitable that each of these words I\u2019m writing now will be used to feed the machine that could soon replace me entirely. So how would AI conclude this article? 
I\u2019ll let ChatGPT finish it: <\/p>\n<p>\u201cIn a world where words are no longer anchored to the hands that wrote them, the lines between creation and replication dissolve. As the loop tightens, we face a choice: to resist, to collaborate, or to vanish into the data ourselves.\u201d<\/p>\n<p>I didn\u2019t write that. But I might have. Or maybe I just trained the ghost that did.<\/p>\n","protected":false},"excerpt":{"rendered":"On 22 November, 2022, I asked AI to write the introduction to The Independent\u2019s weekly IndyTech newsletter. It&hellip;\n","protected":false},"author":2,"featured_media":242248,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,1942,53,16,15],"class_list":{"0":"post-295581","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-uk","12":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114924603553268499","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/295581","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=295581"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/295581\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/242248"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=295581"}],"wp:term":[{"taxonomy
":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=295581"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=295581"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}