{"id":479234,"date":"2026-05-11T13:43:13","date_gmt":"2026-05-11T13:43:13","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/479234\/"},"modified":"2026-05-11T13:43:13","modified_gmt":"2026-05-11T13:43:13","slug":"head-of-google-ireland-some-capabilities-of-ai-platform-gemini-not-ideal","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/479234\/","title":{"rendered":"Head of Google Ireland: Some capabilities of AI platform Gemini \u201cnot ideal\u201d"},"content":{"rendered":"<p>The technology behind large language models (LLMs) remains \u201cextremely nascent\u201d and will need to be improved further over time, the head of Google Ireland has said.<\/p>\n<p>In an interview with the Irish Examiner, Vanessa Hartley, vice president of large customer sales across Google\u2019s EMEA region and head of Google Ireland, said that some of the capabilities of the tech giant\u2019s generative AI platform Gemini are \u201cnot ideal\u201d, with discussions ongoing about the LLM\u2019s code of practice.<\/p>\n<p class=\"\">Released at the end of 2023 in response to the AI boom and the launch of OpenAI\u2019s GPT-4, Gemini offers a chatbot function, deep reasoning, and advanced coding capabilities, with 75% of Google\u2019s code now being written by AI.<\/p>\n<p class=\"\">Significantly, Gemini can also generate audio and images, with its latest Nano Banana 2 model offering some of the most advanced, hyper-realistic image generation at the touch of a button.<\/p>\n<p class=\"\">Recent months have demonstrated what can happen when LLMs\u2019 image-generation capabilities are used maliciously, as shown by Elon Musk\u2019s Grok AI assistant, which earlier this year created non-consensual nude images of real people, most of which depicted women or children.<\/p>\n<p class=\"\">The incident, which sparked regulatory and ethical concerns worldwide, saw around 3m sexualised images created by Grok in just 11 
days.<\/p>\n<p class=\"\">Google has implemented several layers of safeguards to prevent the creation of such images on Gemini, with Ms Hartley explaining: \u201cYou cannot reproduce pictures of famous people or copyrighted pictures. You can only do it with your own photographs.<\/p>\n<p class=\"\">\u201cWe are very keen to make sure we stay within the parameters of what\u2019s right.\u201d<\/p>\n<p class=\"\">Gemini itself says that protecting users from deepfakes and non-consensual imagery is a \u201ccore part\u201d of its safety design, with its underlying models being trained to prevent the generation of sexually explicit content, real-person likenesses, and non-consensual edits. The chatbot also says it cannot generate images of any public figures.<\/p>\n<p class=\"\">But do these current guardrails protect people enough in the era of AI-made content and image misinformation?<\/p>\n<p class=\"\">An experiment by the Irish Examiner found that non-copyrighted images of individuals, which are widely available online and across personal social media profiles, could still be used by Gemini to generate sensitive images of people and spread misinformation.<\/p>\n<p class=\"\">An image created by Gemini from a publicly available reference photograph depicted this journalist as a delegate at the ard fheis of a major Irish party, despite the journalist having no affiliation with any Irish political organisation. It took just one prompt for Gemini to create this image.<\/p>\n<p class=\"\">Asked if she had foreseen issues with Gemini being used in this way, Ms Hartley said: \u201cAll these different topics are being dealt with in the code of practice. We\u2019re working really hard with the commission to make sure we have real clarity on what can, will, and should happen with AI.<\/p>\n<p class=\"\">\u201cWe talk a lot about it, but this is an extremely nascent technology. 
In reality, it\u2019s only had commercial models for two or three years.<\/p>\n<p class=\"\">\u201cI know that over time, we will be able to work through all those different topics.\u201d<\/p>\n<p class=\"\">Many LLMs have been released by their respective owners while still in their infancy, leading to issues with societal biases, a lack of appropriate safeguards, and hallucinations, in which AI systems generate responses that are nonsensical, fabricated, or not grounded in factual data.<\/p>\n<p class=\"\">As Ms Hartley explained: \u201cWe\u2019re building through more use cases now, including ones like this image, where our teams would work to make sure that that does or does not happen again.<\/p>\n<p class=\"\">\u201cWe want our models to be compliant with the EU\u2019s code of practice,\u201d Ms Hartley said, adding that AI is \u201ctoo important not to regulate\u201d and needs to be built responsibly.<\/p>\n<p class=\"\">\u201cDifferent legislations will allow for different things, but I imagine that over time, these things will be clearer.\u201d<\/p>\n<p class=\"\">Asked whether this leaves room for issues to arise in the short term while the final regulations are ironed out, the head of Google Ireland added: \u201cI mean, the example you shared is not ideal.<\/p>\n<p class=\"\">\u201cI think, in reality, we\u2019re going to have to work through that, to be honest with you.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"The technology behind large language models (LLMs) remains \u201cextremely nascent\u201d and will need to be improved further 
over&hellip;\n","protected":false},"author":2,"featured_media":479235,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[261],"tags":[291,289,290,18,19,17,3569,82],"class_list":{"0":"post-479234","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-eire","12":"tag-ie","13":"tag-ireland","14":"tag-ireland-markets-irish-economy-iseq-stocks-shares","15":"tag-technology"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@ie\/116556241740445380","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/479234","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=479234"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/479234\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/479235"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=479234"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=479234"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=479234"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}