{"id":141941,"date":"2025-08-13T08:44:10","date_gmt":"2025-08-13T08:44:10","guid":{"rendered":"https:\/\/www.europesays.com\/us\/141941\/"},"modified":"2025-08-13T08:44:10","modified_gmt":"2025-08-13T08:44:10","slug":"use-of-ai-could-worsen-racism-and-sexism-in-australia-human-rights-commissioner-warns-artificial-intelligence-ai","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/141941\/","title":{"rendered":"Use of AI could worsen racism and sexism in Australia, human rights commissioner warns | Artificial intelligence (AI)"},"content":{"rendered":"<p class=\"dcr-130mj7b\">AI risks entrenching racism and sexism in Australia, the human rights commissioner has warned, amid internal Labor debate about how to respond to the emerging technology.<\/p>\n<p class=\"dcr-130mj7b\">Lorraine Finlay says the pursuit of productivity gains from AI should not come at the cost of entrenching discrimination if the technology is not properly regulated.<\/p>\n<p class=\"dcr-130mj7b\">Finlay\u2019s comments follow Labor senator Michelle Ananda-Rajah breaking ranks to call for all Australian data to be \u201cfreed\u201d to tech companies, to prevent AI perpetuating overseas biases and to ensure it reflects Australian life and culture.<\/p>\n<p class=\"dcr-130mj7b\">Ananda-Rajah is opposed to a dedicated AI act but believes content creators should be paid for their work.<\/p>\n<p class=\"dcr-130mj7b\">Productivity gains from AI will be discussed next week at the federal government\u2019s economic summit, as unions and industry bodies raise concerns about <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/05\/productivity-commission-digital-economy-report-copyright-rules-text-and-data-mining-to-train-ai-models\" data-link-name=\"in body 
link\" rel=\"nofollow noopener\" target=\"_blank\">copyright and privacy protections<\/a>.<\/p>\n<p class=\"dcr-130mj7b\">Media and arts groups have warned of <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/06\/arts-and-media-groups-demand-labor-take-a-stand-against-rampant-theft-of-australian-content-to-train-ai\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">\u201crampant theft\u201d of intellectual property<\/a> if big tech companies can take their content to train AI models.<\/p>\n<p class=\"dcr-130mj7b\">Finlay said a lack of transparency about the datasets AI tools are trained on makes it difficult to identify which biases they may contain.<\/p>\n<p class=\"dcr-130mj7b\">\u201cAlgorithmic bias means that bias and unfairness is built into the tools that we\u2019re using, and so the decisions that result will reflect that bias,\u201d she said.<\/p>\n<p>The human rights commissioner, Lorraine Finlay. Photograph: Mick Tsikas\/AAP<\/p>\n<p class=\"dcr-130mj7b\">\u201cWhen you combine algorithmic bias with automation bias \u2013 which is where humans are more likely to rely on the decisions of machines and almost replace their own thinking \u2013 there\u2019s a real risk that what we\u2019re actually creating is discrimination and bias in a form where it\u2019s so entrenched, we\u2019re perhaps not even aware that it\u2019s occurring.\u201d<\/p>\n<p class=\"dcr-130mj7b\">The Human Rights Commission has consistently advocated for an AI act, bolstering existing legislation, including the Privacy Act, and rigorous testing for bias in AI tools. 
Finlay said the government should urgently establish new legislative guardrails.<\/p>\n<p class=\"dcr-130mj7b\">\u201cBias testing and auditing, ensuring proper human oversight review, you [do] need those variety of different measures in place,\u201d she said.<\/p>\n<p class=\"dcr-130mj7b\">There is growing evidence of bias in AI tools in Australia and <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/11\/ai-tools-used-by-english-councils-downplay-womens-health-issues-study-finds\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">overseas<\/a>, in areas such as medicine and job recruitment.<\/p>\n<p class=\"dcr-130mj7b\"><a href=\"https:\/\/www.theguardian.com\/australia-news\/2025\/may\/14\/people-interviewed-by-ai-for-jobs-face-discrimination-risks-australian-study-warns#:~:text=People%20interviewed%20by%20AI%20for%20jobs%20face%20discrimination%20risks%2C%20Australian%20study%20warns,-This%20article%20is&amp;text=Job%20candidates%20being%20interviewed%20by,a%20new%20study%20has%20warned.\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">An Australian study<\/a> published in May found job candidates being interviewed by AI recruiters risked being discriminated against if they spoke with an accent or were living with a disability.<\/p>\n<p class=\"dcr-130mj7b\">Ananda-Rajah, who was a medical doctor and researcher in AI before entering parliament, said it was important for AI tools to be trained on Australian data, or risk perpetuating overseas biases.<\/p>\n<p class=\"dcr-130mj7b\">While the government has stressed the need for protecting intellectual property, she warned that not opening up domestic data would mean Australia would be \u201cforever renting [AI] models from tech behemoths overseas\u201d with no oversight or insight into their models or platforms.<\/p>\n<p class=\"dcr-130mj7b\">\u201cAI must be trained on as much data as possible from 
as wide a population as possible or it will amplify biases, potentially harming the very people it is meant to serve,\u201d Ananda-Rajah said.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe need to free our own data in order to train the models so that they better represent us.<\/p>\n<p class=\"dcr-130mj7b\">\u201cI\u2019m keen to monetise content creators while freeing the data. 
I think we can present an alternative to the pillage and plunder of overseas.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Ananda-Rajah raised <a href=\"https:\/\/www.theguardian.com\/society\/2021\/nov\/09\/ai-skin-cancer-diagnoses-risk-being-less-accurate-for-dark-skin-study\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">skin cancer screening<\/a> by AI as an example where the tools used for testing have been shown to have algorithmic bias. She said the way to overcome any bias or discrimination against certain patients would be to train \u201cthese models on as much diverse data from Australia as possible\u201d, with appropriate protections for sensitive data.<\/p>\n<p class=\"dcr-130mj7b\">Finlay said any release of Australian data should be done in a fair way, but she believes the focus should be on regulation.<\/p>\n<p class=\"dcr-130mj7b\">\u201cHaving diverse and representative data is absolutely a good thing \u2026 but it\u2019s only one part of the solution,\u201d she said.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe need to make sure that this technology is put in place in a way that\u2019s fair to everybody and actually recognises the work and the contributions that humans are making.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Judith Bishop, an AI expert at La Trobe University and a former data researcher at an AI company, said freeing up more Australian data could help train AI tools more appropriately, though it was only a small part of the solution; she warned that AI tools developed overseas on international data may not reflect the needs of Australians.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe have to be careful that a system that was initially developed in other contexts is actually applicable for the [Australian] population, that we\u2019re not relying on US models which have been trained on US data,\u201d Bishop said.<\/p>\n<p class=\"dcr-130mj7b\">The eSafety commissioner, Julie Inman Grant, is also concerned by the 
lack of transparency around the data AI tools use.<\/p>\n<p class=\"dcr-130mj7b\">In a statement, she said tech companies should be transparent about their training data, develop reporting tools and use diverse, accurate and representative data in their products.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe opacity of generative AI development and deployment is deeply problematic,\u201d Inman Grant said. \u201cThis raises important questions about the extent to which LLMs [large language models] could amplify, even accelerate, harmful biases \u2013 including narrow or harmful gender norms and racial prejudices.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWith the development of these systems concentrated in the hands of a few companies, there\u2019s a real risk that certain bodies of evidence, voices and perspectives could be overshadowed or sidelined in generative outputs.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"AI risks entrenching racism and sexism in Australia, the human rights commissioner has warned, amid internal Labor 
debate&hellip;\n","protected":false},"author":3,"featured_media":141942,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[691,738,158,67,132,68],"class_list":{"0":"post-141941","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-united-states","12":"tag-unitedstates","13":"tag-us"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@us\/115020579722655404","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/141941","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=141941"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/141941\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/141942"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=141941"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=141941"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=141941"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}