Revealed: bias found in AI system used to detect UK benefits fraud | Universal credit

https://www.theguardian.com/society/2024/dec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits

by InternetProviderings

25 comments
  1. The cynic in me questions whether it’s bias, or an identification of real patterns that aren’t deemed appropriate to acknowledge?

  2. “I totally didn’t know that, you’re telling me now for the first time”

  3. Making government decisions by algorithm was a disaster all those other times but I thought this time it might work

  4. Before everything became ‘AI’, when things like this were just known as ‘algorithms’, Cathy O’Neil wrote an absolutely fantastic book about the dangers of leaving everything to computers running software written by very fallible humans:

    [https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction](https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction)

  5. >An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal.

    >An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.

    Is that *bias* though? Or has the AI just spotted a pattern that people either haven’t, or pretend that they haven’t?

    It’s not like it’s saying “this person is dodgy”, it’s just flagging up potential issues for a human to actually assess. So you’d expect plenty of false positives, wouldn’t you? Because sometimes an investigation is the right thing to do, even if it turns out that actually the person in question hadn’t done anything wrong.

    >Peter Kyle, the secretary of state for science and technology, has previously told the Guardian that the public sector “hasn’t taken seriously enough the need to be transparent in the way that the government uses algorithms”.

    Not just algorithms; I hear the government uses equations, too.

  6. If a person were showing bias against a particular racial group, even if that group had higher rates of benefit offences generally, we would consider it abhorrent. We should hold our AI to a higher standard, not a lesser one.

  7. People are inherently biased already, I don’t think that’s in question, so any system they create, no matter how well “trained” it is, will be too.

    The really foolish idea here is the one they like to peddle: that AI and computer systems are somehow objective and unbiased, which lets those systems play out human biases with a veneer of objectivity.

  8. The problem with using existing data to train ML for jobs like this (the same goes for law enforcement and similar purposes) is that it learns from the actual positives in the data, i.e. data based mostly on people who are terrible at concealing fraud. The algorithm will then (mis)direct attention towards people with similar characteristics to those terrible at hiding their fraud, rather than improve detection of those who are better at concealing it and who already go largely undetected.
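
The selection effect that comment describes can be sketched in a few lines of Python. All numbers below are invented for illustration: two fraudster “profiles” commit fraud at the same rate, but one is caught far more often, so a naive model fit to the detected cases ends up scoring the easy-to-catch profile much higher.

```python
import random

random.seed(0)

# Toy setup: profile 0 = "sloppy" fraudsters, profile 1 = "careful" ones.
# Both commit fraud equally often, but detection rates differ (assumed values).
DETECTION_RATE = {0: 0.8, 1: 0.1}

def sample_training_data(n_fraud_per_profile=1000):
    """Labelled data only contains *detected* fraud, not all fraud."""
    detected = []
    for profile, rate in DETECTION_RATE.items():
        for _ in range(n_fraud_per_profile):
            if random.random() < rate:
                detected.append(profile)
    return detected

detected = sample_training_data()

# A naive "model": flag each profile in proportion to how often it appears
# in the detected-fraud training data.
score = {p: detected.count(p) / len(detected) for p in (0, 1)}
print(score)  # profile 0 dominates, although true fraud rates are equal
```

The model never sees the careful fraudsters who went undetected, so it concludes the sloppy profile is where the fraud is.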

  9. Basically, some activist groups want to do away with pattern recognition.

  10. Actual bias, or “stop and search isn’t politically allowed” bias?

  11. All real world systems have bias. The question is whether the “AI” system is less biased than the alternative, which is humans. 

    That the perfect is the enemy of the good is a hill the Guardian is forever willing to die on.

  12. AI can only reflect its training data back to you, so it’s inevitably going to amplify existing biases in that data.

  13. At the end of the day they could simply omit things like race, religion, and gender from these systems entirely if they are concerned about bias. You cannot argue that a system which over-selected one race is biased if the training had no access to race data.
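
Worth noting, though, that dropping a protected attribute doesn’t guarantee the model can’t see it: any correlated feature acts as a proxy. A toy sketch, with an invented postcode/group correlation (nothing from the article):

```python
import random

random.seed(2)

# Hypothetical setup: 90% of group X live in postcode 1,
# and 90% of group Y live in postcode 2.
data = []
for _ in range(10_000):
    group = random.choice(["X", "Y"])
    postcode = 1 if (group == "X") == (random.random() < 0.9) else 2
    data.append((group, postcode))

# How well does postcode alone recover group membership?
match = sum((pc == 1) == (g == "X") for g, pc in data) / len(data)
print(round(match, 2))  # roughly the assumed 0.9 correlation
```

So a model trained without the race column can still reconstruct most of that signal from whatever correlates with it.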

  14. Bias in what, though? I’ve read the article and don’t understand where the bias is.

  15. The answer is in the document linked in the article –

    *A referral disparity for Age and Disability is expected due to the nature of the Advances fraud risk. For example, fraudsters will misrepresent their true circumstances relating to certain protected characteristics to attempt to obtain a higher rate of UC payment, to which they would otherwise not be entitled. This potential for variation is considered in the Equality Analysis completed at the design and development stage of the model.*

    i.e. more high value claims are checked than low value claims.

    Is that a surprising thing to do, AI or not?

  16. No shit, I have a computer science degree and right now AI is not ready for automated recommendations.

    This was true of Machine Learning models before the new wave of Generative AI took the public consciousness by storm, but ML was “scientific” and “expensive”, and only a person with experience could set up a halfway usable ML model.

    Generative AI kicked the door down and allowed poorly trained ML models to simply rebrand as “AI”, and suddenly everyone believes they’re as good as a human at making decisions.

    In this financial arms race to utilise AI, companies are throwing all of their data and the kitchen sink into ML models. Where companies hire quality professionals to do the training, there might be some amount of data filtering and curation, but many companies do not spare the time or resources to identify and collect the data they are missing.

    You end up with systems that have massive blindspots or follow the logical fallacies their authors trained into them, because rather than thinking “What outcome do we want this model to create” everyone’s thinking “How can we put the letters ‘AI’ into our stock ticker?”

  17. We all know how “perfect, infallible” technology can go wrong (Horizon), and combined with people’s inability to own up to mistakes, it just leads to everything being broken with nobody willing to own the fix.

    A lot of people rely on the “bad data in, bad data out” argument to put the responsibility for bad “AI” onto the data available. In reality the models themselves can definitely be flawed, and so can the people working on them.

  18. Damn, I hate how binary the system has become, dealing in absolutes. Some people take the piss; some people genuinely need the help to get back to work. Patterns of malice can be seen in both scenarios.

    If anything we need to put pressure on companies to improve hiring and training practices, and especially the pay!

    -bob has no job
    -bob goes onto universal credit
    -after 3 months and a lot of headache, bob gets job
    -job pays crap, awful hours, limited training
    -job gets rid of bob because it’s a quiet season (company mismanagement)
    -bob has no job

    After a few more cycles of this, would you want to bother?

  19. “AI algorithm discovers which groups of claimants are more likely to defraud the taxpayer. Reddit up in arms”
    Fixed the headline. 

  20. “This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human”.

    Then why the AI?

  21. Is it successful in identifying more fraud than the previous system? If yes, this is a complete non-story. I pay more for car insurance than someone two miles away.

  22. The way these Machine Learning models work is: a “machine” is trained to create a model of how a given system works.

    If the training set is biased, say, for example, benefit officers are somewhat racist on average, your model is going to be somewhat racist. There is no escaping this unless you can produce a training set that is perfect.
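
That mechanism can be shown with a toy simulation (all rates invented): if past human investigation decisions were skewed against one group, a model fit to reproduce those decisions inherits the skew exactly, even when the true fraud rate is identical.

```python
import random

random.seed(1)

# Hypothetical history: group B claimants were investigated twice as often
# as group A by human officers, despite identical underlying fraud rates.
INVESTIGATE_RATE = {"A": 0.10, "B": 0.20}

history = []
for group, inv_rate in INVESTIGATE_RATE.items():
    for _ in range(10_000):
        investigated = random.random() < inv_rate
        history.append((group, investigated))

# A model trained to mimic past decisions reproduces the disparity:
flag_rate = {
    g: sum(inv for grp, inv in history if grp == g)
       / sum(1 for grp, inv in history if grp == g)
    for g in ("A", "B")
}
print(flag_rate)  # group B flagged roughly twice as often as group A
```

The training labels here encode the officers’ behaviour, not the ground truth, so “fitting the data well” just means reproducing the bias faithfully.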

  23. Is it wrong to be more suspicious of Romanians when a Romanian gang stole more than £53.9 million from UK taxpayers using false benefit claims? Why is bias wrong? If the data supports this, then there must be some basis in fact.

  24. Mathematics proving patterns exist. Shock. Smells like some wokeism shite.

Comments are closed.