Oh boy, wonder how long it will take to fuck up majorly.
Computer says no
I wonder if this will go as well as that time they used an algorithm to determine exam grades
>The document states that this analysis was performed by a “machine learning algorithm”, which “builds a model based on historic fraud and error data in order to make predictions, **without being explicitly programmed by a human being”.**
What data the algorithm is fed to begin with, which variables the algorithm uses, and how it weights them are probably going to involve explicit human programming, right?
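Exactly. Even without "explicit programming", humans still pick the training data and the input variables; the model only learns weights over what it's given. A minimal sketch of the idea (hypothetical column names, nothing to do with the actual DWP system):

```python
# Sketch only: the *learning* is automated, but humans explicitly choose
# the historic data and the input variables. Column names are made up.
import pandas as pd
from sklearn.linear_model import LogisticRegression

claims = pd.read_csv("historic_claims.csv")   # human-curated fraud/error data

# Human-chosen variables: any bias in this choice is baked into the model.
features = ["claim_amount", "household_size", "prior_flags"]
X, y = claims[features], claims["was_fraud"]

model = LogisticRegression().fit(X, y)        # weights learned, inputs chosen

# The learned weight per human-selected feature:
print(dict(zip(features, model.coef_[0])))
```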
Well that’s a great way to entrench subtle biases and magnify ’em down the road
When they say algorithm, they almost always mean spreadsheet
Eeny meeny miny moe
Sounds like a good thing. No more inconsistencies, no more biases based on race or gender or attractiveness, just evaluation of the facts. As long as it refers cases to a human when it has low confidence in a decision, it’s likely to be an improvement.
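For what it’s worth, that "refer to a human" step is usually just thresholds on the model’s score, something like this (the thresholds here are entirely made up, not anything the DWP has published):

```python
# Sketch of low-confidence referral: borderline scores go to a human.
# The 0.3/0.7 cut-offs are illustrative placeholders.
def route_claim(model, claim_features, low=0.3, high=0.7):
    p_fraud = model.predict_proba([claim_features])[0][1]
    if p_fraud >= high:
        return "investigate"      # model is confident something is wrong
    if p_fraud <= low:
        return "pay"              # model is confident the claim is fine
    return "refer_to_human"       # too uncertain either way
```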
It’s probably important to remember here that under the UK GDPR you have the right to a review of any automated decision made by a machine, and that this review must be carried out by a human.
It’s possible this will change, though: likely amendments to the UK GDPR, via the white paper going through parliament, would make it easier for companies to use your personal data without giving you the same level of rights you have at the moment.
This is set to repeat the disaster of *LiMA* – Logic in Medical Assessment, an AI program that was supposed to automate the assessment of disability benefits. What actually happened was that the program was never really functional and had to be replaced with Work Capability Assessments (WCA), because the DWP (or DHSS and ITSA, as it was then) was forever ending up in court. The AI turned out to make really weird decisions, like depriving people of benefits simply for being able to turn up to an interview.

A lot of the abuses pioneered with *LiMA* carried over into the WCA, flourished under IDS, and have made their way into other parts of the benefits system. You know why there is a five-week waiting period for Universal Credit? Because big data analysis showed that the average length of frictional unemployment was just over four weeks. So: make people wait five weeks, and the average claimant is back in work before a single payment is due. You save a significant part of your budget.

That kind of abusive behaviour is going to be replicated in this UC machine learning algorithm. The algorithm actually predicts *possible* future fraud given the circumstances at the inception of a claim, which is what LiMA was intended to do. So the actual deprivation of benefit will be based on *predicted*, *future*, *hypothetical* behaviour. Nothing concrete. Nothing real. Guessing.
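The five-week arithmetic is brutally simple, by the way. Back-of-envelope (the "just over four weeks" figure is the claim above; the weekly amount is a made-up placeholder):

```python
# Back-of-envelope: a waiting period longer than the typical claim means
# the typical claimant is never paid at all. All figures illustrative.
avg_frictional_weeks = 4.2   # "just over four weeks", per the comment
wait_weeks = 5               # UC waiting period
weekly_award = 90.0          # assumed weekly award in GBP, placeholder

paid_weeks = max(0.0, avg_frictional_weeks - wait_weeks)
avoided = (avg_frictional_weeks - paid_weeks) * weekly_award

print(f"weeks actually paid: {paid_weeks}")               # 0.0
print(f"cost avoided per average claim: £{avoided:.2f}")  # £378.00
```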
IDS is basically SHODAN from System Shock 2, but somehow even his legacy is more monstrous
’Tis intentional, as they hate giving people the means to survive. If there’s no-one behind it, people can’t appeal, and those pesky peasants can starve to death.
This is old news
I’ll be back.