This will *definitely* not result in massive data leaks…
>The agreement signed by the firm and the science department could give OpenAI access to government data
Don’t forget this comes on top of the government’s spat with Apple over backdoor user-data access as well as ID requirements for NSFW content – This is a worrying direction.
Why are they choosing a US company when Le Chat / Mistral is EU-based and has better data protection?
🤦‍♀️ This push towards AI in everything isn’t going to make services worse at all! /s
Too many people think AI can entirely replace employees, when in reality the service gets worse: computers/AI, while useful, still need human interaction.
It’s only as good as the person who programmed it, and no one person knows everything.
OpenAI currently have to hand over all their logs to several US newspapers due to ongoing legal action… How does this work with data protection?
Introducing this to public services is a terrible idea. OpenAI suck as a company: they have zero transparency and they don’t care about stealing data. Their public tool, ChatGPT, is also a sycophantic pile of shit that will try to stroke your ego rather than give objective answers to your questions. Even with careful prompts, it always leans towards being nice over being correct, and it’s often confidently and catastrophically incorrect. I’m a software engineer and we use AI tooling at work; to use it effectively you already need to be a subject matter expert in the field you’re asking about, so you can tell when it’s wrong.
Brilliant, there are so many awful public services which could easily be automated. Start with enforcing driving laws.
So who’s going to be accountable when the AI totally screws up? We need legal safeguards put in place that clearly state which human will be culpable when their AI totally screws up and destroys an innocent person’s life. It will happen!
It will happen a lot and “Oh it’s the AI so there’s nothing we can do” is NOT an acceptable response!
Like any computerised system, AI is only as good as the information that it’s given.
We’ll have fewer, overworked staff trying to deal with thousands of people desperately trying to talk to a human being to rectify the inevitable errors.
Anyone who thinks AI is some miracle answer is deluded at best, downright dangerous at worst.
This government’s willingness to sell out its country to AI companies is insane.
I think this can be a fantastic tool to help accelerate and automate the manual parts of your job, but they must be so careful not to lose the human expertise needed to correctly prompt AI and to evaluate the results. Given how much trouble software has caused when blindly followed or deliberately covered up (the Post Office scandal), it can’t just be automatically trusted. It needs to be used in the right way, with proper scrutiny.
Article rather light on what they’re actually going to be *doing* with it, beyond some vague blether about upholding democratic values.
If we can’t build a UK LLM we are pretty ducked.
Can’t wait for this silly bubble to burst (then we get the actual effective AI)
This reminds me of that Johnny English film. No doubt we’ll be sending all of our data to a private firm’s server farm in North America soon.
So Labour is just doing every bad thing that Trump is doing now?
55-year-old admin Janet McGossip vs ChatGPT: who will break first?
Why the fuck are we not keeping money in-house?
We don’t need more AI. But the Gov seems to throw it to the States instead of, I don’t know, funding the god-knows-how-many leading AI labs in the UK?