The West is already moving toward regulating and banning AI. China has already won this one.
It will be abused against people.
The fear of AI becoming smart and evil enough to destroy humanity, even assuming it comes true, is the last thing that would happen. AI being used against ordinary folk by the elite is far more likely.
The “West” is not winning the race. The US is.
The EU is trying to regulate what it can’t build, and the UK is doing press releases by politicians.
I, for one, welcome our new AI overlords
Elon Musk is not wrong. And it is not like it’s *just* Elon Musk saying this, either. AI Safety guys have been yelling about this for ages and ages. Whether it’s *possible* or not is a different question.
Jeremy Hunt pretty much makes the case for why we are screwed: “I think we have to win the race — and then be super smart about the way we regulate it[.]”
That might be the dumbest thing that anyone has ever said.
When AGI is here, it will be too late — way too late — to regulate it. If the dangers that have been identified come to pass, regulation will help literally nobody. It’s like imagining your dog trying to regulate you. You might pretend to allow it, to make your dog happy, but there is no question about who is in control.
I do understand the argument — I even agree with the argument — that we cannot allow China or Russia to get there first. That would be piling up the disasters. And that is why, although at some point even Jeremy Hunt will see what a moronic stance this is, there will be nothing anyone can do about it.
We should have had the framework with China, Russia, and the U.S. in place a decade or more ago. Too late! Too late! Now we all have to rush to create AGI and pray that the people who have spent their lives studying this are wrong.
“Govern me harder, daddy.”
I think that AI can certainly be dangerous or at least very disruptive. I think that Musk is right to that extent.
However, I don’t think that the best way to address that is a moratorium on development for six months, because I don’t think that we have a firm enough idea of the timeline involved.
That is, if people need to sit down and think about potential issues, I think that it’s better to speed them up than to try to slow down AI development.
I’d only support trying to suspend development — which I think would be very hard to do — if we had a very strong consensus that we had made major breakthroughs and were on the verge of creating a self-improving intelligence or something that might be hard to halt or to put back in the box.
Nothing I have seen has led me to believe that that is the case now.
Honestly, a lot of people get “AI” wrong. The big generalist AIs like ChatGPT are already almost obsolete.
In the near future we’ll get specialized models trained for more specific tasks. We already have the beginning of that era with AI-assisted medical diagnosis: training an AI on medical imagery to identify things like cancer much earlier than physicians can.
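For anyone curious what “training an AI on medical imagery” looks like in practice, here is a minimal sketch of the common approach: fine-tuning a pretrained image classifier on labeled scans. The dataset path and the two-class positive/negative layout are hypothetical, and a real diagnostic system would need vastly more rigor than this:

```python
# Minimal fine-tuning sketch (PyTorch/torchvision). Assumes a hypothetical
# folder layout data/scans/train/{positive,negative}/ of labeled images.
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing expected by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder infers class labels from the subdirectory names.
train_set = datasets.ImageFolder("data/scans/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse a pretrained backbone; freeze it and train only a new 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new head is trainable

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch, for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The point is that the specialization the comment describes is mostly transfer learning: the expensive general model is trained once, and each niche application only retrains a small head on domain data.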
It will be the same for everything. Translator AIs trained on libraries of translated books. Market AIs trained to make the right decision at the right time (and to hide potential market crises). Web search AIs to replace Google with more elaborate tools. The only limitation will be the price, and that’s the reason why a lot of jobs won’t be replaced.
It’s also the reason why we should fear how it will be used to manipulate us, not only with disinformation, but also with even more aggressive targeted advertising, fake economic reports, etc. It’s a new weapon that makes employers a lot stronger than employees. It won’t replace our jobs, but it’s a sure path towards a cyberpunk future, without the punk and all the cool stuff, and with even more megacorp power.