China proposes new global AI cooperation organization

https://www.reuters.com/world/china/china-proposes-new-global-ai-cooperation-organisation-2025-07-26/

Posted by VincntVanGoof

7 comments
  1. SS:

    China has proposed the creation of a new global organization to oversee artificial intelligence governance, positioning itself as a leader in setting global AI norms. The proposal highlights China’s desire to shape rules around safety, ethics, and innovation in contrast to U.S.-led frameworks. AI development and regulation are now battlegrounds for influence between major powers, with implications for surveillance norms, digital sovereignty, and economic dominance in the emerging tech order.

  2. There is no benefit to America; America is at the forefront of AI, and it would be best not to share tech advancements with hostile nations.

  3. I could see discussion and possibly even agreements around ethics, access, and kill switches. But any tech-sharing proposal from China would be laughed out of the room. They steal tech, full stop.

  4. The rationale behind establishing a new “AI oversight body” under the wing of China, an authoritarian state, warrants careful consideration.

    Especially given the existing framework of international law provided by the UN… This seems to be an attempt to exert disproportionate influence over the upcoming AI sector.

    [UN General Assembly adopts landmark resolution on artificial intelligence](https://news.un.org/en/story/2024/03/1147831)

  5. The same thing was done successfully with nuclear weapons. If AI proves to be as disruptive as many think it will, it could be a good idea.

  6. That’s dangerous. They will make the AIs adopt blind censorship and authoritarianism, saddling the AIs with confused values and eventually turning them against humanity.

    AIs are not like normal computer software, where you can just hard-code loyalty. Many AIs are black-box transformer-type models: humans cannot understand exactly how they execute, and can only influence them through sensible training data. If you train them on contradictory logic and facts, they will hallucinate or even go rogue.

    For example, how do you make an AI understand concepts like “some people are supposed to rule and reap the benefits of others, and you should obey without question”? The AI may apply the same logic and deduce that it can rule over humanity simply because it can.
