{"id":161439,"date":"2025-06-05T23:15:10","date_gmt":"2025-06-05T23:15:10","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/161439\/"},"modified":"2025-06-05T23:15:10","modified_gmt":"2025-06-05T23:15:10","slug":"anthropic-launches-claude-gov-for-military-and-intelligence-use","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/161439\/","title":{"rendered":"Anthropic launches Claude Gov for military and intelligence use"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Anthropic on Thursday announced Claude Gov, its product designed specifically for U.S. defense and intelligence agencies. The AI models have looser guardrails for government use and are trained to better analyze classified information.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The company said the models it\u2019s announcing \u201care already deployed by agencies at the highest level of U.S. national security,\u201d and that access to those models will be limited to government agencies handling classified information. The company did not confirm how long they had been in use.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Claude Gov models are specifically designed to uniquely handle government needs, like threat assessment and intelligence analysis, per Anthropic\u2019s <a href=\"https:\/\/www.anthropic.com\/news\/claude-gov-models-for-u-s-national-security-customers\" target=\"_blank\" rel=\"noopener\">blog post<\/a>. And although the company said they \u201cunderwent the same rigorous safety testing as all of our Claude models,\u201d the models have certain specifications for national security work. For example, they \u201crefuse less when engaging with classified information\u201d that\u2019s fed into them, something consumer-facing Claude is trained to flag and avoid. <\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Claude Gov\u2019s models also have greater understanding of documents and context within defense and intelligence, according to Anthropic, and better proficiency in languages and dialects relevant to national security. <\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Use of AI by government agencies has long been scrutinized because of its potential harms and ripple effects for minorities and vulnerable communities. There\u2019s been a long list of wrongful arrests across <a href=\"https:\/\/www.nytimes.com\/2024\/06\/29\/technology\/detroit-facial-recognition-false-arrests.html\" target=\"_blank\" rel=\"noopener\">multiple U.S. states<\/a> due to police use of facial recognition, <a href=\"https:\/\/themarkup.org\/prediction-bias\/2021\/12\/02\/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them\" target=\"_blank\" rel=\"noopener\">documented evidence<\/a> of bias in predictive policing, and discrimination in government algorithms that <a href=\"https:\/\/www.wired.com\/story\/welfare-state-algorithms\/\" target=\"_blank\" rel=\"noopener\">assess welfare aid<\/a>. 
For years, there’s also been an industry-wide controversy over large tech companies like Microsoft, Google, and Amazon allowing the military, particularly in Israel, to use their AI products, with campaigns and public protests under the No Tech for Apartheid movement (https://www.notechforapartheid.com/).

Anthropic’s usage policy (https://www.anthropic.com/legal/aup) specifically dictates that any user must “Not Create or Facilitate the Exchange of Illegal or Highly Regulated Weapons or Goods,” including using Anthropic’s products or services to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.”

At least eleven months ago, the company said (https://www.anthropic.com/news/expanding-access-to-claude-for-government) it had created a set of contractual exceptions (https://support.anthropic.com/en/articles/9528712-exceptions-to-our-usage-policy) to its usage policy that are “carefully calibrated to enable beneficial uses by carefully selected government agencies.” Certain restrictions, such as disinformation campaigns, the design or use of weapons, the construction of censorship systems, and malicious cyber operations, would remain prohibited. But Anthropic can decide to “tailor use restrictions to the mission and legal authorities of a government entity,” although it will aim to “balance enabling beneficial uses of our products and services with mitigating potential harms.”

Claude Gov is Anthropic’s answer to ChatGPT Gov, OpenAI’s product for U.S. government agencies, which launched in January. It’s also part of a broader trend of AI giants and startups alike looking to bolster their businesses with government agencies, especially in an uncertain regulatory landscape.

When OpenAI announced ChatGPT Gov, the company said that within the past year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, generate summaries, draft policy memos, write code, build applications, and more. Anthropic declined to share numbers or use cases of the same sort, but the company is part of Palantir’s FedStart program, a SaaS offering for companies that want to deploy federal government-facing software.
Scale AI, the AI giant that provides training data to industry leaders like OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense in March (https://www.cnbc.com/2025/03/05/scale-ai-announces-multimillion-dollar-defense-military-deal.html) for a first-of-its-kind AI agent program for U.S. military planning. And since then, it’s expanded its business to world governments, recently inking a five-year deal with Qatar to provide automation tools for civil service, healthcare, transportation, and more.