OTTAWA — Canada’s institutions and security systems can withstand any threat that Anthropic’s new AI model poses, but the government is watching developments, says AI Minister Evan Solomon.
The firm has warned that Claude Mythos, its latest foundation model, could be a cybersecurity nightmare, and is limiting its initial release to a select group of U.S. companies, which will examine it as part of an exercise it calls Project Glasswing. Solomon said he discussed Mythos and Glasswing in a meeting with Anthropic executives on Tuesday. “I’ve left feeling confident that we have robust safety measures here in government,” he told The Logic, adding that he was just as confident going into the conversation as coming out of it.
Solomon cited the capabilities of the Canadian AI Safety Institute, and of the Canadian Centre for Cyber Security, a branch of the Communications Security Establishment intelligence agency. “There’s new and emerging threats, and we have to stay up to speed,” he said, calling the meeting with Anthropic “a very productive dialogue.”
Mythical model: Anthropic claims Mythos is so good at coding that it can find and exploit long-buried and dangerous vulnerabilities in software used for all kinds of critical security and infrastructure. Glasswing will see a few select partners, including Amazon, Google, Microsoft and JPMorgan Chase, test the system. Anthropic has also talked to U.S. security officials. But no Canadian departments or companies are yet on the Mythos access list.
Solomon would not say whether he’d asked Anthropic for access during the meeting, nor would he provide any other details of the discussion about Mythos or Glasswing. The firm has enlisted “defenders, people who want to have regulatory guardrails around powerful models,” Solomon said. “We support a framework that takes a responsible use and a safety-first approach.”
Cybersecurity theatre? AI firms have long had a habit of telling the world how powerful and disruptive their technology could be. Anthropic CEO Dario Amodei warned a year ago that AI could wipe out half of entry-level white-collar jobs within one to five years, but many of his tech peers don’t buy that scale or timeline.
The firm’s claims about Mythos’s capabilities have also attracted some skepticism. The U.K.’s AI Security Institute tested the model and found that it could successfully and autonomously attack weakly protected tech systems in a simulation, but could not determine whether Mythos “would be able to attack well-defended systems.”
That hasn’t stopped institutions and firms from responding with concern. Last week, the Bank of Canada convened banks and regulators to discuss the threat Mythos might pose to the financial system.
Chatbot conversations: Anthropic executives also briefed Solomon Tuesday on the firm’s safety protocols and the thresholds at which they would escalate users’ behaviour to relevant authorities. Ottawa has called major AI and tech firms in for discussions following the revelations about OpenAI’s decision not to immediately report messages that ChatGPT exchanged with the Tumbler Ridge shooter.