#BHUSA: CoSAI, Combating AI Risks Through Industry Collaboration

In early July 2024, some of the world’s leading AI companies joined forces to create the Coalition for Secure AI (CoSAI).

During a conversation with Infosecurity at Black Hat USA 2024, Jason Clinton, CISO at Anthropic, one of CoSAI’s founding members, explained some of the key goals of the new coalition and the cybersecurity focus of the organization.

Hosted by the OASIS global standards body, CoSAI is an open-source initiative designed to give all practitioners and developers the guidance and tools they need to create Secure-by-Design AI systems.

CoSAI’s founding premier sponsors are Google, IBM, Intel, Microsoft, NVIDIA and PayPal. Additional founding sponsors include Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI and Wiz.

In its initial phase of work, CoSAI will focus on three workstreams:

  • Software supply chain security for AI systems: enhancing composition and provenance tracking to secure AI applications.
  • Preparing defenders for a changing cybersecurity landscape: addressing investments and integration challenges in AI and classical systems.
  • AI security governance: developing best practices and risk assessment frameworks for AI security.

More workstreams are set to be added over time.

“These areas were chosen because we looked across the ecosystem of communication right now and the kind of conversations our founding members were having with companies that are trying to adopt AI and what their concerns were,” explained Clinton.

Regarding governance, Clinton noted that there is currently a lack of taxonomy and a lack of empirical measurement in this space.

He said that the aim in this area is to make it easier to distinguish a severe risk from a mild one.

“You can’t even do that right now, it’s the wild west,” he said.

On supply chain security, he commented that CoSAI will explore ways to put a signature on every piece of information as it flows through the development pipeline, as a control against any opportunity for that data to be compromised in production.

“It’s just a way for us to gain confidence. If you extend that principle from the company that made the model to the deployment environment, it allows you, the customer, not only to have the model provider attesting that its internal security controls work, but also to have the signature that you can follow through to assert that it hasn’t been tampered with. That’s a super powerful control,” he explained.
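The signed-provenance idea Clinton describes can be sketched in a few lines: each artifact is hashed and signed as it moves through the pipeline, and the deployment environment re-verifies the signature before trusting the model. The sketch below is purely illustrative — CoSAI has not published any API, and the key handling and function names are assumptions for the example.

```python
# Illustrative sketch of artifact signing along a model pipeline.
# All names are hypothetical; a real deployment would use an
# HSM- or KMS-held key and a framework such as Sigstore or SLSA.
import hashlib
import hmac

SIGNING_KEY = b"example-pipeline-key"  # assumption: shared for the sketch only

def sign_artifact(data: bytes) -> str:
    """Sign the SHA-256 digest of a pipeline artifact (e.g. model weights)."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, signature: str) -> bool:
    """Re-compute the signature at deployment time and compare in constant time."""
    return hmac.compare_digest(sign_artifact(data), signature)

# The producer signs the artifact; the consumer verifies it later.
model_weights = b"...model bytes..."
sig = sign_artifact(model_weights)
print(verify_artifact(model_weights, sig))           # untampered artifact
print(verify_artifact(model_weights + b"x", sig))    # tampered artifact
```

Extending this so every pipeline stage signs its output gives the chain of attestations Clinton refers to: a customer can follow the signatures back from the deployment environment to the model provider.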

Finally, on preparing defenders, Clinton said that AI models are now great at writing code and are also capable of automating the workflows of cyber defenders.

“It is very much the case that in the next few years we will enter an environment where vulnerabilities are being discovered faster. Then the question is: what do you do about it?” he said.

Even organizations that do not adopt AI will be impacted by the rapid rate at which software vulnerabilities are discovered, combined with increasingly sophisticated cyber-attackers.

Looking ahead, CoSAI has opened membership to new participants, and there is significant inbound interest, according to Clinton.

He also said they are looking to encourage more input from the public sector.

CoSAI is now looking to set up the technical committees for each of the three workstreams.

Finally, the coalition will be looking to reach out to other groups in the space, such as the Cloud Security Alliance and the Frontier Model Forum, to ensure that work and research are not duplicated.