EXECUTIVE SUMMARY:
The first global agreement of its kind, eighteen nations have officially endorsed the newly published Guidelines for Secure AI System Development.
Crafted by the U.K’s National Cyber Security Centre (NCSC) in collaboration with the Cyber Security and Information Security Agency (CISA), with contributions from Google, Amazon, OpenAI and Microsoft experts, among others, these guidelines aim to establish a unified, shared understanding of AI-related risks and effective mitigation strategies.
The guidance is intended for AI providers who utilize third-party hosted models, those who interface with AI models through APIs, and for developers at-large.
To keep things simple, we’ve condensed and summarized the critical points that you and your organization may want to make note of on your journey towards more advanced and comprehensive cyber security:
Four key insights
1. Secure-by-design and secure-by-default. In a bid to proactively protect AI-based products from cyber intrusions and attacks, the guidelines stress the importance of secure-by-design and secure-by-default principles.
For developers, specific considerations outlined in the document include prioritizing security when selecting a model architecture or training dataset and ensuring that the most secure options are set-by-default. It is expected that the risks of alternative configurations will be clearly described to users.
At the end of the day, the guidelines advocate for developers to assume responsibility for downstream security results, rather than shifting the responsibility to customers and consumers, post-scripting.
2. Supply chain security risks. The guidelines advise that developers consider where code components are acquired from; in-house or externally, and that security measures are applied accordingly.
If acquired externally, developers should review and monitor the security posture of suppliers, ensuring adherence to high standards. In particular, the guidelines recommend that developers implement scanning and isolation for third-party code.
3. AI’s unique code risks. The guidelines make note of several cyber threats that are specific to AI (prompt injection attacks, data poisoning) that require unique cyber security considerations. It is recommended that developers include AI-specific threat scenarios when testing user inputs for attempts to exploit systems.
4. Collaborative and continuous. The guidelines provide in-depth discussions of best practices throughout the four code lifecycle stages; design, development, deployment, and operation and maintenance.
Further, the NCSC and CISA advocate for developers to share information with the greater AI community in order to evolve and advance systems.“When needed, you escalate issues to the wider community, for example publishing bulletins responding to vulnerability disclosures, including detailed and complete common vulnerability enumeration.”
More information
“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said the United Kingdom’s National Cyber Security Centre CEO, Lindy Cameron, in a public statement.
While the guidelines were approved by Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea, Singapore, the United States, and the United Kingdom, the world’s leading developer of AI, China, has not yet signed the document.
The guidelines have been published on the heels of the first-ever global summit on artificial intelligence safety, recently hosted in the U.K, which aimed to address risks posed by AI.
For more expert insights into artificial intelligence, please click here. Also, be sure to check out our latest AI predictions for 2024, here. Lastly, to receive timely cyber security insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.
The post 4 key insights: The new global AI cyber security guidelines appeared first on CyberTalk.