Artificial intelligence is the most headline-grabbing technology of the day. As companies quickly move to incorporate AI into their workflows and product offerings, a subset of technology industry professionals are urging caution. AI, in its current state, is nascent. And while it’s filled with potential, that potential could be good or bad.
As a result, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) recently released a set of guidelines for secure development of AI-based products.
In this week’s article, I’ll look at why this guidance is necessary, and why companies developing or using AI must go beyond the current guidance.
Since the public launch of ChatGPT a year (+) ago, the cybersecurity industry has been abuzz with talk about how artificial intelligence (AI) is going to be life-changing, industry-changing, and hugely revenue-generating. Everywhere you turn — in both your personal and professional life — you can’t escape proclamations about AI’s potential. Corporations big and small, across nearly every industry, are rushing toward an “AI-first” approach to business growth. This, naturally, raises questions about the risks of AI and how it will impact people’s lives and society in general.
On one side we have tech enthusiasts, individuals who have been awaiting the moment when AI becomes mature enough to incorporate…everywhere (eh hmm, Tyler). On the other side we have tech cautionaries. These may be people who don’t know the ins and outs of how AI (or LLMs or even machine learning) works and worry about “the robots taking over.”1 And then there are the people who know tech very well and are wary of the risks AI poses. For good reason, mind you.
Regardless of the risks (or known risks, perhaps), businesses are forging ahead. Cybersecurity vendors are some of the most eager to incorporate AI or AI-like capabilities into their products. Being on the forefront of technology, the majority of the industry has a thirst for new and promising capabilities, of which AI is certainly one. To support this thirst, funding for startups that use AI or claim to have a product to secure AI are the only startups still raking in impressive investments.
It is therefore no great surprise that rule- and law-makers are starting to issue guidance and propose legislation for the secure development and use of AI.2 The recent guidance issued by CISA and NCSC is the most notable (and possibly most thorough) to date. According to the published guidelines, the aim is to assist “providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others.” The primary stated focus of the guidance is “providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs).”
The guidance, itself, is laid out in four sections meant to span the development lifecycle. They are:
Secure design
Secure development
Secure deployment
Secure operation and maintenance
You can read the entire document here.
Why do we need this guidance, and why now?
Perhaps in Utopia the world wouldn’t need such guidelines for the secure development and use of AI (or anything, really). Reality is, though, that humans need guardrails. Bad actors will always manipulate technology for malicious use, and even well-meaning users and builders will, on occasion, cut corners and unintentionally create vulnerabilities in efforts to save time, make money, or impact a wide variety of consequences. And the more powerful AI becomes, the more risk it carries with it.
This is why CISA and NCSC published this guidance now — before things are too far along. It is crucial that builders and users address AI-related risks and challenges now and create a precedent. It’s the concept of “baking security in” (versus “bolting it on”) that we’ve talked about in security for so long.
The fact is — and I discussed it with my co-hosts on Enterprise Security Weekly recently — that securing AI is not all that different from securing other types of technology, in particular, systems that collect, process, and store critical data. But “AI” is the new buzzword, and if we say, “just do what you do for other data…” you know there will be lapses in judgment and processes.3
Nonetheless, the CISA-NCSC guidance (which, if you haven’t had a chance to read closely, was “co-sealed by 23 domestic and international cybersecurity organizations”4) is significant for several key reasons:
Data Privacy and Security: As I wrote above, if you truly break it down, “AI security” is simply data security — only at a much greater scale and speed. To iterate on why this is important: The new guidelines help ensure that the systems used to generate AI models and the sensitive data used in AI algorithms are secured with multi-layered controls which help protect individuals’ and organizations’ privacy.
However, as this guidance is specific to AI development and use, the publication outlines steps for staff awareness of AI-specific threats and risks; the need for threat modeling; assessing the appropriateness of AI system design choices and training models; system monitoring, testing, and documentation; and incident management procedures, just to name a few important processes and procedures.
Robustness and Resilience: Adding on to the “secure by design” principles of the previous bullet, the guidelines state that AI systems and algorithms should be resilient to adversarial attacks and unexpected disruptions. The security principles outlined in the document help developers build resilient systems that can withstand many types of attempts at compromise.
Accountability and Transparency: The CISA-NCSC guidance asks AI providers to follow “secure by design” principles, a main element of which is embracing “radical transparency and accountability.” While AI providers might worry about exposing intellectual property, they must be forthright about their strategy for and execution of AI models so that individuals and organizations adversely affected by AI (whether that’s copyright infringement, unauthorized data disclosure, or a whole host of other nastiness) have some recourse in the event of a compromise. The guidance says that builders should “release models, applications, or systems only after subjecting them to appropriate and effective security evaluation such as benchmarking and red teaming” and that they “are clear to [your] users about known limitations or potential failure modes.”
Ethical Considerations: The guidance helps providers assess the appropriateness of their design choices. Specifically, the publication calls out the need to continuously protect AI models, develop incident management procedures, and make it easy for users to “do the right things.” Because AI has the potential to so significantly alter society, builders and users of AI technologies mustn’t just ask, “can I,” but repeatedly question, “should I?”
Global Standards: Although CISA and the NCSC were the primary parties responsible for these AI guidelines, many international cybersecurity organizations cooperated on the effort. This type of global effort underscores the necessity for standardization — something missing for much other cybersecurity guidance — and fosters consistency and interoperability across different AI systems.
Trust and Acceptance: One of the deepest concerns about AI is that it can’t be trusted to protect human interests. As such, the guidelines help providers and users think through how to build systems that are hardened to security threats, misuse, and abuse. Trust is essential for widespread acceptance and adoption of AI technologies, and these guidelines contribute to the establishment of trust and reliability.
Regulatory Compliance: You better bet that AI-specific regulatory compliance is coming. And likely soon. Data protection and data privacy laws already exist for numerous industries and geographies; AI protection laws will be similar and, perhaps, even more stringent. Companies that adhere to these guidelines will increase their preparedness for demonstrating compliance when the time comes, and (possibly even more importantly) have greater capability to defend against compromises that endanger individuals and organizations.
The wrap up
The joint guidance from CISA, the NCSC, and their partners is a big deal because it sets an early precedent. Too frequently in the past, technology innovators have not considered security and privacy implications (IoT, cloud, and mobile, I’m lookin’ at you), leaving security teams with the arduous task of playing catch-up.
What’s more, those who “bake” these suggestions and principles into their AI tools will be better positioned (themselves or for their users) to mitigate risks, protect privacy, and foster trust among users and stakeholders. Because this guidance was formed on a global level, it should help standardize expectations and keep AI providers’ minimum viable on a level playing field.
Further, traditional DLP wasn’t/isn’t the most effective security solution on the market.