Don’t Be Afraid of AI. Treat It Like an Insider Threat

If artificial intelligence wants to be human so badly, let’s start treating it like one.

Right now, we’re seeing a flurry of ‘panic policies’, with organisations scrambling not only to define what ‘AI’ is (clue: it’s not just ‘ChatGPT’), but also to work out how to protect themselves against it, whilst not missing out on harnessing its power.

The current policy guidelines out there are fuzzy at best. Take GDPR’s Article 22:

‘The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.’

It goes on to make a particular provision to prevent ‘automated processing’ of:

‘…personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation’,

…unless that data subject gives explicit consent.

But these days, ‘explicit’ consent can mean the subject consents to a stack of terms and conditions that they will read just after they finish War & Peace. Such policies keep AI in a holding pattern, a kind of quarantine, until those in control work out what the hell it is… and what the hell to do with it.

We have a simple suggestion: treat AI as an Insider Threat.

Before we go on, let’s start by defining exactly what we’re talking about.

Artificial Intelligence

Artificial Intelligence is a broad field of computer science. It involves the development of systems (machines) that do a fair job of mimicking human intelligence. In other words, these systems can reason, learn, solve problems, perceive their surroundings and understand language.

Narrow and General AI

AI is commonly classified into two types: Narrow AI, which is designed for a specific task such as responding to voice commands or recognising faces, and General AI, which has no such limit and could theoretically perform any intellectual task a human could, perhaps even exceeding our potential.

Let’s also touch on ‘machine learning’: a subset of AI in which systems use algorithms to parse data (read: learn from it) and then make a ‘determination’ or prediction.

Instead of hand-coding software routines with specific instructions to accomplish a particular task, the system is “trained” using vast troves of data, and its algorithms learn how to perform a task.
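As a rough, illustrative sketch (our own toy example, not anything from a production system), here’s what ‘trained rather than hand-coded’ looks like in practice, using Python and scikit-learn. The task of flagging suspicious messages, and every sample text in it, are our own assumptions:

```python
# A minimal sketch of "trained, not hand-coded", using scikit-learn on toy data.
# The example task (flagging suspicious messages) and all sample texts are
# illustrative assumptions, not taken from the article.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hand-coded approach: explicit rules written by a human.
def rule_based_flag(message: str) -> bool:
    return "password" in message.lower() or "urgent transfer" in message.lower()

# Machine-learning approach: the system learns the pattern from labelled examples.
train_texts = [
    "please confirm your password immediately",
    "urgent transfer required before close of business",
    "minutes from yesterday's project meeting",
    "lunch menu for the staff canteen this week",
]
train_labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = benign

vectoriser = CountVectorizer()
features = vectoriser.fit_transform(train_texts)

model = MultinomialNB()
model.fit(features, train_labels)  # "training" on data instead of hand-coding rules

new_message = ["urgent: send your password for the transfer"]
prediction = model.predict(vectoriser.transform(new_message))
print("suspicious" if prediction[0] == 1 else "benign")
```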

Generative AI

Generative AI is a system trained to create new content that could pass for being created by a human. Unlike discriminative models that only interpret input data, generative models actually produce new data by themselves.

Generative AI systems are often built using machine learning techniques such as Generative Adversarial Networks (GANs). GANs consist of two components: the ‘Generator’, which produces the data, and the ‘Discriminator’, which evaluates it.

Although a gross over-simplification, this can be understood as the ‘left’ and ‘right’ hemispheres of our own brain: the creative side of ourselves along with the part that can make the corrections and spot the errors.
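For the technically curious, here is a deliberately tiny sketch of that Generator/Discriminator tug-of-war in PyTorch. The toy task (generating numbers that look drawn from a simple bell curve), the network sizes and the hyperparameters are all our own illustrative assumptions, not a real-world model:

```python
# A minimal GAN sketch in PyTorch: the Generator fabricates samples and the
# Discriminator judges whether samples look real. The toy task (matching a
# 1-D Gaussian) and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(200):
    real = torch.randn(32, 1) * 2 + 5          # "real" data drawn from N(5, 2)
    noise = torch.randn(32, 8)
    fake = generator(noise)                    # the Generator produces candidates

    # Discriminator step: learn to tell real samples from fakes.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the Discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

print("generated sample mean:", generator(torch.randn(256, 8)).mean().item())
```

The two networks train against each other: as the Discriminator gets better at spotting fakes, the Generator is pushed to produce ever more convincing output.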

Let’s bring this all back to what’s actually happening in the world, right now.

AI at Work

An increasing number of organisations are beginning to implement Generative AI in their business. Whether it’s internal processes such as recruitment and finance functions or deliverables written by ChatGPT, AI has already entered the equation. It can cut costs, shorten delivery times and perfect our programming.

So, what’s the problem?

The problem is that we have invited the fox into the chicken coop. Our Risk Management techniques haven’t yet digested and assessed the risk that both Narrow and Generative AI tools represent to our assets.

Generative AI tools are like whale sharks: constantly ingesting publicly available information for self-training. This means said tools could infer (then reproduce) information never intended for public consumption.

The other factor here is that generative AI tools make mistakes.

In short, they ‘hallucinate’. Also referred to as a ‘confabulation’, this is a confident response that does not seem to be justified by its training data. And it happens all the time.

For example, when asked to write up a quarterly earnings story for Tesla, ChatGPT confidently came back with a set of numbers that it had seemingly plucked from thin air. One problem: the numbers didn’t correspond to any real Tesla data.

Just like us, AI is fallible, and we must treat it accordingly. Asking it for tips on system design, using unchecked code it has written or relying on it for risk management advice are all ill-advised, especially without careful human controls and checks and balances on what it produces.

We are not suggesting we ‘throw the baby out with the bathwater’ and ban the use of generative AI in our organisations outright. No one wants to be left behind. But we do now need to configure our systems to spot ‘red flags’ in all its output and activity.

Red Flags 

What red flags? Just like an insider threat, Generative AI’s behaviour must be ring-fenced by carefully configured role-based access control, alongside SIEM systems that alert administrators to any attempt at privilege escalation, unauthorised access to sensitive data or interaction with the physical world. In other words, Zero Trust.
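To make that concrete, here is an illustrative sketch of ring-fencing an AI service account with role-based access control and alerting. The account name, role names, permissions and logging setup are hypothetical assumptions of ours; in a real environment these alerts would feed your existing SIEM rather than a local logger:

```python
# A minimal sketch of ring-fencing an AI service account with role-based access
# control and alerting. Role names, permissions and the logging destination are
# hypothetical; in practice the alerts would feed an existing SIEM.
import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(message)s")
alert_log = logging.getLogger("siem-alerts")

# Least-privilege role assigned to the generative AI tool.
ROLE_PERMISSIONS = {
    "ai-assistant": {"read:public-docs", "write:draft-content"},
}

def request_access(account: str, role: str, permission: str) -> bool:
    """Grant the request only if it falls within the account's role."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    if not allowed:
        # Any out-of-role request is treated as a red flag, just like an insider threat.
        alert_log.warning("RED FLAG: %s (role=%s) attempted %s", account, role, permission)
    return allowed

# The AI drafting content within its role: permitted.
request_access("genai-svc", "ai-assistant", "write:draft-content")
# The AI trying to read HR records: denied, and an alert is raised.
request_access("genai-svc", "ai-assistant", "read:hr-records")
```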

Generative AI tools can introduce security risks because they can leak sensitive data, introduce security vulnerabilities into code they’ve written and supply erroneous information when designing virtual or physical architectures. Just like a humanoid bad actor.

Additionally, just like the Gremlins in the movie of the same name, we must also be careful about what we feed our Generative AI tools. Our information security policies should forbid feeding AI personal data, source code, business plans and certain financial data, because, just like certain personnel, we can’t trust what it might do with that data, or where it may end up.
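By way of illustration only, here is a minimal sketch of the kind of pre-submission check such a policy implies. The patterns below are deliberately simplistic placeholders of our own devising; a real deployment would rely on proper data loss prevention tooling and classification labels, not a handful of regexes:

```python
# A minimal sketch of a "don't feed the Gremlins" check: screen a prompt for
# obviously sensitive material before it reaches an external generative AI tool.
# The patterns are deliberately simplistic illustrations; real deployments would
# use proper DLP tooling and data classification, not a few regexes.
import re

BLOCKED_PATTERNS = {
    "personal data (email address)": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "personal data (UK NI number)": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
    "source code": re.compile(r"\b(def|class|import)\s|#include"),
    "financial data": re.compile(r"\b(EBITDA|forecast|P&L)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the policy categories the prompt appears to violate."""
    return [label for label, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarise this: contact jane.doe@example.com about the Q3 EBITDA forecast."
violations = screen_prompt(prompt)
if violations:
    print("Blocked before sending to the AI tool:", ", ".join(violations))
else:
    print("Prompt cleared for submission.")
```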

Generative AI shouldn’t replace our creativity; it should support it. It shouldn’t open further vulnerabilities; it should help us in the work of mitigating them.

At Risk Crew, we believe in technology working for us, not vice versa. By understanding the limitations of our technology, we can guard against it causing us unforeseen problems.

In conclusion, we should treat AI as an employee.

Perhaps one we don’t entirely trust, but one who is potentially useful. Just like anyone else in our organisation, let’s give AI information on a need-to-know basis.

This technology is evolving continually, and we must keep up rather than fear it: updating our approach, policies and controls as it does.

AI must be our tool, and never the other way around.

Written by: Sam Raven

Risk Crew