AI security: Balancing innovation with protection

Remember the scramble for USB blockers because staff kept plugging in mysterious flash drives? Or the sudden surge in blocking cloud storage because employees were sharing sensitive documents through personal Dropbox accounts? Today, we face a similar scenario with unauthorised AI use, but this time, the stakes are potentially higher.

The challenge isn’t just about data leakage anymore, although that remains a significant concern. We’re now navigating territory where AI systems can be compromised, manipulated, or even “gamed” to influence business decisions. While there is little evidence yet of widespread malicious AI manipulation, the potential for such attacks exists and grows with our increasing reliance on these systems. As Bruce Schneier aptly questioned at the RSA Conference earlier this year, “Did your chatbot recommend a particular airline or hotel because it’s the best deal for you, or because the AI company got a kickback?”

Just as shadow IT emerged from employees seeking efficient solutions to daily challenges, unauthorised AI use stems from the same human desire to work smarter, not harder. When the marketing team feeds corporate data into ChatGPT, their intent is not malicious; they’re simply trying to write better copy faster. Similarly, developers using unofficial coding assistants are often attempting to meet tight deadlines. However, each interaction with an unauthorised and unvetted AI system introduces potential exposure points for sensitive data.

The real risk lies in the potent combination of two factors – the ease with which employees can access powerful AI tools, and the implicit trust many place in AI-generated outputs. We must address both. While the possibility of AI system compromise might seem remote, the bigger immediate risk comes from employees making decisions based on AI-generated content without proper verification. Think of AI as an exceptionally confident intern: helpful and full of suggestions, but requiring oversight and verification.

Forward-thinking organisations are moving beyond simple restriction policies. Instead, they’re developing frameworks that embrace AI’s value while incorporating necessary and appropriate safeguards. This involves providing secure, authorised AI tools that meet employee needs while implementing verification processes for AI-generated outputs. It’s about fostering a culture of healthy scepticism and encouraging employees to trust but verify, regardless of how authoritative an AI system might seem.

Education plays a crucial role, but not through fear-based training about AI risks. Instead, organisations need to help employees understand the context of AI use – how these systems work, their limitations, and the critical importance of verification. This includes teaching simple and practical verification techniques and establishing clear escalation pathways for when AI outputs seem suspicious or unusual.

The most effective approach combines secure tools with smart processes. Organisations should provide vetted and approved AI platforms, while establishing clear guidelines for data handling and output verification. This isn’t about stifling innovation – it’s about enabling it safely. When employees understand both the capabilities and constraints of AI systems, they are better equipped to use them responsibly.

Looking ahead, the organisations that will succeed in securing their AI initiatives aren’t those with the strictest policies – they’re those that best understand and work with human behaviour. Just as we learned to secure cloud storage by providing viable alternatives to personal Dropbox accounts, we’ll secure AI by empowering employees with the right tools while maintaining organisational security.

Ultimately, AI security is about more than protecting systems – it’s about safeguarding decision-making processes. Every AI-generated output should be evaluated through the lens of business context and common sense. By fostering a culture where verification is routine and questions are encouraged, organisations can harness AI’s benefits while mitigating its risks.

Like the brakes on an F1 car, which enable it to be driven faster, security isn’t about hindering work; it’s about enabling it to be done safely. We must never forget that human judgement remains our most valuable defence against manipulation and compromise.

Javvad Malik is lead security awareness advocate at KnowBe4
