Fortifying the future: The pivotal role of CISOs in AI operations


The widespread adoption of artificial intelligence (AI) applications and services is driving a fundamental shift in how chief information security officers (CISOs) structure their cyber security policies and strategies.

The unique characteristics of AI – its data-intensive nature, complex models and potential for autonomous decision-making – introduce new attack surfaces and risks that necessitate immediate and specific policy enhancements and strategic recalibrations.

The primary goals are to prevent inadvertent data leakage by employees using AI and Generative AI (GenAI) tools, and to ensure that decisions based on AI systems are not compromised by malicious actors, whether internal or external. Below is a strategic blueprint for CISOs to align cyber security with the secure deployment and use of GenAI systems.

  • Revamp acceptable use and data handling policies for AI: Existing acceptable use policies (AUPs) must be updated specifically to address the use of AI tools, explicitly prohibiting the input of sensitive, confidential, or proprietary company data into public or unapproved AI models. Sensitive data could include customer personal information, financial records or trade secrets, and policies should clearly define what constitutes ‘sensitive’ data in the context of AI. Data handling policies must also detail requirements for anonymisation, pseudonymisation, and tokenisation of data used for internal AI model training or fine-tuning (see the pseudonymisation sketch after this list).
  • Mitigate AI system compromise and tampering: CISOs must focus on AI system integrity and security. Embed security practices across the entire AI development pipeline, from secure coding for AI models to rigorous testing for vulnerabilities such as prompt injection, data poisoning and model inversion. Implement strong filters and validators for all data entering the AI system – prompts, and data retrieved for retrieval-augmented generation (RAG) – to prevent adversarial attacks (see the prompt-screening sketch after this list). Similarly, all AI-generated outputs must be sanitised and validated before being presented to users or used in downstream systems, to avoid malicious injections. Wherever feasible, deploy AI systems with explainable AI (XAI) capabilities, allowing for transparency into how decisions are made. For high-stakes decisions, mandate human oversight when handling sensitive data or performing irreversible operations, providing a final safeguard against compromised AI output.
  • Build resilient and secure AI development pipelines: Securing AI development pipelines is paramount to ensuring the trustworthiness and resilience of AI applications integrated into critical network infrastructure, security products and collaborative solutions, and it requires embedding security throughout the entire AI lifecycle. GenAI code, models and training datasets are part of the modern software supply chain. Secure AIOps pipelines with CI/CD best practices, code signing and model integrity checks (see the artifact verification sketch after this list). Scan training datasets and model artifacts for malicious code or trojaned weights, and vet third-party models and libraries for backdoors and licence compliance.
  • Implement a comprehensive AI governance framework: CISOs must champion the creation of an enterprise-wide AI governance framework that embeds security from the outset. AI risks should not be treated in isolation, but woven into enterprise-wide risk management and compliance practices. The framework should define explicit roles and responsibilities for AI development, deployment and oversight, establishing an AI-centric risk management process. A centralised inventory of approved AI tools should be maintained, along with their risk classifications. This inventory is central to managing the risk associated with “shadow AI” – the use of unsanctioned AI tools or services: mandate the use of approved AI tools only, and block all others (see the egress allowlist sketch after this list).
  • Strengthen data loss prevention (DLP) tools for AI workflows: DLP strategies must evolve to detect and prevent sensitive data from entering unauthorised AI environments or being exfiltrated via AI outputs. This involves configuring DLP tools to specifically monitor AI interaction channels (eg chat interfaces and API calls to LLMs) and identify patterns indicating that sensitive data is being input. AI-specific DLP rules must be developed to block or flag attempts to paste personally identifiable information (PII), intellectual property or confidential code into public AI prompts (see the PII detection sketch after this list).
  • Enhance employee and leadership AI awareness training: Employees are often the weakest link in the organisation. CISOs must implement targeted, continuous training programmes covering the acceptable use of AI, recognition of AI-centric threats, prompt engineering best practices, and how to report security incidents related to the misuse of AI tools and potential compromise.
  • Institute vendor risk management for AI services: As companies increasingly rely on third-party AI services, CISOs must enhance their third-party risk management (TPRM) programmes to address AI-specific risks. They should define standards for assessing the security posture of the AI vendor’s supply chain, and insist on robust contractual clauses that mandate security standards, data privacy protections, liability for breaches, and audit rights for AI service providers. In-depth security assessments of AI vendors should scrutinise their data handling practices, model security, API security, and AI-specific incident response capabilities.
  • Integrate continual monitoring and adversarial testing: In the ever-evolving landscape of AI threats, static security measures are insufficient. CISOs should stress the importance of continual monitoring of AI systems to detect potential compromises, data leaks or adversarial attacks – signalled by unusual prompt patterns, unexpected outputs or sudden changes in model behaviour (see the behaviour monitoring sketch after this list). Regular red teaming and adversarial testing exercises, specifically designed to exploit AI vulnerabilities, help organisations spot weaknesses before malicious actors do.
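
The sketches below illustrate how several of these controls might look in practice; all are minimal, hedged illustrations in Python, not production implementations. First, pseudonymisation of records before internal model training or fine-tuning: this sketch assumes records arrive as dictionaries, the field names are hypothetical, and a real deployment would draw the key from a secrets manager rather than source code.

    import hashlib
    import hmac

    # Illustrative only: a production key would come from a secrets
    # manager, never from source code.
    PSEUDONYM_KEY = b"replace-with-managed-secret"

    # Hypothetical field names; map these to the real schema.
    SENSITIVE_FIELDS = {"customer_name", "email", "account_number"}

    def pseudonymise(record: dict) -> dict:
        """Replace sensitive values with keyed hashes so records can
        still be joined and grouped without identifying a person."""
        clean = {}
        for field, value in record.items():
            if field in SENSITIVE_FIELDS:
                digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
                clean[field] = digest.hexdigest()[:16]
            else:
                clean[field] = value
        return clean

    print(pseudonymise({"customer_name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}))

Because the hash is keyed and deterministic, the same customer maps to the same pseudonym across datasets, preserving utility for training while removing direct identifiers.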
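
Next, input filtering and output sanitisation. This prompt-screening sketch shows the shape of the control rather than a complete defence: the deny-list patterns are illustrative, and production systems typically layer pattern matching with classifier-based injection detection.

    import re

    # Illustrative deny-list of phrases common in prompt-injection
    # attempts; real filters combine patterns with ML classifiers.
    INJECTION_PATTERNS = [
        r"ignore (all|previous) instructions",
        r"reveal (your|the) system prompt",
        r"disregard .*(policy|guardrails)",
    ]

    def screen_prompt(prompt: str) -> str:
        """Reject prompts that match known injection phrasings."""
        for pattern in INJECTION_PATTERNS:
            if re.search(pattern, prompt, re.IGNORECASE):
                raise ValueError("Prompt rejected: possible injection attempt")
        return prompt

    def sanitise_output(text: str) -> str:
        """Strip markup so a poisoned response cannot smuggle script
        into a downstream web view."""
        return re.sub(r"<[^>]+>", "", text)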
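
For pipeline integrity, model artifacts can be pinned to known-good digests at build time and verified at load time. This artifact verification sketch assumes a simple JSON manifest of file paths and SHA-256 hashes; the manifest format is hypothetical, and in practice the manifest itself would be code-signed.

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream the file so large model weights need not fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_artifacts(manifest_path: Path) -> None:
        """Fail closed if any artifact differs from its pinned digest."""
        manifest = json.loads(manifest_path.read_text())
        for entry in manifest["artifacts"]:
            if sha256_of(Path(entry["path"])) != entry["sha256"]:
                raise RuntimeError(f"Integrity check failed: {entry['path']}")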
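
The approved-tool inventory becomes enforceable when it backs an egress control. In this egress allowlist sketch, the inventory is assumed to be a simple mapping of approved endpoints to risk tiers; the hostnames are placeholders.

    from urllib.parse import urlparse

    # Hypothetical inventory; a real register would live in a GRC
    # platform and feed the proxy or firewall policy automatically.
    APPROVED_AI_SERVICES = {
        "internal-llm.example.com": "low",
        "approved-vendor-api.example.com": "medium",
    }

    def is_sanctioned(url: str) -> bool:
        """Egress gate: only inventoried AI endpoints are permitted,
        turning the shadow-AI policy into enforcement."""
        return urlparse(url).hostname in APPROVED_AI_SERVICES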
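
AI-specific DLP rules can start from pattern matching on prompts before they leave the network. This PII detection sketch is deliberately simple; production DLP adds validated patterns (eg Luhn checks for card numbers), proximity analysis and ML classification.

    import re

    # Illustrative detectors only.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def check_prompt(prompt: str) -> list:
        """Return the PII categories detected; the caller blocks or
        flags the prompt on any hit."""
        return [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(prompt)]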
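
Finally, continual monitoring can begin with simple behavioural baselines. This behaviour monitoring sketch flags sudden shifts in one crude signal (response length) against a rolling window; real monitoring would also track refusal rates, toxicity scores and embedding drift.

    from collections import deque
    from statistics import mean, stdev

    class BehaviourMonitor:
        """Alert when a behaviour signal deviates sharply from a
        rolling baseline – a cheap tripwire for model tampering."""

        def __init__(self, window: int = 200, threshold: float = 3.0):
            self.history = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, response_length: int) -> bool:
            alert = False
            if len(self.history) >= 30:  # wait for a stable baseline
                mu, sigma = mean(self.history), stdev(self.history)
                if sigma and abs(response_length - mu) > self.threshold * sigma:
                    alert = True  # unusual output: escalate for review
            self.history.append(response_length)
            return alert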

CISOs who make these changes will be better able to manage the risks associated with AI, enabling security practices to keep pace with or get ahead of AI deployment. This requires a shift from reactive defence to a proactive, adaptive security posture woven into the fabric of AI initiatives.

Aditya K Sood is vice president of security engineering and AI strategy at Aryaka.


