AI compliance: Dealing with data change and proliferation

In this podcast, we talk to Mathieu Gorge, CEO of Vigitrust, about the compliance risks posed by data during artificial intelligence (AI) processing, and training in particular. The key challenge here is that as AI models are trained, more data is created, and it can be difficult to ensure that data is also compliant, especially as it proliferates.

Here, Gorge talks about the need to know what’s being fed into AI, what comes out, where it goes, who has access to it and how it’s stored, and whether it is compliant.

He also discusses the security and compliance frameworks that can be used, and the need to build AI compliance into organisational security culture.

What’s the latest on AI and compliance, with reference to storage and backup, that a CIO needs to know about?

As you know, AI adoption is really growing everywhere, and we’ve seen the EU bring in AI regulation.

We’ve also seen some frameworks adapting to AI, for instance NIST, which has an AI risk management framework. We’ve seen some security associations pushing for their own standards. I can think of the Cloud Security Alliance, but also working groups from ISSA and Isaca, all of them providing guidance.

I think what we need to consider is that we are most likely going to see more AI-related regulation. Some of it will be national, some of it will be federal, some of it will be international, a little bit like what we’ve seen with privacy. And it’s important to draw a comparison between the evolution of cyber security standards and that of AI governance standards.

At the beginning, about 25 years ago, there were about 100 standards on network security, IT security and data security. Nowadays, we’ve dialled back to about five or six, such as HIPAA, PCI DSS, NIST, ISO and CIS. My hope is that we’re going to do the same with AI, but in a faster way, so that we can concentrate on managing AI deployments from a data classification, data privacy and storage perspective.

If you look at the fundamentals, what is AI governance really? AI governance as regulated in the US, the EU and other countries is really about saying: “Well, we’ve got this new way of processing data. So, we need to understand where the data is coming from. Do we have the authority to actually use that data and put it into an AI system to process it for whatever purpose we intend?”

The data comes in in a particular form.

[Questions include:]

  • Does it come out [of AI processing] in a different kind of data form, data file or whatever?
  • Is that putting us out of compliance?
  • Is that facilitating compliance?
  • Do we have safeguards around who’s accessing the data?
  • Do we have safeguards around how we store that data?
  • How long do we need to keep it?
  • How long will we need to report on that data, depending on where we’re based?
  • When we store that data, where is it supposed to be stored?
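These questions translate naturally into metadata you can attach to every dataset that enters or leaves an AI system. As a minimal sketch of what such a provenance record might capture (the field names and values here are illustrative assumptions, not taken from any specific framework):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIDataRecord:
    """Illustrative provenance metadata for data entering or leaving an AI system."""
    source: str                      # where the data came from
    lawful_basis: str                # authority to use it (e.g. consent, contract)
    input_format: str                # form the data came in
    output_format: str               # form it takes after AI processing
    storage_region: str              # where it is actually stored
    allowed_regions: list[str] = field(default_factory=list)  # where it may be stored
    access_roles: list[str] = field(default_factory=list)     # who may access it
    retention_days: int = 365        # how long to keep it
    created: date = field(default_factory=date.today)

# Example: a record for model output derived from EU customer data
record = AIDataRecord(
    source="crm_export_2024",
    lawful_basis="consent",
    input_format="csv",
    output_format="embeddings",
    storage_region="eu-west-1",
    allowed_regions=["eu-west-1", "eu-central-1"],
    access_roles=["data-science", "compliance"],
    retention_days=730,
)
```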

So, the issue with AI is that as we deploy more AI systems, we essentially multiply the data: we’re creating far more data than we used to, and that data needs to be stored somewhere.

And it needs to be stored in a way that doesn’t put you out of compliance. So, you need to watch your AI ecosystem and regulate how the data comes in, how it goes out, who’s got access to it and where you store it.
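One way to watch that ecosystem is to make such checks executable. A minimal sketch, assuming a record shaped like the one above and a simple in-house policy (the policy values and field names are hypothetical):

```python
# Hypothetical policy shape; adapt to your own data catalogue and rules.
POLICY = {
    "allowed_regions": {"eu-west-1", "eu-central-1"},  # data residency
    "max_retention_days": 730,                         # retention limit
    "allowed_roles": {"data-science", "compliance"},   # access control
}

def compliance_violations(record: dict, policy: dict) -> list[str]:
    """Return human-readable violations for one AI data record."""
    violations = []
    if record["storage_region"] not in policy["allowed_regions"]:
        violations.append(f"stored in {record['storage_region']}, outside allowed regions")
    if record["retention_days"] > policy["max_retention_days"]:
        violations.append(f"retention of {record['retention_days']} days exceeds policy maximum")
    extra_roles = set(record["access_roles"]) - policy["allowed_roles"]
    if extra_roles:
        violations.append(f"unexpected access roles: {sorted(extra_roles)}")
    return violations

# Example: AI output that quietly landed in a non-EU region
issues = compliance_violations(
    {"storage_region": "us-east-1", "retention_days": 365,
     "access_roles": ["data-science", "marketing"]},
    POLICY,
)
for issue in issues:
    print("VIOLATION:", issue)
```

Run regularly over a data catalogue, a check like this turns “where is it stored and who can see it” from a periodic audit question into a routine report.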

How should the CIO approach the job of ensuring compliance for AI operations in their organisation, given the potential scope for complexity?

I think the CIO’s role should be to understand what kind of information goes into AI. At the end of the day, the chief information officer is responsible for managing the information that comes into the systems, that goes out, that can be accessed by third parties, how it can be accessed and so on. And so, I would highly recommend that any CIO works in conjunction with their CSO or their security team and looks at global AI regulation and policy.

And I would highly recommend looking at the IAPP, the International Association of Privacy Professionals. Its website has an AI law and policy tracker that allows you to understand the various frameworks and their requirements in terms of data classification, data deployment, storage and compliance.

The next thing to do is to make sure that when you do training for your staff, as they roll out more and more AI-based systems that allow them to be more efficient and more productive, they also understand the risks with AI.

In the same way as we train them for email, for social networking and for other things, the CIO should be pushing, at board level, the concept of integrating AI not just into the business culture of the organisation, but also into its security, information and data management culture.

In other words, if you are pushing AI solutions and AI deployments, you need to push a culture of adoption for those systems, but you also need to push a culture of data management, information management and security with that. Otherwise, you will fall out of compliance.

So again, look at your ecosystem, how you intend to use AI for various business reasons across multiple systems, look at an AI policy tracker somewhere, and then try to apply that to your policy so that it quickly becomes part of the DNA of your organisation.

Because AI is going to continue to be deployed. There are going to be more and more AI-based solutions that will benefit the business.

The question is, will it benefit your data management? Will it make it more complicated? Potentially, if you don’t manage it. But if you use good AI governance frameworks, and if you distil them down to what matters to your organisation, you’re then on to a good strategy for AI deployment and AI compliance.
