DeepSeek will help evolve the conversation around privacy

The launch of DeepSeek prompted the familiar wave of ethical debates that now accompany the release of any large language model (LLM). Questions about data usage, transparency and bias are well covered, but when the technology originates from China, they are joined by geopolitical and national security concerns. As we’ve seen with TikTok, worries about data handling quickly escalate into fears of state influence, national security risks and industrial espionage.

These fears aren’t without foundation. The accelerating AI arms race between the US and China has made AI a core pillar of national strategy. Both nations now view leading the AI race as an economic and technological priority. The result is a world in which every breakthrough model, whether American, Chinese or otherwise, is immediately scrutinised not just for its capabilities, but also for the geopolitical power shifts it sets in motion.

Data security in the face of AI

Big Tech companies in the US like OpenAI and Anthropic have come under justifiable scrutiny over how they gather and process data, but the launch of DeepSeek introduced an additional level of risk. China has a well-documented history of alleged state-sponsored corporate espionage and intellectual property theft – including the December hack of the US Treasury Department, which the US attributed to Chinese-backed hackers.

For CISOs and security leaders, the arrival of another powerful AI model with potential ties to the Chinese state should trigger a renewed focus on the security of their own data, particularly when it comes to protecting intellectual property and the sensitive information that underpins competitive advantage.

However, the real concern isn’t just what DeepSeek can do today, but how it might be trained tomorrow. LLMs are trained on vast datasets scraped from every publicly accessible source imaginable. But publicly available data alone won’t satisfy the demand for more powerful models. There is a growing risk that the next generation of LLMs could be trained, at least in part, on data obtained through less ethical means – whether via state-sponsored hacks, insider threats or large-scale scraping operations in legal grey areas.

This is not a distant possibility. The practice of data hoarding – stealing or storing encrypted data today with the intention of decrypting it in the future – is already well documented in the industry. For CISOs, that means the threat landscape isn’t limited to today’s vulnerabilities. Even encrypted data that’s safely stored today could become accessible within the lifespan of long-term business or government strategies.

How CISOs can mitigate the risk

The emergence of DeepSeek serves as a timely reminder for CISOs to revisit how their organisations think about data protection in the context of state-level threats. It starts with gaining full visibility into what data they hold, where it resides and who can access it.

However, visibility and control are only part of the solution. The technologies used to safeguard data also need to evolve. Privacy-enhancing technologies (PETs), some of which are quantum resilient, should be on the radar of any forward-thinking security team. At the same time, organisations should push their technology suppliers to adopt stronger encryption measures that will remain resilient, especially given the speed at which AI advances are coming to market and, in the longer term, a possible post-quantum era.

There is also a broader cultural shift required. Companies must recognise that threats to data security are no longer just the work of isolated hackers or financially motivated cyber criminals. Data has become a risk asset in our fractured geopolitical landscape. As the AI arms race continues to intensify, every scrap of proprietary data, from design files to customer behaviour patterns, takes on new strategic value – not just for competitors, but for nation states with the resources to systematically exploit it.

The arrival of DeepSeek is simply the latest reminder that the boundaries between technological innovation, economic competition and geopolitics have all but disappeared. For CISOs, that means the conversation about protecting data needs to evolve into one that acknowledges data as not just a business asset, but a target in a broader contest for economic and geopolitical power.

Dr Nick New is CEO at Optalysys. With a PhD in Optical Pattern Recognition from Cambridge, Nick has a strong foundation in optical technology. At Optalysys, he is pioneering advancements in silicon photonics and fully homomorphic encryption (FHE).
