At Saudi Arabia’s Leap 2025 event in February, the kingdom announced huge investment in artificial intelligence (AI), while the conference programme focused heavily on key questions about AI. These ranged from the next steps, such as getting business value from enterprise AI and the challenges of developing agentic AI, to the more distant future of robotics and causal AI.
Also addressed was the need to consider how AI will change society and how to ensure it is a force for good rather than one that undermines social cohesion.
Agentic AI needs common sense
Yaser Al-Onaizan, CEO of the National Center for AI at the Saudi Data and AI Authority (SDAIA), focused on agentic AI – AI that works on our behalf – as the next step.
“The large language models [LLMs] understand how language is constructed, the sequences that people generate,” he said. “But the promise of AI is that it will be in everything we do and touch every day. But it needs to be invisible. It cannot be in your face – it should be listening to you, understanding you and doing things based on your opinion, without you even asking sometimes.
“So, for example, you can interact with a model and instead of just giving you information about flights, it can go on and reserve flights or make a hotel reservation for you.”
But, said Al-Onaizan, the challenge is for AI to work on humans’ behalf and get things right – to understand “common sense” – so that the decisions it makes autonomously fit what is practical for the people it acts for.
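As a rough illustration of the kind of assistant Al-Onaizan describes – one that goes beyond answering questions to acting on them – the sketch below shows a minimal tool-calling loop in Python. The tools, the canned model reply in `call_llm` and the flight data are all hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal sketch of a tool-calling assistant. The tools and the canned
# model reply below are placeholders standing in for a real LLM and
# real booking systems.
import json

def search_flights(origin: str, destination: str, date: str) -> list[dict]:
    # Placeholder: a real tool would query an airline or travel API.
    return [{"flight": "SV123", "origin": origin, "destination": destination,
             "date": date, "price_usd": 420}]

def book_flight(flight: str) -> dict:
    # Placeholder: a real tool would call a booking system.
    return {"status": "confirmed", "flight": flight}

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def call_llm(messages: list[dict]) -> dict:
    # Stand-in for a real model call: it first asks to search, then to book.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search_flights",
                "arguments": {"origin": "RUH", "destination": "LHR",
                              "date": "2025-03-01"}}
    return {"tool": "book_flight", "arguments": {"flight": "SV123"}}

def run_agent(user_request: str, max_steps: int = 4) -> list[dict]:
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        decision = call_llm(messages)          # model chooses the next tool
        result = TOOLS[decision["tool"]](**decision["arguments"])
        messages.append({"role": "tool", "content": json.dumps(result)})
        if decision["tool"] == "book_flight":  # booking completes the task
            break
    return messages

print(run_agent("Book me a flight from Riyadh to London on 1 March"))
```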
Agentic AI to causal AI
Meanwhile, Lamia Youseff, founder of Jazz Computing, is an industry thought leader in AI and cloud, with a CV that includes Google, Microsoft, Facebook and Apple, and academic research institutions including Stanford, MIT and UCSB.
She says AI can be seen as moving through several phases and inflection points:
- Enterprise AI – for which big data led the way by gathering huge amounts of data for analysis. Enterprise AI introduces optimisations and has brought “a tsunami of new products and services”.
- Agentic AI – the next step, which over the next two years will bring agents that work on our behalf, taking commands, conversing with LLMs, breaking commands into steps and taking actions (see the sketch after this list).
- Robotics and humanoids – which will require great innovation in communications and in machine understanding of human language to combine LLMs with robotics, such as in driverless cars, and, critically, the ability to interact in a 3D world.
- Causal AI – where AI can predict incredibly complex real-world events, such as stock market fluctuations.
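The sketch below, referenced in the agentic AI item above, illustrates the “break a command into steps” pattern in its most minimal form. The hard-coded plan stands in for what an LLM planner would actually produce; the steps and their results are invented purely for illustration.

```python
# Minimal sketch of plan-then-execute: decompose a command into steps,
# then run each one. A hard-coded plan stands in for an LLM planner.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    description: str
    action: Callable[[], str]

def plan(command: str) -> list[Step]:
    # Stand-in planner: a real agent would ask an LLM to decompose the command.
    return [
        Step("find candidate hotels", lambda: "3 hotels found"),
        Step("compare prices", lambda: "cheapest is Hotel A at $150/night"),
        Step("reserve the room", lambda: "reservation confirmed"),
    ]

def execute(command: str) -> None:
    for i, step in enumerate(plan(command), start=1):
        print(f"Step {i}: {step.description} -> {step.action()}")

execute("Book me a hotel in Riyadh for the conference")
```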
Unlocking AI for the enterprise
Elsewhere, speakers focused on how to gain business value from AI. They included Aidan Gomez, CEO of Canadian company Cohere, which specialises in the use of LLMs in the enterprise.
His focus is on making LLMs useful for enterprises by helping build an application stack that makes use of them.
“You have to be technical, you need to be a developer, to be able to build something on top of this model to create value on the other side,” he said.
Challenges include lowering the barriers to integrating AI with internal systems, and ensuring security as AI moves out of the proof-of-concept phase and starts to touch the most sensitive customer data.
“The biggest thing the enterprise world needs is good solutions that can plug in and go,” he said. “The barrier is for enterprises to adopt AI securely, which means completely private deployments. Then we can start to shift the work that humans are doing onto these models. To succeed, they need to be able to use the systems that humans today use to get their job done and integrate generative AI with the internal software and systems.”
Gomez added: “When people were just testing out the technology, the security piece wasn’t so important, because they weren’t putting mission-critical data into those systems. Now, we’re moving out of the proof-of-concept phase and we’re going into production, and these models are touching the most sensitive customer data, so security is front of mind.”
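The pattern Gomez describes – a completely private deployment that sits next to the systems staff already use – can be sketched roughly as below. The document store, the model call and the response are all hypothetical stand-ins, not Cohere’s actual API.

```python
# Minimal sketch of the "private deployment" pattern: retrieve from an
# internal system, then ask a model hosted inside the company network.
# All names here are hypothetical placeholders.

def fetch_internal_docs(query: str) -> list[str]:
    # Placeholder: a real deployment would search systems staff already
    # use (CRM, wiki, ticketing) rather than a public index.
    return ["Customer 42 renewed their contract in January 2025."]

def call_private_model(prompt: str) -> str:
    # Stand-in for a request to a model endpoint hosted on-premise or in
    # a private cloud, so sensitive data never leaves the company network.
    return "Customer 42 last renewed in January 2025."

def answer(question: str) -> str:
    context = "\n".join(fetch_internal_docs(question))
    prompt = f"Answer using only this internal context:\n{context}\n\nQ: {question}"
    return call_private_model(prompt)

print(answer("When did customer 42 last renew their contract?"))
```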
He, too, pointed to agentic AI as the next stage and the challenges that need to be solved. The first is the use of reasoning models and the second is learning from experience.
“Just think about what reasoning is,” said Gomez. “What happens now is you can ask the model what’s 1+1, or you can ask it to prove Fermat’s last theorem, and the model would spend the same amount of time answering both of those questions, which makes no sense.
“With reasoning, you can spend different amounts of energy on different difficulties and problems. So now we can approach things dramatically more efficiently, and more effectively. That’s a major unlock for agentic AI. You want agents to be able to think through problems and really reason about them,” he added.
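A toy sketch of that idea – spending more “thinking” effort on harder questions – is shown below. The difficulty heuristic and the token budgets are invented for illustration only; real reasoning models decide this internally.

```python
# Toy sketch of variable reasoning effort: easy questions get a small
# budget, hard ones get far more intermediate "thinking" tokens.
# The heuristic and budgets are invented for illustration.

def estimate_difficulty(question: str) -> str:
    hard_markers = ("prove", "theorem", "optimise", "derive")
    return "hard" if any(m in question.lower() for m in hard_markers) else "easy"

def reasoning_budget(question: str) -> int:
    # Unlike a fixed-compute model, the budget scales with difficulty.
    return {"easy": 32, "hard": 4096}[estimate_difficulty(question)]

for q in ("What is 1 + 1?", "Prove Fermat's last theorem."):
    print(q, "->", reasoning_budget(q), "reasoning tokens")
```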
“The second thing is learning from experience. So, with a human, when you tell them, ‘You did something wrong. Here’s how to fix it’, they remember that and they learn forever not to make that same mistake. When models have that capability to learn from experience with the user it will unlock the ability to teach your own model just by interacting with it.”
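One simple way to picture “learning from experience” is persisting a user’s corrections and feeding them back into future prompts, as in the sketch below. The storage format and prompt wiring are assumptions for illustration, not a description of how any specific model does it.

```python
# Minimal sketch of learning from user feedback: store each correction
# and prepend it to later prompts. Format and wiring are illustrative only.

corrections: list[str] = []   # in practice this would be persisted per user

def remember_correction(mistake: str, fix: str) -> None:
    corrections.append(f"Previously you got '{mistake}' wrong; instead: {fix}")

def build_prompt(user_message: str) -> str:
    memory = "\n".join(corrections)
    return f"Lessons from past feedback:\n{memory}\n\nUser: {user_message}"

remember_correction("the refund window", "refunds are allowed within 30 days")
print(build_prompt("A customer asks about a refund after three weeks."))
```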
AI: We must not be passive to the dangers
Finally, Lambert Hogenhout, chief of data, analytics and emerging technologies at the United Nations, warned of the dangers of AI to human society if we are passive in our approach to it. In other words, AI has the capacity to undermine human agency and even be a force that can work against us if uncontrolled.
In his view, the key risks posed by AI include threats to:
- Autonomy – in which AI makes you more competent, almost perfect, but means you are thereafter forced to keep using AI to remain the person it has made you.
- Identity – in which AI can build a replica of someone’s personality if it has enough information about the person, taking identity theft to a new level.
- Purpose – where if AI replaces many jobs, what is left for humans, and how do we make sure we focus on what humans are good at, such as cooperation and creativity?
- Happiness and connection to society – it’s very important for humans to feel happy and connected to society. If that is undermined, there will be problems.
“AI is giving us lots of ways to improve our business and our private life,” said Hogenhout. “But in the long term, I think nobody can predict what the world is going to look like 20 years from now.
“Some people have a very positive view, where the robots do the work and we can spend our days playing golf or watching interesting movies. But the dystopian view says we will lose our purpose in life, and for humans it is going to become quite miserable, except for a few AI billionaires.”
Hogenhout pointed to how smartphones have changed everything and how AI is likely to do the same.
First, he spoke about the issue of autonomy.
“I’ve always wanted to be funnier when I respond to messages from my friends. I’ve wanted to be more eloquent when I write emails to my boss. And you know, I can with the help of AI. But it means I rely on AI for every communication, and when everybody does that, when everybody is perfect, can you afford not to use it, to be the only non-augmented human? We’re going to be forced to augment ourselves with AI.”
And on identity, Hogenhout talked about how AI has the potential to comprehensively clone a human being’s personality.
“What if somebody takes everything I’ve ever written – emails, posts, everything – and builds something that knows how to respond like me? There are actually companies doing that already.
“There’s an app called Hereafter that’s meant for elderly people. Your grandfather, for example, can be interviewed and information added about his life. Then, once grandpa dies, on your phone there is grandpa’s voice and you can ask him things like what’s happening with the Super Bowl or a football match and it responds exactly like Grandpa would have.”
That raises the question, said Hogenhout, of what distinguishes us if identity can be replicated so easily, down to the level of voice and opinions.
Finally, Hogenhout looked at purpose.
“The meaning in our lives comes from the work we do, but it’s already clear a lot of jobs are going to be replaced. It’s true, we do dumb stuff a lot of the time, but we have incredible skills and abilities,” he said.
“What makes us quite unique as a species is our creativity. There’s also this sense of not accepting reality as it is. But, we’re not just innovators. I think we’re very good at cooperating. There’s a reason that we’re all together here at this conference, because we want to learn from each other. These connections are very important in society.
“It’s important for us to take decisions and to feel fulfilment. We want to make sure AI increases living connections, that we are not eliminated, that it makes a good society. A society where a number of people are excluded is not going to work. It will create problems,” added Hogenhout.
His conclusion is that AI will change us in big ways, but we need to ensure we act intentionally. Otherwise, large swathes of humanity risk being locked out of a core whose lives are augmented by AI, leading to a broken society and the problems that flow from it.
“At the moment, I think we’re too passive. We’re just waiting for the next amazing AI system to come out. We’re not thinking ahead, thinking about how this is going to affect our lives.”