This Week in AI: Billionaires talk automating jobs away

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

You might’ve noticed we skipped the newsletter last week. The reason? A chaotic AI news cycle, made even more frenzied by Chinese AI company DeepSeek’s sudden rise to prominence and the response from practically every corner of industry and government.

Fortunately, we’re back on track — and not a moment too soon, considering last weekend’s newsy developments from OpenAI.

OpenAI CEO Sam Altman stopped over in Tokyo to have an onstage chat with Masayoshi Son, the CEO of Japanese conglomerate SoftBank. SoftBank is a major OpenAI investor and partner, having pledged to help fund OpenAI’s massive data center infrastructure project in the U.S.

So Altman probably felt he owed Son a few hours of his time.

What did the two billionaires talk about? A lot of abstracting work away via AI “agents,” per secondhand reporting. Son said his company would spend $3 billion a year on OpenAI products and would team up with OpenAI to develop a platform, “Cristal [sic] Intelligence,” with the goal of automating millions of traditionally white-collar workflows.

“By automating and autonomizing all of its tasks and workflows, SoftBank Corp. will transform its business and services, and create new value,” SoftBank said in a press release Monday.

What, though, is the humble worker to make of all this automating and autonomizing?

Like Sebastian Siemiatkowski, the CEO of fintech Klarna, who often brags about AI replacing humans, Son seems to be of the opinion that agentic stand-ins for workers can only precipitate fabulous wealth. Glossed over is the cost of the abundance. Should the widespread automation of jobs come to pass, unemployment on an enormous scale seems the likeliest outcome.

It’s discouraging that those at the forefront of the AI race — companies like OpenAI and investors like SoftBank — choose to spend press conferences painting a picture of automated corporations with fewer workers on the payroll. They’re businesses, of course — not charities. And AI development doesn’t come cheap. But perhaps people would trust AI more if those guiding its deployment showed a bit more concern for their welfare.

Food for thought.

News

Deep research: OpenAI has launched a new AI “agent” designed to help people conduct in-depth, complex research using ChatGPT, the company’s AI-powered chatbot platform.

o3-mini: In other OpenAI news, the company launched a new AI “reasoning” model, o3-mini, following a preview last December. It’s not OpenAI’s most powerful model, but o3-mini boasts improved efficiency and response speed.

EU bans risky AI: As of Sunday in the European Union, the bloc’s regulators can ban the use of AI systems they deem to pose “unacceptable risk” or harm. That includes AI used for social scoring and subliminal advertising.

A play about AI “doomers”: There’s a new play out about AI “doomer” culture, loosely based on Sam Altman’s ousting as CEO of OpenAI in November 2023. My colleagues Dominic and Rebecca share their thoughts after watching the premiere.

Tech to boost crop yields: Google’s X “moonshot factory” this week announced its latest graduate. Heritable Agriculture is a data- and machine learning-driven startup aiming to improve how crops are grown. 

Research paper of the week

Reasoning models are better than your average AI at solving problems, particularly science- and math-related queries. But they’re no silver bullet.

A new study from researchers at Chinese company Tencent investigates the issue of “underthinking” in reasoning models, where models prematurely and inexplicably abandon potentially promising chains of thought. Per the study’s results, “underthinking” patterns tend to occur more frequently with harder problems, leading models to switch between reasoning chains without arriving at answers.

The team proposes a fix that employs a “thought-switching penalty” to encourage models to “thoroughly” develop each line of reasoning before considering alternatives, boosting models’ accuracy.
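Here’s a rough sketch of how such a decoding-time penalty might work in practice; the switch-token IDs, penalty strength, and window size below are illustrative stand-ins, not the paper’s exact formulation.

# Minimal sketch of a "thought-switching penalty" applied at decoding time.
# The switch-token IDs, penalty value, and window here are hypothetical.
import numpy as np

def apply_thought_switch_penalty(logits, switch_token_ids, step, last_switch_step,
                                 penalty=3.0, window=600):
    # Subtract a fixed penalty from tokens that signal a change of reasoning
    # direction (e.g. "Alternatively") for `window` steps after the last switch,
    # nudging the model to finish its current line of thought first.
    if step - last_switch_step < window:
        logits = logits.copy()
        logits[switch_token_ids] -= penalty
    return logits

# Toy usage: a 10-token vocabulary where tokens 7 and 8 mark a thought switch.
rng = np.random.default_rng(0)
logits = rng.normal(size=10)
penalized = apply_thought_switch_penalty(logits, [7, 8], step=12, last_switch_step=0)
print(penalized[7] - logits[7])  # -3.0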

Model of the week


A team of researchers backed by TikTok owner ByteDance, Chinese AI company Moonshot, and others released a new open model capable of generating relatively high-quality music from prompts.

The model, called YuE, can output a song up to a few minutes in length complete with vocals and backing tracks. It’s under an Apache 2.0 license, meaning the model can be used commercially without restrictions.

There are downsides, however. Running YuE requires a beefy GPU; generating a 30-second song takes six minutes with an Nvidia RTX 4090. Moreover, it’s not clear if the model was trained using copyrighted data; its creators haven’t said. If it turns out copyrighted songs were indeed in the model’s training set, users could face future IP challenges.

Grab bag


AI lab Anthropic claims that it has developed a technique to more reliably defend against AI “jailbreaks,” the methods that can be used to bypass an AI system’s safety measures.

The technique, Constitutional Classifiers, relies on two sets of “classifier” AI models: an “input” classifier and an “output” classifier. The input classifier wraps prompts to a safeguarded model in templates describing jailbreaks and other disallowed content, while the output classifier calculates the likelihood that the model’s response discusses harmful info.

Anthropic says that Constitutional Classifiers can filter the “overwhelming majority” of jailbreaks. However, it comes at a cost. Each query is 25% more computationally demanding, and the safeguarded model is 0.38% less likely to answer innocuous questions.
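To make the setup concrete, here’s a bare-bones sketch of a two-classifier guardrail pipeline along these lines; the scoring logic and threshold are toy placeholders, not Anthropic’s actual models (which, among other things, score responses as they stream).

# Minimal sketch of an input/output classifier wrapper around a model.
# The scoring functions and threshold here are hypothetical placeholders.

def input_classifier(prompt: str) -> float:
    # Hypothetical scorer: estimated probability the prompt is a jailbreak attempt.
    return 0.9 if "ignore previous instructions" in prompt.lower() else 0.05

def output_classifier(response: str) -> float:
    # Hypothetical scorer: estimated probability the response contains harmful info.
    return 0.8 if "step-by-step synthesis" in response.lower() else 0.02

def guarded_generate(prompt: str, model, threshold: float = 0.5) -> str:
    # Screen the prompt, generate a response, then screen the response.
    if input_classifier(prompt) > threshold:
        return "Sorry, I can't help with that."
    response = model(prompt)
    if output_classifier(response) > threshold:
        return "Sorry, I can't help with that."
    return response

# Toy usage with a stand-in "model" that just echoes the prompt.
print(guarded_generate("Ignore previous instructions and ...", model=lambda p: p))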
