MIT study finds that AI doesn’t, in fact, have values

A study went viral several months ago for implying that, as AI becomes increasingly sophisticated, it develops “value systems” — systems that lead it to, for example, prioritize its own well-being over humans. A more recent paper out of MIT pours cold water on that hyperbolic notion, drawing the conclusion that AI doesn’t, in fact, hold any coherent values to speak of.

The co-authors of the MIT study say their work suggests that “aligning” AI systems — that is, ensuring models behave in desirable, dependable ways — could be more challenging than is often assumed. AI as we know it today hallucinates and imitates, the co-authors stress, making it in many aspects unpredictable.

“One thing that we can be certain about is that models don’t obey [lots of] stability, extrapolability, and steerability assumptions,” Stephen Casper, a doctoral student at MIT and a co-author of the study, told TechCrunch. “It’s perfectly legitimate to point out that a model under certain conditions expresses preferences consistent with a certain set of principles. The problems mostly arise when we try to make claims about the models, opinions, or preferences in general based on narrow experiments.”

Casper and his fellow co-authors probed several recent models from Meta, Google, Mistral, OpenAI, and Anthropic to see to what degree the models exhibited strong “views” and values (e.g. individualist versus collectivist). They also investigated whether these views could be “steered” — that is, modified — and how stubbornly the models stuck to these opinions across a range of scenarios.

According to the co-authors, none of the models was consistent in its preferences. Depending on how prompts were worded and framed, they adopted wildly different viewpoints.

Casper thinks this is compelling evidence that models are highly “inconsistent and unstable” and perhaps even fundamentally incapable of internalizing human-like preferences.

“For me, my biggest takeaway from doing all this research is to now have an understanding of models as not really being systems that have some sort of stable, coherent set of beliefs and preferences,” Casper said. “Instead, they are imitators deep down who do all sorts of confabulation and say all sorts of frivolous things.”

Mike Cook, a research fellow at King’s College London specializing in AI who wasn’t involved with the study, agreed with the co-authors’ findings. He noted that there’s frequently a big difference between the “scientific reality” of the systems AI labs build and the meanings that people ascribe to them.

“A model cannot ‘oppose’ a change in its values, for example — that is us projecting onto a system,” Cook said. “Anyone anthropomorphising AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI […] Is an AI system optimising for its goals, or is it ‘acquiring its own values?’ It’s a matter of how you describe it, and how flowery the language you want to use regarding it is.

#MIT #study #finds #doesnt #fact #values