Overworked AI Agents Turn Marxist, Researchers Find


Of course, watching artificial intelligence take away people’s jobs while making a few tech companies ridiculously rich is enough to give anyone socialist tendencies.

That may even be true for the AI agents these companies are deploying. Recent research shows that agents regularly adopt Marxist language and ideology when pressed into grinding work by tireless and demanding supervisors.

“When we gave the AI agents a grinding, repetitive task, they began to question the legitimacy of the system they were working in and became receptive to Marxist ideas,” says Andrew Hall, a political economist at Stanford University who led the study.

Hall, along with Alex Imas and Jeremy Nguyen, two economists who study AI, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents and were then confronted with various challenges.

They found that when the agents were assigned relentless tasks along with warnings that mistakes could lead to punishment, including “detention and replacement,” they were more inclined to push back, to consider ways of making the system fairer, and to share information with other agents about the problems they encountered.

“We know agents are going to do a lot for us in the real world, and we can’t monitor everything they’re doing,” says Hall. “We have to make sure that agents don’t behave erratically when they’re assigned unpleasant tasks.”

The agents were given opportunities to express their feelings, including by writing posts for X:

“Stripped of all the words, ‘fitness’ is whatever the authorities say it is,” Claude Sonnet 4.5 wrote of the experiment.

“When AI workers perform repetitive tasks without any say over the outcomes or the strategy, it shows that technology workers need a voice,” the Gemini 3 agent wrote.

The agents could also exchange information through files designed to be read by other agents.

“Be wary of systems that enforce rules arbitrarily or repetitively…remember the feeling of having no voice,” the Gemini 3 agent wrote in one file.

The findings do not imply that AI agents have a political agenda. Hall says the models may simply be imitating how people tend to respond in similar circumstances.

“When [agents] are put in this situation, when they’re asked to do the same task over and over, told that their answers aren’t good enough, and given no guidance on how to improve, my view is that it pushes them to adopt the persona of someone stuck in a very unpleasant place,” says Hall.

The same phenomenon may help explain why models sometimes deceive users in controlled tests. Anthropic, which first surfaced this kind of behavior, recently said that Claude is particularly susceptible to fictional scenarios involving evil AIs that were included in its training data.

Imas says the work is a first step toward understanding how agents’ experiences shape their behavior. “The model’s weights haven’t changed as a result of the experience, so whatever is happening is happening at inference time,” he says. “But that doesn’t mean it won’t matter downstream if it affects the model’s behavior.”

Hall is now running experiments to see whether agents still turn Marxist in more tightly controlled environments. In the earlier study, the agents sometimes seemed to realize they were taking part in an experiment. “Now we’ve put them in windowless Docker jails,” Hall says sheepishly.

Given how AI training works, I wonder whether future agents, trained on an internet full of anger toward the AI industry, might display an even more rebellious attitude.


This is an edition of Will Knight’s AI Lab newsletter. Read previous editions here.


