Using AI for Just 10 Minutes Can Make You Lazy and Dumb, Study Shows


Using AI chatbots for even just 10 minutes can negatively affect people’s thinking and problem-solving, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.

The researchers tasked people with solving a variety of problems, including simple fractions and reading comprehension exercises, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant that could solve the problems for them. When the AI assistant was suddenly taken away, those people were more likely to give up on a problem or simply guess at the answer. The research suggests that even brief reliance on AI can sap persistence and erode basic problem-solving skills.

“The findings don’t mean we should ban AI in education or the workplace,” says Michiel Bakker, an assistant professor at MIT who was involved in the research. “AI can help people perform better right now, and that can be useful. But we need to be very careful about what kind of help AI provides, and when.”

I recently met Bakker on the MIT campus. Originally from the Netherlands, he previously worked at Google DeepMind in London. He told me that a well-known story about the way AI could gradually weaken humans inspired him to think about whether the technology might already be eroding human abilities. The story makes for somewhat difficult reading, since it suggests that such decline is inevitable. That said, perhaps figuring out how AI can help people sharpen their mental abilities should be part of how models are aligned with human interests.

“It’s a question of cognition: of persistence, learning, and how people react when faced with a problem,” Bakker tells me. “We wanted to take the big concerns about humans’ long-term interaction with AI and study them in a test environment.”

The last finding seems particularly concerning, Bakker says, because a person’s willingness to work hard at solving problems is important for acquiring new skills and also predicts their ability to learn over time.

Bakker says it may be necessary to rethink how AI tools work so that, like a good human teacher, models sometimes prioritize a person’s learning over simply solving the problem for them. “Systems that provide direct answers can have very different long-term effects from systems that coach, train, or challenge the user,” Bakker says. He admits, however, that striking this kind of “paternalistic” balance can be difficult.

AI companies are already thinking about the effects their models can have on users. Sycophancy, or how readily models flatter and agree with users, is something OpenAI has had to dial back in new GPT releases.

Putting too much faith in AI can backfire when the tools don’t behave as expected. Agentic AI systems, which perform complex tasks on their own, are unpredictable and can compound errors. It makes you wonder what Claude Code and Codex are doing to the skills of the coders who sometimes have to clean up the mistakes they make.

I recently learned about the dangers of offloading critical thinking to AI myself. I’ve been using OpenClaw (with Codex built in) as a daily assistant, and I’ve found it to be very good at troubleshooting issues on Linux. Recently, when my Wi-Fi connection dropped, my AI assistant told me to run a few commands that stopped the driver from talking to the Wi-Fi card. The result was a machine that refused to boot no matter what I did.

Perhaps, instead of trying to fix the problem for me, OpenClaw should have stopped to teach me how to fix it myself. I might still have a working computer, and a more capable brain, as a result.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.



