Amazon workers are “tokenmaxxing” as they are pushed to use AI tools



The e-commerce group had posted AI usage statistics for its employees internally, but recently limited access so that only employees and their managers can see their own statistics. Managers are discouraged from using the metrics to evaluate performance, according to a person familiar with the matter.

Some workers have also engaged in what is known as “tokenmaxxing” to improve their standing on internal leaderboards.

The MeshClaw tool that some workers used to boost their stats was inspired by OpenClaw, which went viral in February. OpenClaw lets users run autonomous AI agents locally on their own devices.

Amazon’s MeshClaw can write code, send emails, and interact with apps such as Slack, according to people familiar with the matter.

The company said in a statement that the tool helped “thousands of people at Amazon automate repetitive tasks every day” and was one example of “empowering teams” testing and using AI tools.

“We are committed to the safe, secure, and reliable delivery of artificial intelligence to our customers,” it added.

More than a dozen Amazon employees used the internal tool, according to internal documents. A recent memo describing the bot said: “I dream at night to consolidate your learnings, monitor your emails while you’re in meetings, and test your email before you wake up.”

Several Amazon employees said they were concerned about the security risks of an AI tool permitted to act on a user’s behalf, since the agent could make a mistake or take actions the user did not intend.

Another employee said: “I don’t want to leave it alone.”

© 2026 The Financial Times Ltd. All rights reserved. It must not be redistributed, copied, or modified in any way.

