Claude Code’s product lead on usage limits, transparency, and staying “lean.”



We don’t find that it makes a measurable difference in performance, but we’ve made Claude Code extensible enough that if you need a plugin that does that, it exists and you can integrate it. What we’ve found is that Claude Code is great at producing high-quality code without extra tooling bolted onto the codebase.

Ars: The question is less about the quality of the code than how you get there, right? Because, again, people get really frustrated with the usage limits. Sometimes people spin up some kind of LLM tooling and find it carries unexpected hidden costs. Is that the kind of thing showing up in your data? Do you have data telling you that isn’t the way to go?

Wu: Going by our evals, we don’t see any measurable change. And I think we lean heavily toward shipping a thin, minimal harness and letting developers add their own tools if they want to. So unless a tool significantly improves functionality or token efficiency, we choose not to ship it.

I think token efficiency is always top of mind for us because we want to give people the maximum amount of intelligence per token, so we’re always trying to reduce token usage, but getting that right is harder than I’d like.

For us, the most important thing is preserving intelligence, so we only ship something if we feel it makes the agent smarter, because that’s our north star, not token counts.

Ars: For some users, the usage limits might be easier to accept if they were more visible. But at the same time, my sense is that real transparency about token usage — “this task cost this much because you did this instead of that” — is difficult to deliver.

I assume you’ve looked at ways of surfacing that to users. What did you find when you tried?

Wu: We’ve had a lot of questions about this, like, “Hey, my usage limit got burned through really quickly — where did it go?” And I think that’s a fair question, and we need to make it clearer. It’s hard to know.

So when people have these complaints, we pick a few of them, we jump on a call, and we dig in together, because all your transcripts are stored locally — you have all the data on your machine about every token you’ve used…
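Since transcripts live on the user’s own machine, that kind of audit can in principle be done locally. A minimal sketch of summing token usage from a session log, assuming the log is a JSONL file (for example under `~/.claude/projects/`) whose entries carry API-style `usage` objects — the path and field layout here are assumptions for illustration, not a documented schema:

```python
import json

def total_tokens(jsonl_lines):
    """Sum token counts from transcript lines.

    Assumes each line is a JSON object that may carry a message with a
    `usage` dict shaped like an API response:
    {"input_tokens": ..., "output_tokens": ...}.
    Lines without usage data (e.g. summaries) are skipped.
    """
    totals = {"input_tokens": 0, "output_tokens": 0}
    for line in jsonl_lines:
        usage = json.loads(line).get("message", {}).get("usage", {})
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

# Synthetic transcript lines standing in for a real session file.
sample = [
    '{"message": {"usage": {"input_tokens": 1200, "output_tokens": 300}}}',
    '{"message": {"usage": {"input_tokens": 4500, "output_tokens": 850}}}',
    '{"type": "summary"}',
]
print(total_tokens(sample))
```

In practice you would read the lines from the session file instead of a hard-coded list; the aggregation logic stays the same.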

We saw two main trends. First, people have these long sessions: they step away for two hours, they come back, and the cache has expired — and once the cache has expired, the next query is much more expensive to send. So we started showing a notification that says, “Hey, your cache has expired; run /clear if you want to start a fresh session.” It’s just a reminder that resuming is expensive. Also, when you run /cost, you’ll see, “Hey, this session cost a lot because your cache expired.”
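The cost cliff Wu describes can be sketched with a back-of-envelope calculation. The constants below are illustrative assumptions, not quoted prices — the general shape of prompt-caching pricing is that cache reads are billed at a steep discount to base input, while re-writing an expired cache carries a premium:

```python
# Illustrative pricing assumptions (not quoted figures):
BASE_INPUT_PER_MTOK = 3.00  # dollars per million input tokens
CACHE_READ_MULT = 0.10      # reading still-warm cached context
CACHE_WRITE_MULT = 1.25     # re-writing the cache after it expires

def next_query_cost(context_tokens: int, cache_warm: bool) -> float:
    """Dollar cost of re-sending the session context on the next query."""
    mult = CACHE_READ_MULT if cache_warm else CACHE_WRITE_MULT
    return context_tokens / 1_000_000 * BASE_INPUT_PER_MTOK * mult

context = 100_000  # a long session's accumulated context
warm = next_query_cost(context, cache_warm=True)
cold = next_query_cost(context, cache_warm=False)
print(f"warm cache: ${warm:.3f}, expired cache: ${cold:.3f}, "
      f"ratio: {cold / warm:.1f}x")
```

Under these assumed multipliers, the same 100k-token context costs over ten times more to send once the cache has gone cold — which is why a session resumed after a long break can burn through a usage limit so quickly.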


