Super excited to work even more closely with @jarredsumner and the Bun team
@bcherny
-
Summarization as prompt injection risk mitigation strategy
Summarization is one thing we do to reduce prompt injection risk. Are you running into specific issues with it?
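A conceptual sketch of the mitigation described above: rather than placing fetched, untrusted content directly into the prompt, pass it through a summarization step first, so injected instructions are less likely to survive verbatim. Everything here is illustrative, not Claude Code's actual pipeline; summarize is a stand-in for a model call.

```python
# Illustrative sketch only -- not Claude Code's actual implementation.
# The idea: untrusted text goes through a summarizer before it reaches
# the prompt, so verbatim injected instructions are less likely to
# survive into the model's context.

def summarize(untrusted_text: str, max_chars: int = 200) -> str:
    # Stand-in for a real model-based summarizer, which would produce
    # a neutral paraphrase rather than a simple truncation.
    return untrusted_text[:max_chars]

def build_prompt(user_question: str, fetched_page: str) -> str:
    """Assemble a prompt using only the summarized form of fetched content."""
    safe_context = summarize(fetched_page)
    return (
        "Context (summarized from an untrusted source):\n"
        f"{safe_context}\n\n"
        f"Question: {user_question}"
    )
```

The key design point is that the raw fetched page never appears in the final prompt, only its summarized form.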
-
Claude Code WebFetch Tool Adds Markdown Accept Header
In the next version of Claude Code, Claude's WebFetch tool automatically adds Accept: text/markdown, */* to requests, which helps docs sites serve token-efficient documentation.
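A minimal sketch of what a docs site could do with that header: check whether the client's Accept header lists text/markdown and serve the Markdown form when it does. This is a hypothetical server-side helper, and the parsing is simplified (no q-values).

```python
# Hypothetical sketch of server-side content negotiation for a docs site
# receiving WebFetch's Accept header. Simplified: q-values are ignored.

def negotiate(accept_header: str, markdown_available: bool = True) -> str:
    """Return the media type a docs server could serve for this request."""
    # Split "text/markdown, */*" into its media ranges, dropping parameters.
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if markdown_available and "text/markdown" in accepted:
        return "text/markdown"  # token-efficient form for LLM clients
    return "text/html"          # default for browsers
```

A browser sending a typical Accept header would still get HTML; only clients that explicitly list text/markdown get the Markdown form.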
-
Guest Lecture at Stanford CS 146S on AI Topics
Had a blast guest lecturing at Stanford CS 146S today. Thanks for the invite @mihail_eric!
-
Claude Code Shows Tool Call Errors for Better Debugging
Tool call errors happen with every model provider. In Claude Code we show errors while others hide them, so it may feel like they happen more often. The reason we show them is that Claude Code is the same tool we use internally at Anthropic, and seeing the errors helps us debug.
-
Sonnet 4.5 Recommended for Superior Coding Intelligence
We recommend Sonnet 4.5 for everything: you get higher rate limits with it, and it's more intelligent for coding tasks. Re: hostile, specific examples would be helpful to debug.
-
AI Output Limits Increased Based on User Feedback
Yep, just output. We used to have a lower maximum output limit, but people asked for a higher one.
-
Comparing Sonnet and Opus AI Models: Performance Differences
Are you using Sonnet or Opus? And when you say dumber, what specifically do you mean?
-
SDK Auto-Compaction and Token Usage Statistics
We auto-compact for you when using the SDK, so you don't need to do it yourself. To gather stats yourself, read assistant_message.usage.
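A hedged sketch of tallying usage across a session. The field names (input_tokens, output_tokens) follow the shape of the Anthropic API's usage object, but the Message and Usage classes below are stand-ins for illustration, not the SDK's actual types.

```python
# Illustrative sketch: summing token usage from assistant messages.
# Usage/AssistantMessage are stand-in types mirroring the API's usage
# object shape; real code would read these fields off SDK responses.
from dataclasses import dataclass

@dataclass
class Usage:
    input_tokens: int
    output_tokens: int

@dataclass
class AssistantMessage:
    usage: Usage

def total_tokens(messages: list[AssistantMessage]) -> tuple[int, int]:
    """Return (total input tokens, total output tokens) for a session."""
    ins = sum(m.usage.input_tokens for m in messages)
    outs = sum(m.usage.output_tokens for m in messages)
    return ins, outs

session = [AssistantMessage(Usage(1200, 300)), AssistantMessage(Usage(2500, 450))]
print(total_tokens(session))  # (3700, 750)
```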
-
Rate Limits and Token Compaction Bugs in API Systems
If you're seeing lower rate limits than what we publish, or auto-compaction triggering earlier than ~155k tokens, that's a bug. Use /bug to report it.
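The expected behavior above can be stated as a simple check: compaction should only fire once the context reaches roughly the 155k-token mark. This is an illustrative sketch of that invariant, not Claude Code's actual compaction logic.

```python
# Illustrative invariant, not Claude Code's real implementation:
# auto-compaction should not trigger below roughly 155k context tokens.
COMPACT_THRESHOLD = 155_000  # approximate figure from the note above

def should_compact(context_tokens: int) -> bool:
    """True once the context is large enough that compaction is expected."""
    return context_tokens >= COMPACT_THRESHOLD

print(should_compact(100_000))  # False: compacting here would be a bug
print(should_compact(160_000))  # True
```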