2001zhaozhao
2001zhaozhao has not written any posts yet.

I noticed that Claude Code is much more likely to print a short message to the user before each tool call when reasoning is off than when it's on, something to the effect of "Let us continue to [do the next step in solving the current problem]..."
I wonder whether part of this behavior can be explained by Claude wanting more "time" to reason silently behind the scenes while producing its output. This post is about the AI "thinking" while processing the input tokens, but I think a lot of opaque reasoning might also be happening while the model is generating its output, even if it is generating unrelated tokens...
I just did a quick search and apparently the new $1,000 deduction for non-itemizers that comes into effect in 2026 under the OBBBA doesn't apply to DAF contributions. So a DAF is not useful unless you itemize.
The new law includes a provision, effective after 2025, allowing non-itemizers to take a charitable deduction of $1,000 for single filers and $2,000 for MFJ taxpayers. As has been the case in the past, gifts to donor-advised funds are not eligible. Unlike a previous (but smaller) similar provision, though, this law is not set to sunset.
https://www.racf.org/news/obbba/
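To make the numbers concrete, here's a minimal sketch of the rule as quoted above (the function and example amounts are my own illustration, not tax advice):

```python
# Minimal sketch of the post-2025 OBBBA charitable deduction for non-itemizers,
# as described in the quote above. Illustration only, not tax advice.

def non_itemizer_charitable_deduction(gift: float, filing_status: str, to_daf: bool) -> float:
    """Deduction a non-itemizer could claim for a cash gift, per the quoted rule."""
    if to_daf:
        return 0.0  # gifts to donor-advised funds are not eligible
    cap = 2000.0 if filing_status == "MFJ" else 1000.0  # $2,000 MFJ, $1,000 single
    return min(gift, cap)

# A single non-itemizer giving $1,500 directly to a public charity can deduct $1,000;
# the same gift routed through a DAF gives a $0 deduction.
print(non_itemizer_charitable_deduction(1500, "single", to_daf=False))  # 1000.0
print(non_itemizer_charitable_deduction(1500, "single", to_daf=True))   # 0.0
```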
I find the part about extreme specialization very interesting, and potentially applicable to training AI agent systems (from an outsider's perspective). Today's instruction-following LLMs could in theory cooperate easily, since they don't yet pursue goals outside of their prompt: we can just prompt them to work together and they will do so without hesitation. So it sounds like we can get a lot of benefit from specialization if we can train them to cooperate effectively.
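Concretely, "just prompt them to work together" could look something like the minimal sketch below, where `call_llm` is a hypothetical stand-in for whatever completion API you use (the roles and prompts are assumptions for illustration, not a real system):

```python
# Minimal sketch of getting two specialized, instruction-following LLMs to cooperate
# purely via prompting. `call_llm` is a hypothetical placeholder, not a real library call.

def call_llm(system_prompt: str, user_message: str) -> str:
    """Stand-in for an actual LLM API call; wire this up to your provider of choice."""
    raise NotImplementedError

def solve_with_specialists(task: str) -> str:
    # Specialist 1: a "planner" prompted to decompose the task into steps.
    plan = call_llm(
        system_prompt=(
            "You are a planning specialist. Break the task into concrete steps "
            "for an implementation specialist to carry out."
        ),
        user_message=task,
    )
    # Specialist 2: an "implementer" prompted to follow the planner's output.
    return call_llm(
        system_prompt=(
            "You are an implementation specialist. Carry out the plan you are "
            "given and report the result."
        ),
        user_message=f"Task: {task}\n\nPlan from the planner:\n{plan}",
    )
```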
Today's frontier LLMs are quite general-purpose and benefit from being so, and I would guess that's both for economic reasons during training (one big frontier model outperforms many smaller specialized models for the same training...