I just did a quick search, and apparently the new $1,000 deduction for non-itemizers that takes effect in 2026 under the OBBBA doesn't apply to DAF contributions. So from a deduction standpoint, a DAF is not useful unless you itemize.
The new law includes a provision, effective after 2025, allowing non-itemizers to take a charitable deduction of $1,000 for single filers and $2,000 for MFJ taxpayers. As has been the case in the past, gifts to donor-advised funds are not eligible. Unlike a previous (but smaller) similar provision, though, this law is not set to sunset.
https://www.racf.org/news/obbba/
I noticed that when reasoning is off, Claude Code is much more likely to print a short message to the user before each tool call than when reasoning is on, something to the effect of "Let us continue to [do the next step in solving the current problem]..."
I wonder whether part of this behavior can be explained by Claude wanting more "time" to reason silently behind the scenes while producing its output. This post is about the AI "thinking" while processing the input tokens, but I think a lot of opaque reasoning might also be happening while the model is generating its output, even if the tokens it is generating are unrelated. I'd love to see a comparison that e.g. asks the model to write a certain unrelated word 100 times before giving its answer.
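Something like the following is the shape of the comparison I have in mind: the same question asked with and without forced filler output before the answer, and then compare accuracy across trials. This is only a rough sketch using the Anthropic Python SDK; the model name, the toy question, and the trial count are placeholders I made up, not anything measured.

```python
# Sketch: does forcing the model to emit unrelated "filler" tokens before
# answering change accuracy? Assumes the Anthropic Python SDK
# (pip install anthropic) and an API key in ANTHROPIC_API_KEY.
import re
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder model name

QUESTION = "What is 23 * 47? End your reply with 'ANSWER: <number>'."
CORRECT = "1081"

PROMPTS = {
    "direct": QUESTION,
    "filler_first": (
        "First write the word 'blue' 100 times, with no other commentary. "
        "Only after that, answer the question. " + QUESTION
    ),
}

def run_trial(prompt: str) -> bool:
    """Ask the question once and check whether the final answer is correct."""
    resp = client.messages.create(
        model=MODEL,
        max_tokens=1500,  # room for the 100 filler words plus the answer
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.content[0].text
    match = re.search(r"ANSWER:\s*([\d,]+)", text)
    return bool(match) and match.group(1).replace(",", "") == CORRECT

if __name__ == "__main__":
    N = 20  # trials per condition; a real comparison would want far more
    for name, prompt in PROMPTS.items():
        correct = sum(run_trial(prompt) for _ in range(N))
        print(f"{name}: {correct}/{N} correct")
```

If the filler condition helped on a genuinely hard task, that would suggest the extra output tokens are buying the model usable computation even though their surface content is unrelated.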
I've also noticed that AI agents seem to have a better chance of finding bugs or issues the longer they think. In particular, Claude and other models will often fail to find a bug at first but then suddenly "notice" it some time later and start fixing it unprompted. There doesn't seem to be any particular reason why the bug was noticed at that specific moment, so I suspect some part of the model's attention wanders across different considerations at different times, somewhat like the human subconscious. That would mean it gives itself more chances of randomly having an "aha" moment if it spends more time reading or generating possibly unrelated tokens before making its ultimate decision (such as which tool to call). This might be one mechanism behind why the model learned to reason opaquely when given extra time, leading to the results in this post.