I also recommend using this medical framing: if you have depression, you take antidepressants as a medical treatment for depression.
I have about 5 days of water saved for me and my partner. I figure 2 days without water could happen in a normal disaster scenario (something like 9/11 ten blocks away, idk), but getting out into weeks is beyond my prepper aspirations. Does that track?
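For a rough sanity check on those numbers, here's a minimal sketch assuming the commonly cited preparedness guideline of about one gallon of water per person per day; the guideline and the two-person household figure are assumptions for illustration, not something stated above:

```python
# Rough water-storage math, assuming ~1 gallon per person per day
# (a commonly cited preparedness guideline; this rate is an assumption).
GALLONS_PER_PERSON_PER_DAY = 1.0
people = 2           # me + partner
days_of_coverage = 5

stored_gallons = GALLONS_PER_PERSON_PER_DAY * people * days_of_coverage
print(f"~{stored_gallons:.0f} gallons covers {days_of_coverage} days for {people} people")
# -> ~10 gallons covers 5 days for 2 people
```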
It's clicking for me how legibility and Goodhart relate. What if an elite's window into society is Goodhart-corrupted? This can happen without AI if they're corrupt, or just out of touch. It does seem like a sycophantic AI can do something genuinely novel to this situation. If legibility is low enough, and AI is what makes the system legible, an outer alignment failure might lead to a successful, permanent deception of the user.
Also, it's sort of crazy how AI, more than any other STEM field I've worked in, collapses the gap from "very out there" to "practical." If you're using AI for observability, this is a non-negotiable problem you have to solve. Claude Code does answer-faking today; that sort of thing can happen in an observability system.
I think those "spaces" debate definitely invoke why the division exhibits in the first place. For example when someone is worried about a trans person attacking someone in one bathroom or the other, they very clearly are invoking the fact that the bathrooms are segregated to reduce violence. In girls' sports, it's always about competition, body size, some sort of social benefit of promoting women, bathroom privacy. I just don't think you've diagnosed the right problem here. Like as soon as someone gets to the word "because" in their argument about trans rights, they talk about why that gender segregation exists.
You usually realize you're ranting mid-way through the rant. Or possibly, you're posting to a reddit sub with the word "rants" in its name.
A lot of this doesn't pass the smell test.
Where I agree with you:
For what it's worth I was vegan about 10 years and quit about 2 years ago. I have been more neurotic since then, easily.
One quirk of the political donation problem is that activists respond to negative messaging, but the broader base -- people who may donate/volunteer/vote based on the candidate, not the party -- responds to positive messaging. I think it's fair to say this creates a "poisoning the well" problem: the fundraising emails are doom, the activists respond to doom, but the base is cut off from the party. Relevant to this forum, something like "ensure AI benefits everyone" might message better to the base, while "prevent AI from killing everyone" might message better to the activists.
My communication rule is: "using lols to soften your too-long message doesn't really work, and I can't explain why."