
True, but not convincing. They have been pretty consistent in their concern for America/Americans above others. E.g., in their latest statement, regarding fully autonomous killer weapons: "We will not knowingly provide a product that puts America’s warfighters and civilians at risk." Now, one could argue that I am being insufficiently generous, but this wording sure makes it sound like the only civilians they are concerned for are American civilians, in the context of providing autonomous killer weapons to the American DoW.
Why couldn't a democratic system of ownership and control implement those safeguards bottom up?
Is this actually misalignment? It seems they are planning to roll out 'adult mode' fairly soon, so I doubt they've put much effort into eliminating this kind of behavior.
Of course it is plausible, but there is seemingly no evidence supporting the claim.
That research is from August. It seems much more likely to me that they've just chosen to switch focus to more scalable (i.e., less expensive) approaches than that they've scaled this up since then and already found conclusive conflicting results.
Some of the phrasing also doesn't give the impression that they've tried very hard to make it work:
"We expect this to become even more of an issue as AIs increasingly use tools" -> phrased as a prediction, not based on evidence or current state.
Applying filtering to tool use "wasn't enough assurance against misuse"? What does that even mean? Are we demanding more of filtering than other approaches now?
"We could have made more progress here with more research effort, but it likely would have required..." -> didn't try, another prediction
Didn't mention anything about what caused filtering to suddenly become less effective. Why?
Private American companies seem like the bigger risk from my perspective. For example, many expect Anthropic/OpenAI to IPO this year, but if AGI is expected to be priced into public markets within ~2 years, that seems like a very small window during which the leading AGI companies would be unable to secure private funding. And surely they won't IPO if they can lock in sufficient funding privately, right? Plus there are all the other private AI companies.
I don't share that intuition, from a few angles.
I think a 10x larger bet would be more than 10x as suspicious. There are more than 10x as many people who would place an 80k bet on low-to-medium conviction as would place an 800k one.
Also, liquidity would dry up quickly once liquidity providers spotted the obvious insider, so the reward would be much less than 10x.
Also, I see the disutility from suspicion as closer to a step function: you really do not want suspicion to rise to a level that would warrant a serious investigation. That threshold is kind of binary, and closely related to the risk of traders identifying you as an insider beforehand; I would think 80k is already pretty close to it (though I'm not familiar with how liquid this market was).
You can delete YouTube videos from your watch history if you don't want them used for recommendations. I do this. It would be nice to have an easier way to switch between preference profiles than switching accounts, though; that seems like a hassle.
I agree with your assessment of what the problem is, but I don't agree that it is the main point of this post. The majority of the post is spent asserting how 'ordinary', smart, and high-functioning this victim is, and how we can therefore conclude that everyone, including you, is vulnerable, and that AI psychosis in general is a very serious danger. Its being suppressed is only mentioned in passing at the start of the post.
I also wonder what exactly is meant by AI psychosis. I mean, my co-worker is allowed to have an anime waifu but I'm not allowed to have a 4o husbando?
Imagine you were to provide full context. Would that affect how the recipient feels about the message? If so, they deserve to have that context. Reaching out to friends for advice is very different from an AI reaching out to you for approval of its message: you didn't initiate, you didn't provide any input, and you didn't put in any effort beyond a single click.
Do you think they would stop the US from sharing its mass surveillance of British citizens with the British government? Or allow another country to use Claude to conduct mass surveillance of Americans?
It seems pretty clearly no in both cases from my perspective.