(Posting in a personal capacity unless stated otherwise.) I work at Coefficient Giving with a focus on helping humanity better navigate transformative AI, especially through US public policy. Formerly: co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher.
Not to be confused with the user formerly known as trevor1.
I agree with the general direction of this, but it seems to depend massively on the process by which the LLM text has reached your eyes.
At one extreme, a bot on social media, given some basic prompt and programmed to reply to random tweets, has basically zero content about the "mental elements" behind it, as you put it.
On the other, if someone writes "I asked an LLM to summarize this document, and upon closely reviewing it, I think it did a great job," this has lots of content about a human's mental elements. The human's caption is obviously testimony, but the quoted LLM text also seems pretty much like testimony to me.
(There are plenty of intermediate cases, e.g. someone writes "I asked an LLM to summarize this document, which I personally skimmed, and it seems roughly right to me but caveat lector.")
The entire point of heroic responsibility, as I understand it, and as is apparent in the original HPMOR context, is feeling responsibility for outcomes specifically when social expectations do not hold you responsible for them. I think the type of responsibility that the owner of a car dealership has for the profitability of said dealership is just called "responsibility."
I still think that the precise kind of optics considerations described and recommended in this post (and in other EA-ish circles) are subtly but importantly different from what those staffers are doing.
It's true that LessWrong readers would be doing a subtly but importantly different thing from what the staffers are doing. But the way it's different is that Congressional staffers of all political persuasions weigh these kinds of considerations much more intuitively and automatically, because they're pursuing careers in policy and politics in DC. LessWrong readers, by contrast, tend to be technical people, largely in the Bay Area, who might someday consider a career in policy and politics; they therefore need these considerations laid out explicitly, as would anyone considering a career pivot into an industry with very different norms.
No. I'm more saying that the act of carefully weighing up career capital / PR considerations, and then not donating to a Democrat based on a cost-benefit analysis of those considerations, feels to me like very stereotypical Democrat / blue-tribe behavior.
Strongly disagree with the implication that Republicans/conservatives don't carefully weigh up career capital and PR considerations when making decisions like this! The vast majority of elected Republicans, and even more of their staff, are about as strategic in this regard as their Democratic counterparts. Of course, the exceptions are much higher-profile, which I think could be producing an availability bias. (Again, notably, Hanania is not employed in the government.)
And further, that some people could have a visceral negative reaction to that kind of PR sensitivity more so than the donations themselves.
"Some people," sure. Federal government hiring managers? No.
Sure, but if he were the kind of person who would do that, he probably would not have gotten as popular as he is in the first place.
I mean, it depends on whether your goal is serving in the government or becoming a widely read Substacker.
And even then I'm not sure it's true; many, many media figures with huge followings on both sides of the aisle are hardcore partisans. See for example (most of) the hosting lineups of MSNBC and Fox. LessWrong is an extreme outlier in how much readers intentionally consume heterodox and disagreeable content; the vast majority of political media consumers trust and prefer to listen to their co-partisans.
Is the idea that Hanania is evidence that being very public about your contrarian opinions is helpful for policy influence? If so, that seems wrong.
That's not to say that nobody should be taking the loud contrarian approach; the world would be worse if everyone self-censored to the degree incentivized by DC policy careers. But I think people should be clear-eyed about the costs and benefits.
Interesting. Yeah, rather than "entirely driven," I guess I should say: it seems like the direct effects of suppressing information probably usually outweigh the second-order Streisand Effect, and the exceptions are more salient than the non-exceptions due to survivorship bias?
Maybe we should distinguish between the "Weak Streisand Effect" (in some cases, the act of attempting to suppress information adds to the virality of a story) and the "Strong Streisand Effect" (attempting to suppress information increases the virality of a story in expectation). WSE seems clearly true; SSE seems pretty unlikely on average, though it depends massively on the details.
Hot take: the Streisand Effect might be entirely driven by survivorship bias. It probably happens all the time that people delete stuff and the public never hears about it, but the rare exceptions are sufficiently ironic that they get a whole effect named after them!
Makes sense! Yeah, I can see that "that would be a problem" can easily be read as saying I don't think this incentive effect even exists in this case; as you're now saying, I meant "that would make the provision a problem, i.e. net-negative." I think that, conditional on having to say sufficiently detailed things about catastrophic risk (which I think SB 53 probably does require, but we'll see how it's implemented), the penalty for bad-faith materially false statements is net positive.
Other commenters have said most of what I was going to say, but a few other points in defense: