LESSWRONG

tlevin

(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil's resources to improve the governance of AI with a focus on avoiding catastrophic outcomes. Formerly: co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher.

Not to be confused with the user formerly known as trevor1.

Comments

Considerations around career costs of political donations
tlevin · 5d

> I still think that the precise kind of optics considerations described and recommended in this post (and other EA-ish circles) are subtly but importantly different from what those staffers are doing.

It's true that LessWrong readers would be doing a subtly but importantly different thing from what the staffers are doing. But the difference is that Congressional staffers, of all political persuasions, run through these kinds of considerations much more intuitively and automatically, because they're pursuing careers in policy and politics in DC. LessWrong readers tend to be technical people, largely in the Bay Area, who might someday consider a career in policy and politics, so they need these considerations laid out explicitly, as would anyone considering a pivot into an industry with very different norms.

Considerations around career costs of political donations
tlevin · 5d

> No. I'm more saying that the act of carefully weighing up career capital / PR considerations, and then not donating to a democrat based on a cost-benefit analysis of those considerations, feels to me like very stereotypical democrat / blue-tribe behavior.

Strongly disagree with the implication that Republicans/conservatives don't carefully weigh up career capital and PR considerations when making decisions like this! The vast majority of elected Republicans, and even more of their staff, are about as strategic in this regard as their Democratic counterparts. Of course, the exceptions are much higher-profile, which I think could be creating an availability bias. (Again, notably, Hanania is not employed in the government.)

> And further, that some people could have a visceral negative reaction to that kind of PR sensitivity more so than the donations themselves.

"Some people," sure. Federal government hiring managers? No.

> Sure, but if he were the kind of person who would do that, he probably would not have gotten as popular as he is in the first place.

I mean, it depends on whether your goal is serving in the government or becoming a widely read Substacker.

And even then I'm not sure it's true; many, many media figures with huge followings on both sides of the aisle are hardcore partisans. See for example (most of) the hosting lineups of MSNBC and Fox. LessWrong is an extreme outlier in how much readers intentionally consume heterodox and disagreeable content; the vast majority of political media consumers trust and prefer to listen to their co-partisans.

Considerations around career costs of political donations
tlevin · 5d

Is the idea that Hanania is evidence that being very public about your contrarian opinions is helpful for policy influence? If so, that seems wrong:

  • As you said, Hanania would almost surely not get appointed to an actual position by either administration.
  • To the extent that he has influence/reach on the right, I doubt he has more of it because he pivoted to strong criticisms of Trump and the MAGA-sphere. I would guess instead that this pivot has been really costly to his influence on the right, and that if he had self-censored, he'd be more influential.
  • Also, even if this were the case, it seems unreliable to update more on the evidence from one unusual individual than on the many cases (as noted in this post) of people losing out on jobs because of their donations and publicly expressed opinions.

That's not to say that nobody should be doing the loud contrarian approach; the world would be worse if everyone were self-censoring to the degree incentivized by DC policy careers. But I think people should be clear-eyed about the costs and benefits.

tlevin's Shortform
tlevin · 5d

Interesting. Yeah, rather than "entirely driven," I guess I should say: it seems like the direct effects of suppressing information probably usually outweigh the second-order Streisand Effect, and the exceptions are more salient than the non-exceptions due to survivorship bias?

tlevin's Shortform
tlevin · 5d

Maybe we should distinguish between the "Weak Streisand Effect" (in some cases, the act of attempting to suppress information adds to the virality of a story) and the "Strong Streisand Effect" (attempting to suppress information increases the virality of a story in expectation). WSE seems clearly true; SSE seems pretty unlikely on average, though it depends massively on the details.

tlevin's Shortform
tlevin · 6d

Hot take: the Streisand Effect might be entirely driven by survivorship bias. Probably happens all the time that people delete stuff and then the public never hears about it, but the rare exceptions are sufficiently ironic that they get a whole effect named after them!

What SB 53, California’s new AI law, does
tlevin · 24d

Makes sense! Yeah, I can see that "that would be a problem" can easily be read as saying I don't think this incentive effect even exists in this case; as you're now saying, I meant "that would make the provision a problem, i.e. net-negative." I think that, conditional on having to say sufficiently detailed things about catastrophic risk (which I think SB 53 probably does require, but we'll see how it's implemented), the penalty for bad-faith materially false statements is net positive.

What SB 53, California’s new AI law, does
tlevin · 26d

If that were the only provision of the bill, then yes, that would be a problem, but the bill requires them to publish summaries of (1) their up-to-date framework for assessing and mitigating catastrophic risks and (2) their assessments of catastrophic risks for specific models.

What SB 53, California’s new AI law, does
tlevin · 26d

There's an exception for statements "made in good faith and reasonable under the circumstances"; I would guess it's pretty hard to prove the contrary in court?

Models Don't "Get Reward"
tlevin · 2mo

Coming back >2.5 years later to say this is among the most helpful pieces of AI writing I've ever read -- I remember it being super clarifying at the time, and I still link people to it, cite it in conversation, and use similar analogies. (Even though I now also caveat it with "...but also maybe really sophisticated agents will actively seek it, because the training environment might reward it, and maybe they'll 'experience' something like fear/pain/etc for things correlated with negative reward, if they experience things...") Thank you for writing it up!!

Posts (karma · title · age · comments)

106 · What SB 53, California’s new AI law, does · 26d · 12
112 · [linkpost] One Year in DC · 5mo · 5
46 · Skepticism towards claims about the views of powerful institutions · 8mo · 2
60 · A case for donating to AI risk reduction (including if you work in AI) · 11mo · 2
57 · How the AI safety technical landscape has changed in the last year, according to some practitioners · 1y · 6
4 · tlevin's Shortform · 1y · 76
78 · EU policymakers reach an agreement on the AI Act · 2y · 7
11 · Notes on nukes, IR, and AI from "Arsenals of Folly" (and other books) · 2y · 0
28 · Apply to HAIST/MAIA’s AI Governance Workshop in DC (Feb 17-20) · 3y · 0
60 · Update on Harvard AI Safety Team and MIT AI Alignment · 3y · 4