Vaniver
10 · Vaniver's Shortform (Ω) · 6y · 49 comments
Decision Analysis
Three Missing Cakes, or One Turbulent Critic?
Vaniver · 2d

Hmm. I HMCFed after that, I think, but I don't remember why I didn't talk much about it publicly. (Also I think there was a CFAR postmortem that I don't recall getting written up and discussed online, tho there was lots of in-person discussion.)

If Anyone Builds It, Everyone Dies: Advertisement design competition
Vaniver · 6d

It does in #1 but not #4--I should've been clearer which one I was referring to.

If Anyone Builds It, Everyone Dies: Advertisement design competition
Vaniver · 7d

I like "this is not a metaphor".

I think referring to Emmett as "former OpenAI CEO" is a stretch? Or, like, I don't think it passes the onion test well enough. 

If Anyone Builds It, Everyone Dies: Advertisement design competition
Vaniver · 7d

> However, I still think it is a good idea to create a sense of urgency, both in the ad and in books about AI safety.

Personally, I would rather stake my chips on 'important' and let urgent handle itself. The title of the book is a narrow claim--if anyone builds it, everyone dies--with the clarifying details conveniently swept into the 'it'. Adding more inferential steps makes it more challenging to convey clearly and more challenging to hear (since each step could lose some of the audience).

There are some further complicated arguments about urgency--you don't want to have gone out on too much of a limb about saying it's close, because of costs when it's far--but I think I most want to make a specialization-of-labor argument, where it's good that the AI 2027 people, who are focused on forecasting, are making forecasting claims, and good that MIRI, who are focused on alignment, are making alignment difficulty / stakes claims.

> I see your point about how a weak claim can water down the whole story. But if I could choose between 100 people convinced that ASI would kill us all, but with no sense of urgency, and 50 or even 20 who believe both in the danger and that we must act immediately, I'd choose the latter.

Hmm, I think I might agree with this value tradeoff, but I don't think I agree with the underlying prediction of what the world is offering us.

I also think MIRI has tried for a while to recruit people who can make progress on alignment and thought it was important to start work now, and the current push is on trying to get broad attention and support. The people writing blurbs for the book are just saying "yes, this is a serious book and a serious concern" and not signing on to "and it might happen in two years"--tho probably some of them also believe that--and I think that gives enough cover for the people who are acting on two-year timelines to operate.

The Cult of Pain
Vaniver · 10d

> And ultimately the only thing that matters here is power consumption,

Why? I think this is measuring exterior temperature, not the average of exterior and interior temperature. If cooling is set to a comfortable temperature and only run on heat wave days, then you should expect the heat wave days to also have a boost from the thermal mass of interior temperature, and there could be other indirect effects.

[Like, I would buy that power consumption dominates. But the only thing? Seems premature.]

> I would be surprised if AC ends up more than 50% of power consumption

It does in Texas during heat waves (focusing only on peak demand, which seems fair). Texas is, of course, hotter than Europe (and places even hotter than Texas have even higher cooling costs).

Support for bedrock liberal principles seems to be in pretty bad shape these days
Vaniver · 18d

I think I have a somewhat different diagnosis.

For example, take 'property rights'. As a category, this mixes together lots of liberal and illiberal things: houses, hammers, and taxi medallions are all 'property' but the first two are productive capital and the last one is a pretty different form of capital. I'd go so far as to say NIMBYism is mostly downstream of an expansive view of property rights--my ownership of my house is not just the volume and physical objects on it, but also more indirect things like the noises and smells that impinge on it and the view out from it.

I think the core problem for classical liberalism in the 2020s is something like "figuring out a modern theory of regulation". That is, increased population density has increased the indirect costs of action (more people now see and are inconvenienced by your ugly building), and increased economic sophistication has increased a bunch of burdens (more complicated varieties of products require more complicated regulations), but the main answers for how to deal with this have come from anti-liberals. Like, consider Wolf Ladejinsky, who helped influence land reform in Asia because he understood that the popularity of communism came from (largely correct!) hatred of landlords, and that free enterprise also does not like landlords strangling the economy. I think the returns to figuring out things like this are pretty high, and am moderately optimistic about 'abundance' types managing to do a similar thing, but I think there's still lots of fertile ground here.

johnswentworth's Shortform
Vaniver · 20d

Did you ever try Circling? I wonder a bit whether there's a conversational context that's very "get to the interesting stuff" which would work better for you. (Or, even if it's boring, it might be because it's foregrounding relational aspects of the conversation which are much less central for you than they are for most people.)

Consider chilling out in 2028
Vaniver · 25d

> E.g., why did folk write AI 2027? Did they honestly think the timeline was that short?

Isn't it more like "I think there's a 10% chance of transformative AI by 2027, and that is like 100x higher than what it looks like most people think, so people really need to think thru that timeline"?

Like, I generally put my median year at 2030-2032; if we make it to 2028, the situation will still feel like "oh jeez we probably only have a few years left", unless we made it to 2028 thru a mechanism that clearly blocks transformative AI showing up in 2032. (Like, a lot is hinging on what "feels basically like today" means.)

New Endorsements for “If Anyone Builds It, Everyone Dies”
Vaniver · 26d

Done, we'll see how it goes.

New Endorsements for “If Anyone Builds It, Everyone Dies”
Vaniver · 1mo

IMO the real story here for 'how' is "the book is persuasive to a general audience." (People have made claims about the Overton window shifting--and I don't think there's 0 of that--but my guess is the book would have gotten roughly the same blurbs in 2021, and maybe even 2016.)

But the social story for 'how' is that I grew up in the DC area, and one of my childhood friends is the son of an economist who is not a household name but is prominent enough that all of the household-name economists know him. (This is an interesting position to be in--I feel like I'm sort of in this boat with the rationality community.) We play board games online every week, and so when the blurb hunt started I got him an advance copy, and he was hooked enough to recommend it to Ben (and others, I think).

(I share this story in part b/c I think it's cool, but also because I think lots of people are ~2 hops away from some cool people and could discover this with a bit of thought and effort.)

62 · There Should Be More Alignment-Driven Startups (Ω) · 1y · 14 comments
41 · On plans for a functional society · 2y · 8 comments
35 · Secondary Risk Markets · 2y · 4 comments
46 · Vaniver's thoughts on Anthropic's RSP · 2y · 4 comments
45 · Truthseeking, EA, Simulacra levels, and other stuff · 2y · 12 comments
24 · More or Fewer Fights over Principles and Values? · 2y · 10 comments
81 · Long-Term Future Fund: April 2023 grant recommendations · 2y · 3 comments
65 · A Social History of Truth · 2y · 2 comments
32 · Frontier Model Security (Ω) · 2y · 1 comment
39 · Bengio's FAQ on Catastrophic AI Risks · 2y · 0 comments
Sequences · 5mo
Sequences · 5mo
April Fool's · 5y · (+83)
History of Less Wrong · 9y · (+527/-300)
Sequences · 11y
Sequences · 11y · (+34)
Squiggle Maximizer (formerly "Paperclip maximizer") · 12y · (+6/-5)
Special Threads · 12y · (+185/-14)
Special Threads · 12y · (+42/-46)