LESSWRONG
Dan Weinand

Comments
Sorted by newest
(The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser
Dan Weinand · 8mo · 111

Donated 5k. I think LessWrong is a big reason I got into EA and quantified self (which had downstream effects on my getting married), and it exposed me to many useful ideas.

I'm not sure about the marginal value of my donation, but I'm viewing this more as payment for services rendered. I think I owe LessWrong a fair amount, since I expect my counterfactual life to be much worse.

Loudly Give Up, Don't Quietly Fade
Dan Weinand · 2y · 10

I still think that if you want to know where X is on someone's TODO list, you should ask that directly instead of asking for their full TODO list. The alternative feels nearly as wrong as asking for someone's top five movies of the year instead of simply asking whether they liked Oppenheimer, when that's what you actually want to know.

Loudly Give Up, Don't Quietly Fade
Dan Weinand · 2y · 41

I don't think this level of trickery is a good idea.

If you're working with someone honest, you should ask for the info you want. On the other hand, if you're working with someone who will obfuscate when asked "Are you working on X?", I don't see a strong reason to believe they will give better info when asked about their top priorities instead.

Self-driving car bets
Dan Weinand · 2y · 40

Regarding Waymo (and Cruise, although I know less there) in San Francisco: at the last CPUC meeting on allowing Waymo to charge for driverless service, the vote was delayed. Last I checked, Waymo operates in more areas and at more times of day than Cruise in SF.
https://abc7news.com/sf-self-driving-cars-robotaxis-waymo-cruise/13491184/

I feel like Paul's right that the only crystal clear 'yes' is Waymo in Phoenix, and the other deployments are more debatable (due to scale and scope restrictions).

Code Quality and Rule Consequentialism
Dan Weinand · 3y · 10

You gave the caveats, but I'm still curious which companies you felt had this engineer-vs-manager conflict routinely over code quality. Mostly, I'd like to know so I can avoid working at those companies.

I suspect the conflict might be exacerbated at places where managers don't write code (especially if they've never written code). My managers at Google and Waymo have tended to be very supportive of code health projects, and the discussion of how to trade off code debt against velocity is very explicit. We've gotten pretty clear guidance in some quarters along the lines of 'We are sprinting and expect to accumulate debt' vs. 'We are slowing down to pay off tech debt'. This makes it easy to tell whether a given code health project is something that company leadership wants me to be doing right now.

Increasing Demandingness in EA
Dan Weinand · 3y · 20

Agreed, although it feels like in that case we should be comparing 'donating to X-risk organizations' vs 'working at X-risk organizations'. I think that by default I would assume that the money vs talent trade-off is similar at global health and X-risk organizations though.

Increasing Demandingness in EA
Dan Weinand · 3y · 10

Fair point that GiveWell has updated their room for more funding (RFMF) and increased their estimated cost per QALY.

I do think that 300K EAs doing something equivalent to eliminating the global disease burden is substantially more plausible than 66K doing so. This seems trivially true since more people can do more than fewer people. I agree that it still sounds ambitious, but saying that ~3X the people involved in the Manhattan project could eliminate the disease burden certainly sounds easier than doing the same with half the Manhattan project's workforce size.

This is getting into nits, but ruling out all arguments of the form 'this seems to imply' seems really strong? Naively, it seems to limit me to discussing only the implications the argument-maker explicitly acknowledges. I'm probably misinterpreting you here, since that seems really silly! This is usually what I'm trying to do when I ask about implications: I note something odd to see whether the oddness is actually implied or whether I misinterpreted something.

Agreed that X-risk is very important and also hard to quantify.

Increasing Demandingness in EA
Dan Weinand · 3y · 40

I'm surprised that you think direct work has such a high impact multiplier relative to one's normal salary. The footnote seems to suggest that someone who could earn a $100K salary through earning to give could instead provide $3M of impact per year via direct work.


I think GiveWell still estimates it can save a life for ~$6K on the margin, which is ~50 QALYs.

($3 million / EA-year) × (1 life / $6K) × (50 QALY / life) ≈ 25K QALY per EA-year

That both seems like a very high figure and seems to imply that 66K EAs would be sufficient to do good equivalent to totally eliminating the burden of all disease (ignoring diminishing marginal returns). It strikes me as optimistic, unless you expect X-risk charities to be extremely effective. I'd be curious to hear how you arrived at the ~$3 million figure intuitively.

I would guess something closer to 5-10X impact relative to industry salary, rather than a 30X impact.
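As a sanity check, the arithmetic above can be run directly. This is a minimal sketch; every input is an assumption from this thread (~$6K per life saved, ~50 QALYs per life), not an established estimate:

```python
def qaly_per_ea_year(impact_dollars, cost_per_life=6_000, qaly_per_life=50):
    """QALYs per EA-year implied by a dollar-impact figure, using the
    thread's assumed GiveWell-style rates (~$6K/life, ~50 QALYs/life)."""
    return impact_dollars / cost_per_life * qaly_per_life

# The footnote's implied ~$3M/year of impact (roughly a 30X multiplier on $100K):
print(qaly_per_ea_year(3_000_000))  # → 25000.0

# A 5-10X multiplier on a $100K salary instead gives roughly 4K-8K QALYs/EA-year:
print(qaly_per_ea_year(500_000), qaly_per_ea_year(1_000_000))
```

At 25K QALYs per EA-year, ~66K EA-years would match a disease burden of ~1.65 billion QALYs, which is consistent with the 66K figure above.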

Split and Commit
Dan Weinand · 4y · 40

Note that it might be legally very difficult to open-source much of SpaceX's technology, since the US classifies rocket technology as advanced weapons technology (because rockets can be used as such).

Covid 11/4: The After Times
Dan Weinand · 4y · 80

I'm not sure that contagiousness is a good reason to believe an (in)action is particularly harmful, beyond the multiplier contagiousness creates by producing a larger total harm. It seems clear we'd all agree that murder is much worse than visiting a restaurant with a common cold, despite the latter being a contagious harm.

There is a good point that the analogy breaks down, though: a DUI doesn't cause harm during your job (assuming you don't drive for work), whereas being unvaccinated does impose expected harm on colleagues and customers.

Posts
10 · Gratitude: Data and Anecdata · 5y