Dan Weinand

Comments

You gave the caveats, but I'm still curious to hear what companies you felt had this engineer vs manager conflict routinely about code quality. Mostly, I'd like to know so I can avoid working at those companies.

I suspect the conflict might be exacerbated at places where managers don't write code (especially if they've never written code). My managers at Google and Waymo have tended to be very supportive of code health projects. The discussion of how to trade off code debt and velocity is also very explicit. We've gotten pretty clear guidance in some quarters along the lines of 'We are sprinting and expect to accumulate debt' vs 'We are slowing down to pay off tech debt'. This makes it pretty easy to tell whether a given code health project is something that company leadership wants me to be doing right now.

Agreed, although it feels like in that case we should be comparing 'donating to X-risk organizations' vs 'working at X-risk organizations'. I think that by default I would assume that the money vs talent trade-off is similar at global health and X-risk organizations though.

Fair point that GiveWell has updated their RFMF and increased their estimated cost per QALY. 

I do think that 300K EAs doing something equivalent to eliminating the global disease burden is substantially more plausible than 66K doing so. This seems trivially true since more people can do more than fewer people. I agree that it still sounds ambitious, but saying that ~3X the people involved in the Manhattan project could eliminate the disease burden certainly sounds easier than doing the same with half the Manhattan project's workforce size.

This is getting into nits, but ruling out all arguments of the form 'this seems to imply' seems really strong? Like, it naively seems to limit me to only discussing implications that the argument maker explicitly acknowledges. I'm probably misinterpreting you here though, since that seems really silly! This is usually what I'm trying to say when I ask about implications; I note something odd to see if the oddness is implied or if I misinterpreted something.

Agreed that X-risk is very important and also hard to quantify.

I'm surprised that you think that direct work has such a high impact multiplier relative to one's normal salary. The footnote seems to suggest that someone who could earn a $100K salary by earning to give could instead provide $3M in impact per year through direct work.


I think GiveWell still estimates it can save a life for ~$6K on the margin, which is ~50 QALYs.

(1 life / $6K) × (50 QALY / life) × ($3 million / EA-year) ≈ 25K QALY per EA-year

Which both seems like a very high figure and seems to imply that 66K EAs would be sufficient to do good equivalent to totally eliminating the burden of all disease (I'm ignoring decreasing marginal returns).  This seems like an optimistic figure to me, unless you're very optimistic about X-risk charities being effective? I'd be curious to hear how you got to the ~3 million figure intuitively.
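For concreteness, here is that arithmetic spelled out as a minimal sketch. The $6K-per-life and 50-QALY-per-life figures are the ones quoted above; the ~1.65 billion DALY/year global disease burden is my own rough assumption, included only to show how a ~66K figure could fall out.

```python
# Back-of-the-envelope check of the figures above (assumptions, not authoritative).
cost_per_life = 6_000            # USD, GiveWell-style marginal cost per life saved (quoted above)
qalys_per_life = 50              # QALYs per life saved (quoted above)
impact_per_ea_year = 3_000_000   # USD-equivalent direct-work impact per EA-year (from the footnote)

qalys_per_ea_year = impact_per_ea_year / cost_per_life * qalys_per_life
print(qalys_per_ea_year)         # 25000.0 QALYs per EA-year

# Assumed global disease burden, roughly 1.65 billion DALYs/year; my own ballpark,
# chosen because it is consistent with the 66K figure in the comment.
global_disease_burden = 1.65e9
eas_needed = global_disease_burden / qalys_per_ea_year
print(round(eas_needed))         # ~66000 EAs, ignoring diminishing marginal returns
```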

I would guess something closer to 5-10X impact relative to industry salary, rather than a 30X impact.

Note that it might be very legally difficult to open source much of Space-X technology, due to the US classifying rockets as advanced weapons technology (because they could be used as such).

I'm not sure that contagiousness is a good reason to believe that an (in)action is particularly harmful, outside of the multiplier contagiousness creates by generating a larger total harm. It seems clear that we'd all agree that murder is much worse than visiting a restaurant with a common cold, despite the fact that the latter is a contagious harm.

There is a good point, though, that the analogy breaks down: a DUI doesn't cause harm during your job (assuming you don't drive for work), whereas being unvaccinated does cause expected harm to colleagues and customers.

Perhaps too tongue in cheek, but there is a strong theoretical upper bound on R0 for humans as of ~2021. It's around 8 billion, the current world population.

I think you're correct that the difference between R0 and Rt is that Rt takes into account the proportion of the population already immune.

However, R0 is still dependent on its environment.  A completely naive (uninfected) population of hermits living in caves hundreds of miles distant from one another has an R0 of 0 for nearly anything. A completely naive population of immunocompromised packed-warehouse rave attendees would probably have an R0 of 100+ for measles.

I don't know if there is another Rt-style variable that tries to capture the infectiousness of a disease given both the prevalence of immunity and the environment. It seems like most folks just kinda assume that the environment (other than the immune proportion) is constant when comparing R0/Rt figures.
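As a minimal sketch of that relationship: the standard relation is Rt ≈ R0 × (susceptible fraction). The contact_multiplier parameter below is a hypothetical stand-in for the 'environment' term, not an established epidemiological quantity, and the measles-like R0 of 15 is just an illustrative value.

```python
def effective_r(r0: float, immune_fraction: float, contact_multiplier: float = 1.0) -> float:
    """Effective reproduction number given baseline R0, the immune fraction, and a
    crude scaling for how much more (or less) mixing the current environment has
    relative to the one R0 was measured in (contact_multiplier is illustrative only)."""
    susceptible_fraction = 1.0 - immune_fraction
    return r0 * susceptible_fraction * contact_multiplier

# Measles-like R0 of ~15 in a population where 90% are immune:
print(effective_r(15, immune_fraction=0.9))                        # 1.5
# Same disease and immunity, but a packed-warehouse-rave environment with
# (hypothetically) 3x the usual contact rate:
print(effective_r(15, immune_fraction=0.9, contact_multiplier=3))  # 4.5
```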

A one point improvement (measured on a ten point scale) feels like a massive change to expect. I admire the guts it takes to bet that it'll happen and to change your mind otherwise, but I'm curious whether you genuinely expected a change of that size.

For me, a one point change requires super drastic measures (e.g. getting two hours too little sleep for 3+ days straight). Although I may well be arbitrarily compressing too much of my life into the 6-9 range of the ten point scale.

By the way, here is one of GiveDirectly's blog posts on survey and focus group results:
https://www.givedirectly.org/what-its-like-to-receive-a-basic-income/
