Davidmanheim

Sequences

Modeling Transformative AI Risk (MTAIR)

Comments

Taking Clones Seriously

Not really, given the huge disparity in numbers - unless you have a magic way of feeding/housing/clothing/caring for children which costs far less than is currently possible? (And note that we know baby warehousing REALLY doesn't work well.)

Taking Clones Seriously

Expected value compared to hiring the top 100 people in the International Math Olympiad each of the next 20 years?

Perceptual Entropy and Frozen Estimates

Link fixed, and title added. (If you didn't have another reason to dislike the CIA, they broke the link by moving it. Jerks.)

An Idea for a More Communal Petrov Day in 2022

Could the ceremony's big red button also be mirrored on the site, with a similar shutdown trigger? Non-attendees would still see the results, similar to the status quo. (Much like actual wars are decisions of a small, hopefully trusted group but affect the world more broadly.)

Choice Writings of Dominic Cummings

I didn't say "domestic pressure / public agreement is strong evidence"; I said that a reversal of the decision for those reasons would be strong evidence. And yes, I think that a majority of voters agreeing it was so much of a mistake that it is worth re-entering on materially worse terms, which they would need to be, would be a clear indication that the original decision was a bad one.

And I'm not sure why you say that a change in the long-term trajectory of growth is a myopic criterion. If the principal benefit is better ability to react to crises, then given the variety of crises that occur and their frequency, that should be obvious over the course of years, not centuries, and would absolutely affect economic growth over the long term.

Choice Writings of Dominic Cummings

I agree that the evidence is weak, but I think it will be much clearer in the future whether it was a mistake - and the pathways for it to have been good are different from the pathways for it to have been bad.

Two concrete things that would be strong evidence either way which we'd see in the next 5 years:
- Significant divergence from previous economic trajectory that differs from changes in the EU.
- UK choosing to rejoin the EU due to domestic pressure, or general public agreement that it was good.

Perhaps more likely, we see a mix of evidence, and we conclude that, as with most complex policy decisions, it will take an additional decade or two for a consensus of economists and historians to emerge before we can clearly see what the impact was.

That said, I would be very happy to bet at even odds on it resolving as a clear negative - albeit with a very long resolution time frame, and a somewhat qualitative resolution criterion.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I don't know specifically about mental health, but I do know of specific stories about financial problems being treated as security concerns - and I don't think I need to explain how incredibly horrific it is for an employee to tell their employer that they are in financial trouble, and be told that they have lost their job and income because of it.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I agree that there is a real issue here that needs to be addressed, and I wasn't claiming that there is no reason to have support - just that there is a reason to compartmentalize.

And yes, US military use of mental health resources is off the charts. But in the intelligence community there are some really screwed-up incentives: having a mental health issue can get your clearance revoked. You won't necessarily lose your job, but the impact on a person's career is a strong reason to avoid mental health care, and my (second-hand, not reliable) understanding is that this is a real problem.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

To attempt to make this point more legible:

Standard best practice in places like the military and intelligence organizations, where lives depend on secrecy being kept from outsiders - but not insiders - is to compartmentalize and maintain "need to know." Similarly, in information security, the best practice is to give people access only to what they need, to granularize access to different services / data, and to differentiate read / write / delete access. Even in regular organizations, lots of information is need-to-know - HR complaints, future budgets, estimates of profitability of a publicly traded company before quarterly reports, and so on. This is normal, and even though it's costly, those costs are necessary.

This type of granular control isn't intended to stop internal productivity; it is to limit the extent of failures in secrecy, and of attempts to exploit the system by leveraging non-public information, both of which are inevitable, since the costs of preventing failures grow very quickly as the risk of failure approaches zero. For all of these reasons, the ideal is to have trustworthy people who have low but non-zero probabilities of screwing up on secrecy. Then, you ask them not to share things that are not necessary for others' work. You allow only limited exceptions and discretion where it is useful. The alternative, of "good trustworthy people [who] get to have all the secrets versus bad untrustworthy people who don't get any," simply doesn't work in practice.
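
To make the least-privilege idea concrete, here is a minimal sketch of default-deny, per-person, per-resource grants with separate read / write / delete permissions. It is a toy in-memory example under my own assumptions, not any real access-control library; the `AccessPolicy`, `grant`, and `check` names are illustrative.

```python
from enum import Flag, auto

class Action(Flag):
    READ = auto()
    WRITE = auto()
    DELETE = auto()

class AccessPolicy:
    """Toy need-to-know policy: access is denied unless explicitly granted."""

    def __init__(self):
        self._grants = {}  # (person, resource) -> granted Action flags

    def grant(self, person, resource, actions):
        key = (person, resource)
        self._grants[key] = self._grants.get(key, Action(0)) | actions

    def check(self, person, resource, action):
        # Default-deny: only an explicitly granted (person, resource, action) passes.
        return action in self._grants.get((person, resource), Action(0))

policy = AccessPolicy()
# The analyst needs to read next year's budget, but not change it,
# and has no need to know about HR complaints at all.
policy.grant("analyst", "budget", Action.READ)
policy.grant("hr-lead", "hr-complaints", Action.READ | Action.WRITE)

assert policy.check("analyst", "budget", Action.READ)
assert not policy.check("analyst", "budget", Action.WRITE)
assert not policy.check("analyst", "hr-complaints", Action.READ)
```

The point of the sketch is the default: nobody sees anything they weren't explicitly granted, which limits how far any single failure of secrecy can spread.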

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I think this is much more complex than you're assuming. As a sketch of why: costs of communication scale poorly, and the benefits of being small and coordinating centrally often beat the costs imposed by needing to run everything as one organization. (This is why people advise startups to outsource non-central work.)
