habryka

Running Lightcone Infrastructure, which runs LessWrong. You can reach me at habryka@lesswrong.com. I have signed no contracts or agreements whose existence I cannot mention.

Sequences

A Moderate Update to your Artificial Priors
A Moderate Update to your Organic Priors
Concepts in formal epistemology

Comments

What's the source of this? Will also DM you.

habryka

just some actual consensus among established researchers to sift mathematical facts from conjecture.

"Scientific consensus" is a much much higher bar than peer review. Almost no topic of relevance has a scientific consensus (for example, there exists basically no trustworthy scientific for urban planning decisions, or the effects of minimum wage law, or pandemic prevention strategies, or cyber security risks, or intelligence enhancement). Many scientific peers think there is an extinction risk. 

I think demanding scientific consensus is an unreasonably high bar, one that would approximately never be met in any policy discussion.

(I didn't get anything out of it; it seems kind of aggressive in a way that feels like a non sequitur, and I am pretty sure it mischaracterizes people. I didn't downvote it, but I did disagree-vote on it.)

habryka

Thankfully, most of this is now moot as the company has retracted the contract.

I don't think any of this is moot, since the thing that is IMO most concerning is people signing these contracts and then going into policy or leadership positions without disclosing that they signed them. Those things happened in the past and are real breaches of trust.

habryka

Promoted to curated: I've really appreciated a lot of the sequence you've been writing about various epistemic issues around the EA (and to some degree the rationality) community. This post feels like an appropriate capstone to that work, and I quite like it as a positive pointer to a culture that I wish had more adherents.

habryka

One reason I'm interested in liability is that it opens up a way to do legal investigations. The legal system grants a huge number of privileges that you get to use if there is reasonable suspicion that someone has committed a crime or been negligent. I think it's quite likely that without direct liability, even if Microsoft or OpenAI caused some huge catastrophe, we would never get a proper postmortem or analysis of the facts, and would never reach high confidence about the actual root causes.

So while I agree that OpenAI and Microsoft of course already want to avoid being seen as responsible for a large catastrophe, legal liability makes it much more likely that there will be an actual investigation in which, e.g., the legal system gets to confiscate servers and messages to analyze what happened. That in turn makes it more likely that if OpenAI and Microsoft are responsible, they will be found out.

habryka

Not sure what you mean by "underrated". The fact that they have $300MM from Vitalik but haven't really done much anyway was a downgrade in my book.

habryka

I am not that confident about this. I do notice that my psychological relationship to "all the stars explode" and "earth explodes" is very different, and I am not good enough at morality to be confident about dismissing that difference.

I disagree; I think it matters a good amount, e.g. if the risk scenario is indeed "humans will probably get a solar system or two because it's cheap from the perspective of the AI". I also think there is a risk of the AI torturing the uploads it has, and I agree that if that were the reason humans are still alive, I would feel comfortable bracketing it. But I think Ryan is arguing more for something like "humans will get a solar system or two and basically get to have decent lives".

(I missed "this was in the works for a while" on my first read of your comment.)

No, I just gaslit you. I edited it as a clarification when I saw your reaction. Sorry about that; I should have left a note that I edited it.
