I operate by Crocker's rules.



Inconsistent Values and Extrapolation


Being nitpicky: Professors are selected to be legibly good at research.

Yeah, I find it difficult to figure out how to look at this. A lot of MIRI discussion focused on their decision theory work, but I think that's just not that important.

Tiling agents, for example, was more about constructing, or theorizing about, agents that may have access to their own values, in a highly idealized logical setting.

As far as I understand, MIRI did not assume that we're just able to give the AI a utility function directly. The Risks from Learned Optimization paper was written mainly by people from MIRI!

Other things like Ontological Crises and Low Impact sort of assume you can get some information into the values of an agent, and Logical Induction was more about how to construct systems that satisfy certain properties in their cognition.

May I recommend my cost-benefit analysis of cryonics? I think it's the SOTA, even though it could be improved.

Hashsum used: SHA-256


To be revealed: 2024-12-31


To be revealed: 2028-01-01


To be revealed: 2028-01-01


To be revealed: 2028-01-01
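The entries above are hash commitments: a SHA-256 digest is published now, and the underlying text is revealed on the listed date. A minimal sketch of the scheme (the statement and nonce here are hypothetical placeholders, not the actual committed texts):

```python
import hashlib

# Hypothetical commitment: the statement and nonce are illustrative only.
statement = "example prediction text"
nonce = "some-random-salt"

# Publish only the digest now; keep statement and nonce private.
commitment = hashlib.sha256((statement + nonce).encode()).hexdigest()

# On the reveal date, publishing the statement and nonce lets anyone
# recompute the digest and check it against the published commitment.
recomputed = hashlib.sha256((statement + nonce).encode()).hexdigest()
assert recomputed == commitment
```

The nonce prevents a reader from simply guessing likely statements and hashing them to uncover the commitment early.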

Changed to the SEP article.

Do people around here consider Aleph Alpha to be a relevant player in the generative AI space? They recently received €500M from various large German companies.

I dunno, my p(doom) over time looks pretty much like a random walk to me: 60% in mid-2020, down to 50% in early 2022, up to 85% in mid-2022, down to 80% in early 2023, and down to 65% now.

I think this doesn't apply to me: My model is that I lack energy in the morning, and theanine just makes me more tired.

I responded well to caffeine in another experiment.
