niplav

I operate by Crocker's rules.

Website.

Sequences

Inconsistent Values and Extrapolation

Comments

Being nitpicky: Professors are selected to be legibly good at research.

Yeah, I find it difficult to figure out how to look at this. A lot of MIRI discussion focused on their decision theory work, but I think that's just not that important.

Tiling agents, for example, was more about constructing or theorizing about agents that may have access to their own values, in a highly idealized logical setting.

As far as I understand, MIRI did not assume that we're just able to give the AI a utility function directly. The Risks from Learned Optimization paper was written mainly by people from MIRI!

Other things like Ontological Crises and Low Impact sort of assume you can get some info into the values of an agent, and Logical Induction was more about how to construct systems that satisfy some properties in their cognition.

May I recommend my cost-benefit analysis of cryonics? I think it's the SOTA, even though it could be improved.

Hash function used: SHA-256 (a verification sketch follows the commitments below)

8b21114d4e46bf6871a1e4e9812c53e81a946f04b650e94615d6132855e247e8

To be revealed: 2024-12-31

695e9e58df6bb7e9cbdf48bf4084252cc26149667af60949817c455fdff33168

To be revealed: 2028-01-01

88f11de09bf57161f0468a931d383f829d5f0a0d353bd0982743b2dd4ed126d8

To be revealed: 2028-01-01

805a3d58bb62a94a79ac48445d2bad8ef175fd93c3839ea4d60b222952456618

To be revealed: 2028-01-01
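
A minimal Python sketch for checking one of these commitments once its plaintext is revealed, assuming the commitment is the plain SHA-256 hex digest of the revealed UTF-8 text (whether a salt or trailing newline was included is not stated, so treat those details as assumptions):

```python
import hashlib

def verify_commitment(revealed_text: str, expected_hex: str) -> bool:
    """Return True if the SHA-256 digest of revealed_text matches the committed hash."""
    digest = hashlib.sha256(revealed_text.encode("utf-8")).hexdigest()
    return digest == expected_hex

# Hypothetical usage: substitute the actual revealed text on the reveal date.
commitment = "8b21114d4e46bf6871a1e4e9812c53e81a946f04b650e94615d6132855e247e8"
print(verify_commitment("example prediction text", commitment))  # False until the real text is known
```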

Changed to the SEP article.

Do people around here consider Aleph Alpha to be a relevant player in the generative AI space? They recently received a €500M investment from various large German companies.

I dunno, my p(doom) over time looks pretty much like a random walk to me: 60% mid-2020, down to 50% in early 2022, up to 85% mid-2022, down to 80% in early 2023, down to 65% now.

I think this doesn't apply to me: My model is that I lack energy in the morning, and theanine just makes me more tired.

I responded well to caffeine in another experiment.
