Alexey Feldgendler, a software developer interested in rationality

Comments
On Equivalence of Supergoals
Alexey Feldgendler · 7y

Interesting.

This might have something to do with the fact that the problems are getting harder now that the low-hanging fruit has been picked. Every additional year of life expectancy is harder to gain than the one before. Every cycle of Moore's law is harder because we're starting to deal with feature sizes comparable to individual molecules.

I admit that this makes my point weaker.

On Equivalence of Supergoals
Alexey Feldgendler · 7y

I have to admit that I have no idea; my understanding of the brain isn't enough to even assess the magnitude of the challenge. Intuitively, it seems at least as hard as “find cheap renewable energy”, but I might be completely wrong.

On Equivalence of Supergoals
Alexey Feldgendler · 7y

My point is that to “model brains better” requires a lot more knowledge about the brain (neurology, microbiology, chemistry) and a lot more computing power; those things in turn require progress in other areas, and so on. So that sounds like one of those equivalent supergoals.

On Equivalence of Supergoals
Alexey Feldgendler · 7y

Right. This should read “if you believe in a friendly AI singularity”. Updating the post.

Posts

On Equivalence of Supergoals · 22 · 7y · 8