jrincayc
Comments

Yudkowsky on "Don't use p(doom)"
jrincayc · 1mo

My P(doom) is 100% - Ɛ to Ɛ. 100% - Ɛ: humanity has already destroyed the solar system (and the AGI is rapidly expanding to the rest of the galaxy) on at least some of the quantum branches, due to an independence-gaining AGI accident. I am fairly certain this was possible by 2010, and possibly as soon as around 1985. Ɛ: at least some of the quantum branches where independence-gaining AGI happens will have a sufficiently benign AGI that we survive the experience, or we will actually restrict computers enough to stop accidentally creating AGI. It is worth noting that if AGI has a very high probability of killing people, then what this looks like in a world with quantum branching is that there will periodically be AI winters whenever computers and techniques approach being capable of AGI, because many of the "successful" uses of AI result in deadly AGI, and so we just don't live long enough to observe those.
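
A toy simulation can illustrate the observation-selection effect in that last sentence. The probabilities below are made-up placeholders I chose for illustration, not estimates:

```python
import random

# Toy observation-selection sketch: branches where an accidental AGI is
# fatal have no observers, so survivors mostly remember "AI winters".
random.seed(0)
N_BRANCHES = 100_000
P_ATTEMPT_WORKS = 0.9   # assumed: most near-AGI attempts "succeed"
P_AGI_IS_FATAL = 0.99   # assumed: an accidental AGI is almost always deadly

surviving = 0
winters_seen = 0
for _ in range(N_BRANCHES):
    works = random.random() < P_ATTEMPT_WORKS
    if works and random.random() < P_AGI_IS_FATAL:
        continue  # no observers left on this branch
    surviving += 1
    if not works:
        winters_seen += 1  # survivors on this branch see an "AI winter"

print(f"branches with observers: {surviving / N_BRANCHES:.3f}")
print(f"of those, fraction that saw a winter: {winters_seen / surviving:.3f}")
```

With these placeholder numbers only about 10% of branches fail to build AGI, but over 90% of the branches that still contain observers are ones where the attempt failed.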

And if I am just talking with an average person, I say > 1%, and make the side comment that if a passenger plane had a 1% probability of crashing on its next flight, it would not take off.

Edit: And to really explain this would take a much longer post, so I apologize for that. That said, if I had to choose, to prevent accidental AGI I would be willing to restrict myself and everyone else to computers with on the order of 512 KiB of RAM and 2 MiB of disk space, as part of a comprehensive program to prevent existential risk.

Thoughts on Hardware limits to Prevent AGI?
jrincayc · 7mo

ResearchGate link: https://www.researchgate.net/publication/388398902_Memory_and_FLOPS_Hardware_Limits_to_Prevent_AGI

Thoughts on Hardware limits to Prevent AGI?
jrincayc · 9mo

Second attempt: http://jjc.freeshell.org/writings/hardware_limits_for_agi_d2.pdf

Another Way to Be Okay
jrincayc · 1y

Thank you for this, I still read it periodically.

AGI Fermi Paradox
jrincayc · 1y

First of all, I do agree with you that the question of why other civilizations haven't created AGIs that have spread far enough to reach Earth is a really interesting one as well, and I would be happy to see a discussion of that question.

For that question, I think you are missing a fourth possibility: AGI is almost always deadly, so on quantum branches where it develops anywhere in the light cone, no one observes it (at least not for long). So we don't see other civilizations' AGI because we just are not alive on those quantum branches.

Thoughts on Hardware limits to Prevent AGI?
jrincayc · 1y

My first attempt to turn this into a paper: http://jjc.freeshell.org/writings/hardware_limits_for_agi_d1.pdf

Some ways of spending your time are better than others
jrincayc · 2y

For what it is worth, I did write a programming language over the course of about two years of my life ( https://github.com/jrincayc/rust_pr7rs/ ). I do agree that there are better and worse ways to spend time, and it is probably worth thinking about this. I think that the fact that you "recoiled from the thought of actually analyzing whether there was anything better I could have been doing" is a good hint that maybe writing a programming language wasn't the best thing for you. I wish you good skill and good luck in finding ways to spend your time.

Preserving and continuing alignment research through a severe global catastrophe
jrincayc · 2y

Sounds interesting. 

For somewhat related reasons, I made a two-column version of Rationality: From AI to Zombies ( https://github.com/jrincayc/rationality-ai-zombies ), which is easier to print than the original version, and have printed multiple copies in the hope that some survive a global catastrophe.

Contra Yudkowsky on AI Doom
jrincayc · 2y

I agree that the human brain is roughly at a local optimum. But think about what could be done just by adding a fiber-optic connection between two brains (I think there are some ethical issues here, so this is a thought experiment, not something I recommend). The two brains could be a kilometer apart, and the signal between them on the fiber-optic link would take less time than a signal takes to get from one side of a regular brain to the other. So these two brains could think together (probably with some (a lot of?) neural rewiring) as fast as a regular brain thinks individually. Repeat with some more brains.
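
To make the timing concrete, here is a rough back-of-the-envelope sketch; the brain width and the signal speeds are my rough assumptions, not measured figures:

```python
# Rough latency comparison: 1 km of optical fiber vs. a neural signal
# crossing one brain. All numbers are approximate assumptions.
FIBER_SPEED = 2e8      # m/s, roughly c divided by the refractive index of glass
NEURON_SPEED = 200.0   # m/s, upper-end figure for fast myelinated axons
FIBER_LENGTH = 1000.0  # m, the 1 km separation in the thought experiment
BRAIN_WIDTH = 0.15     # m, rough distance across one brain

t_fiber = FIBER_LENGTH / FIBER_SPEED  # about 5 microseconds
t_brain = BRAIN_WIDTH / NEURON_SPEED  # about 750 microseconds

print(f"fiber, 1 km:  {t_fiber * 1e6:6.1f} us")
print(f"across brain: {t_brain * 1e6:6.1f} us")
```

Even over a full kilometer of fiber, the link latency comes out two orders of magnitude smaller than a within-brain signal under these assumptions.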

Or imagine if myelination was under conscious control. If you need to learn a new language, demyelinate the right parts of the brain, learn the language quickly, and then remyelinate it.

So I think even without changing things much neurons could be used in ways that provide faster thinking and faster learning.

As for energy efficiency, there is no reason that a superintelligence has to be limited to the approximately 20 watts that a human brain has access to. Gaming computers can have 1000 W power supplies, which is 50 times more power. I think 50 brains thinking together really quickly (as in the interbrain connections are as fast as the intrabrain connections) could probably out-think a lot more than 50 humans. 

And, today, there are supercomputers that use 20 or more megawatts of power, so if we have computing that is as energy efficient as the human brain, that is equivalent to 1 million human brains (20e6 W / 20 W), and I think we might be able to agree that a million brains thinking together really well could probably out-think all of humanity.
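
Spelling out the power arithmetic from the last two paragraphs (assuming the ~20 W per brain figure used above):

```python
# Brain-equivalents per power budget, assuming ~20 W per human brain.
BRAIN_WATTS = 20.0
GAMING_PSU_WATTS = 1_000.0
SUPERCOMPUTER_WATTS = 20e6

print(GAMING_PSU_WATTS / BRAIN_WATTS)     # 50.0 brain-equivalents on one gaming PSU
print(SUPERCOMPUTER_WATTS / BRAIN_WATTS)  # 1000000.0 brain-equivalents at 20 MW
```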

Contra Yudkowsky on AI Doom
jrincayc · 2y

Hm, neuron impulses travel at around 200 m/s, while electric signals travel at around 2e8 m/s (roughly a factor of a million faster), so I think electronics have an advantage there. (I agree that you may have a point with "That Alien Mindspace".)

Posts

Use computers as powerful as in 1985 or AI controls humans or ? (9mo)
AGI Fermi Paradox (1y)
Thoughts on Hardware limits to Prevent AGI? (2y)