siIver (834 karma)

Comments, sorted by newest
Questions about AGI's Importance
siIver · 8y · 0 · 0

It doesn't really matter whether the AI uses its full computational capacity. If the AI has a 100,000 times larger capacity (which is again a conservative lower bound) and it only uses 1% of it, it will still be 1,000 times as smart as a human at full capacity.
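
Spelling out the arithmetic (a quick check using the same assumed figures as above, a 100,000× capacity advantage at 1% utilization):

$$100{,}000 \times 0.01 = 1{,}000$$

so even at 1% utilization, the AI's effective capacity still exceeds a human's full capacity by three orders of magnitude.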

An AGI's algorithm will be better because it will have instant access to more facts than any human has time to memorize, and it will not have all of the biases that humans have. The entire point of the Sequences is to list dozens of ways in which the human brain reliably fails.

Questions about AGI's Importance
siIver · 8y · 0 · 0

Because

"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary swtich operation"

https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s

AI will be quantitatively smarter because it'll be able to think over 10,000 times faster (an arbitrary, conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
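
As a rough illustration of where a figure like that could come from (my own back-of-the-envelope, taking the ~100 Hz firing rate from the quote above and a ~1 GHz clock rate as a stand-in for digital hardware):

$$\frac{10^{9}\ \text{Hz}}{10^{2}\ \text{Hz}} = 10^{7}$$

so 10,000× is conservative with respect to raw serial speed alone.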

Instrumental Rationality Sequence Finished! (w/ caveats)
siIver · 8y · 0 · 0

"Less than a third of students by their own self-appointed worst-case estimate *1."

Missing a word here, I think.

Inconsistent Beliefs and Charitable Giving
siIver · 8y · 0 · 0

I think your post is spot on.

Sam Harris and Scott Adams debate Trump: a model rationalist disagreement
siIver · 8y · 8 · 0

First question: I know you admire Trump's persuasion skills, but what I want to know is why you think he's a good person/president, etc.

Answer: [talks about Trump's persuasion skills]

Yeah, okay.

Daniel Dewey on MIRI's Highly Reliable Agent Design Work
siIver · 8y · 0 · 0

This is an exceptionally well-reasoned article, I'd say. Particular props for the appropriate amount of uncertainty.

Against lone wolf self-improvement
siIver · 8y · 2 · 0

Well, if you put it like that, I fully agree. Generally, I believe that "if it doesn't work, try something else" isn't followed as often as it should be. There are probably a fair number of people who'd benefit from following this article's advice.

Against lone wolf self-improvement
siIver · 8y · 0 · 0

I don't quite know how to make this response more sophisticated than "I don't think this is true". It seems to me that whether classes or lone-wolf improvement is better is a pretty complex question, and the answer is fairly balanced, though overall I'd give the edge to lone-wolf.

Posts

7 · Changes in AI Safety Funding · 9y · 0
7 · The true degree of our emotional disconnect · 9y · 12