Danielle_Fong

Comments, sorted by newest
Pascal's Mugging: Tiny Probabilities of Vast Utilities
Danielle_Fong · 17y

"But, small as this probability is, it isn't anywhere near as small as 3^^^^3 is large"

Eliezer, I contend your limit!
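For anyone unfamiliar with the notation, 3^^^^3 is Knuth's up-arrow notation: one arrow is exponentiation, and each additional arrow iterates the previous operation. A minimal Python sketch of the recursion (illustrative only; the helper name is mine, and 3^^^^3 itself is hopelessly beyond any actual computation):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^^...^ b with n arrows: n = 1 is plain
    exponentiation; with more arrows, a ^(n) b is b copies of a combined
    with the (n-1)-arrow operation, right-associated."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 3^^3 = 3^(3^3) = 3^27 is still printable...
print(up_arrow(3, 2, 3))   # 7625597484987
# ...but 3^^^3 is already a tower of 3s about 7.6 trillion levels high,
# and 3^^^^3 = up_arrow(3, 4, 3) is unimaginably larger still -- which is
# the point of the quoted sentence.
```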

Growing Up is Hard
Danielle_Fong · 17y

@Eliezer

Sure, there are upgrades one can make where one can more or less prove deterministically how they change a subsystem in isolation. Things like adding the capability for zillion-bit math, or adding a huge associative memory. But it's not clear that such a change would actually be an upgrade once the subsystem is interacting with the rest of the AI and with the unpredictable environment at the same time. I guess the word I'm getting hung up on is 'correctness.' Sure, the subsystems could be deterministically correct, but would that necessarily be a system-wide upgrade?

It's also quite plausible that there are certain 'upgrades' (or at least large cognitive-system changes) which can't be arrived at deterministically, even by a superhuman intelligence.
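A toy sketch of the point about subsystem-level versus system-wide correctness (a hypothetical illustration of mine, not something from the post): a component can be deterministically correct in isolation and still make the whole agent worse once it interacts with an environment that cares about things the proof never mentioned, such as response time.

```python
import time
from fractions import Fraction

def fast_inverse(x):
    return 1.0 / x              # approximate floating-point reciprocal

def exact_inverse(x):
    return Fraction(1, x)       # deterministically exact for any nonzero integer x

def control_loop(inverse, steps=50_000):
    """Stand-in environment: the agent must produce its response quickly."""
    start = time.perf_counter()
    total = 0
    for i in range(1, steps):
        total += inverse(i) * i
    return time.perf_counter() - start

# The exact subsystem is provably correct in isolation, but it is typically
# orders of magnitude slower, so an agent on a real-time budget can now miss
# deadlines it used to meet: locally correct, yet not a system-wide upgrade.
print("approximate subsystem:", control_loop(fast_inverse), "seconds")
print("exact subsystem:      ", control_loop(exact_inverse), "seconds")
```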

Growing Up is Hard
Danielle_Fong · 17y

"You could build a modular, cleanly designed AI that could make a billion sequential upgrades to itself using deterministic guarantees of correctness."

Really? Explain how? It seems like a general property of an intelligent system that it can't know everything about how it would react to everything. That falls out of the halting problem (and, for that matter, Gödel's first incompleteness theorem) fairly directly. It might be possible to make a billion sequential upgrades with probabilistic guarantees of correctness, but only in a low-entropy environment, and even then it's dicey, and I have no idea how you'd prove it.
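A hedged sketch of how that falls out of the usual diagonalization (a toy construction of mine, not anything from the original post): if an agent had a total, always-correct predictor of its own output, it could consult that predictor about itself and then do the opposite, so no such predictor can exist.

```python
def contrarian(predict_my_output):
    """Given a claimed perfect predictor of what this very call will return,
    return the opposite of whatever it predicts."""
    predicted = predict_my_output(contrarian)
    return not predicted

def candidate_predictor(fn):
    # Any total predictor must commit to some answer; say it claims True.
    return True

# The prediction is wrong by construction: the system's actual reaction
# differs from what it 'knew' about itself.
print(contrarian(candidate_predictor))   # prints False
```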
