Larry_D'Anna

00

Did Ira Howard actually say that? In which story?

00

I'm getting déjà vu again. Are you recycling bits of older posts or other things you've written?

10

Eliezer: have you given any thought to the problem of choosing a measure on the solution space? If you're going to count bits of optimization, you need some way of choosing a measure. In the real world solutions are not discrete and we cannot simply count them.

00

I swear to god I've read these Kasparov posts before...

-10

I feel like I've read this exact post before. Déjà vu?

00

Moral questions are terminal. Ethical questions are instrumental.

I would argue that ethics are values that are instrumental, but treated as if they were terminal for almost all real object-level decisions. Ethics are a human cognitive shortcut. We need ethics because we can't really compute the expected cost of a black swan bet. An AI without our limitations might not need ethics. It might be able to keep all its instrumental values in its head *as* instrumental, without getting confused like we would.

00

"But it was PT:TLOS that did the trick. Here was probability theory, laid out not as a clever tool, but as The Rules, inviolable on pain of paradox"

I am unaware of a statement of Cox's theorem where the full *technical* statement of the theorem comes even close to this informal characterization. I'm not saying it doesn't exist, but PT:TLOS certainly doesn't do it.

I found the first two chapters of PT:TLOS to be absolutely, wretchedly awful. It's full of technical mistakes, crazy mischaracterizations of other people's opinions, hidden assumptions and skipped steps (that he tries to justify with handwaving nonsense), and even a discussion of Gödel's theorems that mixes meta levels and completely misses the point.

20

Eliezer, I think you have dissolved one of the most persistent and venerable mysteries: "How is it that even the smartest people can make such stupid mistakes?"

Being smart just isn't *good* *enough*.

00

J Thomas *Larry, you have not proven that 6 would be a prime number if PA proved 6 was a prime number, because PA does not prove that 6 is a prime number.*

No, I'm afraid not. You clearly do not understand the ordinary meaning of implications in mathematics. "if a then b" is equivalent (in boolean logic) to ((not a) or b). They mean the exact same thing.
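The equivalence is easy to check mechanically. A minimal sketch (my addition, not part of the original comment) that enumerates all four boolean assignments and compares the two forms:

```python
from itertools import product

# Material implication "if a then b" is false only when a is true and b is false.
# Check that this agrees with "(not a) or b" on every boolean assignment.
for a, b in product([False, True], repeat=2):
    if_then = not (a and not b)   # "if a then b": rules out a true, b false
    disjunct = (not a) or b       # the disjunctive form
    assert if_then == disjunct, (a, b)

print("equivalent on all assignments")
```

In particular, when a is false the implication is vacuously true, which is exactly the point at issue: "if PA proved 6 was prime, then..." is a true statement precisely because the antecedent is false.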

*The claim that phi must be true because if it's true then it's true*

I said no such thing. If you think I did then you do not know what the symbols I used mean.

*It's simply and obviously bogus, and I don't understand why there was any difficulty about seeing it.*

No offense, but you have utterly no idea what you are talking about.

*Similarly, if PA proved that 6 was prime, it wouldn't be PA*

PA is an explicit finite list of axioms, plus one axiom schema. What PA proves or doesn't prove has absolutely nothing to do with its definition.

"first-order logic cannot, in general, distinguish finite models from infinite models."

Specifically, if a first-order theory has arbitrarily large finite models, then it has an infinite one.
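This is a standard consequence of the compactness theorem; a sketch of the argument (my addition, not part of the original comment):

```latex
% For each n, a sentence asserting at least n distinct elements:
\sigma_n \;\equiv\; \exists x_1 \cdots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j
% Every finite subset of T \cup \{\sigma_n : n \in \mathbb{N}\} is satisfied
% by a finite model of T of size at least the largest n mentioned,
% so by compactness the whole set has a model, which must be infinite.
```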