hairyfigment

hairyfigment's Comments

Unusual medical event led to concluding I was most likely an AI in a simulated world

Sort of reminds me of that time I missed out on a lucid dream because I thought I was in a simulation. In practice, if you see a glitch in the Matrix, it's always a dream.

I find it interesting that we know humans are inclined to anthropomorphize, or see human-like minds everywhere. You began by talking about "entities", as if you remembered this pitfall, but it doesn't seem like you looked for ways that your "deception" could stem from a non-conscious entity. Of course the real answer (scenario 1) is basically that. You have delusions, and their origin lies in a non-conscious Universe.

Intrinsic properties and Eliezer's metaethics

The second set of brackets may be the disconnect. If "their" refers to moral values, that seems like a category error. If it refers to stories etc., that still seems like a tough sell. Nothing I see about Peterson or his work looks encouraging.

Rather than looking for value you can salvage from his work, or an 'interpretation consistent with modern science,' please imagine that you never liked his approach and ask why you should look at this viewpoint on morality in particular rather than any of the other viewpoints you could examine. Assume you don't have time for all of them.

If that still doesn't help you see where I'm coming from, consider that reality is constantly changing and "the evolutionary process" usually happened in environments which no longer exist.

Intrinsic properties and Eliezer's metaethics

Without using terms such as "grounding" or "basis," what are you saying and why should I care?

Stupid Questions September 2017

I repeat: show that none of your neurons have consciousness separate from your own.

Why on Earth would you think Searle's argument shows anything, when you can't establish that you aren't a Chinese Gym? In order to even cast doubt on the idea that neurons are people, don't you need to rely on functionalism or a similar premise?

Stupid Questions September 2017

What about it seems worth refuting?

The Zombie sequence may be related. (We'll see if I can actually link it here.) As far as the Chinese Room goes:

  • I think a necessary condition for consciousness is approximating a Bayesian update (a toy sketch of what I mean follows this list). So in the (ridiculous) version where the rules for speaking Chinese have no ability to learn, they also can't be conscious.
  • Searle talks about "understanding" Chinese. Now, the way I would interpret this word depends on context - that's how language works - but normally I'd incline towards a Bayesian interpretation of "understanding" as well. So this again might depend on something Searle left out of his scenario, though the question might not have a fixed meaning.
  • Some versions of the "Chinese Gym" have many people working together to implement the algorithm. Now, your neurons are all technically alive in one sense. I genuinely feel unsure how much consciousness a single neuron can have. If I decide to claim it's comparable to a man blindly following rules in a room, I don't think Searle could refute this. (I also don't think it makes sense to say one neuron alone can understand Chinese; neurologists, feel free to correct me.) So what is his argument supposed to be?
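
A minimal sketch of the contrast in the first bullet (my own toy illustration, not anything from Searle or the original post; the dialogue pairs and probabilities are made up): a fixed rulebook maps inputs to canned outputs and never changes state, while even a crude Bayesian learner revises a belief with each observation.

```python
# Toy contrast (hypothetical example): a fixed rulebook vs. a one-variable Bayesian updater.

FIXED_RULEBOOK = {"ni hao": "ni hao", "xie xie": "bu ke qi"}  # never changes, never learns

def rulebook_reply(utterance: str) -> str:
    """Searle-style room: look up a canned reply; internal state stays identical."""
    return FIXED_RULEBOOK.get(utterance, "ting bu dong")

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior P(H | E) for one binary hypothesis H after seeing evidence E."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# The learner's belief (say, "my interlocutor understood me") moves with each exchange;
# the rulebook's dispositions are exactly the same before and after.
belief = 0.5
for reply_made_sense in [True, True, False, True]:
    belief = bayes_update(belief,
                          0.9 if reply_made_sense else 0.1,
                          0.2 if reply_made_sense else 0.8)
print(round(belief, 3))  # belief has shifted; rulebook_reply("ni hao") is unchanged
```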

Open thread, September 11 - September 17, 2017

Do you know what the Electoral College is? If so, see here:

The single most important reason that our model gave Trump a better chance than others is because of our assumption that polling errors are correlated.

Open thread, September 11 - September 17, 2017

Arguably, claims about Donald Trump winning enough states - but Nate Silver didn't assume independence, and his site still gave that outcome a low probability.
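
To illustrate why that correlation assumption matters, here is a toy Monte Carlo of my own (made-up state margins and error sizes; this is not FiveThirtyEight's actual model): a shared polling error moves every state at once, so the chance of the underdog carrying all the must-win states is far higher than the product of per-state probabilities you would get by assuming independence.

```python
# Toy Monte Carlo (hypothetical numbers): correlated vs. independent polling errors.
import random

STATE_MARGINS = [0.02, 0.03, 0.01, 0.04, 0.02]   # made-up poll leads for the favorite
TRIALS = 100_000

def upset_probability(shared_error_sd: float, state_error_sd: float) -> float:
    """P(the underdog carries every state) under a shared + per-state error model."""
    upsets = 0
    for _ in range(TRIALS):
        shared = random.gauss(0, shared_error_sd)          # correlated component
        if all(margin + shared + random.gauss(0, state_error_sd) < 0
               for margin in STATE_MARGINS):
            upsets += 1
    return upsets / TRIALS

print("independent errors only:", upset_probability(0.0, 0.03))
print("with correlated error:  ", upset_probability(0.03, 0.03))
```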

Open thread, Jul. 17 - Jul. 23, 2017

Not exactly. MIRI and others have research on logical uncertainty, which I would expect to eventually reduce the second premise to induction. I don't think we have a clear plan yet showing how we'll reach that level of practicality.

Justifying a not-super-exponentially-small prior probability that induction works feels like a category error. I guess we might get a kind of justification from better understanding Tegmark's Mathematical Macrocosm hypothesis - or, more likely, from understanding why it fails. Such an argument will probably lack the intuitive force of 'Clearly the prior shouldn't be that low.'

What Are The Chances of Actually Achieving FAI?

I would only expect the latter if we started with a human-like mind. A psychopath might care enough about humans to torture you; an uFAI not built to mimic us would just kill you, then use you for fuel and building material.

(Attempting to produce FAI should theoretically increase the probability by trying to make an AI care about humans. But this need not be a significant increase, and in fact MIRI seems well aware of the problem and keen to sniff out errors of this kind. In theory, an uFAI could decide to keep a few humans around for some reason - but not you. The chance of it wanting you in particular seems effectively nil.)

Double Crux — A Strategy for Resolving Disagreement

Yes, but as it happens, that kind of difference is unnecessary in the abstract. Besides the point I mentioned earlier, you could have a logical set of assumptions for "self-hating arithmetic" that proves arithmetic contradicts itself.

Completely unnecessary details here.
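
For concreteness, the standard construction behind "self-hating arithmetic" (my gloss, not something spelled out above): by Gödel's second incompleteness theorem, a consistent PA cannot prove its own consistency, so adding the denial of that consistency gives a consistent theory that insists arithmetic is contradictory.

```latex
% Standard construction (my gloss, not from the comment): a consistent theory
% that proves its own inconsistency. Assume PA is consistent; by Goedel's second
% incompleteness theorem PA does not prove Con(PA), so T below is consistent too.
\[
  T \;=\; \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})
\]
% T proves \neg Con(PA) by fiat, and it can verify that every PA-proof is a T-proof, so:
\[
  T \vdash \neg\mathrm{Con}(\mathrm{PA})
  \qquad\text{and}\qquad
  T \vdash \neg\mathrm{Con}(T).
\]
% So T is a "self-hating arithmetic": consistent (if PA is), yet it asserts that
% arithmetic, and T itself, is contradictory.
```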
