quila

suffering-focused altruist/longtermist.

PMs open (especially for fellow non-extroverts)


my ea forum account

my pgp public key, mainly to prevent future LLM impersonation; you can always ask me to sign a dated message. [the private key is currently stored in plaintext within an encrypted drive, so it is vulnerable to being read by local programs]:

-----BEGIN PGP PUBLIC KEY BLOCK-----

mDMEZiAcUhYJKwYBBAHaRw8BAQdADrjnsrbZiLKjArOg/K2Ev2uCE8pDiROWyTTO
mQv00sa0BXF1aWxhiJMEExYKADsWIQTuEKr6zx3RBsD/QW3DBzXQe0TUaQUCZiAc
UgIbAwULCQgHAgIiAgYVCgkICwIEFgIDAQIeBwIXgAAKCRDDBzXQe0TUabWCAP0Z
/ULuLWf2QaljxEL67w1b6R/uhP4bdGmEffiaaBjPLQD/cH7ufTuwOHKjlZTIxa+0
kVIMJVjMunONp088sbJBaQi4OARmIBxSEgorBgEEAZdVAQUBAQdAq5exGihogy7T
WVzVeKyamC0AK0CAZtH4NYfIocfpu3ADAQgHiHgEGBYKACAWIQTuEKr6zx3RBsD/
QW3DBzXQe0TUaQUCZiAcUgIbDAAKCRDDBzXQe0TUaUmTAQCnDsk9lK9te+EXepva
6oSddOtQ/9r9mASeQd7f93EqqwD/bZKu9ioleyL4c5leSQmwfDGlfVokD8MHmw+u
OSofxw0=
=rBQl
-----END PGP PUBLIC KEY BLOCK-----
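
If you want to check a signed message against this key programmatically rather than with the gpg CLI, something like the following should work (a minimal sketch using the python-gnupg library, which wraps a local gpg installation; the file names are placeholders):

```python
# Minimal sketch: verify a clearsigned message against the public key above.
# Requires the python-gnupg package and a local gpg installation; the file
# names ("quila_pubkey.asc", "signed_message.asc") are placeholders.
import gnupg

gpg = gnupg.GPG()

# Import the public key block from this page.
with open("quila_pubkey.asc") as f:
    import_result = gpg.import_keys(f.read())
print("imported fingerprints:", import_result.fingerprints)

# Verify a clearsigned message, e.g. the dated message you asked for.
with open("signed_message.asc", "rb") as f:
    verified = gpg.verify_file(f)

# A valid signature must also come from the expected key, not just any key,
# so check the fingerprint as well as the validity flag.
print("valid:", verified.valid)
print("signing key fingerprint:", verified.fingerprint)
```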

I have signed no contracts or agreements whose existence I cannot mention.


Comments

quila10

And I can see signs of such impending war now.

Do you think we should be moving to New Zealand (ChatGPT's suggestion) or something in case of global nuclear war?

quila52

It's actually not clear what EY means by "anthropic immortality".

Same. I'm guessing that by "It actually doesn't depend on quantum mechanics either, a large classical universe gives you the same result", EY means that QI is just one way Anthropic Immortality could be true, but "Anthropic immortality is a whole different dubious kettle of worms" seems to contradict this reading.

(Maybe it's 'dubious' because it does not have the intrinsic 'continuity' of QI? e.g. you could 'anthropically survive' in a completely different part of the universe with a copy of you; but I doubt that would seem dubious to EY?)

Future anthropic shadow. I am more likely to be in the world in which alignment is easy

I think anthropic shadow lets you say, conditional on survival, "(example) a nuclear war or other collapse will have happened"[1], but not that alignment was easy, because alignment being easy would be a logical fact, not a historical contingency; if it's true, it isn't true for anthropic reasons. (Although stumbling upon paradigms in which it is easy would be a historical contingency.)

  1. ^

    "while civilization was recovering, some mathematicians kept working on alignment theory that did not need computers so that by the time humans could create AIs again, they had alignment solutions to present"

quila32

Eliezer wrote somewhat cryptic tweets referencing it recently

The first link is from 2019. (Also those seem like standard EY tweets)

Edit: although there is now also this recent one, from a few hours after your post: https://x.com/ESYudkowsky/status/1880714995618767237

quila10

(I think it's good for posts with confusion exercises to exist)

I disagree with the post's opening claim (which is orthogonal to the rest of it, I think):

Confusion is a felt sense; a bodily sensation you can pay attention to and notice

I think this comes from an (understandable) typical mind fallacy (longer-form link).

I know there is a post somewhere where the author describes starting to primarily experience things like confusion as bodily sensations, but that doesn't mean that is the fundamental nature of confusion:

  • It's surely not true for minds in general, since confusion is fundamentally cognitive, not fundamentally bodily. In this sense the statement is wrong, though an adapted version which says "confusion happens at the same time as a specific bodily sensation" could be true for particular beings. The next point addresses that claim, as applied to all humans.
  • It's not true for all humans; for me, confusion is not a bodily sensation. It could be that if I did a certain series of meditations trying to make myself feel confusion as a bodily sensation, that would start happening for me; but that's not how I am now, and I'm content experiencing confusion the way I do, so I won't try to change it. (For me, confusion is a mental dynamic, and also noticeable. It also comes with some change in mental qualia, but I wouldn't describe it as bodily.)
quila10

if we're currently in a simulation, physics and even logic could be entirely different from what they appear to be.

I have another obscure shortform about this! Physical vs metaphysical contingency, about what it would mean for metaphysics (e.g. logic) itself to have been different. (In the case of simulations, it could only be different in a way still capable of containing our metaphysics as a special case, like how in math a more expressive formal system can contain a less expressive one, but not the reverse)

I agree a metaphysically different base world is possible, but I'm not sure how to reason about it. (I think apparent metaphysical paradoxes are some evidence for it, though we might also just be temporarily confused about metaphysics)

Just physics being different is easier to imagine. For example, it could be that the base world is small, and it contains exactly one alien civilization running a simulation in which we appear to observe a large world. But if the base world is small, arguments for simulations which rely on the vastness of the world, like Bostrom's, would no longer hold. And at that point there doesn't seem to be much reason to expect it, at least for any individual small world.[1] Though it could also be that the base world is large and physically different, and we're in a simulation where we appear to observe a different large world.

Ultimately, while it could be true that there are 0 unsimulated copies of us, still we can have the best impact in the possibilities where there is at least one.[2]

By the way, I'm also somewhat skeptical of a couple of your assumptions in Mutual Anthropic Capture. Still, I think it's a good idea overall, and some subtle modifications to the idea would probably make it logically sound. I won't bother you about those small issues here, though

I'm interested in what they are, I wouldn't be bothered (if you meant that literally). If you want you can reply about it here or on the original thread.

  1. ^

    If we're instead reasoning over the space of all possible mathematical worlds which are 'small' compared to what our observations look like they suggest, then we'd be reasoning about very many individual small worlds (which basically reintroduces the 'there are very many contexts which could choose to simulate us' premise). Some of those small math-worlds will probably run simulations (for example, if some have beings which want to manipulate "the most probable environment" of an AI in a larger mathematical world, to influence that larger math-world)

    In other words: "Conditional on {some singular 'real world' that is somehow special compared to merely mathematical worlds} being small, it probably doesn't contain simulations. But there are certainly many math-worlds that do, because the space of math-worlds is so vast (to the point that some small math-worlds would randomly contain a simulation as part of their starting condition)"

  2. ^

    And there's probably not anything we can do to change our situation in the possibilities where we don't exist in base reality. Although I do think 'look for bugs' is something an aligned ASI would want to try, especially considering that our physics apparently has some simple governing laws, i.e. may have a pretty short program length[3], and it's plausible for a process we'd describe with a short program length to naturally / randomly occur as a process of physical interaction in a much larger base world -- that is to say, there are plausible origins of a simulation which don't involve a superintelligent programmer ensuring there are no edge cases.

  3. ^

    (but no longer short when considering its very complex starting state? I guess it could turn out that that itself is predicted by some simple rule)

quila20

Possible to be run on a computer in the actual physical world

quila1510

This isn't an argument against the idea that we have many instantiations[1] in simulations, which I believe we do. My view is that, still, the most impact to be had is (modulo this) through the copies of me which are not in a simulation (where I can improve a very long future by reducing s-/x-risks), so those are the contexts which my decisions should be about affecting from within.

IIUC, this might be a common belief, but I'm not sure. I know at least a few other x-risk focused people believe this.

It's also more relevant for the question of "what choice helps the most beings"; if you feel existential dread over having many simulated instantiations, this may not help with it.

  1. ^

    If there are many copies of one, the question is not "which one am I really?"; you basically become an abstract function choosing how to act for all of them at once (sketched below).
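
A toy sketch of that framing in code (all names here are hypothetical; it is only meant to show the shape of the idea, not any real decision procedure):

```python
# Toy sketch (hypothetical names): a single decision function is evaluated
# identically in every context that instantiates it, so choosing a policy
# fixes the action of every copy at once.

def decide(observations: str) -> str:
    # The function cannot condition on "which copy" is running it,
    # only on the observations all copies share.
    return "act to reduce s-/x-risks"

# Simulated and unsimulated instantiations all compute the same output.
contexts = ["simulation A", "simulation B", "unsimulated world"]
actions = {context: decide("identical observations") for context in contexts}
print(actions)
```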

Answer by quila105

"X is possible in principle" means X is in the space of possible mathematical things (as an independent claim to whether humans can find it).

quila30

I noticed feeling more awake than I would expect the next day, and I noticed cold shivers about 30 minutes after taking it which fits with the purported mechanism

what was the hypothesized mechanism?

quila0-1

this comment reads as if you skipped the chain of logic before "implies", and also missed "(it's not clear to me if you actually believe that or if this was a writing mistake)".

You misunderstand that paragraph

this reminds me of the failure mode of disputing definitions, in that it adds no new information but does attempt to re-frame. i would find it pointless to debate whether "a reader misunderstood or an author miswrote".[1]

it's also incorrect: i saw and listed two possible realities, of which {the author doesn't actually believe that} was one.

(although i actually expect your friend's cognitive dynamics are more complex than those of one who believes what you say with psychological unity, in which case i would not expect all of: the title "you should have 9 kids", the insistence on the term "purpose", and that quoted paragraph in which, according to you, sentences 3 and 4 are not at all related to sentence 1 (which seems like a motte and bailey, and would break a writing convention (of paragraphs containing related sentences) which the text otherwise follows))

As he said at the beginning of the paragraph: "I do not believe there is a correct number of children to have". From that statement, it's implied that he doesn't think it's "incorrect" to have no children at all

i considered that before writing. "over 0" and "unboundedly" are not numbers; also see my first comment.

  1. ^

    i think these mistakes are basic enough that i'd suggest trying to become better at avoiding basic mistakes. i do not mean the following to cause sadness: i also suggest considering that it may be a net-negative use of LW users' time/energy to have them engage with your current content.
