This is a special post for quick takes by avturchin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Igor Kiriluk (1974-2022)

Igor was an organiser of the first meet-up about effective altruism in Moscow, around 2013. Today his body was found at his home. The day before, he had complained about depression and bad health. His cryopreservation is now being organised.

He was also one of the four organisers of the Russian Transhumanist Movement, along with Danila Medvedev, Valeria Pride, and Igor Artuhov, around 2003.

His main topic of interest was paradise-engineering. He translated works of David Pearce.

He may have looked detached from reality, but he was the first to react to new ideas and had a very large network of friends everywhere: among visionaries, scientists, and officials. Being a great networker, he helped many people find each other, especially in the field of life extension.

His FB page:

New B.1.640.2 variant in France. More deadly than Delta. 952 cases, of which 315 are on ventilators.


The relevant Metaculus question is at 27% on human-to-human transmission in 2023; it has this event mentioned in the comments (though I think without the "found 12 more people infected" part) and didn't move much.
It is exactly the fact that 12 more people are infected that made me post. Single infections are not surprising. However, there is an analogue of LessWrong for pandemic flu, called Flutrackers, and they found more details: there are many dead birds in the area, and all 15 birds in her home have died. This could mean that all the people were infected from birds, not from each other. Also, some think that "12" is the number of contacts, not of infected people, and therefore the symptoms in 4 people may not be from avian flu. Anyway, the health ministry will provide an update tomorrow.

Several types of existential risks can be called "qualia catastrophes":

- Qualia disappear for everyone = all become p-zombies
- Pain qualia are ubiquitous = s-risks
- Addictive qualia dominate = hedonium, global wireheading
- Qualia thin out = fading qualia, mind automatisation
- Qualia are unstable = dancing qualia, identity is unstable
- Qualia shift = emergence of non-human qualia (humans disappear)
- Qualia simplification = disappearance of subtle or valuable qualia (valuable things disappear)
- Transcendental and objectless qualia with hypnotic power enslave humans (God as qualia; the Zahir)
- Attention depletion (ADHD)

Age and dates of death on the cruise ship Diamond Princess:
- 4 people – in their 80s
- 1 person – 78
- 1 person – in their 70s
- 1 person – no data

Dates of deaths: Feb 20, 20, 23, 25, 28, 28, and March 1 – one death every ~1.3 days. Looks like an acceleration at the end of the period.
Background death probability: for an 80-year-old person, life expectancy is around 8 years, or around 100 months. This means that among 1000 people aged late 70s to 80s, there will be around 10 deaths per month just because of aging and stress. Based on the age distribution on cruise ships, there were many old people. If half of the infected a... (read more)
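A quick sanity check of the background-mortality arithmetic (the 8-year life expectancy and the 1000-person cohort are from the post; the exact monthly conversion is my own rounding):

```python
# Crude background-mortality estimate for an elderly cruise-ship cohort.
remaining_life_months = 8 * 12   # ~96 months; the post rounds this to ~100
cohort = 1000                    # hypothetical group of passengers in their late 70s-80s

# Crude hazard: roughly 1 death per person per ~100 months of remaining life.
expected_deaths_per_month = cohort / remaining_life_months
print(round(expected_deaths_per_month, 1))  # 10.4
```

So about 10 deaths per month would be expected in such a cohort from aging alone, which is the baseline against which the observed deaths should be compared.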

Kardashev – the creator of the Kardashev scale of civilizations – has died at 87. Here is his last video, which I recorded in May 2019. He spoke about the possibility of SETI via wormholes.

Ben Pace:
Here's his wikipedia page.

EURISKO resurfaced 

"Doug Lenat's source code for AM and EURISKO (+Traveller?) found in public archives

In the 1970s to early 80s, these two AI programs by Douglas Lenat pulled off quite the feat of autonomously making interesting discoveries in conceptual spaces. AM rediscovered mathematical concepts like prime numbers from only first principles of set theory. EURISKO expanded AM's generality beyond fixed mathematical heuristics, made leaps in the new field of VLSI design, and famously was used to create wild strategies for the Traveller space combat R... (read more)

Argentina – outbreak of bilateral pneumonia: approximately 10 cases, 3 deaths, 20 under observation. Tucumán, September 1, 2022.

Pathways to AI infrastructure
Obviously, the current infrastructure is not automated enough to run without humans. All ideas about AI risk eventually boil down to a few suggestions on how an AI would create its own infrastructure:

No-humans scenarios:
- create nanobots by mailing DNA samples to some humans.
- use biological tricks, like remote-controlled animals and programmed bacteria.
- build large manufacturing robots, maybe even humanoid ones, to work in human-adapted workplaces. Build robots which build robots.

Humans-remain scenarios:
- enslave some humans, ... (read more)

Your no-humans scenarios are not mutually exclusive; if mailing DNA samples doesn't work in practice for whatever reason, the manufacturing facilities that would be used to make large manufacturing robots would suffice. You probably shouldn't conflate the two scenarios.

We may be one prompt away from AGI. A hypothesis: a carefully designed prompt could turn a foundation model into a full-blown AGI, but we just don't know which prompt.

Example: step-by-step reasoning in the prompt increases foundation models' performance.

But a real AGI-prompt needs to have memory, so it has to repeat itself while adding some new information. By running serially, the model may accumulate knowledge inside the prompt.

Most of my thinking looks this way from the inside: I have a prompt – an article headline and some other inputs – and generate the most plausible continuations.
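A minimal sketch of the serial-prompting idea above. The `model` function here is a placeholder stand-in for a real foundation-model call (not any actual API); it only exists to show how state can accumulate inside a prompt that is fed back to the model:

```python
# Toy sketch: the prompt itself serves as the model's memory across serial runs.

def model(prompt: str) -> str:
    # Placeholder "model": just counts how many facts the prompt already holds.
    n = prompt.count("FACT:")
    return f"NOTE: I have seen {n} fact(s) so far."

def run_serially(facts):
    prompt = ""
    outputs = []
    for fact in facts:
        prompt += f"\nFACT: {fact}"   # new information enters the prompt
        reply = model(prompt)
        prompt += f"\n{reply}"        # the model's own output is retained too
        outputs.append(reply)
    return prompt, outputs

final_prompt, outputs = run_serially(["a", "b", "c"])
print(outputs[-1])  # NOTE: I have seen 3 fact(s) so far.
```

The point is only the loop structure: each pass appends both new inputs and the model's prior outputs, so knowledge accumulates in the prompt itself.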

Observable consequences of simulation:

1. Larger chances of miracles or hacks

2. Large chances of simulation’s turn off or of a global catastrophe

3. I am more likely to play a special role or to live in interesting times

4. A possibility of afterlife.

Scott Adams mentioned a few times that a simulation might use caching and reuse patterns for efficiency reasons, and you could observe an unusually high frequency of the same story. I don't buy that, but it is at least a variant of type 1.
Yes, people often mention the Baader–Meinhof phenomenon as evidence that we live in the "matrix". But it could be explained naturally.
Anthropics implies that I should be special, as I should be a "qualified observer", capable of thinking about anthropics. The simulation argument also requires that I be special, as I should find myself living in interesting times. These specialities are similar, but not identical. The simulation's speciality requires that I be a "king" in some sense, while the anthropic speciality is satisfied if I merely understand anthropics. I am not a very special person (as of now); therefore the anthropic speciality seems more likely than the simulation speciality.
Who are "we"? :) By saying "king" I just illustrated the difference between interesting characters, who are more likely to be simulated in a game or in a research simulation, and the "qualified observer" selected by anthropics. But these two sets clearly intersect, especially if we live in a game about "saving the world".

Catching Treacherous Turn: A Model of the Multilevel AI Boxing

  • Multilevel defense in AI boxing could have a significant probability of success if the AI is used a limited number of times and with a limited level of intelligence.
  • AI boxing could consist of 4 main levels of defense, in the same way as a nuclear plant: passive safety by design, active monitoring of the chain reaction, escape barriers, and remote mitigation measures.
  • The main instruments of AI boxing are catching the moment of the “treacherous turn”, limiting the AI’s capabilities, and preventi
... (read more)

Two types of Occam's razor:

1) The simplest explanation is the most probable, so the distribution of probabilities for hypotheses looks like 0.75, 0.12, 0.04, ... if the hypotheses are ordered from simplest to most complex.

2) The simplest explanation is only slightly more probable than the others, so the distribution of probabilities for hypotheses looks like 0.09, 0.07, 0.06, 0.05, ...

The interesting feature of the second type is that the simplest explanation is more likely to be wrong than right (its probability is less than 0.5).

Different types of Occam's razor are applicable in d... (read more)
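The two distributions can be put side by side (using the post's own illustrative numbers, with the tails omitted as in the post):

```python
# Type 1: the simplest hypothesis dominates all rivals combined.
type1 = [0.75, 0.12, 0.04]        # ... tail omitted
# Type 2: the simplest hypothesis is only slightly favoured.
type2 = [0.09, 0.07, 0.06, 0.05]  # ... tail omitted

print(type1[0] > 0.5)  # True: simplest beats all rivals combined
print(type2[0] < 0.5)  # True: simplest is the best single guess, yet probably wrong
```

This makes the distinction concrete: under type 2 the rational best guess is the simplest hypothesis, even though it is more likely false than true.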

Matt Goldenberg:
I'm struggling to think of a situation where, on priors (with no other information), I expect the simplest explanation to be more likely than all other explanations combined (including the simplest explanation with a tiny nuance). Can you give an example of #1?
EY suggested (if I remember correctly) that the MWI interpretation of quantum mechanics is true because it is the simplest explanation. There are around a hundred other, more complex interpretations of QM. Thus, in his interpretation, P(MWI) is more than the sum of the probabilities of all other interpretations.
MWI is more than one theory, because everything is more than one thing. There is an approach based on coherent superpositions, and a version based on decoherence. These are incompatible opposites. How simple a version of MWI is depends on how it deals with all the issues, including the basis problem.
What does "all the other explanations combined" mean as ontology? If they make statements about reality that are mutually incompatible, then they can't all be true.
It means that p(one of them is true) is more than p(simplest explanation is true)
That doesn't answer my question as stated... I asked about ontology, you answered about probability. If a list of theories is exhaustive, which is a big "if", then one of them is true. And in the continuing absence of a really good explanation of Occam's Razor, it doesn't have to be the simplest. But that doesn't address the issue of summing theories, as opposed to summing probabilities.
Matt Goldenberg:
But "all the other explanations combined" was talking about the probabilities. We're not combining the explanations, that wouldn't make any sense. The only ontology that is required is Bayesianism, where explanations can have probabilities of being correct.
Bayesianism isn't an ontology.
Matt Goldenberg:
Ok, tabooing the word ontology here.  All that's needed is an understanding of Bayesianism to answer the question of how you combine the chance of all other explanations.

Some random ideas on how to make GPT-based AI safer.

1) Scaffolding: use rule-based AI to check every solution provided by the GPT part. It could work for computation, self-driving, or robotics, but not against elaborate adversarial plots.

2) Many instances. Run GPT several times and choose a random or the best answer – we are already doing this. Run several instances of GPT with different parameters or different training bases and compare answers. Run different prompts. The median output seems to be a Schelling point around truth, and outlier answers are more likely to be wr... (read more)
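A minimal sketch of the "many instances" idea: sample several answers and take the most common one as the Schelling point. Here `sample_model` is a hypothetical stand-in for repeated GPT calls at nonzero temperature; the answer set and weights are invented for illustration only:

```python
import random
from collections import Counter

def sample_model(prompt, rng):
    # Placeholder sampler: usually "correct", sometimes off-target.
    return rng.choices(["Paris", "Lyon", "Marseille"], weights=[0.7, 0.2, 0.1])[0]

def majority(answers):
    """Return the most common answer - the majority 'Schelling point'."""
    return Counter(answers).most_common(1)[0][0]

rng = random.Random(0)
answers = [sample_model("Capital of France?", rng) for _ in range(15)]
print(majority(answers))  # with these weights, almost always "Paris"
```

The same aggregation works whether the samples come from one model at different temperatures or from differently trained instances; only the sampling step changes.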

Reflectivity in alignment. 

Human values and AI alignment do not exist independently. There are several situations in which they affect each other, creating complex reflection patterns.


  • Humans want to align AI – so "AI alignment" is itself human value.
  • Human values are convergent goals (like survival and reproduction) - and thus are similar to AI's convergent goals.
  • If humans accept the idea to make paperclips (or whatever), alignment will be reached.
  • It looks like many humans want to create non-aligned AI. Thus non-aligned AI is aligned.
  • Humans may not
... (read more)

Can we utilize meaningful embedding dimensions as an alignment tool?

In toy models, embedding dimensions are meaningful and can represent features such as height, home, or feline. However, in large-scale real-world models, many (like 4096) dimensions are generated automatically, and their meanings remain unknown, hindering interpretability.

I propose the creation of a standardized set of embedding dimensions that: a) correspond to a known list of features, and b) incorporate critical dimensions such as deception, risk, alignment, and undesirable content, i... (read more)
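A toy version of this proposal: reserve the first few embedding dimensions for a standardized, named feature list, so that safety-relevant features can be read off directly. All names and values below are my own invented examples, not from any real model:

```python
# Hypothetical registry mapping standardized feature names to reserved dimensions.
FEATURE_INDEX = {"deception": 0, "risk": 1, "alignment": 2}

def read_feature(embedding, name):
    """Read a named, standardized feature straight from the embedding vector."""
    return embedding[FEATURE_INDEX[name]]

vec = [0.9, 0.1, 0.3, 0.02, 0.7]  # first 3 dims standardized, rest unconstrained
print(read_feature(vec, "deception"))  # 0.9 -> high, so flag this output
```

The open question, of course, is how to train a large model so that these reserved dimensions actually carry the intended meanings; the snippet only illustrates the interface such a standard would give interpretability tools.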

I have had a tetrachromatic experience with one mind machine which flickers different colors into different eyes. It seems to overflow some stacks in the brain and creates new colors.

List of cognitive biases affecting judgment of global risks

Grabby aliens without red dwarfs

Robin Hanson's grabby aliens theory predicts that the nearest grabby aliens are 1 billion light years away, but this strongly depends on the habitability of red dwarfs.

In the post, the author combines anthropics and the Fermi paradox – that is, the idea that we live in the universe with the highest concentration of aliens compatible with their invisibility – and gets an estimate of around 100 "potentially visible" civilizations per observable universe, which to a first approximation gives 1 billion ly distance b... (read more)

N-back hack. (Infohazard!)
There is a way to increase one's performance in N-back, but it is almost cheating, and N-back will stop being a measure of one's short-term memory.
The idea is to imagine writing all the numbers on a chalkboard in a row, as they come in.
Like 3, 7, 19, 23.
After that, you just read the needed number from the string, located N positions back.
You don't need very strong visual memory or imagination to get a boost in your N-back results.
I tried it a couple of times and got bored with N-back.
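The "chalkboard" trick is essentially keeping an explicit buffer instead of holding items in working memory. A minimal sketch (my own illustration, using the numbers from the post):

```python
from collections import deque

def nback_answer(stream, n):
    """Yield the item n positions back for each new item (None until the buffer fills)."""
    board = deque(maxlen=n + 1)   # the mental "chalkboard": keeps the last n+1 items
    for x in stream:
        board.append(x)
        yield board[0] if len(board) == n + 1 else None

print(list(nback_answer([3, 7, 19, 23], 2)))  # [None, None, 3, 7]
```

Reading `board[0]` is exactly "read the number N positions back from the string", which is why the trick bypasses the memory load the test is meant to measure.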

Wow.  It's rare that I'm surprised by the variance in internal mental imagery among people, but this one caught me.  I'd assumed that most people who have this style of imagination/memory were ALREADY doing this.  I don't know how to remember things without a (mental) visualization.
Actually, my mental imagery is of low quality, but visual remembering works better than audio for me in N-back.

AI safety as Grey Goo in disguise.
First, a rather obvious observation: while the Terminator movie pretends to depict AI risk, it actually plays on fears of nuclear war – remember the explosion which destroys the children's playground?

EY came to the realisation of AI risk after a period when he had worried more about grey goo (circa 1999) – the unstoppable replication of nanorobots which would eat all biological matter – as was revealed in a recent post about possible failures of EY's predictions. While his focus moved from grey goo to AI, the... (read more)

It's worth exploring exactly which resources are under competition.  Humans have killed orders of magnitude more ants than Neanderthals, but the overlap in resources is much less complete for ants, so they've survived.   Grey-goo-like scenarios are scary because resource contention is 100% - there is nothing humans want/need that the goo doesn't want/need, in ways that are exclusive to human existence.  We just don't know how much resource-use overlap there will be between AI and humans (or some subset of humans), and fast-takeoff is a little more worrisome because there's far less opportunity to find areas of compromise (where the AI values human cooperation enough to leave some resources to us).

Glitch in the Matrix: Urban Legend or Evidence of the Simulation? The article is here:
In the last decade, an urban legend about "glitches in the matrix" has become popular. As is typical for urban legends, there is no evidence for most such stories, and the phenomenon could be explained as resulting from hoaxes, creepypasta, coincidence, and different forms of cognitive bias. In addition, the folk understanding of probability does not bear much resemblance to actual probability distributions, resulting in the illusion o... (read more)

"Back to the Future: Curing Past Suffering and S-Risks via Indexical Uncertainty"

I uploaded the draft of my article about curing past sufferings.


The long unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for idiosyncratic reasons (s-risks), are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one pure theoret... (read more)

I don't see how this can be possible. One of the few things that I'm certain is impossible is eliminating past experiences. I've just finished eating strawberries; I don't see any possible way to eliminate the experience that I just had. You can delete my memory of it, or you can travel to the past and steal the strawberries from me, but then you'd just create an alternate timeline (if time travel to the past is possible, which I doubt). In neither case would you have eliminated my experience; at most you can make me forget it. The proof that this is impossible is that people have suffered horribly many times before, and have survived to confirm that no one saved them.
We can dilute past experience and break chains of experience, so each painful moment becomes just a small speck in paradise. The argument about people who survived and remember past sufferings does not work here, as theirs is only one of infinitely many chains of experience (in this model), which for any person has a very small subjective probability. In the same sense, everyone who became a billionaire has memories that he was always good at business. But if we take a random person from the past, his most probable future is to be poor, not a billionaire. In the model discussed in the article, I suggest a way to change the expected future of any past person – by creating many simulations where her life improves starting from each painful moment of her real life.
Or are you telling me that person x remembers a very bad chain of experience, but might have indeed been saved by the Friendly AI, and the memory is now false? That's interesting, but still impossible imo.
This is not what I meant. Imagine a situation where a person awaits execution in a remote fortress. If we use the self-sampling assumption (SSA), we could save him by creating 1000 of his exact copies in a safe location. SSA tells us that one should reason as if randomly selected from all of one's copies. 1000 copies are in the safe location and 1 is in the fortress, so the person has 1000-to-1 odds of being out of the fortress, according to SSA. It means that he was saved from the fortress. This situation is called indexical uncertainty. Now we apply this method of saving to the past observer-moments when people were suffering.
I see. Like I explain in the other comment that I just wrote, I don't believe SSA works. You would just create 1000 new minds who would feel themselves saved and would kiss your feet (1000 clones), but the original person would still be executed with 100% chance.
It comes at a cost: you have to assume that SSA and the informational theory of identity are wrong, and therefore some other weird things could turn out to be true.
Indexical uncertainty implies that consciousness can travel through space and time in between equal substrates (if such thing even exists considering chaos theory). I think that's a lot weirder than to simply assume that consciousness is rooted in the brain, in a single brain, and that at best a clone will feel exactly the same way you do, will even think he is you, but there's no way you will be seeing through his eyes. So yes, memory may not be everything. An amnesiac can still maintain a continuous personal identity, as long as he's not an extreme case. But I quite like your papers btw! Lots of interesting stuff.
Thanks! Consciousness does not need to travel, as it is already there. Imagine two bottles of water. If one bottle is destroyed, the water remains in the other; it doesn't need to travel. Someone suggested calling this the "unification theory of identity".
"The argument about people who survived and remember past sufferings is not working here as it is only one of infinitely many chains of experiences (in this model) which for any person has very small subjective probability." Then I think you would only be creating an enormous number of new minds. Among all those minds, indeed, very few would have gone through a very bad chain of experience. But some still would. In fact, you haven't reduced that number (the number of minds who have gone through a very bad chain of experience). You have only reduced their percentage among all existing minds, by creating a huge number of new minds without a very bad chain of experience. But that doesn't in any way negate the existence of the minds who have gone through a very bad chain of experience. I mean, you can't undo chains of past experience; that's just impossible. You can't undo the past. You can go back in time and create new timelines, but that is just creating new minds. Nothing will ever undo the fact that person x experienced chain of experience y.
It depends on the nature of our assumptions about the role of continuity in human identity. If we assume that continuity is based only on remembering the past moment, then we can start new chains from any moment we choose. An alternative view is that continuity of identity is based on causal connection or qualia connection. This view comes with ontological costs, close to the idea of the existence of an immaterial soul. Such a soul could be "saved" from the past using some technological tricks, and we again have some instruments to cure past sufferings.
If I instantly cloned you right now, your clone would experience the continuity of your identity, but so would you. You can double the continuity (create new minds, which become independent from each other after doubling), but not translocate it. If I clone myself and then kill myself, I would have created a new person with a copy of my identity, but the original copy, the original consciousness, still ceases to exist. Likewise, if you create 1000 paradises for each second of agony, you will create 1000 new minds which will feel themselves "saved", but you won't save the original copy. The original copy is still in hell. Our best option is to do everything possible not to bring uncontrollable new technologies into existence until they are provably safe, and meanwhile we can eliminate all future suffering by eliminating all conscious beings' ability to suffer, á la David Pearce (abolitionist project).
An extremely large number, if we don't use some simplification methods. I discuss these methods in the article; with them, the task becomes computable. Without such tricks, it would be like 100 life histories for every second of suffering. But as we care only about preventing very strong suffering, for normal people living normal lives there are not that many such seconds. For example, if a person is dying in a fire, it is like 10 minutes of agony – that is, 600 seconds and 60,000 life histories which need to be simulated. It is a doable task for a future superintelligent AI.
Why? If there are 60,000 futures where I escaped a bad outcome, I can bet on it at 1 to 60,000.
I don't get how you arrive at 10^51. If we want to save 10 billion people from the past and for each we need to run 10^5 simulations, it is only 10^15, which one Dyson sphere will do. However, there is a way to acausally distribute computations between many superintelligences in different universes, and in that case we can simulate all possible observers.
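The arithmetic from this exchange, spelled out (the 10 billion people and 10^5 simulations per person are the comment's own figures):

```python
people = 10**10          # ~10 billion people to "save" from the past
sims_per_person = 10**5  # simulations needed per person, per the comment above
total = people * sims_per_person
print(total == 10**15)   # True - far below the 10^51 figure being questioned
```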
"The fact that you're living a bearable life right now suggests that this is already the state." Interesting remark... Could you elaborate?
Still don't know what you meant by that other sentence. What's being "the state", and what does a bearable life have do to with it? And what's the "e" in (100/e)%?

Quantum immortality of the second type. The classical theory of QI is based on the idea that all possible futures of a given observer exist because of MWI, and thus there will always be a future where he does not die in the next moment, even in the most dangerous situations (e.g. Russian roulette).

QI of the second type makes similar claims, but about the past. In MWI, the same observer could appear via different past histories.

The main claim of QI-2: for any given observer, there is a past history in which the current dangerous situation is not really dangerous. For... (read more)

Hello again Alexey, I have been thinking about QI/BWI and just read your paper on it. Immediately, it occurred to me that it could be disproven through general anesthesia, or temporary death (the heart stops and you become unconscious, which can last for hours). You refute this with: "Some suggested counterargument to QI of “impossibility of sleep”: QI-style logic implies that it is impossible to fail asleep, as in the moment of becoming asleep there will be timelines where I am still awake. However, for most humans, night dreaming starts immediately at the moment of becoming asleep, so the observations continue, but just don’t form memories. But in case of deep narcosis, the argument may be still valid with terrifying perspective of anesthesia awareness; but it also possible if the observer-states will coincide at the beginning the end of the operation, the observer will “jump” over it." (Mind you that some stages of sleep are dreamless, but let's forget about sleep and use general anesthesia instead, since it's clearer.) I still don't understand your refutation completely. If QI/BWI were true, shouldn't general anesthesia be impossible, since the observer would always branch into conscious states right after being given the anesthesia? Or do you mean to say that most observers will "prefer" to branch into the branch with the "highest measure of consciousness", and that's why anesthesia will "work" for most observers – that is, most observers will branch into the end of the operation, where consciousness is stronger, instead of branching into the second right after anesthesia, where consciousness is weaker? Another objection I have against QI/BWI is that it breaks the laws of physics and biology. Even if MWI is true, the body can only sustain a limited amount of damage before dying. It's biologically impossible to go on decaying and decaying for eternity. Eventually, you die. A bit like in Zeno's Paradox: there's always a halfway point between
Actually, I see now that I didn't completely refute the "impossibility of sleep", as it is unobservable in past events or in the experience of other people. It can only happen to me in the future. Therefore, the fact that I have slept normally in the past doesn't tell much about the validity of QI. But my evening today may be different. QI says that my next observer-moment will most likely be the one with the highest measure among those which remember my current OM. (But it is less clear whether it needs to be connected via continuity of consciousness, or whether memory continuity is enough.) Roughly: OM(t+1) = maxmeasure(OM | OM remembers OM(t)). During narcosis, the last few OMs are typically erased from memory, so the situation becomes complicated. But we have dead-end observer-moments rather often in normal life. Anesthesia awareness is a possible outcome here, but not that bad, as it will be partial, so no real pain and no memories of it will form. Personally, I have some rudimentary consciousness all night, like bleak dreams, and forget almost all of them except the last few minutes. -- Speaking about survival in rare cases, there is always a chance that you are in a simulation, and it is increasing as the real "you"s are dying out. Some simulations may simulate all types of miracles. In other words, if you are falling from a kilometer-high cliff, an alien spaceship can pick you up.
"Actually, I see now that I didn't completely refute the "impossibility of sleep", as it is unobservable in past events or in the experience of other people. It can only happen to me in the future. Therefore, the fact that I have slept normally in the past doesn't tell much about the validity of QI. But my evening today may be different." Agree. On anesthesia: so, from what I understand, it becomes possible for the observer to "jump over", because the moment right after he awakes from anesthesia has probably much more measure of consciousness than any moment right after the anesthesia takes effect – is that it? Why would anesthesia awareness be partial/painless? (There are actually reported cases of real anesthesia awareness where people are totally conscious and feel everything, though of course they are always correlated with ineffective anesthesia and not with quantum matters.) Would that also make us believe that quantum immortality after the first death is probably painless, since the measure of the observer is too low to feel pain (and perhaps even most other sensations)? "Speaking about survival in rare cases, there is always a chance that you are in a simulation and it is increasing as real "you" are dying out." What is increasing? Sorry, didn't quite understand the wording.
It is known that some painkillers don't kill the pain but only its negative valence. This is what I meant by "partial". Anaesthesia awareness seems to be an extreme case, when the whole duration of awareness is remembered. Probably weaker forms are possible but are not reported, as there are no memories or pain. The difference between death and the impossibility of sleep is that the biggest number of my future copies remains in the same world. Because of that, past instances of quantum suicide could be remembered, but past instances of the impossibility of sleep could not. If we look deeper, there are two personal identities and two immortalities: the immortality of the chain of observer-moments, and the immortality of my long-term memory. Quantum immortality works for both. In the impossibility of sleep, these two types of immortality diverge. But eternal insomnia seems impossible, as dreaming exists. The worst outcome is anaesthesia awareness. If a person has past cases of strong anaesthesia awareness – could it be evidence of the impossibility of sleep for him? Interesting question. --- I meant: "Speaking about survival in rare cases, there is always a chance that you are in a simulation which simulates your immortality. These chances increase after each round of a quantum suicide experiment, as real timelines die out but the number of such simulations remains the same."
"Speaking about survival in rare cases, there is always a chance that you are in a simulation which simulates your immortality. These chances are increasing after each round of a quantum suicide experiment as real timelines die out, but the number of such simulations remains the same". Doesn't make much sense. Either we are or we are not in a simulation. If we are not, then all subsequent branches that follow from this moment also won't be simulations, since they obey causality. So, imo, if we are not in a simulation, QI/BWI are impossible, because they break the laws of physics. And then there are also other objections – the limitations of consciousness and of the brain. I once saw a documentary (I'm tired of looking for it but I can't find it) where they simulated that after living for 500 years, a person's brain would have shrunk to the size of a chicken's brain. The brain has limits – memory limits, sensation limits, etc. Consciousness has limits – it can't go without sleep too long, can't store infinite memories aka live forever, etc. But even if you don't believe any of these, there are always the pure physical limits of reality. Also, I think BWI believers are wrong in thinking that "copies" are the same person. How can the supposed copy of me in another Hubble volume be me, if I am not seeing through his eyes, not feeling what he feels, etc.? At best it's a clone (and chaos theory tells me that there aren't even perfectly equal clones). So it's far-fetched to think that my consciousness is in any way connected to that person's consciousness, and might sometime "transfer" in some way. Consciousness is limited to a single physical brain; it's the result of the connectivity between neurons; it can't exist anywhere else, otherwise you would be seeing through 4 eyes and thinking 2 different thought streams!
If copy=original, I am randomly selected from all my copies, including those which are in simulations. If copy is not equal to original, some kind of soul exists. This opens new ways to immortality. If we ignore copies but accept MWI, there are still branches where a superintelligent AI will appear tomorrow and will save me from all possible bad things and upload my mind into a more durable carrier.
"If copy=original, I am randomly selected from all my copies, including those which are in simulations."

How can you be sure you are randomly selected, instead of actually experiencing being all the copies at the same time? (Which would result in instantaneous insanity and possibly a short-circuit (brain death), but would be more rational nonetheless.)

"If copy is not equal to original, some kind of soul exists. This opens new ways to immortality."

No need to call it a soul. It could simply be the electrical current between neurons. Even if you have 2 exactly equal copies, each one will have a separate electrical current. I think it's less far-fetched to assume this than anything else. (But even then, again, can you really have 2 exact copies in a complex universe? No system is isolated. The slightest change in the environment is enough to make one copy slightly different.) But even if you could have 2 exact copies... Imagine this: in a weird universe, a mother has twins. Now, normally, twins are only like 95% (just guessing) equal. But imagine these 2 twins turned out 100% equal down to the atomic level. Would they be the same person? Would one twin, after dying, somehow continue living in the head of the surviving twin? That's really far-fetched.

"If we ignore copies, but accept MWI, there are still branches where superintelligent AI will appear tomorrow and will save me from all possible bad things and upload my mind into more durable carrier."

As there will be branches where something bad happens instead. How can you be sure you will end up in the good branches? Also, it's not just about the limits of the carrier (brain), but of consciousness itself. Imagine I sped up your thoughts by 1000x for 1 second. You would go insane, even in a brain 1000x more potent. (Or if you could handle it, maybe it would no longer be "you". Can you imagine "you" thinking 1000 times as fast and still being "you"? I can't.) You can speed up, copy, do all things to matter and software. But ma
The copy problem is notoriously difficult; I wrote a 100-page draft on it. But check the other thread where I discuss the suggestion "actually experiencing being all the copies at the same time" in the comments here:
Got a link for the 100 page draft? Also, how can a person be experiencing all the copies at the same time?? That person would be seeing a million different sights at the same time, thinking a million different thoughts at the same time, etc. (At least in MWI each copy is going through different things, right?)
The draft is still unpublished. But there are two types of copies: same person, and same observer-moment (OM). Here I meant OM-copies. As they are the same, there are not a million different views; they all see the same thing. The idea is that an OM-copy is not a physical thing which has a location, but information, like a number. The number 7 doesn't have a location in the physical world. It is present in each place where 7 objects are present. But the properties of 7, like being odd, are non-local.
This also comes down to our previous discussion on your other paper: it seems impossible to undo past experiences (e.g. by breaking chains of experience or some other way). Nothing will ever change the fact that you experienced x. This seems as intuitively undeniable to me as a triangle having 3 sides. You can break past chains of information (like erasing history books) but not past chains of experience. Another indication that they might be different.
I think that could only work if you had 2 causal universes (either 2 Hubble volumes or 2 separate universes) exactly equal to each other. Only then could you have 2 exactly equal persons having the exact same chain of experiences. But we never observe 2 complex macroscopic systems that are exactly equal down to the microscopic level. The universe is too complex and chaotic for that. So, the bigger the system, the less likely this becomes. Unless our universe were infinite, which seems impossible since it was born and it will die. But maybe an infinite amount of universes, including many copies of each other? Seems impossible for the same reason (universes end up dying).

(And then, even if you have 2 (or even a billion) exactly equal persons experiencing the exact same chain of experiences in exactly equal causal worlds, we can see that the causal effect is exactly the same in all of them, so if one dies, all the others will die too.)

Now, in MWI it could never work, since we know that the "mes" in all the different branches are experiencing different things (if each branch corresponds to a different possibility, then the mes in each branch necessarily have to be experiencing different things).

Anyway, even before all of this, I don't believe in any kind of computationalism, because information by itself has no experience. The number 7 has no experience. Consciousness must be something more complex. Information seems to be an interpretation of the physical world by a conscious entity.

How to Survive the End of the Universe

Abstract. The problem of surviving the end of the observable universe may seem very remote, but there are several reasons it may be important now: a) we may soon need to define the final goals of runaway space colonization and of superintelligent AI, b) the possibility of a solution would prove the plausibility of indefinite life extension, and c) understanding the risks of the universe's end will help us escape dangers like artificial false vacuum decay. A possible solution depends on the type of t... (read more)

Sizes of superintelligence: a hidden assumption in AI safety

"Superintelligence" could mean different things, and to deconfuse this I created a short classification:

Levels of superintelligence:

1. Above human.

2. Google-sized.

3. Humanity's 100-year performance in 1 year.

4. Whole biological evolution equivalent in 1 year.

5. Jupiter brain with a billion past simulations.

6. Galactic brain.

7. 3^3^3 IQ superintelligence.

X-risks appear between the 2nd and 3rd levels.

Nanobots are above level 3.

Each level also requires a minimum size of code, memory and energy consumption.

An A... (read more)

I'm not sure what "Whole biological evolution equivalent" means. Clearly, you do not mean the nominal compute of evolution - which is probably close to Jupiter brain. I think you are appealing to something that would be able to simulate evolution with high fidelity?
Actually I meant something like this, but I could downsize the claim to 'create something as complex as the human body'. Simulating billions of other species would be redundant.

You started self-quarantining, and by that I mean sitting at home alone and barely going outside, since December or January. I wonder, how's it going for you? How do you deal with loneliness?

I got married on January 25, so I am not alone :) We stayed at home together, but eventually we had to go to the hospital in May, as my wife was pregnant, and now we have a baby girl. More generally, I have spent most of my life more or less alone, sitting at a computer, so I think I am fine with isolation. Three times during self-isolation I caught a cold, but I don't have antibodies.

ChatGPT can't report whether it is conscious or not, because it also thinks it is a goat.

The problem of chicken and egg in AI safety

There are several instances:

AI can hide its treacherous turn, but to hide a treacherous turn it needs, for at least a moment, to think about secrecy in a non-secret way.

AI should be superintelligent enough to create nanotech, but nanotech is needed to build the powerful computers required for superintelligence.

ASI can do anything, but to do anything it needs human atoms.

Safe AI has to learn human values, but this means that human values could also be learned by an unsafe AI.

AI needs human-independent robotic infrastructure before k... (read more)