Andy_McKenzie


Comments

Thanks for this good post. A meta-level observation: the fact that people are grasping at straws like this is evidence that our knowledge of the causes of schizophrenia is quite limited.

“One day, one of the AGI systems improves to the point where it unlocks a new technology that can reliably kill all humans, as well as destroying all of its AGI rivals. (E.g., molecular nanotechnology.) I predict that regardless of how well-behaved it's been up to that point, it uses the technology and takes over. Do you predict otherwise?”

I agree with this, given your assumptions. But this seems like a fast-takeoff scenario, right? My main question wasn't addressed: are we assuming a fast takeoff? I didn't see that explicitly discussed.

My understanding is that common law isn't easy to change, even if individual agents would prefer to change it. That is why it can function as a Nash equilibrium. Of course, if there's a fast enough takeoff, then this is irrelevant.

Thanks for the write-up. I have very little knowledge of this field, but I'm confused about this point:

> 34.  Coordination schemes between superintelligences are not things that humans can participate in (eg because humans can’t reason reliably about the code of superintelligences); a “multipolar” system of 20 superintelligences with different utility functions, plus humanity, has a natural and obvious equilibrium which looks like “the 20 superintelligences cooperate with each other but not with humanity”.

> Yes. I am convinced that things like ‘oh we will be fine because the AGIs will want to establish proper rule of law’ or that we could somehow usefully be part of such deals are nonsense. I do think that the statement here on its own is unconvincing for someone not already convinced who isn’t inclined to be convinced. I agree with it because I was already convinced, but unlike many points that should be shorter this one should have probably been longer.

Can you link to or explain what convinced you of this? 

To me, part of it seems dependent on takeoff speed. In slower-takeoff worlds, agents would develop in an environment in which laws/culture/norms were enforced at each step of the intelligence-development process. Thus at each stage of development, AI agents would be operating in a competitive/cooperative world, eventually leading to a world of competition between many superintelligent AI agents with established Schelling points of cooperation that human agents could still participate in.

On the other hand, in faster/hard-takeoff worlds, I agree that cooperation would not be possible, because the AI (or the few multipolar AIs) would obviously have no incentive to cooperate with much less powerful agents like humans.

Maybe there is an assumption of a hard takeoff that I'm missing? Is this part of M3?

It is so great you are interested in this area! Thank you. Here are a few options for cryonics-relevant research: 

- 21st Century Medicine: May be best to reach out to Brian Wowk (contact info here: https://pubmed.ncbi.nlm.nih.gov/25194588/) and/or Greg Fahy (possibly old contact info here: https://pubmed.ncbi.nlm.nih.gov/16706656/)

- Emil Kendziorra at Tomorrow Biostasis may know of opportunities. Contact info here: https://journals.plos.org/plosone/article/authors?id=10.1371/journal.pone.0244980

- Robert McIntyre at Nectome may know of opportunities. Contact: http://aurellem.org/aurellem/html/about-rlm.html 

- Chana Phaedra/Aschwin de Wolf at Advanced Neural Biosciences may know of opportunities. Contact info for Aschwin here: https://www.liebertpub.com/doi/10.1089/rej.2019.2225 

- Brain Preservation Foundation: https://www.brainpreservation.org/. No lab, but a space for discussions, especially about neuroscience and related computational modeling.

- Robert Freitas at the Institute for Molecular Manufacturing just published a book called Cryostasis Revival. I'm not sure, but people there may know of related computational modeling opportunities: http://www.imm.org/

As mentioned in another comment below, Laura Deming is also a good person to contact.

As you may know, there is a significant methodological divide these days between people who favor aldehydes as part of the preservation procedure and those who do not. But there are good options either way.

Given your three-month timeframe, and since I'm not sure of your location or geographic flexibility, the best option might be some sort of computational modeling experiment, such as a molecular dynamics simulation: https://www.brainpreservation.org/how-computational-researchers-can-contribute-to-brain-preservation-research/
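For a sense of what such a project can look like in practice, here is a minimal sketch of a standard molecular dynamics workflow. It assumes the OpenMM library and a hypothetical input file (`protein.pdb`); it only illustrates the general shape of an MD setup, not a specific system relevant to preservation research.

```python
# Minimal sketch, assuming OpenMM is installed; 'protein.pdb' is a hypothetical
# placeholder for a solvated starting structure with periodic box information.
from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds
from openmm import LangevinMiddleIntegrator
from openmm.unit import kelvin, picosecond, picoseconds, nanometer

pdb = PDBFile('protein.pdb')                       # load the starting structure
forcefield = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')
system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=PME,
                                 nonbondedCutoff=1 * nanometer,
                                 constraints=HBonds)
integrator = LangevinMiddleIntegrator(300 * kelvin,        # temperature
                                      1 / picosecond,      # friction coefficient
                                      0.004 * picoseconds) # time step
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()                        # relax the initial structure
simulation.step(10_000)                            # short production run
```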

Regarding discussions with your profs, I totally understand, but I suspect that people may be more open to discussing it on an intellectual level than you think. 

You can also email me for further information/discussion, although this is not my personal area of research: amckenz at gmail dot com

> But there’s also a significant utilitarian motivation - which is relevant here because utilitarianism doesn’t care about death for its own sake, as long as the dead are replaced by new people with equal welfare. Indeed, if our lives have diminishing marginal value over time (which seems hard to dispute if you’re taking our own preferences into account at all), and humanity can only support a fixed population size, utilitarianism actively prefers that older people die and are replaced.

I strongly disagree with this. I think the idea of human fungibility is flawed from a hedonistic quality-of-life perspective. In my view, much of human angst is due to the specter of involuntary death. There is a large academic literature on this; one famous book is Ernest Becker's The Denial of Death: https://en.wikipedia.org/wiki/The_Denial_of_Death

Involuntary death is one of the great harms of life. Decreasing the probability and inevitability of involuntary death seems to have the potential to dramatically improve the quality of human lives. 

It is also not clear that future civilizations will want to create as many people as they can. It is quite plausible that future civilizations will be reluctant to do this. For one, those people will not have consented to being born, and the quality of their lives may still be unpredictable. There is a good philosophical case for anti-natalism as a result of this lack of consent. I consider anti-natalism totally impractical, and even problematic, in today's world, because we need the next generation to continue the project of humanity. But in the future that may no longer be an issue. People who have opted for cryonics/biostasis, by contrast, are consenting to live longer lives.

(As a side note, I'm a strong proponent of brain preservation/cryonics and I'm consistently surprised others are not more interested in it.) 

(updated from a previous comment I made on this topic here: https://forum.effectivealtruism.org/posts/vqaeCxRS9tc9PoWMq/why-are-some-eas-into-cryonics) 

Makes sense! I guess I wonder if there's a literature on the causes of sleep-deprivation-induced car accidents, e.g., whether the problem is only microsleeps or whether things like impulsivity or reaction time also contribute.

ETA: A preliminary Google search turned up this study: https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-020-09095-5

Basically, in these driving simulations, reaction time and braking time are significantly affected by sleep deprivation. I'm not sure how this could all be due to microsleeps. And it seems quite plausibly related both to the risk of car accidents and to cognitive performance more broadly.

Extremely interesting article with a number of good points! 

Is there any chance that you could expand upon the driving objection? Why, in your model of sleep and the cognitive effects of sleep, does getting little sleep increase your risk of getting into a car accident when driving?

Another point: I find Mendelian randomization studies fairly convincing regarding the long-term effects of sleep. For example, here's one based on UK Biobank data suggesting that sleep traits are not causally associated with Alzheimer's disease risk: https://academic.oup.com/ije/article/50/3/817/5956327

Excellent article. Surprised this isn't more upvoted/commented upon. Trying to rectify the lack of comments and looking forward to the rest of the sequence, especially the mind/brain reverse engineering aspects. 

I think this is a good point, although a GiveWell-like site could theoretically compare charities in a particular domain in which outcomes aren't easily measurable. Just because things aren't easily measurable doesn't mean they are unmeasurable.

Upvoted because this is a good critique. My rationale for using this scale was that I was less interested in absolute interest in cryonics and more in relative interest in cryonics between groups. The data and my code are publicly available, so if you are bothered by it, you can do your own analysis.
