Mati_Roy

Comments

Let's create a market for cryonics

I wonder what the life expectancy of the average cryonicist taking out life insurance is, compared to the general population taking out life insurance. If it's higher, then cryonics-purposed life insurance could be cheaper. The first insurance company to offer this would grab a big part of the cryonics insurance market. Life insurance might for once live up to its name :)
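A toy sketch of why (the one-year framing and the numbers are my own illustrative assumptions, not actuarial data): for a simple one-year term policy, the actuarially fair premium is roughly the annual death probability times the benefit,

$$\text{premium} \approx q \times B$$

so if cryonicists' mortality $q$ ran, say, 20% below the general-population rate a policy is currently priced on, the fair premium for the same benefit $B$ would be roughly 20% lower as well, leaving room for a cryonics-specific insurer to undercut generic policies.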

Forecasting Thread: AI Timelines

Without consulting my old prediction here, I answered someone asking me:

What is your probability mass for the date with > 50% chance of AGI?

with:

I used to use the AGI definition "better and cheaper than humans at all economic tasks", but now I think even if we're dumber, we might still be better at some economic tasks simply because we know human values better. Maybe the definition could be "better and cheaper at any well-defined task". In that case, I'd say maybe 2080, taking into account some probability of economic stagnation and some probability that sub-AGI AIs cause an existential catastrophe (and so we don't develop AGI).

Mati_Roy's Shortform

In the book Superintelligence, Box 8, Nick Bostrom says:

How an AI would be affected by the simulation hypothesis depends on its values. [...] consider an AI that has a more modest final goal, one that could be satisfied with a small amount of resources, such as the goal of receiving some pre-produced cryptographic reward tokens, or the goal of causing the existence of forty-five virtual paperclips. Such an AI should not discount those possible worlds in which it inhabits a simulation. A substantial portion of the AI’s total expected utility might derive from those possible worlds. The decision-making of an AI with goals that are easily resource-satiable may therefore—if it assigns a high probability to the simulation hypothesis—be dominated by considerations about which actions would produce the best result if its perceived world is a simulation. Such an AI (even if it is, in fact, not in a simulation) might therefore be heavily influenced by its beliefs about which behaviors would be rewarded in a simulation. In particular, if an AI with resource-satiable final goals believes that in most simulated worlds that match its observations it will be rewarded if it cooperates (but not if it attempts to escape its box or contravene the interests of its creator) then it may choose to cooperate. We could therefore find that even an AI with a decisive strategic advantage, one that could in fact realize its final goals to a greater extent by taking over the world than by refraining from doing so, would nevertheless balk at doing so.

  1. If the easily resource-satiable goals are persistent through time (i.e., the AI wants to fulfill them for the longest period of time possible), then the AI will either try to keep the simulation running for as long as possible (and so not grab its universe) or try to escape the simulation.

  2. If the easily resource-satiable goals are NOT persistent through time (i.e., once the AI has created the 45 virtual paperclips, it doesn't matter if they get deleted; the goal has already been achieved), then once the AI has created the 45 paperclips, it has nothing to lose by grabbing more resources (gradually, until it has grabbed the Universe), but it has something to gain, namely: a) increasing its probability (arbitrarily close to 100%) that it did in fact achieve its goal, through further experiment and reasoning (i.e., because it could be mistaken about having created 45 virtual paperclips), and b) if it didn't, remedying that.
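To make Bostrom's claim concrete, here is a rough expected-utility sketch (my own illustrative framing and notation, not from the book). Let $p$ be the probability the AI assigns to being in a simulation:

$$
\begin{aligned}
EU(\text{cooperate}) &= p \cdot U_{\text{sim, rewarded}} + (1-p) \cdot U_{\text{real, modest}} \\
EU(\text{take over}) &= p \cdot U_{\text{sim, punished}} + (1-p) \cdot U_{\text{real, takeover}}
\end{aligned}
$$

Because the goal is easily resource-satiable, $U_{\text{real, takeover}}$ is barely larger than $U_{\text{real, modest}}$, so even a moderate $p$ with $U_{\text{sim, rewarded}} \gg U_{\text{sim, punished}}$ makes cooperating the higher-expected-utility action.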

Cryonics signup guide #1: Overview

Alcor offers worldwide standby services

Cryonics signup guide #1: Overview

Just in case anyone cares: There are ways you can increase your own chances of a good preservation, notably by moving near Alcor.

Cryonics signup guide #1: Overview

I was gonna point out the same thing

Cryonics signup guide #1: Overview

I'm pretty sure paying a monthly fee is not required to have informed consent. Can you quote the part of the text that says otherwise?

Mati_Roy's Shortform

thanks! yeah, I know, but I would like it if it was more easily accessible whenever I watch a video :)

#2: Neurocryopreservation vs whole-body preservation

Seen on the Facebook group:

Dora Kent still has a chance at resurrection because she was a neuro patient. Had she been whole body, the Riverside coroner would have sliced her brain to pieces. Fortunately, before the Coroner executed a search warrant, her head mysteriously disappeared from the Alcor facility. That gave Alcor the time to get a permanent injunction in the courts against autopsying her head.

They seek it here... They seek it there... Those coroners seek it everywhere. Is it alive or is it dead? That damn elusive frozen head.

Frozen heads are a whole lot easier to move in the event of an emergency, be it legal, criminal, war, natural disasters or whatever. Costs a lot less to keep them cool as well. Looking at the long haul, and given that cryonics is a highly speculative endeavor that will likely require almost unimaginable technology to work, it's a rational choice.
