Vakus Drake

Comments
Amputation of Destiny
Vakus Drake · 3mo · 10

The issue with running people at different speeds as a solution is that the eudaimonic rate at which people increase their intelligence will vary, and this creates an incentive for people to self-modify in a sort of race to the bottom. It's also problematic because people are liable to care quite a lot about how fast they're running, so this forces society to splinter in terms of who can interact with whom.

Also, at a fundamental level, it seems like what you're reaching for will still end up being a Federation-style caste system no matter what: there will always be some people who are the most competent, because of a combination of where they started and how fast they were allowed to improve themselves. The people who are less competent will know they are always going to be in their shadow, which seems guaranteed to breed resentment.

In contrast, you could let everyone enhance themselves to the same ever-increasing baseline standard. However, this would make it essentially impossible to ever achieve anything or stand out at all, except through luck or having already been famous pre-singularity.

I don't think there's a good solution here that doesn't involve creating a huge number of new minds that lack the zero-sum desire for status that most (but not all) humans have. That way all the original humans can be as special as they want (unless they only want to compare themselves to other original humans), and most people don't desire status enough to feel like they're missing out on anything, with these new people created to be maximally fulfilled without needing to be special or accomplished in some way.

Amputation of Destiny
Vakus Drake · 3mo · 1-2

This is why the only viable solution to giving existing people a sense of purpose/meaning is to create a bunch of new people who aren't as driven by status. That way every existing person can be as impressive within their particular community as they want, since most of the other people living with them have little drive for status and don't care about getting fame or needing to feel exceptional/special in some way.

Then combine that with simulations DM'd by superintelligences, and you really should be able to give every person the feeling of being maximally fulfilled in their life.

The admiration of the people in the simulation who see you as a hero can be totally real: the digital people you become close to can be made as real digital minds, so your relationships with them aren't fake. They can be given preferences such that, once they find out they were made with a fabricated backstory/memories within a simulation, they aren't resentful: instead they're actively glad they got to believe they were, say, saving the world with their best friends, and they see any hardship they faced as worth it for the personal growth it gave them.

Amputation of Destiny
Vakus Drake · 3mo · 10

I think the solution proposed here is suboptimal and would lead to a race to the bottom, or alternatively to most people being excluded from ever doing anything that they get to feel matters (and I think a much better solution exists):

If people can enhance themselves, then it becomes impossible to earn any real status except via luck. Essentially it's a modified version of that Syndrome quote: "When everyone is exceptional and talented, then no one will be."

Alternatively, if you restrict people's ability to self-modify, then you just get a de facto caste system (like the Star Trek Federation), where only the people who already won the genetic luck of the draw ever get to be impressive, with everyone else permanently forced to live in their shadow. Or maybe the people who get bored and have to increase their intelligence soonest (at the eudaimonic rate) end up being the only ones who get to be exceptional in the long term. Either way, the people who are forced to take longer increasing their intelligence get kind of screwed.

The fundamental problem here is that humans' instinct for meaning/purpose is zero-sum and socially mediated. Almost everyone wants to be exceptional in at least some niche area, yet this is inherently zero-sum. This is especially obvious when you consider, say, fame: given human psychology, only a very small fraction of any given community can be famous within that community, as a brute mathematical fact.

There would seem to be only one real solution to this, which involves doing the following:

  • Have people self select into communities that don't interact too much with the broader civilization (so people aren't competing with too many others for them to ever hope to stand out).
  • Create a bunch of new minds that just don't care about status very much and don't compare themselves to others in the way most baseline humans do, so that they never feel any temptation to enhance themselves in order to stand out, and aren't missing out on anything. These newly made people are just as fulfilled as most of the original humans, if not more so, but they're way easier to satisfy.

Specifically, this would probably involve simulations with the following qualities (though you could do everything non-digitally in megastructures, like a non-dystopian Westworld; it'd just be inefficient):

  • Have AIs DM simulated adventures in which the AI acts out most NPCs itself, enjoying the roles (or, alternatively, the NPCs are non-sentient). This way people can play out adventures where they get to be the hero and maximally fulfill all their psychological instincts for purpose, in addition to all the lower-level Maslow stuff. You need fake NPCs because it's hard to run good adventures without villains, and creating genuinely evil minds would be an insane thing to allow.
  • Have superintelligences, within obvious ethical limitations, predict in advance which NPCs you will become friends with, and then create those as real digital minds who don't necessarily know they're in a simulation until you complete the adventure together.
  • Ensure these new digital minds are created with preferences such that they will never feel resentment once they find out the nature of their existence, and will be glad to have been created the way they were. They'll even be glad they didn't know it was a simulation at the time, since they got to believe they were, say, saving the world with their best friends!

There are also some more interesting complexities to do with population ethics in a world with finite resources, unlike The Culture: by default population grows at an exponential rate, but even expanding at near lightspeed you can never gather resources faster than a polynomial rate (cubic, since you're filling out a sphere within your future light cone). So it's easy to show that if you don't restrict population growth somehow, you reach a Malthusian state within a few thousand years; a sketch of the math is below. If anyone's interested, I know of at least one good solution to this problem that avoids making any real sacrifices.
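A minimal sketch of that math, with loudly hypothetical parameters (1% annual population growth, and resources proportional to the volume of a sphere expanding at lightspeed, with a prefactor that generously assumes the initial sphere already supports a trillion people):

```python
# Exponential population growth vs. cubic (lightspeed-sphere) resource growth.
# All parameters are illustrative assumptions, not claims from the comment.

POP_GROWTH = 1.01  # assumed 1% annual population growth


def population(t: int) -> float:
    """Population after t years, normalized to 1 at t = 0."""
    return POP_GROWTH ** t


def resources(t: int) -> float:
    """Resources reachable by lightspeed expansion grow as t^3; the 1e12
    prefactor generously assumes year-one resources support a trillion people."""
    return 1e12 * t ** 3


t = 1
while population(t) <= resources(t):
    t += 1
print(f"Malthusian crossover around year {t:,}")  # lands within a few thousand years
```

No matter how large you make the cubic prefactor, the exponential term eventually wins; a bigger prefactor only delays the crossover logarithmically.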

Ethics Needs A Marginal Revolution
Vakus Drake · 2y · 10

This is one of those areas where I think the AI alignment frame can do a lot to clear up underlying confusion, which I suspect stems from not taking the thought experiment far enough to reach the point where you'd no longer be willing to bite the bullet. An AI aligned this way is encouraged to either:

  • Care about itself more than all of humanity (if total pleasure/pain, and not the number of minds, is what matters), since it can turn itself into a utility monster whose pleasure and pain simply dwarf humanity's.
  • Alternatively, if all minds get roughly equal consideration, care far more about the future minds it plans on creating than about all current humans, since at a certain point it can build massive computers to run simulated minds faster without humans in the way. On a more fundamental level, the matter and energy needed to support one human can support a much larger number of digital minds, especially if those minds are in a mindlessly blissed-out state and thus far less intensive to run than a simulated human mind.

It just seems like there's no way to avoid the fact that sufficiently advanced technology takes the repugnant conclusion to even more extreme ends: you must be willing to wipe out humanity in exchange for creating some sufficiently large number of blissed-out zombies who only barely clear whatever you set as the minimum threshold for moral relevance.

More broadly, I think this post takes for granted that morality is reducible to something simple enough to allow for this sort of marginal revolution. And without moral realism being true, this avenue doesn't make sense as presented either.

A Good Future (rough draft)
Vakus Drake · 3y · 10

I think the whole point of a guardian angel AI only really makes sense if it isn't an offshoot of the central AGI. After all, if you distrust the singleton enough to want a guardian angel AI, then you'll want it to be as independent from the singleton as is allowed. Whereas if you do trust the singleton AI (because, say, you grew up after the singularity), then I don't really see the point of a guardian angel AI.

>I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly before deciding on some place to stay.

I also disagree with this, insofar as I don't think that people "deciding on some place to stay" is a stable state of affairs under an aligned superintelligence. I don't think people will want to be loop immortals if they know that's where they're heading. Similarly, I don't know that I would even consider an AGI aligned if it didn't try to ensure people understood the danger of becoming a loop immortal and nudge them away from it.

Though I really want to see some surveys of normal people to confirm my suspicions that most people find the idea of being an infinitely repeating loop immortal existentially horrifying. 

A Good Future (rough draft)
Vakus Drake · 3y · 32

I've had similar ideas but my conception of such a utopia would differ slightly in that: 

  • This early on (at least given how long the OC has been subjectively experiencing things), I wouldn't expect people to want to spend most of their time in simulations stripped of their memories. If anything, I'd expect a perfectly accurate simulation to initially be easier to enjoy if you could relax knowing it wasn't actually real (plus people will want simulations where they can kill simulated villains guilt-free).
  • I personally could never be comfortable being totally at the mercy of the machinations of superintelligences and the protection of the singleton AGI. So I would ask the singleton AI to make me a lesser superintelligence specifically to look out for my values/interests, which it should have no problem with if it's actually aligned. Similarly, I'd expect such an aligned singleton to allow the creation of "guardian angel" AGIs for countless other people, provided those AIs have stable values compatible with its own aligned values.
  • I would expect most simulations to involve people's guardian angel AI simply acting out the roles of all NPCs with perfect verisimilitude, while obviously never suffering when it acts out pain and the like. I'd also expect that many NPCs one formed positive relationships with would at some point be seamlessly swapped with a newly created mind, provided the singleton AI considered their creation positive utility and they wouldn't have issues with how they were created. I expect this to be a major source of new minds, such that the distant future will have many thousands of minds who were created as approximations of fictional characters, from all the people living out their fantasies in, say, Hogwarts and then taking a bunch of its characters out of it.

PS: If I were working on a story like this (I've actually seriously considered it, and I get the sense we read and watch a lot of the same stuff, like Isaac Arthur), I'd mention how many (most?) people don't like reverting their level of intelligence, for similar reasons to why people would find the idea of being reverted to a young child's intelligence level existentially terrifying.

This is important because it means one should view adult human-level intelligence as a sort of "childhood" for superintelligence slightly beyond the human level. So to maximize the amount of novel fun one can experience (without forgetting things and repeating the same experiences over and over like a loop immortal), one should wait until one is bored of all there is to appreciate at one's current intelligence level (for the range of variance in mind design one is comfortable with) before improving it slightly. This also means that, unless you are willing to become a loop immortal, the speed you run your mind at will determine, to within an order of magnitude or so, how quickly you progress through the process of "maturing" into a superintelligence, unless you're deliberately "growing up" faster than is generally advised.

Do insects' lives matter?
Vakus Drake · 3y · 10

This kind of issue (among many, many others) is why I don't think the kind of utilitarianism that this applies to is viable. 

My moral position only requires extending consideration to beings who might in principle extend similar consideration to oneself. So one has no moral obligations to all but the smartest animals, and one's moral obligations to other humans scale in a way that I think matches most people's moral intuitions. One genuinely does have a greater moral obligation to loved ones, and this isn't just some nepotistic personal failing as it is in most formal moral systems. For the same reasons, one has little to no moral obligation toward, say, serial killers or anyone else who actively wants to kill or subjugate you.

Should you refrain from having children because of the risk posed by artificial intelligence?
Vakus Drake · 3y · 10

I actually think this is plausibly among the most important questions on LessWrong, hence my strong upvote: I think the moral utility from having kids pre-singularity may be higher than that of almost anything else (see my comment).

Reply
Should you refrain from having children because of the risk posed by artificial intelligence?
Answer by Vakus Drake · Nov 06, 2022 · 1-2

To argue the pro-natalist position here: I think the facts being considered should actually give having kids (if you're not a terrible parent) a much higher expected moral utility than almost anything else.

The strongest argument for having kids is that the influence they may have on the world (most obviously by voting on hypothetical future AI policy), even if marginal (which it may not be if your children are extremely successful), becomes unfathomably large when multiplied by the potential outcomes.

From your hypothetical children's perspective, this scenario is also disproportionately, one-sidedly positive: if AI isn't aligned it probably kills people pretty quickly, such that they still would have had a better overall life than most people in history.

Now it's important to consider that the upside for anyone alive when AI is successfully aligned is so high that it totally breaks moral philosophies like negative utilitarianism: the suffering from a single immortal's minor inconveniences (provided you agree that including some minor suffering increases total net utility) would likely eventually outweigh all human suffering pre-singularity, by virtue of both staggering amounts of subjective experience and potentially much higher pain tolerances among post-humans.
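As a back-of-envelope illustration of how lopsided the quantities of subjective experience are (all numbers here are hypothetical round figures, not from the original comment):

```python
# Rough comparison: all pre-singularity human experience vs. one immortal
# digital mind running at a modest subjective speedup. Numbers are
# illustrative placeholders only.

HUMANS_EVER = 1e11   # rough order-of-magnitude estimate of humans ever born
AVG_LIFESPAN = 40    # assumed average lifespan in years
SPEEDUP = 1_000      # hypothetical subjective speedup for a digital mind

total_human_experience = HUMANS_EVER * AVG_LIFESPAN  # ~4e12 subjective years

# Calendar years for one sped-up immortal to accumulate the same amount
# of subjective experience:
print(f"{total_human_experience / SPEEDUP:.1e} calendar years")  # ~4e9
```

Even a tiny rate of minor suffering, integrated over that much subjective time, eventually exceeds any fixed pre-singularity total.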

Of course, if AI is aligned you can probably have kids afterwards, though I think scenarios where a mostly benevolent AI decides to seriously limit who can have kids are somewhat likely. Waiting to have kids until after the singularity is strictly worse, however, than having them both before and after, and it also misses out on astronomical amounts of moral utility by not impacting the likelihood of a good singularity outcome.

The Case for Human Genetic Engineering
Vakus Drake · 5y · 40

An Irish elk/peacock-type scenario is pretty implausible here, for a few reasons:

  • Firstly, people care about enough different traits that an obviously bad trade like attractiveness for intelligence wouldn't be adopted by enough people to impact the overall population.
  • Secondly, for traits like attractiveness, low mutation load is far more important than any gene variants that would present major tradeoffs. So just selecting for lower mutation load will improve most of the polygenic traits people care about.

Ultimately, the polygenic nature of the traits people care most about just doesn't create much need or incentive for the kinds of tradeoffs you propose. Such tradeoffs could only conceivably be worthwhile in order to reach superhuman levels of intelligence (nothing analogous exists for attractiveness), which would have obvious positive externalities.

https://slatestarcodex.com/2016/05/04/myers-race-car-versus-the-general-fitness-factor/
