One of Aesop’s fables is relevant to humanity’s future and the transition of power from human to AI. It’s quite short and you should read one of the many versions. But the one-sentence summary is that being a wolf is preferable to being a domestic dog, because the wolf has freedom even if it lacks comfort. Now, you are free to disagree with this conclusion. I don’t want to make an argument from authority. My point is that this quite succinctly sums up my objection to the best-case ASI scenarios. Even if we remain extant and nominally free, we would no longer be in charge any more than a dog is. Dogs have a lot of rights and freedoms, and can successfully plead (non-verbally) to get certain things they want from their master, but at the end of the day they aren’t in charge, even if the owner’s life revolves around the dog.
Maybe that is a selfish thing to think in the face of astronomical waste, but it does strike me as a world without meaning. You might say that most people alive aren’t in control of their destiny in any meaningful way. You might also say that almost nobody alive is in control of humanity’s destiny in a meaningful way and they are still happy. People in general, although I suspect a smaller percentage of those here, might think it is grandiose to want to contribute, even a small amount, toward shaping humanity’s future. I think I’m willing to grant all that and say that I would still feel bad if no human ever made a meaningful choice after takeoff.
The most obvious objection is that you could say the AI will just section off some part of the universe and give us free rein in there if we choose it. That’s still not great in my opinion.
Everything I worked for in this playground would be hollowed out by the knowledge that I could have just queried a friendly nanny AI to get it for me. Even if it didn’t step in, even if it had set up some system where it couldn’t step in, I personally would feel like something important was missing. Like all of the great achievements and firsts had been given out before I even had a chance to play. Humanity forever in second place. I’m switching fairly loosely between how I would feel personally if I was not in play and how I would feel if humanity as a whole was not in play. Feel free to generalize/specify to humanity/yourself as you wish.
You could live in a virtual world and be blinded to that fact but at that point it seems like brainwashing.
Don’t get me wrong, I’d go crazy with hedonism for a while. Maybe I’d even become addicted and change my tune. But right now, I am looking forward to the challenges. How proud I would be to be a member of the species that solved them. How great it would be to contribute one tiny piece to the solutions. But if AI does it all, I’ll be cut off from making any contribution. All future accomplishments will be credited to something so alien that we get no larger a share than Tiktaalik does for inventing the transistor.
Approximately 30% of this video is highly relevant to my thesis.
I don’t think I’m hitting on anything especially new by saying this. A few posts I recently came across have similar vibes, I would say. It also seems to be discussed at length in Nick Bostrom’s Deep Utopia, although I have not found the time to read that yet.
But, it seems like there is a contingent of humanity that is willing, excited even, to give up agency to secure comfort. Where do you draw the line and say “yes, this is such an incredible amount of bliss/utilitarian goodness that I am willing to never face any real challenges in my life again”? Is this a tipping point past which it becomes your actual preference or is this just the best outcome we can hope for from AI futures?
Framing it as “humans would be to ASI as beloved dogs are to their masters” might be inaccurate. Replacing ASI with a deity and the utopian future with some vision of heaven might also be inaccurate. But I think there is something meaningful in the comparison, and I think a lot of people would push back much more strongly against the scenario phrased that way than they currently do against aligned ASI.