If you already have a lot of people's attention (for instance because you have a social media following or high-status credentials) and you're a US/UK citizen, your best available plan might be to run a political campaign with AI pause as the agenda.
You're unlikely to win the election, but running will likely shift the Overton window and give people hope that change is possible.
For most people, having a next step after "ok, I read the blog posts and I'm convinced, now what?" is important. Voting or campaigning for you could be that next step.
Okay cool.
I guess you now have a better understanding of why people are still interested in solving morality, politics, and meaning, without delegating these problems to an ASI.
We might solve alignment in Yudkowsky's sense of "not causing human extinction" or in Drexler's sense of "will answer your questions and then shut down".
It may be possible to put a slightly (but not significantly) superhuman AI in a box and get useful work out of it despite it not being fully aligned. It may be possible for an AI to be superhuman in some domains and not others, such that it can't attempt a takeover or even think of doing so.
I agree that what you are saying is more relevant if I assume we just deploy the ASI, it takes over the world, and it then does more stuff.
My comment does not say anyone is a bad person. I mentioned specific disagreements like people lacking agency, copy-pasting what they see around them, or having an incorrect view of politics. I'm mostly analysing their psychology.
I wrote this recently.
me: This is core to the alignment problem. I'm confused how you will solve the alignment problem without figuring out anything about what you care about as a (biological) human.
you: I'm saying: the end goal is we have an ASI that we can make do what we want.
I'm saying you've assumed away most of the problem by this assumption.
AI will (probably) know.
No, I disagree.
This is core to the alignment problem. I'm confused how you will solve the alignment problem without figuring out anything about what you care about as a (biological) human.
Are you imagining an oracle AI that doesn't take actions in the world?
I think/hope many other people are similar.
I assume Sam Altman's plan is: Step 1, world dictatorship; Step 2, maaaybe do some moral philosophy with the AI's help, or maybe not.
But yes, I think you can ask it and get an answer that is true to what you want.
Cool, we agree this might happen.
Yes, we are still talking past each other.
- Assume we can get the ASI to do what someone or some group of people wants
- Imagine that the ASI does its thing and we end up in a world that person / that group of people doesn't like
These are not two different questions, these are the same question.
Until the ASI actually does the thing in real life, you have no way to decide whether what it will do is something you would want on reflection.
One of the best-known ways to ask a human whether they like some world radically different from today is to actually put them inside that world for a few years and ask them if they like living there.
But we also don't trust the ASI to build this world as a test run. Hence it may be best to figure out beforehand some basics of what we actually want, instead of asking the ASI to figure it out for us.
I don't think this is a complicated philosophical point, but many people treat it this way.
Yes, I think it is possible that by 2030 Sam Altman will have overthrown both the US and Chinese governments and will be on track to building his own permanent world dictatorship. That is still radical, but not that complicated to understand.
It gets complicated if you ask: a) what if we do actually try to fix politics as (biological) humans, instead of letting the default outcome of a permanent dictatorship play out? b) what if I were the benevolent leader who built ASI, don't want to build my own permanent dictatorship, and want to build a world where everyone has freedom, etc.? Can I ask the ASI to run lots of simulations of minds and help me solve lots of political questions?
@StanislavKrym Check my website for what I mean
My position is that we as (biological) humans should lean towards solving both the philosophical problem of meaning for a post-ASI future and the political problem of ensuring one guy doesn't imprint his personal values on the lightcone using ASI, before we allow the ASI to take over and do whatever.
You are proposing that we gamble on the ASI solving this in a way that we end up endorsing on reflection. Odds of this are non-zero but also not high in my view.
This is a core part of the alignment problem for me. You can't hide behind the abstraction of "utility function" because you don't know that you have one or what it is. What you do know is that you care about "meaning". Meaning is grounded in actual experiences, so when you see it you can instantly recognise it.
I support more advancements in cyberhacking capabilities so that companies and governments are incapable of keeping secrets. Secrecy enables them to act against the wishes of the majority to an extent they otherwise couldn't.