Constitutionality is largely irrelevant. All that matters is whether five people on the Supreme Court believe something is constitutional, and that seems to be mostly down to the platform of the party that put them in their position.

Define "too much"? The spread is whatever another entity is willing to accept to trade with you, and, almost by definition, it is exactly the correct amount.

Is it "alignment" if, instead of AGI killing us all, humans change what it is to be human so much that we are almost unrecognizable to our current selves?

I can foresee a lot of scenarios where humans offload more and more of their cognitive capacity to silicon but are still "human" - does that count as a solution to the alignment problem?

If we all decide to upload our consciousness to the cloud, and become fast enough and smart enough to stop any dumb AGI before it can get started, is THAT a solution?

Even today, I offload more and more of my "self" to my phone and other peripherals. I use autocomplete to text people rather than writing every word, for example. My voicemail uses my voice to answer calls, and other people speak to it, not me. I use AI to tell me which emails I should pay attention to, and a calendar to augment my memory. "I" already exist, in part, in the cloud, and I can see more and more of myself existing there over time.

Human consciousness isn't single-threaded. I have more than one thought running at the same time, and it's not unlikely that some of them will soon run partially outside my meat body. To me, this seems like the solution to the alignment problem: make human minds run (more) outside of their current bodies, to the point that they can keep up with any AGI that tries to get smarter than them.

Frankly, I think if we allow AGI to get smarter than us (collectively, at least), we're all fucked. I don't think we will ever be able to align a super-intelligent AGI. I think our only solution is to change what it means to be human instead.

What I am getting at is: are we trying to solve the problem of preserving a static version of humanity as it exists today, or are we willing to accept that one solution to alignment may be for humanity to change significantly instead?