RomanHauksson

Comments

This is really exciting. I’m surprised you’re the first person to spearhead a platform like this. Thank you!

I wonder if you could use a dominant assurance contract to raise money for retroactive public goods funding.
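
For anyone unfamiliar: a dominant assurance contract is Alex Tabarrok's variant of a crowdfunding assurance contract in which, if the funding threshold isn't met, contributors get their pledges back plus a bonus from the entrepreneur, which makes pledging a dominant strategy rather than a gamble. Here's a minimal sketch of that settlement rule in Python; the function, names, and numbers are all hypothetical illustrations, not any real platform's API:

```python
# Minimal sketch of a dominant assurance contract's settlement rule.
# All names and figures below are hypothetical.

def settle(pledges: dict[str, float], threshold: float, refund_bonus: float):
    """Settle the contract after the pledge window closes.

    If total pledges meet the threshold, the project is funded and the
    pledges are collected. Otherwise, every contributor is refunded their
    pledge *plus* a fixed bonus paid by the entrepreneur; that bonus is
    what makes pledging a dominant strategy.
    """
    if sum(pledges.values()) >= threshold:
        return True, {}  # funded: pledges go toward the public good
    return False, {name: amount + refund_bonus for name, amount in pledges.items()}

funded, refunds = settle({"alice": 50.0, "bob": 30.0}, threshold=100.0, refund_bonus=5.0)
print(funded, refunds)  # False {'alice': 55.0, 'bob': 35.0}
```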

A research team's ability to design a robust corporate structure doesn't necessarily predict its ability to solve a hard technical problem. Maybe there's some overlap, but machine learning and philosophy are different fields from business. Also, I suspect that the people doing AI alignment research at OpenAI are not the same people who designed the corporate structure (though I might be wrong about that).

Welcome to LessWrong! Sorry for the harsh greeting. Standards of discourse here are higher than in other places on the internet, so quips usually aren't well tolerated (even if they contain some element of truth).

I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence?

This was just an arbitrary example to demonstrate the more general point that we could make the wrong assumption about what makes humans valuable. Even if we discover that consciousness does come with intelligence, maybe there's something else entirely that we're missing which is necessary for a being to be morally valuable.

I don't want "humanism" to be taken too strictly, but I honestly think that anything worth passing the torch to wouldn't require us to pass any torch at all and could just coexist with us…

I agree with this sentiment! Even though I'm open to the possibility of non-humans populating the universe instead of humans, I think that, for reasons of both practicality and moral uncertainty, the better strategy is to make the transition peaceful and voluntary.

I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn't be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.

"Preferring humans just because they're humans" or "letting us be selfish" does prevent the risk of prematurely declaring that we've figured out what makes a being morally valuable and handing over society's steering wheel to AI agents that, upon further reflection, aren't actually morally valuable.

For example, say some AGI researcher believes that intelligence is the property that determines a being's moral worth, and blindly unleashes a superintelligent AI into the world on the theory that whatever it does with society is definitionally good, simply because the AI system is more intelligent than us. But then maybe it turns out that phenomenological consciousness doesn't necessarily come with intelligence, and they have accidentally wiped out all value from this world, replacing it with unconscious automatons that, while intelligent, don't actually experience the world they've created.

An ideological allegiance to humanism, with its strict rejection of non-humans running the world even when we think they might deserve to, would prevent this catastrophe. But I think that a posthuman utopia is ultimately something we should strive for. Eventually, we should pass the torch to beings that exemplify the human traits we like (consciousness, love, intelligence, art) and exclude those we don't (selfishness, suffering, irrationality).

So instead of blind humanism, we should stay biologically conservative until we know more about ethics, consciousness, intelligence, et cetera, and can pass the torch with confidence. We can afford millions of years to get this right. Humanism is arbitrary in principle and isn't the best way to prevent a valueless posthuman society.

Others have provided sound general advice that I agree with, but I'll also throw in a suggestion: piracetam, a nootropic whose effects aren't merely temporary.

Manifold.love is in alpha, and the MVP should be released in the next week or so. On the platform, people can bet on the odds that pairs of users will enter into a relationship lasting at least six months.
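
To make "betting on the odds" concrete, here's a rough sketch of the arithmetic behind a bet in a binary prediction market. This is generic binary-market math with made-up numbers, not Manifold's actual trading engine (Manifold uses an automated market maker, so the price moves as you buy):

```python
# Sketch of the payoff arithmetic for a YES bet in a binary prediction market.
# Assumes a fixed price for simplicity; real market makers move the price.

def yes_shares_bought(stake: float, market_prob: float) -> float:
    """At a fixed price equal to the market probability, a stake buys
    stake / price YES shares, each paying out 1 if the market resolves YES."""
    return stake / market_prob

def expected_profit(stake: float, market_prob: float, your_prob: float) -> float:
    """Expected profit of betting YES when you believe the true odds are your_prob."""
    shares = yes_shares_bought(stake, market_prob)
    return your_prob * shares - stake

# If the market prices a six-month relationship at 20% but you believe 35%,
# a 10-unit YES bet has positive expected value:
print(expected_profit(10, market_prob=0.20, your_prob=0.35))  # 7.5
```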
