Comments

Alex1V · 2mo

“Suffering is mainly a measure, rather than the target metric.”

What is the metric? What could be more important than reducing suffering and increasing happiness? Things are only bad if they cause suffering or reduce happiness, and only good if they increase happiness or decrease suffering.

“Either what you are doing was fine before, or you did not hereby make it fine.”

The bad things we do to animals (cages, slaughter, etc.) are bad because they cause the animals suffering. If we find a way to prevent them from suffering, these things are no longer bad.

“but expect it to by default go very badly and see no way to coordinate for a better outcome.”

I think your point could be better made with specific examples of the things that could go wrong with this technology.

It’s worth noting that the woman with the FAAH-OUT mutation is a perfectly functional human; she’s just very happy and doesn’t have to deal with the unnecessary suffering that our brains inflict on us on a daily basis.

The problem with cocaine isn’t that it makes you happy. It’s the cycle of tolerance, withdrawal, and addiction. Since wanting and liking are separate systems in the brain, there’s no reason in theory why you couldn’t have something that brings lasting happiness and reduced suffering without the downsides.

Alex1V · 4mo

I think there are lots of specific internal reasons why people make bad choices: sometimes it’s just pure selfishness or sadism.

But as for why some people are delusional, selfish, or sadistic, or why some people “succumb to evolved default behaviors like anger, instead of using their freedom of thought”: I’m not really seeing an alternate explanation here other than that some people were unlucky enough to have genes and an environment that built a brain that followed the laws of physics until it did something bad. And from an internal perspective, maybe the people who did good things had a self-modification step, where the environment that is their brain modified their brain to have better intentions. But that doesn’t really matter from the perspective of judging someone, because all the factors that made a brain that would self-modify in the first place were outside of that person’s control.

And that doesn’t mean you shouldn’t punish people where it will change their behaviour, act as a deterrent, or keep others safe.

But it does mean that there is no justice in retributive punishment. It means there’s no point in hating people and wanting them to suffer. And it means that if you have infinite energy and resurrect Hitler, you should give him paradise rather than punishment.

Alex1V · 4mo

We can simulate the brain of C. elegans, and I see no reason why this couldn’t, in theory, be scaled up to a human brain. Though technically you’d need both the computation and a full map of the human brain, not just the computation.

Alex1V · 4mo

I think the atoms in my brain will follow the laws of physics until a choice is made. And to me that process feels like I’m deciding something, because that’s what computation feels like from the inside. But actually the outcome is predetermined.

Alex1V · 4mo

No, but only because I lack the computing power to do so. A very powerful AI could.

Alex1V · 4mo

So why do some people choose to do good while others choose to do evil? I think genes and environment are fully sufficient to explain why people make different choices, but if you have an alternate hypothesis I’d be interested to hear it. But the answer can’t be something like “because some people choose different intentions” because then you’d have to explain why some people have different intentions.

To put it another way, you may choose your intentions deliberately, but did you make the choice to be the kind of person who chooses intentions deliberately? And if so, did you make the choice to be the kind of person who made that choice? (and so on…). If you go far enough back in the causal chain, it all goes back to the genes and environment that built a brain that does all those other things.

I can kind of see what you’re getting at with the self-modification thing. I self-modified my own thought patterns to become a nicer person. But as for why I did that: my genetics gave me high trait openness, and I was given a book that encouraged self-modification toward niceness when I was a child. So in this way, I chose to be a nicer person, and so I choose to do nice things, but factors outside of my control caused my original choice to become nicer.

Alex1V · 4mo

You raise two very valid concerns: that Hitler might hurt others if you allow him to interact with them, and that he might find a way to escape the box.

Even if Hitler were willing to reflect on his actions and change, his presence in the network (B) would likely make other people unhappy.

So while I think (A) is ethically mandatory if you can contain him, (B) comes with a lot of complex problems that might not be solvable.

Alex1V · 4mo

The bit of your brain that chooses to think nice thoughts (“I”/“me”) is just as much a product of your genes and environment as the bit of your brain that wants to think bad thoughts.

You didn’t choose to have a brain that tries not to think bad thoughts, and Hitler didn’t choose to have a brain that outputs genocide when given certain environmental conditions. The only way Hitler could have realised that his actions were bad and chosen to be good would be if his genes and environment had built a brain that would do so given some environmental input.

Alex1V · 4mo

Hitler’s evil actions were determined by the physical structure of his brain. His brain was built by genes (which he didn’t choose), modified by his environment (which he didn’t choose), and then certain environmental inputs (which he didn’t choose) caused it to output genocide. If you had Hitler’s genes and Hitler’s environment, you would have Hitler’s brain, and so you would do as Hitler did.

To punish someone, or in this case to withhold high-resolution paradise, can only be useful and good insofar as it changes behaviour or acts as a deterrent to others, ultimately reducing suffering. If you have infinite power, there is no longer any need to punish anyone, since you can end all suffering directly by giving everyone their own high-resolution paradise, or whatever the ideal heaven is. Punishment becomes nothing but pointless, evil cruelty the second we achieve the ability to prevent people from hurting each other without it.

Alex1V · 6mo

I think more exposition is needed. For example, one episode could have a character who knows how dangerous AI is, warns the others about it, and explains toward the end why things are going wrong. In other episodes, the characters could realise their own mistake far too late, but in time to explain what’s going on with a bit of dialogue. Alternatively, the AI could explain its own nature before killing the characters.

For example, at the end of Cashbot, as nukes are slowly destroying civilisation, someone could give a short monologue about how AIs don't have human values, ethics, empathy or restraint, and that they will follow their goals to the exclusion of all else.
