All of Patodesu's Comments + Replies

Some people post about AI Safety in the EA Forum without crossposting here

When they say "stopping", I think they mean stopping it forever, rather than slowing down, regulating, or even pausing development.

Which I think is something pretty much everyone agrees on.

I think there are two different misalignments you're talking about, so you could say there are actually two different problems that aren't receiving enough attention.

One is obvious and is between different people. 

And the other is inside each person: the conflict between different preferences, not knowing what they are, and not knowing how to aggregate them to figure out what we actually want.

Human empowerment is a really narrow target too

I'm kinda new here, so where does all this EAF fear come from?

Even if you think S-risks from AGI are 70 times less likely than X-risks, you should also consider how many times worse they would be. For me, they would be several orders of magnitude worse.
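As a worked illustration (the 1,000x severity factor is an assumption for the example, not a figure from the comment): if S-risks are 70 times less likely than X-risks but 1,000 times worse, the expected harm from S-risks still dominates:

$\frac{E[\text{harm}_S]}{E[\text{harm}_X]} = \frac{(p_X/70)\cdot(1000\, h_X)}{p_X \cdot h_X} = \frac{1000}{70} \approx 14$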

Can non-Reinforcement-Learning systems (SL or UL) become AGI/superintelligence and take over the world? If so, can you give an example?

Lone Pine · 1y
They can if researchers (intentionally or accidentally) turn the SL/UL system into a goal based agent. For example, imagine a SayCan-like system which uses a language model to create plans, and then a robotic system to execute those plans. I'm personally not sure how likely this is to happen by accident, but I think this is very likely to happen intentionally anyway.
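To make that concrete, here is a minimal, hypothetical sketch of the pattern Lone Pine describes: a purely predictive (SL/UL) language model wrapped in an outer plan-and-execute loop. None of this is SayCan's actual code; lm() and robot_execute() are illustrative stubs.

```python
# Hypothetical sketch: turning a purely predictive (SL/UL) language model
# into a goal-directed agent by wrapping it in a plan-and-execute loop.
# lm() and robot_execute() are illustrative stubs, not SayCan's real API.

def lm(prompt: str) -> str:
    """Stand-in for a supervised/self-supervised language model.
    A real system would call a trained model here."""
    return "locate the soda\npick up the soda\nbring it to the user"

def robot_execute(step: str) -> bool:
    """Stand-in for a low-level robotic controller running one step."""
    print(f"executing: {step}")
    return True  # pretend every step succeeds

def run_agent(goal: str, max_replans: int = 3) -> None:
    """The LM alone just predicts text; the goal lives in this outer loop."""
    for _ in range(max_replans):
        steps = lm(f"List the steps to accomplish: {goal}").splitlines()
        if all(robot_execute(step) for step in steps):
            return  # goal reached, stop acting
        # A failed step triggers replanning -- this feedback loop is what
        # makes the combined system an agent, not the model by itself.

run_agent("fetch a soda from the kitchen")
```

The point of the sketch is that the agency comes from the loop, not from the training regime: the same wrapper could be put around any sufficiently capable predictive model, which is why this seems likely to happen intentionally.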