Comments

ArtMi · 2y · 10

Under this premise, the "Creator" of our simulation does not seem to share our ethical values.

This is supported by the following premises:
A) A SuperIntelligence can (easily) create simulations.
B) It is (really) hard to align a SuperIntelligence with our ethical values.
C) There is suffering in our reality.

Each of these seems highly probable (see the bound sketched below).
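To sketch why the conclusion inherits high probability, suppose each premise holds with probability at least 0.9 (an illustrative figure I am assuming, not one stated above). The union bound then gives, without any independence assumption:

$$P(A \wedge B \wedge C) \ge 1 - \big(P(\neg A) + P(\neg B) + P(\neg C)\big) = 1 - 3 \times 0.1 = 0.7$$

So the conjunction of A, B, and C stays fairly probable even under conservative accounting.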

ArtMi · 2y · -20

We don't know the status or progress of MIRI's internal safety alignment research, nor of the independent/individual research by LW members.

But it seems that A.G.I. has a (much?) higher probability of being invented elsewhere, outside those communities.

So the problem is not only to discover how to safely align A.G.I., but also to invent A.G.I. in the first place.

Inventing A.G.I. seems to be a step that comes before discovering how to safely align it, right?

How probable is it that the first A.G.I. will constitute the Singularity? Isn't it a spectrum? The answer probably lies in the take-off speed and acceleration.

If anyone could provide resources on this, it would be much appreciated.

ArtMi · 2y · 20

"We can get lots of people to show up at protests and chant slogans in unison. I'm not sure how this solves technical problems."
— Suppose there is someone on the planet who could solve alignment but doesn't yet know about the problem. If so, this could be one of the ways to find them (we must estimate which path has the best probability of success, and whether such a person exists and where). Perhaps we could draw more intellectual resources into technical safety research through a public media campaign. And if it merely accelerates the doom, weren't we still doomed anyway?

People deserve to know the probable future. A surviving timeline could come from a major social revolt, and a revolt sparked by the low probability of alignment is possible.


So we must evaluate the probability of success of each option (a toy expected-value sketch follows this list):

1) Making noise, and deciding how.
2) Continuing to work relatively quietly.
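To make the comparison concrete, here is a minimal expected-value sketch in Python. All three probabilities (`p_solve_quiet`, `p_solve_loud`, `p_backfire`) are made-up placeholders for illustration, not estimates from this thread:

```python
# Toy expected-value comparison of the two advocacy strategies.
# Every number is an illustrative assumption, not an estimate;
# only the structure of the comparison matters.

p_solve_quiet = 0.05  # assumed P(alignment solved) if we keep working quietly
p_solve_loud = 0.10   # assumed P(alignment solved) if outreach recruits more researchers
p_backfire = 0.02     # assumed P(loudness accelerates capabilities and costs us the win)

ev_quiet = p_solve_quiet
ev_loud = p_solve_loud - p_backfire  # crude penalty for the acceleration risk

print(f"EV(quiet) = {ev_quiet:.2f}")
print(f"EV(loud)  = {ev_loud:.2f}")
print("loud wins" if ev_loud > ev_quiet else "quiet wins")
```

On these placeholder numbers the loud strategy wins, which is the "weren't we still doomed anyway?" intuition in expected-value form; everything hinges on how large the backfire term really is.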

ArtMi · 2y · 10

Shouldn't we implement a loud strategy?

One of the biggest problems is that we haven't been able to reach and convince many people. The most efficient route would be through someone who already understands the importance of this issue to some degree and has great power to act. I am talking about Elon Musk. If we showed him how dangerous the state of the problem is, he could be convinced to give it a much higher priority; it aligns with his mindset.

If one of the wealthiest and most influential people on this planet already cares about the problem, we must show him that the issue is even greater than he thinks and that it is important to act now. And not only him: there are many other technology and scientific leaders who would act if shown the existential risk we are facing.


Also, in the context of the loud strategy, I argue for making noise in the streets. If we must go to Congress, we should. If we must go to the Tesla, Amazon, etc. headquarters, we should. If we must fund and run a public campaign, we should. It is worth trying.

ArtMi · 2y · 20

I might argue against the anti-aging field (though now that I reflect on it, it seems more important than most non-essential productivity in society) that it is far more probable that we reach an AGI singularity during our lifetimes than that we die of aging (depending on your age).
What do you think?