Comments

Dan · 7mo · 30

This sounds like something that could actually be legislated, more so than most ideas. You can put it into simple words: an "AGI" can't do anything that you couldn't pay an employee to do.

Dan · 1y · 10

The math behind game theory shaped our evolution to produce emotions because that was a faster solution for evolution to stumble on than making us all mathematical geniuses who would deduce game theory from first principles as toddlers. Either way would have worked.

An ASI wouldn't need to evolve emotions as rules of thumb for game theory.

In any case, game theory has little of interest to say about a situation where one party simply has no need for the other and can squish them like a bug.

Dan · 1y · 10

What counts as a 'good' thing is purely subjective: good for us. Married bachelors are only impossible because we decided that's what the word 'bachelor' means.

You are not arguing against moral relativism here.

Dan · 1y · 10

Moral relativism doesn't seem to require any assumptions at all, because moral objectivism implies I should 'just know' that moral objectivism is true, if it is true. But I don't.

Dan · 1y · 10

So, if one gains access to knowledge of moral absolutes by being smart enough, then one of the following is true:

    average humans are smart enough to see the moral absolutes in the universe

    average humans are not smart enough to see the moral absolutes

    average humans are right on the line between smart enough and not smart enough

If average humans are smart enough, then we should also know how the moral absolutes are derived from the physics of the universe and all humans should agree on them, including psychopaths. This seems false. Humans do not all agree.

If average humans are not smart enough, then it's an implausible coincidence that your values are the ones the SuperAGI will know are true. How do you know that you aren't wrong about the objective reality of morality?

If average humans are right on the line between smart enough and not smart enough, isn't that itself an implausible coincidence?

Dan · 1y · 1 · -1

But if moral relativism were not true, where would the information about what is objectively moral come from? It isn't coming from humans, is it? Humans, in your view, simply became smart enough to perceive it, right? Can you point out where you derived that information from the physical universe, if not from humans? If the moral information is apparent to all individuals who are smart enough, why isn't it also apparent to everyone where that information comes from?

Dan · 1y · 43

Psychologically normal humans have preferences that extend beyond our own personal well-being because those social instincts objectively increased fitness in the ancestral environment. These instincts produce sometimes conflicting motivations, and moral systems are attempts to find the best compromise among them.

Best for humans, that is.    

Some things are objectively good for humans. Some things are objectively good for paperclip maximizers. Some things are objectively good for slime mold. A good situation for an earthworm is not a good situation for a shark.

It's all objective. And relative. Relative to our instincts and needs.

Dan · 1y · 30

A pause, followed by few immediate social effects and slower-than-expected AGI development, may make things worse in the long run. Voices of caution may be seen to have 'cried wolf'.

I agree that humanity doesn't seem prepared to do anything very important, AI-safety-wise, in six months.

Edited: clarity.

Dan · 1y · -10

"I would not recommend new aspiring alignment researchers to read the Sequences, Superintelligence, some of MIRI's earlier work or trawl through the alignment content on Arbital despite reading a lot of that myself."

I think aspiring alignment researchers should read all of the things you mention. Dismissing them feels extremely premature; we risk throwing out and having to rediscover concepts at every turn. I think Superintelligence, for example, would still be very important to read even if it is dated in some respects!

We shouldn't assume too much based on our current extrapolations inspired by the systems making headlines today. 

GPT-4's creators and early evaluators already want to take things in a very agentic direction, which may yet negate some of the apparent datedness.

"Equipping language models with agency and intrinsic motivation is a fascinating and important direction for future research" - OpenAI in Sparks of Artificial General Intelligence: Early experiments with GPT-4.
