In the 2000s, Eliezer wrote an article about the meaning of life and his opinions on it. Here is a Google Doc copy of it:

https://docs.google.com/document/d/12xxhvL34i7AcjXtJ9phwelZ7IzHZ_xiz-8lGwpWxucI/edit?hl=en_US#heading=h.6fc6c4d29c3f 

In section 2.5, he makes the most important point: he answers the question of what the meaning of life is.

"The sense of "What is the meaning of life?" we're looking to answer, in this section, is not "What is the ultimate purpose of the Universe, if any?", but rather "Why should I get up in the morning?" or "What is the intelligent choice to make?" "

"Can an AI, starting from a blank-slate goal system, reason to any nonzero goals?
Yes."

He then describes how an AI, without any pre-written goals or utility functions, would reason, figure out what needs to be done, and take concrete actions. In doing so, he explains the meaning of life.

Unfortunately, it seems Eliezer himself later rejected this view, based on irrational beliefs.

I want to find other people, besides myself, who agree with the meaning-of-life idea of 2000s-era Eliezer. If you believe in it, please write in the comments. We can work together.

And if you don't agree with that idea of the meaning of life, please share why. You might change my mind.

1 comment:

That reads to me like a version of instrumental convergence (for all possible goals, causing a singularity makes it more feasible to achieve them, or to discover more precisely what they are, so let's figure out how to do that), combined with an assumption that the agent doing this reasoning will care about the outcome of thinking about what goals it "should" have despite starting with "no goals." If it has no goals, why is it even carrying out that calculation? And if I do have some goal of some kind, even if it's just "carry out this calculation," then... so what? If I'm smart enough, I might discover that there exists some other goal that, if I had it, would let me reach a higher score than I can reach given my current scoring metric. Since I don't already have that goal, why would this knowledge motivate me? It might motivate me to shut down other agents with different goals. It might motivate me to make trades with agents of comparable power where we each partially adopt the other's goals as a way of, on net, getting more of what we already want. Otherwise, not so much.


That's all very abstract, though, and on a practical level we still don't have anything like the ability to give an AI a well-specified, stable, comprehensible goal.