Some time ago Jonii wrote:

I mean, paperclip maximizer is seriously ready to do anything to maximize paperclips. It really takes the paperclips seriously.

When I'm hungry I eat, but then I don't go on eating some more just to maximize a function. Eating isn't something I want a lot of. Likewise I don't want a ton of survival, just a bounded amount every day. Let's define a goal as big if you don't get full: every increment of effort/achievement is valuable, like paperclips to Clippy. Now do we have any big goals? Which ones?

Save the world. A great goal if you see a possible angle of attack, which I don't. The SIAI folks are more optimistic, but if they see a chink in the wall, they've yet to reveal it.

Help those who suffer. Morally upright but tricky to execute: James Shikwati, Dambisa Moyo and Kevin Myers show that even something as clear-cut as aid to Africa can be viewed as immoral. Still a good goal for anyone, though.

Procreate. This sounds fun! Fortunately, the same source that gave us this goal also gave us the means to achieve it, and intelligence is not among them. :-) And honestly, what's the sense in making 20 kids just to play the good-soldier routine for your genes? There's no unique "you gene" anyway; in several generations your descendants will be like everyone else's. Yeah, kids are fun, I'd like two or three.

Follow your muse. Music, comedy, videogame design, whatever. No limit to achievement! A lot of this is about signaling: would you still bother if all your successes were attributed to someone else's genetic talent? But even apart from the signaling angle, there's still the worrying feeling that entertainment is ultimately useless, like humanity-scale wireheading, not an actual goal for us to reach.

Accumulate power, money or experiences. What for? I never understood that.

Advance science. As Erik Naggum put it:

The purpose of human existence is to learn and to understand as much as we can of what came before us, so we can further the sum total of human knowledge in our life.

Don't know, but I'm pretty content with my life lately. Should I have a big goal at all? How about you?


When I'm hungry I eat, but then I don't go on eating some more just to maximize a function. Eating isn't something I want a lot of. Likewise I don't want a ton of survival, just a bounded amount every day. Let's define a goal as big if you never get full: every increment of effort/achievement is valuable, like paperclips to Clippy.

Well, paperclip maximizers are satisfied by any additional paperclips they can make, but they also care about making sure people can use MS Office pre-07 ... so it's not just one thing.

Tip: you can shift in and out of superscripts in MS Word by pressing ctrl-shift-+, and subscripts by pressing ctrl-= (same thing but without the shift). Much easier than calling up the menu or clicking on the button!

You know, Clippy was a perfect example of a broken attempt at friendliness.
What's the Mac shortcut?
Wouldn't that make 'ctrl-shift-+' like saying "ATM Machine"?

Accumulate power, money or experiences. What for? I never understood that.

I'm not sure why you don't understand this. It seems like the most straightforward goal to me. My own experience is that certain experiences are self-justifying: they bring us pleasure or are intrinsically rewarding in themselves. Why they have this property is perhaps tangentially interesting but it is not necessary to know the why to experience the intrinsic rewards. Pursuing experiences that you find rewarding seems like a perfectly good goal to me, I don't know why anyone woul...


Accumulate power, money or experiences. What for? I never understood that.

That reminds me of a story (not sure of its historicity, but it is illustrative) about an exchange between Alexander the Great and Diogenes the Cynic:

Diogenes asked Alexander what his plans were. "To conquer Greece," Alexander replied. "And then?" said Diogenes. "To conquer Asia Minor," said Alexander. "And then?" said Diogenes. "To conquer the whole world," said Alexander. "And then?" said Diogenes. "I suppose I s

...

Wasn't this, er, sorta extensively addressed in the Fun Theory Sequence?

Also, neither "save the world" nor "prevent suffering" is a Big Goal. They both have endgames: world saved, suffering prevented. There, you're done; then what?

Not sure. Your post Higher Purpose seems to deal with the same topic, but kinda wanders off from the question I have in mind. Also, I'm writing about present-day humans, not hypothetical beings who can actually stop all suffering or exhaust all fun. Edited the post to replace "never get full" with "don't get full".
Eliezer Yudkowsky
High Challenge, Complex Novelty, Continuous Improvement, and In Praise of Boredom were the main ones I had in mind.

When I'm hungry I eat, but then I don't go on eating some more just to maximize a function. Eating isn't something I want a lot of. Likewise I don't want a ton of survival, just a bounded amount every day.

It is important to note that survival can be treated as a "big goal". For example Hopefully Anonymous treats it that way: if the probability that the pattern that is "him" will survive for the next billion years were .999999, he would strive to increase it to .9999995.

Parenthetically, although no current human being can hold such a...

The cereal-box-top Aristotelian response:

Big goals, as you describe them, are not good. For valuable things, there can be too much or too little; having an inappropriate amount of concern for such a thing is a vice of excess or deficiency. Having the appropriate amount of concern for valuable things is virtue, and having the right balance of valuable things in your life is eudaimonia, "the good life".

Eliezer Yudkowsky
Can you have too much eudaimonia?
The usual story is that it's binary - at each moment, you either have it or you don't. It would explain why Aristotle thought most people would never get there. Over time, I'm sure this could be expressed as trying to maximize something.
Yeah, can f(x) be too equal to 3?

Motivation has always intrigued me; ever since I was a kid, I wondered why I had none. I would read my textbooks until I got bored. I'd ace all my tests and do no homework. Every night I went to sleep swearing to myself that tomorrow would be different, tomorrow I would tell my parents the truth when they asked if I had homework and actually do it. I'd feel so guilty for lying, but I never actually did anything.

I joined the military because I knew I couldn't survive in college the way I'd got through high-school. 10 years later I'm smarter, but still tech...

I'm interested in hearing others' answers to this one. My personal take on it is a firm 'no, it's not an obligation', but it's been a while since I actually thought about the issue, and I'm not sure how much of my reaction is reflexive defensiveness. (I know that I work better when I don't feel obligated-by-society, but that's not much in the way of evidence: My reaction to feeling manipulated or coerced far outweighs my reaction to feeling obligated.)
No. Unless, of course, your 'caring' is ambivalent and you wish to overwrite your will in favour of one kind of 'caring'. Bear in mind, of course, that many things you may push yourself to against your natural inclinations are actually goals that benefit you directly (or via the status granted for dominant 'altruistic' acts). Sometimes the reasoning 'I will be penalised by society or the universe in general if I do not do it' is itself a good reason to care. Like you get to continue to eat if you do it.


You can cheat it by donating sperm (or eggs if you're female) - and easily having 10x as high reproductive success, with relatively little effort.

I don't think I can be content, as long as I know how ignorant I am. See for example.

Also, I'm not sure why you define "big goal" the way you do. How does knowing that eventually you will, or won't, be satiated affect what you should do now?

It doesn't. Maybe the definition was too roundabout, and I should have asked what goals can serve as worthy lifetime projects.

Mine would be "Understand consciousness well enough to experience life from the perspective of other beings, both natural and artificial." (Possibly a subset of "Advance science", though a lot of it is engineering.)

That is, I'd want to be able to experience what-it-is-like to be a bat (sorry, Nagel), have other human cognitive architectures (like having certain mental disorders or enhancements, different genders), to be a genetically engineered new entity, or a mechanical AGI.

This goal is never fully satisfied, because there are always other invented/artificial beings you can experience, plus new scenarios.

I'd like to fly too, but isn't it more like a dream than a goal? How do you make incremental progress towards that?
Knowing what-it-is-like only requires the "like", not the "is". This would be satisfied by e.g. a provably accurate simulation of the consciousness of a bat that I can enter while still retaining my memories. Incremental progress comes about through better understanding of what makes something conscious and how an entity's sensory and computational abilities affect the qualities of their subjectively-experienced existence. Much progress has already been made.
Isn't that not really being a bat, then? You'll never know what it's like to be a bat; you'll only know what it's like for humans who found themselves in a bat body.
It's a bit hard to specify exactly what would satisfy me, so saying that I would "retain my memories" might be overbroad. Still, you get the point, I hope: my goal is to be able to experience fundamentally different kinds of consciousness, where different senses and considerations have different "gut-level" significance.

Difficulty isn't a point against saving the world and helping the suffering as goals. The utility function is not up for grabs, and if you have those goals but don't see a way of accomplishing them you should invest in discovering a way, like SIAI is trying to do.

Also, if you think you might have big goals, but don't know what they might be, it makes sense to seek convergent subgoals of big goals, like saving the world or extending your life.

There are plenty of different aims that have been proposed. E.g. compare: ...with... It appears not to be true that everyone is aiming towards the same future.
Without evidence that their approach is right, for me it's like investing in alchemy to get gold.
If your goal is to get gold and not to just do alchemy, then upon discovering that alchemy is stupid you turn to different angles of attack. You don't need to know whether SIAI's current approach is right, you only need to know whether there are capable people working on the problem there, who really want to solve the problem and not just create appearance of solving the problem, and who won't be bogged down by pursuit of lost causes. Ensuring the latter is of course a legitimate concern.
Vladimir is right, but also I didn't necessarily mean give to SIAI. If you think they're irretrievably doing it wrong, start your own effort.
A quote explaining why I don't do that either: -- Richard Hamming, "You and Your Research"
For now, a valid "attack" on Friendly AI is to actually research this question, given that it wasn't seriously thought about before. For time travel or antigravity, we don't just lack an attack; we have a pretty good idea of why it won't be possible to implement them now or ever, and the world won't end if we don't develop them. For Friendly AI, there is no such clarity or security.
I want to ask "how much thought have you given it, to be confident that you don't have an attack?", but I'm guessing you'll say that the outside view says you don't and that's that.
I didn't mean to say no attack existed, only that I don't have one ready. I can program okay and have spent enough time reading about AGI to see how the field is floundering.
I've grown out of seeing FAI as an AI problem, at least on the conceptual stage where there are very important parts still missing, like what exactly are we trying to do. If you see it as a math problem, the particular excuse of there being a crackpot-ridden AGI field, stagnating AI field and the machine learning field with no impending promise of crossing over into AGI, ceases to apply, just like the failed overconfident predictions of AI researchers in the past are not evidence that AI won't be developed in two hundred years.
How is FAI a math problem? I never got that either.
In the same sense AIXI is a mathematical formulation of a solution to the AGI problem, we don't have a good idea of what FAI is supposed to be. As a working problem statement, I'm thinking of how to define "preference" for a given program (formal term), with this program representing an agent that imperfectly implements that preference, for example a human upload could be such a program. This "preference" needs to define criteria for decision-making on the unknown-physics real world from within a (temporary) computer environment with known semantics, in the same sense that a human could learn about what could/should be done in the real world while remaining inside a computer simulation, but having an I/O channel to interact with the outside, without prior knowledge of the physical laws. I'm gradually writing up the idea of this direction of research on my blog. It's vague, but there is some hope that it can put people into a more constructive state of mind about how to approach FAI.
Wei Dai
Thanks (and upvoted) for the link to your blog posts about preference. They are some of the best pieces of writings I've seen on the topic. Why not post them (or the rest of the sequence) on Less Wrong? I'm pretty sure you'll get a bigger audience and more feedback that way.
Thanks. I'll probably post a link when I finish the current sequence -- by current plan, it's 5-7 posts to go. As is, I think this material is off-topic for Less Wrong and shouldn't be posted here directly/in detail. If we had a transhumanist/singularitarian subreddit, it would be more appropriate.
What you are saying in the last sentence is that you estimate there is unlikely to be an attack for some time, which is a much stronger statement than "only that I don't have one ready", and is actually a probabilistic statement that no attack exists ("I didn't mean to say no attack existed"). This statement feeds into the estimate that the marginal value of investment in the search for such an attack is very low at this time.
That seems to diminish the relevance of Hamming's quote, since the problems he names are all ones where we have good reason to believe an attack doesn't exist.
How long have you thought about it, to reach your confidence that you don't have an attack?

I want to do all of these.

Save the world. A great goal if you see a possible angle of attack, which I don't. The SIAI folks are more optimistic, but if they see a chink in the wall, they're yet to reveal it.

Help those who suffer. Morally upright but tricky to execute: James Shikwati, Dambisa Moyo and Kevin Myers show that even something as clear-cut as aid to Africa can be viewed as immoral. Still a good goal for anyone, though.


Having children holds appeal to me...

I have moved from Advance Science to Save the world, as I have aged.

Nudging the world is not hard, many people have nudged the world. Especially people who have created technology. Knowing what ripples that nudge will cause later is another matter. It is this that makes me sceptical of my efforts.

I know that I don't feel satisfied with my life without a big goal. Too many fantasy novels with an overarching plot when I was young, perhaps. But it is a self-reinforcing meme; I don't want to become someone who goes through life with no thought to the future. Especially as I see that we are incredibly lucky to live in a time where we have such things as free time and disposable income to devote to the problem.

I recently read a history of western ethical philosophy, and the argument boiled down to this: Without God or a deity, human experience/life has no goals or process to work towards and therefore no need for ethics. Humans ARE in fact ethical and behave as though working towards some purpose, so therefore that purpose must exist and therefore god exists.

This view was frustrating to no end. Do humans have to ascribe purpose to the universe in order to satisfy some psychological need?

What is the goal or process supposed to be in the presence of God? Get to heaven and experience eternal happy-fun-time?
Eliezer Yudkowsky
You're not supposed to ask. Hence the phrase semantic stopsign.
Paul Crowley
Something grand-sounding but incomprehensible, like every other God-of-the-gaps answer.
Charitably, the same as the goal in the presence of a Friendly singularity.
The goal in the presence of God is to continue to worship God. Forever. To people actually worshiping God right now, this seems wonderful. Or, at least, they say it does, and I don't see any reason to disbelieve them.
Paul Crowley
Did it even attempt to address goal-seeking behaviour in animals, plants etc?
Only to deny that higher-order goals existed (achieve basic survival, without regard to any ethical system).
Paul Crowley
So it's just another God-of-the-gaps argument: this aspect of human behaviour is mysterious, therefore God. Only it's a gap that we already know a lot about how to close.
The 'God-of-the-gaps' argument is thrown around very frequently where it doesn't fit. No, theists reason that this aspect of human behavior requires God to be fully coherent, therefore God. Instead of just accepting that their behavior is not fully coherent. Evolution designed us to value things, but it didn't (can't) give us a reason to value those things. If you are going to value those things anyway, then I commend your complacency with the natural order of things, but you might still admit that your programming is incoherent if it simultaneously makes you want to do things for a reason and then makes you do things for no reason. (If I sound angry it's because I'm furious, but not at you, ciphergoth. I'm angry with futility. I'll write up a post later describing what it's like to be 95% deconverted from belief in objective morality.)
Sure it did. The reason to value our terminal values is that we value our terminal values. For example, I want to exist. Why should I continue to want to exist? Because if I stop wanting to exist, I'll probably stop existing, which would be bad, because I want to exist. Yes, this is a justificatory loop, but so what? This isn't a rhetorical question. So what? Such loops are neither illogical nor incoherent.
The incoherence is that I also value purpose. An inborn anti-Sisyphus value. Sisyphus could have been quite happy about his task; pushing a rock around is not intrinsically so bad, but he was also given the awareness that what he did was purposeless. It's too bad he didn't value simply existing more than he did. Which is the situation in which I'm in, in which none of my actions will ever make an objective difference in a completely neutral, value-indifferent universe. (If this is a simulation I'm in, you can abort it now I don't like
I know, but assuming you're a human and no aliens have messed with your brain, it's highly unlikely that this value is a terminal one. You may believe it's terminal, but your belief is wrong. The solution to your problem is simple: Stop valuing objective purpose.
Bravo! We came up with this solution simultaneously -- possibly the most focused solution to theism we have. My brain is happy with the proposed solution. I'll see if it works...
I'm updating this thread, about a month later. I found that I wasn't able to make any progress in this direction. (Recall the problem was the possibility of "true" meaning or purpose without objective value, and the solution proposed was to "stop valuing objective value". That is, find value in values that are self-defined.) However, I wasn't able to redefine (reparametrize?) my values as independent of objective value. Instead, I found it much easier to just decide I didn't value the problem. So I find myself perched indifferently between continuing to care about my values (stubbornly) and 'knowing' that values are nonsense. I thought I had to stop caring about value or about objective value .. actually, all I had to do was stop caring about a resolution. I guess that was easier. I consider myself having 'progressed' to the stage of wry-and-superficially-nihilist. (I don't have the solution, you don't either, and I might as well be amused.)
I don't know what to say except, "that sucks", and "hang in there". :)
Thank you, but honestly I don't feel distressed. I guess I agree it sucks for rationality in some way. I haven't given up on rationality though -- I've just given up on [edited] excelling at it right now. [edited to avoid fanning further discussion]
If my experience is any guide, time will make a difference; there will be some explanation you've already heard that will suddenly click with you, a few months from now, and you'll no longer feel like a nihilist. After all, I very much doubt you are a nihilist in the sense you presently believe you are.
It's very annoying to have people project their experiences and feelings on you. I'm me and you're you.
You're right. Sorry.
You are also a non-mysterious human being.
I disagree with this comment. First, I'm not claiming any magical non-reducibility. I'm just claiming to be human. Humans usually aren't transparently reducible. This is the whole idea behind not being able to reliably other-optimize. I'm generally grateful if people try to optimize me, but only if they give an explanation so that I can understand the context and relevance of their advice. It was Orthonormal who -- I thought -- was claiming an unlikely insider understanding without support, though I understand he meant well. I also disagree with the implicit claim that I don't have enough status to assert my own narrative. Perhaps this is the wrong reading, but this is an issue I'm unusually sensitive about. In my childhood, understanding that I wasn't transparent, and that other people don't get to define my reality for me, was my biggest rationality hurdle. I used to believe people of any authority when they told me something that contradicted my internal experience, and endlessly questioned my own perception. Now I just try to ask the commonsense question: whose reality should I choose -- theirs or mine? (The projected or the experienced?) Later edit: Now that this comment has been 'out there' for about 15 minutes, I feel like it is a bit shrill and over-reactive. Well... evidence for me that I have this particular 'button'.
Your objection is reasonable. It is often considered impolite to analyze people based on their words, especially in public. It is often taken to be a slight on the recipient's status, as you took it. As an actual disagreement with Vladimir, you are simply mistaken. In the raw literal sense humans are non-mysterious, reducible objects. More importantly, in the more practical sense in which Vladimir makes the claim, you are, as a human being, predictable in many ways. Your thinking can be predicted with some confidence to operate with known failure modes that are consistently found in repeated investigations of other humans. Self-reports in particular are known to differ from reliable indicators of state if taken literally, and their predictions of future state are even worse. If you told me, for example, that you would finish a project two weeks before the due date, I would not believe you. If you told me your confidence level in a particular prediction you have made on a topic in which you are an expert, then I would not believe you. I would expect that you, like the majority of experts, were systematically overconfident in your predictions. Orthonormal may be mistaken in his prediction about your nihilist tendencies, but Vladimir is absolutely correct that you are a non-mysterious human being, with all that it entails. It gives me a warm glow inside whenever I hear of someone breaking free from that trap.
Eliezer Yudkowsky
How odd. I remember that one of the key steps for me was realizing that if my drive for objective purpose could be respectable, then so could all of my other terminal values, like having fun and protecting people. But I don't think I've ever heard someone else identify that as their key step until now... assuming we are talking about the same mental step. It seems like there's just a big library of different "key insights" that different people require in order to collapse transcendent morality to morality.
That was totally awesome to watch. Thanks byrnema and Furcas!
Cool. :D This helps me understand why my own transition from objective to subjective morality was easier than yours. I didn't experience what you're experiencing because I think my moral architecture sort of rewired itself instantaneously. If these are the three steps of this transition: * 1) Terminal values --> Objective Morality --> Instrumental values * 2) Terminal values --> XXXXXXXXXXXXX --> Instrumental values * 3) Terminal values --> Instrumental values ... I think I must have spent less than a minute in step 2, whereas you've been stuck there for, what, weeks?
Wei Dai
Can you expand on this please? How do you know it's highly unlikely?
First, it doesn't seem like the kind of thing evolution would select for. Our brains may be susceptible to making the kind of mistake that leads one to believe in the existence of (and the need for) objective morality, but that would be a bias, not a terminal value. Second, we can simply look at the people who've been through a transition similar to byrnema's, myself included. Most of us have successfully expunged (or at least minimized) the need for an Objective Morality from our moral architecture, and the few I know who've failed are badly, badly confused about metaethics. I don't see how we could have done this if the need for an objective morality was terminal. Of course I suppose there's a chance that we're freaks.
Wei Dai
I think you're wrong here. It is possible for evolution to select for valuing objective morality, when the environment contains memes that appear to be objective morality and those memes also help increase inclusive fitness. An alternative possibility is that we don't so much value objective morality, as disvalue arbitrariness in our preferences. This might be an evolved defense mechanism against our brains being hijacked by "harmful" memes. I worry there's a sampling bias involved in reaching your conclusion.
Paul Crowley
If it's any consolation, you're likely to be a lot happier out the other side of your deconversion. When you're half converted, it feels like there is a True Morality, but it doesn't value anything. When you're out the other side you'll be a lot happier feeling that your values are enough.
Yeah, with your comment I do see the light at the end of the tunnel. What it has pointed out to me is that while I'm questioning all my values, I might as well question my value of 'objective' value. It should be neurologically possible to displace my value of "objective good" to "subjective good". However, I'm not sure that it would be consistent to remain an epistemological realist after that, given my restructured values. But that would be interesting, not the end of the world.
I don't understand this. Can you say it again with different words? I am specifically choking on "designed" and "reason."
We're the product of evolution, yes? That's what I meant by 'designed'. When I drive to the store, I have a reason: to buy milk. I also have a reason to buy milk. I also have a reason for that. A chain of reasons ending in a terminal value given to me by evolution -- something you and I consider 'good'. However, I have no loyalty to evolution. Why should I care about the terminal value it instilled in me? Well, I understand it made me care. I also understand that the rebellion I feel about being forced to do everything is also the product of evolution. And I finally understand that there's no limit in how bad the experience can be for me as a result of these conflicting desires. I happen to be kind of OK (just angry) but the universe would just look on, incuriously, if I decided to go berserk and prove there was no God by showing there is no limit on how horrible the universe could be. How's that for a big goal? I imagine that somebody who cares about me will suggest I don't post anything for a while, until I feel more sociable. I'll take that advice.
Why would you feel differently about God? It always struck me that if God existed he had to be a tremendous asshole given all the suffering in the world. Reading the old testament certainly paints a picture of a God I would have no loyalty to and would have no reason to care about his terminal values. Evolution seems positively benevolent by comparison.
You shouldn't care about your values because they're instilled in you by evolution, your true alien Creator. It is the same mistake as believing you have to behave morally because God says so. You care about your values not because of their historical origin or specifically privileged status, but because they happen to be the final judge of what you care about.
Is this a fair summary: Or is this closer: I am guessing the former. Feel free to take a good break if you want. We'll be here when you get back. :)
What would you infer from my choice? I honestly cannot tell the difference between the two statements.
Well, the difference is mostly semantic, but this is a good way to reveal minor differences in definitions that are not inherently obvious. If you see them as the same, then they are the same for the purposes of the conversation, which is all I needed to know. :) The reason I asked for clarification is that this sentence: Can be read by some as: To which I immediately thought, "Wait, if it is the reason, why isn't that the reason?" The problem is just a collision of the terms "design" and "reason." By replacing "design" with "cause" and "reason" with "purpose" your meaning was made clear.
Was any argument given for this claim?
Interesting, this is exactly how I felt a week ago. I am the product of western culture, after all. Anyway, if no arguments are provided I can explain the reasoning, since I'm pretty familiar with it. I also know exactly where the error in reasoning was. The error is this: the reasoning assumes that human desires are designed in a way that makes sense with respect to the way reality is. In other words, that we're not inherently deluded or misled by our basic nature in some (subjectively) unacceptable way. However, the unexamined premise behind this is that we were designed with some care. With the other point of view -- that we are designed by mechanisms with no inborn concern for our well-being -- it is amazing that experience isn't actually more insufferable than it is. Well, I realize that perhaps it is already as insufferable as it can be without more negatively affecting fitness. But imagine: we could have accidentally evolved a neurological module that experiences excruciating pain constantly, but is unable to engage with behavior in a way that affects selection, and is unable to tell us about itself. Or it is likely, given the size of mind-space, that there are other minds experiencing intense suffering without the ability to seek reprieve in non-existence. How theism works explains that while theists are making stuff up, they can make up everything to be as good as they wish. On the other hand, without a God to keep things in check, there is no limit on how horrible reality can be.
Interestingly, this is the exact opposite of Zen, in which it's considered a premise that we are inherently deluded and misled by our basic nature... and in large part due to our need to label things. As in How An Algorithm Feels From Inside, Zen attempts to point out that our basic nature is delusion: we feel as though questions like "Does the tree make a sound?" and "What is the nature of objective morality?" actually have some sort of sensible meaning. (Of course, I have to say that Eliezer's writing on the subject did a lot more for allowing me to really grasp that idea than my Zen studies ever did. OTOH, Zen provides more opportunities to feel as though the world is an undifferentiated whole, its own self with no labels needed.)
Eliezer Yudkowsky made quite a good essay on this theme - Beyond the Reach of God.
Without God there's no end game, just fleeting existence.
I am reminded of The Parable of the Pawnbroker. Edit: Original link.
Thanks for the edit to the original comment; I was unsure whether you were arguing for a view or just describing it (though I assumed the latter based on your other comments). Like the statement in the original comment (and like most arguments for religion), this one is in great need of unpacking. People invoke things like "ultimate purpose" without saying what they mean. But I think a lot of people who agreed with the above would say that life is worthless if it simply ends when the body dies. To which I say: If a life that begins and eventually ends has no "meaning" or "purpose" (whatever those words mean), then an infinitely long one doesn't either. Zero times infinity is still zero. (Of course I know what the everyday meanings of "meaning" and "purpose" are, but those obviously aren't the meanings religionists use them with.) Edit: nerzhin points out that Zero times infinity is not well defined. (Cold comfort, I think, to the admittedly imaginary theist making the "finite life is worthless" argument.) I am a math amateur; I understand limit notation and "f(x)" notation, but I failed to follow the reasoning at the MathWorld link. Does nerzhin or anyone else know someplace that spells it out more? (Right now I'm studying the Wikipedia "Limit of a function" page.)
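A worked example (my own illustration, not from the thread; the function name is mine). The reason "zero times infinity" is not well defined is that it is an indeterminate form: a product where one factor tends to 0 and the other to infinity can converge to anything, depending on how the two factors behave together. A quick numerical sketch:

```python
# Three products, each of the form (factor -> 0) * (factor -> infinity)
# as x -> 0+, yet they converge to three different limits: 1, 2, and 0.
# This is why "0 * infinity" is called an indeterminate form.

def products(x):
    return (
        x * (1 / x),        # -> 1
        x * (2 / x),        # -> 2
        (x * x) * (1 / x),  # -> 0
    )

for x in (1e-3, 1e-6, 1e-9):
    print(x, products(x))
```

So an argument of the form "zero worth times infinite duration" can't be settled by arithmetic alone; the answer depends entirely on how the two factors are coupled, which is exactly what "not well defined" means here.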
Strictly speaking, no.
Edit: this comment happens to reply to an out-of-context sentence that is not endorsed by Zachary_Kurtz. Thanks to grouchymusicologist for noticing my mistake. You happen to be wrong on this one. Please read the sequences, in particular the Metaethics sequence and Joy in the Merely Real.
Pretty sure ZK is not endorsing this view but instead responding to the query "Was any argument given for this claim?" Upvoted ZK's comment for this reason.
Thanks, my mistake.
no problem.. it happens