All of Logos01's Comments + Replies

The software needs a way to track who is responding to which questions, because many of the questions relate to one another. It does that without requiring logins by using the ongoing HTTP session. If you leave the survey idle, the session will time out. You can suspend a survey session by creating a login, which the software will then use to associate your answers.

The cookies are needed because it's not a single server but is load-balanced across multiple web servers (a multi-active HA architecture). This survey isn't necessarily the only thing these servers will ever be running.

(I didn't write the software but I am providing the physical hosting it's running on.)
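A minimal sketch of how anonymous session tracking like this typically works (this is my own illustration, not the survey's actual code; all names are hypothetical): answers are keyed by a server-issued session ID that would be carried in a cookie, and idle sessions expire after a timeout.

```python
import time
import uuid

# Hypothetical sketch: anonymous survey sessions keyed by a
# server-issued ID (the value a session cookie would carry),
# with an idle timeout instead of logins.
class SessionStore:
    def __init__(self, timeout_seconds=1800):
        self.timeout = timeout_seconds
        self.sessions = {}  # session_id -> [last_seen, answers]

    def new_session(self):
        sid = uuid.uuid4().hex  # value the session cookie would carry
        self.sessions[sid] = [time.time(), {}]
        return sid

    def record_answer(self, sid, question, answer):
        last_seen, answers = self.sessions[sid]
        if time.time() - last_seen > self.timeout:
            del self.sessions[sid]  # idle session timed out
            raise KeyError("session expired")
        answers[question] = answer
        self.sessions[sid][0] = time.time()  # refresh the idle timer

store = SessionStore(timeout_seconds=1800)
sid = store.new_session()
store.record_answer(sid, "q1", "yes")
store.record_answer(sid, "q2", "no")
```

With load balancing, the only extra requirement is that the balancer route a given session ID back to the same backend (or that the store be shared), which is what the cookie makes possible.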

Even if he threw out the data, I have recurring storage snapshots happening behind the scenes (on the backing store for the OSes involved).

Do you have any good evidence that this assertion applies to Cephalopods?

Cephalopods in general have been shown to be rather intelligent. Some species of squid even engage in courtship rituals. Given that they engage in courtship and predator/prey responses, and have been shown to respond to simple irritants with aggression, there is no good reason to assume they do not experience at the very least the emotions of lust, fear, and anger.

(Note: I model "animal intelligence" in terms of emotional responses; while... (read more)

I agree, but I'm not sure the examples you gave are good reasons to assume the opposite. They're certainly evidence of intelligence, and there are even signs of something close to self-awareness (some species apparently can recognize themselves in mirrors). But emotions are a rather different thing, and I'm rather more reluctant to assume them. (Particularly because I'm even less sure about the word than I am about "intelligence". But it also just occurred to me that between people emotions seem much easier to fake than intelligence, which stated the other way around means we're much worse at detecting them.)

Also, the reason I specifically asked about Cephalopods is that they're pretty close to as far away from humans as they can be and still be animals; they're so far away we can't even find fossil evidence of the closest common ancestor. It still had a nervous system, but it was very simple as far as I can tell (flatworm-level), so I think it's pretty safe to assume that any high-level neuronal structures have evolved completely separately between us and cephalopods. Which is why I'm reluctant to just assume things like emotions, which in my opinion are harder to prove.

On the other hand, this means any similarity we do find between the two kinds of nervous systems (including, if demonstrated, having emotions) would be pretty good evidence that the common feature is likely universal for any brain based on neurons. (Which can be interesting for things like uploading, artificial neural networks, and uplifting.)
  • Be comfortable in uncertainty.

  • Do whatever the better version of yourself would do.

  • Simplify the unnecessary.

Now imagine a "more realistic" setting where humans went through a singularity (and possibly coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict.

There is Friendliness and there is Friendliness. Note: Ambivalence or even bemused antagonism would qualify as Friendliness so long as humans were still able to determine their own personal courses of development and progress.

An AGI that had as its sole ambition the prevention of other AGIs and unFriendly scenarios would allow a lot of w... (read more)

"If it weren't for my horse, I never would've graduated college." >_<

An omnipotent omnibenevolent being would have no need for such "shorthand" tricks to create infinite worlds without suffering. Yes you could always raise another aleph level for greater infinities; but only by introducing suffering at all.

Which violates omnibenevolence.

I don't buy it. A superhuman intelligence with unlimited power and infinite planning time and resources could create a world without suffering even without violating free will. And yet we have cancer and people raping children.

Oh, it did try. Unfortunately, Adam exercised his free will in the wrong way. Better luck next universe.

I am thiiiiiiiiis confident!

I'm surprised to see this dialogue make so little mention of the material evidence* at hand with regard to the specific claims of Christianity. I mean: a god which was omnipotent and omnibenevolent would surely create a world with less suffering for humanity than what we conjecture an FAI would orchestrate, yes? Color me old-fashioned, but I assign the logically** impossible a zero probability (barring, of course, my being mistaken about logical impossibilities).

* s/s//
** s/v/c/

See Plantinga's free will defense for human evils and the variant for natural evils; it defuses the logical argument from evil. (Of course, it does this by postulating 'free will', whatever that is, but I don't think free will is nearly as clear-cut a p=~0 as the existence of evils...)

but then changes its mind and brings us back as a simulation."

This is commonly referred to as a "counterfactual" AGI.

Indeed. Which is why happiness is not a terminal value.

Yes, they do. And that's the end of this dialogue.

(EDIT: By end of this dialogue I meant that he and I were at an impasse and unable to adjust our underlying assumptions to a coherent agreement in this discussion. They are too fundamentally divergent for "Aumanning.")

Indeed. But they do demonstrate the principle in question.

The principle you're trying to demonstrate is that one shouldn't fear changing one's substrate since it's already happening. So, no, they don't.

Actually, it's more complicated than that. Not just water atoms: over time your cellular composition changes -- the ratio of cancerous to non-cancerous cells, the ratio of senescent to non-senescent cells; the physical structures of the brain itself change.

Neurogenesis does occur in adults -- so not even on a cellular level is your brain the same today as it was yesterday.

Furthermore -- what makes you confident you are not already in a Matrix? I have no such belief, myself. It is too implausible that we are in the parent of all universes, given that physics simulations work.

Note that neither of these developments is generally considered good.

Missed that about the class. Makes a difference, definitely.

I'm not really sure what non-local phenomena are [...]

Two options: trust the assertions of those who are sure, or learn of them for yourself. :)

1 v 2 -- is your "meat" persistent over time? (It is not).

2 v 3 are non-differentiable -- 2 is 3.

4 is implied by 2/3. It is affirmed by physics simulations that have atomic-level precision, and by research like the Blue Brain project.

5 is excluded by the absence of non-local phenomena ('psychic powers').

I agree that my meat does not persist over time. The class of patterns of information-flow that can occur within meat includes the pattern of information-flow that occurs within your meat. 3 therefore asserts that I am you, in addition to being me. 2 does not assert this. They seem like different claims to me, insofar as any of these claims are different from the others. I'm not really sure what non-local phenomena are, or what they have to do with psychic powers, or what they have to do with the proper referent for "I".

A change of substrate occurs daily for you. It's just of a similar class. What beyond simple "yuck factor" gives you cause to believe that a transition from cells to silicon would impact your identity? That it would look different?

No, it doesn't. You could argue that there's a renewal of atoms (most notably water), but swapping water atoms doesn't have physical meaning, so... no. Heck, even cut-and-paste transportation doesn't change substrate. The "yuck factor" I feel causes me to doubt this: if an em of me were created during my sleep, what probability would I assign to waking up as silicon, or as wetware? I'm totally not sure I can say 1/2.

Scientific truths include the measurement of net harm to society for any given action -- which then impacts utilitarian consequentialist morals. ("It's unjust to execute anyone. Ever.")

Scientific truths include observations as to what occurs "in nature", which then inform naturalistic morals. ("It's not natural to be gay/left-handed/brilliant.")

Scientific truths include observations about the role morality plays in those species we can observe to possess it, thereby informing us practically about what actions or inactions or r... (read more)

As I often say; you are not your meat. You are the unique pattern of information-flow that occurs within your meat. The meat is not necessary to the information, but the information does require a substrate.

Consider the following set of statements: 1) "I am my meat." 2) "I am the unique pattern of information-flow that occurs within my meat." 3) "I am the class of patterns of information-flow that can occur within meat, of which this unique pattern is one example." 4) "I am the class of patterns of information-flow that can occur within any substrate, of which this unique pattern is one example." 5) "I am all the matter and energy in the universe." What sorts of experiences would constitute evidence for one of them over the others?

Moral truths which ignore scientific truths are invalid.

True, but scientific truths that imply moral truths are very, very thin on the ground. (I personally doubt there are any scientific truths dispositive of most actually contested moral issues).

"Entities must not be replicated beyond necessity". Both interpretations violate this rule. The only question is which violates it more. And the answer to that seems to be one purely of opinion.

So throwing out the extra stuff -- they're using exactly the same math.

Not all formalizations that give the same observed predictions have the same Kolmogorov complexity, and this is true even for much less rigorous notions of complexity.

Sure. But MWI and CI use the same formulae. They take the same inputs and produce the same outputs.

Everything else is just that -- interpretation.

One simple thing it could do is simply use trial division. But another could use some more complicated process, like say brute force searching for a generator of (Z/pZ)*.

And those would be different calculations.

In this case, the math bei

... (read more)
The interpretation in this context can imply unobserved output. See the discussion with dlthomas below. Part of the issue is that the interpretation isn't separate from the math.

and dramatically simpler than the Copenhagen interpretation

No, it is exactly as complicated. As demonstrated by its utilization of exactly the same mathematics.

. It rules out a lot of the abnormal conclusions that people draw from Copenhagen, e.g. ascribing mystical powers to consciousness, senses, or instruments.

It is not without its own extra entities of an equally enormously additive nature, however; and those abnormal conclusions are as valid from the CI as quantum immortality is from MWI.

-- I speak as someone who rejects both.

Not all formalizations that give the same observed predictions have the same Kolmogorov complexity, and this is true even for much less rigorous notions of complexity. For example, consider a computer program that when given a positive integer n, outputs the nth prime number. One simple thing it could do is simply use trial division. But another could use some more complicated process, like say brute force searching for a generator of (Z/pZ)*. In this case, the math being used is pretty similar, so the complexity shouldn't be that different. But that's a more subtle and weaker claim.
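To make the trial-division versus generator-search contrast concrete, here is a sketch (my own illustration, not from the thread; the function names are mine) of two primality tests that always agree in their outputs while doing quite different kinds of internal work:

```python
# Two primality tests with identical observable behavior but different
# internal descriptions -- the situation described above: same outputs,
# different complexity of formalization.

def is_prime_trial(n):
    """Trial division: check divisors up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_prime_generator(n):
    """Lucas-style test: n > 2 is prime iff some g generates (Z/nZ)*,
    i.e. has multiplicative order exactly n - 1 mod n."""
    if n < 2:
        return False
    if n == 2:
        return True
    for g in range(2, n):
        x, k = g % n, 1
        while x != 1 and k < n:  # compute the order of g mod n
            x = (x * g) % n
            k += 1
        if x == 1 and k == n - 1:
            return True  # g is a generator, so n must be prime
    return False

def nth_prime(n, is_prime):
    """Return the nth prime using the supplied primality test."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if is_prime(candidate):
            count += 1
    return candidate
```

Both give `nth_prime(5, ...) == 11`, yet a full description of the second drags in modular arithmetic and group structure that the first never mentions -- same predictions, different descriptive complexity.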

There is nothing circular about the definition -- merely recursive. "GNU" stands for "GNU is Not UNIX".

As soon as you observe two things to directly interact with one another, you may safely assert that both exist under my definition.

This is, frankly, not very complicated to figure out.

Recursive definitions must bottom out at some point. The ones that do not are called circular. You didn't say so before. Now, we two are interacting (I hope), so we do exist, after all? And what about the characters in the virtual world of a computer game I mentioned before? I certainly saw them interacting. So sorry for my stupidity.
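The disagreement here is essentially about well-founded recursion. A minimal illustration (mine, not from the thread): a GNU-style definition expands forever unless you impose a base case, and it is the base case that keeps a recursive definition from being circular in practice.

```python
# A recursive "definition" in the GNU style. Without the depth limit
# (the base case), expansion would never bottom out -- which is what
# makes a recursive definition circular in practice.

def expand_gnu(depth):
    if depth == 0:
        return "GNU"  # base case: expansion bottoms out here
    return expand_gnu(depth - 1) + "'s Not Unix"
```

Each extra level of `depth` unrolls the acronym one more time; with no base case, the expansion (like the argument about what "exists") never terminates.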

For example, would you say that software "exists"?

No. But it is real. Software is a pattern by which electrons, magnetic fields, or photochemically-active media are constrained. The software itself is never a thing you can touch, hold, see, smell, or taste; it is never at any point capable of directly interacting with anything. Just like you and me; we are not our bodies, nor our brains, nor the electrons or chemicals that flow through those brains. We are patterns those things are constrained by. I am the unique pattern that, in times ... (read more)

Do pebbles actually exist? But they are composed from quarks,

I refer to this as the Reductionist Problem of Scale. "Psychology isn't real because it's all just biology. Biology isn't real because it's all just chemistry. Chemistry isn't real because it's all just Physics." I don't see this as so much of a 'minefield' as a need to recognize that "scale matters". In unaided-human-observable Newtonian space, there is no question that pebbles are "totally a thing" -- they are. You can hold one in your hand. You can touch one... (read more)

Hmm. Under your definition, "to exist, a thing must directly interact in some fashion with other things which exist". For this to be non-circular, you must specify at least one thing that is known to exist. I thought, this one certainly-known-to-exist thing is myself. If you say that under your definition I don't exist, then what can be known to exist and how can it be known to do so?
  • Adding \s\s before your \n will let you do newlines in Markdown syntax.

Thank you.

I would be interested in knowing what resources you used for this sequence.

As an autist, there is a huge swath of innate skills that 'normal' people possess which I can only emulate. Social success, for me, is indistinguishable from Dark Arts skill.

Both require powers; the second involves using them unethically.

To start with, I would recommend (in the following order):

  • Thinking, Fast and Slow -- Kahneman

  • Influence: Science and Practice -- Cialdini

  • How We Decide -- Lehrer

  • How to Win Friends and Influence People -- Carnegie

  • Nudge -- Thaler and Sunstein

Cialdini and Carnegie have a bad habit of not citing sources, so you may want to take any unsubstantiated claims with a grain of salt. This list is not comprehensive. If anyone else would like to add recommendations for books or particularly informative studies, I would definitely appreciate it.

In addition to reading, experience in dealing with people is very important for things like this. If you are not currently employed, I would recommend getting a job in sales. This will give you a chance to practice and experiment in a relatively safe environment. Additionally, I have heard that unusual behavior is more accepted in bars, so that might be worth looking into (I'm under 21 and live in America, so that is not really an option for me; as such, bear in mind that this is secondhand advice). Finally, if you are particularly skilled in some subject area, you may want to consider tutoring. In addition to bringing in money and helping someone else, this will allow you to experience being in a high-status situation.

Once again, the list of recommended experiences is not comprehensive. I would welcome any additional suggestions.

Yes, but I don't think this uses the word 'exist' in the same way.

I'd say not. I tend to use two independent terms when discussing the nature of a thing's existence; I will discuss first whether or not something is real; and then whether or not that real thing exists.

To be real, a thing must be an accurate description of some pattern of behavior that things which exist conform to. (I realize this is dense/inscrutable; more in a bit.) To exist, a thing must directly interact in some fashion with other things which exist; it must be 'instantiable'.

So ... (read more)

I'm not convinced this distinction holds up all that well. For example, would you say that software "exists"? How about supply functions? Nations? Boeing 747s? People? Force fields? Edit: yes, what gRR said.
This is a philosophical mire. Do pebbles actually exist? But they are composed from quarks, electrons, etc, and these are in principle indistinguishable from one another, so a pebble is only defined by relations between them, doesn't it make the pebble only 'real'? On the other hand, when I play a computer game, do the various objects in the virtual world exist? Presumably, yes, because they interact with me in some fashion, and I exist (I think...). What if I write a program to play for me and stop watching the monitor. Do they stop existing?
An excellent and useful distinction.

Then you do not mean that

pleasure is the "measure of utility". That is; utility is pleasure; pleasure is utility.

Eudaimonic pleasure -- happiness -- is of such a nature that wireheading would not qualify as valid happiness/pleasure. It would be like 'empty calories': tasty but unfulfilling.

So no, it's not that I don't mean that 'pleasure is the "measure of utility"' is the mainstream consensus view on LessWrong. I do mean that, and I believe it to be so. "Hedons" and "utilons" are used interchangeably here.

So you do not mean that LWers hold that pleasure (by which I mean the standard definition) is the measure of utility, and that these people would wirehead and are therefore wrong.

But it's a misconstrual of eudaimonia to think it reduces to pleasure, and a misuse of 'hedonism' to refer to goals other than pleasure.

This is simply not true. Eudaimonia is essentially Epicurean hedonism, as contrasted with Cyrenaic.

I think we're better to follow Aristotle than Epicurus in defining eudaimonia. It's at least the primary way the word is used now. Being a good human is just not a sort of pleasure.
Looking only at the wiki page, Epicurean moral thought doesn't look like what I remember from reading Aristotle's Ethics. But it's been a while.
Definitely just mincing words here, but... Hedonism and eudaimonia can both be considered types of 'happiness' - thus we talk about "hedonic well-being" and "eudaimonic well-being", and we can construe both as ways of talking about 'happiness'. But it's a misconstrual of eudaimonia to think it reduces to pleasure, and a misuse of 'hedonism' to refer to goals other than pleasure.

(I'm thinking of this and this.)

Eudaimonic hedonism is still a form of hedonism.

(EDIT: Specifically, it's Epicurean as compared to Cyrenaic.)

I see. Then you do not mean that is the consensus view here at LW. Since after all, the consensus view here is that wireheading is a bad idea.
That seems entirely wrong. In fact, I think "eudaimonic hedonism" is just a contradiction in terms. Normally eudaimonic well-being is contrasted with hedonistic well-being. ETA: Maybe you were thinking, "Eudaimonist utilitarianism is still a form of utilitarianism"?

intrinsic values are values that a thing has merely by being what it is.

My question from the outset was "what's the use of happiness?" Responding to that with "its own sake" doesn't answer my question. To say that 'being useful is useful for its own sake' is to make an intrinsic-utility statement.

We -- or rather I -- framed this question in terms of utility from the outset.

Now -- hedonism is the default consensus view here on LW (obviously I am a dissenter. My personal history as being clinically anhedonic maaa... (read more)

Swimmer963 (Miranda Dixon-Luinenburg):
My answer to this would be that happiness doesn't necessarily have any value outside of human brains. But that doesn't matter. For most people, it's one of those facets of life that is so basic, so integrated into everything, that it's impossible not to base a lot of decisions on "what makes me happy." (And variants: what makes me satisfied with myself, what makes me able to be proud of myself...I would consider these metrics to be happiness-based even if they don't measure just pleasure in the moment.) You can try to make general unified theories about what should be true, but in the end, what is true is that human brains experience a state called happiness, and it's a state most people like and want, and that doesn't change no matter what your theory is.
What? Really? (I'm thinking of this and this.)
Thanks for the link. Of course I should have checked that... I'd like to point out that you find this in the second paragraph: "For an eudaemonist, happiness has intrinsic value."

Given the rest of what you've said, and my attachment to happiness as self-evidently valuable, a broader conception of "happiness" (as in eudaimonia above) may avoid adverse outcomes like wireheading (assuming it is one). As other commenters here have noted, there is no single definition anyway. You might say the broader it becomes, the less useful. Sure, but any measure would probably have to be really broad -- like "utility".

When I said I don't think 'intrinsic worth' is a thing, it's because I was identifying it with utility, and... I guess I wasn't thinking of (overall) utility as a 'thing' because to me the concept is really vague and I just think of it as an aggregate -- an aggregate of things like happiness that contribute to utility.

I mentioned how, if you're going to call anything a terminal value, happiness seems like a good one. Now I don't think so: you seem to be saying that you shouldn't call (edit: aren't justified in considering) anything a terminal value other than utility itself, which seems reasonable. Is that right? More to the point: I'm not sure; it now seems to me it oughtn't to. Maybe another Less Wronger can contribute more, though not me.

I think you're still missing the point.

Which is funny, because I am increasingly coming to the same conclusion regarding your integration of my statements: you respond to me with essentially the same "talking points", in a manner that shows you haven't contemplated that merely repeating yourself isn't going to make me consider your old point any more relevant than it was the first time I offered a rebuttal with new informational value.

At some point, I have learned that in such dialogues the only productive thing ... (read more)

Good point. And I think I'll have to exit too, because I have the feeling that I'm doing something wrong, and I'm frankly too tired to figure that out right now. Just one question. "Declaring a thing to be another thing does not make it that thing. Brute fiat is insufficient to normative evaluations of intrinsic worth." Among other things I may be confused about, I'm (still) confused what intrinsic worth might be. Since I don't (currently) think 'intrinsic worth' is a thing, it seems to me that it is just the nature of a terminal value that it's something you choose, so I don't see the violation. EDIT: Edited statement edited out.

However, Kidder's depictions of Farmer's personality portray him as a happy man. Perhaps the book will be of some help, if you indeed have not yet read it.

I'm quite certain that the vast majority of people who ever encountered me in meatspace would make the mistake of thinking that I am a happy person. I laugh, I smile, I go through all of the motions. I am upbeat and concerned with the wellbeing of others. I am patient to a fault, and nigh unto never show any signs of any kind of being foul-mannered or intemperate.

Those who know me when I am ... myself -- know a very different person. They are few.

This is still in line with Kidder's depiction of Farmer. In one scene, Farmer has dinner with two close colleagues with whom he's worked a long time. Kidder has been invited by Farmer, but compared to his usual jovial self, Farmer blows up at one of his colleagues, berating him over how they are not serving first the people who need them most (the poor). Kidder, shocked at how Farmer so blatantly torments and manipulates his colleague into capitulating to his wishes, asks the other colleague whether this is normal. She responds, "You think that was bad? What he was doing to Jim was nothing. On a scale of one to ten, that was about a five." Regardless, the book both educates and entertains; I recommend at least checking it out.

I also notice a correlation between times I am unhappy,

One of the many things I dislike about the English language is that it does not readily acknowledge a middle ground between "happy" and "unhappy".

I consider my normative state to be just under or around the 'happy' threshold, which I'd consider as between happiness and unhappiness. Happiness essentially equates to a certain chemical balance in the brain, and the same holds true for unhappiness. When the brain releases neurotransmitters equitably, I'd postulate the brain's chemical balance to reflect neutral emotions. As an aside, I've heard genuinely, innocently laughing releases endorphins just as effectively as exercise; what do you emotively experience when these endorphins are released? If you want to hack happiness, exercise or some media you find consistently hilarious might work through pure chemistry. (Note: I may be mistaken in the neuroscience, though I doubt it; I'm working on a piece of paper that declares proficiency in the field.)

So of course you can call happiness a terminal value.

I can call my cat a Harrier Jet, too. Doesn't mean she's going to start launching missiles from under the wings she doesn't have.

Or, if maximal utility doesn't include increased happiness, you're doing it wrong, assuming happiness is what you value.

You're confusing arbitrary with intrinsic. To qualify as a terminal value, a thing must be self-evidently valuable; that is, it must be inherently 'good'. More of it must always be 'better' than less of it. I know this to be false of happiness/pleasure;... (read more)

I think you're still missing the point. You can call happiness a terminal value, because you decide what those are. I think you are confused here; what do you mean by inherently 'good'? Why must more of X always be better than less of it?

Does this resolve it?: yes, happiness =/= utility. I never claimed it was, and I don't think anyone did. But among all the things that, when aggregated, = your 'utility function', happiness (for most people) is a pretty big thing. And it is aggregate utility, and only aggregate utility, that would always be better with more. Then, I suppose happiness isn't a terminal value. I think I was wrong. The only "terminal" value would be total utility... but happiness is so fundamental and "self-evidently valuable" to most people that it seems useful to call such a thing a "terminal value" if indeed you're going to call anything that.

P.S. I think you think you're saying something meaningful when you say "useful". I don't think you are. I think you're just expressing the idea of an aggregate utility, the sum. If not, what do you mean?

EDIT: This threw me for a loop: "I know this to be false of happiness/pleasure; I know that there is such a thing as "too happy"." Obviously, if happiness is a terminal value, you're right that you can't be too happy. I think I'm either confused or tired or both. And if it so happens that in reality people don't desire, upon reflection, to maximize happiness because there's a point at which it's bad, then I understand you; such a person would be inconsistent to call happiness a terminal value in such a case.

Why do you think you can have too much happiness? (Think of some situation.) Presumably there's some trade-off... Now consider someone else in that same situation. Would someone else also think they have too much happiness in that situation? Because if that's not the case, you just have different terminal values. Ultimately, someone may judge their happiness to be most important to them. You can say, 'no, the

Right now some people prefer happiness.

This is handwaving. That is; you use a description to fulfill the role of an explanation.

Many of the people who prefer happiness also endorse desiring to be happy

This is also a description, not an explanation.

No justification is required for preferring one's preferences.

... I cannot help but find this to be a silly assertion. "That's the default"? That's just... not true.

If you keep asking "Why?" enough you are bound to end up at the bottom level terminal goals from which other ins

... (read more)
It seems to me as if you view terminal goals as universal, not mind-specific. Is this correct, or have I misunderstood? The point, as I understand it, is that some humans seem to have happiness as a terminal goal. If you truly do not share this goal, then there is nothing left to explain. Value is in the mind, not inherent in the object it is evaluating. If one person values a thing for its own sake but another does not, this is a fact about their minds, not a disagreement about the properties of the thing. Was this helpful?

This is just what they happen to do.

This is nonsensical. Do we always conform to patterns merely because they're the patterns we always adhered to, unquestioningly? The question is being asked now.

If there is no decent answer -- then what justifies this article?

Right now some people prefer happiness. Many of the people who prefer happiness also endorse desiring to be happy, and as such they right now prefer not to self-modify away from desiring happiness. No justification is required for preferring one's preferences. That's the default. If you keep asking "Why?" enough, you are bound to end up at the bottom-level terminal goals from which other instrumental goals may be derived. Agents don't need to justify having terminal goals to you.
(Apologies for my earlier comment. I've been in a negative emotional funk lately, and at least I am more self-aware because of that comment and its response. Anyway --)

I'm a little confused about why you think it's hand-waving or question-begging to call happiness a 'terminal value'. Here's why. Your utility function is just a representation of what you want to increase. So if happiness is what you want to increase, it ought to be represented (heavily) in your utility function, whatever that is.

As far as I know, "utility" is a (philosophical) explication -- a more precise and formalized way of talking about something. This is a concept I learned from some metalogic: logic is an "explication", see, that hopes to improve on intuitive reasoning based on intuitively valid inferences. Logical tautologies like P-->P aren't themselves considered to be true; they're only considered to represent truths, like 'if snow is white then snow is white'. All of which is supposed to remind you that just because you dress something up in formalism doesn't mean you don't have to refer back to what you're trying to represent in the first place, which may be mundane in the sense of 'everyday'.

So of course you can call happiness a terminal value. Your values aren't constructed by your utility function; they're described by it. In my opinion you're taking things backward. Or, if maximal utility doesn't include increased happiness, you're doing it wrong, assuming happiness is what you value. [Note: was edited.]

I meant, borrowing your words, are you consumed with a driving ambition or devotion to achieving the uttermost limits of what you could achieve or become, utilizing the maximum of your potential to impact the world? Are you always striving for something you don't have; wishing to be more than you are in every possible sense of the word... including the efficacy with which you strive for more?

Not as much as I should be.

And if so, what are you doing about it?

1) Completely overhauling my professional capacity and career-path. In the last two years I've... (read more)

Sounds like the right track. To answer your earlier question: Pretty much. I'd rather do what I want than pursue fuzzies. The cake is a lie.
Have you read Mountains Beyond Mountains by Tracy Kidder? It's a non-fiction book recording the story of Dr. Paul Farmer, a tremendously benevolent epidemiologist who shares your worldview, does all he can to medically assist the poor (specifically, the Haitian poor), and still cannot meet the standards he sets himself. However, Kidder's depictions of Farmer's personality portray him as a happy man. Perhaps the book will be of some help, if you indeed have not yet read it.

Not fulfilling the potential itself, but rather the capacity to do so (which can only properly be measured by the actualization of / acting upon said capacity). As to why -- well, fundamentally it's the notion that maximized instrumentality is the maximally optimal instrumental state. From there the question becomes: "Is maximized instrumentality useful?"

That is a "self-proving" terminal value. One need only ask the question to see that it implies its answer. "Is being useful useful?" Well... yes. Whatever it is you want to ... (read more)

Surely a self-proving value is one where the question "Is X valuable?" is self-proving?

Emotion dressed up as pseudo-intellectualism. How do I know that? Because the answer is so supremely obvious.

... Is there maybe some other manner in which I could explain that I was revealing my emotional biases in order to get them out of the way of the dialogue that would be more effective for you?

... What exactly is so baffling? People want to be happy for its own sake

Why? The position is alien to me.

but in the end you're going to be faced with the fact that it's just a terminal value and you're going to have a hard time "explaining"

... (read more)
The only useful answer here seems to be giving a causal explanation for why humans have the preferences that they have. Something to do with the folks who have different preferences not living long enough to get laid a lot. There isn't any reasoning that people need to execute based on more fundamental principles of what to desire. This is just what they happen to do. It is rather common to your fellow humans.

That question segfaults in my parser.

I meant, borrowing your words, are you consumed with a driving ambition or devotion to achieving the uttermost limits of what you could achieve or become, utilizing the maximum of your potential to impact the world? Are you always striving for something you don't have; wishing to be more than you are in every possible sense of the word... including the efficacy with which you strive for more? And if so, what are you doing about it?

"Happiness" is neither to be aimed at nor avoided. Doing what you truly want is to be aimed at, and not avoided.

So then you reject altogether the core premise of the article, which also stated; "Actively want to be happier. Motivation and investment matter."

Of course, I can also note that the only way, from my perspective, to guarantee your maximal significance (in terms of material impact upon the world) -- is to always strive for something you don't have; to wish to be more than you are in every possible sense of the word... incl... (read more)

What are you doing about it?

Might be fun?

Okay. So what?

Sure beats the alternative?

Does it? Why? What is the alternative? How is 'happy' better?

It's compatible with all the good drugs and keeps you off all the bad ones?

I can't help but note that this is post hoc ergo propter hoc reasoning.

Let's engage in a bit of an exercise. I am a thirty year old man. I have never in my life experienced happiness. I have no prospects of becoming happy. Everyone I have ever seen who was happy seemed, in many fundamental ways, contemptible to me: they were complacent, they were 'satisfied... (read more)

Did these people differ in this respect from people who were not happy? And does the post hoc ergo propter hoc thing apply here also? ETA: If "happiness" is the state of living the life you truly want, then if your wants are small, "happiness" will be easily achieved, but if your wants are large, not so easily. You will therefore observe of people that on average, the greater their ambition, the less their happiness; the greater their happiness, the less their ambition. It would be an error to read the wrong causality into this correlation, and attempt to achieve happiness "by lopping off our desires, ... like cutting off our feet, when we want shoes." (Jonathan Swift) "Happiness" is neither to be aimed at nor avoided. Doing what you truly want is to be aimed at, and not avoided.

Alright... after reading through much of this, a certain line struck me over and over again.

Actively want to be happier. Motivation and investment matter.

I have only one question.


(To clarify: I mean: Why be happy? Why want to be happy? How is it useful? What 'good' is happiness?)

I've read your other responses, and while I don't think my experience will assist you in an attempt to feel the emotion, it may assist in your ability to understand the emotion's desirability. I find myself more productive when I'm happy; my mind has less cluttered thoughts, due to less anxiety and cognitive duress (I can only best describe this as a state when my subconscious works overtime on thoughts I'm only barely conscious of, and each time I try to deeply contemplate a new thought, somewhere along the halfway point it's unwillingly relegated to my subconscious as something I'm anxious over pops to the top of my mind). I also notice a correlation between times I am unhappy, and a lowering in my self-confidence. Normally my confidence hovers right below what I would consider 'a level of confidence conducive to hubris', and when happy, it can tend to spill over into this danger zone. When unhappy, my confidence becomes akin to the normative level of confidence I've observed most people whom I've encountered to likely possess. This diminishing of confidence too lessens my productivity, and to reestablish my normative confidence level, I must then end my unhappiness. Thus, for me, over-abundant happiness can be dangerous, but wading just above the happiness threshold gives me clarity of mind and purposeful focus. Hope this helped elucidate happiness's utility for you. Cheers!
It's some people's terminal value. It's okay if it's not yours. Is there a more complicated reason why you feel "fulfilling your potential to impact the world" is important?
Might be fun? Sure beats the alternative? It's compatible with all the good drugs and keeps you off all the bad ones?
I agree with Will Newsome's comment, especially the second paragraph. Looking for "happiness" as something distinct from and on top of living the life you truly want is like going to London and looking for "London". A signpost pointing to "London" is only useful when you are far from it.

Emotional blackmail on LeStrange. Also -- half a year is too long a time period, by far.

Figure that, without time turners but with healing magics and potions, you get an eight-month birth. Rip the kid out of her womb, and heal her back into active duty. You lose her services for maybe a month. (Up to six months in, she's still combat-capable.)

Heal both kid and mother, and there you go. (Also, if we can assume accelerated gestation potions, then things get even sillier. No "downtime" at all. No need for time turners.)

It's the same argument though.

How much money would I have to pay you for you to let me rape you in a way that causes no physical trauma, after dosing you with a drug that prevents you from remembering the hour before or after it?

Would that dollar amount change if I told you I had already given you the drug?

The problem I see is your treatment of this arrangement as a "black box" of you[entering] and you[exiting]. But this is illegitimate. There were ten rounds of you[copy-that-dies] that would also be you[entering].

Have you read EY's writings on timeless decision theory? It seems to me that this is a variation.

So instead of a war, let's look at a potential asteroid strike.

I didn't say that there weren't good reasons for resisting the pointlessly-occurring phenomenon. I said only that it was pointless. Or are you now going to impose fundamental purposefulness and agency onto the very fabric of the cosmos? This gets exceedingly ridiculous. I have never once argued that your usage is invalid. Why do you insist on refusing to recognize mine, despite the legitimacy of the terms and the framing with which I have presented them demonstrating clearly that I was usi... (read more)

It looks like the pair of you are having trouble communicating. Would you like to: * Taboo "pointless", "futile" and "lose", * Hug the query?