I notice overconfidence bias and risk aversion seem to operate in opposite directions. Say there's a 90% chance of something being true: you state that it's 99% likely, and then you'll only bet on it at 9 to 1 odds.
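A quick numeric sketch of how the two biases can cancel: fair "x to 1" odds correspond to a probability of x/(x+1), so 9-to-1 odds match the true 90% even while the stated probability is 99%. (The numbers are just the ones from the example above.)

```python
def fair_odds(p):
    """Fair 'x to 1' odds implied by probability p: the stake ratio p / (1 - p)."""
    return p / (1 - p)

true_p = 0.90    # the actual chance of the claim being true
stated_p = 0.99  # the overconfidently stated probability

# Overconfidence alone would push the bet to ~99-to-1 odds...
assert round(fair_odds(stated_p)) == 99
# ...but risk aversion drags the accepted bet back to ~9-to-1,
# which happens to be the fair odds for the true 90% probability.
assert round(fair_odds(true_p)) == 9
```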
A proposed law to require psychologists who testify in court to dress like wizards:
When a psychologist or psychiatrist testifies during a defendant’s competency hearing, the psychologist or psychiatrist shall wear a cone-shaped hat that is not less than two feet tall. The surface of the hat shall be imprinted with stars and lightning bolts. Additionally, a psychologist or psychiatrist shall be required to don a white beard that is not less than 18 inches in length, and shall punctuate crucial elements of his testimony by stabbing the air with a wand. Whenever a psychologist or psychiatrist provides expert testimony regarding a defendant’s competency, the bailiff shall contemporaneously dim the courtroom lights and administer two strikes to a Chinese gong…
I had a somewhat chaotic phase in my romantic life a few years ago, and I just had the thought that a lot of it could be modeled as a result of non-transitive preferences. Specifically,
C preferred being single to being with A.
C preferred being with W to being single.
C preferred being with A to being with W.
I think all three of us could have been spared some heartache if we had figured out that was what was going on.
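As a sketch, the situation above can be checked mechanically: treat each "prefers X to Y" statement as a directed edge and look for a cycle. (The option labels and the `has_cycle` helper are illustrative, not anything from the original.)

```python
# Hypothetical labels for the three options; the preference statements are the
# ones from the anecdote above.
prefers = [
    ("single", "with_A"),   # C preferred being single to being with A
    ("with_W", "single"),   # C preferred being with W to being single
    ("with_A", "with_W"),   # C preferred being with A to being with W
]

def has_cycle(prefs):
    """Treat each 'prefers X to Y' as a directed edge X -> Y and detect a cycle."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, []).append(worse)

    def can_reach(src, dst, seen):
        for nxt in graph.get(src, []):
            if nxt == dst or (nxt not in seen and can_reach(nxt, dst, seen | {nxt})):
                return True
        return False

    # Non-transitive preferences show up as an option that 'beats' itself.
    return any(can_reach(node, node, set()) for node in graph)

assert has_cycle(prefers)  # the three statements form a cycle
```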
I'm increasingly coming to notice that maintaining a specific, regular sleep pattern is worth making sacrifices for. Specifically, if I go to bed around 10:30 PM and get up around 8 AM, I will wake up feeling energetic, productive and physically good. If I get up even a few hours later, or if I go to bed late but still get up at 8 in the morning, there's a very good chance that I will accomplish basically nothing that day. It's weird how getting the timing precisely right seems to be the single biggest determining factor in how my day will ... (read more)
Have you tried modafinil [http://www.gwern.net/Modafinil]?
6 · Kaj_Sotala · 11y
It's not prescribed in Finland without a special permit from the authorities,
and I don't want to take the risk of trying to obtain something that's
considered an illegal drug.
3 · MileyCyrus · 11y
My sympathies.
2 · AlexSchell · 11y
Do you use an alarm clock? If so, your problem might have less to do with sleep
deprivation (which I don't think should cause the sort of acute effects you
describe) and more with getting up at the wrong time within a sleep cycle. If
you have an iPhone or iPod touch, give Sleep Cycle
[http://www.sleepcycle.com/index.html] a try for avoiding this problem. I think
there are similar apps for different platforms. If you're not using an alarm
clock (or are already using something like Sleep Cycle), I'd be genuinely
surprised.
1 · Kaj_Sotala · 11y
I do use an alarm clock, but after going to bed at the right time for a couple
of evenings, I start to wake up on my own, a little before the clock would
sound. The alarm clock is just there as a backup, and to let me remain
mostly-awake in bed for about 10-20 minutes longer before telling me to actually
get up (as opposed to just getting awake).
ETA: I should specify that if I don't go to bed at the right time, I don't wake
up naturally - well, I do, but so late that I'll feel groggy and generally
unenergetic.
2 · AlexSchell · 11y
Hmm, I still don't know if I should be surprised or not, as I'm having trouble
parsing your last sentence. When you go to bed late, do you not set your alarm
clock? Or do you sleep through your alarm? Or do you wake up naturally (but
groggy) right before the alarm goes off?
2 · Kaj_Sotala · 11y
I have attempted:
A) Going to bed late and setting the alarm at the usual early time
B) Going to bed late and setting the alarm a couple of hours later
C) Going to bed late and not setting an alarm at all
With A, I'll wake to the clock but be groggy. With B I'm not necessarily as
groggy, but still not as energetic as I would have been if I'd gone to bed
early and woken up early. With C I'll wake up naturally at some late time and
feel pretty lethargic.
I was about to say that there are two dimensions here - groggy/neutral/awake and
energetic/neutral/lethargic. Very roughly, A leaves me groggy/neutral, B leaves
me neutral/neutral and C leaves me neutral/lethargic. But that doesn't sound
entirely right, either - all three also tend to leave me with an extra,
unspecified uncomfortable feeling that I can't quite put into words, and which
might be part of what I'm calling "groggy" or "lethargic" in the above. (Going
to bed on time and getting up early leaves me awake/energetic or at least
neutral/energetic, as well as without that extra uncomfortable feeling.)
Summary: Years of life are in finite supply. It is morally better that these be spread among relatively many people rather than concentrated in the hands of a relative few. Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the child has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
The argument would be limited to certain age ranges; an unborn fetus or newborn infant might justly be sacr... (read more)
Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the child has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
As far as I'm concerned it is just because the baby has more years left. If I had to choose between a healthy old person with several expected years of happy and productive life left, versus a child who was terminally ill and going to die in a year regardless, I'd save the old person. It is unfair that an innocent person should ever have to die, and unfairness is not diminished merely by afflicting everyone equally.
Suppose the old person and the child (perhaps better: a young adult) would both
gain 2 years, so we equalize the payoff. What then? Why not be prioritarian at
the margin of aggregate indifference?
0 · A1987dM · 11y
Well, young adults typically enjoy life more*, so...
* I've heard old people saying they wish they could become young again, but I
haven't heard any young people saying they can't wait to become old.
7 · Thrasymachus · 11y
Hello there, I'm the guy who wrote the stuff you linked to.
I think it might be worth noting the Rawlsian issue too. If we pretend life is
in a finite supply with efficient distribution between persons, then something
like "if I extend my life to 10n then 9 other peeps who would have lived n years
like me would not" will be true. The problem is this violates norms about what a
just outcome is. If I put you and nine others behind a veil of ignorance and
offered you an 'everyone gets 80 years' versus 'one of you gets 800, whilst the
rest of you get nothing', I think basically everyone would go for everyone
getting 80. One of the consequences of that would seem to be expecting whoever
'comes first' in the existence lottery to refrain from life extension to allow
subsequent persons to 'have their go'.
If you don't buy that future persons are objects of moral concern, then the
foregoing won't apply. But I think there are good reasons to treat them as
objects of full moral concern (including a 'right'/'interest' in being alive in
the first place). It seems weird (given the B-theory of time) that temporally
remote people count for less, even though we don't think spatial distance is
morally salient. Moreover, we generally intuit that setting up a delayed
doomsday machine that euthanizes all intelligent life painlessly in a few
hundred years is a very bad thing to do.
If you dislike justice (or future persons), there's a plausible aggregate-only
argument (which bears a resemblance to Singer's work). Most things show
diminishing marginal returns, and plausibly lifespan will too, at least after
the investment period: ages 20-40 are worth more than 40-60, etc. If that's
true, and lifespan is in finite supply, then we might get more utility by
having many smaller lives rather than fewer longer ones suffering diminishing
returns. The optimum becomes a tradeoff between minimizing the 'decay' of
diminishing returns and the cost sunk into developing a human being through
childhood and adolescence.
6 · Richard_Kennaway · 11y
You lose me the moment you introduce the moral premise. Why is it better for two
people to each live a million years than one to live two million? This looks
superficially the same sort of question as "Why is it better for two people to
each have a million dollars than for one to have two million?", but in the
latter scenario, one person has two million while the other has nothing. In the
lifetimes case, there is no other person. The moral premise presupposes that
nonexistent people deserve some of other people's existence in the same way
that existing paupers deserve some of other people's wealth.
You may have an argument to that effect, but I didn't see it in my speed-run
through your slides (nice graphic style, BTW, how do you do that?) or in your
comment above. Your argument that we place value on future people only considers
our desire to avoid calamities falling upon existent future people.
Diminishing returns for longer lifespans is only a problem to be tackled if it
happens. The only diminishing returns I see around me for the lifespans we have
result from decline in health, not excess of experience.
0 · Thrasymachus · 11y
The nifty program is Prezi.
I didn't particularly fill in the valuing future persons argument - in my
defence, it is a fairly common view in the literature not to discount future
persons, so I just assumed it. If I wanted to provide reasons, I'd point to
future calamities (which only seem plausibly really bad if future people have
interests or value - although that needn't be on a par with ours), reciprocity
across time (in the same way we would want people in the past to weigh our
interests equal to theirs when applicable, the same applies to us and our
successors), and a similar sort of Rawlsian argument: if we didn't know whether
we would live now or in the future, the sort of deal we would strike would be
for those currently living (whoever they are) to weigh future interests equal
to their own. Elaboration pending one day, I hope!
6 · Kaj_Sotala · 11y
I find this argument incoherent, as I reject the idea of a person at the age of
1 being the same person as they are at the age of 800 - or for that matter, the
idea of a person at the age of 400 being the same person as they are at the age
of 401. In fact, I reject the idea of personal continuity in the first place, at
least when looking at "fairness" at such an abstract level. I am not the same
person as I was a minute ago, and indeed there are no persons at all, only
experience-moments. Therefore there's no inherent difference in whether someone
lives 800 years or ten people live 80 years. Both have 800 years worth of
experience-moments.
I do recognize that "fairness" is still a useful abstraction on a societal
level, as humans will experience feelings of resentment towards conditions which
they perceive as unfair, as unequal outcomes are often associated with lower
overall utility, and so forth. But even then, "fairness" is still just a
theoretical fiction that's useful for maximizing utility, not something that
would have actual moral relevance by itself.
As for the diminishing marginal returns argument, it seems inapplicable. If
we're talking about the utility of a life (or a life-year), then the relevant
variable would probably be something like happiness, but research on the topic
has found age to be unrelated to happiness (see e.g. here
[http://www.midus.wisc.edu/findings/pdfs/74.pdf]), so each year seems to produce
roughly the same amount of utility. Thus the marginal returns do not diminish.
Actually, that's only true if we ignore the resources needed to support a
person. Childhood and old age are the two periods where people don't manage on
their own, and need to be cared for by others. Thus, on a (utility)/(resources
invested) basis, childhood and old age produce lower returns. Now life extension
would eliminate age-related decline in health, so old people would cease to
require more resources. And if people had fewer children, we'd need to invest
few
0 · Thrasymachus · 11y
Hello Kaj,
If you reject both continuity of identity and prioritarianism, then there isn't
much left for an argument to appeal to besides aggregate concerns, which lead to
a host of empirical questions you outline.
However, if you think you should maximize expected value under normative
uncertainty (and you aren't absolutely certain aggregate util or
consequentialism is the only thing that matters), then there might be motive to
revise your beliefs. If the aggregate concerns 'either way' turn out to be a
wash between immortal society and 'healthy aging but die' society, then the
justice/prioritarian concerns I point to might 'tip the balance' in favour of
the latter even if you aren't convinced it is the right theory. What I'd hope to
show is something like prioritarianism at the margin or aggregate indifference
(i.e. prefer 10 utils to each of 10 people instead of 100 to 1 person and 0 to
the other 9) is all that is
needed to buy the argument.
3 · Kaj_Sotala · 11y
True, and I probably worded my opening paragraph in an unnecessarily aggressive
way, given that premises such as accepting/rejecting continuity aren't really
right or wrong as such. My apologies for that.
If there did exist a choice between two scenarios where the only difference
related to your concerns, then I do find it conceivable - though maybe unlikely
- that those concerns would tip the balance. But I wouldn't expect such a tight
balance to manifest itself in any real-world scenarios. (Of course, one could
argue that theoretical ethics shouldn't concern itself too much with worrying
about its real world-relevance in the first place. :)
I'd still be curious to hear your opinion about the empirical points I
mentioned, though.
1 · Thrasymachus · 11y
I'm not sure what to think about the empirical points.
If there is continuity of personal identity, then we can say that people
'accrue' life, and so there's plausibly diminishing returns. If we dismiss that
and talk of experience-moments, then a diminishing-returns argument would have
to say something like "experience-moments in 'older' lives are not as good as
younger ones". Like you, I can't see any particularly good support for this
(although I wouldn't be hugely surprised if it were so). However, we can again
play the normative uncertainty card: our expected degree of diminishing returns
is attenuated by a factor of P(continuity of identity).
I agree there are 'investment costs' in childhood, and if there are only costs
in play, then our aggregate maximizer will want to limit them, and extending
lifetime is best. I don't think this cost differs that much between having it
once per 80 years and once per 800 or similar. And if diminishing returns
apply to age (see above), then it becomes a tradeoff.
Regardless, there are empirical situations where life-extension is strictly
win-win: say, if we don't have loads of children and so never approach carrying
capacity. I suspect this issue will be at most a near-term thing: our posthuman
selves will presumably tile the universe optimally. There are a host of
countervailing (and counter-countervailing) concerns in the nearer term. I'm not
sure how to unpick them.
3 · Kaj_Sotala · 11y
I'm not sure how this follows, even presuming continuity of personal identity.
If you were running a company, you might get diminishing returns in the number
of workers if the extra workers would start to get in each other's way, or the
amount of resources needed for administration increased at a faster-than-linear
speed. Or if you were planting crops, you might get diminishing returns in the
amount of fertilizer you used, since the plants simply could not use more than a
certain amount of fertilizer effectively, and might even suffer from there being
too much. But while there are various reasons for why you might get diminishing
returns in different fields, I can't think of plausible reasons for why any such
reason would apply to years of life. Extra years of life do not get in each
other's way, and I'm not going to enjoy my 26th year of life less than my 20th
simply because I've lived for a longer time.
0 · Thrasymachus · 11y
I was thinking something along the lines that people will generally pick the
very best things, ground projects, or whatever to do first, and so as they
satisfy those they have to go on to not quite so awesome things, and so on. So
although years per se don't 'get in each other's way', how you spend them will.
Obviously lots of countervailing concerns too (maybe you get wiser as you age so
you can pick even more enjoyable things, etc.)
1 · Kaj_Sotala · 11y
That sounds more like diminishing marginal utility
[https://secure.wikimedia.org/wikipedia/en/wiki/Diminishing_marginal_utility#Diminishing_marginal_utility]
than diminishing returns
[https://secure.wikimedia.org/wikipedia/en/wiki/Diminishing_returns]. (E.g.
money has diminishing marginal utility because we tend to spend money first on
the things that are the most important for us.)
Your hypothesis seems to be implying that humans engage in activities that are
essentially "used up" afterwards - once a person has had an awesome time writing
a book, they need to move on to something else the next year. This does not seem
right: rather, they're more likely to keep writing books. It's true that it will
eventually get harder and harder to find even more enjoyable activities, simply
because there's an upper limit to how enjoyable an activity can be. But this
doesn't lead to diminishing marginal utility: it only means that the marginal
utility of life-years stops increasing.
For example, suppose that somebody's 20. At this age they might not know
themselves very well, doing some random things that only give them 10 hedons
worth of pleasure a year. At age 30, they've figured out that they actually
dislike programming but love gardening. They spend all of their available time
gardening, so they get 20 hedons worth of pleasure a year. At age 40 they've
also figured out that it's fun to ride hot air balloons and watch their gardens
from the sky, and the combination of these two activities lets them enjoy 30
hedons worth of pleasure a year. After that, things basically can't get any
better, so they'll keep generating 30 hedons a year for the rest of their lives.
There's no point at which simply becoming older will deprive them of the
enjoyable things that they do, unless of course there is no life extension
available, in which case they will eventually lose their ability to do the
things that they love. But other than that, there will never be diminishing
marginal utility.
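The trajectory in the example can be sketched to show the distinction being drawn: utility per year rises and then plateaus, but never declines, so there is no diminishing marginal utility of life-years. (The ages and hedon figures are the ones from the example; the function name is illustrative.)

```python
# A sketch of the hedon trajectory in the example above.
def hedons_per_year(age):
    if age < 30:
        return 10   # hasn't figured out what they enjoy yet
    if age < 40:
        return 20   # discovered gardening
    return 30       # gardening plus balloon rides: the practical ceiling

yearly_utility = [hedons_per_year(a) for a in range(20, 100)]

# Utility per year rises and then plateaus, but never declines: the marginal
# utility of a life-year stops increasing, yet it does not diminish.
assert all(later >= earlier
           for earlier, later in zip(yearly_utility, yearly_utility[1:]))
```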
Of
0 · Ghatanathoah · 10y
If you are arguing that we should let people die and then replace them with new
people due to the (strictly hypothetical) diminishing utility they get from
longer lives, you should note that this argument could also be used to justify
killing and replacing handicapped people. I doubt you intended it that way, but
that's how it works out.
To make it more explicit, in a utilitarian calculation there is no important
difference between a person whose utility is 5 because they only experienced 5
utility worth of good things, and someone whose utility is 5 because they
experienced 10 utility of good things and -5 utility worth of bad things. So a
person with a handicap that makes their life difficult would likely rank about
the same as a person who is a little bored because they've done the best things
already.
You could try to elevate the handicapped person's utility to normal levels
instead of killing them. But that would use a lot of resources. The most
cost-effective way to generate utility would be to kill them and conceive a new
able person to replace them.
And to make things clear, I'm not talking about aborting a fetus that might turn
out handicapped, or using gene therapy to avoid having handicapped children. I'm
talking about killing a handicapped person who is mentally developed enough to
have desires, feelings, and future-directed preferences, and then using the
resources that would have gone to support them to conceive a new, more able
replacement.
This is obviously the wrong thing to do. Contemplating this has made me realize
that "maximize total utility" is a limited rule that only works in "special
cases" where the population is unchanging and entities do not differ vastly in
their ability to convert resources into utility. Accurate population ethics
likely requires some far more complex rules.
Morality should mean caring about people. If your ethics has you constantly
hoping you can find a way to kill existing people and replace them with happier
ones y
0 · TheOtherDave · 10y
Why should morality mean caring about the people who exist now, rather than
caring about the people who will exist in a year?
0 · Ghatanathoah · 10y
Obviously it's morally good to care about people who will exist in a year. The
"replacements" that I am discussing are not people who will exist. They are
people who will exist if and only if someone else is killed and they are created
to replace them.
Now, I think a typical counterargument to the point I just made is to argue
that, due to the butterfly effect, any policy made to benefit future people will
result in different sperm meeting different ova, so the people who benefit
from these policies will be different from the people who would have suffered
from the lack of them. From this the counterarguer claims that it is acceptable
to replace people with other people who will lead better lives.
I don't think this argument holds up. Future people do not yet have any
preferences, since they don't exist yet. So when considering how best to
benefit future people, it makes sense to take the actions that benefit them
most, regardless of who those people end up being. Currently existing people, by
contrast, already have preferences. They already want to live. You do them a
great harm by killing and replacing them. Since a future person does not have
preferences yet, you are not harming them if you make a choice that will result
in a different future person who has a better life being born instead.
1 · TheOtherDave · 10y
Suppose that a hundred years ago, Sam was considering the possibility of the
eventual existence of people like us living lives like ours, and deciding how
many resources to devote to increasing the likelihood of that existence.
I'm not positing prophetic abilities here; I don't mean he's peering into a
crystal ball and seeing Dave and Ghatanathoah. I mean, rather, that he is
considering in a general way the possibility of people who might exist in a
century and the sorts of lives they might live and the value of those lives. For
simplicity's sake I assume that Sam is very very smart, and his forecasts are
generally pretty accurate.
We seem to be in agreement that Sam ought to care about us (as well as the
various other hypothetical future people who don't exist in our world). It seems
to follow that he ought to be willing to devote resources to us. (My culture
sometimes calls this investing in the future, and we at the very least talk as
though it were a good thing.)
Agreed?
Since Sam does not have unlimited resources, resources he devotes to that
project will tend to be resources that aren't available to other projects, like
satisfying the preferences of his neighbors. This isn't necessary... it may be,
for example, that the best way to benefit you and me is to ensure that our
grandparents' preferences were fully satisfied... but it's possible.
Agreed?
And if I'm understanding you correctly, you're saying that if it turns out that
devoting resources towards arranging for the existence of our lives does require
depriving his neighbors of resources that could be used to satisfy their
preferences, it's nevertheless OK -- perhaps even good -- for Sam to devote
those resources that way.
Yes?
What's not OK, on your account, is for Sam to harm his neighbors in order to
arrange for the existence of our lives, since his neighbors already have
preferences and we don't.
Have I understood you so far?
If so, can you clarify the distinction between harming me and diver
2 · Ghatanathoah · 10y
Let's imagine that Sam is talking with a family who are planning on having
another child. Sam knows, somehow, that if they conceive a child now they will
give birth to a girl they will name Alice, and that if they wait a few years
they will have a boy named Bob. They have enough money to support one more child
and still live reasonably comfortable lives. It seems good for Sam to recommend
the family have Alice or Bob, assuming either child will have a worthwhile life.
Sam also knows that the mother currently has an illness that will stunt Alice's
growth in utero, so she will be born with a minor disability that will make her
life hard, but still very much worth living and worth celebrating. He also knows
that if the mother waits a few years her illness will clear up and she will be
able to have healthy children who will have lives with all the joys Alice does,
but without the problems caused by the disability.
Now, I think we can both agree that Sam should recommend that the parents wait
a few years and have Bob. And that he should not at all be bothered by the idea
that he is "killing" Alice to create Bob.
Now, let's imagine a second scenario in which the family has already had Alice.
And let's say that Alice has grown sufficiently mature that no one will dispute
that she is a person with preferences. And her life is a little difficult, but
very much worth living and worth celebrating. The mother's illness has now
cleared up so that she can have Bob, but again, the family does not have enough
money to support another child.
Now, it occurs to Sam that if he kills Alice the family will be able to afford
to have Bob. And just to avoid making the family's grief a confounding factor,
let's say Sam is friends with Omega [http://wiki.lesswrong.com/wiki/Omega], who
has offered to erase all the family's memories of Alice.
It seems to me that in this case Sam should not kill Alice. And I think the
reason this is is that in the first hypothetical Alice did not exist, a
0 · TheOtherDave · 10y
So, I can't quite figure out how to map your response to my earlier comment, so
I'm basically going to ignore my earlier comment. If it was actually your intent
to reply to my comment and you feel like making the correspondence more
explicit, go ahead, but it's not necessary.
WRT your comment in a vacuum: I agree that it's good for lives to produce
utility, and I also think it's good for lives to be enjoyable. I agree that it's
better to choose for better lives to exist. I don't really care how many lives
there are in and of itself, though as you say more lives may be instrumentally
useful. I don't know what "worthwhile" means, and whatever it means I don't know
why I should be willing to trade off either utility production or enjoyment for
a greater number of worthwhile lives. I don't know why the fact that someone has
preferences should mean that I have a duty to take care of them.
1 · Ghatanathoah · 10y
I understand that my previous argument was probably overlong, roundabout, and
had some huge inferential differences, so I'll try to be more clear:
A "worthwhile life" is a synonym for the more commonly used term: "life worth
living." Basically, it's a life that contains more good than bad. I just used it
because I thought it carried the same meaning while sounding slightly less
clunky in a sentence.
The idea that it was good for a society to have a large number of distinct
worthwhile lives at any given time was something I was considering after
contemplating which was better, a society with a diverse population of different
people, or a society consisting entirely of brain emulators of the same person.
It seemed to me that if the societies had the same population size, and the same
level of utility per person, that the diverse society was not just better, but
better by far.
It occurred to me that perhaps the reason it seemed that way to me was that
having a large number of worthwhile lives and a high level of utility were
separate goods. Another possibility that occurred to me was that having a large
number of distinct individuals in a society increased the amount of positive
goods such as diversity, friendship, love, etc. In a previous discussion you
seemed to think this idea had merit
[http://lesswrong.com/lw/g6a/some_scary_life_extension_dilemmas/86q6].
Thinking about it more, I agree with you that it seems more likely that having a
large number of worthwhile lives is probably good because of the positive values
(love, diversity, etc) it generates, rather than as some sort of end in itself.
Now, I will try to answer your original question (Why should morality mean
caring about the people who exist now, rather than caring about the people who
will exist in a year?) in a more succinct manner:
Of course we should care about people who will exist in the future just as much
as people who exist now. Temporal separations are just as morally meaningless as
spatial
1 · TheOtherDave · 10y
So, consider the following alternative thought experiment:
Alice exists at time T1.
In (A) Alice exists at T2 and in (B) Alice doesn't exist at T2 and Bob does, and
Bob is superior to Alice along all the dimensions I care about (e.g., Bob is
happier than Alice, or whatever).
Should I prefer (A) or (B)?
This is equivalent to your thought experiment if T1 is the present.
And on your model, the most important factor in answering my question seems to
be whether T1 is the present or not... if it is, then I should prefer A; if it
isn't, I should prefer B. Yes?
I prefer a moral structure that does not undergo sudden reversals of preference
like that.
If I prefer B to A if T1 is in the future, and I prefer B to A if T2 is in the
past, then I ought to prefer B to A if T1 is in the present as well. The idea
that I ought to prefer A to B if (and only if) T1 is the present seems
unjustified.
I agree with you, though, that this idea is probably held by most people.
0 · Ghatanathoah · 10y
No, it doesn't matter when T1 is. All that matters is that Alice exists prior to
Bob.
If Omega [http://wiki.lesswrong.com/wiki/Omega] were to tell me that Alice would
definitely exist 1,000 years from now, and then gave me the option of choosing
(A) or (B) I would choose (A). Similarly, if Omega told me Alice existed 1,000
years ago in the past and had been killed and replaced by Bob my response would
be "That's terrible!" not "Yay!"
Now if T1 is in the future and Omega gave me option (C), which changes the
future so that Alice is never created in the first place and Bob is created
instead, I would choose (C) over (A). This is because in (C) Alice does not
exist prior to Bob, whereas in (A) and (B) she does.
0 · TheOtherDave · 10y
Ah! OK, correction accepted.
Fair enough. We differ in this respect. Two questions, out of curiosity:
If you were given the option (somehow) of changing the past such that Alice was
not replaced by Bob, thereby causing Bob not to have existed, would you take it?
(I'm genuinely unsure what you'll say here)
If you knew that the consequence of doing so would be that everyone in the world
right now is a little bit worse off, because Alice will have produced less value
than Bob in the same amount of time, would that affect your choice? (I expect
you to say no, it wouldn't.)
1 · Ghatanathoah · 10y
You're not the only one who is unsure. I've occasionally pondered the ethics of
time-travel and they make my head hurt. I'm not entirely sure time travel where
it is possible to change the past is a coherent concept (after all, if I change
the past so Alice never died, then what motivated present-me to go save her?). If
this is the case then any attempt to inject time travel into ethical reasoning
would result in nonsense. So it's possible that the crude attempts at answers I
am about to try to give are all nonsensical.
If time travel where you can change the past is a coherent concept then my gut
feeling is that maybe it's wrong to go back and change it. This is partly
because Bob does exist prior to me making the decision to go back in time, so it
might be "killing him" to go back and change history. If he was still alive at
the time I was making the decision I'm sure he'd beg me to stop. The larger and
more important part is that, due to the butterfly effect, if I went back and
changed the past I'd essentially be killing everybody who existed in the present
and a ton of people who existed in the past.
This is a large problem with the idea of using time travel to right past wrongs.
If you tried to use time travel to stop World War Two, for instance, you would
be erasing from existence everyone who had been born between World War Two and
the point where you activated your time machine (because WWII affected the birth
and conception circumstances of everyone born after it).
So maybe a better way to do this is to imagine one of those time machines that
creates a whole new timeline, while allowing the original one to continue
existing as a parallel universe. If that is the case then yes, I'd save Alice.
But I don't think this is an effective thought experiment either, since in this
case we'd get to "have our cake and eat it too," by being able to save Alice
without erasing Bob.
So yeah, time travel is something I'm really not sure about the ethics of.
1TheOtherDave10y
Huh. I think I'm even more deeply confused about your position than I thought I
was, and that's saying something.
But, OK, if we can agree that replacing Alice with Bob is sometimes worth doing
because Bob is more valuable than Alice (or valuable-to-others, if that means
something different), then most of my objections to it evaporate. I think we're
good.
On a more general note, I'm not really sure how to separate valuable-to-others
from valuable-to-self. The examples you give of the latter are things like
having fun, but it seems that the moment I decide that Alice having fun is
valuable, Alice's fun stops being merely valuable to Alice... it's valuable to
me, as well. And if Alice having fun isn't valuable to me, it's not clear why I
should care whether she's having fun or not.
0Ghatanathoah10y
You're absolutely right that in real life such divisions are not clear cut, and
there is a lot of blurring on the margin. But dividing utility into
"utility-to-others" and "utility-to-self" or "self-interest" and
"others-interest" is a useful simplifying assumption, even if such categories
often blur together in the real world.
Maybe this thought experiment I thought up will make it clearer: Imagine a world
where Alice exists, and has a job that benefits lots of other people. For her
labors, Alice is given X resources to consume, and she gains Y utility from consuming them. Everyone else in this world has such a large amount of resources that giving X resources to Alice generates the most utility: everyone else is more satiated than Alice and would get less use out of her allotment than she does.
Bob, if he was created in this world, would do the same
highly-beneficial-to-others job that Alice does, and he would do it exactly as
well as she did. He would also receive X resources for his labors. The only
difference is that Bob would gain 1.1Y utility from consuming those resources
instead of Y utility.
In these circumstances I would say that it is wrong to kill Alice to create Bob.
However, if Bob is sufficiently better at his job than Alice, and that job is
sufficiently beneficial to everyone else (medical research for example) then it
may be good to kill Alice to create Bob, if killing her is the only possible way
to do so.
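Hedged toy arithmetic for this thought experiment (all numbers are illustrative assumptions, not part of the original comment): in the first case Bob's edge is only 0.1Y of consumption utility, far less than the Y of welfare Alice loses; in the second, a large enough benefit-to-others term can flip the sign.

```python
# Assumed illustrative value: Alice gets Y utility from consuming X resources.
Y = 100.0

# Case 1: Bob is merely a slightly better consumer (1.1Y vs Y).
gain_consumption = 1.1 * Y - Y          # what replacement adds: 0.1Y = 10
assert gain_consumption < Y             # far less than Alice's own stake

# Case 2: Bob is also vastly better at a job that benefits everyone else
# (assumed magnitude -- e.g. dramatically faster medical research).
benefit_to_others = 50 * Y
assert gain_consumption + benefit_to_others > Y  # replacement may now be net positive
```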
1TheOtherDave10y
So, as I said before, as long as you're not saying that it's wrong to kill Alice
even if doing so leaves everyone better off, then I don't object to your moral
assertion.
That said, I remain just as puzzled by your notion of "utility to Alice but not
anyone else" as I was before. But, OK, if you just intend it as a simplifying
assumption, I can accept it on that basis and leave it there.
3lsparrish11y
I appreciated the level of thought you put into the argument, even though it
does not actually convince me to oppose life extension. Thank you for writing
(and prezi-ing) it, I look forward to more.
Basically, the hidden difference, if you put me and 9 others behind a veil of ignorance and ask us to decide whether we each get 80 years or one gets 800, is that in that case you have 10 people present, competing and trying to avoid being "killed", whereas the choice between creating one 800-year-old and ten 80-year-olds is conducted without an actual threat being posed to anyone.
While you can establish that the 10 people would anticipate with fear (and hence
generate disutility) the prospect of being destroyed or prevented from living, that's
not the same as establishing that 9 completely nonexistent people would generate
the same disutility even if they never started to exist.
0Thrasymachus11y
I don't think the thought experiment hinges on any of this. Suppose you were on
your own and Omega offered you certainty of 80 years versus 1/10 of 800 and 9/10
of nothing. I'm pretty sure most folks would play safe.
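The two gambles here have identical expected lifespans, so "playing safe" is exactly risk aversion over years of life; any concave utility of lifespan prefers the sure option. A minimal sketch (log utility is an assumed illustration, not anything from the thread):

```python
import math

# Equal expected lifespans: a sure 80 years vs. a 1/10 chance of 800 years.
assert 1.0 * 80 == 0.1 * 800 + 0.9 * 0

# Under a concave (risk-averse) utility of years -- log is just an assumed
# example -- the sure thing wins:
u = math.log
assert 1.0 * u(80) > 0.1 * u(800)  # ~4.38 vs ~0.67
```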
The addition of people makes it clear whether (granting the rest) a society of future people would want to agree that those who 'live first' should refrain from life extension and let the others 'have their go'.
0lsparrish11y
Loss aversion is another thing altogether: if most people choose 80 sure years instead of a 1/10 chance of 800 years, that doesn't necessarily prove the gamble is actually less valuable.
Suppose Omega offers to copy you and let you live out 10 lives simultaneously
(or one after another, restoring from the same checkpoint each time) on the
condition that each instance dies and is irrecoverably deleted after 80 years.
Is that worth more than spending 800 years alive all in one go?
0Thrasymachus11y
Plausibly, depending on your view of personal identity, yes.
I won't be identical to my copies, and so I think I'd deploy the same sorts of arguments I have so far - copies are potential people, and behind a veil of ignorance over whether I'd be a copy or the genuine article, the collection of people would want to mutually agree that the genuine article picks the former option in Omega's gamble.
(Aside: loss/risk aversion is generally not taken to be altogether different from justice. I mean, the veil-of-ignorance heuristic specifies a risk-averse agent, and the difference principle seems to be loss averse.)
3[anonymous]11y
Glad to see someone using Prezi.
My main contention with the argument is the assumptions it makes about future people. Assuming a society that could perform life extension on the grand scales talked about in this argument, why is it still assumed that future persons must be considered identical to current ones (who, in the argument, I assume to be the ones capable of taking or forgoing the life extensions)?
As has been mentioned, these future people are non-existent. What suggests that
they will be or must be part of the equation eventually? It seems less an
argument of "would you take 800 for yourself or 80 for you and your children"
and more "would you take 800 for yourself and agree not to have children or
would you rather have children and risk what comes?"
I know we hold sentimentality for having children (since, you know, it's our
primary function and all) but this whole argument seems more the classic
"immortal children" problem: how can you fit an infinite person supply in a
finite space? And the simplest answer to me seems: until you find a way to
increase the space, you limit the supply. Some may not like that idea but if
it's a case of existent humans' interests vs non-existent (and possibly never
existent) human interests, then I would have to side with the former (myself
being one of them makes it much easier for me of course).
2skepsci11y
I noticed an obvious fallacy in the linked argument:
What? Surely if infinite person-years are possible, it's better for everyone to
be immortal than only some, so life extension would be morally preferable, not
morally neutral.
Also, why are we assuming the number of person-years lived is independent of the
average lifespan? All he exhibited was an upper bound independent of the average
lifespan, which is not at all the same thing. If you can't justify the
hypothesis that lifespan is a zero-sum game, the entire argument falls apart.
1lsparrish11y
The main argument is that taking years from potential beings and adding them to
existing ones is unjust, hence immoral. Given that, depending on the exact shape
of the infinite universes scenario, life extension could be moral, amoral, or
immoral.
If longer-lived people can reproduce and find new space more quickly than
shorter lived people, life extension would be moral. (For example say more
experienced people have more motive or ability to create new universes.) However
all else being equal (for example, say the limit on reproduction is some
unchangeable physical constant that says we cannot make black holes any faster
than x, and we have already maxed that out), the fact that shorter lived people
are dying and creating spaces for more kids makes that the more moral scenario.
While I agree that this is a flaw in the argument (longer lives can possibly
result in more new kids born / new spaces opened than shorter ones), I don't
think it is my true rejection of the argument overall, because it is not
unreasonable to think the new spaces that can be opened are limited and/or cannot
be increased by longer lives. I think the real problem is the idea that one can
behave unjustly to a person whose existence is only potential, through the act
of taking away their existence.
1skepsci11y
To me, the entire argument sounds like a rationalization for not signing up for
cryo.
Signed,
Someone who has rationalized a reason for not signing up yet for cryo, and
suspects that the real reason is laziness.
-1Locke11y
So sign the hell up.
-3Thrasymachus11y
If there are infinite person-years, then (so long as life is net positive) we have
infinite utility, and I can't see obviously whether doling this out to a
'smaller' or 'larger' set of people (although both will have same cardinality)
will matter. But anyway, I don't think anyone really thinks we can wring
infinite amounts of life out of the universe.
Total life-time will have some upper bound. So in worlds where we are
efficiently filling up lifespan, the choice is between more short-lived people
or fewer long-lived people. In the real world for the foreseeable future, that
won't quite apply - plausibly, there will be chunks of lifetime that can only be
got at by extending your life, and couldn't be had by a future person, so you
doing so doesn't deprive anyone else. However, that ain't plausible for an
entire society (or a large enough group) extending their lives. Limiting case:
if everyone made themselves immortal, they could only add people by increases in
carrying capacity.
0lsparrish11y
If longer lived people tend to create more spaces to expand into in an infinite
universe, and this results in reproduction at a normal or higher rate, that
would indicate that longer lived people are more moral, since the disutility of
the long lived people dying would be (relatively) absent from the equation.
If there is a point of diminishing returns on the creation of new people --
perhaps having a trillion lives is less than 1000 times as valuable (including
in the sense of "justice") as having a billion lives in existence at a given
time -- life extension could be more efficient at producing valuable life years
and hence more moral.
Life might grow less worth living over time (Note: excluded for sake of argument
from your prezi), but it might also grow more worth living over time. These are
not mutually exclusive: an evil dictator might produce more negative utility by
being in power for a long time whereas a scientist or diplomat might produce
larger amounts of positive utility by living longer. There could be internalized
examples of these as well -- a person whose pain grows with each passing year
and has to live with the memories thereof, or a person who falls more in love
with their spouse or some such thing over time.
However I tend to think there would be selection effects in favor of the
positive cases and against the negative ones -- suicide and assassination, for
example -- so I don't much fear the negative cases being the long term trend.
Rather I think longer lived people (all else equal, including health) produce
more positive utility per unit of time than shorter lived ones.
When working on a primarily mental task (example: web browsing, studying, programming), I sometimes find myself coming up with an idea, forgetting the idea itself, but remembering I have come up with it. Backtracking through the mental steps may help recall it, but often I'll not be able to recall it at all, ending in frustration. Is there a technical term for this I can google / does anyone have an idea what this is?
I would also be interested in research regarding this topic. I "suffer" from a
similar phenomenon. The most annoying part is that I am unable to judge whether the idea I forgot was good or bad. Also, this phenomenon occurs more often if I am
tired.
1ZankerH11y
Anecdote: Discussing this with a particularly non-rational acquaintance, they
remarked that I'm likely subconsciously discarding horrible ideas, and
preventing myself from coming up with them again, and therefore the better for
it.
1[anonymous]11y
I've had the same thing occur to me many times, especially once I went into
college. However, I did an experiment that might help shed some light on the
issue for you.
I attempted to brute force my way through the problem. I kept pens and note pads
on hand, specifically sticky notes. When I had any idea I felt worth keeping,
I'd jot it down on the spot. No context (so I wouldn't write down what I was
doing or where I was) just the idea itself. I soon collected a wall of sticky
notes (it became quite infamous in the dorms) full of these ideas. I still have
them all, in a notebook full of card stock, organized by type.
The problem I find, going back over the many different ideas, is that, on the
whole, the ideas have lost any inspiration they once had. Looking over them, I
see the ideas as either a.) common knowledge (meaning the idea was probably new
at the time, but since then I've grown used to it through other routes of
knowledge) or b.) trite and even childish.
So, if it helps, it would seem that your friend may be onto something, as, for the most part, my wall of ideas serves either as reminders of things I already know or of things that don't matter.
0John_Maxwell11y
I read somewhere that furiously writing down everything you were thinking about
is a good way to dredge up forgotten thoughts, and it sometimes works for me.
Can't an AI escape the dangers of Pascal's Mugging by having a decision theory that penalizes exploitable decision theories in proportion to their exploitability?
The dangers pointed to by the thought experiment aren't restricted to
exploitation by an outside entity. An AI should be able to safely consider the
hypothesis "If I don't destroy my future light cone, 3^^^3 people outside the
universe will be killed" regardless of where the hypothesis came from.
But even if we're just worried about mugging, how could you possibly weight it
enough? Even if paying once doomed me to spend the rest of my life paying $5 to
muggers, the utility calculation still works out the same way.
2ArisKatsaris11y
My idea is as follows:
Mugger: Give me 5 dollars, or I'll torture 3^^^3 sentient people across the
omniverse using my undetectable magical powers.
AI: If I make my decision on this and similar trades based on a decision process DP0 that compares disutility(3^^^3 tortures) × P(you're telling the truth) against disutility(giving you 5 dollars), then even if you're telling the truth, a different malicious agent may merely name a threat that involves 3^^^^3 tortures, and thus make me cause a vastly greater amount of disutility in his service. Indeed, there's no upper bound to the disutility such a hypothetical agent may claim he will cause, and therefore surrendering to such demands means a likewise unbounded exploitation potential. Therefore I will *not* use the decision process DP0, and will instead utilize some different decision process (like "Never surrender to blackmail" or "Always demand proportional evidence before considering sufficiently extraordinary claims").
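The failure mode the AI describes can be made concrete with a toy version of DP0 (all numbers are illustrative assumptions):

```python
def dp0_pays(claimed_disutility, p_truth, cost_of_paying=5.0):
    """Naive DP0: pay iff claimed harm times credence exceeds the cost of paying."""
    return claimed_disutility * p_truth > cost_of_paying

# A large enough claimed stake swamps any fixed, tiny credence...
assert dp0_pays(claimed_disutility=1e30, p_truth=1e-20)
# ...while modest claims at the same credence are correctly refused:
assert not dp0_pays(claimed_disutility=100.0, p_truth=1e-20)
# And a rival mugger naming a bigger number always outbids the first,
# so there is no upper bound on the exploitation:
assert 1e40 * 1e-20 > 1e30 * 1e-20
```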
2endoself11y
Saving 3^^^^3 people is more than worth a bit of vulnerability to blackmail. If
3^^^^3 people are in danger, the AI wishes to believe 3^^^^3 people are in
danger and in that case "never surrender to blackmail" is a strictly worse
strategy.
Also, DP0 isn't even a coherent decision process. The expected utilities will fail
to converge if "there's no upper bound to the disutility such a hypothetical
agent may claim" and these claims are interpreted with some standard assumptions
[http://arxiv.org/abs/0712.4318], so the agent has no way of even comparing
expected utilities of actions.
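The non-convergence point can be illustrated with a toy prior (an assumed illustration, not the construction in the linked paper): if claimed disutilities can grow faster than their probabilities shrink, the partial expected-utility sums grow without bound, so the agent has no stable expected value to compare against.

```python
# Toy model: hypothesis n has an assumed prior of 2^-n but claims disutility
# 3^n, so each term is (3/2)^n and the expected-disutility series diverges.
def partial_expected_utility(n_terms):
    return sum((2.0 ** -n) * (3.0 ** n) for n in range(1, n_terms + 1))

assert partial_expected_utility(50) > partial_expected_utility(10)  # still growing
assert partial_expected_utility(40) > 1e6                           # without bound
```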
2ArisKatsaris11y
This isn't about beliefs, this is about decisions. The process of epistemic
rationality needn't be modified, only the process of instrumental rationality.
Regardless of how much probability the AI assigns to the danger for 3^^^^3
people, it needn't be the right choice to decide based on a mere probability of
such danger multiplied by the disutility of the harm done.
Unless having the decision process that surrenders to blackmail and being known
to have it is what will put these people in danger in the first place. In that
case, either you modify your decision process so that you precommit to not
surrender to blackmail and prove it to other people in advance, or pretend to
not surrender and submit to individual blackmails if enough secrecy of such
submission can be ensured so that future agents won't be likely to be encouraged
to blackmail.
But this was just an example of an alternate decision theory, e.g. one that had
hardwired exceptions against blackmail. I'm not actually saying it need be
anything as absolute or simple as that -- if it were as simple as that I'd have
solved the Pascal's Mugger problem by saying "TDT plus don't submit to
blackmail" instead of saying "weigh against your decision process by a factor
proportional to its exploitability potential"
0endoself11y
We seem to be thinking of slightly different problems. I wasn't thinking of the
mugger's decision to blackmail you as dependent on their estimate that you will
give in. There are possible muggers who will blackmail you regardless of your
decision theory and refusing to submit to blackmail would cause them to produce
large negative utilities.
2ArisKatsaris11y
And as I said my example about a blanket refusal to submit to blackmail was just
an example. My more general point is to evaluate the expected utility of your
decision theory itself, not just the individual decision.
0endoself11y
In the situation I presented, the decision theory had no effect on the utility
other than through its effect on the choice. In that case, the expected utility
of the decision theory and the expected utility of the choice reduce to the same
thing, so your proposal doesn't seem to help. Do you agree with that, or am I
misapplying the idea somehow?
2ArisKatsaris11y
I'm not sure that they reduce to the same thing. In e.g. Newcomb's problem, if you reduce your two options to "P(full box A) × U(full box A)" versus "P(full box A) × U(full box A) + U(full box B)", where U(x) is the utility of x, then you end up two-boxing; that's causal decision theory.
It's only when you consider the utility of different decision theories, that you
end up one boxing, because then you're effectively considering U(any decision
theory in which I one-box) vs U(any decision theory in which I two-box) and you
see that the expected utility of one-boxing decision theories is greater.
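The one-box/two-box comparison at the decision-theory level can be sketched with assumed toy payoffs (the $1,000/$1,000,000 amounts and the 0.99 predictor accuracy are illustrative, not from the comment):

```python
P_CORRECT = 0.99  # assumed predictor accuracy

def ev_one_box():
    # Box A holds $1,000,000 iff the predictor foresaw one-boxing.
    return P_CORRECT * 1_000_000

def ev_two_box():
    # Box B's $1,000 is guaranteed; box A is usually empty for a two-boxer.
    return (1 - P_CORRECT) * 1_000_000 + 1_000

# Evaluated at the level of decision theories, one-boxing dominates:
assert ev_one_box() > ev_two_box()  # 990,000 vs 11,000
```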
In Pascal's mugging... again I don't have the math to do this (or it would have
been a discussion post, not an open-thread comment), but my intuition tells me
that a decision theory that submits to it is effectively a decision theory that
allows its agent to be overwritten by the simplest liar there is, and therefore
of total negative utility. The mugger can add up-arrows until he has
concentrated enough disutility in his threat to ask the AI to submit to his
every whim and conquer the world on the mugger's behalf, etc...
1endoself11y
If the adversary does not take into account your decision theory in any way
before choosing to blackmail you, U(any decision theory where I pay if I am
blackmailed) = U(pay) and U(any decision theory where I refuse to pay if I am
blackmailed) = U(refuse), since I will certainly be blackmailed no matter what
my decision theory is, so what situation I am in has absolutely no
counterfactual dependence on my action.
The truth of this statement is very hard to analyze, since it is effectively a
statement about the entire space of possible decision theories
[https://en.wikipedia.org/wiki/Universal_quantification]. Right now, I am not
aware of any decision theory that can be made to overwrite itself completely
just by promising it more utility or threatening it with less. Perhaps you can
sketch one for me, but I can't figure out how to make one without using an
unbounded utility function, which wouldn't give a coherent decision agent using
current techniques as per the paper that I linked a few comments up.
Anyway, I don't really have a counter-intuition about what is going wrong with
agents that give into Pascal's mugging. Everything gets incoherent very quickly,
but I am utterly confused about what should be done instead.
That said, if an agent would take the mugger's threat seriously under a naive
decision theory, and that disutility is more than the disutility of being
exploitable by arbitrary muggers, decision-theoretic concerns do not make the
latter disutility greater in any way. The point of UDT-like reasoning is that
"what counterfactually would have happened if you decided differently" means
more than just the naive causal interpretation would indicate. If you precommit
to not pay a mugger, the mugger (who is familiar with your decision process)
won't go to the effort of mugging you for no gain. If you precommit not to find
shelter in a blizzard, the blizzard still kills you.
0thomblake11y
So the AI is not an expected utility maximizer?
If it is not, then what is it? If it is, then what calculations did it use to
reach the above decision - what were the assigned probabilities to the scenarios
mentioned?
0ArisKatsaris11y
It's an expected utility maximizer, but it considers the expected utility of its
decision process, not just the expected utility of individual decisions. In a
world where there exist more known liars than known superhuman entities, and any
liar can claim superhuman powers, any decision process that allows them to
exploit you is of negative expected utility.
It's like the professor in the classic example who agrees to accept an essay delayed by a grandmother's death, because that is a valid reason that will largely not be exploited, but not one delayed because "I wanted to watch my favorite team play", because lots of other students would be able to use the same excuse. The professor isn't just considering the individual decision, but whether the decision process would be of negative utility in a more general manner.
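The professor's policy-level reasoning can be sketched as a toy calculation (all numbers are illustrative assumptions): the value of an acceptance policy is its gain from honest cases minus the loss from the copycat exploitation it invites.

```python
def policy_value(p_honest, gain_honest, n_copycats, loss_per_copycat):
    """Assumed toy model: net utility of adopting an excuse-acceptance policy."""
    return p_honest * gain_honest - n_copycats * loss_per_copycat

# "Grandmother died": rarely faked, so accepting the excuse is net positive.
assert policy_value(p_honest=0.95, gain_honest=10, n_copycats=1, loss_per_copycat=2) > 0
# "Wanted to watch the game": any student can claim it; accepting is net negative.
assert policy_value(p_honest=1.0, gain_honest=10, n_copycats=20, loss_per_copycat=2) < 0
```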
0thomblake11y
It seems to me that you run into the mathematical problem again when trying to
calculate the expected utility of its decision process. Some of the outcomes of
the decision process are associated with utilities of 3^^^3.
0ArisKatsaris11y
Perhaps. I don't have the math to see how the whole calculation would go.
But it seems to me that the utility of 3^^^3 is associated with a particular
execution instance. However when evaluating the decision process as a whole (not
the individual decision) the 3^^^3 utility mentioned by the mugger doesn't have
a privileged position over the hypothetical malicious/lying individuals who
can just even more easily talk about utilities or disutilities of 3^^^^3 or
3^^^^^3, or even have their signs reversed (so that they torture people if you
submit to their demands despite their claims to the opposite).
So the result should ideally be a different decision process that is able to
reject unsubstantiated claims by potentially-lying individuals completely,
instead of just trying to fudge the "Probability" of the truth-value of the
claim, or the calculated utility if the claim is true.
-4mwengler11y
Give me $5 or I will torture 3^^^^3 sentient people across the omniverse for 1,000 years each and then kill them, using my undetectable magical powers. You can pay me by PayPal to mwengler@gmail.com. Unless 20 people respond (or the integrated total I receive reaches $100), I will carry out the torture.
Now you may think I am making the above statement to make a point. Indeed it seems probable, but what if I am not? How do you weigh the very finite probability that I mean it against 3^^^^3 sentient lives?
I feel confident that the amount of money I receive by PayPal will be a more meaningful statement about what people really think of
(infinitesimal probability) * (nearly infinite evil) = well over $5 worth of utilons
Do others agree? Or do they think these comments, which cost nothing but another 15 minutes away from reading a different post, are what really mean something?
1ArisKatsaris11y
The issue is how to program a decision theory (or meta-decision theory, perhaps)
that doesn't fall victim to Pascal's mugging and similar scenarios, not to show
that humans mostly don't fall victim to it.
1NancyLebovitz11y
However, it's probably worth figuring out what processes people use which cause
them to not be very vulnerable to Pascal's Mugging.
Or is it just that people aren't vulnerable to Pascal's Mugging unless they're
mentally set up for it? People will sometimes give up large amounts of personal
value to prevent small or dubious amounts of damage if their religion or
government tells them to.
0mwengler11y
I think there is not enough discussion of the quality of information. Conscious
beings tell you things to increase their utility functions, not to inform you.
Magicians trick you on purpose and (most of us) realize that, and they are not
even above human intelligence. Scammers scam us. Well-meaning idiots sell us vitamins and minerals, and my sister just asked me about spending a few thousand dollars on a red-light laser to increase her well-being!
As for the whole one-box vs. two-box thing: if someone claiming to be a brilliant alien had pulled this off 100 times and was now checking in with me, I would find it much more believable that they were a talented scam artist than that they could do predictive calculations requiring a ^ to express, relative to any calculations we now know can be done.
Real intelligences don't believe anywhere near everything they hear. And they STILL are gullible.
0TheOtherDave11y
I agree with your first paragraph, but I'm not convinced of your second
paragraph... at least, if you intend it as a rhetorical way of asserting that
there is no possible way to weight the evidence properly. It's just another
proposition; there's evidence for and against it.
I think we get confused here because we start with our bottom line already
written.
I "know" that the EV of destroying my light cone is negative. But theory seems
to indicate that, when assigning a confidence interval P1 to the statement
"Destroying my future light cone will preserve 3^^^3 extra-universal people"
(hereafter, statement S1), a well-calibrated inference engine might assign P1
such that the EV of destroying my light cone is positive. So I become anxious,
and I try to alter the theory so that the resulting P1s are aligned with my
pre-existing "knowledge" that the EV of destroying my light cone is negative.
Ultimately, I have to ask what I trust more: the "knowledge" produced by the
poorly calibrated inference engine that is my brain, or the "knowledge" produced
by the well-calibrated inference engine I built? If I trust the inference
engine, then I should trust the inference engine.
Scumbag brain is a newish meme of the generic image macro variety. Some are pretty entertaining and relevant to the LW ideaspace, but most are lowest common denominator-style "broke up with girlfriend, makes you feel sad about it for weeks".
Since there seem to be quite a few lesswrongers involved in making games, or interested in doing it as a hobby, I just created a little mailing-list for general chat - talk about your projects, rant about design theory, ask for advice, talk about how to apply lesswrong ideas to game development, talk about how to apply game development ideas to lesswrong's goals, etc.
I've recently figured out an all too obvious workaround for the vanishing spaces bug. Considering links, italics and bold basically cover 95% of all formatting needs I think some people may find use for it (it has cured my distaste for writing articles on LW).
1) Write a comment or PM in Markdown syntax. Post the thing.
2) Select the text and copy it straight into the WYSIWYG editor
3) Delete the original post or PM.
It is such an obvious solution, yet I didn't think of it for months.
To avoid cluttering up "Recent Comments" etc, one could type it up off LW (this
[http://markdownr.com/] or this
[http://joncom.be/experiments/markdown-editor/edit/] seem pretty good) and then
copy it in. (Though, the PM idea works too!)
0[anonymous]11y
Very true, but editing old PMs doesn't do that. Sending a PM to yourself and then deleting it seems the most expedient solution. Thanks for the links, however!
I'm trying to keep a dream journal, but when I wake up I keep having this cognitive block preventing me from writing my dreams down. It will do anything necessary to prevent me from writing my dreams down. I regret this later every single time. Does anyone know how to prevent this? I don't think I can do it at that time, so it probably has to be something done beforehand, as I go to bed.
Can you speak about your dreams into a tape recorder, and transcribe them later?
0Douglas_Knight11y
I kept a dream journal for about 5 years. I think it (temporarily) increased
recall of dreams. The most interesting thing I observed was that the recorded
dreams were seasonally concentrated.
0JGWeissman11y
What kind of cognitive block? Do you not know what to write? Do you not think
about recording your dream at the appropriate time? Do you feel like writing
about your dream would be a bad thing?
3Grognor11y
The last one, sort of. It usually takes the form of, "You don't want THAT to be
in your dream log, do you? You'd better skip it just this once. It's okay,
you'll write down the next one. That dream sucked anyway, and you're already
forgetting it besides. Also don't you have better things to do?"
All, of course, with the low-level realization that I know all of this is
bullshit but I obey it anyway.
Are there any guidelines, or does anyone have any significant thoughts, about mentioning Less Wrong in text in fanfiction (or any other type of fiction)? I know a lot of people came here by way of HP:MoR, myself included, but I'm interested if anyone has reasons that they believe it would be a bad idea, or an especially good one.
Caring about conscious minds where you can't observe them existing carries basically the same philosophical problems as caring about pretty statues (and other otherwise desirable or undesirable arrangements of matter) where you can't observe them.
Agree, but disagree with the assertion that you can't observe them. (If that's
not an assertion, then whatever.)
2Viliam_Bur11y
Even if you can't observe them, can you somehow logically infer their existence
and can you influence them? If no, then thinking about them is just wasting
time.
It becomes a problem only if you cannot observe them, but you can influence
them, and despite lack of observation you can make at least some probabilistic
estimates about the effect of your influence.
What does the outside view say about when during the course of a relationship it is wisest to get engaged (in terms of subsequent marital longevity/quality)? Data that doesn't just turn up obvious correlations with religious groups who forbid divorce is especially useful.
I proposed about two months ago; I'm getting married this coming Sunday. I
mention this to qualify the following advice/input.
The process of getting engaged and getting married may seem (to some) like a
stupid, defunct, irrelevant process for unevolved, unenlightened, hidebound
ape-descendants. I propose that this is a naive view of the situation, and that
the process of engagement and marriage, having existed for a long time, in many
cultures, and being actually a relatively evolved and functional procedure,
constitutes a very instrumentally rational process to undertake for any
sufficiently interested couple.
The members of a relationship are likely to have very different implicit
expectations with regards to
* when it's appropriate to get engaged
* when it's appropriate to get married (after getting engaged)
* what marriage actually "means"
* what constitutes an appropriately-sized wedding
* the importance of and timing of having children
* the importance of family, e.g. how much continuing parental involvement is
welcome
* finances, debt, and standard of living
* what actions would constitute a violation of trust
* etc.
Both partners will likely have a largely unexamined implicit life-plan with
various unstated assumptions about all of these issues, and more. Some of these
things will simply not come up until you start talking seriously about
commitment. Furthermore, you may not really start talking seriously about
commitment until after you are engaged, even if you thought you had been serious
before. When one goes through this process of public commitment, the process of
social reinforcement makes real the commitment in a sense that is almost
impossible to internalize without such peer recognition.
All of these things can come up regardless of how "rational" both partners
happen to be. Konkvistador elsewhere in this comment thread asked
If you want children, and you forsee yourself having a lot of complex values
relating to the well-
5[anonymous]11y
Why would anyone want to get engaged? But I do second the request for this data.
Edit: Removed "in the world "
"Why in the world would anyone [X]?" comes off as starting with a strong opinion that [X] is a bad idea, rather than actually asking for information about motives.
Better?
In any case, as we discussed below, my original interpretation was that this is
about the general desirability of [X]. I also obviously implied I've heard
strong reasons against [X] but few convincing ones in its favour.
Woman: Yay I want to get married with the man I love! Does anyone have any advice?
Man: Marriage is a bad idea. I can't see why anyone would want that.
Woman: I'm allowed to want things! You are being mean.
Man: Don't try and chain the poor guy with whom I suddenly identify!
Woman: I hate you and my fear of instability and falling out of love that you now represent! I want to wear a wedding dress and a pretty ring on my hand!
Now I'm wondering what would've happened if my boyfriend had made the post.
-1GLaDOS11y
I find this sexist! But true.
In any case it was sweet sweet drama
[http://i0.kym-cdn.com/entries/icons/original/000/005/742/sweetjesus.jpg].(^_^)
6NancyLebovitz11y
It's better.
I would say that "I'm surprised that you're planning on [X], considering [list
of drawbacks]" would work at least as well.
I was surprised at Alicorn (who's generally a calm poster) saying that she was
allowed to want things. It seemed weirdly out of line with the discussion. When
I saw the beginning of the thread again, "why in the world" jumped out at me as
aggressive.
Something that's showing more clearly to me on another reread is that you
genuinely didn't see what you might have done that was problematic.
I'm wondering if there's something odd going on at your end-- I don't think you
usually misread things the way you misread Alicorn's original request.
It could be a cultural or language barrier: the phrase "why in the world would you X" has a literal Slovenian equivalent that, I now think, carries very different connotations. Much more surprise and much less disapproval than in English.
This phrase might have set the conversation off on the wrong foot, since later on the seemingly unprovoked hostility and evasiveness may have caused me to respond by hardening up and even escalating.
It is also possible that, since I have recently had IRL discussions regarding marriage, I may have just thrown out some arguments at Alicorn that were originally crafted for someone else. If that was the case, then we both became pretty emotional in the discussion because of its relevance to our personal lives. :/
No.
Taking out "in the world" tones it down, in the same way that taking the spikes
out of a club tones it down. "Why would anyone..." is still a rhetorical
question asserting that anyone who does is a dolt. You do the same in another
comment: "Why would anyone make a lifetime commitment?"
Clearly, many people do get engaged, do get married, do make lifetime
commitments. A majority of people, even, at least here in the West; I do not
know how it is in Slovenia. (The disadvantageous tax regime you have in Slovenia
was done away with long ago in the UK: married couples can elect to be taxed as
separate individuals.) But saying "Why would anyone do such a thing" does not
invite discussion, it shuts it off. If you actually wanted to know people's
reasons, you would actually ask them, and listen to the answers.
8[anonymous]11y
Ok fair enough, can you propose a better way to ask?
I was interested in the following:
* Why do so few people who want to get married question the wisdom of such a
step considering its high costs and dubious benefits (in comparison to say
cohabitation)?
* Why do people in general want to get married? (this is different from the
question of whether it is rational to marry)
* Is it rational for most people who marry to do so?
I was not specifically interested in why Alicorn wanted to get married. I did
want to provoke, maybe even shock people into thinking about it beyond cached
thoughts.
7TimS11y
When I got married, I thought about this a little, and I concluded that marriage
(but not cohabitation) would:
* Create a partner with a non-betrayal stance towards me (i.e. would not defect
against me in a one-shot Prisoner's dilemma game).
* Signal to others that I and partner had a non-betrayal stance towards each
other.
It's an interesting question why marriage is able to create that first effect,
and I don't have a good answer. I do think that many people go into marriage
without thinking of these considerations, and I think that is a mistake. In
other words, I think that the answer to your third question is no. But that
depends on society's tolerance of cohabitation, which wasn't always society's
attitude.
0[anonymous]11y
I think this is because it is an act that is supposed to entail the
following:
* shared reproductive interests
* shared financial interests
* at least some pair bonding (oxytocin makes you love your kids and love your
  romantic partner, in extreme cases enough to be willing to sacrifice
  yourself)
3TimS11y
To me, those things are implied by the "non-betrayal" stance. Agreement on
childbearing, shared financial interest, and pair bonding (i.e. shared emotional
interest) are consequences of the fundamental agreement not to betray. As you
note, each of those could be achieved without marriage - but most people act as
if this were not possible. I'm just as confused as you.
That is different from noting the incidental benefits of legal marriage - if I
die without a will, my wife gets my property. To achieve the same effect without
marriage, I'd have to actually create a will. And so on for all the legal rights
I want my wife to have (e.g. de facto legal guardian if I am incapacitated). But
I want my wife to have those rights because of the non-betrayal stance, and if
that wasn't our relationship, I wouldn't want her to have those rights.
2Richard_Kennaway11y
Ask as if you did not already have a presumption about what the answer should
be. Telling people they're idiots unless they agree with you will only convince
them you are someone they do not want to talk to.
Your latest reformulation is better -- the key substitution is "do" instead of
"would". The second and third bullet points are absolutely fine, but in the
first and in the final paragraph you're still sticking your own oar in with
"considering its high costs and dubious benefits" and "shock people into
thinking about it beyond cached thoughts". There are, as it happens, people who
have thought carefully about what arrangement they want to make on these
matters, and without having to be told about cached thoughts either, but you
will never hear them with that approach.
2MileyCyrus11y
The high cost of divorce can make a lifetime commitment more robust. It also
helps with taxes, visas and health care.
The high cost of divorce can make a lifetime commitment more robust.
Committing a crime together and vowing to remain silent produces high costs. Exchanging embarrassing pictures or other blackmail material can also produce high costs. I don't know, this seems like a fake reason; if you set out to optimize for the robustness of a long-range commitment, would you really end up with anything like marriage? Especially since more than 50% of all marriages end in divorce, it doesn't seem, as it is currently practised, to be very good at its supposed function.
In addition, unlike other imaginable mechanisms, this one isn't symmetric unless it is a same-sex marriage. The penalties are on average significantly higher for the male participant. This just seems plain unfair and bad signalling, though I admit asymmetric arrangements can be a feature, not a bug.
Also I seem to be able to maintain long term relationships with friends and family members without state enforced contracts. Why should a particular kind of relationship between two people require it? And even further why a contract that can't be much customized... (read more)
Note: 50% of all marriages, not 50% of all married people. The people who get
married (and divorced) several times drag down the overall success rate.
Googling around revealed various claims of the success rate for first marriage:
more than 70 percent
[https://unitedfamiliesinternational.wordpress.com/2011/02/14/myth-buster-monday-%E2%80%9Cfifty-percent-of-marriages-end-in-divorce-%E2%80%9D/],
50 to 60 percent [http://www.divorcerate.org/], 70 to 90 percent
[http://www.truthorfiction.com/rumors/d/divorce.htm], etc.
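The marriages-vs-people distinction above can be made concrete with a toy calculation. All numbers here are hypothetical, chosen only to show how serial divorcers push the per-marriage rate above the per-person rate:

```python
# Hypothetical population: 90 people marry once and stay married;
# 10 people marry 3 times each, and each of those marriages ends in divorce.
stable_people = 90
serial_people = 10
marriages_per_serial = 3

marriages = stable_people + serial_people * marriages_per_serial  # 120 marriages
divorces = serial_people * marriages_per_serial                   # 30 divorces

# Per-marriage rate counts each failed marriage separately.
divorce_rate_marriages = divorces / marriages
# Per-person rate counts each person who ever divorced once.
divorce_rate_people = serial_people / (stable_people + serial_people)

print(f"{divorce_rate_marriages:.0%} of marriages end in divorce")  # 25%
print(f"{divorce_rate_people:.0%} of people ever divorce")          # 10%
```

In this toy population a quarter of marriages fail, yet only a tenth of the people ever divorce, which is exactly the gap Viliam_Bur describes.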
1Douglas_Knight11y
I find Stevenson-Wolfers [https://www.nber.org/papers/w12944.pdf] (alt
[https://scholar.google.com/scholar?cluster=7284524747075786497] alt
[https://wayback.archive.org/web/20130406093201/http://www.nber.org/papers/w12944.pdf])
a credible source. It says that 50% of first marriages in the US from the 70s
lasted 25 years. Marriages from the 80s look slightly more stable. The best
graph is Figure 2 [https://i.imgur.com/LBdphWo.png] on page 37.
8MileyCyrus11y
I'm white and educated. Those stats don't apply to me.
There is much more cash and property shared in a typical long-term romantic
relationship than a typical platonic. I wouldn't share an apartment with my
brother unless he signed a state-enforced contract.
Can you explain to me what disadvantages marriage has for a person who wants to
raise children with the help of a long-term romantic partner?
2[anonymous]11y
Can you explain what advantages it has that are exclusive to it?
Considering the ceremony itself is often a major financial burden, shouldn't we
seek good reasons in its favour rather than responses to "why not!"? But to
proceed on this line anyway, from anecdotal evidence in my circle of
acquaintances custody battles seem to be much more nasty and hard on the
children among those who are married. The relationships between men and their
children are also much more damaged and strained.
1MileyCyrus11y
I'm not trying to debate you, I'm trying to optimize my life. I want to
reproduce with a partner who will stick around for decades, at least. If you
have a compelling case for why my life would be better without marriage, I'd
love to hear it.
Is there any legal precedent that gives a never-married man better access to his
children than a divorced man?
7shokwave11y
I shall call this the "loving, consensual model" of a relationship:
* Preferring to be with someone if and only if they prefer to be with you,
* and them preferring to be with you if and only if you prefer to be with them,
* and you prefer to be with them, satisfying 2,
* and they prefer to be with you, satisfying 1,
* gives us a situation of cohabitation, which is sufficient for your stated
needs.
Given that you should be indifferent between cohabitation and marriage, and
marriage has non-zero costs, why would you prefer marriage?
The reason is insidious, cloaked in the positive connotations of marriage and
love, but nevertheless incontrovertible.
You don't prefer to be with someone if and only if they prefer to be with you.
You prefer to be with someone.
Of course, it's illegal to directly enforce this preference. Unlawful
imprisonment, and all that. So you'd go with the consensual model, but raise the
costs of them preferring to be separate as much as legally possible. Like, say,
requiring a contract that is costly and messy to break.
9MixedNuts11y
Yes, if I have various kinds of entanglement and dependence on someone, such as
living together, sharing finances and expensive objects like a car, sharing
large parts of our social lives, and possibly having children, I don't want them
to be able to leave at a moment's notice. This doesn't make me feel especially
evil.
3shokwave11y
Really? I'd suggest you don't want them to have a positive expected value on
leaving at a moment's notice rather than wanting them restricted, but in any
case... the solution is to structure your entanglements and dependence in such a
way that this opportunity is available to them if they desire it, not to try to
force contracts and obligations onto them in order to restrict them.
5MixedNuts11y
Can you rephrase? I'm thinking things like "If we have a kid, we shouldn't split
up even if we're a little unhappy" and "If I've quit my job to be a homemaker,
don't stop giving me money without warning". Are you saying to avoid getting in
such situations in the first place? Or are you saying not to marry jerks who
will leave you and the kids in the dust?
3shokwave11y
Yes; the kid increases the cost of splitting up, so being a little unhappy
doesn't justify making the kid really unhappy. You don't need a marriage for
this, you just need to think about the situation for five minutes.
Pay partially into an account that is available to the homemaker and not you;
with a month's head start the account will have enough to pay out to the
homemaker for at least a month. This is equivalent to a month's warning. It took
me like fifteen seconds to think of this and it's already better than the
equivalent financial situation within a marriage.
There are just better ways of doing everything marriage needs to do, except
imposing a huge cost on leaving, so it seems duplicitous to prefer marriage to
these other ways if you ostensibly only care about the other things.
0TheOtherDave11y
There are lots of situations where precommitting to doing something at some
future time, and honoring that precommittment at that time regardless of whether
I desire to do that thing at that time, leaves me better off than doing at every
moment what I prefer to do at that moment.
"Marriage" as you've formulated it here -- namely, a precommitment to remain
"with" someone (whatever that actually means) even during periods of my life
when I don't actually desire to be "with" them at that moment -- might be one of
those situations.
It's not clear to me that the connotations of "insidious" would apply to
marriage in that scenario, nor that the implication that marriage is not loving
and consensual would be justified in that scenario.
0smk11y
I am legally married because I need the legal and financial benefits that
marriage provides in my country. However, in an ideal fantasy world, I wouldn't
need those benefits and I wouldn't be legally married. But I would still be
married! Just without government involvement. (BTW I have no interest in raising
kids.)
It's normal for people to hear "marriage" and think "legal marriage" but I hate
that.
2TheOtherDave11y
Can you clarify what you mean by "need," here? In particular, does it mean
something different than "benefit from"?
0smk11y
Um, I dunno. I'm just referring to the fact that I don't have my own source of
health insurance, so I need to be on his, but in an ideal world I would have my
own.
4[anonymous]11y
Why do you need to marry someone to live with them for decades and raise
children? Are millions of people living happily in such arrangements doing
something wrong or sub-optimally? If you think different arrangements are better
for different people, why do you think you are a particular kind of person?
Can we taboo the word "marriage"?
No. But neither do married men have much better chances of such an outcome.
3Viliam_Bur11y
There is still a difference between "not much better" and "not better". I do not
know the exact number, but if contact with your children is an important part of
your utility function, then even increasing the chance by say 5% is worth doing,
and could justify the costs of marriage.
(Even if the family law is strongly biased against males, it may still be
rational for males to seek marriage.)
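Viliam_Bur's point can be sketched as a toy expected-value comparison. All numbers below are hypothetical placeholders, not claims about real costs or probabilities:

```python
# Toy expected-value sketch: is a 5-percentage-point increase in the
# probability of keeping contact with one's children worth the cost of
# marriage? All figures are invented for illustration.
value_of_contact = 1_000_000   # utility of contact with children (arbitrary units)
cost_of_marriage = 20_000      # ceremony, legal overhead, etc. (same units)

p_contact_unmarried = 0.70     # hypothetical baseline probability
p_contact_married = 0.75       # hypothetical baseline + 5 percentage points

ev_unmarried = p_contact_unmarried * value_of_contact
ev_married = p_contact_married * value_of_contact - cost_of_marriage

# The 5-point probability gain is worth 50,000 units here, which exceeds
# the 20,000-unit cost, so marriage wins under these assumed numbers.
print(ev_married > ev_unmarried)
```

The conclusion is entirely driven by the assumed magnitudes: if contact with one's children carries enough utility, even a small probability increase can dominate the up-front cost.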
0[anonymous]11y
I mean, I know this is a Western peculiarity, but it always strikes me as
essentially crazy how people in other such discussions I have had consistently
seem to mix up, conflate, and implicitly equate the following:
* traditional marriage
* legal concept of marriage
* religious marriage
* cohabitation with children
So easily! In Slovenia someone getting married at a Church has ZERO legal
consequences. Why would it? It is ridiculous to claim religious ceremonies and
legal categories should have anything to do with each other. Why should a priest
have the right to make legally binding arrangements? When someone decides to get
a civil marriage, they go to a magistrate and basically sign a contract; this
carries legal consequences. Living with someone for some time has some legal
consequences and the rights and responsibilities come pretty close to civil
marriage. All of these are also different from the implicit traditional
responsibilities and privileges people assume exist in a "marriage". And if
religious people get to call their rituals marriage, why can't I as a secular
person have a community of people call something marriage? As long as we are
clear this isn't civil marriage, the kind the state recognizes, there is no
possible harm in this, nor is it illegal in my country.
I don't see a good reason why societies want to forcibly conflate these separate
things (from what I understand, in the US the state actually meddles in people's
private lives by prosecuting people who live with children together with more
than one partner, and all marriages are a state affair).
6ArisKatsaris11y
Again in the interests of teaching you to communicate more efficiently: Whenever
you say "Why would anyone" when you already know that some people do this (and
it's not just some bizarre hypothetical/fictional world you're discussing), this
signals that it's mainly a rhetorical question and that you believe these people
to be just insane/irrational/not thinking clearly.
So, a question that signals an actual request for information better is "Why do
some people make lifetime commitments?"
As opposed to what percentage of non-marriage relationships?
5[anonymous]11y
Good catch. I guess considering the context of the debate with MileyCyrus a good
enough comparison would be the stability of relationships among people who
choose cohabitation with children.
3Alicorn11y
Watching the stars burn down won't be as much fun without him.
ETA: We're American, so Amerocentric advice is likely to be useful to us.
3[anonymous]11y
I'm sorry, but this is a nice-sounding and romantic, yet useless answer. It was
Valentine's day yesterday; I was bombarded with enough relationship-related
cached thoughts as it is.
Or are you saying the other person will literally die or refuse to ever interact
with you if you don't "marry" them? Also, do you expect US-government-granted
21st-century marriages to remain enforced then? Indeed, do you have any evidence
whatsoever that a stable relationship can last that long, or is likely to without
significant self-modification? In addition, why this crazy notion of honouring
exactly one person with such an honour? Isn't it better to wait until group
marriages are legalized?
If you don't feel like discussing the issue please acknowledge it directly.
You're being kind of a jerk. Your questions aren't relevant to the information I wanted; you're just picking on me because I brought up something vaguely related.
That having been said:
Yeah, I know about Valentine's day. That's why this was on my mind.
I don't think singlehood will kill my partner or cause him to shun me. (Although if I didn't poke him about cryo, he might cryocrastinate himself to room-temperatureness.) I'm not hoping that anyone will "enforce" anything about my prospective marriage.
My culture encourages permanent and public-facing relationships to be solidified with a party and thereafter called by a different name. In particular, it has caused me to assign value to producing children in this context rather than outside of it. I believe that getting married will affect my primate brain and the primate brains of my and my partner's families and friends in various ways, mostly positive. It will entitle me to use different words, which I want, and entitle me to wear certain jewelry, which I want, and allow me to summarize my inextricability from my partner very concisely to people in general, which I want. It will also allow me to get on my partner's health insurance.
Edit in response to edit: I'm poly, but my style of poly involves a primary relationship (this one). It doesn't seem at all unreasonable to go ahead and promote it to a new set of terms.
It seems cultural and perhaps even value differences are the root of how this conversation proceeded. Ok I think I understand now. I should have suspected this earlier, I was way too stuck in my local cultural context where among the young basically only the religious still marry and it is generally seen as an "old fashioned" thing to do.
As I said I didn't mean to be. I am genuinely curious why in the world someone
would do this because I haven't heard any good reasons in favour of it except
that it is "tradition" or that otherwise they'd be living in sin and fear of
punishment by a supernatural entity.
But I do apologize for any personal offence I may have inadvertently caused. I
did not mean to imply either you or your partner (about whom I know nothing!)
were particularly unsuited for this arrangement. I was questioning its
necessity or desirability in general. I generally have been pretty consistent at
questioning the value of this particular legally binding institution so it seems
unlikely that I wouldn't have posed the exact same question in response to
anyone else making such a request.
I will not apologize for posing uncomfortable questions. I don't want other
people respecting my own ugh fields [http://lesswrong.com/lw/21b/ugh_fields/], so
on LessWrong I generally don't bother avoiding poking into those of others.
-16Alicorn11y
-2drethelin11y
Picking on you? You responded to him. You're going out of your way to be
offended. You can feel free to not explain your viewpoints, but when someone
poses a question don't respond with a throw-away comment and then get annoyed it
gets responded to.
0Alicorn11y
It seems nicer than eloping.
6[anonymous]11y
I didn't mean to be rude, I was genuinely curious about the answer.
3J_Taylor11y
I truly hope that, one day, someone will answer the question that you actually
asked instead of a bunch of vaguely related questions. Unfortunately, this is
the most relevant article I could find. It's not that great.
http://stats.org/stories/2008/is_ideal_time_marry_nov10_08.html
[http://stats.org/stories/2008/is_ideal_time_marry_nov10_08.html]
1shminux11y
From your other comments it seems clear that expressing and projecting
attachment to this person has positive utility for you, even if it would change
little in your relationship. Is this his (I presume) view, as well? Do
either/any of you see any obvious negatives in being engaged and eventually
married? If not, why wait?
3Alicorn11y
"Why wait?" is a perfectly reasonable question, but simply answering it "let's
not!" probably doesn't yield the best expected value. (It might work perfectly
fine. It'd probably work perfectly fine. But it seems likely to be slightly less
conducive to everything being perfectly fine than some better-calibrated choice
of timing.)
0MileyCyrus11y
Questions I would consider (privately):
* If I knew this relationship didn't have long-term potential, would I break it
off?
* What would I need to know about this person in order to become engaged? What
would make me break it off?
* How much am I likely to learn about this person in the next
month/six-months/year? How can I learn what I need to know?
Try to avoid living together before marriage.
[http://www.sciencedaily.com/releases/2009/07/090713144122.htm].
7Alex_Altair11y
That seems like really dangerous advice to me. The article confirms my
suspicion:
The solution is not to avoid living together before marriage; the solution is to
break up when you know you should.
3Alicorn11y
Too late.
-2MileyCyrus11y
In that case, remind yourself that the costs of moving your stuff out are
trivial compared to the costs of continuing a poor relationship.
If you are looking for marriage, give yourself a deadline for deciding whether
to get engaged or break it off. Share your deadline with a brutally honest
friend. When the deadline comes, you and your friend can evaluate what you've
learned about your relationship and whether it's worth continuing.
2Alicorn11y
Thanks, but this is really not the sort of advice I need.
Me-and-the-relevant-person are, you know, in a healthy relationship that
consists significantly of conversations. I do not need to do anything cloak and
dagger here. I could probably just say "hey let's be engaged RIGHT NOW" and he'd
probably say "okay!" after some amount of thought. I'm just trying to figure out
if I risk torpedoing something I value by doing that now as opposed to in six
months or a year or whatever.
9[anonymous]11y
Getting married/engaged can involve drama and bad memories, because of the
necessity of considering such things as the Rehearsal party, Bachelor party,
Bachelorette party, Wedding party, and the Honeymoon.
For instance, due to a slight breakdown in communications, I ended up spending a
substantial amount of my Bachelor party being responsible for driving/watching
my underaged brother. He's a good little brother and it wasn't any one
particular person's fault. But that wasn't part of the "Series of fairy tale
events that I had been visualizing in my head."
I can probably think of about ten more anecdotes like that of around that time.
That one was actually one of the mild ones.
I'm under the impression many people give bog standard advice like the wedding
might be a fairy tale, but what about the marriage afterwards? I would like to
point out the reverse perspective: You may have a fairy tale marriage, but your
time period around your wedding is likely going to be a set of extremely
difficult feats in social event planning.
Actually, I'm curious what the effects of being more familiar with Less Wrong
when I got married would have been. I would have had more practice in lowering
my expectations and dispelling overly idealistic fantasies based on no evidence,
both of which from my current perspective seem like they would have been amazingly
useful skills to have during wedding planning.
This is not to say you can't have a perfect series of parties topped off by a
fantastic honeymoon. That actually does happen. I sincerely wish it happens for
you. But if I were to couch this in terms of advice to Michaelos 2008, I would
tell him that he should not EXPECT it to happen, because he's never done it
before and planning social events was never his or his soon to be wife's forte.
But honestly I'm not sure he would have had enough context to get that advice.
So in terms of your actual question about doing it now, six months from now, or
a year from now, I would say first di
5Alicorn11y
Thank you! I will update in favor of getting help from my socially-adept
friends, especially married ones. I will also attempt to aim my
drive-to-do-overcomplicated-socially-dramatic-things at this challenge when it
appears rather than expecting to accomplish it all with more ordinary
planning-of-stuff skills.
5smk11y
Two years is the time frame one always hears, isn't it? I only did a very quick
search but most of what I found seemed to be referring to the same study by Ted
Huston, and I didn't even find the study itself. My impression is that 2 years
(25 months, one article said) was the average time spent dating before marriage
(not before engagement, as you asked) for happy, stable couples, however they
judge that. So, not the most helpful.
But, it does kind of match my intuition that one should wait until New
Relationship Energy is mostly over before making that decision, and I often read
that NRE (though it's usually not called that in these articles) typically lasts
about 2 years (this matches my limited experience). Also, I'm monogamous, but
I'd guess that even if your NRE with Partner A has faded, NRE with Partner B
could spill over onto your other relationship(s) and affect your judgment there
too?
4Alicorn11y
I don't remember hearing 2 years, although it is relevant data that you have
done so. One complication is that we started dating two years ago, but were
broken up for somewhat more than a year in the middle before getting back
together. So we've spent less than two years dating, but about two years
conducting an extended empirical observation about whether we prefer being
together or not.
1MileyCyrus11y
I'm afraid I was projecting my own goals into your situation. Sorry.
I didn't mean to suggest your relationship was unhealthy. All I meant to say was
that you shouldn't let logistics become a trivial inconvenience.
[http://lesswrong.com/lw/f1/beware_trivial_inconveniences/]
0MixedNuts11y
I'm not sure which answer points to "Engage" here! I would guess "yes", since it
allows you to reason "...and I'm still around, which means I believe it has
long-term potential, which means we should get engaged". But "no" indicates
attachment to the person and a willingness to make the relationship work even if
it's rocky.
It seems a suspicious coincidence that our puny human ideas of justice would automatically be a) physically possible b) have reasonable cost, but this is a very popular belief.
I don't think it's suspicious at all. The legal tradition deliberately orders
its exponents to restrict its scope to enforceable laws without too major
backlashes. (I know there are legal maxims expressing these concepts, but they
just aren't coming to mind for some reason.)
EDIT: Mnemosyne popped up an example maxim: 'Ad impossibilia nemo tenetur' ('no one is bound to the impossible').
1mwengler11y
Puny compared to what?
2fubarobfusco11y
Indeed. There are no ideas of justice on exhibit other than human ones, so
calling them "puny" seems like merely saying nasty things about reality.
I think ClinicalTrials.gov [http://clinicaltrials.gov/ct2/home] might be what
you're looking for. For anything less than human clinical trials, you'd likely
need inside knowledge of the organization conducting the study/experiment.
0shminux11y
This question seems a bit vague. What kind of experiments? Why do you want to
know about them in advance?
-1MileyCyrus11y
Mostly psychology. I'm particularly interested in experiments that would have
political implications.
Because I want to be able to look at them and decide what kind of results would
support a theory versus undermine it, before I (and the world) become biased by
the actual results.
0shminux11y
Interesting. Maybe you can give examples of past experiments that had "political
implications" and what theory they may have falsified.
Having read a lot of philosophers talking of morality here, and having read a lot of economists talking of utility, I think I will concentrate on the economists.
I was going to say I think my utility is maximized by spending no more time on the philosophers and using that on economists instead. But of course someone who chose the philosophers might say she believes the moral thing to do is to study the morality instead of the utility.
In physics sometimes you get to a point where your calculation involves subtracting an infinite quantity from another in... (read more)
I think you'd be better off looking for cases where some great improbable wrong
occurred since no one was concerned about improbable events. That said, human
history requires some very large numbers, but nothing that needs ^s (up-arrow notation).
Presumably, the problems of friendly or unfriendly AI are just like the problems of friendly or unfriendly NI (Natural Intelligence). Intelligence seems more an agency, a tool, and friendliness or unfriendliness a largely orthogonal consideration. In the case of humans, I would imagine our values are largely dictated by "what worked." That is, societies and even subspecies with different values would undergo natural selection pressures proportional to how effective the values were at adding to survival and thrivance of the group possessing the... (read more)
The AI is not supposed to change it values, regardless of whether it is powerful
enough to realize them. Values are not up for grabs
[http://wiki.lesswrong.com/wiki/The_utility_function_is_not_up_for_grabs]. Once
the AI has some values it either wins and reshapes reality according to them or
loses. Changing the values is one form of losing. It seems that almost anything
that counts as a value system would object to changing an agent subscribing to
that system into an agent using something else, so the AI won't follow any
internal logic of value-change (unless some other agent forces it) and if it
changes its values it will be by mistake (so closer to a random walk). Part of
the idea of FAI is to build an AI that won't make those mistakes.
The ideas coming into your awareness are very strongly pre-filtered; creativity
is far from random noise. For one, the ideas are all relevant and somehow
extrapolated from your knowledge of the world. Some of them might seem stupid
but it's only because of the pre-selection -- they never get compared to the idea
of 'blue mesmerizingly up the slightly irreverent ladder, then dwarf the pegasus
with the quantum sprocket' (and even this still makes a lot of sense compared to
most random messages).
It counts as failure to preserve humanity. An AI that does that is probably
unfriendly (barring coercion by external powerful agents; Eliezer actually
wrote a story [http://lesswrong.com/lw/y4/three_worlds_collide_08/] about such a
scenario, though without AIs.)
Sure seems like it.
2mwengler11y
I agree but I don't think that changes my conclusions. In teaching humans to be
more creative, they are taught to pay more attention for a longer time to at
least some of the outlier ideas. Indeed, a lot of times I think the difference
between the intellectually curious and creative people I like to interact with
and the rest is that the rest have predecided a lot of things, turned their
thresholds for "unreal" ideas coming in to consciousness up higher than I have
turned mine. Maybe they are right more often than I am, but the real measure of
why they do this is that their ancestors who outsurvived a lot of other people
trying a lot of other things did that same level of filtering, and it resulted
in winning more wars, having more children that survived, killing more
competitors, or some combination of these and other results that constitute
selection pressures.
An AI in the process of FOOMing, which necessarily has the capacity to consider
a lot more ideas in a lot more detail than we do, what makes you think that AI
will constrain itself by the values it used to have? Unless you think we have
the same values as the first self-replicating molecules that began life on
earth, the FOOMing of Natural Intelligence (which has taken billions of years)
has been accompanied by value changes.
0mwengler11y
A remarkably strong claim.
My initial reaction is that humanity's values have certainly changed over time.
I think it would require some rather unattractive mental gymnastics to claim
that people who beat their children for their own good and people who owned
slaves and people who beat, killed, and/or raped either slaves or other people
they had vanquished as their right "really" had the same values we currently
have, but just hadn't really thought them through, or that our values applied in
their world would have lead us to similar beliefs about right and wrong.
I had even thought my own values had changed over my lifetime. I'm not as sure
of that, but what about that?
Certainly, it seems, as the human species has evolved its values have changed.
Do chimpanzees and bonobos have different values than we do, or the same? If the
same, I'd love to see your mental gymnastics to justify that, I would expect
them to be ugly. If different, does this mean that our common ancestor has
necessarily "lost," assuming its values were some intermediate between ours,
chimps, and bonobos, and all of its descendants have different values than it
had?
As I understand the word values, our values have changed over time, different
groups of humans have some different values from each other, and if there is a
"kernel" of common values in our species, that this kernel most likely differs
from the kernel of values in Homo neanderthalensis or other sentient predecessors
of modern homo sapiens.
So if NI (Natural Intelligence) in its evolution can change values (can it?)
with generally broad consensus that "we" have not lost in this process, why
would an AI be precluded from futzing with its values as it worked on
self-modifying to increase its intelligence?
-1APMason11y
Because, if the AI worked, it would consider the fact that if it changed its
values, they would be less likely to be maximised, and would therefore choose
not to change its values. If the AI wants the future to be X, changing itself so
that it wants the future to be Y is a poor strategy for achieving its aims - the
future will end up not-X if it does that. Yes, humans are different. We're not
perfectly rational. We don't have full access to our own values to begin with,
and if we did we might sometimes screw up badly enough that our values change.
An FAI ought to be better at this stuff than we are.
0mwengler11y
I think assuming an AI cannot employ a survival strategy which NI such as
ourselves are practically defined by seems extremely dangerous indeed. Perhaps
even more importantly, it seems extremely unlikely that an AI which has FOOMed
way past us in intelligence would be more limited than us in its ability to
change its own values as part of its self modification.
The ultimate value, in terms of selection pressures, is survival. I don't see a
mechanism by which something which can self modify will not ultimately wind up
with values that are more conducive to its survival than the ones it started out
with.
And I certainly would like to see why you assert this is true, are there
reasons?
1APMason11y
Yes, reasons:
The AI is not subject to selection pressure the same way we are: it does not
produce millions of slightly-modified children which then die or reproduce
themselves. It just works out the best way to get what it wants (approximately)
and then executes that action. For example, if what the AI values is its own
destruction, it destroys itself. That's a poor way to survive, but then in this
case the AI doesn't value its own survival. If there were a population of AIs
and some destroyed themselves, and some didn't, then yes there would be some
kind of selection pressure that led to there being more AIs of a non-suicidal
kind. But that's not the situation we're talking about here. A single AI,
programmed to do something self-destructive, will not look at its programming
and go "that's stupid" - the AI is its programming.
I think "more limited" is the wrong way to think of this. Being subject to
values-drift is rarely a good strategy for maximising your values, for obvious
reasons: if you don't want people to die, taking a pill that makes you want to
kill people is a really bad way of getting what you want. If you were acting
rationally, you wouldn't take the pill. If the AI is working, it will turn down
all such offers (if it doesn't, the person who created the AI screwed up). It's
we who are limited - the AI would be free from the limit of noisy values-drift.
-1TimS11y
Humans have changed values to maximize other values (such as survival)
throughout history. That's cultural assimilation in a nutshell. But some people
choose to maximize values other than survival (e.g. every martyr ever). And that
hasn't always been pointless - consider the value to the growth of Christianity
created by the early Christian martyrs.
If an AI were faced with the possibility of self-modifying to reduce its
adherence to value Y in order to maximize value X, then we would expect the AI
to do so only when value X was "higher priority" than value Y. Otherwise, we
would expect the AI to choose not to self-modify.
-2mwengler11y
Interesting. I think I may even agree with you. In that story each race would
need to conclude that the other races are "unfriendly". So Eliezer has written a
story in which all the NATURAL intelligences (except us of course) are
"unfriendly," and in which a human would need to agree that from the point of
view of the other intelligent races, human intelligence was "unfriendly."
Perhaps all intelligences are necessarily "unfriendly" to all other
intelligences. This could even apply at the micro level, perhaps each human
intelligence is "unfriendly" to all other human intelligences. This actually
looks pretty real and pretty much like what happens in a world where survival is
the only enforced value. Humans have the fascinating conundrum that even though
we are unfriendly to the other humans, we have a much better chance of surviving
and thriving by working with the other humans. The alliances and technical
abilities and so on are, if not balanced across all humans and all groups, at
least balanced enough across many of them so that the result is a plethora of
competing / cooperating intelligences where the jury is still out on who is the
ultimate winner. Breeding into us the ability (the value?) to treat "others" as
our allies against "the enemies" clearly has resulted in collective efforts of
cooperation that have produced quickly cascading production ability in our
species. "We" worried about the Nazis FOOMing and winning, we worried the
Soviets might FOOM and win. Our ancestors fought against every tribe that lived
5 miles away from them, before cultural evolution allowed them (us) to cooperate
in groups of hundreds of millions.
So in Eliezer's story, 3 NI's have FOOMed and then finally run into each other.
And they CANNOT resist getting up in each other's grills. And why not? What are
the chances that the final intelligence IF only one is left will have been one
which was shy about destroying potential competitors before they destroyed it?
I notice overconfidence bias and risk aversion seem to operate in opposite directions. Like, there's a 90% chance of something being true, you say it's 99% likely, and then you bet at 9 to 1 odds.
Do they tend to cancel? How well?
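A toy expected-value check, using the numbers from the comment above (the stakes and probabilities are illustrative assumptions, not data), suggests they can cancel exactly in this case:

```python
# Toy check of whether overconfidence and risk aversion cancel.
def expected_value(true_p, win_amount, lose_amount):
    """Expected profit per bet: win `win_amount` with probability `true_p`,
    otherwise lose `lose_amount`."""
    return true_p * win_amount - (1 - true_p) * lose_amount

true_p = 0.90  # the event's actual probability

# Risk-averse bet: despite privately reporting 99%, you only accept
# 9-to-1 odds, i.e. risk 9 units to win 1 -- the fair odds for 90%.
ev_cautious = expected_value(true_p, 1, 9)        # ~0: the biases cancel here

# Pure overconfidence: betting at the 99-to-1 odds a 99% belief implies.
ev_overconfident = expected_value(true_p, 1, 99)  # deeply negative
```

With these particular numbers the risk-averse 9-to-1 bet is exactly fair, while betting at the odds your overconfident estimate implies loses badly; whether the cancellation is this clean in general is the open question.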
A while ago Yvain posted on Prospect Theory, which I think is salient to your query.
A proposed law to require psychologists who testify in court to dress like wizards:
I had a somewhat chaotic phase in my romantic life a few years ago, and I just had the thought that a lot of it could be modeled as a result of non-transitive preferences. Specifically,
C preferred being single to being with A.
C preferred being with W to being single.
C preferred being with A to being with W.
I think all three of us could have been spared some heartache if we had figured out that was what was going on.
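The three preferences above form a cycle, which is easy to verify mechanically; a minimal sketch (the state labels are mine, purely illustrative):

```python
# Detect a cycle in a 'preferred to' relation via depth-first search.
def has_cycle(prefs):
    """Return True if the pairwise preferences contain a cycle,
    i.e. no consistent ranking of the options exists."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, []).append(worse)  # edge: better -> worse
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True   # back edge: we looped around to an ancestor
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

prefs = [("single", "with_A"),   # preferred being single to being with A
         ("with_W", "single"),   # preferred being with W to being single
         ("with_A", "with_W")]   # preferred being with A to being with W
print(has_cycle(prefs))  # True
```

A transitive preference set (say, A > B and B > C only) would return False; the True result here is exactly the non-transitivity the comment describes.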
Currently listening to the Grace-Hanson podcasts. Topics:
I'm coming to increasingly notice that maintaining a specific, regular sleep pattern is worth making sacrifices for. Specifically, if I go to bed around 10:30 PM and get up around 8 AM, I will wake up feeling energetic, productive and physically good. If I get up even a few hours later, or if I go to bed late but regardless get up at 8 in the morning, there's a very good chance that I will accomplish basically nothing on that day. It's weird how getting the timing so precisely correct seems to basically be the biggest determining factor in how my day will ... (read more)
Why Life Extension is Immoral
Summary: Years of life are in finite supply. It is morally better that these be spread among relatively more people rather than concentrated in the hands of a relative few. Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
The argument would be limited to certain age ranges; an unborn fetus or newborn infant might justly be sacr... (read more)
As far as I'm concerned it is just because the baby has more years left. If I had to choose between a healthy old person with several expected years of happy and productive life left, versus a child who was terminally ill and going to die in a year regardless, I'd save the old person. It is unfair that an innocent person should ever have to die, and unfairness is not diminished merely by afflicting everyone equally.
When working on a primarily mental task (example: web browsing, studying, programming), I sometimes find myself coming up with an idea, forgetting the idea itself, but remembering I have come up with it. Backtracking through the mental steps may help recall it, but often I'll not be able to recall it at all, ending in frustration. Is there a technical term for this I can google / does anyone have an idea what this is?
I've just seen the Wikipedia article for the ‘overwhelming gain paradox’:
... (read more)
Ever feel like you contribute nothing to society? Well, it's time to consider volunteering!
Can't an AI escape the dangers of Pascal's Mugging by having a decision theory that weighs against having exploitable decision theories according to the measure of their exploitability?
Scumbag brain is a newish meme of the generic image macro variety. Some are pretty entertaining and relevant to the LW ideaspace, but most are lowest common denominator-style "broke up with girlfriend, makes you feel sad about it for weeks".
Since there seem to be quite a few lesswrongers involved in making games, or interested in doing it as a hobby, I just created a little mailing-list for general chat - talk about your projects, rant about design theory, ask for advice, talk about how to apply lesswrong ideas to game development, talk about how to apply game development ideas to lesswrong's goals, etc.
I've recently figured out an all too obvious workaround for the vanishing-spaces bug. Considering that links, italics and bold cover 95% of all formatting needs, I think some people may find use for it (it has cured my distaste for writing articles on LW).
It is such an obvious solution, yet I didn't think of it for months.
I'm trying to keep a dream journal, but when I wake up I keep having this cognitive block preventing me from writing my dreams down; it will do anything necessary to stop me. I regret this later every single time. Does anyone know how to prevent this? I don't think I can do it at that time, so it probably has to be something done beforehand, as I go to bed.
Do con-artistry and the Dark Arts share similar strategies? If so, any in particular?
If you're interested, I would be willing to sell you some.
OPERA group finds source of their 60 nanosecond discrepancy.
Are there any guidelines, or does anyone have any significant thoughts, about mentioning Less Wrong in text in fanfiction (or any other type of fiction)? I know a lot of people came here by way of HP:MoR, myself included, but I'm interested if anyone has reasons that they believe it would be a bad idea, or an especially good one.
Caring about conscious minds where you can't observe them existing carries basically the same philosophical problems as caring about pretty statues (and other otherwise desirable or undesirable arrangements of matter) where you can't observe them.
Agree or disagree?
What does the outside view say about when during the course of a relationship it is wisest to get engaged (in terms of subsequent marital longevity/quality)? Data that doesn't just turn up obvious correlations with religious groups who forbid divorce is especially useful.
"Why in the world would anyone [X]?" comes off as starting with a strong opinion that [X] is a bad idea, rather than actually asking for information about motives.
This whole conversation was such a cliché.
Woman: Yay I want to get married with the man I love! Does anyone have any advice?
Man: Marriage is a bad idea. I can't see why anyone would want that.
Woman: I'm allowed to want things! You are being mean.
Man: Don't try and chain the poor guy with whom I suddenly identify!
Woman: I hate you and my fear of instability and falling out of love that you now represent! I want to wear a wedding dress and a pretty ring on my hand!
Man: I'm sorry.
Woman: Apology accepted.
It could be a cultural or language barrier; the phrase "why in the world would you X" has a literal Slovenian equivalent that I now think carries very different connotations: much more surprise and much less disapproval than in English.
This phrase might have set the conversation off on the wrong foot, since later on what seemed like unprovoked hostility and evasiveness may have caused me to respond by hardening up and even escalating.
It is also possible that since I have recently had IRL discussions regarding marriage, I may have just thrown out some arguments at Alicorn that were originally crafted for someone else. If that was the case then we both became pretty emotional in the discussion because of its relevance to our personal lives. :/
Why would anyone make a lifetime commitment?
Committing a crime together and vowing to remain silent produces high costs. Exchanging embarrassing pictures or other blackmail material can also produce high costs. I don't know, this seems like a fake reason; I mean, if you wanted to optimize for robustness of long-range commitment and set out to design for it, would you really end up with anything like marriage? Especially since more than 50% of all marriages end in divorce, it doesn't seem to be, as it is practised currently, very good at its supposed function.
In addition, unlike other imaginable mechanisms, this one isn't symmetric unless it is a same-sex marriage. The penalties are on average significantly higher for the male participant. This just seems plain unfair and bad signalling, though I admit asymmetric arrangements can be a feature rather than a bug.
Also, I seem to be able to maintain long-term relationships with friends and family members without state-enforced contracts. Why should a particular kind of relationship between two people require one? And even further, why a contract that can't be much customized... (read more)
You're being kind of a jerk. Your questions aren't relevant to the information I wanted; you're just picking on me because I brought up something vaguely related.
That having been said:
Yeah, I know about Valentine's day. That's why this was on my mind.
I don't think singlehood will kill my partner or cause him to shun me. (Although if I didn't poke him about cryo, he might cryocrastinate himself to room-temperatureness.) I'm not hoping that anyone will "enforce" anything about my prospective marriage.
My culture encourages permanent and public-facing relationships to be solidified with a party and thereafter called by a different name. In particular, it has caused me to assign value to producing children in this context rather than outside of it. I believe that getting married will affect my primate brain and the primate brains of my and my partner's families and friends in various ways, mostly positive. It will entitle me to use different words, which I want, and entitle me to wear certain jewelry, which I want, and allow me to summarize my inextricability from my partner very concisely to people in general, which I want. It will also allow me to get on my partner's health insurance.
Edit in response to edit: I'm poly, but my style of poly involves a primary relationship (this one). It doesn't seem at all unreasonable to go ahead and promote it to a new set of terms.
It seems cultural, and perhaps even value, differences are at the root of how this conversation proceeded. OK, I think I understand now. I should have suspected this earlier; I was way too stuck in my local cultural context, where among the young basically only the religious still marry and it is generally seen as an "old-fashioned" thing to do.
I was told this would be a more appropriate place than the discussion board for this post:
I'm taking a class on heuristics and biases. In this class we have the option to read one of two "applied" books on the subject. The books are "The Panic Virus: A True Story of Medicine, Science, and Fear" by Seth Mnookin and "Sold on Language: How Advertisers Talk to You and What This Says About You" by Julie Sedivy and Greg Carlson.
I'd like to know if anyone has read one or both of these books, and how well or poorly they mesh with less wrong rationality.
Thanks, Jeremy
I want to read the paper "Three theorems on recursive enumeration" by Friedberg. It doesn't seem to be available on the open web. Can someone with journal access help me out?
In this comment I pegged a web site as being nothing but a link farm, filled with ads and worthless "content". A couple of ideas occurred to me.
The web site looks to me as if it was actually written by human beings, but computer-generated prose of this sort might not be far off. The better the programmers get at simulating humans (and the spammers are certainly trying), the better humans will have to become at not being mistaken for computers. If you sound like a spambot, it doesn't matter if you really aren't, you'll get tuned out.
And I wonder h... (read more)
What's the best way to find out about scientific experiments before they are conducted?