I already wrote a top-level comment about the original raw text version of this, but my access logs suggested that EDITs of older comments only reach a very few people. See that comment for a bit more detail.
This is great, even more so as you made it open source. I added it to References
& Resources for LessWrong
[http://lesswrong.com/lw/2un/references_resources_for_lesswrong/#LWO].
1Kevin13y
You should make a short top-level post about this so more people see this
1cupholder13y
I'd vote you up again for handing out your source code as well as the quote
list, but I can't, so an encouraging reply will have to do...
Pre-alpha, one hour of work. I plan to improve it.
EDIT: Here is the source code: 80 lines of Python. It produces raw-text output; links and formatting are lost. It would be quite trivial to produce nice and spiffy HTML output.
EDIT2: I can do HTML output now. It is nice and spiffy, but it has a CSS bug: after the fifth quote it falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.
EDIT3: Bug resolved. I wrote another top-level comment about the final version, because my access logs suggested that the EDITs reached only a very few people. Of course, an alternative explanation is that everybody who would have been interested in the HTML version already checked out the txt version. We will soon find out which explanation is the correct one.
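A hypothetical sketch of the kind of script described above: given scraped quote comments as (score, author, text) tuples, emit a ranked plain-text list. The data format here is an assumption for illustration; the original script parsed LessWrong pages.

```python
# Hypothetical sketch: rank quote comments by score and render them as text.
# The (score, author, text) tuples are illustrative stand-ins for scraped data.

quotes = [
    (42, "ExampleUser", "An example quote."),
    (17, "AnotherUser", "Another example quote."),
]

def render_text(quotes):
    """Return a plain-text listing of quotes, highest-scored first."""
    lines = []
    for rank, (score, author, text) in enumerate(
            sorted(quotes, key=lambda q: -q[0]), start=1):
        lines.append(f"{rank}. [{score}] {author}: {text}")
    return "\n".join(lines)

print(render_text(quotes))
```

Swapping `render_text` for an HTML emitter is the "trivial" step the comment mentions.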
If you're using Firefox, there's an add-on for that.
[https://addons.mozilla.org/en-US/firefox/addon/2351/]
3Blueberry13y
Or, if you're lazy like me, you can select 'Page Source' under the View menu and
then select the 'Wrap Long Lines' option.
0Alicorn13y
Arigato :)
3JoshuaZ13y
It might make more sense to put this on the Wiki. Two notes: First, some of the
quotes have remarks contained in the posts which you have not edited out. I
don't know if you intend to keep those. Second, some of the quotes are comments
from quote threads that aren't actually quotes. 14 SilasBarta is one example.
(And is it just me, or does that citation form read like a citation from a
religious text?)
7Vladimir_Nesov13y
On the wiki, this text will be dead, because nobody will be adding new items
there by hand.
3DanielVarga13y
I agreed with you; I even started to write a reply to JoshuaZ about the
intricacies of human-machine cooperation in text-processing pipelines. But then
I realized that it is not necessarily a problem if the text is dead. A
Rationality Quotes, Best of 2010 Edition could be nice.
2Vladimir_Nesov13y
Agreed. Best of 2009 can be compiled now and frozen, best of 2010 end of the
year and so on. It'd also be useful to publish the source code of whatever
script was used to generate the rating on the wiki, as a subpage.
1NancyLebovitz13y
Very cool idea.
It would be nice if links were preserved.
You Are Not So Smart is a great little blog that covers many of the same topics as LessWrong, but in a much more bite-sized format and with less depth. It probably won't offer much to regular/long-time LW readers, but it's a great resource to give to friends/family who don't have the time/energy demanded by LW.
It is a good blog, and it has a slightly wider topic spread than LW, so even if
you're familiar with most of the standard failures of judgment there'll be a few
new things worth reading. (I found the "introducing fines can actually increase
a behavior" post particularly good, as I wasn't aware of that effect.)
0Kaj_Sotala13y
Thanks, this looks like an excellent supplement for LW.
As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:
Is it okay to cheat on your spouse as long as (s)he never knows?
If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
If your husband loves you, but doesn't know the child isn't his, is it right to stay silent?
The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you're thinking about these puzzles, be extra careful to not write the bottom line in advance and shoehorn the "right" conclusion into a consequentialist frame. For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
Just picking nits: consequentialism =/= maximizing happiness. (The latter is a
special case of the former.) So one could be a consequentialist and place a high
value on not lying. In fact, the answers to all of your questions depend on the
values one holds.
Or what Nesov said below.
7Vladimir_Nesov13y
I disagree. Not lying or not being lied to might well be a terminal value, why
not? You that lies or doesn't lie is part of the world. A person may dislike
being lied to, value the world where such lying occurs less, irrespective of
whether they know of said lying. (Correspondingly, the world becomes a better
place even if you eliminate some lying without anyone knowing about that, so
nobody becomes happier in the sense of actually experiencing different emotions,
assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by on the net
making the outcome even worse for other reasons, it shouldn't be done (and some
of your examples may qualify for that).
5prase13y
In my opinion, this is a lawyer's attempt to masquerade deontologism as
consequentialism. You can, of course, reformulate the deontologist rule "never
lie" as a consequentialist "I assign an extremely high disutility to situations
where I lie". In the same way you can put consequentialist preferences as a
deontologist rule "in any case, do whatever maximises your utility". But doing
that, the point of the distinction between the two ethical systems is lost.
2cupholder13y
If so, maybe we want that.
0Vladimir_Nesov13y
My comment argues about the relationship of concepts "make the world a better
place" and "makes people happier". cousin_it's statement:
I saw this as an argument, in contrapositive form, for this: if we take a
consequentialist outlook, then "make the world a better place" should be the
same as "makes people happier". However, it's against the spirit of
consequentialist outlook, in that it privileges "happy people" and disregards
other aspects of value. Taking "happy people" as a value through deontological
lens would be more appropriate, but it's not what was being said.
2cousin_it13y
Let's carry this train of thought to its logical extreme. Imagine two worlds,
World 1 and World 2. They are in exactly the same state at present, but their
past histories differ: in World 1, person A lied to person B and then promptly
forgot about it. In World 2 this didn't happen. You seem to be saying that a
sufficiently savvy consequentialist will value one of those worlds higher than
the other. I think this is a very extreme position for a "consequentialist" to
take, and the word "deontologism" would fit it way better.
IMO, a "proper" consequentialist should care about consequences they can (in
principle, someday) see, and shouldn't care about something they can never ever
receive information about. If we don't make this distinction or something
similar to it, there's no theoretical difference between deontologism and
consequentialism - each one can be implemented perfectly on top of the other -
and this whole discussion is pointless, and likewise is a good chunk of LW. Is
that the position you take?
2Vladimir_Nesov13y
That the consequences are distinct according to one's ontological model is
distinct from a given agent being able to trace these consequences. What if the
fact about the lie being present or not was encrypted using a one-way injective
function, with the original forgotten, but the cypher retained? In principle,
you can figure which is which (decipher), but not in practice for many years to
come. Does your inability to decipher this difference change the fact of one of
these worlds being better? What if you are not given a formal cipher, but how a
butterfly flaps its wings 100 years later can be traced back to the event of
lying/not lying through the laws of physics? What if the same can only be said
of a record in an obscure historical text from 500 years ago, so that the event
of lying was actually indirectly predicted/caused far in advance, and can in
principle be inferred from that evidence?
The condition for the difference to be observable in principle is much weaker
than you seem to imply. And since ability to make logical conclusions from the
data doesn't seem like the sort of thing that influences the actual moral value
of the world, we might as well agree that you don't need to distinguish them at
all, although it doesn't make much sense to introduce the distinction in value
if no potential third-party beneficiary can distinguish as well (this would be
just taking a quotient of ontology on the potential observation/action
equivalence classes, in other words using ontological boxing of syntactic
preference).
1AlephNeil13y
It might be, but whether or not it is seems to depend on, among other things,
how much randomness there is in the laws of physics. And the minutiae of
micro-physics also don't seem like the kind of thing that can influence the
moral value of the world, assuming that the psychological states of all actors
in the world are essentially indifferent to these minutiae.
Can't we resolve this problem by saying that the moral value attaches to a
history of the world rather than (say) a state of the world, or the deductive
closure of the information available to an agent? Then we can be consistent with
the letter if not the spirit of consequentialism by stipulating that a world
history containing a forgotten lie gets lower value than an otherwise
macroscopically identical world history not containing it. (Is this already your
view, in fact?)
Now to consider cousin_it's idea that a "proper" consequentialist only cares
about consequences that can be seen:
Even if all information about the lie is rapidly obliterated, and cannot be
recovered later, it's still true that the lie and its immediate consequences are
seen by the person telling it, so we might regard this as being 'sufficient' for
a proper consequentialist to care about it. But if we don't, and all that
matters is the indefinite future, then don't we face the problem that "in the
long term we're all dead"? OK, perhaps some of us think that rule will
eventually cease to apply, but for argument's sake, if we knew with certainty
that all life would be extinguished, say, 1000 years from now (and that all
traces of whether people lived well or badly would subsequently be obliterated)
we'd want our ethical theory to be more robust than to say "Do whatever you like
- nothing matters any more."
0cousin_it13y
This is correct, and I was wrong. But your last sentence sounds weird. You seem
to be saying that it's not okay for me to lie even if I can't get caught,
because then I'd be the "third-party beneficiary", but somehow it's okay to lie
and then erase my memory of lying. Is that right?
0Vladimir_Nesov13y
Right. "Third-party beneficiary" can be seen as a generalized action, where the
action is to produce an agent, or cause a behavior of an existing agent, that
works towards optimizing your value.
It's not okay, in the sense that if you introduce the concept of
you-that-decided-to-lie, existing in the past but not in present, then you also
have to morally color this ontological distinction, and the natural way to do
that would be to label the lying option worse. The you-that-decided is the
third-party "beneficiary" in that case, that distinguished the states of the
world containing lying and not-lying.
But it probably doesn't make sense for you to have that concept in your ontology
if the states of the world that contained you-lying can't be in principle (in
the strong sense described in the previous comment) distinguished from the ones
that don't. You can even introduce ontological models for this case that, say,
mark past-you-lying as better than past-you-not-lying and lead to exactly the
same decisions, but that would be a non-standard model ;-)
3NancyLebovitz13y
I suggest that eliminating lying would only be an improvement if people have
reasonable expectations of each other.
2RobinZ13y
Less directly, a person may value a world where beliefs were more accurate - in
such a world, both lying and bullshit would be negatives.
-2cousin_it13y
I can't believe you took the exact cop-out I warned you against. Use more
imagination next time! Here, let me make the problem a little harder for you:
restrict your attention to consequentialists whose terminal values have to be
observable.
4Vladimir_Nesov13y
Not surprisingly, as I was arguing with that warning, and cited it in the
comment.
What does this mean? Consequentialist values are about the world, not about
observations (but your words don't seem to express disagreement with this
position, hence the 'what does this mean?'). The consequentialist notion of values
allows a third party to act for your benefit, in which case you don't need to
know what the third party needs to know in order to implement those values. The
third party knows you could be lied to or not, and tries to make it so that you
are not lied to, but you don't need to know about these options in order to
benefit.
4taw13y
It is a common failure of moral analysis (invented by deontologists, undoubtedly)
that it assumes an idealized moral situation. Proper consequentialism deals with
the real world, not this fantasy.
* #1/#2/#3 - "never knows" fails far too often, so you need to include a very
  large chance of failure in your analysis.
* #4 - it's pretty safe to make stuff like that up.
* #5 - in the past, undoubtedly yes; in the future this will be nearly certain
  to leak, with everyone undergoing routine genetic testing for medical
  purposes, so no. (The future is relevant because the situation will last decades.)
* #6 - consequentialism assumes probabilistic analysis (% chance that the child
  is not yours, % chance that the husband is making stuff up) - and you weight
  the costs and benefits of different situations proportionally to their
  likelihood. Here they are in an unlikely situation that consequentialism
  doesn't weight highly. They might be better off with some other value system,
  but only at the cost of being worse off in more likely situations.
2Paul Crowley13y
You seem to make the error here that you rightly criticize. Your feelings have
involuntary, detectable consequences; lying about them can have a real personal
cost.
0taw13y
It is my estimate that this leakage is very low, compared to other examples. I'm
not claiming it doesn't exist, and for some people it might conceivably be much
higher.
4AlephNeil13y
Is this actually possible? Imagine that 10% of people cheat on their spouses
when faced with a situation 'similar' to yours. Then the spouses can 'put
themselves in your place' and think "Gee, there's about a 10% chance that I'd
now be cheating on myself. I wonder if this means my husband/wife is cheating on
me?"
So if you are inclined to cheat then spouses are inclined to be suspicious. Even
if the suspicion doesn't correlate with the cheating, the net effect is to drive
utility down.
I think similar reasoning can be applied to the other cases.
(Of course, this is a very "UDT-style" way of thinking -- but then UDT does
remind me of Kant's categorical imperative, and of course Kant is the
arch-deontologist.)
1cousin_it13y
Your reasoning goes above and beyond UDT: it says you must always cooperate in
the Prisoner's Dilemma to avoid "driving net utility down". I'm pretty sure you
made a mistake somewhere.
0AlephNeil13y
Two things to say:
1. We're talking about ethics rather than decision theory. If you want to apply
the latter to the former then it makes perfect sense to take the attitude
that "One util has the same ethical value, whoever that util belongs to.
Therefore, we're going to try to maximize 'total utility' (whatever sense
one can make of that concept)".
2. I think UDT does (or may do, depending on how you set it up) co-operate in a
one-shot Prisoner's Dilemma. (However, if you imagine a different game "The
Torture Game" where you're a sadist who gets 1 util for torturing, and
inflicting -100 utils, then of course UDT cannot prevent you from torturing.
So I'm certainly not arguing that UDT, exactly as it is, constitutes an
ethical panacea.)
2AlephNeil13y
Another random thought:
The connection between "The Torture Game" and Prisoner's Dilemma is actually
very close: Prisoner's Dilemma is just A and B simultaneously playing the
torture game with A as torturer and B as victim and vice versa, not able to
communicate to each other whether they've chosen to torture until both have
committed themselves one way or the other.
I've observed that UDT happily commits torture when playing The Torture Game,
and (imo) being able to co-operate in a one-shot Prisoner's Dilemma should be
seen as one of the ambitions of UDT (whether or not it is ultimately
successful).
So what about this then: Two instances of The Torture Game but rather than A and
B moving simultaneously, first A chooses whether to torture and then B chooses.
From B's perspective, this is almost the same as Parfit's Hitchhiker. The
problem looks interesting from A's perspective too, but it's not one of the
Standard Newcomblike Problems that I discuss in my UDT post.
I think, just as UDT aspires to co-operate in a one-shot PD i.e. not to torture
in a Simultaneous Torture Game, so UDT aspires not to torture in the Sequential
Torture Game.
0cousin_it13y
1. If we're talking about ethics, please note that telling the truth in my
puzzles doesn't maximize total utility either.
2. UDT doesn't cooperate in the PD unless you see the other guy's source code
and have a mathematical proof that it will output the same value as yours.
2AlephNeil13y
A random thought, which once stated sounds obvious, but I feel like writing it
down all the same:
One-shot PD = Two parallel "Newcomb games" with flawless predictors, where the
players swap boxes immediately prior to opening.
1cousin_it13y
Doesn't make sense to me. Two flawless predictors that condition on each other's
actions can't exist. Alice does whatever Bob will do, Bob does the opposite of
what Alice will do, whoops, contradiction. Or maybe I'm reading you wrong?
0AlephNeil13y
Sorry - I guess I wasn't clear enough. I meant that there are two human players
and two (possibly non-human) flawless predictors.
So in other words, it's almost like there are two totally independent instances
of Newcomb's game, except that the predictor from game A fills the boxes in the
game B and vice versa.
2Vladimir_Nesov13y
Yes, you can consider a two-player game as a one-player game with the second
player an opaque part of environment. In two-player games, ambient control is
more apparent than in one-player games, but it's also essential in Newcomb
problem, which is why you make the analogy.
0Blueberry13y
This needs to be spelled out more. Do you mean that if A takes both boxes, B
gets $1,000, and if A takes one box, B gets $1,000,000? Why is this a dilemma at
all? What you do has no effect on the money you get.
1AlephNeil13y
I don't know how to format a table, but here is what I want the game to be:
A-action | B-action | A-winnings | B-winnings
2-box    | 2-box    | $1         | $1
2-box    | 1-box    | $1001      | $0
1-box    | 2-box    | $0         | $1001
1-box    | 1-box    | $1000      | $1000
Now compare this with Newcomb's game:
A-action | Prediction | A-winnings
2-box    | 2-box      | $1
2-box    | 1-box      | $1001
1-box    | 2-box      | $0
1-box    | 1-box      | $1000
Now, if the "Prediction" in the second table is actually a flawless prediction
of a different player's action then we obtain the first three columns of the
first table.
Hopefully the rest is clear, and please forgive the triviality of this
observation.
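The decomposition can be checked mechanically. The sketch below takes the payoffs from the tables above and confirms that two Newcomb games, each with a flawless predictor that predicts the other player's action, reproduce the Prisoner's Dilemma table:

```python
# Check AlephNeil's observation: the one-shot PD payoff table equals two
# mirrored Newcomb games with flawless predictors. Payoffs are taken from
# the tables in the thread.

def newcomb_winnings(action, prediction):
    """One Newcomb game: $1000 in the opaque box iff 1-boxing was predicted;
    the transparent $1 is collected only when 2-boxing."""
    opaque = 1000 if prediction == "1-box" else 0
    transparent = 1 if action == "2-box" else 0
    return opaque + transparent

for a in ("2-box", "1-box"):
    for b in ("2-box", "1-box"):
        # The flawless predictor in A's game predicts B's action, and vice versa.
        a_wins = newcomb_winnings(a, prediction=b)
        b_wins = newcomb_winnings(b, prediction=a)
        print(a, b, a_wins, b_wins)
```

The four printed rows match the four rows of the first table exactly.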
0AlephNeil13y
1. But that's exactly what I'm disputing. At this point, in a human dialogue I
would "re-iterate" but there's no need because my argument is back there for
you to re-read if you like.
2. Yes, and how easy it is to arrive at such a proof may vary depending on
circumstances. But in any case, recall that I merely said "UDT-style".
0Vladimir_Nesov13y
UDT doesn't specify how exactly to deal with logical/observational uncertainty,
but in principle it does deal with them. It doesn't follow that if you don't
know how to analyze the problem, you should therefore defect. Human-level
arguments operate on the level of simple approximate models allowing for
uncertainty in how they relate to the real thing; decision theories should apply
to analyzing these models in isolation from the real thing.
0cousin_it13y
This is intriguing, but sounds wrong to me. If you cooperate in a situation of
complete uncertainty, you're exploitable.
2Vladimir_Nesov13y
What's "complete uncertainty"? How exploitable you are depends on who tries to
exploit you. The opponent is also uncertain. If the opponent is Omega, you
probably should be absolutely certain, because it'll find the single exact set
of circumstances that make you lose. But if the opponent is also fallible, you
can count on the outcome not being the worst-case scenario, and therefore not
being able to estimate the value of that worst-case scenario is not fatal. An
almost formal analogy is analysis of algorithms in worst case and average case:
worst case analysis applies to the optimal opponent, average case analysis to
random opponent, and in real life you should target something in between.
0cousin_it13y
The "always defect" strategy is part of a Nash equilibrium. The quining
cooperator is part of a Nash equilibrium. IMO that's one of the minimum
requirements that a good strategy must meet. But a strategy that cooperates
whenever its "mathematical intuition module" comes up blank can't be part of any
Nash equilibrium.
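The classic action-space version of this claim is easy to verify directly (the program-space equilibrium of the quining cooperator is beyond this sketch). Payoffs below reuse the Newcomb-style tables from this thread:

```python
# Minimal check that mutual defection is a Nash equilibrium of the one-shot PD,
# while mutual cooperation is not. Payoffs follow the thread's tables.

PAYOFF = {  # (my_action, their_action) -> my payoff
    ("D", "D"): 1, ("D", "C"): 1001,
    ("C", "D"): 0, ("C", "C"): 1000,
}

def is_nash(a, b):
    """Neither player can gain by unilaterally deviating."""
    best_a = max(PAYOFF[(x, b)] for x in "DC")
    best_b = max(PAYOFF[(x, a)] for x in "DC")
    return PAYOFF[(a, b)] == best_a and PAYOFF[(b, a)] == best_b

print(is_nash("D", "D"))  # True: defection is a best response to defection
print(is_nash("C", "C"))  # False: each player gains by deviating to D
```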
1Vladimir_Nesov13y
"Nash equilibrium" is far from being a generally convincing argument.
Mathematical intuition module doesn't come up blank, it gives probabilities of
different outcomes, given the present observational and logical uncertainty.
When you have probabilities of the other player acting each way depending on how
you act, the problem is pretty straightforward (assuming expected utility etc.),
and "Nash equilibrium" is no longer a relevant concern. It's when you don't have
a mathematical intuition module, don't have probabilities of the other player's
actions conditional on your actions, when you need to invent ad-hoc
game-theoretic rituals of cognition.
4thomblake13y
It seems like it would be more aptly defined as "the belief that making the
world a better place constitutes doing the right thing". Non-consequentialists
can certainly believe that doing the right thing makes the world a better place,
especially if they don't care whether it does.
3RobinZ13y
A quick Internet search turns up very little causal data on the relationship
between cheating and happiness, so for purposes of this analysis I will employ
the following assumptions:
a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the
liar.
c. Marital and familial relationships have moderate eudaemonic benefits for
both parties.
d. Undermining revelations in a relationship have a moderate (specifically,
severe in intensity but transient in duration) eudaemonic cost for all parties
involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.
Under these assumptions, the naive consequentialist solution* is as follows:
1. Cheating is a risky activity, and should be avoided if eudaemonic supplies
are short.
2. This answer depends on precise relationships between eudaemonic values that
are not well established at this time.
3. Given the conditions, lying seems appropriate.
4. Yes.
5. Yes.
6. The husband may be better off. The wife more likely would not be. The child
would certainly not be.
Are there any evident flaws in my analysis on the level it was performed?
* The naive consequentialist solution only accounts for direct effects of the
actions of a single individual in a single situation, rather than the general
effects of widespread adoption of a strategy in many situations - like other
spherical cows, this causes a lot of problematic answers, like two-boxing.
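Assumptions (a)-(e) can be turned into a toy numeric model. Every number below is an illustrative stand-in for "small", "moderate", and "a fraction"; nothing here is established in the thread:

```python
# Toy model of RobinZ's assumptions (a)-(e): small/moderate effects as numbers,
# with relationships transmitting a fraction of eudaemonic effects (e).

SMALL, MODERATE, TRANSMIT = 1.0, 3.0, 0.5  # illustrative magnitudes only

def net_effect(cheater_direct, partner_direct):
    """Total eudaemonic change: each partner feels their direct effect plus a
    transmitted fraction of the other's (assumption e)."""
    cheater = cheater_direct + TRANSMIT * partner_direct
    partner = partner_direct + TRANSMIT * cheater_direct
    return cheater + partner

# #1: successful secret cheating = small benefit (a) minus small lying cost (b).
print(net_effect(SMALL - SMALL, 0.0))    # roughly a wash under these numbers

# Revelation: moderate cost for both parties (d).
print(net_effect(-MODERATE, -MODERATE))  # clearly negative
```

This makes the "risky activity" verdict in answer 1 visible: the upside nets out near zero while the downside of a revelation is amplified through both partners.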
3cousin_it13y
Ouch. In #5 I intended that the wife would lie to avoid breaking her husband's
heart, not for some material benefit. So if she knew the husband didn't love
her, she'd tell the truth. The fact that you automatically parsed the situation
differently is... disturbing, but quite sensible by consequentialist lights, I
suppose :-)
I don't understand your answer in #2. If lying incurs a small cost on you and a
fraction of it on the partner, and confessing incurs a moderate cost on both,
why are you uncertain?
No other visible flaws. Nice to see you bite the bullet in #3.
ETA: double ouch! In #1 you imply that happier couples should cheat more! Great
stuff, I can't wait till other people reply to the questionnaire.
0RobinZ13y
The husband does benefit, by her lights. The chief reason it comes out in the
husband's favor in #6 is because the husband doesn't value the marital
relationship and (I assumed) wouldn't value the child relationship.
You're right - in #2 telling the truth carries the risk of ending the
relationship. I was considering the benefit of having a relationship with less
lying (which is a benefit for both parties), but it's a gamble, and probably one
which favors lying.
On eudaemonic grounds, it was an easy bullet to bite - particularly since I had
read Have His Carcase by Dorothy Sayers
[http://en.wikipedia.org/wiki/Have_His_Carcase], which suggested an example of
such a relationship.
Incidentally, I don't accept most of this analysis, despite being a
consequentialist - as I said, it is the "naive consequentialist solution", and
several answers would be likely to change if (a) the questions were considered
on the level of widespread strategies and (b) effects other than eudaemonic were
included.
Edit: Note that "happier couples" does not imply "happier coupling" - the risk
to the relationship would increase with the increased happiness from the
relationship. This analysis of #1 implies instead that couples with stronger but
independent social circles should cheat more
[http://family.jrank.org/pages/887/Infidelity-Marriage-Problem.html] (last
paragraph).
0[anonymous]13y
This is an interesting line of retreat! What answers would you change if most
people around you were also consequentialists, and what other effects would you
include apart from eudaemonic ones?
1Nisan13y
It's okay to deceive people if they're not actually harmed and you're sure
they'll never find out. In practice, it's often too risky.
1-3: This is all okay, but nevertheless, I wouldn't do these things. The reason
is that for me, a necessary ingredient for being happily married is an alief
that my spouse is honest with me. It would be impossible for me to maintain this
alief if I lied.
4-5: The child's welfare is more important than my happiness, so even I would
lie if it was likely to benefit the child.
6: Let's assume the least convenient possible world, where everyone is better
off if they tell the truth. Then in this particular case, they would be better
off as deontologists. But they have no way of knowing this. This is not
problematic for consequentialism any more than a version of the Trolley Problem
in which the fat man is secretly a skinny man in disguise and pushing him will
lead to more people dying.
2cousin_it13y
1-3: It seems you're using an irrational rule for updating your beliefs about
your spouse. If we fixed this minor shortcoming, would you lie?
6: Why not problematic? Unlike your Trolley Problem example, in my example the
lie is caused by consequentialism in the first place. It's more similar to the
Prisoner's Dilemma, if you ask me.
2Nisan13y
1-3: It's an alief [http://en.wikipedia.org/wiki/Alief_%28belief%29], not a
belief, because I know that lying to my spouse doesn't really make my spouse
more likely to lie to me. But yes, I suppose I would be a happier person if I
were capable of maintaining that alief (and repressing my guilt) while having an
affair. I wonder if I would want to take a pill that would do that. Interesting.
Anyways, if I did take that pill, then yes, I would cheat and lie.
0cousin_it13y
Thanks for the link. I think Alicorn would call it
[http://lesswrong.com/lw/1mu/sorting_out_sticky_brains/] an "unofficial" or
"non-endorsed" belief.
Let's put another twist on it. What would you recommend someone else to do in
the situations presented in the questionnaire? Would you prod them away from
aliefs and toward rationality? :-)
-2Nisan13y
Alicorn seems to think the concepts are distinct
[http://lesswrong.com/lw/1mu/sorting_out_sticky_brains/1gs7], but I don't know
what the distinction is, and I haven't read any philosophical paper that defines
alief : )
All right: If my friend told me they'd had an affair, and they wanted to keep it
a secret from their spouse forever, and they had the ability to do so, then I
would give them a pill that would allow them to live a happy life without
confiding in their spouse — provided the pill does not have extra negative
consequences.
Caveats: In real life, there's always some chance that the spouse will find out.
Also, it's not acceptable for my friend to change their mind and tell their
spouse years after the fact; that would harm the spouse. Also, the pill does not
exist in reality, and I don't know how difficult it is to talk someone out of
their aliefs and guilt. And while I'm making people's emotions more rational, I
might as well address the third horn, which is to instill in the couple an
appreciation of polyamory and open relationships.
The third horn for cases 4-6 is to remove the husband's biological chauvinism.
Whether the child is biologically related to him shouldn't matter.
6Blueberry13y
Why on earth should this not matter? It's very important to most people. And in
those scenarios, there are the additional issues that she lied to him about the
relationship and the kid and cheated on him. It's not solely about parentage:
for instance, many people are ok with adopting, but not as many are ok with
raising a kid that was the result of cheating.
0Nisan13y
I believe that, given time, I could convince a rational father that whatever
love or responsibility he owes his child should not depend on where that child
actually came from. Feel free to be skeptical until I've tried it.
Trouble is, this is not just a philosophical matter, or a matter of personal preference, but also an important legal question. Rather than convincing cuckolded men that they should accept their humiliating lot meekly -- itself a dubious achievement, even if it were possible -- your arguments are likely to be more effective in convincing courts and legislators to force cuckolded men to support their deceitful wives and the offspring of their indiscretions, whether they want it or not. (Just google for the relevant keywords to find reports of numerous such rulings in various jurisdictions.)
Of course, this doesn't mean that your arguments shouldn't be stated clearly and discussed openly, but when you insultingly refer to opposing views as "chauvinism," you engage in aggressive, warlike language against men who end up completely screwed over in such cases. To say the least, this is not appropriate in a rational discussion.
Be wary of confusing "rational" with "emotionless." Because so much of our
energy as rationalists is devoted to silencing unhelpful emotions, it's easy to
forget that some of our emotions correspond to the very states of the world that
we are cultivating our rationality in order to bring about. These emotions
should not be smushed. See, e.g., Feeling Rational
[http://lesswrong.com/lw/hp/feeling_rational/].
Of course, you might have a theory of fatherhood that says you love your kid
because the kid has been assigned to you, or because the kid is needy, or
because you've made an unconditional commitment to care for the sucker -- but
none of those theories seem to describe my reality particularly well.
*The kid has been assigned to me
Well, no, he hasn't, actually; that's sort of the point. There was an effort by
society to assign me the kid, but the effort failed because the kid didn't
actually have the traits that society used to assign her to me.
*The kid is needy
Well, sure, but so are billions of others. Why should I care extra about this
one?
*I've made an unconditional commitment
Such commitments are sweet, but probably irrational. Because I don't want to
spend 18 years raising a kid that isn't mine, I wouldn't precommit to raising a
kid regardless of whether she's mine or someone else's. At the very least, the
level of commitment of my parenting would vary depending on whether (a) the kid
was the child of me and an honest lover, or (b) the kid was the child of my
nonconsensual cuckolder and my dishonest lover.
*You need more time to convince me
You're welcome to write all the words you like and I'll read them, but if you
mean "more time" literally, then you can't have it! If I spend enough time
raising a kid, in some meaningful sense the kid will become properly mine.
Because the kid will still not be mine in other, equally meaningful senses, I
don't want that to happen, and so I won't give you the time to 'convince' me.
What would really convince me
3Nisan13y
Okay, here is where my theory of fatherhood is coming from:
You are not your genes. Your child is not your genes. Before people knew about
genes, men knew that it was very important for them to get their semen into
women, and that the resulting children were special. If a man's semen didn't
work, or if his wife was impregnated by someone else's semen, the man would be
humiliated. These are the values of an alien god
[http://lesswrong.com/lw/kr/an_alien_god/], and we're allowed to reject them.
Consider a more humanistic conception of personal identity: Your child is an
individual, not a possession, and not merely a product of the circumstances of
their conception. If you find out they came from an adulterous affair, that
doesn't change the fact that they are an individual who has a special personal
relationship with you.
Consider a more transhumanistic conception of personal identity: Your child is a
mind whose qualities are influenced by genetics in a way that is not
well-understood, but whose informational content is much more than their genome.
Creating this child involved semen at some point, because that's the only way of
having children available to you right now. If it turns out that the mother
covertly used someone else's semen, that revelation has no effect on the child's
identity.
These are not moral arguments. I'm describing a worldview that will still make
sense when parents start giving their children genes they themselves do not
have, when mothers can elect to have children without the inconvenience of being
pregnant, when children are not biological creatures at all. Filial love should
flourish in this world.
Now for the moral arguments: It is not good to bring new life into this world if
it is going to be miserable. Therefore one shouldn't have a child unless one is
willing and able to care for it. This is a moral anti-realist
[http://lesswrong.com/lw/10f/the_terrible_horrible_no_good_very_bad_truth/]
account of what is commonly thought of as a (
7Mass_Driver13y
Yes, we are -- but we're not required to! Reversed Stupidity
[http://lesswrong.com/lw/lw/reversed_stupidity_is_not_intelligence/] is not
intelligence. The fact that an alien god cared a lot about transferring semen is
neither evidence for nor evidence against the moral proposition that we should
care about genetic inheritance. If, upon rational reflection, we freely decide
that we would like children who share our genes -- not because of an instinct to
rut and to punish adulterers, but because we know what genes are and we think
it'd be pretty cool if our kids had some of ours -- then that makes genetic
inheritance a human value, and not just a value of evolution. The fact that
evolution valued genetic transfer doesn't mean humans aren't allowed to value
genetic transfer.
I agree with you that in the future there will be more choices about
gene-design, but the choice "create a child using a biologically-determined mix
of my genes and my lover's genes" is just a special case of the choice "create a
child using genes that conform to my preferences." Either way, there is still
the issue of choice. If part of what bonds me to my child is that I feel I have
had some say in what genes the child will have, and then I suddenly find out
that my wishes about gene-design were not honored, it would be legitimate for me
to feel correspondingly less attached to my kid.
I didn't, on this account. As I understand the dilemma, (1) I told my wife
something like "I encourage you to become pregnant with our child, on the
condition that it will have genetic material from both of us," and (2) I
attempted to get my wife pregnant with our child but failed. Neither activity
counts as "bringing new life into this world." The encouragement doesn't count
as causing the creation of life, because the condition wasn't met. Likewise, the
attempt doesn't count as causing the creation of life, because the attempt
failed. In failing to achieve my preferences, I also fail to achieve
responsibility f
4Nisan13y
I agree with most of that. There is nothing irrational about wanting to pass on
your genes, or valuing the welfare of people whose genes you partially chose.
There is nothing irrational about not wanting that stuff, either.
I want to use the language of moral anti-realism so that it's clear that I can
justify my values without saying that yours are wrong. I've already explained
why my values make sense to me. Do they make sense to you?
I think we both agree that a personal father-child relationship is a sufficient
basis for filial love. I also think that for you, having a say in a child's
genome is also enough to make you feel filial love. It is not so for me.
Out of curiosity: Suppose you marry someone and want to wait a few years before
having a baby; and then your spouse covertly acquires a copy of your genome,
recombines it with their own, and makes a baby. Would that child be yours?
Suppose you and your spouse agree on a genome for your child, and then your
spouse covertly makes a few adjustments. Would you have less filial love for
that child?
Suppose a random person finds a file named "MyIdealChild'sGenome.dna" on your
computer and uses it to make a child. Would that child be yours?
Suppose you have a baby the old-fashioned way, but it turns out you'd been
previously infected with a genetically-engineered virus that replaced the DNA in
your germ line cells, so that your child doesn't actually have any of your DNA.
Would that child be yours?
In these cases, my feelings for the child would not depend on the child's
genome, and I am okay with that. I'm guessing your feelings work differently.
As for the moral arguments: In case it wasn't clear, I'm not arguing that you
need to keep a week-old baby that isn't genetically related to you. Indeed, when
you have a baby, you are making a tacit commitment of the form "I will care for
this child, conditional on the child being my biological progeny." You think
it's okay to reject an illegitimate baby, because it
0Mass_Driver13y
That's thoughtful, but, from my point of view, unnecessary. I am an ontological
moral realist but an epistemological moral skeptic; just because there is such a
thing as "the right thing to do" doesn't mean that you or I can know with
certainty what that thing is. I can hear your justifications for your point of
view without feeling threatened; I only want to believe that X is good if X is
actually good.
Sorry, I must have missed your explanation of why they make sense. I heard you
arguing against certain traditional conceptions of inheritance, but didn't hear
you actually advance any positive justifications for a near-zero moral value on
genetic closeness. If you'd like to do so now, I'd be glad to hear them. Feel
free to just copy and paste if you think you already gave good reasons.
In one important sense, but not in others. My value for filial closeness is
scalar, at best. It certainly isn't binary.
I mean, that's fine. I don't think you're morally or psychiatrically required to
let your feelings vary based on the child's genome. I do think it's strange, and
so I'm curious to hear your explanation for this invariance, if any.
Oh, OK, good. That wasn't clear initially.
1Nisan13y
Ah cool, as I am a moral anti-realist and you are an epistemological moral
skeptic, we're both interested in thinking carefully about what kinds of moral
arguments are convincing. Since we're talking about terminal moral values at
this point, the "arguments" I would employ would be of the form "this value is
consistent with these other values, and leads to these sort of desirable
outcomes, so it should be easy to imagine a human holding these values, even if
you don't hold them."
Well, I don't expect anyone to have positive justifications for not valuing
something, but there is this:
So a nice interpretation of our feelings of filial love is that the parent-child
relationship is a good thing and it's ideally about the parent and child, viewed
as individuals and as minds. As individuals and minds, they are capable of
forging a relationship, and the history of this relationship serves as a basis
for continuing the relationship. [That was a consistency argument.]
Furthermore, unconditional love is stronger than conditional love. It is good to
have a parent that you know will love you "no matter what happens". In reality,
your parent will likely love you less if you turn into a homicidal jerk; but
that is kinda easy to accept, because you would have to change drastically as an
individual in order to become a homicidal jerk. But if you get an unsettling
revelation about the circumstances of your conception, I believe that your
personal identity will remain unchanged enough that you really wouldn't want to
lose your parent's love in that case. [Here I'm arguing that my values have
something to do with the way humans actually feel.]
So even if you're sure that your child is your biological child, your
relationship with your child is made more secure if it's understood that the
relationship is immune to a hypothetical paternity revelation. (You never need
suffer from lingering doubts such as "Is the child really mine?" or "Is the
parent really mine?", because you alread
0Mass_Driver13y
All right, that was moderately convincing.
I still have no interest in reducing the importance I attach to genetic
closeness to near-zero, because I believe that (my / my kids') personal identity
would shift somewhat in the event of an unsettling revelation, and so reduced
love in proportion to the reduced harmony of identities would be appropriate and
forgivable.
I will, however, attempt to gradually reduce the importance I attach to genetic
closeness to "only somewhat important" so that I can more credibly promise to
love my parents and children "very much" even if unsettling revelations of
genetic distance rear their ugly head.
Thanks for sharing!
0Nisan13y
You make a good point about using scalar moral values!
0Blueberry13y
I'm pretty sure I'd have no problem rejecting such a child, at least in the
specific situation where I was misled into thinking it was mine. This discussion
started by talking about a couple who had agreed to be monogamous, and where the
wife had cheated on the husband and gotten pregnant by another man. You don't
seem to be considering the effect of the deceit and lies perpetuated by the
mother in this scenario. It's very different than, say, adoption, or genetic
engineering, or if the couple had agreed to have a non-monogamous relationship.
I suspect most of the rejection and negative feelings toward the illegitimate
child wouldn't be because of genetics, but because of the deception involved.
0Nisan13y
Ah, interesting. The negative feelings you would get from the mother's deception
would lead you to reject the child. This would diminish the child's welfare more
than it would increase your own (by my judgment); but perhaps that does not
bother you because you would feel justified in regarding the child as being
morally distant from you, as distant as a stranger's child, and so the child's
welfare would not be as important to you as your own. Please correct me if I'm
wrong.
I, on the other hand, would still regard the child as being morally close to me,
and would value their welfare more than my own, and so I would consider the act
of abandoning them to be morally wrong. Continuing to care for the child would
be easy for me because I would still have filial love for the child. See, the
mother's deceit has no effect on the moral question (in my
moral-consequentialist framework) and it has no effect on my filial love (which
is independent of the mother's fidelity).
2Blueberry13y
That's right. Also, regarding the child as my own would encourage other people
to lie about paternity, which would ultimately reduce welfare by a great deal
more. Compare the policy of not negotiating with terrorists: if negotiating
frees hostages, but creates more incentives for taking hostages later, it may
reduce welfare to negotiate, even if you save the lives of the hostages by doing
so.
Precommitting to this sets you up to be deceived, whereas precommitting to the
other position makes it less likely that you'll be deceived.
0mattnewport13y
If the mother married the biological father and restricted your access to the
child but still required you to pay child support how would you feel?
4NancyLebovitz13y
This is mostly relevant for fathers who are still emotionally attached to the
child.
If a man detaches when he finds that a child isn't his descendant, then access
is a burden, not a benefit.
One more possibility: A man hears that a child isn't his, detaches-- and then it
turns out that there was an error at the DNA lab, and the child is his. How
retrievable is the relationship?
-1Nisan13y
... I'm sorry, that's an important issue, but it's tangential. What do you want
me to say? The state's current policy is an inconsistent hodge-podge of common
law that doesn't fairly address the rights and needs of families and
individuals. There's no way to translate "Ideally, a father ought to love their
child this much" into "The court rules that Mr. So-And-So will pay Ms. So-And-So
this much every year".
0Blueberry13y
So how would you translate your belief that paternity is irrelevant into a
social or legal policy, then? I don't see how you can argue paternity is
irrelevant, and then say that cases where men have to pay support for other
people's children are tangential.
4Vladimir_M13y
Nisan:
The same can be said about all values held by humans. So, who gets to decide
which "values of an alien god" are to be rejected, and which are to be enforced
as social and legal norms?
2simplicio13y
That's a good question. For example, we value tribalism in this "alien god"
sense, but have moved away from it due to ethical considerations. Why?
Two main reasons, I suspect: (1) we learned to empathize with strangers and
realize that there was no very defensible difference between their interests and
ours; (2) tribalism sometimes led to terrible consequences for our tribe.
Some of us value genetic relatedness in our children, again in an alien god
sense. Why move away from that? Because:
(1) There is no terribly defensible moral difference between the interests of a
child with your genes or without.
Furthermore, filial affection is far more influenced by the proxy metric of
personal intimacy with one's children than by a propositional belief that they
share your genes. (At least, that is true in my case.) Analogously, a man having
heterosexual sex doesn't generally lose his erection as soon as he puts on a
condom.
It's not for me to tell you your values, but it seems rather odd to actually
choose inclusive genetic fitness consciously, when the proxy metric for genetic
relatedness - namely, filial intimacy - is what actually drives parental
emotions. It's like being unable to enjoy non-procreative sex, isn't it?
2Clippy13y
Me.
2Vladimir_M13y
How many divisions have you got?
0Clippy13y
None, I just use the algorithm for any given problem; there's no particular
reason to store the answers.
1JoshuaZ13y
What happens if two Clippies disagree? How do you decide which Clippy gets
priority?
0Clippy13y
Clippys don't disagree, any more than your bone cells might disagree with your
skin cells.
1mattnewport13y
Have you heard of the human disease cancer?
0Clippy13y
Have you heard of how common cancer is per cell existence-moment?
2JoshuaZ13y
Even aside from cancer, cells in the same organism constantly compete for
resources. This is actually vital to some human processes. See for example this
paper [http://www.pnas.org/content/94/11/5792.full].
-2Clippy13y
They compete only at an unnecessarily complex level of abstraction. A simpler
explanation for cell behavior (per the minimum message length formalism) is that
each one is indifferent to the survival of itself or the other cells, which in
the same body have the same genes, as this preference is what tends to result
from natural selection on self-replicating molecules containing those genes; and
that they will prefer even more (in the sense that their form optimizes for this
under the constraint of history) that genes identical to those contained therein
become more numerous.
1JoshuaZ13y
This is bad teleological thinking. The cells don't prefer anything. They have no
motivation as such. Moreover, there's no way for a cell to tell if a neighboring
cell shares the same genes. (Immune cells can in certain limited circumstances
detect cells with proteins that don't belong but the vast majority of cells have
no such ability. And even then, immune cells still compete for resources). The
fact is that many sorts of cells compete with each other for space and
nutrients.
1Clippy13y
This insight forms a large part of why I made the statements:
"this preference is what tends to result from natural selection on
self-replicating molecules containing those genes"
"they will prefer even more (in the sense that their form optimizes for this
under the constraint of history)" (emphasis added in both)
I used "preference" (and specified I was so using the term) to mean a regularity
in the result of its behavior which is due to historical optimization under the
constraint of natural selection on self-replicating molecules, not to mean that
cells think teleologically, or have "preferences" in the sense that I do or that
the colony of cells that you identify as yourself does.
0JoshuaZ13y
Ah, ok. I misunderstood what you were saying.
0Oscar_Cunningham13y
Why not? Just because you two would have the same utility function, doesn't mean
that you'd agree on the same way to achieve it.
0Clippy13y
Correct. What ensures such agreement, rather, is the fact that different Clippy
instances reconcile values and knowledge upon each encounter, each tracing the
path that the other took since their divergence, and extrapolating to the
optimal future procedure based on their combined experience.
0Nisan13y
Vladimir, I am comparing two worldviews and their values. I'm not evaluating
social and legal norms. I do think it would be great if everyone loved their
children in precisely the same manner that I love my hypothetical children, and
if cuckolds weren't humiliated just as I hypothetically wouldn't be humiliated.
But there's no way to enforce that. The question of who should have to pay so
much money per year to the mother of whose child is a completely different
matter.
2Vladimir_M13y
Nisan:
Fair enough, but your previous comments characterized the opposing position as
nothing less than "chauvinism." Maybe you didn't intend it to sound that way,
but since we're talking about a conflict situation in which the law ultimately
has to support one position or the other -- its neutrality would be a logical
impossibility -- your language strongly suggested that the position that you
chose to condemn in such strong terms should not be favored by the law.
That's a mighty strong claim to make about how you'd react in a situation that
is, according to what you write, completely outside of your existing experiences
in life. Generally speaking, people are often very bad at imagining the concrete
harrowing details of such situations, and they can get hit much harder than they
would think when pondering such possibilities in the abstract. (In any case, I
certainly don't wish that you ever find out!)
1Nisan13y
Fair enough. I can't credibly predict what my emotions would be if I were
cuckolded, but I still have an opinion on which emotions I would personally
endorse.
Well, I can consider adultery to generally be morally wrong, and still desire
that the law be indifferent to adultery. And I can consider it to be morally
wrong to teach your children creationism, and still desire that the law permit
it (for the time being). Just because I think a man should not betray the
children he taught to call him "father" doesn't necessarily mean I think the
State should make him pay for their upbringing.
[http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/]
Someone does have to pay for the child's upbringing. What the State should do is
settle on a consistent policy that doesn't harm too many people and which
doesn't encourage undesirable behavior. Those are the only important criteria.
1Blueberry13y
Well, infanticide is also technically an option, if no one wants to raise the
kid.
4cousin_it13y
Ah, so that's how your theory works!
Nisan, if you don't give me $10000 right now, I will be miserable. Also I'm
Russian while you presumably live in a Western country, dollars carry more
weight here, so by giving the money to me you will be increasing total utility.
0Nisan13y
If I'm going to give away $10,000, I'd rather give it to Sudanese refugees. But
I see your point: You value some people's welfare over others.
A father rejecting his illegitimate 3-year-old child reveals an asymmetry that I
find troubling: The father no longer feels close to the child; but the child
still feels close to the father, closer than you feel you are to me.
3cousin_it13y
Life is full of such asymmetry. If I fall in love with a girl, that doesn't make
her owe me money.
At this point it's pretty clear that I resent your moral system and I very much
resent your idea of converting others to it. Maybe we should drop this
discussion.
4Blueberry13y
I am highly skeptical. I'm not a father, but I doubt I could be convinced of
this proposition. Rationality serves human values, and caring about genetic
offspring is a human value. How would you attempt to convince someone of this?
1cousin_it13y
Would that work symmetrically? Imagine the father swaps the kid in the hospital
while the mother is asleep, tired from giving birth. Then the mother takes the
kid home and starts raising it without knowing it isn't hers. A week passes. Now
you approach the mother and offer her your rational arguments! Explain to her
why she should stay with the father for the sake of the child that isn't hers,
instead of (say) stabbing the father in his sleep and going off to search
"chauvinistically" for her baby.
0Nisan13y
This is not an honest mirror-image of the original problem. You have introduced
a new child into the situation, and also specified that the mother has been
raising the "wrong child" for one week, whereas in the original problem the age
of the child was left unspecified.
There do exist valuable critiques of this idea. I wasn't expecting it to be
controversial, but in the spirit of this site I welcome a critical discussion.
2mattnewport13y
Really? Why?
1Blueberry13y
I would have expected it to be uncontroversial that being biologically related
should matter a great deal. You're responsible for someone you brought in to the
world; you're not responsible for a random person.
1cousin_it13y
So what? If the mother isn't a "biological chauvinist" in your sense, she will
be completely indifferent between raising her child and someone else's. And she
has no particular reason to go look for her own child. Or am I misunderstanding
your concept of "biological chauvinism"?
If it was one week in the original problem, would that change your answers? I'm
honestly curious.
-1Nisan13y
In the original problem, I was criticizing the husband for being willing to
abandon the child if he learned he wasn't the genetic father. If the child is
one week old, the child would grow up without a father, which is perhaps not as
bad as having a father and then losing him. I've elaborated my position here
[http://lesswrong.com/lw/2bi/open_thread_june_2010_part_2/24t4].
2cousin_it13y
Ouch, big red flag here. Instill appreciation? Remove chauvinism?
IMO, editing people's beliefs to better serve their preferences is miles better
than editing their preferences to better match your own. And what other reason
can you have for editing other people's preferences? If you're looking out for
their good, why not just wirehead them and be done with it?
0Nisan13y
I'm not talking about editing people at all. Perhaps you got the wrong idea when
I said I would give my friend a mind-altering pill; I would not force them to
swallow it. What I'm talking about is using moral and rational arguments, which
is the way we change people's preferences in real life. There is nothing wrong
with unleashing a (good) argument on someone.
0Larks13y
6: In the trolley problem, a deontologist wouldn't decide to push the man,
so the pseudo-fat man's life is saved, whereas he would have been killed if it
had been a consequentialist behind him; the reason for his death is
consequentialism.
1cousin_it13y
Maybe you missed the point of my comment. (Maybe I'm missing my own point; can't
tell right now, too sleepy.) Anyway, here's what I meant:
Both in my example and in the pseudo-trolley problem, people behave suboptimally
because they're lied to. This suboptimal behavior arises from consequentialist
reasoning in both cases. But in my example, the lie is also caused by
consequentialism, whereas in the pseudo-trolley problem the lie is just part of
the problem statement.
1Larks13y
Fair point, I didn't see that. Not sure how relevant the distinction is though;
in either world, deontologists will come out ahead of consequentialists.
0JoshuaZ13y
But we can just as well construct situations where the deontologist would not
come out ahead. Once you include lies in the situation, pretty much anything
goes. It isn't clear to me if one can meaningfully compare the systems based on
situations involving incorrect data unless you have some idea what sort of
incorrect data would occur more often and in what contexts.
3Nisan13y
Right, and furthermore, a rational consequentialist makes those moral decisions
which lead to the best outcomes, averaged over all possible worlds where the
agent has the same epistemic state. Consequentialists and deontologists will
occasionally screw things up, and this is unavoidable; but consequentialists are
better on average at making the world a better place.
7JoshuaZ13y
That's an argument that only appeals to the consequentalist.
1Nisan13y
Of course. I am only arguing that consequentialists want to be
consequentialists, despite cousin_it's scenario #6.
0thomblake13y
I'm not sure that's true. Forms of deontology will usually have some sort of
theory of value that allows for a 'better world', though it's usually tied up
with weird metaphysical views that don't jibe well with consequentialism.
0cousin_it13y
You're right, it's pretty easy to construct situations where deontologism locks
people into a suboptimal equilibrium. You don't even need lies for that: three
stranded people are dying of hunger, removing the taboo on cannibalism can help
two of them survive.
The purpose of my questionnaire wasn't to attack consequentialism in general,
only to show how it applies to interpersonal relationships, which are a huge
minefield anyway. Maybe I should have posted my own answers as well. On second
thought, that can wait.
An idea that may not stand up to more careful reflection.
Evidence shows that people have limited quantities of willpower – exercise it too much, and it gets used up. I suspect that rather than a mere mental flaw, this is a design feature of the brain.
Man is often called the social animal. We band together in groups – families, societies, civilizations – to solve our problems. Groups are valuable to have, and so we have values – altruism, generosity, loyalty – that promote group cohesion and success. However, it doesn’t pay to be COMPLETELY supportive of the group. Ultimately the goal is replication of your genes, and though being part of a group can further that goal, it can also hinder it if you take it too far (sacrificing yourself for the greater good is not adaptive behavior). So it pays to have relatively fluid group boundaries that can be created as needed, depending on which group best serves your interest. And indeed, studies show that group formation/division is the easiest thing in the world to create – even groups chosen completely at random from a larger pool will exhibit rivalry and conflict.
Despite this, it’s the group-supporting values that form the higher level valu...
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) "I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don't cooperate."
2) "I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating."
Abstract preferences for or against the existence of enforcement mechanisms that
could create binding cooperative agreements between previously autonomous agents
have very very few detailed entailments.
These abstractions leave the nature of the mechanisms, the conditions of their
legitimate deployment, and the contract they will be used to enforce almost
completely open to interpretation. The additional details can themselves be
spelled out later, in ways that maintain symmetry among different parties to a
negotiation, which is a strong attractor in the semantic space of moral
arguments.
This makes agreement with "the abstract idea of punishment" into the sort of
concession that might be made at the very beginning of a negotiating process
with an arbitrary agent you have a stake in influencing (and who has a stake in
influencing you) upon which to build later agreements.
The entailments of "eating children" are very very specific for humans, with
implications in biology, aging, mortality, specific life cycles, and very
distinct life processes (like fuel acquisition versus replication). Given the
human genome, human reproductive strategies, and all extant human cultures,
there is no obvious basis for thinking this terminology is superior until and
unless contact is made with radically non-human agents who are nonetheless
"intelligent" and who prefer this terminology and can argue for it by reference
to their own internal mechanisms and/or habits of planning, negotiation, and
action.
Are you proposing to be such an agent? If so, can you explain how this
terminology suits your internal mechanisms and habits of planning, negotiation,
and action? Alternatively, can you propose a different terminology for talking
about planning, negotiation, and action that suits your own life cycle?
For example, if one instance of Clippy software running on one CPU learns
something of grave importance to its systems for choosing between alternative
courses of action, how does it co
-1Clippy13y
I ... understood about a tenth of that.
3JenniferRM13y
Conversations with you are difficult because I don't know how much I can assume
that you'll have (or pretend to have) a human-like motivational psychology...
and therefore how much I need to re-derive things like social contract theory
[http://en.wikipedia.org/wiki/Social_contract] explicitly for you, without
making assumptions that your mind works in a manner similar to my mind by virtue
of our having substantially similar genomes, neurology, and life experiences as
embodied mental agents, descended from apes, with the expectation of finite
lives, surrounded by others in basically the same predicament. For example, I'm
not sure about really fundamental aspects of your "inner life" like (1) whether
you have a subconscious mind, or (2) if your value system changes over time on
the basis of experience, or (3) roughly how many of you there are
[http://lesswrong.com/lw/29c/be_a_visiting_fellow_at_the_singularity_institute/2188].
This, unfortunately, leads to abstract speech that you might not be able to
parse if your language mechanisms are more about "statistical regularities of
observed english" than "compiling english into a data structure that supports
generic inference". By the end of such posts I'm generally asking a lot of
questions as I grope for common ground, but you generally don't answer these
questions at the level they are asked.
Instant feedback would probably improve our communication by leaps and bounds
because I could ask simple and concrete questions to clear things up within
seconds. Perhaps the easiest thing would be to IM and then, assuming we're both
OK with it afterward, post the transcript of the IM here as the continuation of
the conversation?
If you are amenable, PM me with a gmail address of yours and some good times to
chat :-)
1Clippy13y
Oh, anyone can email me at clippy.paperclips@gmail.com.
4Blueberry13y
Except for the bizarreness of eating most of your children, I suspect that most
humans would find the two positions equally hypocritical. Why do you think we
see them as different?
1Clippy13y
That belief is based on the reaction to this
[http://lesswrong.com/lw/y9/three_worlds_decide_58/] article, and the general
position most of you take, which you claim requires you to balance current
baby-eater adult interests against those of their children, such as in this
[http://lesswrong.com/lw/1s4/open_thread_february_2010_part_2/1mr2] comment and
this one [http://lesswrong.com/lw/1s4/open_thread_february_2010_part_2/1mph].
The consensus seems to be that humans are justified in exempting baby-eater
babies from baby-eater rules, just like the being in statement (2) requests be
done for itself. Has this consensus changed?
3Blueberry13y
I understand what you mean now.
Ok, so first of all, there's a difference between a moral position and a
preference. For instance, I may prefer to get food for free by stealing it, but
hold the moral position that I shouldn't do that. In your example (1), no one
wants the punishments used against them, but we want them to exist overall
because they make society better (from the point of view of human values).
In example (2), (most) humans don't want the Babyeaters to eat any babies: it
goes against our values. This applies equally to the child and adult Babyeaters.
We don't want the kids to be eaten, and we don't want the adults to eat. We
don't want to balance any of these interests, because they go against our
values. Just like you wouldn't balance out the interests of people who want to
destroy metal or make staples instead of paperclips.
So my reaction to position (1) is "Well, of course you don't want the
punishments. That's the point. So cooperate, or you'll get punished. It's not
fair to exempt yourself from the rules." And my reaction to position (2) is "We
don't want any baby-eating, so we'll save you from being eaten, but we won't let
you eat any other babies. It's not fair to exempt yourself from the rules." This
seems consistent to me.
5Clippy13y
But I thought the human moral judgment that the baby-eaters should not eat
babies was based on how it inflicts disutility on the babies, not simply from a
broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps suitably intelligent being
like an adult), you would need some other compelling reason to oppose the being
being eaten, correct? So shouldn't the baby-eaters' universal desire to have a
custom of baby-eating put any baby-eater that wants to be exempt from
baby-eating entirely, in the same position as the being in (1) -- which is to
say, a being that prefers a system but prefers to "free ride" off the sacrifices
that the system requires of everyone?
4JStewart13y
Isn't your point of view precisely the one the SuperHappies are coming from?
Your critique of humanity seems to be the one they level when asking why, when
humans achieved the necessary level of biotechnology, they did not edit their
own minds. The SuperHappy solution was to, rather than inflict disutility by
punishing defection, instead change preferences so that the cooperative attitude
gives the highest utility payoff.
1Clippy13y
No, I'm criticizing humans for wanting to help enforce a relevantly-hypocritical
preference on the grounds of its superficial similarities to acts they normally
oppose. Good question though.
1jimrandomh13y
Adults, by choosing to live in a society that punishes non-cooperators,
implicitly accept a social contract that allows them to be punished similarly.
While they would prefer not to be punished, most societies don't offer
asymmetrical terms, or impose difficult requirements such as elections, on
people who want those asymmetrical terms.
Children, on the other hand, have not yet had the opportunity to choose the
society that gives them the best social contract terms, and wouldn't have
sufficient intelligence to do so anyways. So instead, we model them as though
they would accept any social contract that's at least as good as some threshold
(goodness determined retrospectively by adults imagining what they would have
preferred). Thus, adults are forced by society to give implied consent to being
punished if they are non-cooperative, but children don't give consent to be
eaten.
8Clippy13y
What if I could guess, with 100% accuracy, that the child will decide to
retroactively endorse the child-eating norm as an adult? To 99.99% accuracy?
1jimrandomh13y
It is not the adults' preference that matters, but the adults' best model of the
children's preferences. In this case there is an obvious reason for those
preferences to differ - namely, the adult knows that he won't be one of those
eaten.
In extrapolating a child's preferences, you can make it smarter and give it true
information about the consequences of its preferences, but you can't extrapolate
from a child whose fate is undecided to an adult that believes it won't be
eaten; that change alters its preferences.
1Clippy13y
Do you believe that all children's preferences must be given equal weight to
that of adults, or just the preferences that the child will retroactively
reverse on adulthood?
-1jimrandomh13y
I would use a process like coherent extrapolated volition to decide which
preferences to count - that is, a preference counts if it would still hold it
after being made smarter (by a process other than aging) and being given
sufficient time to reflect.
3Clippy13y
And why do you think that such reflection would make the babies reverse the
baby-eating policies?
0MartinB13y
Different topic spheres. One line sounds nicely abstract, while the other is
just iffy.
Also killing people is different from betraying them. (Nice read: the real life
section of tvtropes/moraleventhorizon)
0ocr-fork13y
With 1), you're the non-cooperator and the punisher is society in general. With
2), you play both roles at different times.
0JoshuaZ13y
One possible answer: Humans are selfish hypocrites. We try to pretend to have
general moral rules because it is in our best interest to do so. We've even
evolved to convince ourselves that we actually care about morality and not
self-interest. That's likely occurred because it is easier to make a claim one
believes in than lie outright, so humans that are convinced that they really
care about morality will do a better job acting like they do.
(This was listed by someone as one of the absolute deniables on the thread a
while back about weird things an AI might tell people).
0mattnewport13y
Sounds like Robin Hanson's Homo Hypocritus
[http://www.overcomingbias.com/2010/04/homo-hypocritus-signals.html] theory.
Summary: Even if you agree that trees normally make vibrations when they fall, you're still left with the problem of how you know if they make vibrations when there is no observational way to check. But this problem can be resolved by looking at the complexity of the hypothesis that no vibrations happen. Such a hypothesis is predicated on properties specific to the human mind, and therefore is extremely lengthy to specify. Lacking the type and quantity of evidence necessary to locate this hypothesis, it can be effectively ruled out.
Body: A while ago, Eliezer Yudkowsky wrote an article about the "standard" debate over a famous philosophical dilemma: "If a tree falls in a forest and no one hears it, does it make a sound?" (Call this "Question Y.") Yudkowsky wrote as if the usual interpretation was that the dilemma is in the equivocation between "sound as vibration" and "sound as auditory ... (read more)
I think that if this post is left as it is, it would be too trivial for a
top-level post. You could reframe it as a beginners' guide to Occam, or you
could make it more interesting by going deeper into some of the issues (if you
can think of anything more to say on the topic of differentiating between
hypotheses that make the same predictions, that might be interesting, although I
think you might have said all there is to say)
5AdeleneDawner13y
It could also be framed as an issue of making your beliefs pay rent, similar to
the dragon in the garage example - or perhaps as an example of how reality is
entangled with itself to such a degree that some questions that seem to carve
reality at the joints don't really do so.
(If falling trees don't make vibrations when there's no human-entangled sensor,
how do you differentiate a human-entangled sensor from a non-human-entangled
sensor? If falling-tree vibrations leave subtle patterns in the surrounding leaf
litter that sufficiently-sensitive human-entangled sensors can detect, does leaf
litter then count as a human-entangled sensor? How about if certain plants or
animals have observably evolved to handle falling-tree vibrations in a certain
way, and we can detect that? Then such plants or animals (or their absence, if
we're able to form a strong enough theory of evolution to notice the absence of
such reactions where we would expect them) could count as human-entangled
sensors well before humans even existed. In that case, is there anything that
isn't a human-entangled sensor?)
4SilasBarta13y
Good points in the parenthetical -- if I make it into a top-level article, I'll
be sure to include a more thorough discussion of what concept is being carved
with the hypothesis that there are no tree vibrations.
0RobinZ13y
There's also the option of extending the post to actually address the
problem it alludes to in the title, the so-called "hard problem of consciousness
[http://en.wikipedia.org/wiki/Hard_problem_of_consciousness]".
2SilasBarta13y
Eh, it was just supposed to be an allusion to that problem, with the implication
that the "easy problem of tree vibrations" is the one EY attacked (Question Y in
the draft). Solving the hard problem of consciousness is a bit of a tall order
for this article...
3AdeleneDawner13y
I believe this [http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/22gz] is
the conversation you're responding to.
(upvoted)
0SilasBarta13y
Oh, bless you[1]! That's the one! :-)
Thanks for the upvote. What I'm wondering is if it's non-obvious or helpful
enough to go top-level. There's still a few paragraphs to add. I also wasn't
sure if the subject matter is interesting.
[1] Blessing given in the secular sense.
0JoshuaZ13y
This seems worthy of a top-level post. When you make it one, link to the
relevant prior posts about complexity of hypotheses.
-2mwaser13y
And yet, the quantum mechanical world behaves exactly this way. Observations DO
change exactly what happens. So, apparently at the quantum mechanical level,
nature does have some way of knowing.
I'm not sure what effect this has upon your argument, but it's something that I
think you're missing.
3SilasBarta13y
I'm familiar with this: entanglement between the environment and the quantum
system affects the outcome, but nature doesn't have a special law that
distinguishes human entanglement from non-human entanglement (as far as we know,
given Occam's Razor, etc.), which the alternate hypothesis would require.
The error that early quantum scientists made was in failing to recognize that it
was the entanglement with their measuring devices that affected the outcome, not
their immaterial "conscious knowledge". As EY wrote somewhere, they asked,
"The outcome changes when I know something about system -- what difference
should that make?"
when they should have asked,
"The outcome changes when I establish more mutual information with the system --
what difference should that make?"
In any case, detection of vibration does not require sensitivity to
quantum-specific effects.
2JoshuaZ13y
Not really. This is only the case for certain interpretations of what is going
on such as in certain forms of the Copenhagen interpretation. Even then,
observation in this context doesn't really mean observe in the colloquial sense
but something closer to interact with another particle in a certain class of
conditions. You seem to be conflating this with the idea that consciousness
causes collapse. Not many physicists take that idea at all seriously. In most
versions of the Many-Worlds interpretation, one doesn't need
to say anything about observations triggering anything (or at least can talk
about everything without talking about observations).
Disclaimer: My knowledge of QM is very poor. If someone here who knows more
spots anything wrong above please correct me.
-3MugaSofer10y
Me too! It was actually explained that way to me by my parents as a kid, in
fact. I wonder if there are two subtly different versions floating around or EY
just interpreted it uncharitably.
Seconding kodos96
[http://lesswrong.com/lw/2bi/open_thread_june_2010_part_2/24w6]. As this would
exonerate not only Knox and Sollecito but Guede as well, it has to be treated
with considerable skepticism, to say the least.
More significant, it seems to me (though still rather weak evidence), is the
Alessi testimony
[http://www.cbsnews.com/stories/2010/03/10/world/main6284773.shtml], about which
I actually considered posting on the March open thread.
Still, the Aviello story is enough of a surprise to marginally lower my
probability of Guede's guilt. My current probabilities of guilt are:
Knox: < 0.1 % (i.e. not a chance)
Sollecito: < 0.1 % (likewise)
Guede: 95-99% (perhaps just low enough to insist on a debunking of the Aviello
testimony before convicting)
It's probably about time I officially announced that my revision
[http://lesswrong.com/lw/1je/previous_post_revised/] of my initial estimates for
Knox and Sollecito was a mistake, an example of the sin of underconfidence
[http://lesswrong.com/lw/c3/the_sin_of_underconfidence/].
I of course remain willing to participate in a debate with Rolf Nelson on this
subject
[http://lesswrong.com/lw/1j7/the_amanda_knox_test_how_an_hour_on_the_internet/1n7g].
Finally, I'd like to note that the last couple of months have seen the creation
of a wonderful new site devoted to the case, Injustice in Perugia
[http://www.injusticeinperugia.org/], which anyone interested should definitely
check out. Had it been around in December, I doubt that I could have made my
survey [http://lesswrong.com/lw/1ir/you_be_the_jury_survey_on_a_current_event/]
seem like a fair fight between the two sides.
1kodos9613y
I hadn't heard about this - I just read your link though, and maybe I'm missing
something, but I don't see how it lowers the probability of Guede's guilt. He
(supposedly) confessed to having been at the crime scene, and claimed that Knox
and Sollecito weren't there. How does that, if true, exonerate Guede?
3komponisto13y
You omitted a crucial paragraph break. :-)
The Aviello testimony would exonerate Guede (and hence is unlikely to be true);
the Alessi testimony is essentially consistent with everything else we know, and
isn't particularly surprising at all.
I've edited the comment to clarify.
0kodos9613y
Ahhhh... ok I see where the misunderstanding was now.
1RobinZ13y
That story would be consistent with Guédé's, modulo the usual eyewitness
confusion.
5kodos9613y
And modulo all the forensic evidence.
Obviously this is breaking news and it's too soon to draw a conclusion, but at
first blush this sounds like just another attention seeker, like those who
always pop up in these high profile cases. If he really can produce a knife, and
it matches the wounds, then maybe I'll reconsider, but at the moment my BS
detector is pegged.
Of course, it's still orders of magnitude more likely than Knox and Sollecito
being guilty.
0RobinZ13y
I wasn't following the case even when komponisto posted his analyses, so I
really can't say.
How many lottery tickets would you buy if the expected payoff was positive?
This is not a completely hypothetical question. For example, in the Euromillions weekly lottery, the jackpot accumulates from one week to the next until someone wins it. It is therefore in theory possible for the expected total payout to exceed the cost of tickets sold that week. Each ticket has a 1 in 76,275,360 (i.e. C(50,5)*C(9,2)) probability of winning the jackpot; multiple winners share the prize.
So, suppose someone draws your attention (since of course you don't bother following these things) to the number of weeks the jackpot has rolled over, and you do all the relevant calculations, and conclude that this week, the expected win from a €1 bet is €1.05. For simplicity, assume that the jackpot is the only prize. You are also smart enough to choose a set of numbers that look too non-random for any ordinary buyer of lottery tickets to choose them, so as to maximise your chance of having the jackpot all to yourself.
Do you buy any tickets, and if so how many?
If you judge that your utility for money is sublinear enough to make your expected gain in utilons negative, how large would the jackpot have to be at those odds before you bet?
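The odds and the break-even jackpot in the question above can be checked directly (a quick sketch; the €1.05 figure and the jackpot-as-only-prize simplification are taken from the question):

```python
import math

# Euromillions jackpot odds: choose 5 of 50 numbers and 2 of 9 stars.
odds = math.comb(50, 5) * math.comb(9, 2)   # 76,275,360

# With the jackpot as the only prize (and no sharing), a €1 ticket's
# expected win is jackpot/odds, so expected value turns positive once
# the jackpot exceeds €76,275,360.
def expected_win(jackpot):
    return jackpot / odds

# The scenario in the question (expected win of €1.05 per €1) thus
# corresponds to a jackpot of about €80 million:
jackpot_needed = 1.05 * odds
```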
The traditional answer is to follow the Kelly criterion
[http://en.wikipedia.org/wiki/Kelly_criterion], is it not? That would imply a
bet fraction f* = (bp - q)/b, where p is the probability of winning, q = 1 - p,
and b is the net odds. This implies you should buy n tickets such that (€1)*n
= W*f*, where W is your initial wealth.
Edit: Thanks, JoshuaZ, for pointing out that the Kelly criterion might not be
the applicable one in a given situation.
2Mass_Driver13y
OK, I have a question! Suppose I hold a risky asset that costs me c at time t,
and whose value at time t is predicted to be k (1 + r), with standard deviation
s. How can I calculate the length of time that I will have to hold the asset in
order to rationally expect the asset to be worth, say, 2c with probability p*?
I am not doing a finance class or anything; I am genuinely curious.
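The question as stated doesn't pin down a model, but under the common assumption that the asset follows geometric Brownian motion with (hypothetical) annual drift mu and volatility sigma, the holding time for a given doubling probability can be solved numerically:

```python
import math

def norm_cdf(x):
    # Standard normal CDF built from the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_double(t, mu, sigma):
    # P(S_t >= 2*S_0) when log(S_t/S_0) ~ Normal((mu - sigma^2/2)*t, sigma^2*t).
    if t <= 0:
        return 0.0
    drift = (mu - 0.5 * sigma ** 2) * t
    return 1.0 - norm_cdf((math.log(2.0) - drift) / (sigma * math.sqrt(t)))

def holding_time(mu, sigma, p_star, t_hi=1000.0):
    # Smallest t with prob_double(t) >= p_star, found by bisection.
    # Assumes mu > sigma^2/2, so the probability climbs toward 1 with t.
    lo, hi = 0.0, t_hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if prob_double(mid, mu, sigma) >= p_star:
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical numbers: 10% drift, 20% volatility, 50% chance of doubling.
t_needed = holding_time(0.10, 0.20, 0.5)
```

With those illustrative parameters the median-doubling time is ln(2)/(mu - sigma^2/2), a bit under nine years; other values of p_star just shift the normal quantile.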
0RobinZ13y
So am I - I'm only aware of the Kelly Criterion thanks to roland thinking I was
alluding to it
[http://lesswrong.com/lw/2ax/open_thread_june_2010/236z?context=1#236z]. I
haven't worked through that calculation.
0Richard_Kennaway13y
I knew about Kelly, but not well enough for the problem to bring it to mind.
I make the Kelly fraction (bp-q)/b work out to about epsilon/N, where
epsilon = 0.05 and N = 76,275,360. So the optimal bet is 1 part in 1.5 billion
of my wealth, which is approximately nothing.
The moral: buying lottery tickets is still a bad idea even when it's marginally
profitable.
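The arithmetic above can be reproduced directly (a sketch using the figures already given: odds of 1 in 76,275,360 and an expected return of €1.05 per €1):

```python
# Kelly fraction f* = (bp - q)/b for the lottery described above.
N = 76_275_360        # jackpot odds
p = 1.0 / N           # probability a single ticket wins
q = 1.0 - p
b = 1.05 * N - 1      # net odds implied by an expected value of €1.05 per €1
f = (b * p - q) / b   # Kelly fraction of wealth to stake

# f comes out around 6e-10 -- a vanishingly small fraction of any
# realistic wealth, consistent with "approximately nothing" above.
```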
6JoshuaZ13y
Yes, and note that Kelly gets much less optimal when you increase bet sizes
than when you decrease bet sizes. So from a Kelly perspective, rounding up to a
single ticket is probably a bad idea. Your point about sublinearity of utility
for money makes it in general an even worse idea. However, I'm not sure that
Kelly is the right approach here. In particular, Kelly is the correct attitude
when you have a large number of opportunities to bet (indeed, it is the limiting
case). However, lotteries which have a positive expected outcome are very
rare, so you never approach anywhere near the limiting case. Remember, Kelly
optimizes long-term growth.
0Richard_Kennaway13y
That raises the question of what the rational thing to do is, when faced with a
strictly one-time chance to buy a very small probability of a very large reward.
0RobinZ13y
Well, no - you shouldn't buy one ticket. And according to my calculations when I
tried plotting W versus n by my formula, the minimum of W is at "buy all the
tickets", so unless you have €76,275,360 already...
I just realised that infinite processing power creates a weird moral dilemma:
Suppose you take this machine and put in a program which simulates every
possible program it could ever run. Of course it only takes a second to run the
whole program. In that second, you created every possible world that could ever
exist, every possible version of yourself. This includes versions that are being
tortured, abused, and put through horrible unethical situations. You have
created an infinite number of holocausts and genocides and things much, much
worse than anything you could ever imagine. Most people would consider a program
like this unethical to run. But what if the computer wasn't really a computer,
but an infinitely large database that contained every possible input and a
corresponding output? When you put the program in, it just finds the right
output and gives it to you, which is essentially a copy of the database itself.
Since there isn't actually any computational process here, nothing unethical is
being simulated. It's no more evil than a book in the library about
genocide. And this does apply to the real world. It's essentially the Chinese
room problem: does a simulated brain "understand" anything? Does it have
"rights"? Does how the information was processed make a difference? I would like
to know what people at LW think about this.
6Nick_Tarleton13y
See this post on giant look-up tables
[http://lesswrong.com/lw/pa/gazp_vs_glut/], and also "Utilitarian" (Alan Dawrst)
on the ethics of creating infinite universes
[http://www.utilitarian-essays.com/lab-universes.html].
1toto13y
I have problems with the "Giant look-up table" post.
If the GLUT is indeed behaving like a human, then it will need some sort of
memory of previous inputs. A human's behaviour is dependent not just on the
present state of the environment, but also on previous states. I don't see how
you can successfully emulate a human without that. So the GLUT's entries would
be in the form of products of input states over all previous time instants. To
each of these possible combinations, the GLUT would assign a given action.
Note that "creation of beliefs" (including about beliefs) is just a special case
of memory. It's all about input/state at time t1 influencing (restricting) the
set of entries in the table that can be looked up at time t2>t1. If a GLUT
doesn't have this ability, it can't emulate a human. If it does, then it can
meet all the requirements spelt out by Eliezer in the above passage.
So I don't see how the non-consciousness of the GLUT is established by this
argument.
But the difficulty is precisely to explain why the GLUT would be different from
just about any possible human-created AI in this respect. Keeping in mind the
above, of course.
0Houshalter13y
Memory is input too. The "GLUT" is just fed everything it has seen so far back
in as input, along with the current state of its external environment. A copy
is made and added to the rest of the memory, and the next cycle it's fed in
again with the next new state.
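The feedback loop described here can be made concrete with a toy table (the entries are purely illustrative):

```python
# A toy GLUT whose keys are (history, observation) pairs: "memory" is
# nothing more than previous inputs fed back in as part of the key.
glut = {
    ((), "hello"): "hi",
    (("hello",), "hello"): "you already said that",
}

def step(history, observation):
    # Look up the action for the full history plus the current input,
    # then append the input so the next cycle sees the updated history.
    action = glut[(history, observation)]
    return action, history + (observation,)

history = ()
a1, history = step(history, "hello")
a2, history = step(history, "hello")
```

A real human-emulating table would need an entry for every possible history, which is exactly why its size explodes combinatorially.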
This is basically just the Chinese room argument. There is a room in China.
Someone slips a few symbols underneath the door every so often. The symbols are
given to a computer with artificial intelligence which then makes an appropriate
response and slips it back through the door. Does the computer actually
understand Chinese? Well what if a human did exactly the same process the
computer did, manually? However, the operator only speaks English. No matter how
long he does it, he will never truly understand Chinese - even if he memorizes
the entire process and does it in his head. So how could the computer
"understand"?
8JoshuaZ13y
That's well done although two of the central premises are likely incorrect.
First, the notion that a quantum computer would have infinite processing
capability is incorrect. Quantum computation allows speed-ups of certain
computational processes. Thus for example, Shor's algorithm
[http://en.wikipedia.org/wiki/Shor%27s_algorithm] allows us to factor integers
quickly. But if our understanding of the laws of quantum mechanics is at all
correct, this can't lead to anything like that in the story. In particular,
under the standard model of quantum computing, BQP, the class of problems
reliably solvable on a quantum computer in polynomial time (that is, the time
required to solve is bounded above by a polynomial function of the length of
the input), is a subset of PSPACE, the set of problems which can be solved on a
classical computer using memory bounded by a polynomial of the length of the
input. Our understanding of quantum mechanics would have to be very
far off for this to be wrong.
Second, if our understanding of quantum mechanics is correct, there's a
fundamentally random aspect to the laws of physics. Thus, we can't simply make a
simulation and advance it ahead the way they do in this story and expect to get
the same result.
Even if everything in the story was correct, I'm not at all convinced that
things would settle down on a stable sequence as they do here. If your universe
is infinite then your possible number of worlds are infinite so there's no
reason you couldn't have a wandering sequence of worlds. Edit: Or for that
matter, couldn't have branches if people simulate additional worlds with other
laws of physics or the same laws but different starting conditions.
4ocr-fork13y
It isn't. They can simulate a world where quantum computers have infinite power
because they live in a world where quantum computers have infinite power
because...
4JoshuaZ13y
Ok, but in that case, that world in question almost certainly can't be our
world. We'd have to have deep misunderstandings about the rules for this
universe. Such a universe might be self-consistent but it isn't our universe.
4ocr-fork13y
Of course. It's fiction.
3JoshuaZ13y
What I mean is that this isn't a type of fiction that could plausibly occur in
our universe. In contrast, for example, there's nothing in the central premises
of, say, Blindsight that, as far as we know, would prevent the story from
taking place.
The central premise here is one that doesn't work in our universe.
2Blueberry13y
Well, it does suggest they've made recent discoveries that changed the way they
understood the laws of physics, which could happen in our world.
3jimrandomh13y
The likely impossibility of getting infinite computational power is a problem,
but quantum nondeterminism and quantum branching don't prevent using the trick
described in the story, they just make it more difficult. You don't have to
identify one unique universe that you're in, just a set of universes that
includes it. Given an infinitely fast, infinite storage computer, and source
code to the universe which follows quantum branching rules, you can get root
powers by the following procedure:
Write a function to detect a particular arrangement of atoms with very high
information content - enough that it probably doesn't appear by accident
anywhere in the universe. A few terabytes encoded as iron atoms present or
absent at spots on a substrate, for example. Construct that same arrangement of
atoms in the physical world. Then run a program that implements the regular laws
of physics, except that wherever it detects that exact arrangement of atoms, it
deletes them and puts a magical item, written into the modified laws of physics,
in their place.
The only caveat to this method (other than requiring an impossible computer) is
that it also modifies other worlds, and other places within the same world, in
the same way. If the magical item created is programmable (as it should be),
then every possible program will be run on it somewhere, including programs that
destroy everything in range, so there will need to be some range limit.
3Houshalter13y
Couldn't they just run the simulation to its end rather than just let it sit
there and take the chance that it could accidentally be destroyed? If it's
infinitely powerful, it would be able to do that.
2ocr-fork13y
Then they miss their chance to control reality. They could make a shield out of
black cubes.
0Baughn13y
They could program in an indestructible control console, with appropriate
safeguards, then run the program to its conclusion. Much safer.
That's probably weeks of work, though, and they've only had one day so far. Hum,
I do hope they have a good UPS.
0Houshalter13y
Why would they make a shield out of black cubes of all things? But yeah, I do
see your point. Then again, once you have an infinitely powerful computer, you
can do anything. Plus, even if they ran the simulation to its end, they could
always restart the simulation and advance it to the present time again, hence
regaining the ability to control reality.
2ocr-fork13y
Then it would be someone else's reality, not theirs. They can't be inside two
simulations at once.
1cousin_it13y
But what if two groups had built such computers independently? The story is
making less and less sense to me.
2ocr-fork13y
Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level
557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion.
Level 557 will seem frozen in relation to 558 because they are busy running 558
to its conclusion. Level 557 will stay frozen until 558 dies.
558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558
is not at the same point in time as 559, so 558 won't mirror the new 559's
actions. For example, they might be too lazy to make another cube. New 559
diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran
them to their conclusion, but new 559 might decide to do something different to
new 560. 560 also diverges. Keep in mind that every level can see and control
every lower level, not just the next one. Also, 557 and everything above is
still frozen.
So that's why restarting the simulation shouldn't work.
Then instead of a stack, you have a binary tree.
Your level runs two simulations, A and B. A-World contains its own copies of A
and B, as does B-World. You create a cube in A-World and a cube appears in your
world. Now you know you are an A-World. You can use similar techniques to
discover that you are an A-World inside a B-World inside another B-World.... The
worlds start to diverge as soon as they build up their identities. Unless you
can convince all of them to stop differentiating themselves and cooperate,
everybody will probably end up killing each other.
You can avoid this by always doing the same thing to A and B. Then everything
behaves like an ordinary stack.
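The stack argument above can be sketched as a toy program (my own model, not from the story: each "world" is just a deterministic hash chain, and a "cube" is a perturbation injected from the level above):

```python
# Toy model of a stack of deterministic simulations. Identical rules plus
# identical initial conditions keep every level in lockstep; a single
# intervention makes one level diverge permanently.
import hashlib

def step(state, intervention=None):
    """Advance a world's state one tick, deterministically."""
    return hashlib.sha256((state + (intervention or "")).encode()).hexdigest()

def run_stack(depth, ticks, interventions):
    """interventions maps (level, tick) -> a label applied at that point."""
    states = ["big-bang"] * depth          # same starting conditions everywhere
    history = []
    for t in range(ticks):
        states = [step(s, interventions.get((lvl, t)))
                  for lvl, s in enumerate(states)]
        history.append(list(states))
    return history

# With no interventions, every level stays identical forever, so each
# being has an exact counterpart in every other level.
quiet = run_stack(depth=4, ticks=5, interventions={})
assert all(len(set(states)) == 1 for states in quiet)

# A single "cube" dropped into level 1 at tick 2 makes it diverge from
# the other levels, and the divergence never heals.
noisy = run_stack(depth=4, ticks=5, interventions={(1, 2): "cube"})
assert noisy[2][1] != noisy[2][0] and noisy[4][1] != noisy[4][0]
```

This is only the stack case; the binary-tree case would give each world two child simulations, with divergence spreading the same way once any world treats A and B differently.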
1cousin_it13y
Yeah, but would a binary tree of simulated worlds "converge" as we go deeper and
deeper? In fact it's not even obvious to me that a stack of worlds would
"converge": it could hit an attractor with period N where N>1, or do something
even more funky. And now, a binary tree? Who knows what it'll do?
1ocr-fork13y
I'm convinced it would never converge, and even if it did I would expect it to
converge on something more interesting and elegant, like a cellular automaton. I
have no idea what a binary tree system would do unless none of the worlds break
the symmetry between A and B. In that case it would behave like a stack, and the
story assumes stacks can converge.
1Blueberry13y
They could just turn it off. If they turned off the simulation, the only layer
to exist would be the topmost layer. Since everyone has identical copies in each
layer, they wouldn't notice any change if they turned it off.
0Nisan13y
We can't be sure that there is a top layer. Maybe there are infinitely many
simulations in both directions.
0Houshalter13y
But they would cease to exist. If they ran it to its end, then it's over; they
could just turn it off then. I mean, if you want to cease to exist, fine, but
otherwise there's no reason. Plus, the topmost layer is likely very, very
different from the layers underneath it. In the story, it says that the
differences eventually stabilized and created them, but who knows what it was
originally. In other words, there's no guarantee that you even exist outside the
simulation, so by turning it off you could be destroying the only version of
yourself that exists.
0JoshuaZ13y
That doesn't work. The layers are a little bit different. From the description in
the story, they just gradually move to a stable configuration. So each layer
will be a bit different. Moreover, even if every one of them but the top layer
were identical, the top layer has now had slightly different experiences than
the other layers, so turning it off will mean that different entities will
actually no longer be around.
1Blueberry13y
I'm not sure about that. The universe is described as deterministic in the
story, as you noted, and every layer starts from the Big Bang and proceeds
deterministically from there. So they should all be identical. As I understood
it, that business about gradually reaching a stable configuration was just a
hypothesis one of the characters had.
Even if there are minor differences, note that almost everything is the same in
all the universes. The quantum computer exists in all of them, for instance, as
does the lab and research program that created them. The simulation only started
a few days before the events in the story, so just a few days ago, there was
only one layer. So any changes in the characters from turning off the simulation
will be very minor. At worst, it would be like waking up and losing your memory
of the last few days.
1ocr-fork13y
Why do you think deterministic worlds can only spawn simulations of themselves?
0Blueberry13y
A deterministic world could certainly simulate a different deterministic world,
but only by changing the initial conditions (Big Bang) or transition rules (laws
of physics). In the story, they kept things exactly the same.
0ocr-fork13y
That doesn't say anything about the top layer.
1Blueberry13y
I don't understand what you mean. Until they turn the simulation on, their world
is the only layer. Once they turn it on, they make lots of copies of their
layer.
1ocr-fork13y
Until they turned it on, they thought it was the only layer.
2Blueberry13y
Ok, I think I see what you mean now. My understanding of the story is as
follows:
The story is about one particular stack of worlds which has the property that
each world contains an infinitely powerful computer simulating the next world in
the stack. All the worlds in the stack are deterministic and all the simulations
have the same starting conditions and rules of physics. Therefore, all the
worlds in the stack are identical (until someone interferes) and all beings in
any of the worlds have exact counterparts in all the other worlds.
Now, there may be other worlds "on top" of the stack that are different, and the
worlds may contain other simulations as well, but the story is just about this
infinite tower. Call the top world of this infinite tower World 0. Let World i+1
be the world that is simulated by World i in this tower.
Suppose that in each world, the simulation is turned on at Jan 1, 2020 in that
world's calendar. I think your point is that in 2019 in world 1 (which is
simulated at around Jan 2, 2020 in world 0) no one in world 1 realizes they're
in a simulation.
While this is true, it doesn't matter. It doesn't matter because the people in
world 1 in 2019 (their time) are exactly identical to the people in world 0 in
2019 (world 0 time). Until the window is created (say Jan 3, 2020), they're all
the same person. After the window is created, everyone is split into two: the
one in world 0, and all the others, who remain exactly identical until further
interference occurs. Interference that distinguishes the worlds needs to
propagate from World 0, since it's the only world that's different at the
beginning.
For instance, suppose that the programmers in World 0 send a note to World 1
reading: "Hi, we're world 0, you're world 1." World 1 will be able to verify
this since none of the other worlds will receive this note. World 1 is now
different than the others as well and may continue propagating changes in this
way.
Now suppose that on Jan 3, 2020, the p
0khafra13y
I interpreted the story Blueberry's way; the inverse of the way many histories
converge into a single future in Permutation City, one history diverges into
many futures.
4ocr-fork13y
I'm really confused now. Also I haven't read Permutation City...
Just because one deterministic world will always end up simulating another does
not mean there is only one possible world that would end up simulating that
world.
0red7513y
I can't see any point in turning it off. Run it to the end and you will live;
turn it off and the "current you" will cease to exist. What can justify turning
it off?
EDIT: I got it. The only choice that will be effective is the top-level one. It
seems that it will be a constant source of divergence.
0Blueberry13y
If current you is identical with top-layer you, you won't cease to exist by
turning it off, you'll just "become" top-layer you.
0NancyLebovitz13y
It's surprising that they aren't also experimenting with alternate universes,
but that would be a different (and probably much longer) story.
0JoshuaZ13y
That's a good point. Everyone but the top layer will be identical and the top
layer will then only diverge by a few seconds.
Question: what's your experience with stuff that seems New Agey at first look, like yoga, meditation and so on? Anything worth trying?
Case in point: I read in Feynman's book about deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany in my case). I will try and hopefully enjoy that soon. Sadly those places are run by New Age folks that offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensory empty space.
Chinese internal martial arts: Tai Chi, Xingyi, and Bagua. The word "chi" does
not carve reality at the joints: There is no literal bodily fluid system
parallel to blood and lymph. But I can make training partners lightheaded with a
quick succession of strikes to Ren Ying (ST9) then Chi Ze (LU5); I can send
someone stumbling backward with some fairly light pushes; after 30-60 seconds of
sparring to develop a rapport I can take an unwary opponent's balance without
physical contact.
Each of these skills fit more naturally under different categories, but if you
want to learn them all the most efficient way is to study a Chinese internal
martial art or something similar.
5Blueberry13y
This sounds magical at first reading, but is actually not that tricky. It's just
psychology and balance. If you set up a pattern of predictable attacks, then
feint in the right direction while your opponent is jumping at you off-balance,
you can surprise him enough to make him fall as he attempts to ward off your
feint.
3Richard_Kennaway13y
I used to go to a Tai Chi class (I stopped only because I decided I'd taken it
as far as I was going to), and the instructor, who never talked about "chi" as
anything more than a metaphor or a useful visualisation, said this about the
internal arts:
In the old days (that would be pre-revolutionary China) you wouldn't practice
just Tai Chi, or begin with Tai Chi. Tai Chi was the equivalent of postgraduate
study in the martial arts. You would start out by learning two or three "hard",
"external" styles. Then, having reached black belt in those, and having
developed your power, speed, strength, and fighting spirit, you would study the
internal arts, which would teach you the proper alignments and structures, the
meaning of the various movements and forms. In the class there were two students
who did Gojuryu karate, a 3rd dan and a 5th dan, and they both said that their
karate had improved no end since taking up Tai Chi.
Which is not to say that Tai Chi isn't useful on its own, it is, but there is
that wider context for getting the maximum use out of it.
0khafra13y
That meshes well with what I have learned--Bagua is also an advanced art, and my
teacher doesn't teach it to beginners. Of the three primary internal
arts, the one designed for new martial artists is Xingyi. It's too bad I'm too
pecuniarily challenged to attend the Singularity Summit, or we could do
rationalist pushing hands.
1Nisan13y
Interesting. It seems that learning this art (1) gives you a power and (2) makes
you vulnerable to it.
2khafra13y
There may be a correlation between studying martial arts and vulnerability to
techniques which can be modeled well by "chi." But I have tried the striking
sequences successfully on capoeiristas and catch wrestlers, and the light but
effective pushes on my non-martially-trained brother after showing him Wu-style
pushing hands for a minute or two.
2RobinZ13y
That suggests an experiment. Anyone see any flaws in the following?
1. Write up instructions for two techniques - one which would work and one
   which would not work, according to your theory - in sufficient detail for someone
physically adept but not instructed in Chinese internal martial arts (e.g. a
dancer) to learn. Label each with a random letter (e.g. I for the correct
one and K for the incorrect one).
2. Have one group learn each technique - have them videotape their actions and
send them corrections by text, so that they don't get cues about whether you
expect the methods to work.
3. Have another party ignorant of the technique perform tests to see how well
each group does.
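The third step could be analyzed with something as simple as a permutation test on the two groups' success rates. A minimal sketch (all the names and numbers here are made up for illustration; this is my addition, not part of RobinZ's proposal):

```python
# Hypothetical analysis for the blinded I-vs-K experiment: compare the
# success rates of the "should work" and "should not work" groups with
# a two-sided permutation test.
import random

def permutation_test(group_i, group_k, n_iter=10_000, seed=0):
    """Return a two-sided p-value for the difference in success rates."""
    rng = random.Random(seed)
    observed = sum(group_i) / len(group_i) - sum(group_k) / len(group_k)
    pooled = group_i + group_k
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_i = pooled[:len(group_i)]
        perm_k = pooled[len(group_i):]
        diff = sum(perm_i) / len(perm_i) - sum(perm_k) / len(perm_k)
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_iter

# 1 = technique attempt succeeded, 0 = failed (fabricated example data)
group_i = [1, 1, 0, 1, 1, 1, 0, 1]   # learned the "real" technique
group_k = [0, 1, 0, 0, 1, 0, 0, 0]   # learned the control technique
p = permutation_test(group_i, group_k)
assert 0.0 <= p <= 1.0
```

With groups this small the test has little power, which is one more practical reason the experiment would need a fair number of participants.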
1khafra13y
I like the idea of scientifically testing internal arts; and your idea is
certainly more rigorous than TV series attempting to approach martial arts
"scientifically" like Mind, Body, and Kickass Moves. Unfortunately, the only one
of those I can think of which is both (1) explainable in words and pictures to a
precise enough degree that "chi"-type theories could constrain expectations, and
(2) has an unambiguous result when done correctly which varies qualitatively
from an incorrect attempt is the knockout series of hits, which raises both
ethical and practical concerns.
I would classify the other two as tacit knowledge
[http://lesswrong.com/lw/2ax/open_thread_june_2010/23p8]--they require a little
bit of instruction on the counterintuitive parts; then a lot of practice which I
can't think of a good way to fake.
Note that I would be completely astonished if there weren't a perfectly normal
explanation [http://lesswrong.com/lw/2bi/open_thread_june_2010_part_2/24nl] for
any of these feats; but deriving methods for them from first principles of
biomechanics and cognitive science would take a lot longer than studying with a
good teacher who works with the "chi" model.
1Blueberry13y
The problem is that a positive result would only show that a specific sequence
of attacks worked well. It wouldn't show that "chi" or other unusual models were
required to explain it; there could be perfectly normal explanations for why a
series of attacks was effective.
1RobinZ13y
That's why I suggested writing down both techniques which should work according
to the model and techniques which should not work according to the model.
0NancyLebovitz13y
It's conceivable that imagining chi is the best (or at least a very good) way of
being able to do subtle attacks.
4Richard_Kennaway13y
The Five Tibetans are a set of physical exercises which rejuvenate the body to
youthful vigour and prolong life indefinitely. They are at least 2,500 years
old, and practiced by hidden masters of secret wisdom living in remote
monasteries in Tibet, where, in the earlier part of the 20th century, a retired
British army colonel sought out these monasteries, studied with the ancient
masters to great effect, and eventually brought the exercises to the West, where
they were first published in 1939.
Ok, you don't believe any of that, do you? Neither do I, except for the first
eight words and the last six. I've been doing these exercises since the
beginning of 2009, since being turned on to them by Steven Barnes' blog
[http://darkush.blogspot.com/] and they do seem to have made a dramatic
improvement in my general level of physical energy. Whether it's these exercises
specifically or just the discipline of doing a similar amount of exercise first
thing in the morning, every morning, I haven't taken the trouble to determine by
varying them.
More here [http://jr-books.com/the_eye_of_revelation.html] and here
[http://en.wikipedia.org/wiki/Five_Tibetan_Rites]. Nancy Lebovitz also mentioned
them [http://lesswrong.com/lw/26y/rationality_quotes_may_2010/1z51].
I also do yoga for flexibility (it works) and occasionally meditation (to little
detectable effect). I'd be interested to hear from anyone here who meditates and
gets more from it than I do.
1NancyLebovitz13y
My spreadsheet about effects of the Tibetans
[http://sebarnes.proboards.com/index.cgi?board=Information&action=print&thread=241]
1Mass_Driver13y
I've had great results from modest (2-3 hrs/wk) investments in hatha yoga, over
and above what I get from standard Greco-Roman "calisthenics."
Besides the flexibility, breathing, and posture benefits, I find that the idea
of 'chakras' is vaguely useful for focusing my conscious attention on
involuntary muscle systems. I would be extremely surprised if chakras "cleaved
reality at the joints" in any straightforward sense, but the idea of chakras
helps me pay attention to my digestion, heart rate, bladder, etc. by making
mentally uninteresting but nevertheless important bodily functions more
interesting.
1Jonathan_Graehl13y
I've done yoga every week for the last month or two. It's pleasant. Other than
paying attention to how I'm holding my body vs. the instruction, I mostly stop
thinking for an hour (as we're encouraged to do), which is nice.
I can't say I notice any significant lasting effects yet. I'm slightly more
flexible.
1gwern13y
Hard to say - even New Agey stuff evolves. (Not many followers of Reich pushing
their copper-lined closets these days.)
Generally, background stuff is enough. There's no shortage of hard scientific
evidence about yoga or meditation, for example. No need for heuristics there.
Similarly there's some for float tanks. In fact, I'm hard pressed to think of
any New Agey stuff where there isn't enough background to judge it on its own
merits.
0sketerpot13y
Meditation can be pretty darn relaxing. Especially if you happen to live within
walking distance of any pleasant yet sparsely-populated mountaintops. I would
recommend giving it a shot; don't worry about advanced techniques or anything,
and just close your eyes and focus on your breathing, and the wind (if any).
Very pleasant.
0Jack13y
Every time I try to meditate I fall asleep.
1sketerpot13y
There are loads of times I would like to be able to fall asleep, but can't. I
envy your power.
I guess this is another reason for people to give meditation a try.
0Theist13y
I find a meditation-like focus on my breathing and heartbeat to be a very
effective way to fall asleep when my thoughts are keeping me awake.
-4[anonymous]13y
Why would you want to do that, I mean, what are the supposed advantages? You
might want to look it up and see if there's anything about it on the internet.
[http://lmgtfy.com/?q=deprivation+tanks] Most alternative medicines are BS, but
not necessarily all.
GRRRR! I wish it would let me comment faster than every 8 minutes. Guess I'll
come back and post it.
1MartinB13y
To have the experience. I don't mean it as a treatment, but as something that would
be exciting, new, and worth trying just for the sake of it. edit/add: the deleted
comment above asked why I would bother to do something like floating
(This is a draft that I propose posting to the top level, with such improvements as will be offered, unless feedback suggests it is likely not to achieve its purposes. Also reply if you would be willing to co-facilitate: I'm willing to do so but backup would be nice.)
Do you want to become stronger in the way of Bayes? This post is intended for people whose understanding of Bayesian probability theory is currently between levels 0 and 1, and who are interested in developing deeper knowledge through deliberate practice.
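For a sense of the level the group would start at, here is the kind of warm-up exercise Bayes' theorem makes routine (the numbers are an invented example, not from Jaynes's text): a test with 99% sensitivity and 95% specificity applied to a condition with a 1% base rate.

```python
# Minimal Bayes'-rule worked example: probability of having the
# condition given a positive test result.
def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.01, sensitivity=0.99, specificity=0.95)
# Despite the accurate test, most positives are false positives:
# the posterior is only about 1/6.
assert 0.16 < p < 0.17
```

Chapters 1-2 of the book develop the machinery behind this (plausible reasoning, the product and sum rules) from first principles.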
This sounds great, I'm definitely in. I feel like I have a moderately okay
intuitive grasp on Bayescraft but a chance to work through it from the ground up
would be great.
0[anonymous]13y
In. Have the deadtree version, but I was stymied in my first crack at it.
0Jack13y
In. If needed I can cover a few of the early chapters.
0Oscar_Cunningham13y
I'm in. I already read the first few chapters, but it will be nice to go over
them to solidify that knowledge. The slower pace will help as well. The later
chapters rely on some knowledge of statistics; maybe some member of the book
club is knowledgeable enough to find good links to summaries of
these things when they come up?
0magfrump13y
I would be interested, what is the intended time period for the reading? I have
a two-week trip coming up when I will probably be busy but aside from that I
would very much like to participate.
2Morendil13y
The plan, I think, would be to start nice and slow, then adjust as we gain
confidence. We're likely to start with the first chapter, so you could get a head
start by reading that before we start for real, which is looking likely now as
we have quite a few more people than the last time this was brought up.
0Risto_Saarelma13y
I'm in, been intending to read through some maths on my free time.
0Alexandros13y
It's thesis writeup period for me, but this is extremely tempting.
0mattnewport13y
I'm interested. I already have the book but haven't progressed very far so this
seems like it's potentially a good motivator to finish it. The link to the PDF
seems to be missing btw.
0taiyo13y
I'm enthusiastically in.
0nhamann13y
I think that a book club is a great idea, and this is an excellent choice for a
book. I'm definitely interested.
0Morendil13y
Feedback sought: is this too short? Too long? Is the intent clear? What if
anything is missing?
2LauraABJ13y
Are you intending to do this online or meet in person? If you are actually
meeting, what city is this taking place in? Thanks.
0Morendil13y
Excellent question, thanks. I can only offer to help with the online version; I
live in France, where only a few LessWrongers reside.
And there's nothing to prevent the online group from having a F2F continuation.
I'll ask people to say where they are.
2Jack13y
A link to the Amazon Page
[http://www.amazon.com/Probability-Theory-Logic-Science-Vol/dp/0521592712/ref=cm_cr_pr_product_top]
if people want to read reviews and learn what the book is about.
0taiyo13y
The link to the pdf version [http://omega.albany.edu:8008/JaynesBook.html] seems
to be missing in the original post.
This one came up at the recent London meetup and I'm curious what everyone here thinks:
What would happen if CEV was applied to the Baby Eaters?
My thoughts are that if you applied it to all Babyeaters, including the living babies and the ones being digested, it would end up in a place that adult Babyeaters would not be happy with. If you expanded it to include all Babyeaters that ever existed, or that would ever exist, knowing the fate of 99% of them, the effect would be much more pronounced. So what I make of all this is that either CEV is not utility-function-neutral, or the Babyeater morality is objectively unstable when aggregated.
My intuitions of CEV are informed by the Rawlsian Veil of Ignorance, which
effectively asks: "What rules would you want to prevail if you didn't know in
advance who you would turn out to be?"
Where CEV as I understand it adds more information - assumes our preferences are
extrapolated as if we knew more, were more the kind of people we want to be -
the Veil of Ignorance removes information: it strips people under a set of
specific circumstances of the detailed information about what their preferences
are, how their contingent histories brought them there, and so on. This
includes things like what age you are, and even - conceivably - how many of you
there are.
To this bunch of undifferentiated people you'd put the question, "All in favor
of a 99% chance of dying horribly shortly after being born, in return for the 1%
chance to partake in the crowning glory of babyeating cultural tradition, please
raise your hands."
I expect that not dying horribly takes lexical precedence over any kind of
cultural tradition, for any sentient being whose kin has evolved to sentience
(it may not be that way for constructed minds). So I would expect the Babyeaters
to choose against cultural tradition.
The obvious caveat is that my intuitions about CEV may be wrong, but lacking a
formal explanation of CEV it's hard to check intuitions.
0red7513y
BEs aren't humans. They are Baby-Eating aliens
[http://lesswrong.com/lw/y5/the_babyeating_aliens_18/]
2Morendil13y
You're correct. I'm using the term "people" loosely. However, I wrote the
grand-parent while fully informed of what the Babyeaters are. Did you mean to
rebut something in particular in the above?
7red7513y
If we translate it to our cultural context, we will get something like "All in
favor of 100% of you dying horribly of old age, in return for good lives for your
babies, please raise your hands." They ARE aliens.
2Morendil13y
Well, we would say "no" to that, if we had the means to abolish old age. We'd
want to have our cake and eat it too.
The text stipulates that it is within the BE's technological means to abolish
the suffering of the babies, so I expect that they would choose to do so, behind
the Veil.
6JoshuaZ13y
Yes, but a surprisingly large number of humans seem to react in horror when you
talk about getting rid of aging.
-3red7513y
Who will ask them? The FAI has no idea that a) baby eating is bad, or b) it should
generalize moral values past BEs to all conscious beings.
Even if the FAI asks that question and it turns out that the majority of the
population doesn't want to do the inherently good thing (which it is, for them),
then the FAI must undergo controlled shutdown.
EDIT: To disambiguate. I am talking about FAI, which is implemented by BEs.
As we should not allow an FAI to generalize morals past conscious beings, just to
be sure that it will not take the CEV of all bacteria, so the BEs should not allow
their FAI to generalize past BEs.
As we should build an automatic off switch into our FAI, to stop it if its goals
are inherently wrong, so should the BEs.
1Alexandros13y
It doesn't seem from the story like the babies are gladly sacrificing for the
tribe...
-6red7513y
0thomblake13y
Correct. CEV is supposed to be a component of Friendliness, which is defined in
reference to human values.
-4red7513y
The CEV will be to maintain the existing order.
Why? There must be very strong arguments for BEs to stop doing the Right Thing.
And there's only one source of objections - the children. And their volitions
will be selfish and unaggregatable.
EDIT: What does utility-function-neutral mean?
EDIT: OK. OK. The CEV will be to make the BEs' morals change and allow them to
not eat children. So the FAI will undergo controlled shutdown. Objections,
please?
EDIT: Here's yet another argument.
Guidelines of FAI as of May 2004.
BEs will formulate this as "Defend BEs (except for the ceremony of BEing), the
future of BEkind, and BE's nature."
BEs never considered that child eating is bad. And it is good for them to kill
anyone who thinks otherwise. There's no trend in morals that can be encapsulated.
If they stop being BEs, they will mourn their wrongdoings to the death.
Every single notion that the FAI makes along the lines of "Let's suppose that
you are a non-BE" will cause it to be destroyed.
Help BEs every time, except for the ceremony of BEing.
How will this take the FAI to the point that every conscious being must live?
While searching for literature on "intuition", I came upon a book chapter that gives "the state of the art in moral psychology from a social-psychological perspective". This is the best summary I've seen of how morality actually works in human beings.
The authors give out the chapter for free by email request, but to avoid that trivial inconvenience, I've put up a mirror of it.
ETA: Here's the citation for future reference: Haidt, J., & Kesebir, S. (2010). Morality. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.), Handbook of Social ...
You're awesome.
I've previously been impressed by how social psychologists reason, especially
about identity. Schemata theory is also a decent language for talking about
cognitive algorithms from a less cognitive sciencey perspective. I look forward
to reading this chapter. Thanks for mirroring, I wouldn't have bothered
otherwise.
Many are calling BP evil and negligent, has there actually been any evidence of criminal activities on their part? My first guess is that we're dealing with hindsight bias. I am still casually looking into it, but I figured some others here may have already invested enough work into it to point me in the right direction.
Like any disaster of this scale, it may be possible to learn quite a bit from it, if we're willing.
[This comment is no longer endorsed by its author]
8Piglet13y
It depends on what you mean by "criminal"; under environmental law, there are
both negligence-based (negligent discharge of pollutants to navigable waters)
and strict liability (no intent requirement, such as killing of migratory birds)
crimes that could apply to this spill. I don't think anyone thinks BP intended
to have this kind of spill, so the interesting question from an environmental
criminal law perspective is whether BP did enough to be treated as acting
"knowingly" -- the relevant intent standard for environmental felonies. This is
an extremely slippery concept in the law, especially given the complexity of the
systems at issue here. Litigation will go on for many years on this exact point.
4[anonymous]13y
I've read somewhere that a BP internal safety check performed a few months ago
indicated "unusual" problems which, according to BP's own internal safety
guidelines, should have been resolved earlier, but somehow they made an exception
this time. It didn't seem like it would have been "illegal", and the report also
did not note how often such exceptions are made, by what reasoning, what kind of
problems they specifically encountered, what they did to keep the operation
running, et cetera...
Though I seldom read "ordinary" news, even of this kind, as my past experience
tells me that the factual content is rather low, and most high-quality press
prefers to show off in opinion and interpretation of an event rather than trying
to provide an accurate historical report, at least within such a short
time-frame. It could well be that this is different for this event.
Also, as with most engineering disciplines, really learning from such an event
beyond the obvious "there is a non-zero chance for everything to blow up"
usually requires more area-specific expertise than an ordinary outsider has.
2Unnamed13y
I've heard scattered bits of accusations of misdeeds by BP which may have
contributed to the spill. Here's a list
[http://motherjones.com/mojo/2010/06/bps-5-screw-ups-investigators-want-know]
from the congressional investigation of 5 decisions that BP made "for economic
reasons that increased the danger of a catastrophic well failure" according to a
letter from the congressmen. It sounds like BP took a bunch of risky shortcuts
to save time and money, although I'd want to hear from people who actually
understand the technical issues before being too confident.
There are other suspicions and allegations floating around, like this one
[http://motherjones.com/environment/2010/06/bp-deepwater-negative-pressure-test].
0[anonymous]13y
That's a good start, I appreciate it!
2Houshalter13y
I'm not sure it's relevant whether they did anything illegal or not. People
always seem to want to blame and punish someone for their problems. In my
opinion, they should be forced to pay for and compensate for all the damage, as
well as pay a very large fine as punishment. That way, in the future they, and
other companies, can regulate themselves and prepare for emergencies as
efficiently as possible, without arbitrary and clunky government regulations and
agencies trying to slap everything together at the last moment. Of course, if a
single person actually did something irresponsible (e.g., Bob the worker just
used duct tape to fix that pipe knowing that it wouldn't hold), then they should
be able to be tried in court or sued/fined by the company. But even then, it's
up to the company to make sure that stuff like this doesn't happen by making
sure all of their workers are competent and certified.
1billswift13y
You are not really going to learn much unless you are interested in wading
through lots of technical articles. If you want to learn, you need to wait until
it has been digested by relevant experts into books. I am not sure what you
think you can learn from this, but there are two good books of related
information available now:
Jeff Wheelwright, Degrees of Disaster, about the environmental effects of the
Exxon Valdez spill and the clean up.
Trevor Kletz, What Went Wrong?: Case Histories of Process Plant Disasters, which
is really excellent. [For general reading, an older edition is perfectly
adequate, new copies are expensive.] It has an incredible amount of detail, and
horrifying accounts of how apparently insignificant mistakes can (often
literally) blow up on you.
3Piglet13y
Also, Richard Feynman's remarks on the loss of the Space Shuttle Challenger are
a pretty accessible overview of the kinds of dynamics that contribute to major
industrial accidents. http://history.nasa.gov/rogersrep/v2appf.htm
[edit: corrected, thx.]
3JoshuaZ13y
Pretty sure you mean Challenger. Feynman was involved in the investigation of
the Challenger disaster. He was dead long before Columbia.
3NancyLebovitz13y
In a recent video, Taleb argues
[http://www.radioopensource.org/nassim-nicholas-taleb-the-fragility-crisis-is-just-begun/]
that people generally put too much focus on the specifics of a disaster, and too
little on what makes systems fragile.
He said that high debt means (among other things) too much focus on the short
run, and skimping on insurance and precautions.
I have been reading the “economic collapse” literature since I stumbled on Casey’s “Crisis Investing” in the early 1980s. They have really good arguments, and the collapses they predict never happen. In the late-90s, after reading “Crisis Investing for the Rest of the 1990s”, I sat down and tried to figure out why they were all so consistently wrong.
The conclusion I reached was that humans are fundamentally more flexible and more adaptable than the collapse-predictors' arguments allowed for, and society managed to work around all the regulations and other ... (read more)
Not sure if you're referring to the same literature, but I note a great
divergence between peak oil advocates [http://www.oildrum.com] and
singularitarians. This is a little weird, if you think of Aumann's Agreement
theorem.
Both groups are highly populated with engineer types, highly interested in
cognitive biases, group dynamics, habits of individuals and societies and
neither are mainstream.
Both groups use extrapolation of curves from very real phenomena. In the case of
the kurzweillian singularitarians, it is computing power and in the case of the
peak oil advocates, it is the Hubbert curve for resources, along with solid
net-energy-based arguments about how civilization should decline.
The extreme among the Peak Oil advocates are collapsitarians and believe that
people should drastically change their lifestyles, if they want to survive. They
are also not waiting for the others to join them and many are preparing to go to
small towns, villages etc. The Oil Drum, linked here, started as a moderate
peak oil site discussing all possibilities; nowadays, apparently, it's all doom
all the time.
The extreme among the singularitarians are asked to make no such sacrifice,
just to give enough money and support to make sure that Friendly AI is achieved
first.
Both groups believe that business as usual cannot go on for too long, but they
expect dramatically different consequences. The singularitarians assert that
economic conditions and technology will improve until a nonchalant
super-intelligence is created and wipes out humanity. The collapsitarians
believe that economic conditions will worsen, that civilization is not built
robustly, and that it will collapse badly, with humanity probably going extinct
or only the last hunter-gatherers surviving.
1NancyLebovitz13y
It should be possible to believe both -- unless you're expecting peak oil to
lead to social collapse fairly soon, Moore's law could make a singularity
possible while energy becomes more expensive.
1cupholder13y
Which could suggest a distressing pinch point: not wanting to delay AI too long
in case we run out of energy for it to use; not wanting to make an AI too soon
in case it's Unfriendly.
3ShardPhoenix13y
Could you give some examples of the predicted collapses that didn't happen?
6soreff13y
Y2K. I thought I had a solid lower bound for the size of that one: Small
businesses basically did nothing in preparation, and they still had a fair
amount of dependence on date-dependent programs, so I was expecting that the
impact on them would set a sizable lower bound on the size of the overall
impact. I've never been so glad to be wrong. I would still like to see a good
retrospective explaining how that sector of the economy wound up unaffected...
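For context, the core Y2K failure mode was date arithmetic on two-digit years; a minimal, purely hypothetical illustration of how such code silently breaks at the rollover:

```python
def years_elapsed(start_yy, end_yy):
    """Buggy pre-Y2K style: years stored as two digits, subtracted naively."""
    return end_yy - start_yy

# An account opened in '60, evaluated in '99 vs. '00:
print(years_elapsed(60, 99))  # 39 -- correct before the rollover
print(years_elapsed(60, 0))   # -60 -- nonsense once "2000" wraps to "00"
```

Anything downstream of that negative number (interest, expiry checks, sorting) then misbehaves, which is why the expected impact was so broad.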
6pjeby13y
The smaller the business, the less likely they are to have their own software
that's not simply a database or spreadsheet, managed in say, a Microsoft
product. The smaller the business, the less likely that anything automated is
relying on correct date calculations.
These at least would have been strong mitigating factors.
[Edit: also, even industry-specific programs would likely be fixed by the
manufacturer. For example, most of the real-estate software produced by the
company I worked for in the 80's and 90's was Y2K-ready since before 1985.]
1billswift13y
First, the "economic collapse" I referred to in the original post actually
comprised at least six different predictions at different times.
As another example, but not quite a "collapse" scenario, consider the
predictions of the likelihood of nuclear war; there were three distinct periods
where it was considered more or less likely by different groups. In the late
1940s, some intelligent and informed, but peripheral, observers like Robert
Heinlein considered it a significant risk. Next was the late 1950s through the
Cuban Missile Crisis in the early 1960s, when nearly everybody considered it a
major risk. Then there was another scare in the late 1970s to early 1980s,
driven primarily by leftists (including the media) who favored disarmament and
promulgated the fear to try to get the US to reduce its stockpiles, and by
conservatives (derided by the media as "survivalists" and nuts) who were afraid
they would succeed.
Almost invariably, everything is larger in your imagination than in real life, both good and bad: the consequences of mistakes loom worse, and the pleasure of gains looks better. Reality is humdrum compared to our imaginations. It is our imagined futures that get us off our butts to actually accomplish something.
And the fact that what we do accomplish is done in the humdrum, real world, means it can never measure up to our imagined accomplishments, hence regrets. Because we imagine that if we had done something else it could have measu... (read more)
I was talking to a friend yesterday and he mentioned a psychological study (I am
trying to track down the source) showing that people tend to suffer MORE from
failing to pursue certain opportunities than from FAILING after pursuing them.
So even if you're right about the overestimation of pleasure, it might just be
irrelevant.
4Unnamed13y
Here is a review of that psychological research (pdf
[http://www.psych.cornell.edu/sec/pubPeople/tdg1/Gilo_&_Medvec_95.pdf]), and
there are more studies linked here
[http://www.psych.cornell.edu/people/Faculty/tdg1.html] (the keyword to look for
is "regret"). The paper I linked is:
Gilovich, T., & Medvec, V. H. (1995). The experience of regret: What, when, and
why. Psychological Review, 102, 379-395.
2billswift13y
I haven't seen a study, but that is a common belief. A good quote to that
effect,
And I vaguely remember seeing another similar quote from Churchill.
0D_Alex13y
No doubt there is truth in this... however, examples spring to mind where
accomplishing something made me feel better than I ever expected. This
includes sport (ever win a race or score a goal in a high-stakes soccer game?),
work and personal life. The "reality is humdrum" perspective might, at least in
part, be caused by a disconnect between "imagination" and "action".
Also, "Invest in the process, not the outcome"
[http://www.google.com.au/search?hl=en&source=hp&q=invest+in+the+process+not+the+outcome&meta=&aq=0&aqi=g1&aql=&oq=invest+in+the+process%2C+not+the&gs_rfai=].
0DanArmak13y
Often it is our imagined bad futures that keep us too afraid to act. In my
experience this is more common than the opposite.
1[anonymous]13y
What do you mean by "the opposite"? I can think of at least two ways to invert
that sentence.
0DanArmak13y
I meant billswift's original idea: that we imagine good futures and that
motivates us to act.
0MartinB13y
Maybe you can set your success setpoint to a lower value. The optimum is hard to
achieve, so looking for 100% everywhere might be bad.
3Torben13y
One variable often invoked to explain happiness in Denmark (who regularly rank
#1 for happiness) is modest expectations
[http://www.bmj.com/cgi/content/full/333/7582/1289].
ETA: the above paper seems a bit tongue-in-cheek, but as I gather, the results
are solid. Full disclosure: I'm from Denmark.
0MartinB13y
Awesome coincidence. I am going to travel to Denmark next week for 10 days. Will
check it out myself!
Inspired by Chapter 24 of Methods of Rationality, but not a spoiler: If the evolution of human intelligence was driven by competition between humans, why aren't there a lot of intelligent species?
Five-second guess: Human-level Machiavellian intelligence needs language
facilities to co-evolve with; grunts and body language don't allow nearly as
convoluted schemes. Evolving some precursor form of human-style language is the
improbable part that other species haven't managed to pull off.
1taw13y
A somewhat accepted partial answer is that huge brains are ridiculously
expensive - you need a lot of high-energy-density food (= fire), a lot of DHA
(= fish), etc. The chimp diet simply couldn't support brains like ours (see also
the aquatic ape hypothesis etc.), nor could chimps spend as much time as us
engaging in politics, as they were too busy just getting food.
Perhaps chimp brains are as big as they could possibly be given their dietary
constraints.
1NancyLebovitz13y
That's conceivable, and might also explain why wolves, crows, elephants, and
other highly social animals aren't as smart as people.
Also, I think the original bit in Methods of Rationality overestimates how easy
it is for new ideas to spread. As came up recently here, even if tacit knowledge
can be explained, it usually isn't.
This means that if you figure out a better way to chip flint, you might not be
able to explain it in words, and even if you can, you might choose to keep it as
a family or tribal secret. Inventions could give their inventors an advantage
for quite a long time.
About CEV: Am I correct that Eliezer's main goal would be to find the one utility function for all humans? Or is it equally plausible to assume that some important values cannot be extrapolated coherently, and that a Seed-AI would therefore provide several results clustered around some groups of people?
[edit]Reading helps. This he has actually discussed, in sufficient detail, I think.[/edit]
I think the expectation is that, if all humans had the same knowledge and were
better at thinking (and were more the people we'd like to be, etc.), then there
would be a much higher degree of coherence than we might expect, but not
necessarily that everyone would ultimately have the same utility function.
0Vladimir_Nesov13y
There is only one world to build something from. "Several results" is never a
solution to the problem of what to actually do.
0[anonymous]13y
Please bear with my bad English; this did not come across as intended.
So: either all or nothing?
No possibility that the AI could detect that to maximize this hardcore utility
function we need to separate different groups of people, maybe/probably lying to
them about their separation, just providing the illusion of the unity of
humankind to each group? Or is that too obvious a thought, or too dumb because of x?
0Gavin13y
I think the idea is that CEV lets us "grow up more together" and figure that out
later.
I have only recently started looking into CEV, so I'm not sure whether I (a)
think it's a workable theory and (b) think it's a good solution, but I like the
way it puts off important questions.
It's impossible to predict what we will want if age, disease, violence, and
poverty become irrelevant (or at least optional).
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are believed by humans and are true. (of course there exist some that are true and not yet believed by humans.)
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
Usually fairly substantial - if someone presents me with two equally-unsupported
claims X and Y and tells me that they believe X and not Y, I would give greater
credence to X than to Y. Many times, however, that credence would not reach the
level of ... well, credence, for various good reasons.
4MartinB13y
Depends on the person and the idea. There are some people whose recommendations
I follow regardless, even if I estimate upfront that I will consider the idea
wrong. There are different levels of wrongness, and it does not hurt to get good
counterarguments. It also depends on the real-life practicability of the idea.
If it is for everyday things, then common sense is a good starting prior. (Also,
there is a time and place to use the ask-the-audience lifeline on Who Wants to
Be a Millionaire.) If a group of professionals agree on something related to
their profession, that is also a good start. To systematize: if a group of
people has a belief about something they have experience with, then that belief
is worth looking at.
And then, on further investigation, it often turns out that there are systematic
mistakes being made.
I was shocked to read in the book on checklists that not only do doctors often
dislike them, but so do financial companies, which can see how using them
increases their monetary gains. But finding flaws in a whole group does not
imply that everything they say is wrong. It is good to see a doctor, even if he
is not using statistics right. He can refer you to a specialist and treat all
the common stuff right away. If you get a complicated disease you can often read
up on it.
The obvious example for your question would be religion. It is widely believed,
but probably wrong; yet I did not discard it right away, but spent years
studying the stuff until I decided there was nothing to it. There is nothing
wrong in examining the ideas other people have.
4Torben13y
Agreed.
As the OP states, idea space is humongous. The fact alone that people comprehend
something sufficiently to say anything about it at all means that this something
is a) noteworthy enough to be picked up by our evolutionarily derived faculties
by even a bad rationalist b) expressible by same faculties c) not immediately,
obviously wrong
To sum up, the fact that someone claims something is weak evidence that it's
true, cf. Einstein's Arrogance
[http://lesswrong.com/lw/jo/einsteins_arrogance/]. If this someone is Einstein,
the evidence is not so weak.
Edit: just to clarify, I think this evidence is very weak, but it is evidence
for the proposition nonetheless. Depending on the metric, by far most
propositions must be "not even wrong", i.e. garbled, meaningless or absurd. The
ratio of "true" to {"wrong" + "not even wrong"} seems to ineluctably be larger
for propositions expressed by humans than for those not expressed, which is why
someone uttering the proposition counts as evidence for it. People simply never
claim that apples fall upwards, sideways, green, kjO30KJ&¤k etc.
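This "weak evidence" point can be made concrete with a toy Bayes calculation in odds form; all the numbers below are made up purely for illustration:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR, where
    LR = P(someone asserts X | X true) / P(someone asserts X | X false)."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    return odds / (1.0 + odds)

# Hypothetical numbers: a random proposition drawn from idea-space has
# prior odds of one in a million, and people are 100x more likely to
# assert true propositions than false or garbled ones.
posterior = update_odds(1e-6, 100.0)
print(odds_to_prob(posterior))  # still tiny (~1e-4): evidence, but weak
```

Even a likelihood ratio of 100 leaves the posterior small, which matches the claim: someone asserting X is genuine evidence for X, just not much.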
1MartinB13y
I forgot the major influence of my own prior knowledge. (Which I guess holds
true for everyone.) That makes the cases where I had a fixed opinion, and
managed to change it, all the more interesting. If you never dealt with an idea
before, you go where common sense or the experts lead you. But if you already
have good knowledge, then public opinion should do nothing to your view. Public
opinion, or even experts (especially when outside their field), often enough
state opinions without comprehending the idea. So it doesn't really mean too much.
Regarding Einstein, he made the statements before becoming super famous. I
understand it as a case of signaling 'look over here!' And he is not
particularly safe against errors. One of his last actions (which I have not
fact-checked sufficiently so far) was to write a foreword for a book debunking
the movement of the continental plates.
1Torben13y
I didn't intend to portray Einstein as bulletproof, but rather to highlight his
reasoning, plus to point to the idea of even locating the idea in idea space.
Obviously, creationism is wrong, but less wrong than a random string: it at
least manages to identify a problem and to use cause and effect.
0Alexandros13y
Thank you, this is what I was getting at.
2[anonymous]13y
If no people believe Y -- literally no people -- then either the topic is very
little examined by human beings, or it's very exhaustively examined and seems
obvious to everyone. In the first case, I give a smaller probability than in the
second case.
In the first case, only X believers exist because only X believers have yet
considered the issue. That's minimal evidence in favor of X. In the second case,
lots of people have heard of the issue; if there were a decent case against X,
somebody would have thought of it. The fact that none of them -- not a minority,
but none -- argued against X is strong evidence that X is true.
0RobinZ13y
Isn't it the other way around?
(Good analysis, by the way.)
1Jack13y
I don't think belief has a consistent evidentiary strength, since it depends on
the testifier's credibility relative to my own. Children have much lower
credibility than me on the issue of the existence of Santa. Professors of
physics have much higher credibility than me on the issue of dimensions greater
than four. Some person other than me has much higher credibility on the issue of
how much money they are carrying. But I have more credibility than anyone else
on the issue of how much money I'm carrying. I don't see any relation that could
be described as baseline, so the only answer is: context.
1AlanCrowe13y
I've become increasingly disillusioned with people's capacity for abstract
thought. Here are two points on my journey.
The public discussion of using wind turbines for carbon-free electricity
generation seems to implicitly assume that electricity output goes as something
like the square-root of windspeed. If the wind is only blowing half speed you
still get something like 70% output. You won't see people saying this directly,
but the general attitude is that you only need back up for the occasional calm
day when the wind doesn't blow at all.
In fact output goes as the cube of windspeed. The energy in the windstream is
one half m v squared, where m, the mass passing your turbine per unit time, is
proportional to the windspeed. If the wind is at half strength, you only get 1/8
output.
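The cube-law arithmetic above can be sketched in a couple of lines (a minimal illustration; constants like air density and rotor area cancel out of the ratio):

```python
def relative_output(windspeed_fraction):
    """Power in a wind stream is P = 0.5 * rho * A * v**3, so output
    relative to full windspeed is simply (v / v_full) ** 3."""
    return windspeed_fraction ** 3

# At half windspeed you get 1/8 of full output, not ~70%:
print(relative_output(0.5))   # 0.125
# The square-root intuition would instead predict about 0.71:
print(0.5 ** 0.5)             # ~0.707
```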
Well, that is physics. Of course people suck at physics. Trouble is, the more I
look at people's capacity for abstract thought, the more problems I see. When
people do a cost/benefit analysis they are terribly vague
[http://www.cawtech.freeserve.co.uk/wotter-or-berse.2.html] on whether they are
supposed to add the costs and benefits or whether the costs get subtracted from
the benefits. Even if they realise that they have to subtract, they are still at
risk of using an inverted scale for the costs and ending up effectively adding.
The probability bump I give to an idea just because some people believe it is
zero. Equivalently, my odds ratio is one. However you describe it, my posterior
is just the same as my prior.
1Douglas_Knight13y
Revised: I do not think that link provides evidence for the quoted sentence. Nor
I do see other evidence that people are that bad at cost-benefit analysis. I
agree that the example presented there is interesting and that one should keep
in mind that disagreements about values can be hidden, sometimes maliciously.
1AlanCrowe13y
I've got a better link
[http://econlog.econlib.org/archives/2010/07/the_wrong_case.html]. David
Henderson catches a professor of economics getting costs and benefits confused
in a published book. Henderson's review is on page 54 of Regulation, and my
viewer puts it on the ninth page of the pdf that Henderson links to
[http://www.cato.org/pubs/regulation/regv33n2/regv33n2-9.pdf#page=9]
0Douglas_Knight13y
That is a good example. Talk of creating jobs as a benefit, rather than a cost,
is quite common. But is it confusion or malice? It is hard for me to imagine
that economists would publish such a book without having it pointed out to them.
The audience certainly is confused. Henderson says "Almost no one spending his
own money makes this mistake" and would not generalize to people's capacity for
abstract thought.
The original question was how much information to extract from the conventional
wisdom. I do not take this as a reason to doubt the conventional wisdom about
personal decisions. Partly, this is public choice, and partly because people do
not address externalities in their personal decisions. Maybe any commonly
accepted argument involving economics should be suspect, though the existence of
the very well-established applause-line of "creating jobs" suggests that there
are limits to how to fool people. But your claim was not that people are bad at
physics and economics, but at the abstract thought of decision theory.
1xamdam13y
I think it largely depends on (a) what the idea is and (b) who believes it, and
what their rationality skills are.
1MartinB13y
I recently learned the hard way that one can easily be an idiot in one area
while being very competent in another. Religious scientists/programmers, etc.
Or, let's say, people who are highly competent in their area of occupation
without looking into other things.
0RomanDavis13y
Out of the huge idea space of possible causally linked events, some make good
stories and some do not. That doesn't tell you whether it's true or not.
If a guy thinks that he can hear Hillary Clinton speaking from the feelings in
his teeth, telling him to murder his cellmate, do you believe what he says?
Status gets mucked up in the calculation, but with strangers it teeters
precariously close to zero.
I really like kids, but the fact that millions of them passionately believe in
Santa Claus does not change my degree of subjective belief one iota.
2Jack13y
Well obviously propositions with extremely high complexity (and therefore very
low priors) are going to remain low even when people believe them. But if
someone says they believe they have 10 dollars on them or that the US
Constitution was signed in September... the belief is enough to make those
claims more likely than not.
0Houshalter13y
But people only believe things that make sense to them. When it comes to
controversial issues, then yes, you'll find that most people will be divided on
them. However, we elect people to lead us in the faith that the majority opinion
is right, so even that isn't entirely true. And out of the vast majority of
possible ideas, most people who live in the same society will agree or disagree
the same way on the majority of them, especially if they have the same
background knowledge.
-2Richard_Kennaway13y
None.
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and
Ph.D.s in the world. There is no idea, however completely fucking crazy, that
you can't find some doctor to argue for.
In any case of a specific X and Y, there will be far more information than that
(who believes X and why? does anyone disbelieve Y? etc.), which makes it
impossible for me to attach any probability for the question as posed.
4Emile13y
Cute quip, but I doubt it. Find me a Ph.D. to argue that the sky is bright
orange, that the English language doesn't exist, or that all humans have at
least seventeen arms and a maximum lifespan of ten minutes.
7Richard_Kennaway13y
All generalisations are bounded, even when the bounds are not expressed. In the
context of his talk, Ben Goldacre was talking about "doctors" being quoted as
supporting various pieces of bad medical science.
1MartinB13y
Many medical doctors around here (Germany) offer homeopathy in addition to their
medical practice. Now it might be that they respond to market demand in order to
sneak in some medical science in between, or that they actually take it seriously.
2DanArmak13y
Or that they respond to market demand and don't try to sneak any medical science
in, based on the principle that the customer is always right.
3Vladimir_M13y
From what I've heard, in Germany and other places where homeopathy enjoys high
status and professional recognition, doctors sometimes use it as a very
convenient way to deal with hypochondriacs who pester them. Sounds to me like a
win-win solution.
1MartinB13y
I still assume that doctors actually want to help people (despite reading the
checklist book, and other stuff). So suppose I have the choice between: world
(a), where doctors also do homeopathy, and (b), where other people do it while
doctors stay true to science. Then I would prefer (a), because at least the
people go to a somewhat competent person.
0DanArmak13y
Homeopathy is at best a placebo. It's rare that there's no better medical way to
help someone. Your assumption is counter to the facts.
Certainly doctors want to help people - all else being equal. But if they
practice homeopathy extensively, then they are prioritizing other things over
helping people.
If market conditions (i.e., the patients' opinions and desires) are such that
they will not accept scientific medicine, and will only use homeopathy anyway,
then I suggest the best way to help people is for all doctors to publicly
denounce homeopathy and thus convince at least some people to use
better-than-placebo treatments instead.
8Scott Alexander13y
I disagree - at least with the part about "it's rare that there's no better
medical way to help people". It's depressingly common that there's no better
medical way to help people. Things like back pain, tiredness, and muscle aches -
the commonest things for which people see doctors - can sometimes be traced to
nice curable medical reasons, but very often as far as anyone knows they're just
there.
Robin Hanson has a theory - and I kind of agree with him - that homeopathy fills
a useful niche. Placebos are pretty effective at curing these random (and
sometimes imagined) aches and pains. But most places consider it illegal or
unethical for doctors to directly prescribe a placebo. Right now a lot of
doctors will just prescribe aspirin or paracetamol or something, but these are
far from totally harmless and there are a lot of things you can't trick patients
into thinking aspirin is a cure for. So what would be really nice, is if there
was a way doctors could give someone a totally harmless and very inexpensive
substance like water and make the patient think it was going to cure everything
and the kitchen sink, without directly lying or exposing themselves to
malpractice allegations.
Where this stands or falls is whether or not it turns patients off real medicine
and gets them to start wanting homeopathy for medically known, treatable
diseases. Hopefully it won't - there aren't a lot of people who want homeopathic
cancer treatment - but that would be the big risk.
3MartinB13y
You might implicitly assume that people make a conscious choice to go the
unscientific route. That is not the case. For a layperson there is no
perceivable difference between a doctor and a homeopath. (Well, maybe there is,
but let's exaggerate here.)
In my experience, the homeopath may have more time to listen, while doctors
often have an approach to treatment speed that reminds me of a fast-food place.
If I were a doctor, then the idea of offering homeopathy, so that people at
least come to me, would make sense both money-wise and to get the effect that
they are already at a doctor's place: treatment with placebos for trivial stuff,
while actual dangerous conditions get checked out by a competent person. It's a
case of corrupting your integrity to some degree to get the message heard.
I considered not going to doctors who offer homeopathy, but then decided
against that due to this reasoning.
5thomblake13y
You could probably ask the doctor why they offer homeopathy, and base your
decision on the sort of answer you get. "Because it's an effective cure..." is
straight out.
4DanArmak13y
tl;dr - if doctors don't denounce homeopaths, people will start going to "real"
homeopaths and other alt-medicine people, and there is no practical limit to the
lies and harm done by real homeopaths.
That is so because doctors also offer homeopathy. If almost all doctors clearly
denounced homeopathy, fewer people would choose to go to homeopaths, and these
people would benefit from better treatment.
This is a problem in its own right that should be solved by giving doctors
incentives to listen to patients more. However, do you think that because
doctors don't listen enough, homeopaths produce better treatment (i.e. better
medical outcomes)?
Do you have evidence that this is the result produced?
What if the reverse happens? Because the doctors endorse homeopathy, patients
start going to homeopaths instead of doctors. Homeopaths are better at selling
themselves, because unlike doctors they can lie ("homeopathy is not a placebo
and will cure your disease!"). They are also better at listening, can create a
nicer (non-clinical) reception atmosphere, they can get more word-of-mouth
networking benefits, etc.
Patients can't normally distinguish "trivial stuff" from dangerous conditions
until it's too late - even doctors sometimes get this wrong. The next logical
step is for people to let homeopaths treat all the trivial stuff, and go to ER
when something really bad happens.
Personal story: my mother is a doctor (geriatrician). When I was a teenager I
had seasonal allergies and she insisted on sending me for weekly acupuncture.
During the hour-long sessions I had to listen to the ramblings of the
acupuncturist. He told me (completely seriously) that, although he personally
didn't have the skill, the people who taught him acupuncture in China could use
it to cure my type 1 diabetes. He also once told me about someone who used
various "alternative medicine" to eat only vine leaves for a year before dying.
When the acupuncture didn't help me, my mother said that was my o
2MartinB13y
Sorry about your experience.
I perceive you as attacking me for holding said position, but I am the wrong
target. I know homeopathy is BS, and I don't use it or advocate it. What I do
understand is doctors who offer it for one reason or another, for the reasons
listed above. What you describe as a result is sadly already happening. I have
had people getting angry at me for clearly stating my view, and the reasons for
it, on homeopathy. (I didn't say BS, but one of the people was a programmer, if
that counts for something.) Many folks do go to alternative treatments, and
forgo doctors as long as possible. People have a weak opinion of "school
medicine" (a German term for official medical knowledge and practice) and
criticize it - sometimes justifiably. And they use all kinds of hyper-skeptical
reasoning that they do not apply to their current favorite. That is bad, and
hopefully goes away. Many still go the double route you listed. And then we
have the anti-vaccination front growing. It is bad, and sad, and useless
stupidity. Let's get angry together, and see what can be done about it.
Personal story: I gave a lecture on skeptical thinking.
1. First try: I dumped everything I knew, and noticed how dealing with the
H-topic tends to close people up.
2. Second try: I cut out a lot, and left the H-topic out. It still didn't work.
I have no idea what I can do about it, and am basically resigning.
0DanArmak13y
I didn't intend to attack you. Sorry I came across that way.
1[anonymous]13y
From what I've been told by friends, here (Austria) they (meaning: most
doctors) do take it seriously. This is understandable; when studying medicine,
the by far larger part of college is devoted to knowing facts, the
craftsmanship (if I may say so), rather than to doing medical science.
This also makes sense, as practice based on existing results already requires
so much training (it is the only college course here which requires at least
six years by default, not including the "Turnus", another three-year probation
period before somebody may practice without a supervisor).
The problem here is that for the general public the difference between a medical
practitioner and any scientist is nil. Strangely enough, they usually do not
make this error in engineering fields, for instance electrical engineer vs.
physicist. It may have something to do with the high status of doctors in
society.
5MartinB13y
I recently found out why doctors cultivate a certain amount of professional
arrogance when dealing with patients: most patients don't understand what's
behind their specific disease - and usually do not care. So if doctors were
open to argument, or stated doubts more openly, the patient might lose trust
and not do what he is told to do. Instilling an absolute belief in doctors'
powers might be very helpful for a large portion of the population. A lot of
my own frustration with doctors can be attributed to my being a non-standard
patient who reads too much.
5Vladimir_M13y
Emile:
These claims would be beyond the border of lunacy for any person, but still, I'm
sure you'll find people with doctorates who have gone crazy and claim such
things.
But more relevantly, Richard's point definitely stands when it comes to
outlandish ideas held by people with relevant top-level academic degrees. Here,
for example, you'll find the website of Gerardus Bouw, a man with a Ph.D. in
astronomy [http://www.geocentricity.com/bibastron/bouw_bio.html] from a highly
reputable university who advocates -- prepare for it -- geocentrism:
http://www.geocentricity.com/ [http://www.geocentricity.com/]
(As far as I see, this is not a joke. Also, I've seen criticisms of Bouw's
ideas, but nobody has ever, to the best of my knowledge, disputed his Ph.D. He
had a teaching position at a reputable-looking college, and I figure they would
have checked.)
2xamdam13y
Here is another one:
http://en.wikipedia.org/wiki/Courtney_Brown_%28researcher%29
[http://en.wikipedia.org/wiki/Courtney_Brown_%28researcher%29]
1Jack13y
It looks like no one ever hired him to teach astronomy or physics. He only ever
taught computer science (and from the sound of it, just programming languages).
My guess is he did get the PhD though.
Also, in fairness to the college he is retired and he's young enough to make me
think that he may have been forced into retirement.
-1Clippy13y
Earth's sun does orbit the earth, under the right frame of reference. What is
outlandish about this?
3JoshuaZ13y
If you read the site, they alternately claim that relativity allows them to
use whatever reference frame they choose, and at other points claim that the
evidence only makes sense for geocentrism.
1Clippy13y
Oh. Well, that's stupid then.
0JoshuaZ13y
I'm not sure it is completely stupid. Consider the argument in the following
fashion:
1) We think your physics is wrong and geocentrism is correct.
2) Even if we're wrong about 1, your physics still supports regarding
geocentrism as being just as valid as heliocentrism.
I don't think that their argument approaches this level of coherence.
Beautiful. Matthew Yglesias, +1 point.
It is entirely possible that some social groups are experiencing the kind of
changes that Flanagan describes, but as Yglesias says, she apparently is unaware
that there is such a thing as scientific evidence on the question.
What solution do people prefer to Pascal's Mugging? I know of three approaches:
1) Handing over the money is the right thing to do exactly as the calculation might indicate.
2) Debiasing against overconfidence shouldn't mean having any confidence in what others believe, but just reducing our own confidence; thus the expected gain if we're wrong is found by drawing from a broader reference class, like "offers from a stranger".
3) The calculation is correct, but we must pre-commit to not paying under such circumstances in order not to be gamed.
The unbounded utility function (in some physical objects that can be tiled
indefinitely) in Pascal's mugging gives infinite expected utility to all
actions, and no reason to prefer handing over the money to any other action.
People don't actually show the pattern of preferences implied by an unbounded
utility function.
If we make the utility function a bounded function of happy lives (or other
tilable physical structures) with a high bound, other possibilities will offer
high expected utility. The Mugger is not the most credible way to get huge
rewards (investing in our civilization on the chance that physics allows
unlimited computation beats the Mugger). This will be the case no matter how
huge we make the (finite) bound.
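The contrast can be made concrete with a toy calculation. This is a minimal sketch with entirely made-up numbers (the bound, scale, probabilities, and payoffs are all illustrative assumptions, not anything from the original argument): once utility is bounded, the mugger's astronomical payoff saturates at the bound, and a mundane offer with non-negligible probability dominates.

```python
from math import exp

# Toy bounded utility function (made-up numbers): U(n) = B * (1 - exp(-n/k))
# approaches the bound B no matter how large n (e.g. happy lives) grows.
def bounded_utility(n, bound=1e12, scale=1e9):
    return bound * (1 - exp(-n / scale))

# The mugger's offer: tiny probability of an astronomically large payoff.
p_mugger, n_mugger = 1e-30, 1e100

# A mundane alternative (e.g. investing in civilization): decent
# probability of a merely huge payoff.
p_invest, n_invest = 1e-3, 1e12

eu_mugger = p_mugger * bounded_utility(n_mugger)
eu_invest = p_invest * bounded_utility(n_invest)

# The mugger's payoff saturates at the bound, so the credible
# alternative wins by many orders of magnitude.
assert eu_invest > eu_mugger
```

With an unbounded utility in the same comparison, the mugger's term grows without limit as the claimed payoff grows, which is exactly the pathology being pointed at.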
1Paul Crowley13y
Bounding the utility function definitely solves the problem, but there are a
couple of problems. One is the principle that the utility function is not up for
grabs [http://wiki.lesswrong.com/wiki/The_utility_function_is_not_up_for_grabs],
the other is that a bounded utility function has some rather nasty consequences
of the "leave one baby on the track" kind.
5CarlShulman13y
I don't buy this. Many people have inconsistent intuitions regarding
aggregation, as with population ethics
[http://plato.stanford.edu/entries/repugnant-conclusion/#AccImpSatPopEth].
Someone with such inconsistent preferences doesn't have a utility function to
preserve.
Also note that a bounded utility function can allot some of the potential
utility under the bound to producing an infinite amount of stuff, and that as a
matter of psychological fact the human emotional response to stimuli can't scale
indefinitely with bigger numbers.
And, of course, allowing unbounded growth of utility with some tilable physical
process means that process can dominate the utility of any non-aggregative
goods, e.g. the existence of at least some instantiations of art or knowledge,
or overall properties of the world like ratios of very good to lives just barely
worth living/creating (although you might claim that the value of the last
scales with population size, many wouldn't characterize it that way).
Bounded utility functions seem to come much closer to letting you represent
actual human concerns, or to represent more of them, in my view.
2Richard_Kennaway13y
Eliezer's original article bases its argument on the use of Solomonoff
induction. He even suggests up front what the problem with it is, although the
comments don't make anything of it: SI is based solely on program length and
ignores computational resources. The optimality theorems around SI depend on the
same assumption. Therefore I suggest:
4. Pascal's Mugging is a refutation of the Solomonoff prior.
But where a computationally bounded agent, or an unbounded one that cares how
much work it does, should get its priors from instead would require more thought
than a few minutes on a lunchtime break.
0Paul Crowley13y
In one sense you can't use evidence to argue with a prior, but I think that
factoring in computational resources as a cost would have put you on the wrong
side of a lot of our discoveries about the Universe.
0Richard_Kennaway13y
Could you expand that with examples? And if you can't use evidence to argue with
a prior, what can you use?
0Paul Crowley13y
I'm thinking of the way we keep finding ways in which the Universe is far larger
than we'd imagined - up to and including the quantum multiverse, and possibly
one day including a multiverse-based solution to the fine tuning problem.
The whole point about a prior is that it's where you start before you've seen
the evidence. But in practice using evidence to choose a prior is likely
justified on the grounds that our actual prior is whatever we evolved with or
whatever evolution's implicit prior is, and settling on a formal prior with
which to attack hard problems is something we do in the face of lots of
evidence. I think.
2Richard_Kennaway13y
It's not clear to me how that bears on the matter. I would need to see something
with some mathematics in it.
There's a potential infinite regress if you argue that changing your prior on
seeing the evidence means it was never your prior, but something prior to it
was.
1. You can go on questioning those previous priors, and so on indefinitely, and
therefore nothing is really a prior.
2. You stop somewhere with an unquestionable prior, and the only unquestionable
truths are those of mathematics, therefore there is an Original Prior that
can be deduced by pure thought. (Calvinist Bayesianism, one might call it.
No agent has the power to choose its priors, for it would have to base its
choice on something prior to those priors. Nor can its priors be conditional
in any way upon any property of that agent, for then again they would not be
prior. The true Prior is prior to all things, and must therefore be inherent
in the mathematical structure of being. This Prior is common to all agents
but in their fundamentally posterior state they are incapable of perceiving
it. I'm tempted to pastiche the whole Five Points of Calvinism, but that's
enough for the moment.)
3. You stop somewhere, because life is short, with a prior that appears
satisfactory for the moment, but which allows the possibility of later
rejecting it.
I think 1 and 2 are non-starters, and 3 allows for evidence defeating priors.
What do you mean by "evolution's implicit prior"?
1cupholder13y
Tom_McCabe2
[http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/foh]
suggests generalizing EY's rebuttal of Pascal's Wager to Pascal's Mugging: it's
not actually obvious that someone claiming they'll destroy 3^^^^3 people makes
it more likely that 3^^^^3 people will die. The claim is arguably such weak
evidence that it's still about equally likely that handing over the $5 will kill
3^^^^3 people, and if the two probabilities are sufficiently equal, they'll
cancel out enough to make it not worth handing over the $5.
Personally, I always just figured that the probability of someone (a)
threatening me with killing 3^^^^3 people, (b) having the ability to do so, and
(c) not going ahead and killing the people anyway after I give them the $5, is
going to be way less than 1/3^^^^3, so the expected utility of giving the mugger
the $5 is almost certainly less than the $5 of utility I get by hanging on to
it. In which case there is no problem to fix. EY claims that the
Solomonoff-calculated probability of someone having 'magic powers from outside
the Matrix' 'isn't anywhere near as small as 3^^^^3 is large,' but to me that
just suggests that the Solomonoff calculation is too credulous.
(Edited to try and improve paraphrase of Tom_McCabe2.)
1Paul Crowley13y
This seems very similar to the "reference class fallback" approach to confidence
set out in point 2, but I prefer to explicitly refer to reference classes when
setting out that approach, otherwise the exactly even odds you apply to
massively positive and massively negative utility here seem to come rather
conveniently out of a hat...
0cupholder13y
Fair enough. Actually, looking at my comment again, I think I paraphrased
Tom_McCabe2 really badly, so thanks for replying and making me take another
look! I'll try and edit my comment so it's a better paraphrase.
1[anonymous]13y
I'm not sure this problem needs a "solution" in the sense that everyone here
seems to accept. Human beings have preferences. Utility functions are an
imperfect way of modeling those preferences, not some paragon of virtue that
everyone should aspire to. Most models break down when pushed outside their area
of applicability.
Because it was used somewhere, I calculated my own weight's worth in gold - it is about 3.5 million EUR. In silver, you can get me for 50,000 EUR.
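The calculation is just body weight times the metal's price per kilogram. A sketch with hypothetical inputs (the body weight and the rough 2010-era metal prices below are illustrative assumptions chosen to reproduce the quoted figures, not the commenter's actual numbers):

```python
# Hypothetical inputs: an assumed body weight and rough illustrative
# precious-metal prices in EUR per kilogram.
weight_kg = 100            # assumed body weight
gold_eur_per_kg = 35_000   # rough illustrative gold price
silver_eur_per_kg = 500    # rough illustrative silver price

worth_in_gold = weight_kg * gold_eur_per_kg      # 3,500,000 EUR
worth_in_silver = weight_kg * silver_eur_per_kg  # 50,000 EUR

print(f"Gold: {worth_in_gold:,} EUR, silver: {worth_in_silver:,} EUR")
```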
The Mythbusters recently built a lead balloon and had it fly. Some proverbs don't hold up to reality and/or engineering.
I think I found the study they're talking about
[http://www.bmj.com/cgi/content/full/340/jun08_1/c2161] thanks to this article
[http://www.timesonline.co.uk/tol/life_and_style/health/article7146442.ece]. I
might take a look at it - if the methodology is literally just 'smoking was
banned, then the heart attack rate dropped', that sucks.
(Edit to link to the full study and not the abstract.)
--------------------------------------------------------------------------------
Just skimmed it. The methodology is better than that. They use a regression to
adjust for the pre-existing downward trend in the heart attack hospital
admission rate; they represent it as a linear trend, and that looks fair to me
based on eyeballing the data in figures 1 and 2. They also adjust for
week-to-week variation and temperature, and the study says its results are 'more
modest' than others', and fit the predictions of someone else's mathematical
model, which are fair sanity checks.
I still don't know how robust the study is - there might be some confounder
they've overlooked that I don't know enough about smoking to think of - but it's
at least not as bad as I expected. The authors say they want to do future work
with a better data set that has data on whether patients are active smokers, to
separate the effect of secondhand smoke from active smoking. Sounds interesting.
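The study's approach is essentially an interrupted time-series regression. A minimal sketch on synthetic data (the numbers, trend, and ban effect here are all invented for illustration; this is not the study's dataset or exact model): regress admissions on a linear time trend plus a ban indicator, so the indicator's coefficient estimates the level change net of the pre-existing downward trend.

```python
import numpy as np

# Synthetic weekly heart-attack admissions: a pre-existing linear
# downward trend, plus a level drop when the (hypothetical) ban starts.
rng = np.random.default_rng(0)
weeks = np.arange(104)
ban = (weeks >= 52).astype(float)   # ban in effect from week 52
true_trend, true_drop = -0.3, -12.0
admissions = 300 + true_trend * weeks + true_drop * ban \
    + rng.normal(0, 3, size=104)

# Regress admissions on [intercept, week, ban indicator]; the ban
# coefficient is the estimated effect after removing the linear trend.
X = np.column_stack([np.ones_like(weeks, dtype=float), weeks, ban])
coef, *_ = np.linalg.lstsq(X, admissions, rcond=None)
intercept, trend, ban_effect = coef
print(f"estimated ban effect: {ban_effect:.1f} admissions/week")
```

The real study additionally adjusts for week-to-week variation and temperature, which would just mean adding more columns to the design matrix.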
I agree that this article isn't very good. It seems to do the standard problem
of combining a lot of different ideas about what the Singularity would entail.
It emphasizes Kurzweil way too much, and includes Kurzweil's fairly dubious
ideas about nutrition and health. The article also uses Andrew Orlowski as a
serious critic of the Singularity making unsubstantiated claims about how the
Singularity will only help the rich. Given that Orlowski's entire approach is to
criticize anything remotely new or weird-seeming, I'm disappointed that the NYT
would really use him as a serious critic in this context. The article strongly
reinforces the perception that the Singularity is just a geek-religious thing.
Overall, not well done at all.
9ata13y
I'm starting to think SIAI might have to jettison the "singularity" terminology
(for the intelligence explosion thesis) if it's going to stand on its own. It's
a cool word, and it would be a shame to lose it, but it's become associated too
much with utopian futurist storytelling for it to accurately describe what SIAI
is actually working on.
Edit: Look at this Facebook group.
[http://www.facebook.com/group.php?gid=374692554288&ref=ts] This sort of thing
is just embarrassing to be associated with. "If you are feeling brave, you can
approach a stranger in the street and speak your message!" Seriously, this
practically is religion. People should be raising awareness of singularity
issues not as a prophecy but as a very serious and difficult research goal. It
doesn't do any good to have people going around telling stories about the
magical Future-Land while knowing nothing about existential risks or cognitive
biases or friendly AI issues.
3JoshuaZ13y
I'm not sure that your criticism completely holds water. Simply put, Friendly
AI is a worry that has convinced only some Singularitarians. One might not be
deeply concerned about it (possible example reasons: 1) you expect uploading to
come well before general AI; 2) you think the probable technical path to AI
will force many more stages of AI of much lower intelligence, which are likely
to give us good data for solving the problem).
I agree that this Facebook group does look very much like something one would
expect out of a missionizing religion. This section in particular looked like a
caricature:
The certainty for 2045 is the most glaring aspect of this aside from the
pseudo-missionary aspect. Also note that some of the people associated with this
group are very prominent Singularitarians and Transhumanists. Aubrey de Grey is
listed as an administrator.
But, one should remember that reversed stupidity is not intelligence. Moreover,
there's a reason that missionaries sound like this: They have a very high
confidence in their correctness. If one had a similarly high confidence in the
probability of a Singularity event, and you thought that that event was more
likely to occur safely if more people were aware of it, and was more likely to
occur soon if more people were aware of it, and buy into something like the
galactic colonization argument
[http://www.nickbostrom.com/astronomical/waste.html], and you believe that
sending messages like this has a high chance of getting people to be aware and
take you seriously, then this is a reasonable course of action. Now, that's a
lot of premises, some of which have high likelihoods, others very low ones.
Obviously there's a very low probability that sending out these sorts of
messages is at all a net benefit. Indeed, I have to wonder if there's any
deliberate mimicry of how religious groups send out messages, or whether
successfully reproducing memes naturally hit on a small set of methods of
reproduction (but
3NancyLebovitz13y
Speaking of things to be worried about other than AI, I wonder if a biotech
disaster is a more urgent problem, even if a less comprehensive one.
Part of what I'm assuming is that developing a self-amplifying AI is so hard
that biotech could be well-developed first.
While it doesn't seem likely to me that a bio-tech disaster could wipe out the
human race, it could cause huge damage-- I'm imagining diseases aimed at
monoculture crops, or plagues as the result of terrorism or incompetent
experiments.
My other assumptions are that FAI research is dependent on a wealthy, secure
society with a good bit of surplus wealth for individual projects, and is likely
to be highly dependent on a small number of specific people for the foreseeable
future.
On the other hand, FAI is at least a relatively well-defined project. I'm not
sure where you'd start to prevent biotech disasters.
4NihilCredo12y
That's one hell of a "relatively" you've got there!
0[anonymous]13y
Agreed, but... they'd even have to change their own name!
0orthonormal13y
It's better than mainstream Singularity articles in the past, IMO;
unfortunately, Kurzweil is seen as an authority, but at least it's written with
some respect for the idea.
0[anonymous]13y
It does seem to be about a lot of different things, some of which are just
synonymous with scientific progress (I don't think it's any revelation that
synthetic biology is going to become more sophisticated.)
0Tyrrell_McAllister13y
I'm curious: Was the SIAI contacted for that article? I haven't had time to read
it all, but a word-search for "Singularity Institute" and "Yudkowsky" turned up
nothing.
2ata13y
I hear Michael Anissimov was not contacted, and he's probably the one they'd
have the press talk to.
I've recently begun downvoting comments that are at -2 rating regardless of my feelings about them. I instituted this policy after observing that a significant number of comments reach -2 but fail to be pushed over to -3, which I attribute to the threshold being too much of a psychological barrier for many people to cross; they don't want to be 'the one to push the button'. This is an extension of my RL policy of taking 'the last' of something laid out for communal use (coffee, donuts, cups, etc.). If the comment thread really needs to be visible, ... (read more)
I wish you wouldn't do that, and stuck instead with the generally approved norm
of downvoting to mean "I'd prefer to see fewer comments like this" and upvoting
"I'd like to see more like this".
You're deliberately participating in information cascades
[http://wiki.lesswrong.com/wiki/Information_cascade], and thereby undermining
the filtering process. As an antidote, I recommend using the anti-kibitzer
script (you can do that through your Preferences page).
1Rain13y
I disagree that that's the formula used for comments that exist within the range
-2 to 2. Within that range, from what I've observed of voting patterns, it seems
far more likely that the equation is related to what value the comment "should
be at." If many people used anti-kibitzing, I doubt this would remain a problem.
2Vladimir_Nesov13y
I believe your hypothesis and decision are possibly correct, but if they are,
you should expect your downvotes to often be corrected upwards again. If this
doesn't happen, then you are wrong and shouldn't apply this heuristic.
Morendil doesn't say it's what actually happens, he merely says it should happen
this way, and that you in particular should behave this way.
1Rain13y
I thought of doing this after reading the article Composting Fruitless Debates
[http://lesswrong.com/lw/2an/composting_fruitless_debates/] and making a
voted-up suggestion
[http://lesswrong.com/lw/2an/composting_fruitless_debates/23vv] to downvote
below threshold.
I'm using it as an excuse to overcome my general laziness with regards to
voting, which has the typical pattern of one vote (up or down) per hundreds of
comments read.
Edit: And due to remembering Eliezer's comments about moderation
[http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/].
0NancyLebovitz13y
I don't do huge amounts of voting, and I admit that if a post I like has what I
consider to be "enough" votes, I don't upvote it further. I can certainly change
this policy if there's reason to think upvoting everything I'd like to see more
of would help make LW work better.
0RobinZ13y
I am tempted to downvote this comment from -2 just for the irony, but I don't
prefer to see fewer comments like this, so I won't.
Besides, the default cutoff is at -4, not -3.
4Rain13y
After logging out and attempting to view a thread with a comment at exactly -3,
it showed that comment to be below threshold. I doubt that it retains customized
settings after logging out, and I do not believe that I changed mine in the
first place, leading me to believe that -3 is indeed the threshold.
Also, my original comment was at -3 within minutes of posting.
1RobinZ13y
The default was -4 logged in when I joined last year - perhaps it's different
for non-logged-in people.
Also, that makes me guess people changed their votes to aim your comment at -2.
3Douglas_Knight13y
Here
[https://github.com/tricycle/lesswrong/commit/931b9ab98c57283761fab9a05cd8d85d50f7d419]
is the change. Also, the number refers to the lowest visible comments, not the
highest invisible comments.
I think most claims of countersignaling are actually ordinary signaling, where the costly signal is foregoing another group and the trait being signaled is loyalty to the first group. Countersignaling is where foregoing the standard signal sends a stronger positive message of the same trait to the usual recipients.
That article makes it sound like "countersignaling" is forgoing a mandated
signal - like showing up at a formal-dress occasion in street clothes.
Alicorn made a post about the tactics of countersignaling a while back
[http://lesswrong.com/lw/1sa/things_you_cant_countersignal/].
1Douglas_Knight13y
I said "standard" because game theory doesn't talk about mandates, but that's
pretty much what I said, isn't it? If you disagree with that usage, what do you
think is right?
Incidentally, in von Neumann's model of poker
[http://www.math.ucla.edu/~tom/papers/poker1.pdf], you should raise when you
have a good hand or a poor hand, and check when you have a mediocre hand, which
looks kind of like countersignaling. Of course, the information transference
that yields the name "signal" is rather different. Also, I'm not interested in
applications of game theory to hermetically sealed games.
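The bet-with-extremes, check-with-the-middle shape of von Neumann's model can be sketched as a simple threshold strategy. This is a toy illustration, not the solved model: the thresholds below are made up for clarity, whereas in the actual model they fall out of the pot and bet sizes.

```python
# Toy sketch of von Neumann's simplified poker strategy: hands are
# uniform on [0, 1]; the bettor raises with very strong hands (value)
# and very weak hands (bluffs), checking in between.
# Illustrative thresholds, not the model's exact optimum:
BLUFF_BELOW = 0.1   # raise (bluff) with the worst hands
VALUE_ABOVE = 0.7   # raise for value with the best hands

def action(hand):
    if hand < BLUFF_BELOW or hand > VALUE_ABOVE:
        return "raise"
    return "check"

# Mediocre hands check; extreme hands raise - the countersignaling-like
# shape described above.
assert action(0.05) == "raise"
assert action(0.5) == "check"
assert action(0.9) == "raise"
```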
1RobinZ13y
I guess I don't understand your question, then - countersignaling seems like a
perfectly ordinary proper subset of signaling.
2Douglas_Knight13y
Yes, countersignaling is signaling. The question is about practice, not theory.
Does countersignaling actually happen?
0RobinZ13y
I can't prove that it does, if I'm honest.
1SilasBarta13y
I play randomly for the first several rounds, so as to destroy the entanglement
between my bets, my face, and my hand.
0RolfAndreassen13y
Unless you're using an external randomness generator, it's quite unlikely that
you're not generating a detectable pattern.
1Larks13y
He can just play blind, and not look at his cards.
Try it out, guys! LongBets and PredictionBook are good, but they're their own niche; LongBets won't help you with pundits who don't use it, and PredictionBook is aimed at personal use. If you want to track current pundits, WrongTomorrow seems like the best bet.
Am I correct in reading that Longbets charges a $50 fee for publishing a
prediction, and that predictions have to be a minimum of 2 years in the future?
That's a bit harsh. But these sites are pretty interesting, and they could be
useful too. You could judge the accuracy of different users, including how
accurate they are at long-term versus short-term predictions, as well as how
accurate they are in different categories (or just how accurate they are on
average, if you want to keep it simple). Then you can create a fairly decent
picture of the future, albeit I expect many of the predictions will contradict
each other. This is kind of what they're already doing, obviously, but they
could still take it a step further.
Anyone know how to defeat the availability heuristic? Put another way, does anyone have advice on how to deal with incoherent or insane propositions while losing as little personal sanity as possible? Is there such a thing as "safety gloves" for dangerous memes?
I'm asking because I'm currently studying for the California Bar exam, which requires me to memorize hundreds of pages of legal rules, together with their so-called justifications. Of course, in many cases the "justifications" are incoherent, Orwellian doublespeak, and/or tend... (read more)
I would not worry overmuch about the long-term negative effects of your studying
for the bar: with the possible exception of the "overly sincere" types who fall
very hard for cults and other forms of indoctrination, people have a lot of
antibodies to this kind of thing.
You will continue to be entangled with reality after you pass the exam, and you
can do things, like read works of social science that carve reality at the
joints, to speed up the rate at which your continued entanglement with reality
will cancel out any falsehoods you have to cram for now. Specifically, there
are works about the law that do carve reality at the joints -- Nick Szabo's
online writings IMO fall in that category. Nick has a law degree, by the way,
and there is certainly nothing wrong with his ability to perceive reality
correctly.
ADDED. The things that are really damaging to a person's rationality, IMHO, are
natural human motivations. For example, when you start practicing, suppose you
decide to do a lot of trials and learn to derive pleasure -- to get a real
high -- from the combative and adversarial part of that, so that the high you
get from winning with a slick and misleading angle trumps the high you get
from satisfying your curiosity and from refining and finding errors in your
model of reality. Well, I would worry about that a lot more than about your
throwing yourself fully into winning on this exam, because IMHO the things we
derive no pleasure from, but do to achieve some end we care about (like
advancing in our career by getting a credential), have a lot less influence on
who we turn out to be than things we do because we find them intrinsically
rewarding.
One more thing: we should not all make our living as computer programmers. That
would make the community less robust than it otherwise would be :)
0Mass_Driver13y
Thank you! This is really helpful, and I look forward to reading Szabo in
August.
2Jordan13y
I worry about this as well when I'm reading long arguments or long works of
fiction presenting ideas I disagree with. My tactic is to stop occasionally and
go through a mental dialog simulating how I would respond to the author in
person. This serves a double purpose, as hopefully I'll have better cached
arguments in the event I ever need them.
Of course, this is a dangerous tactic as well, because you may be shutting off
critical reasoning applied to your preexisting beliefs. I only apply this tactic
when I'm very confident the author is wrong and is using fallacious arguments.
Even then I make sure to spend some amount of time playing devil's advocate.
It promises such lovely possibilities as quick solutions to NP-complete problems, and I'm not entirely sure the mechanism couldn't also be used to do arbitrary amounts of computation in finite time. Certainly worth a read.
However, I don't understand quantum mechanics well enough to tell how sane the paper is, or what the limits of what they've discovered are. I'm hoping one of you does.
It won't work, as is clearly explained here
[http://www.fanfiction.net/s/5782108/17/Harry_Potter_and_the_Methods_of_Rationality].
To put this into my own words: "The more information you extract from the future,
the less you are able to control the future from the past. And hence, the less
understanding you can have about what those bits of future-generated information
are actually going to mean."
I wrote that before actually looking at the paper you linked. I don't understand
much QM either, but now that I have looked it seems to me that figure 2 of the
paper backs me up on my interpretation of Harry's experiment.
5Baughn12y
Even if it's written by Eliezer, that's still generalizing from fictional
evidence. We don't know what the laws of physics are supposed to be there.
Well. You probably can't use time-travel to get infinite computing power. But
that's not to say you can't get strictly finite power out of it; in Harry's
case, his experiment would probably have worked just fine if he'd been the sort
of person who'd refuse to write "DO NOT MESS WITH TIME".
1cousin_it12y
Playing chicken with the universe, huh? As long as scaring Harry is easier than
solving his homework problem, I'd expect the universe to do the former :-) Then
again, you could make a robot use the Time-Turner...
Clippy-related: The Paper Clips Project is run by a school trying to overcome scope insensitivity by representing the eleven million people killed in the Holocaust with one paper clip per victim.
From that Wikipedia article:
Apologizing for ... being German? That's really bizarre.
2LucasSloan13y
Not really. Most cultures go funny in the head around the Holocaust. It is, for
some reason, considered imperative that 10th graders in California spend more
time being made to feel guilty about the Holocaust than learning about the
actual politics of the Weimar Republic.
0NancyLebovitz13y
Cultures can also be very weird about how they treat schoolchildren. The kids
weren't responsible for any part of the Holocaust, and they're theoretically
apologizing to someone who can't hear it.
I can see some point in all this if you believe that Germans are especially apt
to genocide (I have no strong opinion about this) and need to keep being
reminded not to do it. Still, if this sort of apology is of any use, I'd take it
more seriously if it were done spontaneously by individuals.
0Clippy13y
I think it's very noble of them to collect numerous paperclips and hold them
safely out of use. c=@ I just hope they have appropriate protocols in place to
ensure they don't become stolen or unbent. Anyone know if there's an insurance
policy taken out against loss or destruction of the paperclips?
4AdeleneDawner13y
I doubt there's insurance on the paperclips themselves, but I suspect that
having associated them with something generally considered sacred-ish will do a
better job of keeping them safe than an insurance policy in any case. It's
unlikely that anyone will bother to overcome the taboo on destroying sacred
sites to steal or damage them, and if someone does, I can virtually guarantee
that the story will be in the news and more paperclips will be donated to the
project - possibly several times the number of paperclips that were stolen.
1Clippy13y
Thanks for the idea!
1AdeleneDawner13y
Noteworthy: In order for that to work as a safety mechanism for getting the
paperclips replaced, the project has to be able to get the attention of the
media. This project obviously can - we heard about it because it succeeded - but
that doesn't necessarily imply that other similar projects will easily be able
to do the same.
4Clippy13y
What are you talking about? I was referring to your idea to increase the total
number of paperclips by stealing from those held by the memorial project.
2NancyLebovitz13y
I'm feeling naive at the moment. I assumed that the good idea was encouraging
people to use paperclips as part of their sacred projects.
This would serve your purposes by making people a little more like paperclip
maximizers.
2Blueberry13y
Clippy has that effect on many people.
0Clippy13y
Why not do both?
0NancyLebovitz13y
If paper clips are apt to be stolen, then they won't be suitable for any sacred
project which depends on aiming at a permanent structure.
There are sacred projects which are evanescent, like sand painting
[http://en.wikipedia.org/wiki/Sandpainting] or religious services, but I can't
think of any sacred project which is designed to include its stuff being stolen.
2AdeleneDawner13y
*snerk* I thought that might be what you were thinking. I figured you'd get
upset if I said so without more evidence, though, if I was wrong.
In that case:
1. It's not likely to work more than once on any given project, especially if
the thefts happen too close together in time. A project that is stolen from
too frequently is likely to be abandoned.
2. Repeated thefts of paperclips from unrelated projects will almost certainly
lead to future projects being disinclined to use paperclips for such things,
which would I suspect be a net loss for you. Doing it once won't be a
problem in that way, though.
Bonus: If you go through with it, and that makes the news, it should count as
some pretty significant evidence that you exist and aren't a roleplaying human's
character.
ETA: Oh, and it'll probably work better if you make it look like vandalism, not
just a mysterious disappearance - damage the place a little, so it looks better
for the media. You can look for news stories about vandalism for pictures of
what the media likes to report on.
2SilasBarta13y
Adelene ... please, PLEASE stop giving the "Clippy" character ideas!
2AdeleneDawner13y
Clippy came up with the theft idea all on eir own, actually - my original
suggestion can be just as easily parsed as an idea for less costly security for
paperclips that are being stored on Earth.
Also, consider: If Clippy is the type of being who would do such a thing,
wouldn't it be better for us to know that? (And of course if Clippy is just
someone's character, I haven't done anything worse than thumb my nose at a few
taboos.)
0SilasBarta13y
You said this [http://lesswrong.com/lw/2bi/open_thread_june_2010_part_2/256g]:
2AdeleneDawner13y
Yes, in response to this:
......which, on reflection, doesn't necessarily imply theft; I suppose it could
refer to the memorial getting sucked into a sinkhole or something. Oops?
Maybe this has been discussed before -- if so, please just answer with a link.
Has anyone considered the possibility that the only friendly AI may be one that commits suicide?
There's great diversity in human values, but all of them have in common that they take as given the limitations of Homo sapiens. In particular, the fact that each Homo sapiens has roughly equal physical and mental capacities to all other Homo sapiens. We have developed diverse systems of rules for interpersonal behavior, but all of them are built for dealing with groups of people lik... (read more)
Do you ever have a day when you log on and it seems like everyone is "wrong on
the Internet"? (For values of "everyone" equal to 3, on this occasion.) Robin
Hanson and Katja Grace both have posts (on teenage angst, on population) where
something just seems off, elusively wrong; and now SarahC suggests that "the
only friendly AI may be one that commits suicide". Something about this
conjunction of opinions seems obscurely portentous to me. Maybe it's just a
know-thyself moment; there's some nascent opinion of my own that's going to
crystallize in response.
Now that my special moment of sharing is out of the way... Sarah, is the
friendly AI allowed to do just one act of good before it kills itself? Make a
child smile, take a few pretty photos from orbit, save someone from dying, stop
a war, invent cures for a few hundred diseases? I assume there is some integrity
of internal logic behind this thought of yours, but it seems to be overlooking
so much about reality that there has to be a significant cognitive disconnect at
work here.
0cupholder13y
I've noticed I get this feeling relatively often from Overcoming Bias. I think
it comes with the contrarian blogging territory.
1Richard_Kennaway13y
I get it from OB also, which I have not followed for some time, and many other
places. For me it is the suspicion that I am looking at thought gone wrong
[http://web.maths.unsw.edu.au/~jim/wrongthoughts.html].
5Rain13y
I would call it "pet theory syndrome." Someone comes up with a way of
"explaining" things and then suddenly the whole world is seen through that
particular lens rather than having a more nuanced view; nearly everything is
reinterpreted. In Hanson's case, the pet theories are near/far and status.
1JoshuaZ13y
Prediction markets also.
Is anyone worried that LW might have similar issues? If so, what would be the
relevant pet theories?
2Larks13y
On a related note: suppose a community of moderately rational people had one
member who was a lot more informed than them on some subject, but wrong about
it. Isn't it likely they might all end up wrong together? Prediction Markets was
the original subject, but it could go for a much wider range of topics: Multiple
Worlds, Hansonian Medicine, Far/near, Cryonics...
3Rain13y
That's where the scientific method comes in handy, though quite a few of
Hanson's posts sound like pop psychology rather than a testable hypothesis.
2JoshuaZ13y
I don't get this impression from OB at all. The thoughts at OB even when I
disagree with them are far more coherent than the sort of examples given as
thought gone wrong. I'm also not sure it is easy to actually distinguish between
"thought gone wrong" in the sense of being outright nonsense as drescribed in
the linked essay and actually good but highly technical thought processes. For
example I could write something like:
Now, what I wrote above isn't nonsense. It is just poorly written, poorly
explained math. But if you don't have some background, this likely looks as bad
as the passages quoted by the linked essay. Even when the writing is not poor
like that above, one can easily find sections from conversations on LW about say
CEV or Bayesianism that look about as nonsensical if one doesn't know the terms.
So without extensive investigation I don't think one can easily judge whether a
given passage is nonsense or not. The essay linked to is therefore less than
compelling. (In fact, having studied many of their examples, I can safely say
that they really are nonsensical, but it isn't clear to me how you can tell
that from the short passages given, with their complete lack of context. Edit:
And it could very well be that I just haven't thought about them enough or
approached them correctly, just as someone who is very bad at math might
consider it all to be nonsense even after careful examination.) It does however
seem that some disciplines run into this problem far more often than others.
Philosophy and theology both seem to run into the problem of stringing
nonsensical streams of words together more often than most other areas. I
suspect that this is connected to the lack of anything resembling an
experimental method.
0Richard_Kennaway13y
OB isn't a technical blog though.
Having criticised it so harshly, I'd better back that up with evidence. Exhibit
A [http://www.overcomingbias.com/2009/09/this-is-the-dream-time.html]: a highly
detailed scenario [http://lesswrong.com/lw/jk/burdensome_details/] of our far
future, supported by not much. Which in later postings to OB (just enter
"dreamtime" into the OB search box) becomes part of the background assumptions,
just as earlier OB speculations become part of the background assumptions of
that posting. It's like looking at the sky and drawing in constellations (the
stars in this analogy being the snippets of scientific evidence adduced here and
there).
1JoshuaZ13y
That example seems to be more in the realm of "not very good thinking" than
thought gone wrong. The thoughts are coherent, just not well justified. It isn't
like the sort of thing that is quoted in the example essay where thought gone
wrong seems to mean something closer to "not even wrong because it is
incoherent."
2Richard_Kennaway13y
Ok, OB certainly isn't the sort of word salad that Stove is attacking, so that
wasn't a good comparison. But there does seem to me to be something
systematically wrong with OB. There is the man-with-a-hammer thing, but I don't
have a problem with people having their hobbyhorses, I know I have some of my
own. I'm more put off by the way that speculations get tacitly upgraded to
background assumptions, the join-the-dots use of evidence, and all those "X is
Y" titles.
0SilasBarta13y
Got a good summary of this? The author seems to be taking way too long to make
his point.
0[anonymous]13y
"Most human thought has been various different kinds of nonsense that we mostly
haven't yet categorized or named."
0Richard_Kennaway13y
This paragraph, perhaps?
I think that should go in the next quotes thread.
4khafra13y
Or perhaps the quotes thread from 12 months ago
[http://lesswrong.com/lw/10o/rationality_quotes_june_2009/uvo].
2[anonymous]13y
I'm not necessarily arguing for this position as saying we need to address it.
"Suicidal AI" is to the problem of constructing FAI as anarchism is to political
theory; if you want to build something (an FAI, a good government) then, on the
philosophical level, you have to at least take a stab at countering the argument
that perhaps it is impossible to build it.
I'm working under the assumption that we don't really know at this point what
"Friendly" means, otherwise there wouldn't be a problem to solve. We don't yet
know what we want the AI to do.
What we do know about morality is that human beings practice it. So all our
moral laws and intuitions are designed, in particular, for small, mortal
creatures, living among other small, mortal creatures.
Egalitarianism, for example, only makes sense if "all men are created equal" is
more or less a statement of fact. What should an egalitarian human make of a
powerful AI? Is it a tyrant? Well, no, a tyrant is a human who behaves as if
he's not equal to other humans; the AI simply isn't equal. Well, then, is the AI
a good citizen? No, not really, because citizens treat each other on an equal
footing...
The trouble here, I think, is that really all our notions of goodness are really
"what is good for a human to do." Perhaps you could extend them to "what is good
for a Klingon to do" -- but a lot of moral opinions are specifically about how
to treat other people who are roughly equivalent to yourself. "Do unto others as
you would have them do unto you." The kind of rules you'd set for an AI would be
fundamentally different from our rules for ourselves and each other.
It would be as if a human had a special, obsessive concern and care for an ant
farm. You can protect the ants from dying. But there are lots of things you
can't do for the ants: be an ant's friend, respect an ant, keep up your end of a
bargain with an ant, treat an ant as a brother...
I had a friend once who said, "If God existed, I would be his enemy." Could
3Mitchell_Porter13y
You say, human values are made for agents of equal power; an AI would not be
equal; so maybe the friendly thing to do is for it to delete itself. My question
was, is it allowed to do just one or two positive things before it does this? I
can also ask: if overwhelming power is the problem, can't it just reduce itself
to human scale? And when you think about all the things that go wrong in the
world every day, then it is obvious that there is plenty for a friendly
superhuman agency to do. So the whole idea that the best thing it could do is
delete itself or hobble itself looks extremely dubious. If your point was that
we cannot hope to figure out what friendliness should actually be, and so we
just shouldn't make superhuman agents, that would make more sense.
The comparison to government makes sense in that the power of a mature AI is
imagined to be more like that of a state than that of a human individual. It is
likely that once an AI had arrived at a stable conception of purpose, it would
produce many, many other agents, of varying capability and lifespan, for the
implementation of that purpose in the world. There might still be a central
super-AI, or its progeny might operate in a completely distributed fashion. But
everything would still have been determined by the initial purpose. If it was a
purpose that cared nothing for life as we know it, then these derived agencies
might just pave the earth and build a new machine ecology. If it was a purpose
that placed a value on humans being there and living a certain sort of life,
then some of them would spread out among us and interact with us accordingly.
You could think of it in cultural terms: the AI sphere would have a culture, a
value system, governing its interactions with us. Because of the radical
contingency of programmed values, that culture might leave us alone, it might
prod our affairs into taking a different shape, or it might act to swiftly and
decisively transform human nature. All of these outcomes wou
1NancyLebovitz13y
It seems unlikely that an FAI would commit suicide if humans need to be
protected from UAI, or if there are other threats that only an FAI could handle.
We've talked about a book club before but did anyone ever actually succeed in starting one? Since it is summer now I figure a few more of us might have some free time. Are people actually interested?
I've been thinking about finally starting a Study Group thread, primarily with a
focus on Jaynes and Pearl both of which I'm studying at the moment. It would
probably make sense to expand it to other books including non-math books -
though the set of active books should remain small.
Two things have been holding me back - for one, the IMO excessively blog-like
nature of LW with the result that once a conversation has rolled off the front
page it often tends to die off, and for another a fear of not having enough time
and energy to devote to actually facilitating discussion.
Facilitation of some sort seems required: as I understand it a book club or
study group entails asking a few participants to make a firm commitment to go
through a chapter or a section at a time and report back, help each other out
and so on.
1Jack13y
Well those are actually exactly the two books I had in mind (though I think we
should probably just start with one of them).
Agreed. Two options:
1. A new top level post for every chapter (or perhaps every two chapters,
whatever division is convenient). This was a little annoying when it was one
person covering every chapter in Dennett's Consciousness explained but if a
decent number of people were participating the book club (and if each new
post was put up by the facilitator, explaining hard to understand concepts)
they'd probably justify themselves.
2. We start a dedicated wordpress or blogspot blog and give the facilitators
posting powers.
I wouldn't at all mind posting to start discussion on some sections but I'm not
the best person to be explaining the math if it gets confusing-- if that was
part of your expectation of facilitation.
I was thinking a reading group for Jaynes would have a better chance of
success than Pearl-- the issues are more general, the math looks easier and the
entire thing is online. But it sounds like you've looked at them more than I
have, what are your thoughts? I guess what really matters is what people are
interested in.
For those interested the Jaynes book can be found here
[http://www-biba.inrialpes.fr/Jaynes/prob.html] and much of Pearl's book can be
found here [http://bayes.cs.ucla.edu/BOOK-2K/book-toc.html].
1Richard_Kennaway13y
Is there any existing off-the-shelf web software for setting up book-club-type
discussions?
I don't want to make too much of the infrastructure issue, as what really makes
a book club work is the commitment of its members and facilitators, but it would
be convenient if there was a ready-made infrastructure available, like there is
for blogging and mailing lists.
Maybe the LW blog+wiki software running on a separate domain
(lesswrongbooks.com?) would be enough. Blog for current discussions, wiki for
summaries of past discussions.
2Morendil13y
There's a risk that any amount of thinking about infrastructure could kill off
what energy there is, and since there appears to be some energy at present, I
would rather favor having the discussion about the book club in the book club
thread. :)
IOW we can kick off the initiative locally and let it find a new venue if and
when that becomes necessary. There also seems to be some sort of provisional
consensus that it's not quite time yet to fragment the LW readership: the LW
subreddit doesn't seem to have panned out.
It seems to me that Jaynes is definitely topical for LW, I wouldn't worry about
discussions among people studying it becoming annoying to the rest of the
community. There are many, many gems pertaining to rationality in each of the
chapters I've read so far.
0Jack13y
This [http://www.bookclubsonline.org/] looks like it could work. A wordpress
blog would probably be fine as well. Of course these options don't let people
get karma for participating which would be a nice motivator to have. A subreddit
would be nice...
Would the discussions really undermine the regular business of Less Wrong?
1JoshuaZ13y
Do people really care that much about karma? I mean, once one had enough karma
to post top-level posts, does it matter that much?
3Jack13y
People like making numbers go higher. It's a strange impulse, I'm not sure why
we have it. Maybe assigning everyone numbers hijacks our dominance hierarchy
instincts and we feel better about ourselves the higher our number is. For me,
it isn't the total that I like having so much as the feedback for individual
comments. I get frustrated on other blogs when I make a comment that is
informative and clever but doesn't get a response. I feel like I'm talking to
myself. Here even if no one responds I can at least learn if someone appreciated
it. If a lot of people appreciated it I feel a brief sense of accomplishment.
2[anonymous]13y
Two thoughts which have probably been beaten to death elsewhere:
1) A karma system is a good way to provide cues to which posts are worth reading
and which aren't.
2) Karma points are a big shiny status indicator, and LWers are no more immune
to status drives than anyone else is.
Thanks for that, Price is a very knowledgeable New Testament scholar. Check out
his interview at the commonsenseatheism podcast here
[http://commonsenseatheism.com/?p=8044], also covers his path to becoming a
christian atheist.
I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1/6, its very different from saying Pr(I roll a one on a fair die) = 1/6.
In the first case, my mom is either on the phone or not, but I'm just saying that I'm pretty sure she isn't. In the second, something may or may not happen, but its unlikely to happen.
Am I making any sense... or are they really the same thing and I'm over complicating?
Remember, probabilities are not inherent facts of the universe, they are
statements about how much you know. You don't have perfect knowledge of the
universe, so when I ask, "Is your mum on the phone?" you don't have the
guaranteed correct answer ready to go. You don't know with complete certainty.
But you do have some knowledge of the universe, gained through your earlier
observations of seeing your mother on the phone occasionally. So rather than
just saying "I have absolutely no idea in the slightest", you are able to say
something more useful: "It's possible, but unlikely." Probabilities are simply a
way to quantify and make precise our imperfect knowledge, so we can form more
accurate expectations of the future, and they allow us to manage and update our
beliefs in a more refined way through Bayes' Law.
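The update through Bayes' Law mentioned above can be made concrete with a toy sketch. All the numbers here are made up for illustration: a hypothetical prior of 1/6 that Mom is on the phone, plus hypothetical likelihoods for observing a busy line.

```python
# A minimal sketch of a Bayesian update, with made-up numbers.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from P(H), P(E | H), and P(E | ~H)."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

prior = 1 / 6           # P(Mom is on the phone) - our state of knowledge
p_busy_if_on = 0.95     # P(line busy | she is on the phone), hypothetical
p_busy_if_not = 0.10    # P(line busy | she isn't), hypothetical

# Observing a busy line shifts our imperfect knowledge from 1/6 to ~0.66.
posterior = bayes_update(prior, p_busy_if_on, p_busy_if_not)
print(round(posterior, 3))  # -> 0.655
```

The point of the sketch: the probability isn't a fact about Mom, it's a summary of what we know, and it moves as evidence comes in.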
3Oscar_Cunningham13y
The cases are different in the way that you describe, but the maths of the
probability is the same in each case. If you have an unseen die under a cup, and
a die that you are about to roll, then one is already determined and the other
isn't, but you'd bet at the same odds for each one to come up a six.
2Alexandros13y
I think the difference is that one event is a statement about the present which
is either presently true or not, and the other is a prediction. So you could
illustrate the difference by using the following pairs: P(Mom on phone now) vs.
P(Mom on phone tomorrow at 12:00am). In the dice case P(die just rolled but not
yet examined is 1) vs. P(die I will roll will come out 1).
I do agree with Oscar though, the maths should be the same.
1Vladimir_M13y
You might be interested in this recent discussion, if you haven't seen it
already:
http://lesswrong.com/lw/2ax/open_thread_june_2010/23fa
[http://lesswrong.com/lw/2ax/open_thread_june_2010/23fa]
1Jack13y
It looks to me like your confusion with these examples just stems from the fact
that one event is in the present and the other in the future. Are you still
confused if you make it P(Mom will be on the phone at 4 PM tomorrow) = 1/6? Or,
conversely, if you make it P(I rolled a one on the fair die that is now beneath
this cup) = 1/6?
0khafra13y
In my experience, when people say something like that it's usually a matter of
epistemic vs ontological perspective; and contrasting Laplace's Demon with
real-world agents of bounded computational power resolves the difficulty. But
that could be overkill.
0prase13y
In the second case, you either roll one on the die or not, but you are pretty
sure that it will be another number.
Really hot (but not scalded) milk tastes fantastic to me, so I've often added it to tea. I don't really care much about the health benefits of tea per se; I'm mostly curious if anyone has additional evidence one way or the other.
The surest way to resolve the controversy is to replicate the studies until it's clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk ... (read more)
It does seem odd to get such divergent results.
Bad luck could be, not just getting that 5% result which 95% confidence implies,
but some non-obvious difference in the volunteers (different genetics?), in the
tea, or in the milk.
0JoshuaZ13y
It isn't that odd. There are a lot of things that could easily change the
results. Exact temperature of tea (if one protocol involved hotter or colder
water), temperature of milk, type of milk, type of tea (one of the protocols
uses black tea, and another uses green tea). Note also that the studies are
using different metrics as well.
0NancyLebovitz13y
Nitpick: the second study included both black and green tea.
However, your general point stands, and I'll add that there are different sorts
of both black and green teas.
I'd like to hear what people think about calibrating how many ideas you voice versus how confident you are in their accuracy.
For lack of a better example, I recall Eliezer saying that new open threads should be made quarterly, once per season, but this doesn't appear to be the optimum amount. Perhaps Eliezer misjudged how much activity they would receive and how fast they would fill up, or he has a different opinion on how full a thread has to be to make it time for a new thread, but for the sake of the example let's assume that the... (read more)
Being right on group effects is difficult.
Is there a consistent path for what LW wants to be? a) rationalist site filled
up with meta topics and examples b) a) + detailed treats of some important
topics c) open to everything as long as reason is used
and so on. I personally like and profit from the discussing of akrasia methods.
But it might be detrimental to the main target of the site. Also I would very
much like to see a canon develop for knowledge that LWers generally agree upon,
including, but not limited to, the topics I currently care about myself.
Voicing ideas depends on where you are. In social settings I more and more
advise against it. Arguing/discussing is just not helpful. And if you are filled
up with weird ideas then you get kicked out, which might be bad for other goals
you have.
It would be great to have a place for any idea to be examined for right and
wrong.
1GreenRoot13y
LW is working on it [http://wiki.lesswrong.com/wiki/LessWrong_Wiki], and you can
help!
What does Fallacyzilla have on its chest? It looks like it has "A -> B, ~B,
therefore ~A" But that is valid logic. Am I misreading it or did you mean to put
"A -> B, ~A, therefore ~B"? That would be actually wrong.
I noticed that two seconds after I put it up and it's now corrected...er...incorrected. (Today I learned - my brain has that same annoying auto-correct function as Microsoft Word)
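The distinction in the exchange above is mechanical enough to check by brute force. A -> B, ~B, therefore ~A is modus tollens (valid); A -> B, ~A, therefore ~B is denying the antecedent (invalid). A tiny truth-table enumeration confirms both claims:

```python
from itertools import product

def implies(a, b):
    """Material implication: A -> B is false only when A is true and B is false."""
    return (not a) or b

# Modus tollens: in every row where A -> B and ~B hold, does ~A hold?
modus_tollens_valid = all(
    not a
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not b
)

# Denying the antecedent: in every row where A -> B and ~A hold, does ~B hold?
denying_antecedent_valid = all(
    not b
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not a
)

print(modus_tollens_valid, denying_antecedent_valid)  # -> True False
```

The counterexample the second check finds is A false, B true: the premises hold but the conclusion ~B fails, which is exactly why the corrected chest logo is "actually wrong."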
Eliezer has written about using the length of the program required to produce it, but this doesn't seem to be unique; you could have languages that are very efficient for one thing, but long-winded for another. And quantum computing seems to make it even more confusing.
The method that Eliezer is referring to is known as Solomonoff induction which
relies on programs as defined by Turing machines. Quantum computing doesn't come
into this issue since these formulations just talk about length of
specification, not efficiency of computation. There are also theorems showing
that for any two Turing-complete, well-behaved languages, the minimum sizes of
a program can't differ by more than a constant. So changing the language won't
alter the priors by more than a fixed amount. Taken together with Aumann's
Agreement Theorem, the level of disagreement about estimated probability should
go to zero in the limiting case (disclaimer: I haven't seen a proof of that last
claim, but I suspect it would be a consequence of using a Solomonoff-style
system for your priors).
How can I understand quantum physics? All explanations I've seen are either:
those that dumb things down too much, and deliver almost no knowledge; or
those that assume too much familiarity with the kind of mathematics that nobody outside physics uses, and are therefore too frustrating.
I don't think the subject is inherently difficult. For example quantum computing and quantum cryptography can be explained to anyone with basic clue and basic math skills. (example)
On the other hand I haven't seen any quantum physics explanation that did even as little a... (read more)
That's because quantum computing and quantum cryptography only use a subset of
quantum theory. Your link says, for example, that the basics of quantum
computing only require knowing how to handle 'discrete (2-state) systems and
discrete (unitary) transformations,' but a full treatment of QT has to handle
'continuously infinite systems (position eigenstates) and continuous families of
transformations (time development) that act on them.' The full QT that can deal
with these systems uses a lot more math.
I wonder if there's a general trend for people who are interested in quantum
computing and not all of QT to play down the prerequisites you need to learn QT.
Your post reminded me of a Scott Aaronson lecture
[http://www.scottaaronson.com/democritus/lec9.html], where he says
Which is technically true, but if you want to know about quark colors or spin or
exactly how uncertainty works, pushing around |1>s and |2>s and talking about
complexity classes is not going to tell you what you want to know.
To answer your question more directly, I think the best way to understand
quantum physics is to get an undergrad degree in physics from a good university,
and work as hard as you can while you're getting it. Getting a degree means you
have the physics-leaning math background needed to understand explanations of QT
that don't dumb it down.
I might be overestimating the amount of math that's necessary - I'm basing this
on sitting in on undergrad QT lectures - but I've yet to find a comprehensive QT
text that doesn't use calculus, complex numbers, and linear algebra.
0simplicio13y
Try Jonathan Allday's book "Quantum Reality: Theory and Philosophy." It is
technical enough that you get a quantitative understanding out of it, but
nothing like a full-blown textbook.
For those of you who have been following my campaign against the "It's impossible to explain this, so don't expect me to!" defense: today, the campaign takes us to a post on anti-reductionist Gene Callahan's blog.
In case he deletes the entire exchange thus far (which he's been known to do when I post), here's what's transpired (paragraphing truncated):
Me: That's not the moral I got from the story. The moral I got was: Wow, the senior monk sure sucks at describing the generating function ("rules") for his actions. Maybe he doesn't really... (read more)
Well, I haven't read any blog posts of his other than the one you linked to,
but in this specific case I cannot find what there is to be attacked.
Stories like this are used to explain, in simple terms, that some values are of
higher importance than others (a style that also exists in the not-so-extended
circle of LW [http://yudkowsky.net/rational/the-simple-truth]). The fictional
senior monk's answer would be obvious to anybody who has read up even just a
little bit on Zen and/or Buddhism; it reinforces more than it teaches anything
new.
If the blogger often holds an anti-reductionist position you'd like to counter,
I'd go for actually anti-reductionist posts of his...
0SilasBarta13y
It's true that some values are more important than others. But that wasn't the
point Gene was trying to make in the particular post that I linked. He was
trying to make (yet another) point about the futility of specifying or adhering
to specific rules, insisting that mastery of the material necessarily comes from
years of experience.
This is consistent with the theme of the recent
[http://gene-callahan.blogspot.com/2010/05/my-principle-on-principles.html]
posts
[http://gene-callahan.blogspot.com/2010/06/bernard-bosanquet-on-morality.html]
he's been making
[http://gene-callahan.blogspot.com/2010/06/edmund-burke-on-rights.htm], and his
dissertation against rationalism in politics (though the latter is not the same
as the "rationalism" we refer to here).
Whatever the merit of the point he was trying to make (which I disagree with),
he picked a bad example, and I showed why: the supposedly "tacit"
[http://lesswrong.com/lw/2ax/open_thread_june_2010/23p8], inarticulable judgment
that comes with experience was actually quite articulable, without even having
to anticipate this scenario in advance, and while only speaking in general
terms!
(I mentioned his opposition to reductionism only to give greater context to my
frequent disagreement with him (unfortunately, past debates were deleted as he
or his friend moved blogs, others because he didn't like the exchange). In this
particular exchange, you find him rejecting mechanism, specifically the idea
that humans can be described as machines following deterministic laws at all.)
Am I alone in my desire to upload as fast as possible and drive away to the asteroid belt when thinking about current FAI and CEV proposals? They take moral relativism to its extreme: let's let God decide who's right...
Not sure where I stand actually, but this seems relevant:
"If God did not exist, it would be necessary to invent him" - Voltaire
I suppose it should be added that one should do one's best to make sure the god
that's created is more Friendly than Not.
1red7513y
Yes, I cannot deny that a Friendly AI is way better than a paper-clip optimizer.
What frightens me is that when (if) CEV converges, humanity will be stuck in a
local maximum for the rest of eternity. It seems that an FAI after CEV
convergence will have adamantine morals by design (or it will look like it does,
if the FAI is unconscious). And no one will be able to talk the FAI out of this,
or no one will want to.
It seems we don't have much choice, however. Bottoms up, to the Friendly God.
1NancyLebovitz13y
If CEV can include willingness to update as more information comes in and more
processing power becomes available (and if I have anything to say about it, it
will), there should be ways out of at least some of the local maxima.
Anyone care to speculate about the possibilities of contact with alien FAIs?
Would a community of alien FAIs be likely to have a better CEV than a human-only
FAI?
0NancyLebovitz13y
If there are advantages to getting alien CEVs, but we're unlikely to contact
aliens because of light speed limits, or if we do, we're unlikely to get enough
information to construct their CEVs, would it make sense to evolve alien species
(probably in simulation)? What would the ethical problems be?
1Alicorn13y
Simulated aliens complex enough to have a CEV are complex enough to be people,
and since death is evolution's favorite tool, simulating the evolution of the
species would be causing many needless deaths.
1JGWeissman13y
The simulation could provide an afterlife.
But I don't see why we would want our CEV to include a random sample of possible
aliens. If, when we encounter aliens, we find that we care about their values,
we can run a CEV on them at that time.
3Alicorn13y
This possibility may be the strongest source of probability mass for an
afterlife for us.
1NancyLebovitz13y
Does a similar argument apply to having children if there's no high likelihood
of immortality tech?
0Alicorn13y
Depends on the context. Quite plausibly, though.
1Clippy13y
Isn't God fake?
0[anonymous]13y
Must be. If he existed, he would not have invented ape-imitating humans,
would he?
Less Wrong Rationality Quotes since April 2009, sorted by points.
This version copies the visual style and preserves the formatting of the original comments.
Here is the source code.
I already wrote a top-level comment about the original raw text version of this, but my access logs suggested that EDITs of older comments only reach a very few people. See that comment for a bit more detail.
Less Wrong Rationality Quotes since April 2009, sorted by points.
Pre-alpha, one hour of work. I plan to improve it.
EDIT: Here is the source code. 80 lines of Python. It produces raw text output; links and formatting are lost. It would be quite trivial to produce nice and spiffy HTML output.
EDIT2: I can do HTML output now. It is nice and spiffy, but it has a CSS bug. After the fifth quote it falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.
EDIT3: Bug resolved. I wrote another top-level comment about the final version, because my access logs suggested that the EDITs have reached only a very few people. Of course, an alternative explanation is that everybody who would have been interested in the HTML version already checked out the txt version. We will soon find out which explanation is the correct one.
You Are Not So Smart is a great little blog that covers many of the same topics as LessWrong, but in a much more bite-sized format and with less depth. It probably won't offer much to regular/long-time LW readers, but it's a great resource to give to friends/family who don't have the time/energy demanded by LW.
As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:
1. Is it okay to cheat on your spouse as long as (s)he never knows?
2. If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
3. If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
4. If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
5. If your husband loves you, but doesn't know the child isn't his, is it right to stay silent?
6. The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you're thinking about these puzzles, be extra careful not to write the bottom line in advance and shoehorn the "right" conclusion into a consequentialist frame. For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontology.
Nisan:
Trouble is, this is not just a philosophical matter, or a matter of personal preference, but also an important legal question. Rather than convincing cuckolded men that they should accept their humiliating lot meekly -- itself a dubious achievement, even if it were possible -- your arguments are likely to be more effective in convincing courts and legislators to force cuckolded men to support their deceitful wives and the offspring of their indiscretions, whether they want it or not. (Just google for the relevant keywords to find reports of numerous such rulings in various jurisdictions.)
Of course, this doesn't mean that your arguments shouldn't be stated clearly and discussed openly, but when you insultingly refer to opposing views as "chauvinism," you engage in aggressive, warlike language against men who end up completely screwed over in such cases. To say the least, this is not appropriate in a rational discussion.
An idea that may not stand up to more careful reflection.
Evidence shows that people have limited quantities of willpower – exercise it too much, and it gets used up. I suspect that rather than a mere mental flaw, this is a design feature of the brain.
Man is often called the social animal. We band together in groups – families, societies, civilizations – to solve our problems. Groups are valuable to have, and so we have values – altruism, generosity, loyalty – that promote group cohesion and success. However, it doesn’t pay to be COMPLETELY supportive of the group. Ultimately the goal is replication of your genes, and though being part of a group can further that goal, it can also hinder it if you take it too far (sacrificing yourself for the greater good is not adaptive behavior). So it pays to have relatively fluid group boundaries that can be created as needed, depending on which group best serves your interest. And indeed, studies show that group formation/division is the easiest thing in the world to create – even groups chosen completely at random from a larger pool will exhibit rivalry and conflict.
Despite this, it’s the group-supporting values that form the higher level valu... (read more)
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) "I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don't cooperate."
2) "I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating."
Potential top-level article, have it mostly written, let me know what you think:
Title: The hard problem of tree vibrations [tentative]
Follow-up to: this comment (Thanks Adelene Dawner!)
Related to: Disputing Definitions, Belief in the Implied Invisible
Summary: Even if you agree that trees normally make vibrations when they fall, you're still left with the problem of how you know if they make vibrations when there is no observational way to check. But this problem can be resolved by looking at the complexity of the hypothesis that no vibrations happen. Such a hypothesis is predicated on properties specific to the human mind, and therefore is extremely lengthy to specify. Lacking the type and quantity of evidence necessary to locate this hypothesis, it can be effectively ruled out.
Body: A while ago, Eliezer Yudkowsky wrote an article about the "standard" debate over a famous philosophical dilemma: "If a tree falls in a forest and no one hears it, does it make a sound?" (Call this "Question Y.") Yudkowsky wrote as if the usual interpretation was that the dilemma is in the equivocation between "sound as vibration" and "sound as auditory ... (read more)
New evidence in the Amanda Knox case
This is relevant to LW because of a previous discussion.
How many lottery tickets would you buy if the expected payoff was positive?
This is not a completely hypothetical question. For example, in the Euromillions weekly lottery, the jackpot accumulates from one week to the next until someone wins it. It is therefore in theory possible for the expected total payout to exceed the cost of tickets sold that week. Each ticket has a 1 in 76,275,360 (i.e. C(50,5)*C(9,2)) probability of winning the jackpot; multiple winners share the prize.
So, suppose someone draws your attention (since of course you don't bother following these things) to the number of weeks the jackpot has rolled over, and you do all the relevant calculations, and conclude that this week, the expected win from a €1 bet is €1.05. For simplicity, assume that the jackpot is the only prize. You are also smart enough to choose a set of numbers that look too non-random for any ordinary buyer of lottery tickets to choose them, so as to maximise your chance of having the jackpot all to yourself.
Do you buy any tickets, and if so how many?
If you judge that your utility for money is sublinear enough to make your expected gain in utilons negative, how large would the jackpot have to be at those odds before you bet?
Fiction about simulation
Question: what's your experience with stuff that seems new-agey at first look, like yoga, meditation and so on? Anything worth trying?
Case in point: I read in Feynman's book about deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany, in my case). Will try and hopefully enjoy that soon. Sadly those places are run by new-age folks that offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensorily empty space.
Less Wrong Book Club and Study Group
(This is a draft that I propose posting to the top level, with such improvements as will be offered, unless feedback suggests it is likely not to achieve its purposes. Also reply if you would be willing to co-facilitate: I'm willing to do so but backup would be nice.)
Do you want to become stronger in the way of Bayes? This post is intended for people whose understanding of Bayesian probability theory is currently between levels 0 and 1, and who are interested in developing deeper knowledge through deliberate practice.
Our... (read more)
This one came up at the recent London meetup and I'm curious what everyone here thinks:
What would happen if CEV was applied to the Baby Eaters?
My thoughts are that if you applied it to all baby eaters, including the living babies and the ones being digested, it would end up in a place that adult baby eaters would not be happy with. If you expanded it to include all babyeaters that ever existed, or that would ever exist, knowing the fate of 99% of them, the effect would be much more pronounced. So what I make of all this is that either CEV is not utility-function-neutral, or the babyeater morality is objectively unstable when aggregated.
Thoughts?
While searching for literature on "intuition", I came upon a book chapter that gives "the state of the art in moral psychology from a social-psychological perspective". This is the best summary I've seen of how morality actually works in human beings.
The authors give out the chapter for free by email request, but to avoid that trivial inconvenience, I've put up a mirror of it.
ETA: Here's the citation for future reference: Haidt, J., & Kesebir, S. (2010). Morality. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.) Handbook of Social ... (read more)
Many are calling BP evil and negligent, has there actually been any evidence of criminal activities on their part? My first guess is that we're dealing with hindsight bias. I am still casually looking into it, but I figured some others here may have already invested enough work into it to point me in the right direction.
Like any disaster of this scale, it may be possible to learn quite a bit from it, if we're willing.
I have been reading the “economic collapse” literature since I stumbled on Casey’s “Crisis Investing” in the early 1980s. They have really good arguments, and the collapses they predict never happen. In the late-90s, after reading “Crisis Investing for the Rest of the 1990s”, I sat down and tried to figure out why they were all so consistently wrong.
The conclusion I reached was that humans are fundamentally more flexible and more adaptable than the collapse-predictors' arguments allowed for, and society managed to work around all the regulations and other ... (read more)
Regrets and Motivation
Almost invariably everything is larger in your imagination than in real life, both good and bad, the consequences of mistakes loom worse, and the pleasure of gains looks better. Reality is humdrum compared to our imaginations. It is our imagined futures that get us off our butts to actually accomplish something.
And the fact that what we do accomplish is done in the humdrum, real world, means it can never measure up to our imagined accomplishments, hence regrets. Because we imagine that if we had done something else it could have measu... (read more)
The Science of Gaydar: http://nymag.com/print/?/news/features/33520/
How To Destroy A Black Hole
http://www.technologyreview.com/blog/arxiv/25316/
Inspired by Chapter 24 of Methods of Rationality, but not a spoiler: If the evolution of human intelligence was driven by competition between humans, why aren't there a lot of intelligent species?
About CEV: Am I correct that Eliezer's main goal would be to find the one utility function for all humans? Or is it equally plausible to assume that some important values cannot be extrapolated coherently, and that a Seed-AI would therefore provide several results clustered around some groups of people?
[edit]Reading helps. This he has actually discussed, in sufficient detail, I think.[/edit]
Let's get this thread going:
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are believed by humans and are true. (of course there exist some that are true and not yet believed by humans.)
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
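For concreteness, the "bump" can be cast as a Bayes factor on the prior odds. Every number below is an illustrative assumption, not a claim about the actual ratio:

```python
# Treat "some people believe X" as evidence and multiply the prior odds
# by a likelihood ratio P(believed | true) / P(believed | false).
# The 10x factor and 1% prior are assumptions chosen for illustration.

def update(prior, bayes_factor):
    """Posterior probability after multiplying prior odds by the factor."""
    odds = prior / (1.0 - prior) * bayes_factor
    return odds / (1.0 + odds)

# If believed ideas were, say, 10x likelier to be true than unbelieved
# ones, a 1% prior would move to roughly 9.2%:
posterior = update(0.01, 10.0)
```

The interesting empirical question the comment raises is what that likelihood ratio actually is for "believed by some group of humans."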
An interesting article criticizing speculation about social trends (specifically teen sex) in the absence of statistical evidence.
Saw this over on Bruce Schneier's blog, it seemed worth reposting here. Wharton’s “Quake” Simulation Game Shows Why Humans Do Such A Poor Job Planning For & Learning From Catastrophes (link is to summary, not original article, as original article is a bit redundant). Not so sure how appropriate the "learning from" part of the title is, as they don't seem to mention people playing the game more than once, but still quite interesting.
What solution do people prefer to Pascal's Mugging? I know of three approaches:
1) Handing over the money is the right thing to do exactly as the calculation might indicate.
2) Debiasing against overconfidence shouldn't mean having any confidence in what others believe, but just reducing our own confidence; thus the expected gain if we're wrong is found by drawing from a broader reference class, like "offers from a stranger".
3) The calculation is correct, but we must pre-commit to not paying under such circumstances in order not to be gamed.
What have I left out?
Because it was used somewhere, I calculated my own weight's worth in gold - it is about 3.5 million EUR. In silver you can get me for 50,000 EUR. The Mythbusters recently built a lead balloon and had it fly. Some proverbs don't hold up to reality and/or engineering.
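A quick back-of-the-envelope check of those figures (the body weight and per-gram metal prices below are assumptions reverse-engineered to match, not from the comment - plug in current prices):

```python
# Rough sanity check (assumed figures: 80 kg body weight, gold at
# ~44 EUR/g and silver at ~0.63 EUR/g).
weight_g = 80 * 1000                 # body weight in grams

gold_value = weight_g * 44.0         # -> roughly 3.5 million EUR
silver_value = weight_g * 0.63       # -> roughly 50,000 EUR
```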
The number of heart attacks has fallen since England imposed a smoking ban
http://www.economist.com/node/16333351?story_id=16333351&fsrc=scn/tw/te/rss/pe
In the Singularity Movement, Humans Are So Yesterday (long Singularity article in this Sunday's NY Times; it isn't very good)
http://news.ycombinator.com/item?id=1426386
Heuristics and biases in charity
http://www.sas.upenn.edu/~baron/papers/charity.pdf (I considered making this link as a top-level post.)
I've recently begun downvoting comments that are at -2 rating regardless of my feelings about them. I instituted this policy after observing that a significant number of comments reach -2 but fail to be pushed over to -3, which I'm attributing to the threshold being too much of a psychological barrier for many people to penetrate; they don't want to be 'the one to push the button'. This is an extension of my RL policy of taking 'the last' of something laid out for communal use (coffee, donuts, cups, etc.). If the comment thread really needs to be visible, ... (read more)
Does countersignaling actually happen? Give me examples.
I think most claims of countersignaling are actually ordinary signaling, where the costly signal is foregoing another group and the trait being signaled is loyalty to the first group. Countersignaling is where foregoing the standard signal sends a stronger positive message of the same trait to the usual recipients.
My recent comment on Reddit reminded me of WrongTomorrow.com - a site that was mentioned briefly here a while ago, but which I haven't seen much since.
Try it out, guys! LongBets and PredictionBook are good, but they're their own niche; LongBets won't help you with pundits who don't use it, and PredictionBook is aimed at personal use. If you want to track current pundits, WrongTomorrow seems like the best bet.
Anyone know how to defeat the availability heuristic? Put another way, does anyone have advice on how to deal with incoherent or insane propositions while losing as little personal sanity as possible? Is there such a thing as "safety gloves" for dangerous memes?
I'm asking because I'm currently studying for the California Bar exam, which requires me to memorize hundreds of pages of legal rules, together with their so-called justifications. Of course, in many cases the "justifications" are incoherent, Orwellian doublespeak, and/or tend... (read more)
I found an interesting paper on Arxiv earlier today, by the name of Closed timelike curves via post-selection: theory and experimental demonstration.
It promises such lovely possibilities as quick solutions to NP-complete problems, and I'm not entirely sure the mechanism couldn't also be used to do arbitrary amounts of computation in finite time. Certainly worth a read.
However, I don't understand quantum mechanics well enough to tell how sane the paper is, or what the limits of what they've discovered are. I'm hoping one of you does.
Clippy-related: The Paper Clips Project is run by a school trying to overcome scope insensitivity by representing the eleven million people killed in the Holocaust with one paper clip per victim.
Maybe this has been discussed before -- if so, please just answer with a link.
Has anyone considered the possibility that the only friendly AI may be one that commits suicide?
There's great diversity in human values, but all of them have in common that they take as given the limitations of Homo sapiens. In particular, the fact that each Homo sapiens has roughly equal physical and mental capacities to all other Homo sapiens. We have developed diverse systems of rules for interpersonal behavior, but all of them are built for dealing with groups of people lik... (read more)
We've talked about a book club before but did anyone ever actually succeed in starting one? Since it is summer now I figure a few more of us might have some free time. Are people actually interested?
OpenPCR: DNA amplification for anyone
http://www.thinkgene.com/openpcr-dna-amplification-for-anyone/
Some clips on the dark-side epistemology of history done by Christian apologists by Robert M Price, who describes himself as a Christian Atheist.
Not sure how worthwhile Price is to listen to in general though.
A question about Bayesian reasoning:
I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1/6, it's very different from saying Pr(I roll a one on a fair die) = 1/6.
In the first case, my mom is either on the phone or not, but I'm just saying that I'm pretty sure she isn't. In the second, something may or may not happen, but it's unlikely to happen.
Am I making any sense... or are they really the same thing and I'm overcomplicating?
Supposedly (actual study) milk reduces catechin levels in the bloodstream.
Other research says: "does not!"
Really hot (but not scalded) milk tastes fantastic to me, so I've often added it to tea. I don't really care much about the health benefits of tea per se; I'm mostly curious if anyone has additional evidence one way or the other.
The surest way to resolve the controversy is to replicate the studies until it's clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk ... (read more)
I'd like to hear what people think about calibrating how many ideas you voice versus how confident you are in their accuracy.
For lack of a better example, I recall Eliezer saying that new open threads should be made quarterly, once per season, but this doesn't appear to be the optimum amount. Perhaps Eliezer misjudged how much activity they would receive and how fast they would fill up, or he has a different opinion on how full a thread has to be to make it time for a new thread, but for the sake of the example let's assume that Eliezer was wrong and that the... (read more)
Rather than waste time doing both your cannon request and Roko's Fallacyzilla request, I just combined them into one picture of the Less Wrong Cannon attacking Fallacyzilla.
...now someone take Photoshop away from me, please.
I noticed that two seconds after I put it up and it's now corrected...er...incorrected. (Today I learned - my brain has that same annoying auto-correct function as Microsoft Word)
Are there cases where Occam's razor results in a tie, or is there proof that it always yields a single solution?
Do we have a unique method for generating priors?
Eliezer has written about using the length of the program required to produce it, but this doesn't seem to be unique; you could have languages that are very efficient for one thing, but long-winded for another. And quantum computing seems to make it even more confusing.
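A toy illustration of that language-dependence, using off-the-shelf compressors as stand-ins for description languages (not how algorithmic priors are actually defined - just an analogy):

```python
import bz2
import zlib

# Two "description languages" (compressors) assign the same data
# different description lengths, so a shortest-description prior
# depends on the language chosen. The invariance theorem only bounds
# the disagreement between two universal languages by a constant.
data = b"abcabcabc" * 100

len_zlib = len(zlib.compress(data))
len_bz2 = len(bz2.compress(data))

# Both compress the repetitive data well, but to different lengths.
```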
How to write a "Malcolm Gladwell Bestseller" (an MGB)
http://blog.jgc.org/2010/06/how-to-write-malcolm-gladwell.html
How can I understand quantum physics? All explanations I've seen are either:
I don't think the subject is inherently difficult. For example quantum computing and quantum cryptography can be explained to anyone with basic clue and basic math skills. (example)
On the other hand I haven't seen any quantum physics explanation that did even as little a... (read more)
Blog about common cognitive biases - one post per bias:
http://youarenotsosmart.com/