Sharing my Christmas (totally non-supernatural) miracle:
My theist girlfriend on Christmas Eve: "For the first time ever I went to mass and thought it was ridiculous. I was just like, this is nuts. The priest was like 'oh, we have to convert the rest of the world, the baby Jesus spoke to us as an infant without speaking, etc.' I almost laughed."
This made me say "Awwwwwwww..."
Did LW play a part, or was she just browsing the Internet?
3Jack13y
Well, I think I was the first vocal atheist she had ever met, so arguments with
me, and me making fun of superstition while not being a bad person, were
probably crucial. Some Less Wrong stuff probably got to her through me, too. I
should find something to introduce her to the site, though I doubt she would
ever spend a lot of time here.
I'm looking for a particular fallacy or bias that I can't find on any list.
Specifically, this is when people say "one more can't hurt": a person throwing an extra piece of garbage on an already littered sidewalk, a gambler who has lost nearly everything deciding to bet away the rest, a person in bad health continuing the behavior that caused the problem, etc. I can think of dozens of examples, but I can't find a name. I would expect it to be called the "Lost Cause Fallacy" or the "Fallacy of Futility" or something, but neither seems to be recognized anywhere. Does this have a standard name that I don't know, or is it so obvious that no one ever bothered to name it?
Your first example sounds related to the broken window theory
[http://en.wikipedia.org/wiki/Fixing_Broken_Windows], but I've never seen a name
for the underlying bias. (The broken window fallacy
[http://en.wikipedia.org/wiki/Parable_of_the_broken_window] is something else
altogether.)
2Eliezer Yudkowsky13y
Bee-sting theory of poverty is the closest I've heard. You're right, this is
real and deserves a name, but I don't know what it would be.
0Sniffnoy13y
This seems like a special case of the more general "just one can't hurt"
(whatever the current level) way of thinking. I don't know any name for this but
I guess you could call it something like the "non-Archimedean bias"?
0Zack_M_Davis13y
"Sunk cost fallacy"
4Eliezer Yudkowsky13y
No, that's different. That's pursuing a reward so as to not acknowledge a loss.
This is ignoring a penalty because of previous losses.
0orthonormal13y
Informally, "throwing good money after bad"? I agree that this is a real and
interesting phenomenon.
What are the implications for FAI theory of Robin's claim that most of what we do is really status-seeking? If an FAI were to try to extract or extrapolate our values, would it mostly end up with "status" as the answer and see our detailed interests, such as charity or curiosity about decision theory, as mere instrumental values?
I think it's kinda like inclusive genetic fitness: It's the reason you do
things, but you're (usually) not consciously striving for an increased amount of
it. So I don't think it could be called a terminal value, as such...
5Wei_Dai13y
I had thought of that, but, if you consider a typical human mind as a whole
instead of just the conscious part, it seems clear that it is striving for
increased status. The same cannot be said for inclusive fitness, or at least the
number of people who do not care about having higher status seems much lower
than the number of people who do not care about having more offspring.
I think one of Robin's ideas is that unconscious preferences, not just conscious
ones, should matter in ethical considerations. Even if you disagree with that,
how do you tell an FAI how to distinguish between conscious preferences and
unconscious ones?
0dlrlw6y
no, no, no, you should be comparing the number of people who want to have great
sex with a hot babe with the number of people who want to gain higher status.
The answer for most everyone would be yes!! both! Because both were selected for
by increased inclusive fitness.
4wedrifid13y
If it went that far it would also go the next step. It would end up with
"getting laid".
The conversion techniques
[http://changingminds.org/techniques/conversion/conversion_techniques.htm] page
is fascinating. I'll put this to good use in further spreading the word of
Bayes.
Does anyone here think they're particularly good at introspection or modeling themselves, or have a method for training up these skills? It seems like it would be really useful to understand more about the true causes of my behavior, so I can figure out what conditions lead to me being good and what conditions lead to me behaving poorly, and then deliberately set up good conditions. But whenever I try to analyze my behavior, I just hit a brick wall---it all just feels like I chose to do what I did out of my magical free will. Which doesn't explain anything... (read more)
My suggestion is focusing your introspection on working out what you really
want. That is, keep investigating what you really want until such time as the
phrases 'me behaving poorly' and 'being good' sound like something in a foreign
language, that you can understand only by translating.
You may be thinking "clearly something has gone horribly wrong with my brain"
but your brain is thinking "Something is clearly wrong with my consciousness. It
is trying to make me do all this crazy shit. Like the sort of stuff we're
supposed to pretend we want because that is what people 'Should' want.
Consciousnesses are the kind of things that go around believing in God and
sexual fidelity. That's why I'm in charge, not him. But now he's thinking he's
clever and is going to find ways to manipulate me into compliance. F@#@ that
s#!$. Who does he think he is?"
When trying to work effectively with people empathy is critical. You need to be
able to understand what they want and be able to work with each other for mutual
benefit. Use the same principle with yourself. Once your brain believes you
actually know what it (i.e., you) wants and are on approximately the same page, it
may well start trusting you and not feel obliged to thwart your influence. Then
you can find a compromise that allows you to get that 'simple thing' you want
without your instincts feeling that some other priority has been threatened.
3Alicorn13y
People who watch me talking about myself sometimes say I'm good at
introspection, but I think about half of what I do is making up superstitions so
I have something doable to trick myself into making some other thing, previously
undoable, doable. ("Clearly, the only reason I haven't written my paper is that
I haven't had a glass of hot chocolate, when I'm cold and thirsty and want
refined sugar." Then I go get a cup of cocoa. Then I write my paper. I have to
wrap up the need for cocoa in a fair amount of pseudoscience for this to work.)
This is very effective at mood maintenance for me - I was on antidepressants and
in therapy for a decade as a child, and quit both cold turkey in favor of
methods like this and am fine - but I don't know which (if, heck, any) of my
conclusions that I come to this way are "really true" (that is, if the hot
chocolate is a placebo or not). They're just things that pop into my head when I
think about what my brain might need from me before it will give back in the
form of behaving itself.
You have to take care of your brain for it to be able to take care of you. If it
won't tell you what it wants, you have to guess. (Or have your iron levels
[http://lesswrong.com/lw/15w/experiential_pica/] checked :P)
3whpearson13y
I tend to think of my brain as a thing with certain needs. Companionship,
recognition, physical contact, novelty, etc. Activities that provide these tend
to persist. Figure out what your dysfunctional actions provide you in terms of
your needs. Then try to find activities that provide these but aren't so bad,
and replace the dysfunctional bits. Also change the situation you are in
so that the dysfunctional default actions don't automatically trigger.
My dream is to find a group of like-minded people that I can socialise and work
with. SIAI is very tempting in that regard.
1orthonormal13y
One thing that has worked for me lately is the following: whenever I do
something and don't really know why I did it (or am uncomfortable with the
validity of my rationalizations), I try and think of the action in Outside View
terms. I think of (or better, write out) a short external description of what I
did, in its most basic form, and its probable consequences. Then I ask what goal
this action looks optimized for; it's usually something pretty simple, but which
I might not be happy consciously acknowledging (more selfish than usual, etc).
That being said, even more helpful than this has been discussing my actions with
a fairly rational friend who has my permission to analyze it and hypothesize
freely. When they come up with a hypothesis that I don't like, but which I have
no good counterarguments against, we've usually hit paydirt.
I don't think of this as something wrong with my brain, so much as it
functioning properly in maintaining a conscious/unconscious firewall, even
though this isn't as adaptive in today's world as it once was. It's really
helped me in introspection to not judge myself, to not get angry with my
revealed preferences.
Thanks for posting this (I didn't know the videos were up), though you've posted
it in the December 2009 open thread.
(The current open thread
[http://lesswrong.com/r/discussion/lw/38j/less_wrong_open_thread_december_2010/]
is in the Discussion section, though this may be worth its own Discussion post.)
1timtyler12y
Oops - perhaps someone else can harvest the karma for spreading it, then...
Just thought I'd mention this: as a child, I detested praise. (I'm guessing it was too strong a stimulus, along with such things as asymmetry, time being a factor in anything, and a mildly loud noise ceasing.) I wonder how it's affected my overall development.
Incidentally, my childhood dislike of asymmetry led me to invent the Thue-Morse sequence, on the grounds that every pattern ought to be followed by a reversal of that pattern.
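The rule described here (every pattern followed by its "reversal", where the reversal of a bit pattern means flipping each bit) is the standard construction of the Thue-Morse sequence. A minimal Python sketch of that construction (the function name is my own, just for illustration):

```python
def thue_morse(n_bits):
    """First n_bits of the Thue-Morse sequence, built by repeatedly
    appending the bitwise complement of everything generated so far."""
    seq = [0]
    while len(seq) < n_bits:
        seq += [1 - b for b in seq]  # append the "reversal" (complement)
    return seq[:n_bits]

print(thue_morse(16))  # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```

Each doubling step is exactly "pattern, then reversed pattern": 0 → 01 → 0110 → 01101001, and so on.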
Can I interpret that as an invitation to send you a friend request on Facebook?
>.>
1Zack_M_Davis13y
Um, sure [http://www.facebook.com/zmdavis]?
2anonym13y
Fascinating. As a child, I also detested praise, and I have always had something
bordering on an obsession for symmetry and an aversion to asymmetry.
I hadn't heard of the Thue-Morse sequence until now, but it is quite similar to
a sequence I came up with as a child and have tapped out (0 for left
hand/foot/leg, 1 for right hand/foot/leg) or silently hummed (or just thought)
whenever I was bored or was nervous.
My sequence is:
[0, 1, 1, 0] [1001, 0110, 0110, 1001] [0110 1001 1001 0110, 1001 0110 0110 1001,
1001 0110 0110 1001, 0110 1001 1001 0110] ...
[commas and brackets added to make the pattern obvious]
As a kid, I would routinely get the pattern up into the thousands as I passed
the time imagining sounds or lights very quickly going off on either the left
(0) or right (1) side.
2[anonymous]13y
Every finite subsequence of your sequence is also a subsequence of the
Thue-Morse sequence and vice versa. So in a sense, each is a shifted version of
the other; it's just that they're shifted infinitely much in a way that's
difficult to define.
0Yorick_Newsome13y
I spent much of my childhood obsessing over symmetry. At one point I wanted to
be a millionaire solely so I could buy a mansion, because I had never seen a
symmetrical suburban house.
I wrote a short story with something of a transhumanism theme. People can read it here. Actionable feedback welcome; it's still subject to revision.
Note: The protagonist's name is "Key". Key, and one other character, receive Spivak pronouns, which can make either Key's name or eir pronouns look like some kind of typo or formatting error if you don't know it's coming. If this annoys enough people, I may change Key's name or switch to a different genderless pronoun system. I'm curious if anyone finds that they think of Key and the other Spivak... (read more)
I love the new gloss on "What do you want to be when you grow up?"
Don't. Spivak is easy to remember because it's just they/them/their with the
"th" lopped off. Nonstandard pronouns are difficult enough already without
trying to get people to remember sie and hir.
0anonym13y
Totally agreed. Spivak pronouns are the only ones I've seen that took almost no
effort to get used to, for exactly the reason you mention.
1Blueberry13y
Looks like I'm in the minority for reading Key as slightly male. I didn't get a
gender for Trellis. I also read the librarian as female, which I'm kind of sad
about.
I loved the story, found it very touching, and would like to know more about the
world it's in. One thing that confused me: the librarian's comments to Key
suggested that some actual information was withheld from even the highest levels
available to "civilians". So has someone discovered immortality, but some ruling
council is keeping it hidden? Or is it just that they're blocking research into
it, but not hiding any actual information? Are they hiding the very idea of it?
And what's the librarian really up to?
Were you inspired by Nick Bostrom's "Fable of the Dragon"? It also reminded me a
little of Lois Lowry's "The Giver".
Thanks so much for sharing it with us!
0Alicorn13y
Lace is female - why are you sad about reading her that way?
Yaaaay! I'll answer any setting questions you care to pose :)
Nobody has discovered it yet. The communities in which Key's ilk live suppress
the notion of even looking for it; in the rest of the world they're working on
it in a few places but aren't making much progress. The librarian isn't up to a
whole lot; if she were very dedicated to finding out how to be immortal she'd
have ditched the community years ago - she just has a few ideas that aren't like
what the community leaders would like her to have and took enough of a shine to
Key that she wanted to share them with em. I have read both "Fable of the
Dragon" and "The Giver" - the former I loved, the latter I loved until I re-read
it with a more mature understanding of worldbuilding, but I didn't think of
either consciously when writing.
You are most welcome for the sharing of the story. Have a look at my other stuff
[http://alicorn.elcenia.com], if you are so inclined :)
1LucasSloan13y
For me, both of the characters appeared female.
Sbe zr gur fgbel fbeg bs oebxr qbja whfg nf Xrl'f sevraq jnf xvyyrq. Vg frrzrq
gbb fbba vagb gur aneengvir gb znxr fhpu n znwbe punatr. Nyfb, jvgu erfcrpg gb
gur zbeny, vg frrzrq vafhssvpvragyl fubja gung lbh ernyyl ertneq cnva nf
haqrfvenoyr - vg frrzrq nf gubhtu lbh pbhyq or fnlvat fbzrguvat nybat gur yvarf
bs "gurl whfg qba'g haqrefgnaq." Orpnhfr bs gung nf jryy nf gur ehfurq srry bs
gur raqvat, vg fbeg bs pnzr bss nf yrff rzbgvbanyyl rssrpgvir guna vg pbhyq.
1ChrisPine13y
I liked it. :)
Part of the problem that I had, though, was the believability of the kids: kids
don't really talk like that: "which was kind of not helpful in the not confusing
me department, so anyway"... or, in an emotionally painful situation:
Key looked suspiciously at the librarian. "You sound like you're trying not to
say something."
Improbably astute, followed by not seeming to get the kind of obvious moral of
the story. At times it felt like it was trying to be a story for older kids, and
at other times like it was for adults.
The gender issue didn't seem to add anything to the story, but it only bothered
me at the beginning of the story. Then I got used to it. (But if it doesn't add
to the story, and takes getting used to... perhaps it shouldn't be there.)
Anyway, I enjoyed it, and thought it was a solid draft.
1Blueberry13y
I actually have to disagree with this. I didn't think Key was "improbably
astute". Key is pretty clearly an unusual child (at least, that's how I read
em). Also, the librarian was pretty clearly being elliptical and a little
patronizing, and in my experience kids are pretty sensitive to being patronized.
So it didn't strike me as unbelievable that Key would call the librarian out
like that.
-1Alicorn13y
You've hit on one of my writing weaknesses: I have a ton of trouble writing
people who are just plain not very bright or not very mature. I have a number of
characters through whom I work on this weakness in (unpublished portions of)
Elcenia, but I decided to let Key be as smart as I'm inclined to write normally for
someone of eir age - my top priority here was finishing the darn thing, since
this is only the third short story I can actually claim to have completed and I
consider that a bigger problem.
1[anonymous]13y
Gur qrngu qvqa'g srry irel qrngu-yvxr. Vg frrzrq yvxr gur rzbgvba fheebhaqvat vg
jnf xvaq bs pbzcerffrq vagb bar yvggyr ahttrg gung V cerggl zhpu fxvzzrq bire. V
jnf nyfb rkcrpgvat n jbeyq jvgubhg qrngu, juvpu yrsg zr fhecevfrq. Va gur erny
jbeyq, qrngu vf bsgra n fhecevfr, ohg yvxr va gur erny jbeyq, fhqqra qrngu va
svpgvba yrnirf hf jvgu n srryvat bs qvforyvrs. Lbh pbhyq unir orra uvagvat ng
gung qrngu sebz gur svefg yvar.
Nyfb, lbh xvaq bs qevir evtug cnfg gur cneg nobhg birecbchyngvba. V guvax
birecbchyngvba vf zl zbgure'f znva bowrpgvba gb pelbavpf.
0gwern13y
Alicorn goes right past it probably because she's read a fair bit of cryonics
literature herself and has seen the many suggestions (hence the librarian's
invitation to think of 'a dozen solutions'), and it's not the major issue
anyway.
1rwallace13y
You traded off a lot of readability for the device of making the protagonist's
gender indeterminate. Was this intended to serve some literary purpose that I'm
missing? On the whole the story didn't seem to be about gender.
I also have to second DanArmak's comment that if there was an overall point, I'm
missing that also.
1Alicorn13y
Key's gender is not indeterminate. Ey is actually genderless. I'm sorry if I
didn't make that clear - there's a bit about it in eir second conversation with
Trellis.
3Liron13y
Your gender pronouns just sapped 1% of my daily focusing ability.
0gwern13y
I thought it was pretty clear. The paragraph about 'boy or girl' made it
screamingly obvious to me, even if the Spivak or general gender-indeterminacy of
the kids hadn't suggested it.
0Kaj_Sotala13y
Finally got around to reading the story. I liked it, and finishing it gave me a
wild version of that "whoa" reaction you get when you've been doing something
emotionally immersive and then switch to some entirely different activity.
I read Key as mostly genderless, possibly a bit female because the name sounded
feminine to me. Trellis, maybe slightly male, though that may also have been
from me afterwards reading the comments about Trellis feeling slightly male and
those contaminating the memory.
I do have to admit that the genderless pronouns were a bit distracting. I think
it was the very fact that they were shortened versions of "real" pronouns that
felt so distracting - my mind kept assuming that it had misread them and tried
to reread. In contrast, I never had an issue with Egan's use of ve / ver / vis /
vis / verself.
0Larks13y
I got used to the Spivak after a while, and while it'd be optimal for an
audience used to it, it did detract a little at first. On the whole I'd say it's
necessary, though (if you were going to use a gendered pronoun, I'd use female
ones).
I read Key as mainly female, and Trellis as more male- it would be interesting
to know how readers' perceptions correlated with their own gender.
The children seemed a little mature, but I thought they'd had a lot better
education, or genetic enhancement or something. I think spending a few more
sentences on the important events would be good though- otherwise one can simply
miss them.
I think you were right to just hint at the backstory- guessing is always fun,
and my impression of the world was very similar to that which you gave in one of
the comments.
0arundelo13y
Great story!
I kept thinking of Key as female. This may be because I saw some comments here
that saw em as female, or because I know that you're female.
The other character I didn't assign a sex to.
0NancyLebovitz13y
I enjoyed the story-- it was an interesting world. By the end of the story, you
were preaching to a choir I'm in.
None of the characters seemed strongly gendered to me.
I was expecting opposition to anesthesia to include religiously based opposition
to anesthesia for childbirth, and for the whole idea of religion to come as a
shock. On the other hand, this might be cliched thinking on my part. Do they
have religion?
The neuro couldn't be limited to considered reactions-- what about the very
useful fast reflexive reaction to pain?
Your other two story links didn't open.
0Alicorn13y
Religion hasn't died out in this setting, although it's uncommon in Key's
society specifically. Religion was a factor in historical opposition to
anesthesia (I'm not sure of the role it plays in modern leeriness about
painkillers during childbirth) but bringing it up in more detail would have
added a dimension to the story I didn't think it needed.
Reflexes are intact. The neuro just translates the quale into a bare awareness
that damage has occurred. (I don't know about everyone, but if I accidentally
poke a hot burner on the stove, my hand is a foot away before I consciously
register any pain. The neuro doesn't interfere with that.)
I will check the links and see about fixing them; if necessary, I'll HTMLify
those stories too. ETA: Fixed; they should be downloadable now.
0Richard_Kennaway13y
At 3800 words, it's too long for the back page of Nature, but a shorter version
might do very well there.
0Eliezer Yudkowsky13y
Replied in PM, in case you didn't notice (click your envelope).
PS: My mind didn't assign a sex to Key. Worked with me, anyway.
0Jack13y
Cool. I also couldn't help reading Key as female. My hypothesis would be that
people generally have a hard time writing characters of the opposite sex. Your
gender may have leaked in. The Spivak pronouns were initially very distracting
but were okay after a couple of paragraphs. If you decide to change it, Le Guin
pretty successfully wrote a whole planet of androgynes using masculine pronouns.
But that might not work in a short story without exposition.
1DanArmak13y
In Left Hand of Darkness, the narrator is an offplanet visitor and the only real
male in the setting. He starts his tale by explicitly admitting he can't
understand or accept the locals' sexual selves (they become male or female for
short periods of time, a bit like estrus). He has to psychologically assign them
sexes, but he can't handle a female-only society, so he treats them all as
males. There are plot points where he fails to respond appropriately to the
explicit feminine side of locals.
This is all very interesting and I liked the novel, but it's the opposite of
passing androgynes off as normal in writing a tale. Pronouns are the least of
your troubles :-)
0NancyLebovitz13y
Later, LeGuin said that she was no longer satisfied with the male pronouns for
the Gethenians.
0Jack13y
Very good points. It has been a while since I read it.
1CronoDAS13y
I think Key's apparent femininity might come from a lack of arrogance. Compare
Key to, say, Calvin from "Calvin and Hobbes". Key is extremely polite, willing
to admit to ignorance, and seems to project a bit of submissiveness. Also, Key
doesn't demonstrate very much anger over Trellis's death.
I probably wouldn't have given the subject a second thought, though, if it
wasn't brought up for discussion here.
0Alicorn13y
Everyone's talking about Key - did anyone get an impression from Trellis?
2CronoDAS13y
If I had to put a gender on Trellis, I'd say that Trellis was more masculine
than feminine. (More like Calvin than like Suzie.) Overall, though, it's fairly
gender-neutral writing.
0gwern13y
I too got the 'dull sidekick' vibe, and since dull sidekicks are almost always
male these days...
0Alicorn13y
I do typically have an easier time writing female characters than male ones. I
probably wouldn't have tried to write a story with genderless (human) adults,
but in children I figured I could probably manage it. (I've done some genderless
nonhuman adults before and I think I pulled them off.)
0DanArmak13y
The main feeling I came away with is... so what? It didn't convey any ideas or
viewpoints that were new to me; it didn't have any surprising twists or
revelations that informed earlier happenings. What is the target audience?
The Spivak pronouns are nice; even though I don't remember encountering them
before I feel I could get used to them easily in writing, so (I hope) a
transition to general use isn't impossible.
The general feeling I got from Key is female. I honestly don't know why that is.
Possibly because the only other use
[http://en.wikipedia.org/wiki/Key_the_Metal_Idol] of Key as a personal name that
comes to mind is a female child? Objectively, the society depicted is different
enough from any contemporary human society to make male vs. female differences
(among children) seem small in comparison.
0Alicorn13y
Target audience - beats me, really. It's kind of set up to preach to the choir,
in terms of the "moral". I wrote it because I was pretty sure I could finish it
(and I did), and I sorely need to learn to finish stories; I shared it because I
compulsively share anything I think is remotely decent.
Hypotheses: I myself am female. Lace, the only gendered character with a
speaking role, is female. Key bakes cupcakes at one point in the story and a
stereotype is at work. (I had never heard of Key the Metal Idol.)
0DanArmak13y
Could be. I honestly don't know. I didn't even consciously remember Key baking
cupcakes by the time the story ended and I asked myself what might have
influenced me.
I also had the feeling that the story wasn't really about Key; ey just serves as
an expository device. Ey has no unpredictable or even unusual reactions to
anything that would establish individuality. The setting should then draw the
most interest, and it didn't do enough of that, because it was too vague. What is
the government? How does it decide and enforce allowed research, and allowed
self-modification? How does sex-choosing work? What is the society like? Is Key
forced at a certain age to be in some regime, like our schools? If not, are
there any limits on what Key or her parents do with her life?
As it is, the story presented a very few loosely connected facts about Key's
world, and that lack of detail is one reason why these facts weren't
interesting: I can easily imagine some world with those properties.
-2Alicorn13y
Small communities, mostly physically isolated from each other, but
informationally connected and centrally administered. Basically meritocratic in
structure - pass enough of the tests and you can work for the gubmint.
Virtually all sophisticated equipment is communally owned and equipped with
government-designed protocols. Key goes to the library for eir computer time
because ey doesn't have anything more sophisticated than a toaster in eir house.
This severely limits how much someone could autonomously self-modify, especially
when the information about how to try it is also severely limited. The
inconveniences are somewhat trivial, but you know what they say about trivial
inconveniences. If someone got far enough to be breaking rules regularly, they'd
make people uncomfortable and be asked to leave.
One passes some tests, which most people manage between the ages of thirteen and
sixteen, and then goes to the doctor and gets some hormones and some surgical
intervention to be male or female (or some brand of "both", and some people go
on as "neither" indefinitely, but those are rarer).
Too broad for me to answer - can you be more specific?
Education is usually some combination of self-directed and parent-encouraged.
Key's particularly autonomous and eir mother doesn't intervene much. If Key did
not want to learn anything, eir mother could try to make em, but the government
would not help. If Key's mother did not want em to learn anything and Key did,
it would be unlawful for her to try to stop em. There are limits in the sense
that Key may not grow up to be a serial killer, but assuming all the necessary
tests get passed, ey can do anything legal ey wants.
Thank you for the questions - it's very useful to know what questions people
have left after I present a setting! My natural inclination is massive
data-dump. This is an experiment in leaving more unsaid, and I appreciate your
input on what should have been dolloped back in.
2DanArmak13y
Reminds me of old China...
That naturally makes me curious about how they got there. How does a government,
even though unelected, go about impounding or destroying all privately owned
modern technology? What enforcement powers have they got?
Of course there could be any number of uninteresting answers, like 'they've got
a singleton' or 'they're ruled by an AI that moved all of humanity into a
simulation world it built from scratch'.
And once there, with absolute control over all communications and technology,
it's conceivable to run a long-term society with all change (incl. scientific or
technological progress) being centrally controlled and vetoed. Still, humans
have a strong drive toward economic competition, and science & technology
translate into competitive power. Historically, eliminating private economic
enterprise takes enormous effort - the big Communist regimes in the USSR, and I
expect in China as well, never got anywhere near success on that front. What do
these contented pain-free people actually do with their time?
0Alicorn13y
It was never there in the first place. The first inhabitants of these
communities (which don't include the whole planet; I imagine there are a double
handful of them on most continents - the neuros and the genderless kids are more
or less universal, though) were volunteers who, prior to joining under the
auspices of a rich eccentric individual, were very poor and didn't have their
own personal electronics. There was nothing to take, and joining was an
improvement because it came with access to the communal resources.
Nope. No AI.
What they like. They go places, look at things, read stuff, listen to music,
hang out with their friends. Most of them have jobs. I find it a little puzzling
that you have trouble thinking of how one could fill one's time without
significant economic competition.
2DanArmak13y
Oh. So these communities, and Key's life, are extremely atypical of that world's
humanity as a whole. That's something worth stating because the story doesn't
even hint at it.
I'd be interested in hearing about how they handle telling young people about
the wider world. How do they handle people who want to go out and live there and
who come back one day? How do they stop the governments of the nations where
they actually live from enforcing laws locally? Do these higher-level
governments not have any such laws?
Many people can. I just don't find it convincing that everyone could without
there being quite a few unsatisfied people around.
2Zack_M_Davis13y
The exchange above reminds me of Robin Hanson's criticism
[http://www.ofb.net/~phoenix/polymath/polyarc/0448.html] of the social science
in Greg Egan's works.
0Kaj_Sotala13y
I disagree: it doesn't matter for the story whether the communities are typical
or atypical for humanity as a whole, so mentioning it is unnecessary.
-2Alicorn13y
The relatively innocuous information about the wider world is there to read
about on the earliest guidelists; less pleasant stuff gets added over time.
You can leave. That's fine. You can't come back without passing more tests.
(They are very big on tests.)
They aren't politically components of other nations. The communities are all
collectively one nation in lots of geographical parts.
They can leave. The communities are great for people whose priorities are being
content and secure. Risk-takers and malcontents can strike off on their own.
0DanArmak13y
I wish our own world was nice enough for that kind of lifestyle to exist (e.g.,
purchasing sovereignty over pieces of settle-able land; or existing towns
seceding from their nation)... It's a good dream :-)
0Alicorn13y
It was the first thing.
0[anonymous]13y
The exchange above reminds me of Robin Hanson's criticism
[http://www.ofb.net/~phoenix/polymath/polyarc/0448.html] of the social science
in Greg Egan's works.
-1Emily13y
I enjoyed it. I made an effort to read Key genderlessly. This didn't work at
first, probably because I found the Spivak pronouns quite hard to get used to,
and "ey" came out as quite male to me, then fairly suddenly flipped to female
somewhere around the point where ey was playing on the swing with Trellis. I
think this may have been because Trellis came out a little more strongly male to
me by comparison (although I was also making a conscious effort to read ey
genderlessly). But as the story wore on, I improved at getting rid of the gender
and by the end I no longer felt sure of either Key or Trellis.
Point of criticism: I didn't find the shift between what was (to me) rather
obviously the two halves of the story very smooth. The narrative form appeared
to take a big step backwards from Key after the words "haze of flour" and never
quite get back into eir shoes. Perhaps that was intentional, because there's
obviously a huge mood shift, but it left me somewhat dissatisfied about the
resolution of the story. I felt as though I still didn't know what had actually
happened to the original Key character.
Congratulations! I guess people will believe everything you say now.
1Cyan13y
I certainly hope so!
0CannibalSmith13y
Wear a lab coat for extra credibility.
3Cyan13y
I was thinking I'd wear a stethoscope and announce, "Trust me! I'm a doctor!
(sotto voce)... of philosophy."
0Aurini13y
Congrats! My friend recently got his Master's in History, and has been informing
every telemarketer who calls that "Listen cupcake, it's not Dave - I'm not going
to hang at your crib and drink forties; listen here, pal, I have my own office!
Can you say that? To you I'm Masters Smith."
I certainly hope you wear your new title with a similar air of pretension,
Doctor Cyan. :)
1Cyan13y
I'll do my best!
Sincerely,
Cyan, Ph.D.
0gwern13y
Is 'Masters' actually a proper prefix (akin to the postfix Ph.D) for people with
a Master's degree? I don't think I've ever seen that before.
0Daniel_Burfoot13y
Congratulations!
Why not post an introduction to your thesis research on LW?
1Cyan13y
Because I'd need to preface it with a small deluge of information about protein
chemistry, liquid chromatography, and mass spectrometry. I think I'd irritate
folks if I did that.
0[anonymous]13y
Wear a lab coat for extra credibility.
0SilasBarta13y
With a doctorate in ...?
2Cyan13y
Biomedical engineering. My thesis concerned the analysis of proteomics data by
Bayesian methods.
0SilasBarta13y
Isn't that what they normally use to analyze proteomics data? </naive>
1Cyan13y
Not always, or even usually. It seems to me that by and large, scientists invent
ad hoc methods [http://lesswrong.com/lw/o7/searching_for_bayesstructure/] for
their particular problems, and that applies in proteomics as well as other
fields.
0gwern13y
Ah ha - So you were the last Cyan!
0Jack13y
I briefly thought this was a Battlestar Galactica pun.
0gwern13y
It was!
/me wonders what you then interpreted it as
0Jack13y
I was going back and forth between Zion and Cylon, lol.
If, say, I have a basic question, is it appropriate to post it to an open thread, to a top-level post, or what? I.e., say I'm working through Pearl's Causality and am having trouble deriving something... or say I've stared at the wikipedia pages for ages and STILL don't get the difference between Minimum Description Length and Minimum Message Length... is LW an appropriate place to go "please help me understand this", and if so, should I request it in a top-level post or in an open thread or...
More generally: LW is about developing human rationalit... (read more)
Most posts here are written by someone who understands an aspect of rationality,
to explain it to those who don't. I see no reason not to ask questions in the
open thread. I think they should be top-level posts only if you anticipate a
productive discussion around them; most already-solved questions can be answered
with a single comment and that would be that, so no need for a separate post.
0Psy-Kosh13y
Okay, thanks. In that case, as I replied to Kaj Sotala, I am indeed asking about
the difference between MML and MDL
2Kaj_Sotala13y
I think that kind of a question is fine in the Open Thread.
0Psy-Kosh13y
Okay, thanks. In that case, I am asking indeed about the difference between MML
and MDL. I've stared at the wikipedia pages, including the bits that supposedly
explain the difference, and I'm still going "huh?"
David Chalmers surveys the kinds of crazy believed by modern philosophers, as well as their own predictions of the results of the survey.
56% of target faculty responding favor (i.e. accept or lean toward) physicalism, while 27% favor nonphysicalism (for respondents as a whole, the figure is 54:29). A priori knowledge is favored by 71-18%, an analytic-synthetic distinction is favored by 65-27%, Millianism is favored over Fregeanism by 34-29%, and the view that zombies are conceivable but not metaphysically possible is favored over metaphysical possibility and inconceivability by 35-23-16%, respectively.
This blog comment describes what seems to me the obvious default scenario for an unFriendly AI takeoff. I'd be interested to see more discussion of it.
The problem with the specific scenario given, with experimental
modification/duplication rather than careful proof-based modification, is that
it is liable to have the same problem that we have with creating systems this
way: the copies might not do what the agent that created them wants.
Which could lead to a splintering of the AI, and in-fighting over computational
resources.
It also makes the standard assumptions that AI will be implemented on and stable
on the von Neumann style computing architecture.
0Nick_Tarleton13y
Of course, if it's not, it could port itself to such if doing so is
advantageous.
0whpearson13y
Would you agree that one possible route to uFAI is human inspired?
Human inspired systems might have the same or similar high fallibility rate
(from emulating neurons, or just random experimentation at some level) as humans
and giving it access to its own machine code and low-level memory would not be a
good idea. Most changes are likely to be bad.
So if an AI did manage to port its code, it would have to find some way of
preventing/discouraging the copied AI in the x86 based arch from playing with
the ultimate mind expanding/destroying drug that is machine code modification.
This is what I meant about stability.
0[anonymous]13y
Er, I can't really give a better rebuttal than this:
http://www.singinst.org/upload/LOGI//levels/code.html
[http://www.singinst.org/upload/LOGI//levels/code.html]
0whpearson13y
What point are you rebutting?
0[anonymous]13y
The idea that a greater portion of possible changes to a human-style mind are
bad than changes of an equal magnitude to a von Neumann-style mind.
0whpearson13y
Most random changes to a von Neumann-style mind would be bad as well.
It's just that a von Neumann-style mind is unlikely to make the random mistakes
that we do, or at least that is Eliezer's contention.
0[anonymous]13y
I can't wait until there are uploads around to make questions like this
empirical.
0Johnicholas13y
Let me point out that we (humanity) do actually have some experience with this
scenario. Right now, mobile code that spreads across a network without effective
author-imposed bounds on its expansion is called a worm. If we have experience,
we should mine it for concrete predictions and countermeasures.
General techniques against worms might include: isolated networks, host
diversity, rate-limiting, and traffic anomaly detection.
Are these low-cost/high-return existential-risk reduction techniques?
6Wei_Dai13y
No, these are high-cost/low-return existential risk reduction techniques. Major
corporations and governments already have very high incentive to protect their
networks, but despite spending billions of dollars, they're still being
frequently penetrated by human attackers, who are not even necessarily
professionals. Not to mention the hundreds of millions of computers on the
Internet that are unprotected because their owners have no idea how to do so, or
they don't contain information that their owners consider especially valuable.
I got into cryptography partly because I thought it would help reduce the risk
of a bad Singularity. But while cryptography turned out to work relatively well
(against humans anyway), the rest of the field of computer security is in
terrible shape, and I see little hope that the situation would improve
substantially in the next few decades.
0whpearson13y
What do you think of the object-capability model
[http://en.wikipedia.org/wiki/Object-capability_model]? And removing ambient
authority [http://en.wikipedia.org/wiki/Ambient_authority] in general.
0Wei_Dai13y
That's outside my specialization of cryptography, so I don't have too much to
say about it. I do remember reading about the object-capability model and the E
language years ago, and thinking that it sounded like a good idea, but I don't
know why it hasn't been widely adopted yet. I don't know if it's just inertia,
or whether there are some downsides that its proponents tend not to publicize.
In any case, it seems unlikely that any security solution can improve the
situation enough to substantially reduce the risk of a bad Singularity at this
point, without a huge cost. If the cause of existential-risk reduction had
sufficient resources, one project ought to be to determine the actual costs and
benefits of approaches like this and whether it would be feasible to implement
(i.e., convince society to pay whatever costs are necessary to make our networks
more secure), but given the current reality I think the priority of this is
pretty low.
0whpearson13y
Thanks. I just wanted to know if this was the sort of thing you had in mind, and
whether you knew any technical reasons why it wouldn't do what you want.
This is one thing I keep a close-ish eye on. One of the major proponents
[http://www.osnews.com/story/21262/Jonathan_Shapiro_of_Coyotos_BitC_Joins_Microsoft]
of this sort of security has recently gone to work for Microsoft on their
research operating systems. So it might come along in a bit.
As to why it hasn't caught on, it is partially inertia and partially it requires
more user interaction/understanding of the systems than ambient authority. Good
UI and metaphors can decrease that cost though.
The ideal would be to have a self-maintaining computer system with this sort of
security system. However a good self-maintaining system might be dangerously
close to a self-modifying AI.
4ChrisHibbert13y
There's also a group of proponents of this style working on Caja
[http://en.wikipedia.org/wiki/Caja_project] at Google, including Mark Miller,
the designer of E. And some [http://www.hpl.hp.com/people/marc_d_stiegler/]
people [http://www.hpl.hp.com/personal/Alan_Karp/] at HP.
Actually, all these people talk to one another regularly. They don't have a
unified plan or a single goal, but they collaborate with one another frequently.
I've left out several other people who are also trying to find ways to push in
the same direction. Just enough names and references to give a hint. There are
several mailing lists [http://www.eros-os.org/mailman/listinfo/e-lang] where
these issues are discussed. If you're interested, this
[http://www.eros-os.org/mailman/listinfo/cap-talk] is probably the one to start
with.
0Paul Crowley13y
Sadly, I suspect this moves things backwards rather than forwards. I was really
hoping that we'd see Coyotos one day, which now seems very unlikely.
2whpearson13y
I meant it more as an indication that Microsoft are working in the direction of
better secured OSes already, rather than his being a pivotal move. Coyotos might
get revived when the open source world sees what MS produces and need to play
catch up.
0gwern13y
That assumes MS ever goes far enough that the FLOSS world feels any gap that
could be caught up.
MS rarely does so; the chief fruit of 2 decades of Microsoft Research
sponsorship of major functional language researchers like Simon Marlow or Simon
Peyton-Jones seems to be... C# and F#. The former is your generic quasi-OO
imperative language like Python or Java, with a few FPL features sprinkled in,
and the latter is a warmed-over O'Caml: it can't even make MLers feel like they
need to catch up, much less Haskellers or FLOSS users in general.
0whpearson13y
The FPL OSS community is orders of magnitude more vibrant than the OSS secure
operating system research. I don't know of any living projects that use the
object-capability model at the OS level (plenty of language level and higher
level stuff going on).
For some of the background, Rob Pike wrote an old paper on the state of system
level research [http://herpolhode.com/rob/utah2000.pdf].
0Vladimir_Nesov13y
I can't imagine having any return in protection against spreading of AI on the
Internet at any cost (even in a perfect world, AI can still produce value, e.g.
earn money online, and so buy access to more computing resources).
1Johnicholas13y
Your statement sounds a bit overgeneralized - but you probably have a point.
Still, would you indulge me in some idle speculation? Maybe there could be a
species of aliens that evolved to intelligence by developing special
microbe-infested organs (which would be firewalled somehow from the rest of the
alien themselves) and incentivizing the microbial colonies somehow to solve
problems for the host.
Maybe we humans evolved to intelligence that way - after all, we do have a lot
of bacteria in our guts. But then, all the evidence that we have pointing to
brains as information-processing center would have to be wrong. Maybe brains are
the firewall organ! Memes are sort of like microbes, and they're pretty well
"firewalled" (genetic engineering is a meme-complex that might break out of the
jail).
The notion of creating an ecology of entities, and incentivizing them to produce
things that we value, might be a reasonable strategy, one that we humans have
been using for some time.
1Vladimir_Nesov13y
I can't see how this comment relates to the previous one. It seems to start an
entirely new conversation. Also, the metaphor with brains and microbes doesn't
add understanding for me, I can only address the last paragraph, on its own.
The crucial property of AIs making them a danger is (eventual) autonomy, not
even rapid coming to power. Once the AI, or a society ("ecology") of AIs,
becomes sufficiently powerful to ignore vanilla humans, its values can't be
significantly influenced, and most of the future is going to be determined by
those values. If those values are not good, from human values point of view, the
future is lost to us, it has no goodness. The trick is to make sure that the
values of such an autonomous entity are a very good match with our own, at some
point where we still have a say in what they are.
Talk of "ecologies" of different agents creates an illusion of continuous
control. The standard intuitive picture has little humans at the lower end with
a network of gradually more powerful and/or different agents stretching out from
them. But how much is really controlled by that node? Its power has no way of
"amplifying" as you go through the network: if only humans and a few other
agents share human values, these values will receive very little payoff. This is
also not sustainable: over time, one should expect preference of agents with
more power to gain in influence (which is what "more power" means).
The best way to win this race is to not create different-valued competitors that
you don't expect being able to turn into your own almost-copies, which seems
infeasible for all the scenarios I know of. FAI is exactly about devising such a
copycat, and if you can show how to do that with "ecologies", all power to you,
but I don't expect anything from this line of thought.
-1Johnicholas13y
To explain the relation, you said: "I can't imagine having any return [...from
this idea...] even in a perfect world, AI can still produce value, e.g. earn
money online."
I was trying to suggest that in fact there might be a path to Friendliness by
installing sufficient safeguards that the primary way a software entity could
replicate or spread would be by providing value to humans.
3Vladimir_Nesov13y
In the comment above, I explained why what AI does is irrelevant, as long as
it's not guaranteed to actually have the right values: once it goes unchecked,
it just reverts to whatever it actually prefers, be it in a flurry of hard
takeoff or after a thousand years of close collaboration. "Safeguards", in every
context I saw, refer to things that don't enforce values, only behavior, and
that's not enough. Even the ideas for enforcement of behavior look infeasible,
but the more important point is that even if we win this one, we still lose
eventually with such an approach.
0Johnicholas13y
My symbiotic-ecology-of-software-tools scenario was not a serious proposal as
the best strategy to Friendliness. I was trying to increase the plausibility of
SOME return at SOME cost, even given that AIs could produce value.
I seem to have stepped onto a cached thought.
1Vladimir_Nesov13y
I'm afraid I see the issue as clear-cut, you can't get "some" return, you can
only win or lose (probability of getting there is of course more amenable to
small nudges).
0wedrifid13y
Making such a statement significantly increases the standard of reasoning I
expect from a post. That is, I expect you to be either right or at least a step
ahead of the one with whom you are communicating.
I intend to participate in the StarCraft AI Competition. I figured there are lots of AI buffs here that could toss some pieces of wisdom at me. Shower me with links you deem relevant and recommend books to read.
Generally, what approaches should I explore and what dead ends should I avoid? Essentially, tell me how to discard large portions of potential-starcraft-AI thingspace quickly.
Specifically, the two hardest problems that I see are:
Writing an AI that can learn how to move units efficiently on its own. Either by playing against itself or just search
Pay attention to the timing of your edit/compile/test cycle time. Efforts to get this shorter pay off both in more iterations and in your personal motivation (interacting with a more-responsive system is more rewarding). Definitely try to get it under a minute.
A good dataset is incredibly valuable. When starting to attack a problem - both the whole thing, and subproblems that will arise - build a dataset first. This would be necessary if you are doing any machine learning, but it is still incredibly helpful even if you personally are doing the learning.
Succeed "instantaneously" - and don't break it. Make getting to "victory" - a complete entry - your first priority and aim to be done with it in a day or a week. Often, there's temptation to do a lot of "foundational" work before getting something complete working, or a "big refactoring" that will break lots of things for a while. Do something (continuous integration or nightly build-and-test) to make sure that you're not breaking it.
Great! That competition looks like a lot of fun, and I wish you the best of luck
with it.
As for advice, perhaps the best I can give you is to explain the characteristics
the winning program will have.
It will make no, or minimal, use of game tree search. It will make no, or
minimal, use of machine learning (at best it will do something like tuning a
handful of scalar parameters with a support vector machine). It will use
pathfinding, but not full pathfinding; corners will be cut to save CPU time. It
will not know the rules of the game. Its programmer will probably not know the
exact rules either, just an approximation discovered by trial and error. In
short, it will not contain very much AI.
One reason for this is that it will not be running on a supercomputer, or even
on serious commercial hardware; it will have to run in real time on a dinky
beige box PC with no more than a handful of CPU cores and a few gigabytes of
RAM. Even more importantly, only a year of calendar time is allowed. That is
barely enough time for nontrivial development. It is not really enough time for
nontrivial research, let alone research and development.
In short, you have to decide whether your priority is Starcraft or AI. I think
it should be the latter, because that's what has actual value at the end of the
day, but it's a choice you have to make. You just need to understand that the
reward from the latter choice will be in long-term utility, not in winning this
competition.
3CannibalSmith13y
That's disheartening, but do give more evidence. To counter: participants of
DARPA's Grand Challenge had just a year too, and their task was a notch harder.
And they did use machine learning and other fun stuff.
Also, I think a modern gaming PC packs a hell of a punch. Especially with the
new graphics cards that can run arbitrary code. But good catch - I'll inquire
about the specs of the machines the competition will be held on.
5rwallace13y
The Grand Challenge teams didn't go from zero to victory in one year. They also
weren't one-man efforts.
That having been said, and this is a reply to RobinZ also, for more specifics
you really want to talk to someone who has written a real-time strategy game AI,
or at least worked in the games industry. I recommend doing a search for
articles or blog posts written by people with such experience. I also recommend
getting hold of some existing game AI code to look at. (You won't be copying the
code, but just to get a feel for how things are done.) Not chess or Go, those
use completely different techniques. Real-time strategy games would be ideal,
but failing that, first-person shooters or turn-based strategy games - I know
there are several of the latter at least available as open source.
Oh, and Johnicholas gives good advice, it's worth following.
1CannibalSmith13y
Stanford's team did.
Neither is mine.
I do not believe I can learn much from existing RTS AIs because their goal is
entertaining the player instead of winning. In fact, I've never met an AI that I
can't beat after a few days of practice. They're all the same: build a base and
repeatedly throw groups of units at the enemy's defensive line until they run
out of resources, mindlessly following the same predictable route each time.
This is
true for all of Command & Conquer series, all of Age of Empires series, all of
Warcraft series, and StarCraft too. And those are the best RTS games in the
world with the biggest budgets and development teams.
But I will search around.
1DanArmak13y
Were these games' development objectives to make the best AI they could, one
that would win in all scenarios? I doubt that would be the most fun for human
players to play against. Maybe humans wanted a predictable opponent.
2ChrisPine13y
They want a fun opponent.
In games with many players (where alliances are allowed), you could make the
AI's more likely to ally with each other and to gang up on the human player.
This could make an 8-player game nearly impossible. But the goal is not to beat
the human. The goal is for the AI to feel real (human), and be fun.
As you point out, the goal in this contest is very different.
0rwallace13y
Ah, I had assumed they must have been working on the problem before the first
one, but their webpage confirms your statement here. I stand corrected!
Good, that will help.
Yeah. Personally I never found that very entertaining :-) If you can write one
that does better, maybe the industry might sit up and take notice. Best of luck
with the project, and let us know how it turns out.
2SilasBarta13y
Please fix this post's formatting. I beg you.
0rwallace13y
What's the recommended way to format quoted fragments on this site to
distinguish them from one's own text? I tried copy pasting CannibalSmith's
comment, but that copied as indentation with four spaces, which when I used it,
gave a different result.
2Jayson_Virissimo13y
Click on the reply button and then click the help link in the bottom right
corner. It explains how to properly format your comments.
0rwallace13y
Okay, thanks, fixed.
-3[anonymous]13y
The Grand Challenge teams didn't go from zero to victory in one year. They also
weren't one-man efforts.
That having been said, and this is a reply to RobinZ also, for more specifics
you really want to talk to someone who has written a real-time strategy game AI,
or at least worked in the games industry. One thing I can say is, get hold of
some existing game AI code to look at. (You won't be copying the code, but just
to get a feel for how things are done.) Not chess or Go, those use completely
different techniques. Real-time strategy games would be ideal, but failing that,
first-person shooters or turn-based strategy games - I know there are several of
the latter at least available as open source.
Oh, and Johnicholas gives good advice, it's worth following.
2RobinZ13y
Strictly speaking, this reads a lot like advice to sell nonapples
[http://lesswrong.com/lw/vs/selling_nonapples/]. I'll grant you that it's
probably mostly true, but more specific advice might be helpful.
0ShardPhoenix13y
There's some discussion and early examples here:
http://www.teamliquid.net/forum/viewmessage.php?topic_id=105570
[http://www.teamliquid.net/forum/viewmessage.php?topic_id=105570]
You might also look at some of the custom AIs for Total Annihilation and/or
Supreme Commander, which are reputed to be quite good.
Ultimately though the winner will probably come down to someone who knows
Starcraft well enough to thoroughly script a bot, rather than more advanced AI
techniques. It might be easier to use proper AI in the restricted tournaments,
though.
I'm going to repeat my request (for the last time) that the most recent Open Thread have a link in the bar up top, between 'Top' and 'Comments', so that people can reach it a tad easier. (Possible downside: people could amble onto the site and more easily post time-wasting nonsense.)
I am posting this in the open thread because I assume that somewhere in the depths of posts and comments there is an answer to the question:
If someone thought we lived in an internally consistent simulation that is undetectable and inescapable, is it even worth discussing? Wouldn't the practical implications of such a simulation imply the same things as the material world/reality/whatever you call it?
Would it matter if we dropped "undetectable" from the proposed simulation? At what point would it begin to matter?
In two recent comments [1][2], it has been suggested that to combine ostensibly Bayesian probability assessments, it is appropriate to take the mean on the log-odds scale. But Bayes' Theorem already tells us how we should combine information. Given two probability assessments, we treat one as the prior, sort out the redundant information in the second, and update based on the likelihood of the non-redundant information. This is practically infeasible, so we have to do something else, but whatever else it is we choose to do, we need to justify it as an appr... (read more)
An independent piece of evidence moves the log-odds a constant additive amount
regardless of the prior. Averaging log-odds amounts to moving 2/3 of that
distance if 2/3 of the people have the particular piece of evidence. It may
behave badly if the evidence is not independent, but if all you have are
posteriors, I think it's the best you can do.
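The averaging-on-the-log-odds rule described above can be sketched in a few lines (function names are my own, not from the cited comments):

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def prob(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool(probs):
    """Combine posterior assessments by averaging on the log-odds scale."""
    return prob(sum(log_odds(p) for p in probs) / len(probs))

# Assessors at 0.9 and 0.5 pool to exactly 0.75: the mean of their
# log-odds, ln(9) and 0, is ln(3), and the sigmoid of ln(3) is 3/4.
```

This makes the independence point concrete: a piece of evidence that shifts log-odds by some constant amount shifts the pooled estimate by that constant times the fraction of assessors who saw it.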
It has been a while since I have been around, so please ignore this if it has been brought up before.
I would appreciate it if offsite links were a different color. The main reason for this is the way I skim online articles: links are generally more important text, and if I see a link for [interesting topic] it helps me to know at a glance that there will be a good read with a LessWrong discussion at the end, as opposed to a link to Amazon where I get to see the cover of a book.
Firefox (or maybe one of the million extensions that I've downloaded and
forgotten about) has a feature where, if you mouseover a link, the URL linked to
will appear in the lower bar of the window. A different color would be easier,
though.
Ivan Sutherland (inventor of Sketchpad - the first computer-aided drawing program) wrote about how "courage" feels, internally, when doing research or technological projects.
"[...] When I get bogged down in a project, the failure of my courage to go on never
feels to me like a failure of courage, but always feels like something entirely
different. One such feeling is that my research isn't going anywhere anyhow, it isn't
that important. Another feeling involves the urgency of something else. I have
come to recognize these feelings as “who cares” and “the urgent drives out the
important.” [...]"
I'm looking for a certain quote I think I may have read on either this blog or Overcoming Bias before the split. It goes something like this: "You can't really be sure evolution is true until you've listened to a creationist for five minutes."
"They told them that half the test generally showed gender differences (though they didn't mention which gender it favored), and the other half didn't.
Women and men did equally well on the supposedly gender-neutral half. But on the sexist section, women flopped. They scored significantly lower than on the portion they thought was gender-blind."
Big Edit: Jack formulated my ideas better, so see his comment. This was the original:
The fact that the universe hasn't been noticeably paperclipped has got to be evidence for a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) that superintelligences tend not to optimize things that are astronomically visible (related to c). Which of these scenari... (read more)
Restructuring since the fact that the universe hasn't been noticeably
paperclipped can't possibly be considered evidence for (c).
The universe has either been paperclipped (1) or it hasn't been (2).
If (1):
(A) we have observed paperclipping and not realized it (someone was really into
stars, galaxies and dark matter)
(B) Our universe is the result of paperclipping (theists were right, sort of)
(C) Superintelligences tend not to optimize things that are astronomically
visible.
If (2)
(D) Super-intelligences are impossible.
(E) Quantum immortality true.
(F) No intelligent aliens.
(G) Some variety of simulation hypothesis is true.
(H) Galactic aliens exist but have never constructed a super-intelligence due to
a well-enforced prohibition on AI construction/research, an evolved deficiency
in thinking about minds as a physical object (substance dualism is far more
difficult for them to avoid than it is for us), or some other reason that we
can't fathom.
(I) Friendliness is easy + Alien ethics doesn't include any values that lead to
us noticing them.
2wuwei13y
d) should be changed to the sparseness of intelligent aliens and limits to how
fast even a superintelligence can extend its sphere of influence.
0Jack13y
Some of that was probably needed to contextualize my comment.
0Yorick_Newsome13y
I'll replace it without the spacing so it's more compact. Sorry about that, I'll
work on my comment etiquette.
I like the color red. When people around me wear red, it makes me happy - when they wear any other color, it makes me sad. I crunch some numbers and tell myself, "People wear red about 15% of the time, but they wear blue 40% of the time." I campaign for increasing the amount that people wear red, but my campaign fails miserably.
"It'd be great if I could like blue instead of red," I tell myself. So I start trying to get myself to like blue - I choose blue over red whenever possible, surround myself in blue, start trying to put blue in places where I experience other happinesses so I associate blue with those things, etc.
What just happened? Did a belief or a preference change?
By coincidence, two blog posts went up today that should be of interest to people here.
Gene Callahan argues that Bayesianism lacks the ability to smoothly update beliefs as new evidence arrives, forcing the Bayesian to irrationally reset priors.
Tyler Cowen offers a reason why the CRU hacked emails should raise our confidence in AGW. An excellent exercise in framing an issue in Bayesian terms. Also discusses metaethical issues related to bending rules.
(Needless to say, I don't agree with either of these arguments, but they're great for application of yo... (read more)
The second link doesn't load; should be this
[http://www.marginalrevolution.com/marginalrevolution/2009/12/the-limits-of-good-vs-evil-thinking.html].
0SilasBarta13y
Thanks! Fixed.
1Matt_Simpson13y
That's not what he is saying. His argument is not that the hacked emails
actually should raise our confidence in AGW. His argument is that there is a
possible scenario under which this should happen, and the probability that this
scenario is true is not infinitesimal. The alternative possibility - that the
scientists really are smearing the opposition with no good reason - is far more
likely, and thus the net effect on our posteriors is to reduce them - or at
least keep them the same if you agree with Robin Hanson.
Here's (part of) what Tyler actually said:
0SilasBarta13y
Right -- that is what I called "giving a reason why the hacked emails..." and I
believe that characterization is accurate: he's described a reason why they
would raise our confidence in AGW.
This is a reason why Tyler's argument for a positive Bayes factor is in error, not
a reason why my characterization was inaccurate.
I think we agree on the substance.
2Matt_Simpson13y
Tyler isn't arguing for a positive Bayes factor. (I assume that by "Bayes
factor" you mean the net effect on the posterior probability). He posted a
followup
[http://www.marginalrevolution.com/marginalrevolution/2009/12/from-the-comments.html]
because many people misunderstood him. Excerpt:
edited to add:
I'm not sure I understand your criticism, so here's how I understood his
argument. There are two major possibilities worth considering:
and
Then the argument goes that the net effect of 1 is to lower our posteriors for
AGW while the net effect of 2 is to raise them.
Finally, p(2 is true) != 0.
This doesn't tell us the net effect of the event on our posteriors - for that we
need p(1), p(2) and p(anything else). Presumably, Tyler thinks p(anything else)
~ 0, but that's a side issue.
Is this how you read him? If so, which part do you disagree with?
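The two-scenario reading can be sketched numerically (all probabilities here are hypothetical, chosen only for illustration): the likelihood of the evidence under AGW is a mixture over the scenarios, so knowing p(scenario 2) > 0 by itself doesn't determine which direction the update goes.

```python
# Sketch of the two-scenario argument: the direction of the update depends
# on how probable scenario 2 is, not merely on p(scenario 2) > 0.
# All numbers are hypothetical.

def net_bayes_factor(p_s2, lik_s2, lik_s1, lik_not_h):
    """Bayes factor for evidence E on hypothesis H when E's likelihood
    under H is a mixture over scenarios 1 and 2; under ~H it is lik_not_h."""
    p_e_given_h = p_s2 * lik_s2 + (1 - p_s2) * lik_s1
    return p_e_given_h / lik_not_h

# Same per-scenario likelihoods, different scenario weights:
print(net_bayes_factor(0.1, lik_s2=0.5, lik_s1=0.05, lik_not_h=0.2))  # 0.475, < 1: lowers p(AGW)
print(net_bayes_factor(0.9, lik_s2=0.5, lik_s1=0.05, lik_not_h=0.2))  # 2.275, > 1: raises p(AGW)
```

With a small weight on scenario 2 the net effect is downward; with a large weight it flips upward, which is why p(1) and p(2) are needed, not just p(2) > 0.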
0SilasBarta13y
I'm using the standard meaning [http://en.wikipedia.org/wiki/Bayes_factor]: for
a hypothesis H and evidence E, the Bayes factor is p(E|H)/p(E|~H). It's easiest
to think of it as the factor by which you multiply your prior odds to get
posterior odds. (Odds, not probabilities.) Which means I goofed and said
"positive" when I meant "above unity" :-/
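As a minimal sketch of that definition (the numbers are made up purely for illustration), updating with a Bayes factor works on odds, not probabilities:

```python
# Updating with a Bayes factor: posterior_odds = bayes_factor * prior_odds.
# All numbers here are hypothetical, purely for illustration.

def bayes_factor(p_e_given_h, p_e_given_not_h):
    """Likelihood ratio p(E|H) / p(E|~H)."""
    return p_e_given_h / p_e_given_not_h

def update_prob(prior_prob, bf):
    """Convert a prior probability to odds, apply the Bayes factor,
    then convert the posterior odds back to a probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = bf * prior_odds
    return posterior_odds / (1 + posterior_odds)

# Evidence twice as likely under H as under ~H: factor of 2 (> 1 raises p).
print(bayes_factor(0.4, 0.2))     # 2.0
print(update_prob(0.5, 2.0))      # 0.5 -> ~0.667
# A Bayes factor below unity lowers the probability instead.
print(update_prob(0.5, 0.5))      # 0.5 -> ~0.333
```

A factor above unity raises the posterior, below unity lowers it, exactly unity leaves it unchanged, which is why "positive" is the wrong criterion.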
I read Tyler as not knowing what he's talking about. For one thing, do you
notice how he's trying to justify why something should have p>0 under a Bayesian
analysis ... when Bayesian inference already requires p's to be greater than
zero?
In his original post, he was explaining a scenario under which seeing fraud
should make you raise your p(AGW). Though he's not thinking clearly enough to
say it, this is equivalent to describing a scenario under which the Bayes factor
is greater than unity. (I admit I probably shouldn't have said "argument for >1
Bayes factor", but rather, "suggestion of plausibility of >1 Bayes factor")
That's the charitable interpretation of what he said. If he didn't mean that, as
you seem to think, then he's presenting metrics that aren't helpful, and this is
clear when he thinks it's some profound insight to put p(fraud due to importance
of issue) greater than zero. Yes, there are cases where AGW is true despite this
evidence -- but what's the impact on the Bayes factor?
Why should we care about arbitrarily small probabilities
[http://lesswrong.com/lw/ml/but_theres_still_a_chance_right/]?
Tyler was not misunderstood: he used probability and Bayesian inference
incorrectly and vacuously, then tried to backpedal. (My comment
[http://www.marginalrevolution.com/marginalrevolution/2009/12/from-the-comments/comments/page/2/#comments]
on page 2.)
Anyway, I think we agree on the substance:
* The fact that the p Tyler referred to is greater than zero is insufficient
information to know how to update.
* The scenario Tyler described is insufficient to give Climategate a Bayes
factor above 1.
(I was going t
2Matt_Simpson13y
I think we are arguing past each other, but it's about interpreting someone else
so I'm not that worried about it. I'll add one more bullet to your list to
clarify what I think Tyler is saying. If that doesn't resolve it, oh well.
* If we know with certainty that the scenario that Tyler described is true,
that is, if we know that the scientists fudged things because they knew that
AGW was real and that the consequences were worth risking their reputations
on, then Climategate has a Bayes factor above 1.
I don't think Tyler was saying anything more than that. (Well, and P(his
scenario) is non-negligible)
I think this is close to the question that has been lurking in my mind for some time: Why optimize our strategies to achieve what we happen to want, instead of just modifying what we want?
Suppose, for my next question, that it was trivial to modify what we want. Is there some objective meta-goal we really do need to pay attention to?
Well, if we modified what we wanted, we wouldn't get what we originally wanted
because we wouldn't want to...
Can't think of anything else off the top of my head.
0MrHen13y
tut's quip holds the key:
But to expound on it a bit further, if I want to drive to Dallas to see a band
play I can (a) figure out a strategy to get there or (b) stop wanting to go.
Assuming that (b) is even possible, it isn't actually a solution to the problem
of how to get to Dallas. Applying the same principle to all Wants does not
provide you with a way to always get what you want. Instead, it helps you avoid
not getting what you want.
If you wanted nothing more than to avoid disappointment by not getting what you
want, then the safest route is to never desire anything that isn't a sure thing.
Or simply not want anything at all. But a simpler route to this whole process is
to ditch that particular Want first. The summation is a bit wordy and annoying,
but it ends like this:
Instead of wanting to avoid not getting what you want by not wanting anything
else, simply forgo the want that is pushing you to avoid not getting what you
want.
With parens:
Instead of (wanting to avoid {not getting what you want} by not wanting anything
else), simply forgo (the want that is pushing you to avoid {not getting what you
want}).
In other words, you can achieve the same result that modifying your wants would
create by not getting too disappointed if you don't get what you want. Or, don't
take it so hard when things don't go your way.
Hopefully that made some sense (and I got it right.)
0byrnema13y
Thank you for your responses, but I guess my question wasn't clear. I was asking
about purpose. If there's no point in going to Dallas, why care about wanting to
go to Dallas?
This is my problem if there's no objective value (that I tried to address more
directly here [http://lesswrong.com/lw/sc/existential_angst_factory/1dfw]). If
there's no value to anything I might want, why care about what I want, much less
strive for what I want?
I don't know if there's anything to be done. Whining about it is pointless. If
anyone has a constructive direction, please let me know. I picked up Sartre's
"Truth and Existence" rather randomly; maybe it will lead in a different
(hopefully more interesting) direction.
7orthonormal13y
I second the comments above. The answer Alicorn and Furcas give sounds really
shallow compared to a Framework Of Objective Value; but when I became convinced
that there really is no FOOV, I was relieved to find that I still, you know,
wanted things, and these included not just self-serving wants, but things like
"I want my friends and family to be happy, even in circumstances where I
couldn't share or even know of their happiness", and "I want the world to become
(for example) more rational, less violent, and happier, even if I wouldn't be
around to see it— although if I had the chance, I'd rather be around to see it,
of course".
It doesn't sound as dramatic or idealistic as a FOOV, but the values and desires
encoded in my brain and the brains of others have the virtue of actually
existing; and realizing that these values aren't written on a stone tablet in
the heart of the universe [http://lesswrong.com/lw/rr/the_moral_void/] doesn't
rob them of their importance to the life I live.
5Alicorn13y
Because in spite of everything, you still want it.
Or: You can create value by wanting things. If things have value, it's because
they matter to people - and you're one of those, aren't you? Want things, make
them important - you have that power.
0byrnema13y
Maybe I wouldn't. There have been times in my life when I've had to struggle to
feel attached to reality, because it didn't feel objectively real. Now if value
isn't objectively real, I might find myself again feeling indifferent, like one
part of myself is carrying on eating and driving to work, perhaps socially
moral, perhaps not, while another part of myself is aware that nothing actually
matters. I definitely wouldn't feel integrated.
I don't want to burden anyone with what might be idiosyncratic sanity issues,
but I do mention them because I don't think they're actually all that
idiosyncratic.
2Alicorn13y
Can you pick apart what you mean by "objectively"? It seems to be a very
load-bearing word here.
0byrnema13y
I thought this was a good question, so I took some time to think about it. I am
better at recognizing good definitions than generating them, but here goes:
'Objective' and 'subjective' are about the relevance of something across
contexts.
Suppose that there is some closed system X. The objective value of X is its
value outside X. The subjective value of X is its value inside X.
For example, if I go to a party and we play a game with play money, then the
play money has no objective value. I might care about the game, and have fun
playing it with my friends, but it would be a choice whether or not to place any
subjective attachment to the money; I think that I wouldn't and would be rather
equanimous about how much money I had in any moment. If I went home and looked
carefully at the money to discover that it was actually a foreign currency, then
it turns out that the money had objective value after all.
Regarding my value dilemma, the system X is myself. I attach value to many
things in X. Some of this attachment feels like a choice, but I hazard that some
of this attachment is not really voluntary. (For example, I have mirror
neurons.) I would call these attachments 'intellectual' and 'visceral'
respectively.
Generally, I do not have much value for subjective experience. If something only
has value in 'X', then I have a tendency to negate that as a motivation. I'm not
altruistic, I just don't feel like subjective experience is very important. Upon
reflection, I realize that re: social norms, I actually act rather selfishly
when I think I'm pursuing something with objective value.
If there's no objective value, then at the very least I need to do a lot of goal
reorganization; losing my intellectual attachments unless they can be recovered
as visceral attachments. At the worst, I might feel increasingly like I'm a
meaningless closed system of self-generated values. At this point, though, I
doubt I'm capable of assimilating an absence of objective value on all lev
2AdeleneDawner13y
I know this wasn't your main point, but money doesn't have objective value,
either, by that definition. It only has value in situations where you can trade
it for other things. It's extremely common to encounter such situations, so the
limitation is pretty ignorable, but I suspect you're at least as likely to
encounter situations where money isn't tradeable for goods as you are to
encounter situations where your own preferences and values aren't part of the
context.
0byrnema13y
I used the money analogy because it has a convenient idea of value.
While debating about the use of that analogy, I had already considered it ironic
that the US dollar hasn't had "objective" value since it was disconnected from
the value of gold in 1933. Not that gold has objective value unless you use it
to make a conductor. But at that level, I start losing track of what I mean by
'value'. Anyway, it is interesting that the value of the US dollar is exactly an
example of humans creating value, echoing Alicorn's comment
[http://lesswrong.com/lw/1hs/open_thread_december_2009/1dq2].
Real money does have objective value relative to the party, since you can buy
things on your way home, but no objective value outside contexts where the money
can be exchanged for goods.
0Alicorn13y
If you are a closed system X, and something within system X only has objective
value inasmuch as something outside X values it, then does the fact that other
people care about you and your ability to achieve your goals help? They are
outside X, and while their first-order interests probably never match yours
perfectly, there is a general human tendency to care about others' goals qua
others' goals.
0byrnema13y
If you mean that I might value myself and my ability to achieve my goals more
because I value other people valuing that, then it does not help. My valuation
of their caring is just as subjective as any other value I would have.
On the other hand, perhaps you were suggesting that this mutual caring could be
a mechanism for creating objective value, which is kind of in line with what I
think. For that matter, I think that my own valuation of something, even without
the valuation of others, does create objective value -- but that's a FOOM. I'm
trying to imagine reality without that.
1Alicorn13y
That's not what I mean. I don't mean that their caring about you/your goals
makes things matter because you care if they care. I mean that if you're a
closed system, and you're looking for a way outside of yourself to find value in
your interests, other people are outside you and may value your interests
(directly or indirectly). They would carry on doing this, and this would carry
on conferring external value to you and your interests, even if you didn't give
a crap or didn't know anybody else besides you existed - how objective can you
get?
I don't think it's necessary - I think even if you were the only person in the
universe, you'd matter, assuming you cared about yourself - and I certainly
don't think it has to be really mutual. Some people can be "free riders" or even
altruistic, self-abnegating victims of the scheme without the system ceasing to
function. So this is a FOOV? So now it looks like we don't disagree at all -
what was I trying to convince you of, again?
0byrnema13y
I guess I'm really not sure. I'll have to think about it a while. What will
probably happen is that next time I find myself debating with someone asserting
there is no Framework of Objective Value, I will ask them about this case; if
minds can create objective value by their value-ing. I will also ask them to
clarify what they mean by objective value.
Truthfully, I've kind of forgotten what this issue I raised is about
[http://lesswrong.com/lw/1hs/open_thread_december_2009/1dzg], probably for a few
days or a week.
0[anonymous]13y
I'm either not sure what you're trying to do or why you're trying to do it. What
do you mean by FOOM here? Why do you want to imagine reality without it? How
does people caring about each other fall into that category?
0MrHen13y
Yeah, I think I can relate to that. This edges very close to an affective death
spiral [http://wiki.lesswrong.com/wiki/Affective_death_spiral], however, so
watch the feedback loops.
The way I argued myself out of mine was somewhat arbitrary and I don't have it
written up yet. The basic idea was taking the concepts that I exist and that at
least one other thing exists and, generally speaking, existence is preferred
over non-existence. So, given that two things exist and can interact and both
would rather be here than not be here, it is Good to learn the interactions
between the two so they can both continue to exist. This let me back into
accepting general sensory data as useful and it has been a slow road out of the
deep.
I have no idea if this is relevant to your questions, but since my original
response was a little off maybe this is closer?
0byrnema13y
This paragraph (showing how you argued yourself out of some kind of nihilism) is
completely relevant, thanks. This is exactly what I'm looking for.
What do you mean by, "existence is preferred over non-existence"? Does this mean
that in the vacuum of nihilism, you found something that you preferred, or that
it’s better in some objective sense?
My situation is that if I try to assimilate the hypothesis that there is no
objective value (or, rather, I anticipate trying to do so), then immediately I
see that all of my preferences are illusions. It's not actually any better if I
exist or don't exist, or if the child is saved from the tracks or left to die.
It's also not better if I choose to care subjectively about these things (and be
human) or just embrace nihilism, if that choice is real. I understand that
caring about certain sorts of these things is the product of evolution, but
without any objective value, I also have no loyalty to evolution and its goals
-- what do I care about the values and preferences it instilled in me?
The question is; how has evolution actually designed my brain; in the state
'nihilism' does my brain (a) abort intellectual thinking (there's no objective
value to truth anyway) and enter a default mode of material hedonism that acts
based on preferences and impulses just because they exist and that’s what I’m
programmed to do or (b) does it cling to its ability to think beyond that level
of programming, and develop this separate identity as a thing that knows that
nothing matters?
Perhaps I’m wrong, but your decision to care about the preference of existence
over non-existence and moving on from there appears to be an example of (a). Or
perhaps a component (b) did develop and maintain awareness of nihilism, but
obviously that component couldn’t be bothered posting on LW, so I heard a reply
from the part of you that is attached to your subjective preferences (and simply
exists).
0MrHen13y
Well, my bit about existence and non-existence stemmed from a struggle with
believing that things did or did not exist. I have never considered nihilism to
be a relevant proposal: It doesn't tell me how to act or what to do. It also
doesn't care if I act as if there is an objective value attached to something.
So... what is the point in nihilism?
To me, nihilism seems like a trap for other philosophical arguments. If those
arguments and moral ways lead them to a logical conclusion of nihilism, then
they cannot escape. They are still clinging to whatever led them there but say
they are nihilists. This is the death spiral: Believing that nothing matters but
acting as if something does.
If I were to actually stop and throw away all objective morality, value, etc.,
then I would expect a realization that any belief in nihilism would have to go
away too. At this point my presuppositions about the world reset and... what?
It is this behavior that is similar to my struggles with existence.
The easiest summation of my belief that existence is preferred over
non-existence is that existence can be undone and non-existence is permanent. If
you want more I can type it up. I don't know how helpful it will be against
nihilism, however.
0byrnema13y
Agreed. I find that often it isn't so much that I find the thought process
intrinsically pleasurable (affective), but that in thinking about it too much, I
over-stimulate the trace of the argument so that after a while I can't recall
the subtleties and can't locate the support. After about 7 comments back and
forth, I feel like a champion for a cause (no objective values RESULTS IN
NIHILISM!!) that I can't relate to anymore. Then I need to step back and not
care about it for a while, and maybe the cause will spontaneously generate
again, or perhaps I'll have learned enough weighting in another direction that
the cause never takes off again.
0AdeleneDawner13y
Feel free to tell me to mind my own business, but I'm curious. That other part:
If you gave it access to resources (time, money, permission), what do you expect
that it would do? Is there anything about your life that it would change?
0byrnema13y
Jack also wrote
[http://lesswrong.com/lw/sc/existential_angst_factory/1dla?context=3], "The next
question is obviously "are you depressed?" But that also isn't any of my
business so don't feel obligated to answer."
I appreciate this sensitivity, and see where it comes from and why it's justified,
but I also find it interesting that interrogating personal states is perceived
as invasive, even as this is the topic at hand.
However, I don't feel like it's so personal and I will explain why. My goals here
are to understand how the value validation system works outside FOOM. I come
from the point of view that I can't do this very naturally, and most people I
know also could not. I try to identify where thought gets stuck and try to find
general descriptions of it that aren't so personal. I think feeling like I have
inconsistent pieces (i.e., like I'm going insane) would be a common response to
the anticipation of a non-FOOM world.
To answer your question, a while ago, I thought my answer to your question would
be a definitive, "no, this awareness wouldn't feel any motivation to change
anything". I had written in my journal that even if there was a child laying on
the tracks, this part of myself would just look on analytically. However, I felt
guilty about this after a while and I've since repressed the experience of this
hypothetical awareness, so that it's more difficult to recall.
But recalling, it felt like this: it would be "horrible" for the child to die on
the tracks. However, what is "horrible" about horrible? There's nothing actually
horrible about it. Without some terminal value behind the value (for example, I
don't think I ever thought a child dying on the tracks was objectively
horrible, but that it might be objectively horrible for me not to feel like
horrible was horrible at some level of recursion) it seems that the value buck
doesn't get passed, it doesn't stop, it just disappears.
1AdeleneDawner13y
Actually, I practically never see it as invasive; I'm just aware that other
people sometimes do, and try to act accordingly. I think this is a common
mindset, actually: It's easier to put up a disclaimer that will be ignored
90-99% of the time than it is to deal with someone who's offended 1-10% of the
time, and generally not worth the effort of trying to guess whether any given
person will be offended by any given question.
I'm not sure how you came to that conclusion - the other sentences in that
paragraph didn't make much sense to me. (For one thing, one of us doesn't
understand what 'FOOM' means. I'm not certain it's you, though.) I think I know
what you're describing, though, and it doesn't appear to be a common response to
becoming an atheist or embracing rationality (I'd appreciate if others could
chime in on this). It also doesn't necessarily mean you're going insane - my
normal brain-function tends in that direction, and I've never seen any
disadvantage to it. (This [http://angelshelper81.livejournal.com/2037.html] old
log of mine might be useful, on the topic of insanity in general. Context
available on request; I'm not at the machine that has that day's logs in it at
the moment. Also, disregard the username, it's ooooold.)
My Buddhist friends would agree with that. Actually, I pretty much agree with it
myself (and I'm not depressed, and I don't think it's horrible that I don't see
death as horrible, at any level of recursion). What most people seem to forget,
though, is that the absence of a reason to do something isn't the same as the
presence of a reason not to do that thing. People who've accepted that there's
no objective value in things still experience emotions, and impulses to do
various things including acting compassionately, and generally have no reason
not to act on such things. We also experience the same positive feedback from
most actions that theists do - note how often 'fuzzies' are explicitly talked
about here, for example. It does all
0byrnema13y
Thank you. So maybe I can look towards Buddhist philosophy to resolve some of my
questions. In any case, it's really reassuring that others can form these
beliefs about reality, and retain things that I think are important (like sanity
and moral responsibility.)
0byrnema13y
Sorry! FOOV [http://lesswrong.com/lw/1hs/open_thread_december_2009/1drb]:
Framework Of Objective Value!
1AdeleneDawner13y
Okay, I went back and re-read that bit with the proper concept in place. I'm
still not sure why you think that non-FOOV value systems would lead to mental
problems, and would like to hear more about that line of reasoning.
As to how non-FOOV value systems work, there seems to be a fair amount of
variance. As you may've inferred, I tend to take a more nihilistic route than
most, assigning value to relatively few things, and I depend on impulses to an
unusual degree. I'm satisfied with the results of this system: I have a
lifestyle that suits my real preferences (resources on hand to satisfy most
impulses that arise often enough to be predictable, plus enough freedom and
resources to pursue most unpredictable impulses), projects to work on (mostly
based on the few things that I do see as intrinsically valuable), and very few
problems. It appears that I can pull this off mostly because I'm relatively
resistant to existential angst, though. Most value systems that I've seen
discussed here are more complex, and often very other-oriented. Eliezer is an
example of this, with his concept of coherent extrapolated value
[http://lesswrong.com/lw/yd/the_thing_that_i_protect/]. I've also seen at least
one case
[http://lesswrong.com/lw/10h/if_it_looks_like_utility_maximizer_and_quacks/sys]
of a person latching on to one particular selfish goal and pursuing that goal
exclusively.
0byrnema13y
I'm pretty sure I've over-thought this whole thing, and my answer may not have
been as natural as it would have been a week ago, but I don't predict
improvement in another week and I would like to do my best to answer.
I would define “mental problems” as either insanity (an inability or
unwillingness to give priority to objective experience over subjective
experience) or as a failure mode of the brain in which adaptive behavior (with
respect to the goals of evolution) does not result from sane thoughts.
I am qualifying these definitions because I imagine two ways in which
assimilating a non-FOOV value system might result in mental problems -- one of
each type.
First, extreme apathy could result. True awareness that no state of the universe
is any better than any other state might extinguish all motivation to have any
effect upon empirical reality. Even non-theists might imagine that by virtue of
‘caring about goodness’, they are participating in some kind of cosmic fight
between good and evil. However, in a non-FOOV value system, there’s absolutely
no reason to ‘improve’ things by ‘changing’ them. While apathy might be
perfectly sane according to my definition above, it would be very maladaptive
from a human-being-in-the-normal-world point of view, and I would find it
troubling if sanity is at odds with being a fully functioning human person.
Second, I anticipate that if a person really assimilated that there was no
objective value, and really understood that objective reality doesn’t matter
outside their subjective experience, they would have much less reason to value
objective truth over subjective truth. First, because there can be no value to
objective reality outside subjective reality anyway, and second because they
might more easily dismiss their moral obligation to assimilate objective reality
into their subjective reality. So that instead of actually saving people who are
drowning, they could just pretend the people were not drowning, and find this
mora
0AdeleneDawner13y
Ah. That makes more sense.
Also, a point I forgot to add in my above post: Some (probably the vast majority
of) atheists do see death as horrible; they just have definitions of 'horrible'
that don't depend on objective value.
1Furcas13y
There is objective value, you know. It is an objective fact about reality that
you care about and value some things and people, as do all other minds.
The point of going to Dallas is a function of your values, not the other way
around.
0byrnema13y
I'm not sure this question will make sense, but do you place any value on that
objective value?
0tut13y
For some things it is probably wise to change your desires to something you can
actually do. But in general the answer is another question: Why would you want
to do that?
This is a very interesting paper: "Understanding scam victims: seven principles for systems security, by Frank Stajano and Paul Wilson." Paul Wilson produces and stars in the British television show The Real Hustle, which does hidden camera demonstrations of con games. (There's no DVD of the show available, but there are bits of it on YouTube.) Frank Stajano is at the Computer Laboratory of the University of Cambridge.
The paper describes a dozen different con scenarios -- en... (read more)
With Channukah right around the corner, it occurs to me that "Light One Candle" becomes a transhumanist/existential-risk-reduction song with just a few line edits.
Light one candle for all humankind's children
With thanks that their light didn't die
Light one candle for the pain they endured
When the end of their world was surmised
Light one candle for the terrible sacrifice
Justice and freedom demand
But light one candle for the wisdom to know
When the singleton's time is at hand
Whether or not a singleton is the best outcome or not, I'm way too uncomfortable
with the idea to be singing songs of praise about it.
1Zack_M_Davis13y
Upvoted. I'm actually really uncomfortable with the idea, too. My comment above
is meant in a silly and irreverent manner (cf. last month
[http://lesswrong.com/lw/1dt/open_thread_november_2009/19ex]), and the
substitution for "peacemaker" was too obvious to pass up.
Is there a proof anywhere that occam's razor is correct? More specifically, that occam priors are the correct priors. Going from the conjunction rule to P(A) >= P(B & C) when A and B&C are equally favored by the evidence seems simple enough (and A, B, and C are atomic propositions), but I don't (immediately) see how to get from here to an actual number that you can plug into Baye's rule. Is this just something that is buried in textbook on information theory?
On that note, assuming someone had a strong background in statistics (phd level) and ... (read more)
I found Rob Zhara's comment
[http://lesswrong.com/lw/b3/groupthink_theism_and_the_wiki/79o] helpful.
0Matt_Simpson13y
thanks. I suppose a mathematical proof doesn't exist, then.
0Unknowns13y
Yes, there is a proof.
http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/ljr
[http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/ljr]
Not only is there no proof, there isn't even any evidence for it. Any effort to
collect evidence for it leaves you assuming what you're trying to prove. This is
the "problem of induction" and there is no solution; however, you are built to
be incapable of not applying induction, and you couldn't possibly make any
decisions without it.
0timtyler13y
Occam's razor is dependent on a descriptive language / complexity metric (so
there are multiple flavours of the razor).
Unless a complexity metric is specified, the first question seems rather vague.
0Jayson_Virissimo13y
I think you might be making this sound easier than it is. If there are an
infinite number of possible descriptive languages (or of ways of measuring
complexity) aren't there an infinite number of "flavours of the razor"?
0timtyler13y
Yes, but not all languages are equal - and some are much better than others - so
people use the "good" ones on applications which are sensitive to this issue.
0Paul Crowley13y
There's a proof that any two (Turing-complete) metrics differ by at most a
constant amount, which is the message length it takes to encode one metric in
the other.
0timtyler13y
Of course, the constant can be arbitrarily large.
However, there are a number of domains for which this issue is no big deal.
0[anonymous]13y
As far as I can tell, this is exactly zero comfort if you have finitely many
hypotheses.
0[anonymous]13y
This is little comfort if you have finitely many hypotheses — you can still find
some encoding to order them in any way you want.
0wedrifid13y
Bayes' rule.
0[anonymous]13y
Unless several people named Baye collectively own the rule, it's Bayes's rule.
:)
I'm interested in values. Rationality is usually defined as something like an agent that tries to maximize its own utility function. But, humans, as far as I can tell, don't really have anything like "values" beside "stay alive, get immediately satisfying sensory input".
This, afaict, results in lip service to "greater good", when people just select some nice values that they signal they want to promote, when in reality they haven't done the math by which these selected "values" derive from these "stay alive"... (read more)
I'm confused. Have you never seen long-term goal directed behavior?
2Jonii13y
I'm not sure, maybe, but the bigger problem here is selecting your goals. The
choice seems to be arbitrary, and as far as I can tell, human psychology doesn't
really even support having value systems that go deeper than that "lip service"
+ conformism state.
But I'm really confused when it comes to this, so I thought I could try
describing my confusion here :)
0Jack13y
I think you need to meet better humans. Or just read about some.
John Brown, Martin Luther, Galileo Galilei, Abraham Lincoln, Charles Darwin,
Mohandas Gandhi
Can you make sense of those biographies without going deeper than "lip service"
+ conformism state?
Edit: And I don't even necessarily mean that these people were supremely
altruistic or anything. You can add Adolph Hitler to the list too.
6Jonii13y
Dunno, haven't read any of those. But if you're sure that something like that
exists, I'd like to hear how it is achievable given human psychology.
I mean, paperclip maximizer is seriously ready to do anything to maximize
paperclips. It really takes the paperclips seriously.
On the other hand, there are no humans that seem to care about anything in
particular that's going on in the world. People are suffering and dying,
misfortune happens, animals go extinct, and relatively few do anything about it.
Many claim they're concerned and that they value human life and happiness, but
if doing something requires going beyond the safe zone of conformism, people
just don't do it. The best way I've figured out to overcome this is to
manipulate that safe zone to allow more actions, but it would seem that many
people think they know better. I just don't understand what.
1Jonii13y
I could go on and state that I'm well aware that the world is complicated. It's
difficult to estimate where our choices lead us, since the net of causes and
effects is complex and requires a lot of thinking to grasp. The heuristics the
human brain uses exist pretty much because of that. This means that it's
difficult to figure out how to do something besides staying in the safe zone
that you know to work at least somehow.
However, I still think there's something missing here. This just doesn't look
like a world where people do particularly care about anything at all. Even if it
was often useful to stay in a safe zone, there doesn't really seem to be any
easy way to snap them out of it. No magic word, no violation of any sort of
values makes anyone stand up and fight. I could literally tell people that
millions are dying in vain (aging) or that the whole world is at
stake (existential risks), and most people simply don't care.
At least, that's how I see it. I figure that rare exceptions to the rule there
can be explained as a cost of signalling something, requirements of the spot in
the conformist space you happen to occupy, or something like that.
I'm not particularly fond of this position, but I'm simply lacking a better
alternative.
0Jack13y
This is way too strong, isn't it? I also don't think the reason a lot of people
ignore these tragedies has as much to do with conformism as it does
self-interest. People don't want to give up their vacation money. If anything
there is social pressure in favor of sacrificing for moral causes. As for
values, I think most people would say that the fact they don't do more is a
flaw. "If I was a better person I would do x" or "Wow, I respect you so much for
doing x" or "I should do x but I want y so much." I think it is fair to
interpret these statements as second order desires
[http://lesswrong.com/lw/fv/wanting_to_want/] that represent values.
2Jonii13y
Remember what I said about "lip service"?
If they want to care about stuff, that's kinda implying that they don't actually
care about stuff (yet). Also, based on simple psychology, someone who chooses a
spot in the conformist zone that requires giving lip service to something
creates cognitive dissonance, which easily produces a second order desire to
want what you claim you want. But what is frightening here is how utterly
arbitrary this choice of values is. If you'd judged another spot to be cheaper,
you'd need to modify your values in a different way.
In both cases, though, it seems that people really rarely move any bit towards
actually caring about something.
0Jack13y
What is a conformist zone and why is it spotted?
Lip service is "Oh, what is happening in Darfur is so terrible!". That is
different from "If I was a better person I'd help the people of Darfur" or "I'm
such a bad person, I bought a t.v. instead of giving to charity". The first
signals empathy the second and third signal laziness or selfishness (and honesty
I guess).
Why do values have to produce first order desires? For that matter, why can't
they be socially constructed norms which people are rewarded for buying into?
When people do have first order desires that match these values we name those
people heroes. Actually sacrificing for moral causes doesn't get you
ostracized; it gets you canonized.
Not true. The range of values in the human community is quite limited.
People are rarely complete altruists. But that doesn't mean that they don't care
about anything. The world is full of broke artists who could pay for more food,
drugs and sex with a real job. These people value art.
0Jonii13y
Both are hollow words anyway. Both imply that you care, when you really don't.
There are no real actions.
Because, uhm, if you really value something, you'd probably want to do
something? Not "want to want", or anything, but really care about that stuff
which you value. Right?
Sure they can. I expressed this as safe zone manipulation, attempting to modify
your environment so that your conformist behavior leads to working for some
value.
The point here is that actually caring about something and working towards
something due to arbitrary choice and social pressure are quite different
things. Since you seem to advocate the latter, I'm assuming that we both agree
that people rarely care about anything and most actions are the result of social
pressure and stuff not directly related to actually caring or valuing anything.
Which brings me back to my first point: Why does it seem that many people here
actually care about the world? Like, as in paperclip maximizer cares about
paperclips. Just optical illusion and conscious effort to appear as a rational
agent valuing the world, or something else?
0Jonii13y
So, I've been thinking about this for some time now, and here's what I've got:
First, the point here is to self-reflect to want what you really want. This
presumably converges to some specific set of first degree desires for each one
of us. However, now I'm a bit lost on what we call "values": are they the set
of first degree desires we have (or not?), the set of first degree desires we
would reach after an infinity of self-reflection, or the set of first degree
desires we know we want to have at any given time?
As far as I can tell, akrasia would be a subproblem of this.
So, this should be about right. However, I think it's weird that here people
talk a bit about akrasia, and how to achieve those n-degree desires, but I
haven't seen anything about actually reflecting and updating what you want.
Seems to me that people trust a tiny bit too much in the power of cognitive
dissonance to fix the gap between wanting to want and actually wanting,
resulting in a lack of actual desire to achieve what you know you should
want (akrasia).
I really dunno how to overcome this, but this gap seems worth discussing.
Also, since we need an eternity of self-reflection to reach what we really
want, this looks kinda bad for FAI: figuring out where our self-reflection
would converge in infinity seems pretty much impossible to compute, and so
we're left with compromises that can, and probably will, eventually lead to
something we really don't want.
Is the status obsession that Robin Hanson finds all around him partially due to the fact that we live in a part of the world where our immediate needs are easily met? So we have a lot of time and resources to devote to signaling, compared to times past.
The manner of status obsession that Robin Hanson finds all around him is
definitely due to the fact that we live in a part of the world where our immediate
needs are easily met. Particularly if you are considering signalling.
I think you are probably right in general too. Although a lot of the status
obsession remains even in resource-scarce environments, it is less about
signalling your ability to conspicuously consume or do irrational costly things.
It's more being obsessed with having enough status that the other tribe members
don't kill you to take your food (for example).
Are people interested in discussing bounded-memory rationality? I see a fair number of people talking about Solomonoff-type systems, but not much about what a finite system should do.
Sure. What about it in particular? Care to post some insights?
0whpearson13y
Would my other reply to you be an interesting/valid way of thinking about the
problem. If not what were you looking for?
0wedrifid13y
Pardon me. Missed the reply. Yes, I'd certainly engage with that subject if you
fleshed it out a bit.
0whpearson13y
I was thinking about starting with very simple agents. Things like 1 input, 1
output with 1 bit of memory and looking at them from a decision theory point of
view. Asking questions like "Would we view it as having a goal/decision theory?"
If not, what is the minimal agent that we would, and does it make any
trade-offs for having a decision theory module in terms of the complexity of
the function it can represent?
0wedrifid13y
I tend to let other people draw those lines up. It just seems like defining
words and doesn't tend to spark my interest.
I would be interested to see where you went with your answer to that one.
Given that we're sentient products of evolution, shouldn't we expect a lot of variation in our thinking?
Finding solutions to real world problems often involves searching through a state space of possibilities that is too big and too complex to search systematically and exhaustively. Evolution optimizes searches in this context by using a random search with many trials: inherent variation among zillions of modular components.
Observing the world for 32-odd years, it appears to... (read more)
Would it be worthwhile for us to create societal simulation software to look into how preferences can change given technological change and social interactions? (knew more, grew up together) One goal would be to clarify terms like spread, muddle, distance, and convergence.
Another (funner) goal would be to watch imaginary alternate histories and futures (given guesses about potential technologies)
Goals would not include building any detailed model of human preferences or intelligence.
I think we would find some general patterns that might also apply to more complex simulations.
I've read Newcomb's problem (Omega, two boxes, etc.), but I was wondering if, shortly, "Newcomb's problem is when someone reliably wins as a result of acting on wrong beliefs." Is Peter walking on water a special case of Newcomb? Is the story from Count of Monte Cristo, about Napoleon attempting suicide with too much poison and therefore surviving, a special case of Newcomb?
I am completely baffled why this would be downvoted. I guess asking a question
in genuine pursuit of knowledge, in an open thread, is wasting someone's time,
or is offensive.
I like to think someone didn't have the time to write "No, that's not the case,"
and wished, before dashing off, leaving nothing but a silhouette of dust, that
their cryptic, curt signal would be received as intended; that as they hurry
down the underground tunnel past red, rotating spotlights, they hoped against
hope that their seed of truth landed in fertile ground -- godspeed, downvote.
1gwern13y
I upvoted you because your second sentence painted a story that deeply amused
me.
/pats seed of truth, and pours a little water on it
0Morendil13y
The community would benefit from a convention of "no downvotes in Open Thread".
However, I did find your question cryptic; you're dragging into a decision
theory problem historical and religious referents that seem to have little to do
with it. You need to say more if you really want an answer to the question.
2bgrah44913y
Sure, that's fair.
Peter walked on water out to Jesus because he thought he could; when he looked
down and saw the sea, he fell in. As long as he believed Jesus instead of his
experience with the sea, he could walk on water.
I don't think the Napoleon story is true, but that's beside the point. He
thought he was so tough that an ordinary dose of poison wouldn't kill him, so he
took six times the normal dosage. This much gave his system such a shock that
the poison was rejected and he lived, thinking to himself, "Damn, I
underestimated how incredibly fantastic I am." As long as he (wrongly) believed
in his own exceptionalism instead of his experience with poison on other men, he
was immune to the poison.
My train of thought was, you have a predictor and a chooser, but that's just
getting you to a point where you choose either "trust the proposed worldview" or
"trust my experience to date" - do you go for the option that your prior
experience tells you shouldn't work (and hope your prior experience was wrong)
or do you go with your prior experience (and hope the proposed worldview is
wrong)?
I understand that in Newcomb's, that what Omega says is true. But change it up
to "is true way more than 99% of the time but less than 100% of the time" and
start working your way down that until you get to "is false way more than 99% of
the time but less than 100% of the time" and at some point, not that long after
you start, you get into situations very close to reality (I think, if I'm
understanding it right).
This basically started from trying to think about who, or what, in real life
takes on the Predictor role, who takes on the Belief-holder role, who takes on
the Chooser role, and who receives the money, and seeing if anything familiar
starts falling out if I spread those roles out to more than 2 people or shrink
them down to a single person whose instincts implore them to do something
against the conclusion to which their logical thought process is leading them.
-3Morendil13y
You seem to be generalizing from fictional evidence, which is frowned upon here
[http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/], and
may explain the downvote (assuming people inferred the longer version from your
initial question).
2bgrah44913y
That post (which was interesting and informative - thanks for the link) was
about using stories as evidence for use in predicting the actual future, whereas
my question is about whether these fictional stories are examples of a general
conceptual framework. If I asked if Prisoner's Dilemma was a special case of
Newcomb's, I don't think you'd say, "We don't like generalizing from fictional
evidence."
Which leads, ironically, to the conclusion that my error was generalizing from
evidence which wasn't sufficiently fictional.
1Morendil13y
Perhaps I jumped to conclusions. Downvotes aren't accompanied with explanations,
and groping for one that might fit I happened to remember the linked post. More
PC than supposing you were dinged just for a religious allusion. (The Peter
reference at least required no further effort on my part to classify as
fictional; I had to fact-check the Napoleon story, which was an annoyance.)
It still seems the stories you're evoking bear no close relation to Newcomb's as
I understand it.
0gwern13y
I have heard of real drugs & poisons which induce vomiting at high doses and so
make it hard to kill oneself; but unfortunately I can't seem to remember any
cases. (Except for one attempt to commit suicide using modafinil, which gave the
woman so severe a headache she couldn't swallow any more; and apparently LSD has
such a high LD-50 that you can't even hurt yourself before getting high.)
0bbnvnm13y
I was wondering, shortly, "Is brgrah449 from Sicily?"
3Tyrrell_McAllister13y
No, that's not the case. A one-boxer in Newcomb's problems is acting with
entirely correct beliefs. All agree that the one-boxer will get more money than
the two-boxer. That correct belief is what motivates the one-boxer.
The scenarios that you describe sound somewhat (but not exactly) like Gettier
problems [http://en.wikipedia.org/wiki/Gettier_problem] to me.
(I wasn't the downvoter.)
Yet the converse bears … contemplation, reputation. Only then refutation.
We are irritated by our fellows that observe that A mostly implies B, and B mostly implies C, but they will not, will not concede that A implies C, to any extent.
We consider this; an error in logic, an error in logic.
Even though! we know: intelligence is not computation.
Intelligence is finding the solution in the space of the impossible. I don’t mean luck At all. I mean: while mathematical proofs are formal, absolute, wi... (read more)
While reading a collection of Tom Wayman's poetry, suddenly a poem came to me
about Hal Finney ("Dying Outside" [http://lesswrong.com/lw/1ab/dying_outside/]);
since we're contributing poems, I don't feel quite so self-conscious. Here goes:
He will die outside, he says.
Flawed flesh betrayed him,
it has divorced him -
for the brain was left him,
but not the silverware
nor the limbs nor the car.
So he will take up
a brazen hussy,
tomorrow's eve,
a breather-for-him,
a pisser-for-him.
He will be letters,
delivered slowly;
deliberation
his future watch-word.
He would not leave until he left this world.
I try not to see his mobile flesh,
how it will sag into eternal rest,
but what he will see:
symbol and symbol, in their endless braids,
and with them, spread over strange seas of thought
mind (not body), forever voyaging.
http://www.gwern.net/fiction/Dying%20Outside
[http://www.gwern.net/fiction/Dying%20Outside]
2byrnema13y
I wrote this poem yesterday in an unusual mood. I don't entirely agree with it
today. Or at least, I would qualify it.
What is meant by computation? When I wrote that intelligence is not computation,
I must have meant a certain sort of computation because of course all thought is
some kind of computation.
To what extent has distinction been made between systematic/linear/deductive
thought (which I am criticizing as obviously limited in the poem) and
intelligent pattern-based thought? Has there been any progress in characterizing
the latter?
For example, consider the canonical story about Gauss. To keep him busy with a
computation, his math teacher told him to add all the numbers from 1 to 100.
Instead, according to the story, Gauss added the first number and the last
number, multiplied by 100 and divided by 2. Obviously, this is a computation.
But yet a different sort. To what extent do you suppose he logically deduced the
pattern of the lowest and highest numbers always combining to a single value,
or just guessed/observed it was a pattern that might work? And then found that
it did work inductively?
I'm very interested in characterizing the difference between these kinds of
computation. Intelligent thinking seems to really be guesses followed by
verification, not steady linear deduction.
1JamesAndrix13y
Gah, Thank You. Saves me the trouble of a long reply. I'll upvote for a
change-of-mind disclaimer in the original.
My recent thoughts have been along these lines, but this is also what evolution
does. At some point, the general things learned by guessing have to be
incorporated into the guess-generating process.
Does anyone know how many neurons various species of birds have? I'd like to put it into perspective with the Whole Brain Emulation road map, but my googlefu has failed me.
I've looked for an hour and it seems really hard to find. From what I've seen,
(a) birds have a different brain structure than mammals (“general intelligence”
originates in other parts of the brain), and (b) their neuron count changes
hugely (relative to mammals) during their lifetimes. I've seen lots of articles
giving numbers for various species and various brain components, but nothing in
aggregate. If you really want a good estimate you'll have to read up to learn
the brain structure of birds, and use that together with neuron counts for
different parts to gather a total estimate. Google Scholar might help in that
endeavor.
0anonym13y
I also looked for a while and had little luck. I did find though that the
brain-to-body-mass ratios for two of the smartest known species of birds -- the
Western Scrub Jay, and the New Caledonian Crow -- are comparable to those of the
chimps. These two species have shown very sophisticated
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2346514/] cognition
[http://www.cell.com/current-biology/abstract/S0960-9822%2807%2901770-8].
0Jordan13y
Blast.
I'll have to keep the question in mind for the next time I run into a
neuroscientist.
I could test this hypothesis, but I would rather not have to create a fake account or lose my posting karma on this one.
I strongly suspect that lesswrong.com has an ideological bias in favor of "morality." There is nothing wrong with this, but perhaps the community should be honest with itself and change the professed objectives of this site. As it says on the "about" page, "Less Wrong is devoted to refining the art of human rationality."
There has been no proof that rationality requires morality. Yet I suspect that posts comin... (read more)
It does.
In general they are not. But I find that a high quality clearly rational reply
that doesn't adopt the politically correct morality will hover around 0 instead
of (say) 8. You can then post a couple of quotes to get your karma fix if
desired.
1[anonymous]13y
That's unfortunate, since I'm a moral agnostic. I simply believe that if there
is a reasonable moral system, it has to be derivable from a position of total
self-interest. Therefore, these moralists are ultimately only defeating
themselves with this zealotry; by refusing to consider all possibilities, they
cripple their own capability to find the "correct" morality if it exists.
Utilitarianism will be Yudkowsky's Copenhagen.
What are the implications to FAI theory of Robin's claim that most of what we do is really status-seeking? If an FAI were to try to extract or extrapolate our values, would it mostly end up with "status" as the answer and see our detailed interests, such as charity or curiosity about decision theory, as mere instrumental values?
(reposted from last month's open thread)
An interesting site I recently stumbled upon:
http://changingminds.org/
They have huge lists of biases, techniques, explanations, and other stuff, with short summaries and longer articles.
Here's the results from typing in "bias" into their search bar.
A quick search for "changingminds" in LW's search bar shows that no one has mentioned this site before on LW.
Is this site of any use to anyone here?
Does anyone here think they're particularly good at introspection or modeling themselves, or have a method for training up these skills? It seems like it would be really useful to understand more about the true causes of my behavior, so I can figure out what conditions lead to me being good and what conditions lead to me behaving poorly, and then deliberately set up good conditions. But whenever I try to analyze my behavior, I just hit a brick wall---it all just feels like I chose to do what I did out of my magical free will. Which doesn't explain anything... (read more)
S.S. 2010 videos: http://vimeo.com/siai/videos
Just thought I'd mention this: as a child, I detested praise. (I'm guessing it was too strong a stimulus, along with such things as asymmetry, time being a factor in anything, and a mildly loud noise ceasing.) I wonder how it's affected my overall development.
Incidentally, my childhood dislike of asymmetry led me to invent the Thue-Morse sequence, on the grounds that every pattern ought to be followed by a reversal of that pattern.
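For the curious, the "every pattern followed by its reversal" construction is short enough to sketch (the function name is mine; in the Thue-Morse sequence the "reversal" is the bitwise complement of the prefix so far):

```python
# Thue-Morse sequence: start with 0 and repeatedly append the bitwise
# complement of everything so far. Equivalently, term n is the parity of
# the number of 1-bits in the binary expansion of n.
def thue_morse(n_terms):
    seq = [0]
    while len(seq) < n_terms:
        seq += [1 - b for b in seq]  # append the complement of the whole prefix
    return seq[:n_terms]

print(thue_morse(8))  # [0, 1, 1, 0, 1, 0, 0, 1]
```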
I wrote a short story with something of a transhumanism theme. People can read it here. Actionable feedback welcome; it's still subject to revision.
Note: The protagonist's name is "Key". Key, and one other character, receive Spivak pronouns, which can make either Key's name or eir pronouns look like some kind of typo or formatting error if you don't know it's coming. If this annoys enough people, I may change Key's name or switch to a different genderless pronoun system. I'm curious if anyone finds that they think of Key and the other Spivak... (read more)
Tags now sort chronologically oldest-to-newest by default, making them much more useful for reading posts in order.
Henceforth, I am Dr. Cyan.
If, say, I have a basic question, is it appropriate to post it to open thread, to a top level post, or what? ie, say if I'm working through Pearl's Causality and am having trouble deriving something... or say I've stared at the wikipedia pages for ages and STILL don't get the difference between Minimum Description Length and Minimum Message Length... is LW an appropriate place to go "please help me understand this", and if so, should I request it in a top level post or in an open thread or...
More generally: LW is about developing human rationalit... (read more)
David Chalmers surveys the kinds of crazy believed by modern philosophers, as well as their own predictions of the results of the survey.
This blog comment describes what seems to me the obvious default scenario for an unFriendly AI takeoff. I'd be interested to see more discussion of it.
I intend to participate in the StarCraft AI Competition. I figured there are lots of AI buffs here that could toss some pieces of wisdom at me. Shower me with links you deem relevant and recommend books to read.
Generally, what approaches should I explore and what dead ends should I avoid? Essentially, tell me how to discard large portions of potential-starcraft-AI thingspace quickly.
Specifically, the two hardest problems that I see are:
I have some advice.
Pay attention to the timing of your edit/compile/test cycle time. Efforts to get this shorter pay off both in more iterations and in your personal motivation (interacting with a more-responsive system is more rewarding). Definitely try to get it under a minute.
A good dataset is incredibly valuable. When starting to attack a problem - both the whole thing, and subproblems that will arise - build a dataset first. This would be necessary if you are doing any machine learning, but it is still incredibly helpful even if you personally are doing the learning.
Succeed "instantaneously" - and don't break it. Make getting to "victory" - a complete entry - your first priority and aim to be done with it in a day or a week. Often, there's temptation to do a lot of "foundational" work before getting something complete working, or a "big refactoring" that will break lots of things for a while. Do something (continuous integration or nightly build-and-test) to make sure that you're not breaking it.
I'm going to repeat my request (for the last time) that the most recent Open Thread have a link in the bar up top, between 'Top' and 'Comments', so that people can reach it a tad easier. (Possible downside: people could amble onto the site and more easily post time-wasting nonsense.)
I am posting this in the open thread because I assume that somewhere in the depths of posts and comments there is an answer to the question:
If someone thought we lived in an internally consistent simulation that is undetectable and inescapable, is it even worth discussing? Wouldn't the practical implications of such a simulation imply the same things as the material world/reality/whatever you call it?
Would it matter if we dropped "undetectable" from the proposed simulation? At what point would it begin to matter?
In two recent comments [1][2], it has been suggested that to combine ostensibly Bayesian probability assessments, it is appropriate to take the mean on the log-odds scale. But Bayes' Theorem already tells us how we should combine information. Given two probability assessments, we treat one as the prior, sort out the redundant information in the second, and update based on the likelihood of the non-redundant information. This is practically infeasible, so we have to do something else, but whatever else it is we choose to do, we need to justify it as an appr... (read more)
Hmm, this "mentat wiki" seems to have some reasonably practical intelligence (and maybe rationality) techniques.
It has been awhile since I have been around, so please ignore if this has been brought up before.
I would appreciate it if offsite links were a different color. The main reason is the way I skim online articles. Links are generally more important text, and if I see a link for [interesting topic] it helps me to know at a glance whether there will be a good read with a LessWrong discussion at the end, as opposed to a link to Amazon where I get to see the cover of a book.
Ivan Sutherland (inventor of Sketchpad - the first computer-aided drawing program) wrote about how "courage" feels, internally, when doing research or technological projects.
"[...] When I get bogged down in a project, the failure of my courage to go on never feels to me like a failure of courage, but always feels like something entirely different. One such feeling is that my research isn't going anywhere anyhow, it isn't that important. Another feeling involves the urgency of something else. I have come to recognize these feelings as "who cares" and "the urgent drives out the important." [...]"
I'm looking for a certain quote I think I may have read on either this blog or Overcoming Bias before the split. It goes something like this: "You can't really be sure evolution is true until you've listened to a creationist for five minutes."
Ah, never mind, I found it.
"In a way, no one can really trust the theory of natural selection until after they have listened to creationists for five minutes; and then they know it's solid."
I'd like a pithier way of phrasing it, though, than the original quote.
http://scicom.ucsc.edu/SciNotes/0901/pages/geeks/geeks.html
" They told them that half the test generally showed gender differences (though they didn't mention which gender it favored), and the other half didn't.
Women and men did equally well on the supposedly gender-neutral half. But on the sexist section, women flopped. They scored significantly lower than on the portion they thought was gender-blind."
Big Edit: Jack formulated my ideas better, so see his comment.
This was the original: The fact that the universe hasn't been noticeably paperclipped has got to be evidence for a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) that superintelligences tend not to optimize things that are astronomically visible (related to c). Which of these scenari... (read more)
I like the color red. When people around me wear red, it makes me happy - when they wear any other color, it makes me sad. I crunch some numbers and tell myself, "People wear red about 15% of the time, but they wear blue 40% of the time." I campaign for increasing the amount that people wear red, but my campaign fails miserably.
"It'd be great if I could like blue instead of red," I tell myself. So I start trying to get myself to like blue - I choose blue over red whenever possible, surround myself in blue, start trying to put blue in places where I experience other happinesses so I associate blue with those things, etc.
What just happened? Did a belief or a preference change?
By coincidence, two blog posts went up today that should be of interest to people here.
Gene Callahan argues that Bayesianism lacks the ability to smoothly update beliefs as new evidence arrives, forcing the Bayesian to irrationally reset priors.
Tyler Cowen offering a reason why the CRU hacked emails should raise our confidence in AGW. An excellent exercise in framing an issue in Bayesian terms. Also discusses metaethical issues related to bending rules.
(Needless to say, I don't agree with either of these arguments, but they're great for application of yo... (read more)
I think this is close to the question that has been lurking in my mind for some time: Why optimize our strategies to achieve what we happen to want, instead of just modifying what we want?
Suppose, for my next question, that it was trivial to modify what we want. Is there some objective meta-goal we really do need to pay attention to?
Robin Hanson podcast due 2009-12-23:
http://www.blogtalkradio.com/fastforwardradio/2009/12/23/fastforward-radio--countdown-to-foresight-2010-par
Repost from Bruce Schneier's CRYPTO-GRAM:
The Psychology of Being Scammed
This is a very interesting paper: "Understanding scam victims: seven principles for systems security, by Frank Stajano and Paul Wilson." Paul Wilson produces and stars in the British television show The Real Hustle, which does hidden camera demonstrations of con games. (There's no DVD of the show available, but there are bits of it on YouTube.) Frank Stajano is at the Computer Laboratory of the University of Cambridge.
The paper describes a dozen different con scenarios -- en... (read more)
With Channukah right around the corner, it occurs to me that "Light One Candle" becomes a transhumanist/existential-risk-reduction song with just a few line edits.
Is there a proof anywhere that Occam's razor is correct? More specifically, that Occam priors are the correct priors. Going from the conjunction rule to P(A) >= P(B & C) when A and B&C are equally favored by the evidence seems simple enough (where A, B, and C are atomic propositions), but I don't (immediately) see how to get from there to an actual number that you can plug into Bayes' rule. Is this just something that is buried in a textbook on information theory?
On that note, assuming someone had a strong background in statistics (phd level) and ... (read more)
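One way to get an actual number is the Solomonoff-style move of weighting each hypothesis by 2^-(description length in bits). Here is a toy finite version (the names and the toy hypothesis set are mine; the real construction sums over all programs and is uncomputable):

```python
from fractions import Fraction

def occam_prior(hypotheses):
    """Assign each hypothesis weight 2^-length, then normalize.

    `hypotheses` maps a name to its description length in bits.
    Each extra bit of description halves the prior weight.
    """
    weights = {h: Fraction(1, 2 ** bits) for h, bits in hypotheses.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# A hypothesis 2 bits shorter gets 4x the prior mass: 4/5 vs 1/5
prior = occam_prior({"A": 2, "B_and_C": 4})
```

These normalized weights are the numbers you can then plug into Bayes' rule as priors.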
I'm interested in values. Rationality is usually defined as something like an agent that tries to maximize its own utility function. But humans, as far as I can tell, don't really have anything like "values" besides "stay alive, get immediately satisfying sensory input".
This, afaict, results in lip service to the "greater good", where people just select some nice values that they signal they want to promote, when in reality they haven't done the math by which these selected "values" derive from "stay alive"... (read more)
Is the status obsession that Robin Hanson finds all around him partially due to the fact that we live in a part of a world where our immediate needs are easily met? So we have a lot of time and resources to devote to signaling compared to times past.
Are people interested in discussing bounded-memory rationality? I see a fair number of people talking about Solomonoff-type systems, but not much about what a finite system should do.
"People are crazy, the world is mad. "
This is in response to this comment.
Given that we're sentient products of evolution, shouldn't we expect a lot of variation in our thinking?
Finding solutions to real-world problems often involves searching through a state space of possibilities that is too big and too complex to search systematically and exhaustively. Evolution optimizes searches in this context by using a random search with many trials: inherent variation among zillions of modular components.
Observing the world for 32-odd years, it appears to... (read more)
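The "random search with many trials" strategy above can be sketched in a few lines (a toy illustration, not a model of evolution - the names and the ones-max fitness function are mine):

```python
import random

def random_search(fitness, dimension, trials, seed=0):
    """Sample many random candidates and keep the best one,
    rather than searching the space systematically."""
    rng = random.Random(seed)
    best, best_fit = None, float("-inf")
    for _ in range(trials):
        candidate = [rng.choice([0, 1]) for _ in range(dimension)]
        fit = fitness(candidate)
        if fit > best_fit:
            best, best_fit = candidate, fit
    return best, best_fit

# Toy fitness: count of 1-bits ("ones-max"); 5000 trials over a 2^20 space
best, fit = random_search(sum, dimension=20, trials=5000)
```

Even with a vanishingly small fraction of the space sampled, the best-of-many-trials result is far better than a single random draw - which is the point of the analogy.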
Would it be worthwhile for us to create societal simulation software to look into how preferences can change given technological change and social interactions? (knew more, grew up together) One goal would be to clarify terms like spread, muddle, distance, and convergence. Another (more fun) goal would be to watch imaginary alternate histories and futures (given guesses about potential technologies).
Goals would not include building any detailed model of human preferences or intelligence.
I think we would find some general patterns that might also apply to more complex simulations.
I've read Newcomb's problem (Omega, two boxes, etc.), but I was wondering whether, in short, "Newcomb's problem is when someone reliably wins as a result of acting on wrong beliefs." Is Peter walking on water a special case of Newcomb? Is the story from The Count of Monte Cristo, about Napoleon attempting suicide with too much poison and therefore surviving, a special case of Newcomb?
A poem, not a post:
Intelligence is not computation.
As you know.
Yet the converse bears … contemplation, reputation. Only then refutation.
We are irritated by our fellows that observe that A mostly implies B, and B mostly implies C, but they will not, will not concede that A implies C, to any extent.
We consider this; an error in logic, an error in logic.
Even though! we know: intelligence is not computation.
Intelligence is finding the solution in the space of the impossible. I don’t mean luck At all. I mean: while mathematical proofs are formal, absolute, wi... (read more)
Does anyone know how many neurons various species of birds have? I'd like to put it into perspective with the Whole Brain Emulation road map, but my googlefu has failed me.
I could test this hypothesis, but I would rather not have to create a fake account or lose my posting karma on this one.
I strongly suspect that lesswrong.com has an ideological bias in favor of "morality." There is nothing wrong with this, but perhaps the community should be honest with itself and change the professed objectives of this site. As it says on the "about" page, "Less Wrong is devoted to refining the art of human rationality."
There has been no proof that rationality requires morality. Yet I suspect that posts comin... (read more)