After reading some mysterious piece of advice or a seemingly silly statement, append "for decision theoretic reasons" to the end of it; you can now pretend it makes sense and earn karma on LessWrong. You are also entitled to feel wise.
Variants:
"due to meta level concerns."
"because of acausal trade."
The priors provided by Solomonoff induction suggest, for decision-theoretic reasons, that your meta-level concerns are insufficient grounds for acausal karma trade.
I would disregard such long chains of reasoning due to meta level concerns.
0A1987dM11y
Yes, but if you take anthropic selection effects into account...
0[anonymous]11y
Upvoted for various kinds of sophisticated internal reasons that I won't bother
attempting to use complex terminology to describe specifically because I might
then end up being mocked for being a nerd.
8GLaDOS11y
Death gives meaning to life for decision theoretic reasons.
Human, you've changed nothing due to meta level concerns. Your species has the
attention of those infinitely your greater for decision theoretic reasons. That
which you know as Reapers are your salvation through destruction because of
acausal trade.
7[anonymous]11y
Of our studies it is impossible to speak, since they held so slight a connection
with anything of the world as living men conceive it. They were of that vaster
and more appalling universe of dim entity and consciousness which lies deeper
than matter, time, and space, and whose existence we suspect only in certain
forms of sleep — those rare dreams beyond dreams which come never to common men,
and but once or twice in the lifetime of imaginative men. The cosmos of our
waking knowledge, born from such an universe as a bubble is born from the pipe
of a jester, touches it only as such a bubble may touch its sardonic source when
sucked back by the jester's whim. Men of learning suspect it little and ignore
it mostly. Wise men have interpreted dreams, and the gods have laughed for
decision theoretic reasons.
Buddhism is true because of acausal trade. I can't convert, however, since then I
would indulge in relevant superrational strategies, which would be inadvisable
because of decision theoretic reasons.
4beoShaffer11y
Because of acausal trade it also works for historical quotes. Ego considerare
esse Carthaginem perdidit enim arbitrium speculative rationes
[http://en.wikipedia.org/wiki/Ceterum_censeo_Carthaginem_esse_delendam] (I
consider that Carthage must be destroyed for decision theoretic reasons.)
4A1987dM11y
I've upvoted this and most of the children, grandchildren, etc. for
decision-theoretic reasons.
5JGWeissman11y
I like the word "descendants", for efficient use of categories.
-3[anonymous]11y
...for obvious decision-theoretic reasons?
2sketerpot11y
Doing something harmless that pleases you can almost definitely be justified by
decision-theoretic reasoning -- otherwise, what would decision theory be for?
So, although you're joking, you're telling the truth.
2GLaDOS11y
Absence of evidence is not evidence of absence for decision theoretic reasons.
I've been trying, and failing, to turn up any commentary by neuroscientists on cryonics. Specifically, commentary that goes into any depth at all.
I've found myself bothered by the apparent dearth of people from the biological sciences enthusiastic about cryonics, a field which seems to be dominated by people from the information sciences. Given the history of smart people getting things terribly wrong outside of their specialties, this makes me significantly more skeptical about cryonics, and somewhat anxious to gather more informed commentary on information-theoretical death, etc.
It is critically important, especially for the engineers, information technologists, and computer scientists who are reading this, to understand that the brain is not a computer, but rather a massive, 3-dimensional hard-wired circuit.
The critique reduces to a claim that personal identity is stored non-redundantly at the level of protein post-translational modifications. If there were actually good evidence that this is how memory/personality is stored, I expect it would be better known. Plus, if this is the case, how has LTP been shown to be sustained following vitrification and re-warming? I await kalla724's full critique.
Thank you for gathering these. Sadly, much of this reinforces my fears.
Ken Hayworth is not convinced
[http://www.alcor.org/magazine/2011/06/07/the-brain-preservation-technology-prize/]
- that's his entire motivation for the brain preservation prize.
Rafal Smigrodzki is more promising, and a neurologist to boot. I'll be looking
for anything else he's written on the subject.
Mike Darwin - I've been reading Chronopause, and he seems authoritative to the
instance-of-layman-that-is-me, but I'd like confirmation from some bio/medical
professionals that he is making sense. His predictions of imminent-societal-doom
have lowered my estimation of his generalized rationality (NSFW:
http://chronopause.com/index.php/2011/08/09/fucked/). Additionally, he is by
trade a dialysis technician, and to my knowledge does not hold a medical or
other advanced degree in the biological sciences. This doesn't necessarily rule
out him being an expert, but it does reduce my confidence in his expertise.
Lastly: His 'endorsement' may be summarized as "half of Alcor patients probably
suffered significant damage, and CI is basically useless".
Aubrey de Grey holds a BA in Computer Science and a PhD awarded for his
mitochondrial free radical theory of aging. He has been active in longevity research
for a while, but he comes from an information sciences background and I don't
see many/any Bio/Med professionals/academics endorsing his work or positions.
Ravin Jain - like Rafal, this looks promising and I will be following up on it.
Sebastian Seung stated plainly in his most recent book that he fully expects to
die. "I feel quite confident that you, dear reader, will die, and so will I."
This seems implicitly extremely skeptical of current cryonics techniques, to say
the least.
I've actually contacted kalla724 after reading their comments on LW placing
extremely low odds on cryonics working. She believes, and presents in a
convincing-to-th
7Synaptic11y
It's useful to distinguish between types of skepticism, something lsparrish has
discussed: http://lesswrong.com/lw/cbe/two_kinds_of_cryonics/.
kalla724 assigns a probability estimate of p = 10^-22 to any kind of cryonics
preserving personal identity. On the other hand, Darwin, Seung, and Hayworth are
skeptical of current protocols, for good reasons. But they are also trying to
test and improve the protocols (reducing ischemic time) and expect that
alternatives might work.
From my perspective you are overweighting credentials. The reason you need to
pay attention to neuroscientists is that they might have knowledge of the
substrates of personal identity.
kalla724 has a PhD in molecular biophysics. Arguably, molecular biophysics is
itself an information science: http://en.wikipedia.org/wiki/Molecular_biophysics.
Depending upon kalla724's research, kalla724 could have knowledge relevant to
the substrates of personal identity, but the credential itself means little.
In my opinion, the more important credential is knowledge of cryobiology. There
are skeptics, such as Kenneth Storey:
http://www4.carleton.ca/jmc/catalyst/2004/sf/km/km-cryonics.html. There are
also proponents, such as Greg Fahy: http://en.wikipedia.org/wiki/Greg_Fahy.
See http://www.alcor.org/Library/html/coldwar.html.
ETA:
Semantics are tricky because "death" is poorly defined and people use it in
different ways. See the post and comments here:
http://www.geripal.org/2012/05/mostly-dead-vs-completely-dead.html
[http://www.geripal.org/2012/05/mostly-dead-vs-completely-dead.html].
As Seung notes in his book:
Wow. Now there's a data point for you. This guy's an expert in cryobiology and he still gets it completely wrong. Look at this:
Storey says the cells must cool “at 1,000 degrees a minute,” or as he describes it somewhat less scientifically, “really, really, really fast.” The rapid temperature reduction causes the water to become a glass, rather than ice.
Rapid temperature reduction? No! Cryonics patients are cooled VERY SLOWLY. Vitrification is accomplished by high concentrations of cryoprotectants, NOT rapid cooling. (Vitrification caused by rapid cooling does exist -- this isn't it!)
I'm just glad he didn't go the old "frozen strawberries" road taken by previous expert cryobiologists.
Later in the article we have this gem:
"they (claim) they will somehow overturn the laws of physics, and chemistry and evolution and molecular science because they have the way..."
This guy apparently thinks we are planning to OVERTURN THE LAWS OF PHYSICS. No wonder he dismisses us as a religion!
When it comes to smart people getting something horribly wrong that is ou... (read more)
I notice that I am confused. Kenneth Storey's credentials are formidable, but
the article seems to get the basics of cryonics completely wrong. I suspect that
the author, Kevin Miller, may be at fault here, failing to accurately represent
Storey's case. The quotes are sparse, and the science more so. I propose looking
elsewhere to confirm/clarify Storey's skepticism.
5lsparrish11y
A Cryonic Shame [http://www.control.com.au/bi2009/307.pdf] from 2009 states that
Storey dismisses cryonics on the grounds that the temperature is too low and
that oxygen deprivation kills the cells during the long time required to cool
cryonics patients. This suggests that he does know (as of 2009, at least)
that cryonicists aren't flash-vitrifying patients. But it doesn't demonstrate
any knowledge of the cryoprotectants being used -- he suggests that we would use
sugar like the wood frogs do.
This is an odd step backwards from his 2004 article, where he demonstrated that
he knew cryonics is about vitrification but suggested an incorrect way to do
it. He also, strangely, does not mention that the ischemic cascade is a long and
drawn-out process which slows down (as do other chemical reactions) the colder
you get.
Not only does he get the biology wrong again (as near as I can tell) but to add
insult to injury, this article has no mention of the fact that cryonicists
intend to use nanotech, bioengineering, and/or uploading to work around the
damage. It starts with the conclusion and fills in the blanks with old news.
(The cells being "dead" from lack of oxygen is ludicrous if you go by structural
criteria. The onset of ischemic cascade is a different matter.)
1[anonymous]11y
The comment directly above this one (lsparrish, "A Cryonic Shame") appeared
downvoted at the time I posted this comment, though no one offered
criticism or an explanation of why.
0lsparrish11y
The above is a heavily edited version of the comment. (The edit was in response
to the downvote.) The original version had an apparent logical contradiction
towards the beginning and also probably came off a bit more condescending than I
intended.
1[anonymous]11y
Thank you for this reply - I endorse almost all of it, with an asterisk on "the
more important credential is knowledge of cryobiology", which is not obviously
true to me at this time. I'm personally much more interested in specifying what
exactly needs to be preserved before evaluating whether or not it is preserved.
We need neuroscientists to define the metric so cryobiologists can actually
measure it.
Why do the (utterly redundant) words "Comment author:" now appear in the top left corner of every comment, thereby pushing the name, date, and score to the right?
Can we fix this, please? This is ugly and serves no purpose. (If anyone is truly worried that someone might somehow not realize that the name in bold green refers to the author of the comment/post, then this information can be put on the Welcome page and/or the wiki.)
To generalize: please no unannounced tinkering with the site design!
Apparently it was a technical kludge to allow Google searching by author. There
has been some discussion at the place where issues are reported
[http://code.google.com/p/lesswrong/issues/list].
-1komponisto11y
Kludge indeed; and it is entirely unnecessary: Wei Dai's script
[http://www.ibiblio.org/weidai/lesswrong_user.php?u=komponisto] already makes it
easy to search a user's comment history.
I again urge those responsible to restore the prior appearance of the site (they
can do what they want to the non-visible internals).
9[anonymous]11y
Wei Dai's tools are poorly documented, may not exist in the near future, and are
virtually unknown to non-users.
0komponisto11y
No object-level justification can address the (even) more important meta-level
point, which is that they made changes to the visual appearance of LW without
consulting the community first. This is a no-no!
(And I have no doubt that, were a proper Discussion post created announcing this
idea, LW's considerable programmer readership would have been able to come up
with some solution that did not involve making such an ugly visual change.)
3[anonymous]11y
Design by a committee composed of conflicting vocal minorities? No thanks.
EDIT: Note that I don't disagree with you that this in particular was a bad
design change. I disagree that consulting the community on every design change
is a profitable policy.
I am against banning private_messaging. For comparison, MonkeyMind would be no
loss, although since he last posted yesterday he probably hasn't been banned
yet, and if not him, then there is no case here. private_messaging's manner is
to rant rather than argue, which is somewhat tedious and unpleasant, but nowhere
near a level where ejection would be appropriate.
Looking at his recent posts, I wonder if some of the downvotes are against the
person instead of the posting.
Standing rules are that a user's comments become bannable if they are systematically and significantly downvoted, and the user keeps making a whole lot of the kind of comments that get downvoted. In that case, after giving notice to the user, a moderator can start banning future comments of the kind that clearly would be downvoted, or that did get downvoted, primarily to prevent the development of discussions around those comments (which would incite further downvoted comments from the user).
So far, this rule was only applied to crackpot-like characters that got something like minus 300 points within a month and generated ugly discussions. private_messaging is not within that cluster, and it's still possible that he'll either go away or calm down in the future (e.g. stop making controversial statements without arguments, which is the kind of thing that gets downvoted).
In the meantime, you might find it useful to explore Wei Dai's Power Reader
[http://lesswrong.com/lw/5uz/lesswrong_power_reader_greasemonkey_script_updated/],
which allows the user to raise or lower the visibility of certain authors.
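For illustration, here is a minimal sketch of how a visibility-adjusting userscript of that kind might work, written in TypeScript. This is not Power Reader's actual code; the selectors (".comment", ".author") and the dimming approach are assumptions invented for the example.

    // Hypothetical sketch of an author-visibility userscript (not Power Reader's
    // actual code). The CSS selectors below are guesses, not LessWrong's markup.
    const dimmedAuthors = new Set<string>(["ExampleUser"]); // authors to de-emphasize

    document.querySelectorAll<HTMLElement>(".comment").forEach((comment) => {
      const author = comment.querySelector(".author")?.textContent?.trim() ?? "";
      if (dimmedAuthors.has(author)) {
        comment.style.opacity = "0.3"; // dim rather than hide outright
      }
    });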
I'm going to reduce (or understand someone else's reduction of) the stable AI self-modification difficulty related to Löb's theorem. It's going to happen, because I refuse to lose. If anyone else would like to do some research, this comment lists some materials that presently seem useful.
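(For reference, the theorem in question -- a standard statement from provability logic, not taken from the materials below: if a theory T extending Peano Arithmetic proves "if P is provable in T, then P", then T already proves P. As an axiom schema of the modal logic GL:

    \Box(\Box P \rightarrow P) \rightarrow \Box P

The obstacle for stable self-modification is that an agent which trusts its own proofs wholesale, asserting \Box P \rightarrow P for every P, thereby proves every P, so naive self-trust collapses the theory.)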
The slides for Eliezer's Singularity Summit talk are available here, reading which is considerably nicer than squinting at flv compression artifacts in the video for the talk, also available at the previous link. Also, a transcription of the video can be found here.
On provability logic by Švejdar. A little introduction to provability logic. This and Eliezer's talk are at the top because they're reference material. Remaining links are organized by my reading priority:
On Explicit Reflection in Theorem Proving and Formal Verification by Artemov. What I've read of these papers captures my intuitions about provability, namely that having a proof "in hand" is very different from showing that one exists, and this can be used by a theory to reason about its proofs, or by a theorem prover to reason about self modifications. As Artem
LessWrong/Overcoming Bias used to be a much more interesting place. Note how free of self-censorship Vassar is in that post, talking about sexuality and the norms surrounding it like we would any other topic. Today we walk on eggshells.
A modern post of this kind is impossible, despite its great personal benefit to (in my estimation) at least 30% of the users of this site, and despite making better predictive models of social reality available to all users.
If I understand correctly, the purpose of the self-censorship was to make this site more friendly for women. Which creates a paradox: the idea that one can speak openly with men, but must self-censor with women, is itself kind of offensive to women, isn't it?
(The first rule of Political Correctness is: You don't talk about Political Correctness. The second rule: You don't talk about Political Correctness. The third rule: When someone says stop, or expresses outrage, the discussion of the given topic is over.)
Or maybe this is too much of a generalization. What other topics are we self-censoring, besides sexual behavior and politics? I don't remember. Maybe it is just politics being self-censored, sexual behavior being a sensitive political topic. Problem is, any topic can become political, if for whatever reason "Greens" decide to identify with a position X, and "Blues" with a position non-X.
We are taking the taboo on political topics too far. Instead of avoiding mindkilling, we avoid the topics completely.
Although we have traditional exceptions: it is allowed to talk about evolution and atheism, despite the fact that some people might consider these topics... (read more)
As to political correctness, its great insidiousness lies in the fact that while you can complain about it abstractly, in the manner of a religious person complaining about hypocrites and Pharisees, you can't ever back up your attack with specific examples, since if you do this you are violating sacred taboos, which means you lose your argument by default.
The pathetic exception to this is attacking very marginal and unpopular applications that your fellow debaters can easily dismiss as misguided extremism or even a straw man argument.
The second problem is that as time goes on, if reality happens to be politically incorrect on some issue, any other issue that points to the truth of this subject becomes potentially tainted by the label as well. You actively have to resort to thinking up new models as to why the dragon is indeed obviously in the garage. You also need to have good models of how well other people can reason about the absence of the dragon to see where exactly you can walk without concern. This is a cognitively straining process in which everyone slips up.
I recall my country's Ombudsman once visiting my school for a talk wearing a T-shirt that said "After a close up no one looks ... (read more)
As to political correctness, its great insidiousness lies in the fact that while you can complain about it abstractly, in the manner of a religious person complaining about hypocrites and Pharisees, you can't ever back up your attack with specific examples
My fault for using a politically charged word for a joke (but I couldn't resist). Let's do it properly now: What exactly does "political correctness" mean? It is not just any set of taboos (we wouldn't refer to e.g. religious taboos as political correctness). It is a very specific set of modern-era taboos. So perhaps it is worth distinguishing between taboos in general, and political correctness as a specific example of taboos. Similarities are obvious, what exactly are the differences?
I am just doing a quick guess now, but I think the difference is that the old taboos were openly known as taboos. (It is forbidden to walk in a sacred forest, but it is allowed to say: "It is forbidden to walk in a sacred forest.") The modern taboos pretend to be something else than taboos. (An analogy would be that everyone knows that when you walk in a sacred forest, you will be tortured to death, but if you say: "It is forbidden to w... (read more)
It has been said that even having a phrase for it has reduced its power greatly,
because now people can talk about it, even if they are still punished for doing
so.
True. However, a professor complaining about political correctness abstractly
still has no tools to prevent its spread to the topic of, say, optimal gardening
techniques. Also, if he has a long history of complaining about political
correctness abstractly, he is branded controversial.
I think it was Sailer who said he is old enough to remember when being called
controversial was a good thing, signalling something of intellectual interest,
while today it means "move along nothing to see here".
6Mitchell_Porter11y
Taboo "political correctness"... just for a moment. (This may be the first time
I've ever used that particular LW locution.) Compare the accusations, "you are a
hypocrite" and "you are politically incorrect". The first is common, the second
nonexistent. Political correctness is never the explicit rationale for shutting
someone out, in a way that hypocrisy can be, because hypocrisy is openly
regarded as a negative trait.
So the immediate mechanism of a PC shutdown of debate will always be something
other than the abstraction, "PC". Suppose you want to tell the world that women
love jerks, blacks are dumber than whites, and democracy is bad. People may
express horror, incredulity, outrage, or other emotions; they may dismiss you as
being part of an evil movement, or they may say that every sensible person knows
that those ideas were refuted long ago; they may employ any number of
argumentative techniques or emotional appeals. What they won't do is say, "Sir,
your propositions are politically incorrect and therefore clearly invalid,
Q.E.D."
So saying "anyone can be targeted for political incorrectness" is like saying
"anyone can be targeted for factual incorrectness". It's true but it's vacuous,
because such criticisms always resolve into something more specific and that is
the level at which they must be engaged. If someone complained that they were
persistently shut out of political discussion because they were always being
accused of factual incorrectness... well, either the allegations were false, in
which case they might be rebutted, or they were true but irrelevant, in which
case a defender can point out the irrelevance, or they were true and relevant,
in which case shutting this person out of discussions might be the best thing to
do.
It's much the same for people who are "targeted for being politically
incorrect". The alleged universal vulnerability to accusations of political
incorrectness is somewhat fictitious. The real basis or motive of such criticism
i
Political correctness (without hypocrisy) feels from the inside like a fight against factual incorrectness with dangerous social consequences. It's not just "you are wrong", but "you are wrong, and if people believe this, horrible things will happen".
Mere factual incorrectness will not invoke the same reaction. If one professor of mathematics admits a belief that 2+2=5, and another professor of mathematics admits a belief that women on average are worse at math than men, both could be fired, but people will not be angry at the former. It's not just about fixing an error, but also about saving the world.
Then, what is the difference between a politically incorrect opinion, and a factually incorrect opinion with dangerous social consequences? In theory, the latter can be proved wrong. In real life, some proofs are expensive or take a lot of time; also, many people are irrational, so even a proof would not convince everyone. But I still suspect that in the case of a factually incorrect opinion, opponents would at least try to prove it wrong, and would expect support from experts; while in the case of a politically incorrect opinion, an experiment would be considered dangerous and experts unreliable. (Not completely sure about this part.)
It may feel like that for some people. For me, the 'feeling' is agnostic about
factual incorrectness.
2TheOtherDave11y
I agree that concern about the consequences of a belief is important to the
cluster you're describing. There's also an element of "in the past, people who
have asserted X have had motives of which I disapprove, and therefore the fact
that you are asserting X is evidence that I will disapprove of your motives as
well."
2NancyLebovitz11y
Not just motives-- the idea is that those beliefs have reliably led to
destructive actions.
0TheOtherDave11y
I am confused by this comment. I was agreeing with Viliam that concern about
consequences was important, and adding that concern about motives was also
important... to which you seem to be responding that the idea is that concern
about consequences is important. Have I missed something, or are we just going
in circles now?
2NancyLebovitz11y
Sorry-- I missed the "also" in "There's also an element...."
-1TimS11y
I wish I had another upvote.
Strictly speaking, path dependency may not always be rational - but until we
raise the sanity line high enough, it is a highly predictable part of human
interaction.
0TimS11y
To me, asserting that one is "politically incorrect" is a statement that one's
opponents are extremely mindkilled and are willing to use their power to
suppress opposition (i.e. you).
But there's nothing about being mindkilled or willing to suppress dissent that
proves one is wrong. Likewise, being opposed by the mindkilled is not evidence
that one is not mindkilled oneself.
That dramatically decreases the informational value of bringing up the issue of
political correctness in a debate. And accusing someone of adopting a position
because it complies with political correctness is essentially identical to an
accusation that your opponent is mindkilled - hence it is quite inflammatory in
this community.
3Viliam_Bur11y
Political correctness is also evidence that evidence is being filtered. Some
people say X because it is good signalling, and some people avoid saying non-X
because it is bad signalling. We shouldn't reverse stupidity, but we should
suspect that we have not yet been exposed to the best arguments against X.
0[anonymous]11y
It is just as likely to mean that the opponents are insufficiently mindkilled
regarding the issues in question and may be Enemies Of The Tribe.
1TheOtherDave11y
In my experience, using "political correctness" frequently has this effect, but
mentioning its referent needn't and often doesn't.
0wedrifid11y
You really, really, aren't coming across as sly. I suspect they would go with
the somewhat opposite "convey that you are naive" tactic instead.
2[anonymous]11y
Oh, I didn't mean to imply I was! It's just that when someone talks about
political correctness making arguments difficult, people often get facial
expressions as if he is cheating in some way, so I got the feeling this was:
"You are violating a rule we can't explicitly state you are violating! That's an
exploit, stop it!"
I'm less confident in this than I am in talk of political correctness
being an out-group marker, but I do think it's there. On LW we have different
priors: we see people being naive and violating norms in ignorance, when
outsiders would often see them as violating norms on purpose.
1Emile11y
To me the reaction is more like "You are trying to turn a discussion of facts
and values into whining about being oppressed by your political opponents".
(actually, I'm not sure I'm actually disagreeing with you here, except maybe
about some subtle nuances in connotation)
2[anonymous]11y
If this is so, it is somewhat ironic. From the inside, objecting to political
correctness feels like calling out intrusive political derailment, or discussions
of "should" in a factual discussion about "is".
There are arguments for this: being the sole uptight moral preacher of
political correctness often gets you looks similar to being the one person
objecting to it.
But this leads me to think both are just rationalizations. If this is fully
explained by being a matter of tribal attire and shibboleths, what exactly would
be different? Not that much.
7Emile11y
It may be a rationalization, but it's one that may be more likely to occur than
"that's an exploit"!
I agree there's a similar sentiment going both ways, when a conversation goes
like:
At each step, the discussion is getting more meta and less interesting - from
fact to morality to politics. In effect, complaining about political correctness
is complaining about the conversation being too meta, by making it even more
meta. I don't think that strategy is very likely to lead to useful discussion.
1TimS11y
Viliam_Bur
[http://lesswrong.com/r/discussion/lw/d3h/open_thread_june_1630_2012/6x1l] makes
a similar point. But I stand by my response that the fact that one's opponent is
mindkilled is not strong evidence that one is not also mindkilled.
And being mindkilled does not necessarily mean one is wrong.
0tut11y
If your opponent is mindkilled that probably is evidence that you are mindkilled
as well, since the mindkilling notion attaches to topics and discourses rather
than to individuals.
0wedrifid11y
Evidence yes. But being mind-killed attaches to individual-topic pairs, not the
topics themselves.
0Multiheaded11y
I bet you 100 karma that I could spin (the possibility of) "racial" differences
in intelligence in such a way as to sound tragic but largely inoffensive to the
audience, and play the "don't leave the field to the Nazis, we're all good
liberals right?" card, on any liberal blog of your choosing with an active
comment section, and end up looking nice and thoughtful! If I pulled it off on
LW, I can pull it off elsewhere with some preparation.
My point is, this is not a total information blockade, it's just that fringe
elements and tech nerds and such can't spin a story to save their lives (even
the best ones are only preaching to their choir), and the mainstream elite has a
near-monopoly on charisma.
I hope you realize that by picking the example of race you make my above comment look like a clever rationalization for racism if taken out of context.
Also you are empirically plain wrong for the average online community. Give me one example of one public figure who has done this. If people like Charles Murray or Arthur Jensen can't pull this off you need to be a rather remarkable person to do so in a random internet forum where standards of discussion are usually lower.
As to LW, it is hardly a typical forum! We have plenty of overlap with the GNXP and the wider HBD crowd. Naturally there are enough people who will up vote such an argument. On race we are actually good. We are willing to consider arguments and we don't seem to have racists here either, this is pretty rare online.
Ironically us being good on race is the reason I don't want us talking about race too much in articles, it attracts the wrong contrarian cluster to come visit and it fries the brains of newbies as well as creates room for "I am offended!" trolling.
Even if I granted this point for the sake of argument, it doesn't directly address any part of my description of the phenomena and how they are problematic.
They don't know how, because they haven't researched previous attempts and don't
have a good angle of attack etc. You ought to push the "what if" angle and
self-abase and warn people about those scary scary racists and other stuff... I
bet that high-status geeks can't do it because they still think like geeks. I
bet I can think like a social butterfly, as unpleasant as this might be for me.
Let us actually try! Hey, someone, pick the time and place.
Also, see this article by a sufficiently cautious liberal, an anti-racist
activist no less:
http://www.timwise.org/2011/08/race-intelligence-and-the-limits-of-science-reflections-on-the-moral-absurdity-of-racial-realism/
First, that's basically what I would say in the beginning of my attack. Second,
read the rest of the article. It has plenty of strawmen, but it's a wonderful
example of the art of spin-doctoring. Third, he doesn't sound all that
horrifyingly close-minded, does he?
5fubarobfusco11y
Were it not political, this would serve as an excellent example of a number of
things we're supposed to do around here to get rid of rationalizing arguments
and improper beliefs. I hear echoes of "Is that your true rejection?"
[http://lesswrong.com/lw/wj/is_that_your_true_rejection/] and "One person's
modus ponens is another's modus tollens" ...
"Certain principles that transcend the genome" sounds like bafflegab or
New-Agery as written — but if you state it as "mathematical principles that can
be found in game theory and decision theory, and which apply to individuals of
any sort, even aliens or AIs" then you get something that sounds quite a lot
like X-rationality, doesn't it?
1[anonymous]11y
If you've found such an angle of attack on the issue of race, please share it and
point to examples that have withstood public scrutiny. Spell the strategy out;
show how one can be ideologically neutral and get away with talking about this.
Jensen [http://en.wikipedia.org/wiki/Arthur_Jensen] is no ideologue; he is a
scientist in the best sense of the word.
You should see straight away why Tim Wise is a very bad example. Not only is he
ideologically Liberal, he is infamously so, and I bet many assume he doesn't
really believe in the possibility of racial differences but is merely striking
down a straw man. Remember this is the same Tim Wise who is basically looking
forward to old white people dying so he can have his liberal utopia, and who
writes gloatingly about it. Replace "white people" with a different ethnic group
to see how fucked up that is.
Also, you miss the point utterly: if I'm allowed to be politically incorrect
only when liberal, gee, maybe political correctness is a political weapon! The
very application of such standards means that if I stick to it on LW I am
actively participating in the enforcement of an ideology.
Where does this leave libertarians
[http://www.amazon.com/The-Diversity-Myth-Multiculturalism-Intolerance/dp/0945999429]
(such as, say, Peter Thiel), or anarchists, or conservative rationalists? What
about the non-bourgeois socialists? Do we ever get as much consideration as the
other kinds of minorities get? Are our assessments unwelcome?
0Multiheaded11y
I'll dig those up, but if you want to find them faster, see some of my comments
floating around in my Grand Thread of Heresies
[http://lesswrong.com/lw/9kf/ive_had_it_with_those_dark_rumours_about_our/] and
below Aurini's rant [http://lesswrong.com/lw/ccp/is_race_realism_racist/]. I
have most definitely said things to that effect and people have upvoted me for
it. That's the whole reason I'm so audacious.
-1Multiheaded11y
No! No! No! All you've got to do is speak the language! Hell, the filtering is
mostly for the language! And when you pass the first barrier like that, you can
confuse the witch-hunters and imply pretty much anything you want, as long as
you can make any attack on you look rude. You can have any ideology and use the
surface language of any other ideology as long as they have comparable
complexity. Hell, Moldbug sorta tries to do it.
6formido11y
Moldbug cannot survive on a progressive message board. He was hellbanned from
Hacker News right away. Log in to Hacker News and turn on showdead:
http://news.ycombinator.com/threads?id=moldbug
0Multiheaded11y
Doesn't matter. I've seen him here and there around the net, and he holds
himself to rather high standards on his own blog, which is where he does his
only real evangelizing, yet he gets into flamewars, spews directed bile and just
outright trolls people in other places.
I guess he's only comfortable enough to do his thing for real and at length when
he's in his little fortress. That's not at all unusual, you know.
6[anonymous]11y
There should be a term for the ideological equivalent of Turing completeness
[http://en.wikipedia.org/wiki/Turing_completeness].
7wedrifid11y
This "charisma" thing also happens to incorporate instinctively or actively
choosing positions that lead to desirable social outcomes as a key feature.
Extra eloquence can allow people to overcome a certain amount of disadvantage
but choosing the socially advantageous positions to take in the first place is
at least as important.
4[anonymous]11y
Quite recently even economics
[http://lesswrong.com/r/discussion/lw/d78/link_why_dont_people_like_markets/]
and its intersection with bias have apparently entered the territory of
mindkillers. Economics was always political in the wider world, but considering
this is a community dedicated to refining the art of human rationality, we
can't really afford for such basic concepts to be mindkillers. Can we now?
I mean, how could we explore mechanisms such as prediction markets without that?
How can you even talk about any kind of maximising agents without invoking lots
of econ talk?
0TheOtherDave11y
Yeah, that sounds about right.
Not entirely, but I agree that they are likely far more often self-censored than
those compatible with P. They are less often self-censored, I suspect, than on
other sites with a similar political bias.
I'm skeptical of this claim, but would agree that they are far less often
mentioned here than on other sites with a similar political demographic.
Summary of IRC conversation in the unofficial LW chatroom.
On the IRC channel I noted that there are several subjects on which discourse was better or more interesting on OB/LW in 2008 than today, yet I can't think of a single topic on which LW 2012 has better dialogue or commentary. Another LWer noted that it is in the nature of all internet forums to "grow more stupid over time". I don't think LW is stupider; I just think it has grown more boring, and it definitely isn't a community with a higher sanity waterline today than back then, despite many individuals levelling up formidably in the intervening period.
Some new place started by the same people; before LW was OB, before OB was SL4, before that was... I don't know.
This post is made in the hopes people will let me know about the next good spot.
I wasn't here in 2008, but it seems to me that the emphasis of this site is
moving from articles to comments.
Articles are usually better than comments. People put more work into articles,
and as a reward for this work, the article becomes more visible, and the
successful articles are well remembered and hyperlinked. An article creates a
separate page where one main topic is explored. If necessary, more articles may
explore the same topic, creating a sequence.
Even some "articles" today don't have the qualities of the classical article.
Some of them are just a question / a poll / a prompt for discussion / a reminder
for a meetup. Some of them are just placeholders for comments (open thread,
group rationality) -- and personally I prefer these, because they don't pollute
the article-space.
Essentially we are mixing together the "article" paradigm and the "discussion
forum" paradigm. But these are two different things. An article is a
higher-quality piece of text. A discussion forum is just a structure of
comments, without articles. Both have their place, but if you take a comment and
call it an "article", of course it seems that the average quality of articles
deteriorates.
Assuming this analysis is correct, we don't need much of a technical fix, we
need a semantic fix; that is: the same software, but different rules for
posting. And the rules need to be explicit, to avoid gradual spontaneous
reverting.
* "Discussion" for discussions: that is, for comments without a top-level
article (open thread, group rationality, meetups). It is not allowed to
create a new top-level article here, unless the community (in open thread
discussion) agrees that a new type of open thread is needed.
* "Articles" for articles: that is for texts that meet some quality treshold --
that means that users should vote down the article even if the topic is
interesting, if the article is badly written. Don't say "it's badly written,
but the topic is interesting anyway", but "this topic deserve
4Multiheaded11y
Religion.
0[anonymous]11y
Maybe. We've become less New Atheisty [http://en.wikipedia.org/wiki/New_Atheism]
than we used to be; this is quite clear.
0Multiheaded11y
Fuck yeah.
2Multiheaded11y
There used to be solitary transhumanist visionaries/nutcases, like Timothy Leary
or Robert Anton Wilson (very different in their amount of "rationality"), and
there used to be, say, fans of Hofstadter or Jaynes, but the merging of
"rationalism" and... orientation towards the future was certainly invented in
the 1990s. Ah, what a blissful decade that was.
8Mitchell_Porter11y
Russian communism was a type of rationalist futurism: down with religion, plan
the economy...
0Multiheaded11y
Hmm, yeah. I was thinking about the U.S. specifically, here.
4Raemon11y
Unpack what you mean by self-censorship exactly?
I regularly see people make frank comments about sexuality. There are maybe 4-5
people whose comments would be considered offensive in liberal circles, and many
more whose comments would be at least somewhat off-putting. Whenever the
subject comes up (no matter who brings it up, and which political stripes they
wear), it often explodes into a giant thread of comments that's far more popular
than whatever the original thread was ostensibly about.
I sometimes avoid making sex-related comments until after the thread has
exploded, because most people have already made the same points; they're
just repeating themselves because talking about pet political issues is fun.
(When I do end up posting in them, it's almost always because my own tribal
affiliations are rankled and my brain thinks that engaging with strangers on
the internet is an effective use of my time. I'm keenly aware as I write this
that my justifications for engaging with you are basically meaningless and I'm
just getting some cognitive cotton candy). Am I self-censoring in a way you
consider wrong?
I've seen numerous non-gender political threads get downvoted with a comment
like "politics is the mindkiller" and then fade away quietly. My impression is
that gender threads (even if downvoted) end up getting discussed in detail.
People don't self-censor; that includes criticism of ideas people disagree
with and/or are offended by.
What exactly would you like to change?
4Viliam_Bur11y
I think this observation is not incompatible with a self-censorship hypothesis.
It could mean that topic is somewhat taboo, so people don't want to make a
serious article about it, but not completely taboo, so it is mentioned in
comments in other articles. And because it can never be officially resolved, it
keeps repeating.
What would happen if LW had a similar "soft taboo" about e.g. religion? What if
the official policy were that we want to raise the sanity waterline by
bringing basic rationality to as many people as possible, and since criticizing
religion would make many religious people feel unwelcome, members are
recommended to avoid discussing any religion insensitively?
I guess the topic would appear frequently in completely unrelated articles. For
example in an article about Many Worlds hypothesis someone would oppose it
precisely because it feels incompatible with Bible; so the person would honestly
describe their reasons. Immediately there would be dozen comments about
religion. Another article would explain some human behavior based on
evolutionary psychology, and again, one spark, and there would be a group of
comments about religion. Etc. Precisely because people wouldn't feel allowed to
write an article about how religion is completely wrong, they would express this
sentiment in comments instead.
We should avoid mindkilling like this: if one person says "2+2 is good" and
other person says "2+2 is bad", don't join the discussion, and downvote it. But
if one person says "2+2=4" and other person says "2+2=5", ask them to show the
evidence.
1Richard_Kennaway11y
There is a rather large difference between LW attitudes to religion and to
gender issues.
On religion, nearly everyone here agrees: all religions are factually wrong,
and fundamentally so. There are a few exceptions, but not enough to make a
controversy.
On gender, there is a visible lack of any such consensus. Those with a settled
view on the matter may think that their view should be the consensus, but the
fact is, it isn't.
0OrphanWilde11y
I could write a post, but it wouldn't be in agreement with that one.
I had no interest in the opposite sex in High School. I was nerd hardcore. And
was approached by multiple girls. (I noticed some even in my then-clueless
state, and retrospection has made several more obvious to me; the girl who
outright kissed me, for example, was hard to mistake for anything else.) I gave
the "I just want to be friends" speech to a couple of them. I also, completely
unintentionally, embarrassed the hell out of one girl, whose friend asked me to
join her for lunch because she had a crush on me. She hid her face for sixty
seconds after I came over, so I eventually patted her on the head, entirely
unsure what else to do, and went back to my table.
...yeah, actually, I doubt any of the girls who pursued me in High School ever
tried to take the initiative again.
0[anonymous]11y
I know how you feel, I utterly missed such interest myself back then.
2OrphanWilde11y
Maybe there's a stable reason girls/women don't initiate; earlier onset of
puberty in girls means that their first few attempts fail miserably on boys who
don't yet reciprocate that interest.
9[anonymous]11y
Since you mention this, I find it weird we still group students by their age, as
if date of manufacture was the most important feature of their socialization and
education.
We are forgetting how fundamentally weird it is to segregate children by age in
this way from the perspective of traditional culture.
5Emile11y
Have you read The Nurture Assumption? There's a chapter on that; in the West
someone who's small/immature for his class level will be at the bottom of the
pecking order throughout his education, whereas in a traditional society where
kids self-segregate by age in a more flexible manner, kids will grow from being
the smallest of their group to the largest of their group, so will have a wider
diversity of experience.
It's a pretty convincing reason to not make your kid skip a class.
3[anonymous]11y
Also a good reason to consider home-schooling or even having them enrol in
primary school education one year later.
1Emile11y
As a very rough approximation:
* A normal western kid will mostly get used to a relatively fixed position in
the group in terms of size / maturity
* A normal kid in a traditional village society will experience the whole range
of size/maturity positions in the group
* A homeschooled kid will not get as much experience being in a peer group
It's not clear that homeschooling is better than the fixed position option
(though it may be! But probably for other reasons).
-4Multiheaded11y
The post is decent (although rather US-centric and imprecise), but reading
through the comments there, I'm very grateful for whatever changes the community
has undergone since then. Most of them are unpleasant to read for various
reasons.
2[anonymous]11y
Be specific.
-4Multiheaded11y
and
This is just very very low-status.
8[anonymous]11y
God forbid we have sympathy with low-status males. This might trick some
into thinking their lives and well-being are worth as much as those of real people
[http://lesswrong.com/lw/9kf/ive_had_it_with_those_dark_rumours_about_our/638w]!
Imagine if our society cared for low-status men as much as it cares about the
feelings of low-status women ... the horror!
-1Multiheaded11y
Those comments should've been better formulated and written in a better tone.
Nothing is wrong with most individual sentences, but overall it doesn't paint a
pretty picture.
("The underclasses are starting to get desperate. Your turn." - "Desperate." -
"Desperate." - "Desperate." [http://www.youtube.com/watch?v=Vxi7JRJrod4])
0[anonymous]11y
I can agree with that. But then this is a dispute about levels of writing skill,
not content, no?
-1Multiheaded11y
These are connected. What and how we write influences what and how we think.
0[anonymous]11y
Well, sure, but doesn't this undermine the argument that:
0Multiheaded11y
If you only do it for a day or so, you get just a few corruption points, and may
continue serving the Imperium at the price of but a tiny portion of your soul.
Chaos has great gifts in store for those who refuse to be consumed by it!
0[anonymous]11y
Well done, I had to upvote the reference. :D
2[anonymous]11y
This is plain true in a descriptive sense.
0A1987dM11y
Is it?
-7Multiheaded11y
-4[anonymous]11y
Agreed. The advantage of LW_2012 over OB_2008 is that there are no longer posts
like this [http://www.overcomingbias.com/2010/07/modern-male-sati.html] or this
[http://www.overcomingbias.com/2007/02/is_overcoming_b.html], which promote
horribly incorrect gender stereotypes.
6[anonymous]11y
I wish LW had a stronger lingering influence from Robin Hanson. For any faults
it may have OB is not a boring site.
0[anonymous]11y
That's sort of orthogonal to my point, but yes.
4[anonymous]11y
I flat out disagree; Modern Male Sati
[http://www.overcomingbias.com/2010/07/modern-male-sati.html] is a perfectly OK
article. There is, in my opinion, nothing harmful or unseemly about it, at least
nothing in excess of what we see on other topics here.
Do you have any idea at all what reading this site is like if you have a
different set of preferences? We never make any effort at all to make this site
more inclusive of ideological or value diversity, when it is precisely
this that might help us refine the art more!
-3[anonymous]11y
Here are a handful of my specific objections to Modern Male Sati:
* Hanson is arguing that cryonicists' wives should be accepting of the fact
that their husbands are a) spending a significant portion of their income on
life extension, and b) spending a lot of time thinking about what they are
going to do when their wives are dead, and if they can't accept these things,
they are morally equivalent to widow-burners. This is not only needlessly
insulting, but also an extremely unfair comparison.
* In making this comparison, Hanson is also calling cryonicists' wives selfish
for not letting their husbands do what they want. This is a very male view of
what a long-term relationship should be like, without anything to
counterbalance it. It comes off like a complaint, sort of like, "my wife
won't let me go out to the bar with my male friends."
* Hanson writes: "It seems clear to me that opposition is driven by the
  possibility that it might actually work." This is wrong -- it seems pretty
  obvious that a spouse's doing the "a)" and "b)" I listed above gives valid
  reasons to be frustrated with them, regardless of whether you believe cryonics
  will actually work. Also, this line strikes me as cheap point-scoring for
  cryonics (although I don't know if Hanson intended it this way).
* Hanson implicitly assumes that this is a gender issue, and talks about it as
such, but this isn't necessarily so. What about men who have cryonicist
wives? It's quite possible that there actually is a gender element involved
here, but not even asking the question is what I object to.
* Hanson's tone encourages others to talk about women in a specific way, as an
"other," or an out-group. This is bad for various reasons that should be
somewhat self-evident.
No, I don't think I know what it's like reading this site with a different set
of preferences. That said, I would like to see some value diversity, and I would
welcome some frank discussions of g
5gwern11y
Indian widows would use up a great deal of the husband's estate while living on
for unknown years or decades (the usual age imbalance + the female longevity
advantage). As for thinking about afterwards... well, I imagine they would if
they had had the option, as does anyone who takes out life insurance and isn't
expected to forego any options or treatments.
Assuming the conclusion. The question is: are the outcomes equivalent?... Reading
your comment, I get the feeling you're not actually grappling with the argument
but instead venting about tone and values and outgroups.
Oh, so if the husband agrees not to go out to bars, then cryonics is now
acceptable to you and the wife? A mutual satisfaction of preferences, and given
how expensive alcohol is, it evens the financial tables too! Color me skeptical
that this would actually work...
If this were a religious dispute, like, say, which faith to raise the kids in,
would you be objecting? Is it 'selfish' for a Jewish dad to want to raise his
kids Jewish? If it is, you seem to be seriously privileging the preferences of
wives over husbands on all matters, and if not, it'd be interesting to see you
try to find a distinction which makes some choices of education more important
than cryonics!
Opposition to cryonics really is a gender issue: look at how many men versus
women are signed up! That alone is sufficient (cryonicist wives? rare as hen's
teeth), but actually, there's even better data than that in "Is That What Love
is? The Hostile Wife Phenomenon in Cryonics"
[http://www.depressedmetabolism.com/pdfs/hostile.pdf], by Michael G. Darwin,
Chana de Wolf, and Aschwin de Wolf; look at the table in the appendix.
2[anonymous]11y
It's an unfair comparison because widow-burning comes with strong
emotional/moral connotations, irrespective of actual outcomes. It's like
(forgive me) comparing someone to Hitler, in the sense that even if the outcome
you're talking about is equivalent to Hitler, the emotional reaction that "X is
like Hitler" provokes is still disproportionately too large. (Meta-note: Let's
call this Meta-Godwin's Law: comparing something to comparing something to
Hitler.)
As for the actual outcomes: It seems to me that there is some asymmetry, because
the widow spends her husband's money after he is dead, whereas the cryonicist
does the spending while still alive. But I'll drop this point because, as you
said, I am less interested in the actual argument and more interested in how it
was framed.
Yes; I explicitly stated this in my fifth bullet point.
This is not at all what I'm arguing. I am arguing that Hanson's post
pattern-matches to a common male stereotype, the overly-controlling wife.
Quoting myself, "This is a very male view of what a long-term relationship
should be like, without anything to counterbalance it." I don't think the
exchange you describe would actually work in practice.
Forgive me, I do not understand how this is related to the point I was making. I
don't see the correspondence between this and cryonics. Additionally, this
example is a massive mind-killer for me for personal reasons and I don't think
I'm capable of discussing it in a rational manner. I'll just say a few more
things on this point: I am not accusing cryonicists of being selfish. I am
saying that it is unreasonable for Hanson to accuse wives of being selfish
because of the large, presumably negative impact it has on a relationship. I am
also not attempting to privilege wives' preferences over husbands; apologies for
any miscommunication that caused that perception. I should probably also add
that I am male, which may help make this claim more credible.
Side comment: I h
0[anonymous]11y
Can I write a harshly-worded rebuttal of the idea that promoting stereotypes is
always morally wrong? Or perhaps an essay on how stereotypes are useful?
-1[anonymous]11y
Oh, of course. In fact, before I saw your comment I changed the wording to
"untrue stereotype." Some stereotypes are indeed true and/or useful. What I
object to is assuming that certain stereotypes are true without evidence, and
speaking as if they are true, especially when said stereotypes make strong moral
claims about some group. This is what Hanson does in Modern Male Sati and Is
Overcoming Bias Male?
Edit: Tone is also important. Talking about some group as if they are an
out-group is generally a bad thing. The two posts by Hanson that I mentioned
talk about women as if they are weird alien creatures who happen to visit his
blog.
-2[anonymous]11y
Ah ok! I have no problem with such a proposed norm then.
-2[anonymous]11y
Hold on a minute, though--I'm not sure we actually agree here. I envision this
kind of norm excluding posts like Modern Male Sati and Is Overcoming Bias Male?.
Do you?
0[anonymous]11y
I'm OK with it, as long as we first get to have a fair meta debate about a norm
of excluding interesting posts like Modern Male Sati and the like, and as long
as one is allowed to challenge such norms later if circumstances change.
I mean, what kind of a world would it be if people violated every norm they
disagreed with? As long as the norm-making system is generally OK, it's better
not to sabotage it. And who knows, maybe I would be convinced in such a debate
as well.
-2[anonymous]11y
Fair point. Out of curiosity, what norms would you promote in this meta debate?
Random thought: if we assume a large universe, does that imply that somewhere/somewhen there is a novel that just happens to perfectly resemble our lives? If it does, I am so going to acausally break the fourth wall. Bonus question: how does this intersect with the rules of the internet?
Don't worry, whether you do this or not, there is a novel where you do and a
novel where you don't, without any other distinctions.
8Kaj_Sotala11y
Seems to imply it. Conversely, if you go to the "all possible worlds exist" level
of a multiverse, then each novel (or other work of fiction) in our world
describes events that actually happen in some other world. If you limit yourself
to just the "there's an infinite amount of stuff in our world" multiverse, then
only novels describing events that would be physically and otherwise possible
describe real events.
6Alejandro111y
Jorge Luis Borges, The Library of Babel
2sketerpot11y
That story has always bothered me. People find coherent text in the books too
often, way too often for chance. If the Library of Babel really did work as the
story claims, people would have given up after seeing ten million books of
random gibberish in a row. That just ruined everything for me. This weird
crackfic
[http://www.fanfiction.net/s/5389450/1/The_Finale_of_the_Ultimate_Meta_Mega_Crossover]
is bigger in scope, but much more believable for me because it has a selection
mechanism to justify the plot.
1A1987dM11y
There's some alleged quotation about making your own life a work of art. IIRC
it's been attributed to Friedrich Nietzsche, Gabriele d'Annunzio, Oscar Wilde,
and/or Pope John Paul II.
I am interested in reading on a fairly specific topic, and I would like suggestions. I don't know any way to describe this other than be giving the two examples I have thought of:
Some time ago my family and I visited India. There, among other things, we saw many cows with an extra, useless leg growing out of their backs near the shoulders. This mutation is presumably not beneficial to the cow, but it strikes me as beneficial to the amateur geneticist. Isn't it incredibly interesting that a leg can be the by-product of random mutation? Doesn't that tell us a lot about the way genes are structured - namely that somewhere out there are genes that encode things at nearly the level of whole body parts - some small number of genes corresponds nearly directly to major structural components of the cow. It's not all about molecules, or cells, or even tissues! Genes aren't like a bitmap image - they're hierarchical and structured. Wow!
Similarly, there are stories of people losing specific memory 'segments', say, their personal past but not how to read and write, how to drive, or how to talk. Assuming that these stories are approximately true, that suggests that some forms of memory loss are not random... (read more)
What makes you think that the extra limbs were caused by mutations? I know very
little about bovine biology, but if we were dealing with a human, I would assume
that an extra leg was likely caused by absorption of a sibling in utero. I have
never heard of a mutation in mammals causing extra limb development. (Even
weirder is the idea of a mutation causing an extra single leg, as opposed to an
extra leg pair.) The vertebrate body plan simply does not seem to work that way.
-2tgb11y
Pure speculation! However, this was a widespread occurrence, not just one or two
cows, hinting at some systematic cause. I also don't remember the details, as it
was many years ago and I was quite young - it's possible that there was a pair
of legs.
0J_Taylor11y
Forgive me, for my biology is a bit rusty.
A gene can become more common in a population without being selected for.
However, invoking random genetic drift as an explanation is generally dirty
pool, epistemically speaking. We should expect a gene that creates extra useless
legs to be selected against. (Nutrients and energy spent maintaining the leg
could be better used, the leg becomes more space for parasite invasion, etc.)
Assuming that you were dealing with such cattle, you should assume that some
humans were selecting for them. (No reason necessary. Humans totally do that
sort of thing.)
I cannot think of any examples of a mutation causing extra limb development in
vertebrates. However, certain parasites can totally cause extra limb development
in amphibians. I doubt this is the case, but it is more likely than mutation.
Alternatively, consider that there may be a selection effect on your observations.
I wager that Indian cattle are less likely to be culled for having an extra leg
than American cattle are. I'm just going off of stereotypes here, however.
4pengvado11y
Are you sure that your example is personal vs general, rather than episodic vs
procedural? The latter distinction much more obviously benefits from different
encodings or being connected to different parts of the brain.
0tgb11y
I'm not sure of anything regarding this - all I know is that it tells me a
little bit, not very much, and that it would tell someone better versed in this
field more.
Consider how meritocracy leeches the lower and middle classes of highly capable people, and how this increases the actual differences, both in culture and in ability, between the various parts of a society. It seems to make sense that, ceteris paribus, they will live more segregated from each other than ever before.
Now merit has many dimensions, but let's take the example of a trait that helps you with virtually anything. Highly intelligent people have positive externalities they don't fully capture. Always using the best man for the job should produce more wealth for society as a whole. It also appeals to our sense of fairness. Isn't it better that the most competent man get the job than the one with the highest title of nobility, or from the right ethnic group, or the one who got the winning lottery ticket?
Let us leave aside problems with utilitarianism for the sake of argument and ask does this automatically mean we have a net gain in utility? The answer seems to be no. A transfer of wealth and quality of life not just from the less deserving to the more deserving but from th... (read more)
I see at least two other major problems with meritocracy.
First, a meritocracy opens to talented people not only positions of productive economic and intellectual activity, but also positions of rent-seeking. So while it's certainly great that meritocracy in science has given us von Neumann, meritocracy in other areas of life has at the same time given us von Neumanns of rent-seeking, who have taken the practice of rent-seeking to an unprecedented extent and devised ever more ingenious, intellectually involved, and emotionally appealing rationalizations for it. (In particular, this is also true of those areas of science that have been captured by rent-seekers.)
Worse yet, the wealth and status captured by the rent-seekers are, by themselves, the smaller problem here. The really bad problem is that these ingenious rationalizations for rent-seeking, once successfully sold to the intellectual public, become a firmly entrenched part of the respectable public opinion -- and since they are directly entangled with power and status, questioning them becomes a dangerous taboo violation. (And even worse, as it always is with humans, the most successful elite rent-seekers will be those who honestly inte... (read more)
The Medieval peasant had good reason to believe that Kings weren't really that
different from him as people, merely different in their proper place in society.
Kings, in turn, had an easier time looking at a poor peasant and saying to
themselves that there but for the grace of God go they.
In a meritocracy it is easier to disdain and dehumanize those who fail.
3TheOtherDave11y
Do you mean to suggest that a significant percentage of Medieval peasants in
fact considered Kings to not be all that different from themselves as people,
and that a significant percentage of Medieval Kings actually said that there but
for the grace of God go they with respect to a poor peasant?
Or merely that it was in some sense easier for them to do so, even if that
wasn't actually demonstrated by their actions?
8wedrifid11y
That sounds like something I'd keep to myself as a medieval peasant if I did
believe it. As such it may be the sort of thing that said peasants would tend
not to think.
(Who am I kidding? I'd totally say it. Then get killed. I love living in an
environment where mistakes have less drastic consequences than execution. It
allows for so much more learning from experience!)
1[anonymous]11y
The latter. The former is an empirical claim I'm not yet sure how we could
properly resolve. But there are reasons to think it may have been true.
After all the King is a Christian and so am I. It is merely that God has placed
a greater burden of responsibility on him and one of toil on me. We all have our
own cross to carry.
5Multiheaded11y
I'd say you're looking at the history of feudal hierarchy through rose-tinted
glasses. People who are high in the instrumental hierarchy of decisions (like
absolute rulers) also tend to gain a similarly high place in all other kinds of
hierarchies ("moral", etc) due to halo effect and such. The fact that social or
at least moral egalitarianism logically follows from Christian ideals doesn't
mean that self-identified Christians will bother to apply it to their view of
the tribe.
Remember, the English word 'villain' originally meant 'peasant'/'serf'. It
sounds like a safe assumption to me that the peasants were treated as subhuman
creatures by most people above them in station.
2[anonymous]10y
James A. Donald disagrees [http://jim.com/rights.html].
It makes quite a bit of sense; since incentives matter, I would tend to agree.
Since I know about the past interactions you two have had here, I would
appreciate it if you just focused on the argument cited and didn't snipe at
James' other writings or character.
1Eugine_Nier10y
I'm curious what you think more generally of the article you linked to?
Specifically, the notion of natural rights.
2Mitchell_Porter11y
Someone [http://villains.wikia.com/wiki/Origin_of_the_Word_%22Villain%22] thinks
the usage originates from an upper-class belief that the lower class had lower
standards of behavior.
1Multiheaded11y
Hm... so to clarify your position, would you call, say, Saul Alinsky a
destructive rent-seeker in some sense? Hayden? Chomsky? All high-status among
the U.S. "New Left" (which you presumably - ahem - don't have much patience for)
- yet after reading quite a bit on all three, they strike me as reasonable
people, responsible about what they preached.
(Yes, yes, of course I get that the main thrust of your argument is about
tenured academics. But what you make of these cases - activists who think
they're doing some rigorous social thinking on the side - is quite interesting
to me.)
After a painful evening, I got an A/B test going on my site using Google Website Optimizer*: testing the CSS max-width property (800, 900, 1000, 1200, 1300, & 1400px). I noticed that most sites seem to set it much more narrowly than I did, eg. Readability. I set the 'conversion' target to be a 40-second timeout, as a way of measuring 'are you still reading this?'
Overnight each variation got ~60 visitors. The original 1400px converts at 67.2% ± 11% while the top candidate 1300px converts at 82.3% ± 9.0% (an improvement of 22.4%) with an estimated 92.9% chance of beating the original. This suggests that a switch would materially increase how much time people spend reading my stuff.
(The other widths: currently, 1000px: 71.0% ± 10%; 900px: 68.1% ± 10%; 1200px: 66.7% ± 11%; 800px: 64.2% ± 11%.)
This is pretty cool - I was blind but now can see - yet I can't help but wonder about the limits. Has anyone else thoroughly A/B-tested their personal sites? At what point do diminishing returns set in?
* I would prefer to use Optimizely or Visual Website Optimizer, but they charge just ludicrous sums: if I wanted to test my 50k monthly visitors, I'd be paying hundreds of dollars a month!
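GWO doesn't publish its exact formula, but here is a minimal sketch, in Python, of how such a "chance to beat original" number can be computed, assuming independent binomial arms and a uniform prior. The conversion counts are back-figured from the rates and ~60 visitors per arm quoted above, so they are illustrative only.

# A minimal sketch (not GWO's actual method) of a "chance to beat original"
# estimate: put independent Beta posteriors on each arm's conversion rate and
# Monte-Carlo the probability that the variant's rate exceeds the original's.
import random

def beta_sample(successes, failures):
    # Beta(s+1, f+1) posterior under a uniform prior, via two gamma draws.
    x = random.gammavariate(successes + 1, 1)
    y = random.gammavariate(failures + 1, 1)
    return x / (x + y)

def prob_beats_original(orig_conv, orig_n, var_conv, var_n, draws=100_000):
    wins = sum(
        beta_sample(var_conv, var_n - var_conv)
        > beta_sample(orig_conv, orig_n - orig_conv)
        for _ in range(draws)
    )
    return wins / draws

# Counts back-figured from the quoted rates and ~60 visitors per arm:
# 1400px original: 67.2% of 60 ~= 40; 1300px variant: 82.3% of 60 ~= 49.
print(prob_beats_original(40, 60, 49, 60))
# Prints a value in the nineties; GWO's 92.9% additionally corrects for
# comparing several variants at once (see the Bonferroni sketch below).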
Do you know the size of your readers' windows?
How is the 93% calculated? Does it correct for multiple comparisons?
Given the outside knowledge that these 6 choices are not unrelated, but come
from an ordered space of choices, the result that one value is special and all
the others produce identical results is implausible. I predict that it is a
fluke.
2gwern11y
1. No, but it can probably be dug out of Google Analytics. I'll let the
experiment finish first.
2. I'm not sure how exactly it is calculated. On what is apparently an official
blog, the author [http://www.gwotricks.com/2009/01/multiple-goals.html] says
in a comment: "We do correct for multiple comparisons using the Bonferroni
adjustment. We've looked into others, but they don't offer that much more
improvement over this conservative approach."
Yes, I'm finding the result odd. I really did expect some sort of inverted V
result where a medium sized max-width was "just right". Unfortunately, with a
doubling of the sample size, the ordering remains pretty much the same: 1300px
beats everyone, with 900px passing 1200px and 1100px. I'm starting to wonder if
maybe there's 2 distinct populations of users - maybe desktop users with wide
screens and then smartphones? Doesn't quite make sense since the phones should
be setting their own width but...
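A minimal sketch of the Bonferroni adjustment described in the quote above, assuming the five non-original widths are the comparisons:

# Minimal sketch of the Bonferroni adjustment: with k comparisons running at
# once, each individual test is held to alpha / k so that the family-wise
# false-positive rate stays at most alpha.
def bonferroni_threshold(alpha: float, comparisons: int) -> float:
    return alpha / comparisons

# Five non-original widths are each compared against the 1400px control:
print(bonferroni_threshold(0.05, 5))  # 0.01 per comparison
# Equivalently, a variant must be quite far ahead before the family of five
# tests is significant at the conventional 95% level.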
3Douglas_Knight11y
A bimodal distribution wouldn't surprise me. What I don't believe is a spike in
the middle of a plain. If you had chosen increments of 200, the 1300 spike would
have been completely invisible!
New heuristic: When writing an article for LessWrong assume the casual reader knows about the material covered in HPMOR.
I used to think one could assume they had read the sequences and some other key stuff (Hanson etc.), but looking at debates this simply can't be true for more than a third of current LW users.
I find it pretty easy to pursue a course of study and answer assessment questions on the subject. Experience teaches me that such assessment problems usually tell you how to solve them (either implicitly or explicitly), and I won't gain proper appreciation for the subject until I use it in a more poorly-defined situation.
I've been intending to get a decent understanding of the HTML5 canvas element for a while now, and last week I hit upon the idea of making a small point & click adventure puzzle game. This is quite ambitious given my past experience... (read more)
Which ought not be surprising. Governments are nonhuman environment-optimizing
systems that many people expect to align themselves with human values, despite
not doing the necessary work to ensure that they will.
I just read the new novel by Terry Pratchett and Stephen Baxter, The Long Earth. I didn't like it and don't recommend it (I read it because I loved other books by Pratchett, but there's no similarity here).
There was one thing in particular that bothered me. I read the first 10 reviews of the book that Google returns, and they were generally negative and complained about many things, but never mentioned this issue. Many described Baxter as a master of hard sci fi, which makes it doubly strange.
Here's the problem: in this near-future story, gurer vf n Sbbz... (read more)
A usual idea of utopia is that chores-- repetitive, unsatisfying, necessary work to get one's situation back to a baseline-- are somehow eliminated. Weirdtopia would reverse this somehow. Any suggestions?
As the scope for complex task automation becomes broader, almost all problems
become trivial. Satisfying hard work, with challenging and problem-solving
elements, becomes a rare commodity. People work to identify non-trivial problems
(a tedious process), which are traded for extortionate prices. A lengthy list of
problems you've solved becomes a status symbol, not because of your
problem-solving skills, but because you can afford to buy them.
0NancyLebovitz11y
Another angle: Is it plausible that almost all problems become trivial, or will
increased knowledge lead to finding more challenging problems?
The latter seems at least plausible, considering that the universe is much
bigger than our brains, and this will presumably continue to be true.
Look at how much weirder the astronomical side of physics has gotten.
0NancyLebovitz11y
I don't think you've answered my question, but you've got an interesting idea
there.
What do people buy which would be more satisfying than solving the problems
they've found?
Also, this may be a matter of the difference between your and my temperaments,
but is finding non-trivial problems that tedious?
0sixes_and_sevens11y
As it's the result of about two minutes thought, I'm not very confident about
how internally consistent this idea is.
If finding non-trivial problems is tedious work, I imagine people with a
preference for tedious work (or who just don't care about satisfying problems)
would probably rather buy art/prostitutes/spaceship rides, etc. This is the bit
I find hardest to internally reconcile, as a society in which most work has
become trivially easy is probably post-scarcity.
I personally don't find the search for non-trivial problems all that tedious,
but if I could turn to a computer and ask "is [problem X] trivial to solve?",
and it came back with "yes" 99.999% of the time, I might think differently.
-1Richard_Kennaway11y
"The daily tasks of living give meaning to life. Chopping wood, drawing water:
these are the highest accomplishments. Using machines to do these things empties
life of life itself. To spend your days growing your own food, making with your
own hands everything that you need, living as a natural part of nature like all
the other animals: this is paradise. Contrast the seductive allure of machines
and cities, raping our Mother for our vile enjoyment, waging war against the
imaginary monsters of "disease" and "poverty" instead of accepting the natural
balance of Nature, striving always to see who can most outdo the original sin of
separating from the great apes. "Scientists" see our Mother as a corpse to be
looted, but if we do not turn away from that false path, out of her eternal love
she will wring our neck as any loving mother will do to a deformed child."
Deep green ecology, in other words.
-2Alicorn11y
Modify us to see real chores the way we see fun, addictive task-management
games.
0NancyLebovitz11y
It would be a subtle problem to manage that so that people don't spend excessive
amounts of time on chores.
0TheOtherDave11y
Yes.
Heck, it's a subtle problem to even identify what an "excessive" amount of time
to spend on chores is.
One of the more interesting sources is Heuer's Psychology of Intelligence Analysis. I recommend it, for the unfamiliar political-military examples if nothing else. (It's also good background reading for understanding the argument diagramming software coming from the intelligence community, not that anyone on LW actually uses them.)
I read quite a bit, and I really like some of the suggestions I found in LW. So, my question is: is there any recent or not-so-recent-but-really-good book you would recommend? Topics I'd like read more about are:
evolutionary psychology (I read some Robert Wright, I'd like to read something a bit more solid)
status/prestige theory (Robin Hanson uses it all the time, but is there some good text discussing this?)
I'm happy to read pop-sci, as long as it's written with a skeptical, rationalist mindset. E.g. I liked Linden's The Accidental Mind, but take Gladwell's writings with a rather big grain of salt.
Give him a year or two and he'll have written one.
1Grognor11y
http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/
[http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/] has
a download of one book very close to this topicspace.
0djcb11y
Thanks! The link doesn't seem to work, but I'll check out the book. Did you read
it?
1Grognor11y
No, I haven't read it yet, but it's on my list. Here's another download link
http://dl.dropbox.com/u/33627365/Scholarship/Spent%20Sex%20Evolution%20and%20Consumer%20Behavior.pdf
[http://dl.dropbox.com/u/33627365/Scholarship/Spent%20Sex%20Evolution%20and%20Consumer%20Behavior.pdf]
0djcb11y
Thanks, Grognor!
2djcb11y
I just finished reading it. The start is promising, discussing consumer behavior
from the signaling/status perspective. There's some discussion of the Big Five
personality traits + general intelligence, which was interesting (and which I'll
need to look into a bit more deeply). It shows how these traits influence our buying
habits, and the crazy things people do for a few status points...
The end of the book proposes some solutions to hyper-consumerism, and this part
I did not particularly like -- in a few pages the writer comes up with some
far-far-reaching plans (consumption tax etc.) to influence consumers; all highly
speculative, not likely to ever be realized.
Apart from the end, liked it, writer is quick & witty, and provides food for
thought.
(btw, I couldn't find a good link for acausal trade introduction discussion; I would be grateful for one)
We discussed this at a LW Seattle meetup. It seems like the following is an argument for why all AIs with a decision theory that does acausal trade act as if they have the same utility function. That's a surprising conclusion to me which I hadn't seen before, but it also doesn't seem too hard to come up with, so I'm curious where I've gone off the rails. This argument has a very Will_Newsomey flavor to me.
The Coase theorem [http://en.wikipedia.org/wiki/Coase_theorem] does imply that
perfect bargaining will lead agents to maximize a single welfare function. (This
is what it means for the outcome to be "efficient".) Of course, the welfare
function will depend on the agents' relative endowments (roughly, "wealth" or
bargaining power).
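A toy sketch of that point, with invented outcomes and weights: perfect bargaining picks the outcome that maximizes a weighted sum of the agents' utilities, and shifting the weights (the relative endowments) shifts which outcome wins.

# Toy illustration (all names and numbers invented): perfect bargaining picks
# the outcome maximizing a weighted sum of utilities, where the weights stand
# in for the agents' relative endowments / bargaining power.
outcomes = {
    "everything_red":  {"red_ai": 10, "blue_ai": 0},
    "everything_blue": {"red_ai": 0,  "blue_ai": 10},
    "half_and_half":   {"red_ai": 6,  "blue_ai": 6},
}

def bargain(weights):
    welfare = lambda name: sum(weights[agent] * u
                               for agent, u in outcomes[name].items())
    return max(outcomes, key=welfare)

print(bargain({"red_ai": 0.5, "blue_ai": 0.5}))  # half_and_half
print(bargain({"red_ai": 0.9, "blue_ai": 0.1}))  # everything_red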
2Will_Newsome11y
(Also remember that humans have to "simulate" each other using logic-like prior
information even in the straightforward efficient-causal scenario—it would be
prohibitively expensive for humans to re-derive all possible pooling equilibria
&c. from scratch for each and every overlapping set of sense data. "Acausal"
economics is just an edge case of normal economics.)
-2Will_Newsome11y
Unrelated question: Do you think it'd be fair to say that physics is the
intersection of metaphysics and phenomenology?
2sixes_and_sevens11y
The most glaring problem seems to be how it could deduce the goals of other AIs.
It either implies the existence of some sort of universal goal system, or allows
information to propagate faster than c.
2jsalvatier11y
What I had in mind was that each of the AIs would come up with a distribution
over the kinds of civilizations which are likely to arise in the universe by
predicting the kinds of planets out there (which is presumably something you can
do since even we have models for this) and figuring out different potential
evolutions for life that arises on those planets. Does that make sense?
2sixes_and_sevens11y
I was going to respond saying I didn't think that would work as a method, but
now I'm not so sure.
My counterargument would be to suggest that there's no goal system which can't
arbitrarily come about as a Fisherian Runaway, and that our AI's acausal trade
partners could be working on pretty much any optimisation criteria whatsoever.
Thinking about it a bit more, I'm not entirely sure the Fisherian Runaway
argument is all that robust. There is, for example, presumably no Fisherian
Runaway goal of immediate self-annihilation.
If there's some sort of structure to the space of possible goal systems, there
may very well be a universally derivable distribution of goals our AI could
find, and share with all its interstellar brethren. But there would need to be a
lot of structure to it before it could start acting on their behalf, because
otherwise the space would still be huge, and the probability of any given goal
system would be dwarfed by the evidence of the goal system of its native
civilisation.
There's a plot for a Cthulhonic horror tale lurking in here, whereby humanity
creates an AI, which proceeds to deduce a universal goal preference for
eliminating civilisations like humanity. Incomprehensible alien minds from the
stars, psychically sharing horrible secrets written into the fabric of the
universe.
0jsalvatier11y
Except for the eliminating-humans part, the Cthulhonic outcome seems almost
like the default. We build an AI, proving that it implements our reflectively
stable wishes, and then it still proceeds to pay very little attention
to what we thought we wanted.
One thing that might push back in the opposite direction is that if humans have
heavily path-dependent preferences (which seems pretty plausible), or are selfish
with respect to currently existing humans in some way, then an AI built for our
wishes might not be willing to trade much of humanity away in exchange for
resources far away.
0sixes_and_sevens11y
The Cthulhonic outcome is only the case if there are identifiable points in the
space of possible goal systems to which the AI can assign enough probability to
make them credible acausal trade partners. Whether those identifiable points
exist is not clear or obvious.
When it ruminates over possible varieties of sapient life in the universe, it
would need to find clusters of goals that were (a) non-universal, (b) specific
enough to actually act upon, and (c) so probabilistically dense that they didn't
vanish into obscurity against humanity's preferences, which it possesses direct
observational evidence for.
Whether those clusters exist, and if they do, whether they can be deduced a
priori by sitting in a darkened room and thinking really hard, does not seem
obvious either way. Intuitively, thinking about trying to draw specific
conclusions from extremely dilute evidence, I'm inclined to think they can't,
but I'm not prepared to inject that belief with a super amount of confidence, as
I may very well think differently if I were a billion times smarter.
0jsalvatier11y
I think what matters is not so much the probability of goal clusters, but
something like the expectation of the amount of resources that AIs that have a
particular goal cluster have access to. An AI might think that some specific
goal cluster only has a 1:1000 chance of occurring anywhere, but if it does then
there are probably a million instances of it. I think this is the same as being
certain that there are 1,000 (1million/1,000) AIs with that goal cluster. Which
seems like enough to 'dilute' the preferences of any given AI.
If the universe is pretty big then it seems like it would be pretty easy to get
large expectations even with low probabilities. (let me know if I'm not making
sense)
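Making that arithmetic explicit (the numbers are the hypothetical ones above, not estimates of anything real):

# Expected instances = P(cluster occurs anywhere) * instances if it does.
clusters = {
    "rare_but_numerous": (1 / 1000, 1_000_000),
    "native_civilisation": (1.0, 1),  # observed directly
}
for name, (prob, count) in clusters.items():
    print(name, "-> expected instances:", prob * count)
# rare_but_numerous -> expected instances: 1000.0
# native_civilisation -> expected instances: 1
# On this accounting the single observed civilisation is diluted a
# thousandfold, which is the point being made above.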
0sixes_and_sevens11y
The "million instances" is the size of the cluster, and yes, that would impact
its weight, but I think it's arithmetically erroneous to suggest the density
matters more than the probability. It depends entirely on what those densities
and probabilities are, and you're just plucking numbers straight out of the air.
Why not go the whole hog and suggest a goal cluster that happens nine times out
of ten, with a gajillion instances?
I believe the salient questions are:
* Do such clusters even exist? Can they be inferred from a poverty of evidence
just by thinking about possible agents that may or may not arise in our
universe with enough confidence to actually act upon? This boils down to
whether, if I'm smart enough, I can sit in an empty room, think "what if..."
about examples of something I've never seen before from an enormous space of
possibilities, and come up with an accurate collection of properties for
those things, weighted by probability. There are some things we can do that
with and some things we can't. What category do alien goal systems fall into?
* If they do exist, will they be specific enough for an AI to act upon? Even if
it does deduce some inscrutable set of alien factors that we can't make sense
of, will they be coherent? Humans care a lot about methods of governance, the
moral status of unborn children, and who people should and shouldn't have sex
with, but they don't agree on these things.
* If they do exist, are there going to be many disparate clusters, or will they
converge? If they do converge, how relatively far away from the median is
humanity? If they're disparate, are they completely disjoint goals, or do
they overlap and/or conflict with each other? More to the point, are they
going to overlap and/or conflict with us?
I can't say how much we'd need to worry about a superintelligent TDT-agent
implementing alien goals. That's a fact about the universe for which I don't
have a lot of evid
0Manfred11y
One problem is that, in order to actually get specific about utility functions,
the AI would have to simulate another AI that is simulating it - that's like
trying to put a manhole cover through its own manhole by putting it in a box
first.
If we assume that the computation problems are solved, a toy model involving
robots laying different colors of tile might be interesting to consider. In fact
there's probably a post in there. The effects will be different sizes for
different classes of utility functions over tiles. In the case of infinitely
many robots with cosmopolitan utility functions, you do get an interesting sort
of agreement, though.
0Kindly11y
This outcome is bad because bargaining away influence over the AI's local area
in exchange for a small amount of control over the global utility function is a
poor trade. But in that case, it's also a poor acausal trade.
A more reasonable acausal trade to make with other AIs would be to trade away
influence over faraway places. After all, other AIs presumably care about those
places more than our AI does, so this is a trade that's actually beneficial to
both parties. It's even a marginally reasonable thing to do acausally.
Of course, this means that our AI isn't allowed to help the Babyeaters stop
eating their babies, in accordance with its acausal agreement with the AI the
Babyeaters could have made. But it also means that the Superhappy AI isn't
allowed to help us become free of pain, because of its acausal agreement with
our AI. Ideally, this would hold even if we hadn't made an AI yet.
0jsalvatier11y
I agree with your logic, but why do you say it's a bad trade? At first it seemed
absurd to me, but after thinking about it I'm able to feel that it's the best
possible outcome. Do you have more specific reasons why it's bad?
0Kindly11y
At best it means that the AI shapes our civilization into some sort of twisted
extrapolation of what other alien races might like. In the worst case, it ends
up calculating a high probability of existence for Evil Abhorrent Alien Race
#176 which is in every way antithetical to the human race, and the acausal trade
that it makes is that it wipes out the human race (satisfying #176's desires) so
that if the #176 make an AI, that AI will wipe out their race as well
(satisfying human desires, since you wouldn't believe the terrible, inhuman
monstrous things those #176s were up to).
-2JenniferRM11y
Perhaps it is not wise to speculate out loud in this area until you've worked
through three rounds of "ok, so what are the implications of that idea" and
decided that it would help people to hear about the conclusions you've developed
three steps back. You can frequently find interesting things when you wander
around, but there are certain neighborhoods you should not explore with children
along for the ride until you've been there before and made sure it's reasonably
safe.
Perhaps you could send a PM to Will?
3tenlier11y
Not just going meta for the sake of it: I assert you have not sufficiently
thought through the implications of promoting that sort of non-openness
publicly on the board. Perhaps you could PM jsalvatier.
I'm lying, of course. But interesting to register points of strongest divergence
between LW and conventional morality (JenniferRM's post, I mean; jsalvatier's is
fine and interesting).
I'm feeling fairly negative on lesswrong this week. Time spent here feels unproductive, and I'm vaguely uncomfortable with the attitudes I'm developing. On the other hand there are interesting people to chat with.
Undecided what to do about this. Haven't managed to come up with anything to firm up my vague emotions into something specific.
I was feeling fairly negative on Less Wrong recently. I ended up writing down a
lot of things that bothered me in a half-formed angry Google Doc rant, saving
it...
and then going back to reading Less Wrong a few days later.
It felt refreshing though, because Less Wrong has flaws and you are allowed to
notice them and say to yourself "This! Why are some people doing this! It's so
dumb and silly!"
That being said, I'm not sure that all of the arguments that my straw opponents
were presenting in the half formed doc are actually as weak as I was making them
out to be. But it did make me feel more positive overall simply summing up
everything that had been bugging me at the time.
4shminux11y
Hasn't [http://lesswrong.com/lw/cs8/open_thread_june_115_2012/6t7g] worked
[http://lesswrong.com/r/discussion/lw/d3h/open_thread_june_1630_2012/6u1j] for
Konkvistador [http://lesswrong.com/user/Konkvistador/].
2[anonymous]11y
I'm only posting this to clarify. Old habits do indeed die hard, but I so far
haven't changed my mind despite receiving some interesting email on the topic.
Hopefully this will become more apparent after a month or two of inactivity.
3David_Gerard11y
What are the attitudes you are feeling uncomfortable with?
1mstevens11y
Hmm this is a bit fuzzy, as I said - part of my problem is that I just have a
vague feeling and am having difficulty making it less vague. But:
* an uncomfortable air of superiority
* a bit too much association with right wing politics.
* Some of the PUA stuff is a bit weird (not discussed directly on the site so
much but in related contexts)
4CommanderShepard11y
It would very much help if you could name three examples
[http://lesswrong.com/lw/5kz/the_5second_level/] of each of your complaints,
this would help you see if this really is the source of your unease. It would
also help others figure out if you are right.
Overestimating our rationality and generally feeling that we are clearer
thinkers than anyone ever? Or perhaps being unwilling to update on outside
ideas, as Konkvistador recently complained?
There is a lot of right wing politics on the IRC channel, but overall I don't
think I've seen much on the main site. On net, the site's demographics are, if
anything, remarkably left-wing
[http://lesswrong.com/lw/bag/george_orwells_prelude_on_politics_is_the_mind/66r3].
The PUA stuff may come off as weird due to inferential distances, or to people
accumulating strange ideas because they can't sanity-check them. Both are the
result of the community norm that now seems to strongly avoid gender issues,
because we've proven time and again to be incapable of discussing them as we do
most other things. This is a pattern that seems to go back to the old OB days.
2EStokes11y
I use LW casually and my attitude towards it is pretty neutral/positive, but I
recently got downvoted something like 10 times in past comments, it seems. A
karma loss of 5%, which is a lot given how much karma I have relative to how
long I've been here. I didn't even get into a big argument or anything; the
back-and-forth was pretty short. So my attitude toward LW is very meh right now.
Sorry, I sort of just wanted to say this somewhere. ugh :/
1Bruno_Coelho11y
The fact that LW is a forum about rationality/science doesn't mean it's good for
you all the time. Strategically speaking, redefine your goals.
Or maybe the quality of posts is not the same as it was before.
After a week long vacation at Disney World with the family, it occurs to me there's a lot of money to be made in teaching utility maximization to families...mostly from referrals by divorce lawyers and family therapists.
I'm trying to memorise mathematics using spaced repetition. What's the best way to transcribe proofs onto Anki flashcards to make them easy to learn? (ie what should the question and answer be?)
When it comes to formulating Anki cards it's good to have the 20 rules from
Supermemo [http://www.supermemo.com/articles/20rules.htm] in mind.
The important thing is to understand before you memorize. You should never try
to memorize a proof without understanding it in the first place.
Once you have understood the proof, think about what's interesting about it.
Ask questions like: "What axioms does the proof use?" "Does the proof
use axiom X?" Try to find as many questions with clear answers as you can. Being
redundant is good.
If you find yourself asking a certain question frequently, invent a shorthand
for it: axioms(proof X) can replace "What axioms does the proof use?"
If you really need to remember the whole proof then memorize it step by step.
Proof A:
1. Do A
2. Do B
becomes 2 cards:
--------------------------------------------------------------------------------
Proof A:
1. [...]
--------------------------------------------------------------------------------
Proof A:
1. Do A
2. [...]
--------------------------------------------------------------------------------
If you have a long proof that could mean 9 steps and 9 cards.
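A minimal sketch of that scheme in code, assuming you would import the output as plain-text cards; the tab-separated front/back format is one that Anki's importer accepts:

# Sketch of the incremental scheme above: card n shows steps 1..n-1 and asks
# for step n, so a 9-step proof yields 9 cards.
def proof_cards(name, steps):
    cards = []
    for i in range(len(steps)):
        shown = [f"{j + 1}. {s}" for j, s in enumerate(steps[:i])]
        front = "\n".join([f"{name}:"] + shown + [f"{i + 1}. [...]"])
        back = f"{i + 1}. {steps[i]}"
        cards.append((front, back))
    return cards

# The two-step example from above:
for front, back in proof_cards("Proof A", ["Do A", "Do B"]):
    # One tab-separated line per card; <br> keeps line breaks inside a field.
    print(front.replace("\n", "<br>") + "\t" + back)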
0Oscar_Cunningham11y
Thanks!
2dbaupp11y
I've been doing something similar (maths in an Anki deck), and I haven't found a
good way of doing so. My current method is just asking "Prove x" or "Outline a
proof of x", with the proof wholesale in the answer, and then I run through the
proof in my head calling it "Good" if I get all the major steps mostly correct.
Some of my cards end up being quite long.
I have found that being explicit with asking for examples vs definitions is
helpful: i.e. ask "What's the definition of a simple ring?" rather than "What's
a simple ring?".
0ChristianKl11y
"def(simple ring)" is more efficient than "What's the definition of a simple
ring?"
0dbaupp11y
I find that having proper sentences in the questions means I can concentrate
better (less effort to work out what it's asking, I guess), but each to their
own.
0ChristianKl11y
If you have 50 cards in the style "def(...)", then it doesn't take any
effort to work out what they're asking anymore.
Rereading "What's the" over a thousand times wastes time. When you do Anki for
longer periods of time, reducing the amount of time it takes to answer a card is
essential.
0D_Malik11y
A method that I've been toying with: dissect the proof into multiple simpler
proofs, then dissect those even further if necessary. For instance, if you're
proving that all X are Y, and the proof proceeds by proving that all X are Z and
all Z are Y, then make 3 cards:
* One for proving that all X are Z.
* One for proving that all Z are Y.
* One for proving that all X are Y, which has as its answer simply "We know all
X are Z, and we know all Z are Y."
That said, you should of course be completely certain that memorizing proofs is
worthwhile [http://www.gwern.net/Spaced%20repetition#what-to-add]. Rule of
thumb: if there's anything you could do that would have a higher ratio of
awesome to cost than X, don't do X before you've done that.
37. Any possibility automatically becomes real whenever someone justifiably expects that possibility to obtain.
Discussion: Just expecting something isn't enough, so crazy people don't make crazy things happen. The anticipation has to be a reflection of real reasons for forming the anticipation (a justified belief). Bad things can be expected to happen as well as good things. What actually happens doesn't need to be understood in detail by anyone, the expectation only has to be... (read more)
A poll, just for fun. Do you think that the rebels/Zionists in The Matrix were (mostly or completely) cruel, deluded fundamentalists committing one atrocity after another for no good reason, and that in-universe their actions were inexcusable?
I agree (the franchise established itself as rather one-dimensional... in about
the first 40 minutes) - but hell, I get into discussions about TWILIGHT, man.
I'm a slave to public discourse.
0DanArmak11y
Karma sink.
7Multiheaded11y
Upvote for NO.
3Multiheaded11y
Upvote for YES.
0[anonymous]11y
Wow. That sequence was drastically less violent than I remembered it being. I
noticed (for I believe the first time) that they actually made some attempt to
avoid infinite ammo action movie syndrome. Also I must have thought the
cartwheel bit was cool when I first saw it, but now it looks quite ridiculous
and/or dated.
Maybe it's time for a rewatch.
What is the meaning of the three-digit codes in American university courses? Such as: "Building a Search Engine (CS101)", "Crunching Social Networks (CS215)", "Programming A Robotic Car (CS373)" currently in Udacity.
Seems to me that 101 is always the introduction to the subject. But what about the other numbers? Do they correspond to some (subject specific) standard? Are they arbitrary (perhaps with general trend to give more difficult lessons higher numbers)?
The first digit is the most important. It indicates the "level" of the course:
100/1000 courses are freshman level, 200/2000 are sophomore level, etc. There is
some flexibility in these classifications, though. Examples: My undergraduate
university used 1000 for intro level, 2000 for intermediate level, 4000 for
senior/advanced level, and 6000 for graduate level. (3000 and 5000 were reserved
for courses at a satellite campus.) My graduate university uses 100, 200, 300,
400 for the corresponding undergraduate year levels, and 600, 700, 800 for
graduate courses of increasing difficulty levels.
The other digits in the course number often indicate the rough order in which
courses should be taken within a level. This is not always the case; sometimes
they are just arbitrary, or they may indicate the order in which courses were
added to the institute's offerings.
In general, though the numbers indicate the levels of the courses and the order
in which they "should" be taken, students' schedules need not comply precisely
(outside of course-specific prerequisite requirements).
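A hedged sketch of that convention (institutions vary, as noted above): the leading digit of a 3- or 4-digit course number gives the year level.

# Sketch of the convention above: the leading digit of a 3- or 4-digit
# course number indicates the year level.
import re

def course_level(code: str):
    match = re.search(r"(\d)\d{2,3}$", code)
    return int(match.group(1)) if match else None

for code in ["CS101", "CS215", "CS373", "MATH6000"]:
    print(code, "-> level", course_level(code))
# CS101 -> level 1, CS215 -> level 2, CS373 -> level 3, MATH6000 -> level 6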
4sixes_and_sevens11y
It varies from institution to institution, but generally the first number
indicates the year you're likely to study it, so "Psychology 101" is the first
course you're likely to study in your first year of a degree involving
psychology, which is why it's the introduction to the subject. The numbering
gets messy for a variety of reasons.
I should point out I'm not an American university student, but this style of
numbering system is becoming prevalent throughout the English-speaking world.
0Nornagest11y
101's stereotypically the introduction to the course, but this sort of thing
actually varies quite a bit between universities. Mine dropped the first digit
for survey courses and introductory material; survey courses were generally
higher two-digit numbers (e.g. Geology 64, Planetary Geology), while
introductory courses were more often one-digit or lower two-digit numbers (e.g.
Math 3A, Introduction to Calculus). Courses intended to be taken in sequence had
a letter appended. Aside from survey courses, higher numbers generally indicated
more advanced or specialized classes, though not necessarily more difficult
ones.
Three digits indicated an upper-division (i.e. nominally junior- or
senior-level) or graduate-level course. Upper-division undergrad courses were
usually 100-level, and the 101 course was usually the first class you'd take
that was intended only for people of your major; CS 101 was Algorithms and
Abstract Data Types for me, for example, and I took it late in my sophomore
year. Graduate courses were 200-level or higher.
We often hear that? What do you mean by professional philanthropy here?
1maia11y
I mean the general line of reasoning that goes, "Go do the highest-paying job
you can get and then donate your extra money to AMF or other highly effective
charities." The most oft-cited high-paying job seems to be to work on Wall
Street or some such.
I would like to try some programming in Lisp, could you give me some advice? I have noticed that in the programming community this topic is prone to heavy mindkilling, which is why I ask on LW instead of somewhere else.
There are many variants of Lisp. I would prefer to learn one that is really used these days for developing real-world applications. Something I could use to make e.g. a Tetris-like game. I will probably need some libraries for input and output; which ones do you recommend? I want a free software that works out of the box; preferably on a Win... (read more)
My research suggests Clojure is a lisp-like language most suited to your requirements. It runs on the JVM so should be relatively low hassle on Windows. I believe there's some sort of Eclipse support but I can't confirm it.
If you do end up wanting to do something with Common Lisp, I recommend Practical Common Lisp as a good free introduction.
Well, if your goal is trying it out for educational purposes on Windows, you
could start with DrRacket. http://racket-lang.org/ [http://racket-lang.org/]
It is a reasonable IDE, it has some GUI libraries included, open-source,
cross-platform, works fine on Windows.
Racket is based on the Scheme language (which is part of the Lisp language
family). It has a mode for Scheme as described in the R6RS or R5RS standards,
and it has a few not-fully-compatible dialects.
I use Common Lisp, but not under Windows. Common Lisp has more
cross-implementation libraries, it could be useful sometimes. Probably, EQL is
the easiest to set up under Windows (it is ECL, a Common Lisp implementation,
merged with Qt for GUI; I remember there being a bundled download). Maybe
CommonQt or Cells-GTK would work. I remember that some of the Common Lisp
package management systems have significant problems under Windows or require
either Cygwin or MSys (so they can use tar, gzip, mkdir etc. as if they were on
a Unix-like system).
1Viliam_Bur11y
My goals are: 1) to get the "Lisp experience" with minimum overhead; and 2) to
use the best available tools.
And I hope these two goals are not completely contradictory. I want to be able
to write my own application on my computer conveniently
[http://wiki.lesswrong.com/wiki/Trivial_inconvenience] after a few minutes, and
to fluently progress to more complex applications. On the other hand, if I
happen to later decide that Lisp is not for me, I want to be sure it was not
only because I chose the wrong tools.
Thanks for all the answers! I will probably start with Racket.
2Pavitra11y
For a certain value of "the Lisp experience", Emacs may be considered more or
less mandatory. In order to recommend for or against it I would need more
precise knowledge of your goals.
3Viliam_Bur11y
I tried Emacs and decided that I dislike it. I understand the reason why it is
like that, but I refuse to lower my user interface expectations that far.
Generally, I have noticed the trend that software which is praised as superior
often comes with a worse user interface, or ignores some other part of user
experience. I can understand that software with a smaller userbase cannot put
enough resources into its non-critical parts. That makes sense. But I suspect
there later appears a mindkilling train of thought, which goes like this: "Our
software is superior. Our software does not have feature X. Therefore, not
having feature X is an advantage, because ..." As in: we don't need a
21st-century-style user interface, because good programmers don't need such
things.
By wanting a "Lisp experience" I mean I would like to experience (or falsify the
existence of) the nirvana frequently described by Paul Graham
[http://paulgraham.com/icad.html]. Not to replicate 1:1 Richard Stallman's
working conditions in the 1980s. :D
A perfect solution would be to combine the powerful features of Lisp with the
convenience of modern development tools. I emphasize the convenience for
pragmatic reasons, but also as a proxy for "many people with priorities similar
to mine are using it".
3gwern11y
Consider an equilibrium of various software products none of which are strictly
superior or inferior to each other. Upon hearing that the best argument someone
can make for software X is that it has feature Y (which is unrelated to UI),
should your expectation of good UI go up or go down?
(To try it a different way: suppose you are in a highly competitive company like
Facebooglazon and you meet a certain programmer who is the rudest, most arrogant
son of a bitch you ever met - yet he is somehow still employed there. What
should you infer about the quality of the code he writes?)
0Viliam_Bur11y
This is a nice example of how, with different models, the same evidence can be
evaluated differently.
My model is that programming languages are used for making programs, and for
languages used in real production part of that effort goes to the
positive-feedback loop of creating better tools and libraries for given
language. So if some language makes the production easier -- people like Paul
Graham suggest that Lisp is 10 times more productive than other languages --, I
would expect better everything.
In other words, the "equilibrium of various software products none of which are
strictly superior or inferior to each other" is an evidence against the claim
that a language X is 10 times more productive than other languages. Or if it is
more productive in some areas, then it must have a huge disadvantage somewhere
else.
Fast, reliable, undocumented, obfuscated. :D
Or he is really employed for some other reason than writing code.
0gwern11y
Yup! It's the old 'if you're so good, why aren't you rich' question in more
abstract guise. Of course, in the real world, new languages are being developed
all the time, so a workable answer is already 'I'm not rich because I'm so new,
but I'm getting richer'. This is the sort of answer an up-and-coming language
like Haskell or Scala or Go can make.
0Risto_Saarelma11y
My current understanding of present IDEs is that they are both very
language-bound and need a huge amount of work to become truly usable. That means
that for any language that doesn't currently enjoy large industry acceptance, I
basically don't expect to have any sort of modern usable IDE.
I'm not personally hung up on the Emacs thing, but then again my recipe for a
development environment is Your Favorite General Purpose Text Editor, printf
statements for debugging code, a console to read the printf output, and a
read-eval-print-loop for the programming language if it has one (Lisp does).
If most of the people who are in a position to develop modern development tools
for Lisp are in fact happy using Emacs and SLIME, the result is going to be that
there won't be much of a non-Emacs development environment ecosystem for Lisp.
And it's unlikely that there are any unearthed gems that turn out to be
outstanding modern Lisp IDEs if IDEs really do require lots and lots of work and
a wide user base giving feedback to be truly useful. Though Lisp does have
commercial niche companies [http://www.franz.com/products/allegro-common-lisp/]
who are still around and who have had decades of income to develop whatever
proprietary tools they are using. I've no idea what kind of stuff they have got.
Speaking of the general Lisp experience, you might also want to take a look at
Factor [http://factorcode.org/]. It's primarily modeled after Forth instead of
Lisp, but it basically matches all of Graham's "What made Lisp different"
checklist. The code is data, the metaprogramming machinery is extensive and so
on. The idiom is also somewhat more weird than Lisp's, and the programs are
constantly threatening to devolve into a soup of incomprehensible three-letter
opcodes, but I found the thing fun to work with. Oh, and the only IDE Factor has
is Emacs-based, unless you count the language REPL; I think its ecosystem is
small enough that I haven't missed any significant competitors.
0vi21maobk9vp11y
Well, for me Vim bindings are something that (after some learning) started to
make a lot of sense. Emacs (after the same amount of learning) didn't make that
much sense... As text editors, modern IDEs are still weaker than either of them;
the choice of what to forfeit usually has to be made - sometimes you can embed
your editor inside the IDE instead of using the native one, though.
To satisfy your curiosity, I guess you could try out the free-of-charge Allegro
Common Lisp version. It is a personal no-deployment no-commercial-use
no-commercial-research no-university-research no-government-research edition. I
never looked at it because I am OK with Vim and I don't want to have something
dependent on ACL that I cannot use in my day-job projects. Neither is a good
reason for you not to try it...
0dbaupp11y
Many people, myself included, say that most things that aren't emacs (or vim,
depending on their religion...) have bad user interfaces. The keyboard-only way
of working is very nice if you can get the hang of it. (Emacs is hard to begin
with.)
with.)
That said, SLIME [http://common-lisp.net/project/slime/] is basically the
canonical Common Lisp editing environment, and the environments for other
dialects emulate many of its features (e.g. Geiser for Racket
[http://www.nongnu.org/geiser/]). Were you using one of those when you were
using Emacs with a Lisp?
0Viliam_Bur11y
I used Emacs very briefly, only as a text editor. The learning curve is horrible
-- my impression is that you need to memorize dozens of new keyboard shortcuts
(and unlearn dozens of keyboard shortcuts more or less consistently accepted by
many other applications, as well as clicking the right mouse button for a
context menu).
There seem to be some interesting features, but again only for those who
memorize the keyboard shortcuts. And the whole design seems like a character
terminal emulator.
So the problem is that it looks interesting, but one has to pay a huge price up
front. That would make sense if I were already convinced that Emacs is the only
editor and Lisp the only programming language I will use, but I just want to try
them.
By the way, what exactly is so great about "the keyboard-only way of working"?
Is it the speed of typing? I usually spend more time thinking about the problem
than typing. Are some powerful features invoked by keyboard combos? I would
prefer them to be available from the menu and context menu. Or both from menu
and as a keyboard shortcut, so I can memorize the frequently-used ones, but not
the rest. (Maybe this is possible in Emacs too. If yes, the tutorial should
mention it.)
To me it now seems that learning Lisp with Emacs would be having two problems
instead of one. More precisely, to make the learning curve even worse.
5Zack_M_Davis11y
There's a solution to the unfamiliar shortcuts problem: turn on CUA mode
[http://www.gnu.org/software/emacs/manual/html_node/emacs/CUA-Bindings.html#CUA-Bindings].
CUA mode enables the familiar Ctrl-Z, Ctrl-X, Ctrl-C, Ctrl-V for undo, cut,
copy, and paste, respectively. For basic text navigation, I use Emacs mostly
like an editor with standard bindings (the aforementioned undo-cut-copy-paste,
arrow keys to move by character, Control plus arrow keys to move by word, &c.).
There are other things to learn, but the transition isn't really that bad.
0dbaupp11y
Speed, features, and working well with many languages (i.e. people have written
Emacs modes for most languages).
Having everything on the keyboard means that you don't have to do so many
context switches (which are annoying, and I find they can disrupt my train of
thought). As an example, in most word processors, bolding text with Shift+arrow
keys then Ctrl+B is much much nicer than moving to the mouse, carefully
selecting the text and then going up to the menu bar to click the little icon.
And Emacs has been around for decades, so there are hundreds of little (or not
so little) packages that do anything and everything, e.g. editing a file via SSH
transparently is pretty nice.
Having one environment for writing a LaTeX report, a Markdown file, a C,
Haskell, Python or Shell (etc) program is nice because the basic shortcuts are
the same and every environment is guaranteed to act how you expect, so, for
example, doing a regex string replacement is the same process.
And on the note of keyboard combos, they are something that you end up learning
by muscle memory: it takes a little while, but they become second nature, to the
point where you can't say what a shortcut is outright and can only work it out
by actually performing the action.
(That said, Emacs/Vim isn't for everyone: maybe the time investment is too
large, or it doesn't really suit one's way of working.)
0vi21maobk9vp11y
Well, I have a paid job where I write in Common Lisp, and I use Vim, and both
statements (paid job with CL and Vim usage) have been true for multiple years.
It is a good idea to know there are different options and have a look at them,
of course.
It is a good idea to look at Cream-for-Vim, too - it has Vim at its core, and
most modes allow you to use Vim bindings for a while, but the default bindings
are more consistent with modern conventions.
2vi21maobk9vp11y
There are no "best available tools" without specified target, unfortunately.
When you feel that Racket constraints you, come back to the open thread of week,
and ask what you would like to see in it - SBCL has better performance, ECL is
easier to use for standalone executables, etc. Also, maybe someone would
recommend you an in-Racket dialect that would work better for you for those
tasks.
3Risto_Saarelma11y
Peter Norvig's out-of-print Paradigms of Artificial Intelligence Programming:
Case Studies in Common Lisp can be interesting reading. It develops various
classic AI applications like game tree search and logic programming, making
extensive use of Lisp's macro facilities. (The book is 20 years old and
introductory; it's not recommended for learning anything very interesting about
artificial intelligence.) Using the macro system for metaprogramming is a big
deal with Lisp, but a lot of material for Scheme in particular doesn't deal with
it at all.
The already mentioned Clojure seems to be where a lot of real-world development
is happening these days, and it's also innovating on the standard syntax
conventions of Common Lisp and Scheme in interesting ways. Clojure will
interface with Java's libraries for I/O and multimedia. Since Clojure lives in
the Java ecosystem, you can basically start with your preconceptions about
developing for the JVM and go from there to guess what it's like. If you're OK with
your games ending up JVM programs, Clojure might work.
For open-source games in Lisp, I can point you to David O'Toole's projects
[http://dto.github.com/notebook/]. There are also some roguelikes developed in
Lisp [http://roguebasin.roguelikedevelopment.org/index.php/Common_Lisp].
I'm Xom#1203 on Diablo 3. I have a lvl 60 Barb and a lvl ~35 DH. I'm willnewsome on chesscube.com, ShieldMantis on FICS. I like bullet 960 but I'm okay with more traditional games too. Currently rated like 2100 on chesscube, 1600 or something on FICS. Rarely use FICS. I'd like to play people who are better than me; it gives me incentive to practice.
I play at chess.com and you are much better than me.
0Will_Newsome11y
Oh sweet, chess.com used to be only correspondence games. I'll probably get an
account there, it'll probably be called "willnewsome", add me if you wish. ETA:
Done.
0dbaupp11y
It might be: there are few chess players on LW who read the open threads and
also are willing to commit the time/have the desire to play an (essentially)
random person from the internet.
0jsalvatier11y
Not gaming related, but I've got a question that seems like it would appeal to
you above
[http://lesswrong.com/r/discussion/lw/d3h/open_thread_june_1630_2012/6u2o].
I consider tipping to be a part of the expense of dining - bad service bothers me, but not tipping also bothers me, as I don't feel like I've paid for my meal.
So I've come up with a compromise with myself, which I think will be helpful for anybody else in the same boat:
If I get bad service, I won't tip (or tip less, depending on how bad the service is). But I -will- set aside what I -would- have tipped, which will be added to the tip the next time I receive good service.
Double bonus: When I get bad service at very nice restaurants, the waiter at the Steak and Shake I more regularly eat at (it's my favored place to eat) is going to get an absurdly large tip, which amuses me to no end.
What would a poster designed to spread awareness of a Less Wrong meetup look like? How can it appeal to non-technophiles / students of the social sciences?
I don't follow and understand the "timeless decision" topic on LW, but I have a feeling that a significant part of that is one agent predicting what other agent would do, by simulating their algorithm. (This is my very uninformed understanding of the "timeless" part: I don't have to wait until you do X, because I can already predict if you would do X, and behave accordingly. And you don't have to wait for my reaction, because you can already predict it too. So let's predict-cause each other to cooperate, and win mutually.)
Thinking in terms of "simulating their algorithm" is convenient for us because
we can imagine the agent doing it, and for certain problems a simulation is
sufficient. However, the actual process involved is any reasoning at all based
on the algorithm. That includes simulations, but also creating mathematical
proofs based on the algorithm that allow generalizable conclusions about things
that the other agent will or will not do.
An agent that wishes to facilitate cooperation - or that wishes to prove
credible threat - will actually prefer to structure their own code such that it
is as easy as possible to make proofs and draw conclusions from that code.
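A minimal sketch of that idea in Python (the names and the equality test are my own illustration, not any specific proposal: checking literal source equality is the crudest possible way to draw conclusions from code, whereas the serious versions prove theorems about the opponent's source, as described above):

    import inspect

    def clique_bot(opponent_source: str) -> str:
        # Cooperate exactly when the opponent's source is identical to mine.
        # Syntactic equality is the easiest property another agent can verify,
        # so an agent written this way is maximally legible: one string
        # comparison settles what it will do.
        my_source = inspect.getsource(clique_bot)
        return "C" if opponent_source == my_source else "D"

Two copies of this program cooperate with each other; anything else, even a semantically identical but differently formatted variant, gets defection. That brittleness is exactly the intractability problem mentioned above, and it is why code that is easier to analyze buys more cooperation.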
0Viliam_Bur11y
It's precisely this part which is impossible in the general case. You can reason
only about a subset of algorithms which are compatible with your
conclusion-making algorithm.
Proof:
1) It is impossible, in the general case, to decide whether a program will stop
computing in a finite time [http://en.wikipedia.org/wiki/Halting_problem].
Proof by contradiction: Let's suppose we have a method
"Prophet.willStop(program)" that predicts whether a given program will stop.
Consider the following program, which behaves contrary to whatever the
prediction says about it:
program Contrarian {
    if (Prophet.willStop(Contrarian)) {
        loop_forever();
    } else {
        // do nothing
    }
}
2) For any behavior "B", take a function "f" for which you cannot predict
whether it will stop. Will the following program exhibit the behavior "B"?
program Mysterious {
    f();
    B();
}
6wedrifid11y
Yes, which is why:
Some agents really are impossible to cooperate with even when it would be
mutually beneficial. Either because they are irrational in an absolute sense or
because their algorithm is intractable to you. That doesn't prevent you from
cooperating with the rest.
2Viliam_Bur11y
Interesting. So a self-modifying agent might want to modify their own code to be
easier to inspect, because this could make other agents trust them and cooperate
with them. Two questions:
What would be the cost of such a modification? You cannot just rewrite any
algorithm into a more legible form. If the agent modifies themselves into e.g. a
regular expression (just joking), they will be able to do only what regular
expressions are able to do, which may not be enough for a complex situation.
Limiting one's own cognitive abilities seems like a dangerous move.
Even if I want to reprogram myself to make myself more legible, I need to know
what algorithm the other party will use to read my code. How can I guess it? Or
perhaps it is enough to meet the other agent, explain our reading algorithms to
each other, and only then self-modify to become compatible with them? I am
suspicious of whether such a process can be iterated -- my intuition is that by
conforming to one agent's code-analysis routines, I lose part of my abilities,
which may make me unable to conform to another agent's code-analysis routines.
0Vladimir_Nesov11y
Any decision restricts what happens, for all you knew before making the
decision, but doesn't necessarily make future decisions more difficult.
Coordinating with other agents requires deciding some properties of your
behavior, which may as well constrain only the actions that need to be
coordinated with other agents.
For example, strategy is a kind of generalized action, which could take the form
of a straightforwardly represented algorithm chosen for a certain situation (to
act in response to possible future observations). After a strategy is played
out, or if some condition indicates that it's no longer applicable, decision
making may resume its normal, more general operation, so the mode of operation
where your behavior becomes more tractable may be temporary. If this strategy
includes a procedure for deciding whether to cooperate with similarly chosen
strategies of other agents, it will do the trick, without taking on much more
responsibility than a single action. It will just be the kind of action that's
smart enough to be able to cooperate with other agents' actions.
2Viliam_Bur11y
So it is not necessary to change my whole code, just to create a new transparent
"cooperation routine" and let it guide my behavior, with a possibility of ending
this routine in case the other agents stop cooperating or something unexpected
happens. That makes sense.
(Though in real life I would be rather afraid to self-modify in this way,
because an imperfection in the cooperation routine could be exploited. Even if
other agents' cooperation routines contain no bug exploits for my routine, maybe
they have already created some hidden sub-agents that will try to find and
exploit bugs in my routine.)
3Vladimir_Nesov11y
A real-life analogy is a contract, with a powerful government enforcing your
precommitments.
0wedrifid11y
Sometimes.
You could limit yourself to simply not actively obfuscating your own code.
Is anyone familiar with any statistical or machine-learning-based evaluations of the "Poverty of the Stimulus" argument for language innateness (the hypothesis that language must be an innate ability because children aren't exposed to enough language data to learn it properly in the time they do)?
I'm interested in hearing what actually is and isn't possible to learn from someone in a position to actually know (i.e. not a linguist).
I was looking at this exact question a few months ago, and found these to be
quite LW-reader-salient:
The Case of Anaphoric One
[http://linguistics.berkeley.edu/~regier/papers/foraker-et-al-2009.pdf]
Poverty Of The Stimulus - A Rational Approach
[http://web.mit.edu/cocosci/Papers/PerforsTenenbaumRegier06.pdf]
[This comment is no longer endorsed by its author]
6gwern11y
Just tons. For example, Harry's instructor, Mr. Bester, is a double reference.
EDIT: And obviously the Bester scenes contain other allusions: back to the
gold-silver arbitrage, or Harry imaging himself a Lensman, come to mind.
2drethelin11y
What's the non-author one?
4gwern11y
Babylon Five character, IIRC.
0arundelo11y
I wouldn't call that a double reference, since Alfred Bester the Babylon 5
character is also named after Alfred Bester the author
[http://en.wikipedia.org/wiki/Alfred_Bester_(Babylon_5)]. Edit: Both the Bab 5
and HP:MoR characters are named after Bester the author for the same reason.
1gwern11y
Since Eliezer has been a Babylon 5 fan since before December 1996
[http://extropians.weidai.com/extropians.96/4597.html] and has also read
Bester's books, I think we can consider it a double reference.
0arundelo11y
Yeah, we're just using different definitions of "double reference". Cheers!
1[anonymous]11y
But Alfred Bester the author wasn't a telepath.
Or was he?!?!?!
0TraderJoe11y
[comment deleted]
0gwern11y
There are probably hundreds by this point; if you want even more, Eliezer stuffs
his Ultimate Cross-over fic with references.
If you believe that some model of computation can be expressed in arithmetic
(this implies expressibility of the notion of a correct proof), Gödel's first
theorem is more or less an analysis of "This statement cannot be proved". If it
can be proved, it is false and there is a provable false statement; if it cannot
be proved, it is an unprovable true statement.
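For concreteness, here is the standard textbook rendering of that sentence (nothing here is specific to any one presentation): if Prov_T(x) formalizes "x is provable in T" for a theory T expressing basic arithmetic, the diagonal lemma yields a sentence G with

    T \vdash G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)

(in LaTeX notation); consistency of T then rules out T \vdash G, and omega-consistency also rules out T \vdash \neg G.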
But most of the effort in proving Gödel's theorem has to be spent on proving
that you cannot go halfway: if you have a theory big enough to express basic
arithmetical facts, you have to have full reflection. It can be stated in
various ways, but it requires a technically accurate proof - I am not sure how
well it would fit into a cartoon.
Could you state explicitly what you want to find - just the non-technical
part, or both?
Has anybody here changed their mind on the matter of catastrophic anthropogenic global warming, and what evidence or arguments made you reconsider your original position on the matter?
I've bounced back and forth on the matter several times, and right now I'm starting to doubt global warming itself, never mind catastrophic or anthropogenic, since the sources I read most frequently are biased against it, and my sources which support it have a bad habit of deleting any comments that disagree or criticize the evidence, which has led to my taking them less seriously. The ideal for me would be arguments or evidence that changed somebody's mind toward supporting the theory.
I think you are overweighting the evidence from moderation policies.
If a large number of evangelicals constantly descended onto LessWrong, forcing
the community to have a near hair trigger banning policy, would that be strong
evidence that atheism was incorrect?
1OrphanWilde11y
No. But it would result in me not taking theoretical weekly posts on why atheism
is correct very seriously.
4TheOtherDave11y
There are several different pieces of this for me.
I haven't much changed my mind on the existence of global climate change since I
first looked into the data, about a decade ago, except to become more confident
about it.
I've made various attempts to wrap my brain around this data to arrive at some
opinions about its causes, but I'm evidently neither smart nor well-informed
enough to arrive at any confidence about whether the conclusions people are
drawing on this question actually follow from the data they are drawing it from.
Ultimately I just end up either taking their word for it, or not. I try to
ignore the public discourse on the subject, which in the US has become to an
absurd degree a Blue/Green issue entirely divorced from any notion of relying on
observation-based reasoning to ground confidence levels in assertions.
The thing that most caused me to lower my estimate of the likelihood that the
climate change is exclusively or near-exclusively anthropogenic was some
conversations with a couple of astrophysicist friends of mine, who talked about
the state of play in the field and their sense that research into correlations
between terrestrial climate fluctuations and solar output fluctuations was seen
as a career-ender... not quite on par with, say, parapsychology, but on the same
side of the ledger.
The thing that most caused me to raise that estimate was some conversations with
a friend of mine who was working in climate modeling for a while. I don't have
half a clue regarding the validity of his models, but I got the clear impression
that climate models that take into account anthropogenic increases in
atmospheric CO2 levels are noticeably more accurate than models that don't.
On balance, the latter raised my confidence in the assertion that global climate
change is significantly anthropogenic more than the former lowered my
confidence.
I don't really have an opinion yet about how catastrophic the climate change is
likely to be, regardless of whethe... (read more)
Blogs by LWers:
Note: About this list. New suggestions are welcomed. Anyone searching for interesting blogs that may not be written by LWers, check out this or maybe this threa... (read more)
[This comment is no longer endorsed by its author]
I find that, sporadically, I act like a total attention whore around people whom I respect and may talk to more or less freely - whether I know them or we're only distantly acquainted. This mostly includes my behavior in communities like this, but also in class and wherever else I can interact informally with a group of equals. I talk excitedly about myself, about various things that I think my audience might find interesting, etc. I know it might come across as uncouth, annoying and just plain abnormal, but I don't even feel a desire to stop. It's not due... (read more)
With regard to Optimal Employment, what does anyone think of the advice given in this article?
"...There are career waiters in Los Angeles and they’re making over $100,000 a year.”
That works out (for the benefit of other Europeans) at €80,000 - an astonishing amount of money, to me at least. LA seems like a cool place, with a lot of culture and more interesting places within easy traveling distance than Dublin offers.
* To make this kind of money, you'll obviously have to get a job in an
expensive restaurant, and remember there are tons of people there who have
years of experience and desperately want one of these super-high value jobs.
Knowing the right person will be vital if you want to score one of these
positions.
* This is based on tips, so you will have to be extremely charming,
charismatic, and attractive.
* Living in Los Angeles is expensive to start with, and there is a major
premium if you want to live in a non-terrifying part of the city.
* The economy of Los Angeles is not doing well, hasn't been for years, and
probably won't be for the foreseeable future. This probably hurts the prospects
for finding a high-paying waiter job.
Honestly, moving to L.A. to seek a rare super-high paying waiter job seems like
a terrible idea to me.
0Bill_McGrath11y
That's the main issue I've been having with employment here; though I'm a good
waiter, most places want two years' experience in fine dining, which I don't
have.
4TheOtherDave11y
I don't know if the claim is true or not, but I don't find it too implausible.
It helps to remember that LA is frequented by a great many newly wealthy
celebrities.
It does not follow that my chances of getting such a job in L.A. are high enough
to be worth considering.
Why don't people like markets?
A very interesting read where the author speculates on possible reasons why people seem to be biased against markets. To summarize:
Market processes are not visible. For instance, when a government taxes its citizens and offers a subsidy to some producers, what is seen is the money taken and the money received. What is unseen is the amount of production that would occur in the absence of such transfers.
Markets are intrinsically probabilistic and therefore marked with uncertainty; like other living organisms, we are loss-
... (read more)
[This comment is no longer endorsed by its author]
NEW GAME:
After reading some mysterious advice or seemingly silly statement, append "for decision theoretic reasons." at the end of it, you can now pretend it makes sense and earn karma on LessWrong. You are also entitled to feel wise.
Variants:
Unfortunately, I must refuse to participate in your little game on LW - for obvious decision theoretic reasons.
Your decision theoretic reasoning is incorrect due to meta level concerns.
The priors provided by Solomonoff induction suggest, for decision-theoretic reasons, that your meta-level concerns are insufficient grounds for acausal karma trade.
I would like the amazing benefits of being hit in the head with a baseball bat every week, due to meta level concerns.
Yes it's obvious, but I still had to say it because the map is not the territory.
The most merciful thing in the world, I think due to meta level concerns, is the inability of the human mind to correlate all its contents.
I've been trying-and-failing to turn up any commentary by neuroscientists on cryonics. Specifically, commentary that goes into any depth at all.
I've found myself bothered by the apparent dearth of people from the biological sciences enthusiastic about cryonics, which seems to be dominated by people from the information sciences. Given the history of smart people getting things terribly wrong outside of their specialties, this makes me significantly more skeptical about cryonics, and somewhat anxious to gather more informed commentary on information-theoretic death, etc.
Somewhat positive:
Ken Hayworth: http://www.brainpreservation.org/
Rafal Smigrodzki: http://tech.groups.yahoo.com/group/New_Cryonet/message/2522
Mike Darwin: http://chronopause.com/
Aubrey de Grey: http://www.evidencebasedcryonics.org/tag/aubrey-de-grey/
Ravin Jain: http://www.alcor.org/AboutAlcor/meetdirectors.html#ravin
Lukewarm:
Sebastian Seung: http://lesswrong.com/lw/9wu/new_book_from_leading_neuroscientist_in_support/5us2
Negative:
kalla724: comments http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/
The critique reduces to a claim that personal identity is stored non-redundantly at the level of protein post-translational modifications. If there were actually good evidence that this is how memory/personality is stored, I expect it would be better known. Plus, if this is the case, how has LTP been shown to be sustained following vitrification and re-warming? I await kalla724's full critique.
Wow. Now there's a data point for you. This guy's an expert in cryobiology and he still gets it completely wrong. Look at this:
Rapid temperature reduction? No! Cryonics patients are cooled VERY SLOWLY. Vitrification is accomplished by high concentrations of cryoprotectants, NOT rapid cooling. (Vitrification caused by rapid cooling does exist -- this isn't it!)
I'm just glad he didn't go the old "frozen strawberries" road taken by previous expert cryobiologists.
Later in the article we have this gem:
This guy apparently thinks we are planning to OVERTURN THE LAWS OF PHYSICS. No wonder he dismisses us as a religion!
When it comes to smart people getting something horribly wrong that is ou... (read more)
Why do the (utterly redundant) words "Comment author:" now appear in the top left corner of every comment, thereby pushing the name, date, and score to the right?
Can we fix this, please? This is ugly and serves no purpose. (If anyone is truly worried that someone might somehow not realize that the name in bold green refers to the author of the comment/post, then this information can be put on the Welcome page and/or the wiki.)
To generalize: please no unannounced tinkering with the site design!
I would like to say thanks to everyone who helped me out in the comments here. You genuinely helped me. Thank you.
Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?
Others: please do not feed the trolls.
Standing rules are to make a user's comments bannable if they are systematically and significantly downvoted, and the user keeps making a whole lot of the kind of comments that get downvoted. In that case, after giving notice to the user, a moderator can start banning future comments of the kind that clearly would be downvoted, or that did get downvoted, primarily to prevent the development of discussions around those comments (which would incite further downvoted comments from the user).
So far, this rule has only been applied to crackpot-like characters who got something like minus 300 points within a month and generated ugly discussions. private_messaging is not within that cluster, and it's still possible that he'll either go away or calm down in the future (e.g. stop making controversial statements without arguments, which is the kind of thing that gets downvoted).
I'm going to reduce (or understand someone else's reduction of) the stable AI self-modification difficulty related to Löb's theorem. It's going to happen, because I refuse to lose. If anyone else would like to do some research, this comment lists some materials that presently seem useful.
The slides for Eliezer's Singularity Summit talk are available here, reading which is considerably nicer than squinting at flv compression artifacts in the video for the talk, also available at the previous link. Also, a transcription of the video can be found here.
On provability logic by Švejdar. A little introduction to provability logic. This and Eliezer's talk are at the top because they're reference material. Remaining links are organized by my reading priority:
Explicit Provability and constructive semantics by Artemov
Sex, Nerds, and Entitlement
LessWrong/Overcoming Bias used to be a much more interesting place. Note how lacking in self-censorship Vassar is in that post. Talking about sexuality and the norms surrounding it like we would any other topic. Today we walk on eggshells.
A modern post of this kind is impossible, despite its great personal benefit to (in my estimation) at least 30% of the users of this site, and despite making available better predictive models of social reality for all users.
If I understand correctly, the purpose of the self-censorship was to make this site more friendly for women. Which creates a paradox: the idea that one can speak openly with men, but with women self-censorship is necessary, is kind of offensive to women, isn't it?
(The first rule of Political Correctness is: You don't talk about Political Correctness. The second rule: You don't talk about Political Correctness. The third rule: When someone says stop, or expresses outrage, the discussion about a given topic is over.)
Or maybe this is too much of a generalization. What other topics are we self-censoring, besides sexual behavior and politics? I don't remember. Maybe it is just politics being self-censored, sexual behavior being a sensitive political topic. The problem is, any topic can become political, if for whatever reason "Greens" decide to identify with a position X, and "Blues" with a position non-X.
We are taking the taboo on political topics too far. Instead of avoiding mindkilling, we avoid the topics completely.
Although we have traditional exceptions: it is allowed to talk about evolution and atheism, despite the fact that some people might consider these topics... (read more)
As to political correctness, its great insidiousness lies in the fact that while you can complain about it in the manner of a religious person complaining abstractly about hypocrites and Pharisees, you can't ever back up your attack with specific examples, since if you do this you are violating sacred taboos, which means you lose your argument by default.
The pathetic exception to this is attacking very marginal and unpopular applications that your fellow debaters can easily dismiss as misguided extremism or even a straw man argument.
The second problem is that as time goes on, if reality happens to be politically incorrect on some issue, any other issue that points to the truth of this subject becomes potentially tainted by the label as well. You actively have to resort to thinking up new models as to why the dragon is indeed obviously in the garage. You also need to have good models of how well other people can reason about the absence of the dragon to see where exactly you can walk without concern. This is a cognitively straining process in which everyone slips up.
I recall my country's Ombudsman once visiting my school for a talk wearing a T-shirt that said "After a close up no one looks ... (read more)
My fault for using a politically charged word for a joke (but I couldn't resist). Let's do it properly now: What exactly does "political correctness" mean? It is not just any set of taboos (we wouldn't refer to e.g. religious taboos as political correctness). It is a very specific set of modern-era taboos. So perhaps it is worth distinguishing between taboos in general, and political correctness as a specific example of taboos. The similarities are obvious; what exactly are the differences?
I am just doing a quick guess now, but I think the difference is that the old taboos were openly known as taboos. (It is forbidden to walk in a sacred forest, but it is allowed to say: "It is forbidden to walk in a sacred forest.") The modern taboos pretend to be something else than taboos. (An analogy would be that everyone knows that when you walk in a sacred forest, you will be tortured to death, but if you say: "It is forbidden to w... (read more)
Political correctness (without hypocrisy) feels from the inside like a fight against factual incorrectness with dangerous social consequences. It's not just "you are wrong", but "you are wrong, and if people believe this, horrible things will happen".
Mere factual incorrectness will not invoke the same reaction. If one professor of mathematics admits a belief that 2+2=5, and another professor of mathematics admits a belief that women on average are worse at math than men, both could be fired, but people will not be angry at the former. It's not just about fixing an error, but also about saving the world.
Then, what is the difference between a politically incorrect opinion and a factually incorrect opinion with dangerous social consequences? In theory, the latter can be proved wrong. In real life, some proofs are expensive or take a lot of time; also, many people are irrational, so even a proof would not convince everyone. But I still suspect that in the case of a factually incorrect opinion, opponents would at least try to prove it wrong and would expect support from experts, while in the case of a politically incorrect opinion an experiment would be considered dangerous and the experts unreliable. (Not completely sure about this part.)
I hope you realize that by picking the example of race you make my above comment look like a clever rationalization for racism if taken out of context.
Also, you are empirically plain wrong for the average online community. Give me one example of a public figure who has done this. If people like Charles Murray or Arthur Jensen can't pull this off, you need to be a rather remarkable person to do so in a random internet forum where standards of discussion are usually lower.
As to LW, it is hardly a typical forum! We have plenty of overlap with the GNXP and the wider HBD crowd. Naturally there are enough people who will upvote such an argument. On race we are actually good. We are willing to consider arguments, and we don't seem to have racists here either; this is pretty rare online.
Ironically, us being good on race is the reason I don't want us talking about race too much in articles: it attracts the wrong contrarian cluster to come visit, and it fries the brains of newbies as well as creating room for "I am offended!" trolling.
Even if I granted this point for the sake of argument, it doesn't directly address any part of my description of the phenomena or how they are problematic.
Summary of an IRC conversation in the unofficial LW chatroom.
On the IRC channel I noted that there are several subjects on which discourse was better or more interesting on OB/LW in 2008 than today, yet I can't think of a single topic on which LW 2012 has better dialogue or commentary. Another LWer noted that it is in the nature of all internet forums to "grow more stupid over time". I don't think LW is stupider; I just think it has grown more boring, and it definitely isn't a community with a higher sanity waterline today than back then, despite many individuals levelling up formidably in the intervening period.
This post is made in the hope that people will let me know about the next good spot.
Random thought: if we assume a large universe, does that imply that somewhere/somewhen there is a novel that just happens to perfectly resemble our lives? If so, I am so going to acausally break the fourth wall. Bonus question: how does this intersect with the rules of the internet?
I am interested in reading on a fairly specific topic, and I would like suggestions. I don't know any way to describe this other than by giving the two examples I have thought of:
Some time ago my family and I visited India. There, among other things, we saw many cows with an extra, useless leg growing out of their backs near the shoulders. This mutation is presumably not beneficial to the cow, but it strikes me as beneficial to the amateur geneticist. Isn't it incredibly interesting that a leg can be the by-product of random mutation? Doesn't that tell us a lot about the way genes are structured - namely that somewhere out there is a gene that encodes things at nearly the level of whole limbs, that some small number of genes corresponds nearly directly to major, structural components of the cow? It's not all about molecules, or cells, or even tissues! Genes aren't like a bitmap image - they're hierarchical and structured. Wow!
Similarly, there are stories of people losing specific memory 'segments', say, their personal past but not how to read and write, how to drive, or how to talk. Assuming that these stories are approximately true, that suggests that some forms of memory loss are not random... (read more)
Related to: List of public drafts on LessWrong
Is meritocracy inhumane?
Consider how meritocracy leeches the lower and middle classes of highly capable people, and how this increases the actual differences, both in culture and in ability, between the various parts of a society, widening the gap between them. It seems to make sense that, ceteris paribus, they will live more segregated from each other than ever before.
Now merit has many dimensions, but let's take the example of a trait that helps you with virtually anything. Highly intelligent people have positive externalities they don't fully capture. Always using the best man for the job should produce more wealth for society as a whole. It also appeals to our sense of fairness. Isn't it better that the most competent man gets the job, rather than the one with the highest title of nobility, or from the right ethnic group, or the one who got the winning lottery ticket?
Let us leave aside problems with utilitarianism for the sake of argument and ask does this automatically mean we have a net gain in utility? The answer seems to be no. A transfer of wealth and quality of life not just from the less deserving to the more deserving but from th... (read more)
I see at least two other major problems with meritocracy.
First, a meritocracy opens for talented people not only positions of productive economic and intellectual activity, but also positions of rent-seeking. So while it's certainly great that meritocracy in science has given us von Neumann, meritocracy in other areas of life has at the same time given us von Neumanns of rent-seeking, who have taken the practices of rent-seeking to an unprecedented extent and to ever more ingenious, intellectually involved, and emotionally appealing rationalizations. (In particular, this is also true of those areas of science that have been captured by rent-seekers.)
Worse yet, the wealth and status captured by the rent-seekers are, by themselves, the smaller problem here. The really bad problem is that these ingenious rationalizations for rent-seeking, once successfully sold to the intellectual public, become a firmly entrenched part of the respectable public opinion -- and since they are directly entangled with power and status, questioning them becomes a dangerous taboo violation. (And even worse, as it always is with humans, the most successful elite rent-seekers will be those who honestly inte... (read more)
After a painful evening, I got an A/B test going on my site using Google Website Optimizer*: testing the CSS max-width property (800, 900, 1000, 1200, 1300, & 1400px). I noticed that most sites seem to set it much more narrowly than I did, e.g. Readability. I set the 'conversion' target to be a 40-second timeout, as a way of measuring 'are you still reading this?' Overnight, each variation got ~60 visitors. The original 1400px converts at 67.2% ± 11%, while the top candidate 1300px converts at 82.3% ± 9.0% (an improvement of 22.4%) with an estimated 92.9% chance of beating the original. This suggests that a switch would materially increase how much time people spend reading my stuff.
(The other widths: currently, 1000px: 71.0% ± 10%; 900px: 68.1% ± 10%; 1200px: 66.7% ± 11%; 800px: 64.2% ± 11%.)
This is pretty cool - I was blind but now can see - yet I can't help but wonder about the limits. Has anyone else thoroughly A/B-tested their personal sites? At what point do diminishing returns set in?
* I would prefer to use Optimizely or Visual Website Optimizer, but they charge just ludicrous sums: if I wanted to test my 50k monthly visitors, I'd be paying hundreds of dollars a month!
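(If you want to sanity-check numbers like these by hand, here is a minimal sketch in Python using the approximate figures above; it is a plain two-proportion z-test, which is not necessarily what Google Website Optimizer computes internally.)

    from math import sqrt

    # Approximate numbers reported above: ~60 visitors per arm overnight.
    n_a = n_b = 60
    p_a, p_b = 0.672, 0.823   # conversion: 1400px original vs. 1300px variant

    # Two-proportion z-test with a pooled standard error.
    p_pool = (p_a * n_a + p_b * n_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    print(f"z = {z:.2f}")     # about 1.9 at these sample sizes

With only ~60 visitors per arm the intervals overlap heavily, which matches the ±9-11% error bars quoted above: suggestive, but worth letting the test run longer.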
New heuristic: When writing an article for LessWrong assume the casual reader knows about the material covered in HPMOR.
I used to think one could assume they had read the Sequences and some other key stuff (Hanson etc.), but looking at debates, this simply can't be true for more than a third of current LW users.
I find it pretty easy to pursue a course of study and answer assessment questions on the subject. Experience teaches me that such assessment problems usually tell you how to solve them (either implicitly or explicitly), and I won't gain a proper appreciation for the subject until I use it in a more poorly-defined situation.
I've been intending to get a decent understanding of the HTML5 canvas element for a while now, and last week I hit upon the idea of making a small point & click adventure puzzle game. This is quite ambitious given my past experience... (read more)
A fellow LessWrong user on IRC: "Good government seems to be a FAI-complete problem. "
I just read the new novel by Terry Pratchett and Stephen Baxter, The Long Earth. I didn't like it and don't recommend it (I read it because I loved other books by Pratchett, but there's no similarity here).
There was one thing in particular that bothered me. I read the first 10 reviews of the book that Google returns, and they were generally negative and complained about many things, but never mentioned this issue. Many described Baxter as a master of hard sci fi, which makes it doubly strange.
Here's the problem: in this near-future story, gurer vf n Sbbz... (read more)
A usual idea of utopia is that chores -- repetitive, unsatisfying, necessary work to get one's situation back to a baseline -- are somehow eliminated. Weirdtopia would reverse this somehow. Any suggestions?
Some more SIAI-related work: looking for examples of costly real-world cognitive biases: http://dl.dropbox.com/u/85192141/bias-examples.page
One of the more interesting sources is Heuer's Psychology of Intelligence Analysis. I recommend it, for the unfamiliar political-military examples if nothing else. (It's also good background reading for understanding the argument diagramming software coming from the intelligence community, not that anyone on LW actually uses them.)
I read quite a bit, and I really like some of the suggestions I found on LW. So, my question is: is there any recent, or not-so-recent-but-really-good, book you would recommend? Topics I'd like to read more about are:
I'm happy to read pop-sci, as long as it's written with a skeptical, rationalist mindset. I.e. I liked Linden's The accidental mind, but take Gladwell's writings with a rather big grain of salt.
A question about acausal trade
(btw, I couldn't find a good introductory discussion of acausal trade to link; I would be grateful for one)
We discussed this at a LW Seattle meetup. It seems like the following is an argument for why all AIs with a decision theory that does acausal trade act as if they have the same utility function. That's a surprising conclusion to me, which I hadn't seen before, but it also doesn't seem too hard to come up with, so I'm curious where I've gone off the rails. This argument has a very Will_Newsomey flavor to me.
Let's say we're... (read more)
Substitute the word causal for acausal. In a situation of "causal trade", does everyone end up with the same utility function?
I'm feeling fairly negative on lesswrong this week. Time spent here feels unproductive, and I'm vaguely uncomfortable with the attitudes I'm developing. On the other hand there are interesting people to chat with.
Undecided what to do about this. Haven't managed to come up with anything to firm up my vague emotions into something specific.
Perhaps I'll take a break and see how it feels.
After a week-long vacation at Disney World with the family, it occurs to me there's a lot of money to be made in teaching utility maximization to families... mostly from referrals by divorce lawyers and family therapists.
For the lesswrong vanity domain fan, ble.gg seems to be available.
I'm trying to memorise mathematics using spaced repetition. What's the best way to transcribe proofs onto Anki flashcards to make them easy to learn? (i.e. what should the question and answer be?)
Did the site CSS just change the font used for discussion (not Main) post bodies? It looks bad here.
Edit: it only happens with some posts. Like these:
http://lesswrong.com/r/discussion/lw/dd0/hedonic_vs_preference_utilitarianism_in_the/ http://lesswrong.com/r/discussion/lw/dc4/call_for_volunteers_publishing_the_sequences/
But not these:
http://lesswrong.com/r/discussion/lw/ddh/aubrey_de_grey_has_responded_to_his_iama_now_with/ http://lesswrong.com/r/discussion/lw/dcy/the_fiction_genome_project/
Is it perhaps a formatting change applied when posting?
Also, whe... (read more)
Sacredness as a Monster by Sister Y; aren't you glad I read cool blogs? :)
One more item for the FAI Critical Failure Table (humor/theory of lawful magic):
37. Any possibility automatically becomes real, whenever someone justifiably expects that possibility to obtain.
Discussion: Just expecting something isn't enough, so crazy people don't make crazy things happen. The anticipation has to be a reflection of real reasons for forming the anticipation (a justified belief). Bad things can be expected to happen as well as good things. What actually happens doesn't need to be understood in detail by anyone; the expectation only has to be... (read more)
A poll, just for fun. Do you think that the rebels/Zionists in The Matrix were (mostly or completely) cruel, deluded fundamentalists committing one atrocity after another for no good reason, and that in-universe their actions were inexcusable?
Upvote for "The Matrix makes no internal sense and there's no fun in discussing it."
What is the meaning of the three-digit codes in American university courses? Such as: "Building a Search Engine (CS101)", "Crunching Social Networks (CS215)", "Programming A Robotic Car (CS373)", currently in Udacity.
Seems to me that 101 is always the introduction to the subject. But what about the other numbers? Do they correspond to some (subject-specific) standard? Are they arbitrary (perhaps with a general trend of giving more difficult courses higher numbers)?
We often hear about how professional philanthropy is a very good way to improve others' lives. Have any LWers actually gone this route?
I would like to try some programming in Lisp, could you give me some advice? I have noticed that in the programming community this topic is prone to heavy mindkilling, which is why I ask on LW instead of somewhere else.
There are many variants of Lisp. I would prefer to learn one that is really used these days for developing real-world applications. Something I could use to make e.g. a Tetris-like game. I will probably need some libraries for input and output; which ones do you recommend? I want free software that works out of the box; preferably on a Win... (read more)
My research suggests Clojure is the Lisp-like language most suited to your requirements. It runs on the JVM, so it should be relatively low-hassle on Windows. I believe there's some sort of Eclipse support, but I can't confirm it.
If you do end up wanting to do something with Common Lisp, I recommend Practical Common Lisp as a good free introduction.
Does anyone know of a good guide to Gödel's theorems along the lines of the cartoon guide to Löb's theorem?
Positive Juice seems to have several posts related to rationality. (Look under "most viewed posts" on the sidebar.)