A simple reframe that helped jumpstart my creativity:
My cookie dough froze in the fridge, so I couldn't pry it out of the bowl to carry with me to bake at a party. I tried to get it out, but didn't succeed, and had basically resigned myself to schlepping the bowl on the metro.
But then I paused and posed the question to myself: "If something important depended on me getting this dough out, what would I try?"
I immediately covered the top of the bowl, ran the base under lukewarm to warm water, popped it out, wrapped it up, and went on my way.
By the time I'd read your third paragraph, I had already decided to post the following similar story:
It snowed a few weeks ago and my car was stuck in the driveway. Parts of the wheels had gotten ice/snow kind of frozen/compacted around them. I was breaking up the ice with one of those things you use to break up ice, but a lot of it was too hard and a lot of it was underneath the car and I couldn't get to it. I was pretty close to being late to work. So I thought "I need to make some kind of desperate rationalist effort here, what would HPJEV do?". And I sat and thought about it for five minutes, and I got a big tub, filled it with hot water, and poured it around the wheels. This melted/softened enough of the compacted ice that I was able to break up the rest and make it to work on time.
Then I read your fourth paragraph and saw your story was also about hot water.
I don't know if there's some kind of moral to this episode, like that the most rational solution to a problem always involves hot water, but I guess I'll raise it a little higher on my list of things to think about in various situations.
The Doctrine of Academic Freedom: Let's give up on academic freedom in favor of justice, from the Harvard Crimson
No academic question is ever “free” from political realities. If our university community opposes racism, sexism, and heterosexism, why should we put up with research that counters our goals simply in the name of “academic freedom”?
Instead, I would like to propose a more rigorous standard: one of “academic justice.” When an academic community observes research promoting or justifying oppression, it should ensure that this research does not continue.
This already describes the reality on the ground, though to see it announced explicitly as a good and noble goal, by the upcoming generation, is disturbing. And people like Steven Pinker are getting old. I'm now updating further downward my trust in the conclusions of academic institutions and culture whenever those conclusions happen to coincide with their political biases.
When an academic community observes research promoting or justifying oppression, it should ensure that this research does not continue.
By the way, this is stupid even from the "we only care about the 'good' people (women, black, trans, etc.)" viewpoint, because the consequences sometimes look like this:
1) Someone suggests there could be biological differences between men and women. Angry screams, research abandoned.
2) Medical research done on volunteers (the expendable males) finds a new cure.
3) It turns out that the cure works better for men, and may even be harmful for women (because it was never tested on women separately, and no one even dared to suggest it should be). Angry screams again -- unfortunately with no reflection on what actually happened; instead the usual scapegoat gets blamed again.
More meta lessons for the LW audience: The world is entangled, you can't conveniently split it into separate magisteria. If you decide to remove a part of reality from your model, you don't know how much it will cost you: because to properly estimate the cost of X you need to have X in your model.
A side note to your otherwise excellent comment:
"we only care about the 'good' people (women, black, trans, etc.)"
As someone from the other side of the fence, I should warn you that your model of how liberals think about social justice seems to be subtly but significantly flawed. My experience is that virtually no liberals talk or (as far as I can tell) think in terms of "good" vs. "bad" people, or more generally in terms of people's intrinsic moral worth. A more accurate model would probably be something like "we should only be helping the standard 'oppressed' people (women, black, trans, etc.)". The main difference being that real liberals are far more likely to think in terms of combating social forces than in terms of rewarding people based on their merit.
My model of how liberals think, based on teaching at a left wing college, is that liberals find "politically incorrect" views disgusting.
Haidt's claim is that liberals rely on purity/sacredness relatively less often, but it's still there. Some of the earlier work on the purity axis put heavy emphasis on sex or sin. Since then, Haidt has acknowledged that the difference between liberals and conservatives might even out if you add food or environmental concerns to purity.
Yeah, environmentalist attitudes towards e.g. GMOs and nuclear power look awfully purity-minded to me. I'm not sure whether I want to count environmentalism/Green thought as part of the mainline Left, though; it's certainly not central to it, and seems to be its own thing in a lot of ways.
(Cladistically speaking it's definitely not. But cladistics can get you in trouble when you're looking at political movements.)
Most people, independent of political faction, can't have civil political disagreements. This effect tends to be exacerbated when they are surrounded by like-minded people and mitigated when they are surrounded by political opponents. Conservatives in elite academic environments are usually in the latter category, so I do think they will tend to be more civil in political disagreements than their liberal counterparts. However, I suspect that this situation would be reversed in, say, a military environment, although I have no experience with the military.
You could look at Fox News, where conservative contributors are generally far more bombastic and partisan than their liberal counterparts. Many liberals allege that Fox News deliberately hires milquetoast liberals in order to make liberalism look bad, but I don't think we need to posit a top-down agenda to explain the "Fox News liberal" phenomenon. It's simply the case that people are much less comfortable expressing their political views vigorously when they see themselves as being in enemy territory, especially if they need to make a home in that territory, rather than just briefly visiting it.
My model is that it's: "we want to help everyone who is suffering" but also: "the only real suffering is the suffering according to our definitions".
Or more precisely: "the suffering according to our definitions influences millions of people, and anything you said (assuming you are not lying, which is kinda dubious, considering you are not one of us) is merely one specific weird exception, which might be an interesting footnote in an academic debate, but... sorry, limited resources".
I understand that with given model of reality, this is the right thing to do. But unfortunately, the model seems to suffer horribly from double-counting the evidence for it and treating everything else (including the whole science, if necessary) as an enemy soldier. A galaxy-sized affective death spiral. -- On the other hand, this is my impression mostly from the internet debates, and the internet debates usually show the darker side of humanity, in any direction, because the evaporative cooling is so much easier there.
(Off-topic: Heh, I feel I'm linking Sequences better than a Jehovah's Witness could quote the Bible. If anyone gets a cultish vibe from this, let me note tha...
By the way, here is a recent example of just such a bad consequence for women. Basic summary:
1) Latest extreme sport added to olympics.
2) The playing field and obstacles will be the same for men and women; otherwise, it would be sexist, and besides, it's cheaper to only build one arena. (We will of course avoid thinking about why we have separate women's and men's competitions.)
3) Women wind up competing on the arena designed for men and get seriously injured at much higher rates.
I think "from the Harvard Crimson" is a misleading description.
One of their undergraduate columnists had a very silly column. Undergraduates do that sometimes. Speaking as a former student newspaper columnist, often these columns are a low priority for the authors, and they're thrown together in a hurry the night before they're due. The column might not even represent what the author would think upon reflection, let alone what the editorial board of the Crimson as a whole believes. So I wouldn't read too much into this.
(For non-US readers: The Harvard Crimson is the student-produced newspaper of Harvard University. The editors and writers are generally undergraduates and they don't reflect any sort of institutional viewpoint.)
They are certainly not willing to print... even non-crazy right-leaning articles.
That's not really true. Several of their contributors lean right. A few articles from one of these contributors:
Affirmative Dissatisfaction: Affirmative action does more harm than good
Lessons from the Iron Lady: A tribute to the most polemic figure of post-war Britain
General Petraeus Should Not Have Resigned: What if all cheating men quit their day jobs?
Now it is certainly true that conservative writers are the minority, just as conservatives are a minority in the college as a whole. But the Crimson doesn't discriminate on the basis of political orientation when approving writers.
announced explicitly as a good and noble goal, by the upcoming generation
Undergrad publications print the craziest shit imaginable and sometimes even mean it. I wouldn't expect them to "think" the same way a few years after graduation, though.
LWers may find this interesting: someone may've finally figured out how to build a fully distributed prediction market (including distributed judging) on top of blockchains, dubbing it 'Truthcoin'.
The key idea is how judgment of a prediction market is carried out: holders of truthcoins submit encrypted votes 1/0 on every outstanding market, and rather than a simple majority vote, they're weighted by how well they mirror the overall consensus (across all markets they voted on) and paid out a share of trading fees based on that weight. This punishes deviation from the majority and reminds me of Bayesian truth serum.
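The weighting scheme can be sketched in miniature. This is a deliberately simplified illustration, not the actual Truthcoin algorithm (which uses more sophisticated matrix methods for extracting the consensus); it only shows the core incentive: voters who deviate from the overall consensus get a smaller share of the trading fees.

```python
# Toy sketch of consensus-weighted judging (NOT the real Truthcoin design).
votes = {  # voter -> reported outcomes (1/0) for three resolved markets
    "alice": [1, 0, 1],
    "bob":   [1, 0, 1],
    "carol": [1, 0, 0],  # deviates on the third market
}
n_markets = 3

# Majority outcome per market (ties broken toward 1 in this toy version).
consensus = []
for m in range(n_markets):
    ones = sum(v[m] for v in votes.values())
    consensus.append(1 if ones * 2 >= len(votes) else 0)

# Weight each voter by how often they matched the consensus
# across all the markets they voted on.
weights = {name: sum(v[m] == consensus[m] for m in range(n_markets)) / n_markets
           for name, v in votes.items()}

# Trading fees are paid out in proportion to weight, so deviating
# from the majority directly costs the deviator money.
total_fees = 90.0
total_weight = sum(weights.values())
payouts = {name: total_fees * w / total_weight for name, w in weights.items()}
```

Here carol's lone deviation reduces her weight to 2/3, so she collects less of the fee pool than alice or bob, even though her votes agreed with the consensus on two of the three markets.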
Clever. I haven't been too impressed with the Bitcoin betting sites I've seen so far (some of them, like Bets of Bitcoin, are just atrocious), but this seems like a fully decentralized design. The problem is that it's so complex that I don't see anyone implementing it anytime soon.
I just want to thank all of you, as both individuals and as a community, for being a decent place for discourse. In the last few months, I've been actively engaging with Less Wrong more frequently. Prior to that, I mostly tried asking for opinions on issues I wanted analyzed on my Facebook. On Facebook, there has typically been one person writing something like 'ha, this is a strange question! [insert terrible joke here]'. Other than that, radio silence.
On Less Wrong, people don't think I'm weird because I want to analyze stuff outside of the classroom, or question things outside of a meeting dedicated to airing one's skepticism. Typical responses to my queries correct me directly, without beating around the bush or fear of offending me. All of you ask me to clarify my thinking when it's confused. When you cannot provide an academic citation, you try to extract what relevant information you can from anecdotes from your personal experience. I find this greatly refreshing.
I created this open thread to ask a specific question, and then I asked some more. Even just from this open thread, the gratification I recei...
I've lost 30 pounds since September 17th, 2013*. Interestingly, I've noticed doing so caused me to lose a lot of faith in LW.
In the midst of my diet, discussion in the comments on this series of posts confounded me. I'm no expert on nutrition or dieting (I do know perhaps more than the average person), but my sense is that I encountered a higher noise-to-signal ratio on the subject here at LW than anywhere else I've looked. There seemed to be all sorts of discussion about everything other than the simple math behind weight loss. Lots of super fascinating stuff—but much of it missing the point, I thought.
I learned a few interesting things during the discussion—which I always seem to do here. But in terms of providing a boost to my instrumental rationality, it didn't help at all. In fact, it's possible LW had a negative impact on my ability to win at dieting and weight management.
I notice this got me wondering about LW's views and discussions about many other things that I know very little about. I feel myself asking "How could I rationally believe LW knows what they are talking about in regard to the Singularity, UFAI, etc. if they seem to spin their wheels so badly on a discus...
Consider the following story:
I was feeling a little blue. I looked at the psychiatric literature, and they were saying all this weird stuff about neurotrophic factors and cognitive-behavioral therapy. But then that night I had dinner with some friends, went to the gym for an hour, and sure enough I felt a lot better afterwards!
I would have at least three qualms with such an attitude:
First, there are different kinds of low mood. Some differences are obvious; some people are less depressed than others, or depressed for much shorter time periods. But it could also be that there are no visible differences between two people, but that for hidden reasons one person's depression will respond to some quick exercise and social activity, and another person's won't.
Second, even interventions that are known to always work can be hard to task-ify. Exercise is indeed often a very effective treatment for depression, but when you tell a depressed person "just go and exercise", they usually won't do that because they're too depressed. Having a good social support network can be helpful in depression, but depressed people can be unable to make friends because deep down they assume ever...
I think you've rescued the rule that depressive people can't just decide to feel happy. But by your theory, they should still be able to go to work, maintain all their relationships, and otherwise behave exactly like a non-depressed person in every way. In practice this seems very hard for depressed people and a lot of the burden of depression is effects from not being able to do this. The metaphor that just as this is a hard problem and worthy of scientific attention, so weight loss can be a hard problem and worthy of scientific attention still holds.
But why stick with depression? I could just as easily move to obsessive-compulsive disorder. Can't they just "force" themselves not to wash their hands so often? Or social phobia - can't they just "force" themselves to go out and socialize when appropriate?
Probably the best example is substance abuse - can't people just "force" themselves not to drink alcohol? And yet not only do therapy-type interventions like Alcoholics Anonymous appear to work, but purely biological interventions like Vivitrol seem to work as well. I am pretty happy that these exist and the more of them people can think up for weight loss, the ...
I think you missed my point, or I threw it by you poorly. I don't think they "should", I think they sometimes can. I sometimes can, and though I know from LW that not all minds are alike, it's safe to assume I'm also not wholly unique in my depression.
I agree that they sometimes can. I also agree people can sometimes lose weight. As far as I was concerned, our disagreement here (if one exists) isn't about whether it's possible in some cases.
Are you willing to agree to a statement like:
"Weight loss is possible in some cases, and in fact very easy in some cases. In other cases it is very hard, bordering on impossible given the marathon-analogy definition of impossible below. This can be negated by heroic measures like locking people in a room where excess food is unavailable and ignoring their appetite and distress, but in the real world you cannot do this. Because of these difficult cases, it is useful to explore the science behind weight loss and come up with more effective strategies."
If so, we agree, but then I'm confused why you were criticizing the Less Wrongers in your original statement. If you don't agree, please let me know which part we disagree about.
I agree that trying to avoid all pain can be a failure mode. But insisting that pain needs to be plowed through can also be a failure mode.
The advice "You should run a marathon by continuing to run even if it hurts" might perhaps be useful as part of a package of different interventions to a runner who's hit some kind of a motivational wall.
But in other situations it is completely inappropriate. For example, suppose a certain runner has a broken leg, but you don't know this and he can't communicate it to you. He just says "It really really hurts when I run!" And you just answer "Well, you need to run through the pain!"
This is an unreasonable request. If you were more clueful, you might make a suggestion like "You should go to a doctor, wait for your broken leg to heal, and then try running later."
And if enough people have broken legs, then promoting the advice "You should run a marathon by continuing to run even when it hurts" is bad advice. Even if we assume that people are still capable of running on broken legs and will not collapse, you are generalizing from your own example to assume that the pain they suffer will be minimal a...
For what it's worth, weight loss and related topics are one of the things that I might describe as unconventionally political: not aligned with any of the standard Blue-Green axes, but nonetheless identity-entangled enough that getting useful information out of them is unusually difficult. (Also in this category: relationship advice, file sharing, hardcore vs. casual within gaming, romantic status of fictional characters, ponies.)
"How could I rationally believe LW knows what they are talking about in regard to the Singularity, UFAI, etc. if they seem to spin their wheels so badly on a discussion about something as simple as weight loss?"
Dieting is one of many topics discussed on this forum where the level of discourse is hardly above dilettante. Applied rationality and AGI friendliness research is done by several people full-time, which brings the discussion quality up in these areas, mostly. So it would not be fair to judge those by the averages. Everything else is probably subreddit-level, only more polite and on-topic.
Dieting is anything but simple. It is still an open problem. The human body and mind are an extremely complicated system. What works for one person doesn't work for another. Eliezer put significant time into figuring out his weight issues, to no avail, and is apparently desperate enough to resort to some extreme measures, like consuming home-made gloop. Many people are lucky enough to be able to maintain a healthy weight with only a few simple tweaks, and you might be one of them. If you want a fairer comparison: "You can't expect a graduate-level philosophy professor to know how to design a multi-threaded operating system." No, that's not quite enough. "...how to solve an unsolved Millennium Prize problem" is closer.
But the fundamental causal mechanism at play is very simple.
Sure, calories in vs calories out... Except it is not helpful when you cannot effectively control one or both without reducing the real or perceived quality of life to the level where people refuse to exercise this control. This is where most diets eventually fail. And you seem to agree with that, while still maintaining that "understanding what needs to be done to lose weight is simple", where it is anything but, since it includes understanding of the actual doable actions one has to perform and still enjoy life. And this all-important understanding is sorely lacking in a general case.
[...]something as simple as weight loss?"
Weight loss isn't as simple as you think.
Sure it's all about burning more than you eat, but for a lot of people "just eat less and exercise more!" isn't advice they can follow. You seem to have "lucked out" on that front.
The far more interesting and useful question is "what factors determine how easy it is to eat less and exercise more?". This is where it gets nontrivial. You can't even narrow it down to one field of study. I've known people to have success from just changing their diet (not all in the same way) as well as others who have had success from psychological shifts - and one from surgery.
I don't consider LW to be the experts on how to lose weight either, but that doesn't signal incompetence to me. Finding the flaws in the current set of visible "solutions" is much easier than finding your own better solution or even grasping the underlying mechanisms that explain the value and limitations of different approaches. So if you have a group of people who are good at spotting sloppy thinking who spend a few minutes of their day analyzing things for fun, of course you're going to see a very critical literature review rather than a unanimously supported "winner". Even if there were such a thing in the territory waiting to be found (I suspect there isn't), then you wouldn't expect anyone on LW to find it unless they were motivated to really study it.
I agree dieting isn't easy to do for all sorts of reasons. But it is simple. And that seemed to be completely lost on a group of people that are way smarter than me.
An alternative explanation might be that the "weightloss = energy output - energy intake" model is so simple that all the people involved in the discussion already understand it, consider it obvious and trivial, and have moved on to discussing harder questions.
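For what it's worth, that model really is only a couple of lines, using the common (and itself only approximate) rule of thumb that a cumulative deficit of ~3500 kcal corresponds to about a pound of body fat; the function name and numbers below are illustrative, not from the discussion:

```python
# The simple energy-balance model the comments refer to, with the
# standard rough approximation of ~3500 kcal of deficit per pound.
def predicted_weight_loss_lb(daily_intake_kcal, daily_expenditure_kcal, days):
    deficit = (daily_expenditure_kcal - daily_intake_kcal) * days
    return deficit / 3500.0

# A sustained 500 kcal/day deficit over 30 days predicts about 4.3 lb lost.
print(predicted_weight_loss_lb(2000, 2500, 30))
```

Which is exactly why the model is considered trivial: the hard, unsolved part is everything the function's two input parameters hide - how to actually sustain the intake and expenditure numbers in real life.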
Some Bayesian thoughts on the classic mystery genre, prompted by watching on Netflix episodes of the Poirot series with David Suchet (which is really excellent by the way).
A common pattern in classic mystery stories is that there is an obvious suspect, who had clear motive, means, and opportunity for the crime (perhaps there is also some physical evidence against him/her). However, there is one piece of evidence that is unexplainable if the obvious person did it: a little clue unaccounted for, or perhaps a seemingly inconsequential lie or inconsistency in a witness' testimony. The Great Detective insists that no detail should be ignored, that the true explanation should account for all the clues. He eventually finds the true solution, which perfectly explains all the evidence, and usually involves a complicated plot by someone else committing the crime in such a way as to get an airtight alibi, or to frame the first suspect, or both.
In Bayesian terms, the obvious solution has high prior probability P(H), and high P(E|H) for all components of E except for one or two apparently minor ones. The true solution, by contrast, has very high probability P(E|H) for all components of E. It is a...
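The pattern can be made concrete with invented numbers: the obvious suspect starts with a high prior, but the one unexplained clue has a tiny likelihood under that hypothesis, so once all the evidence is multiplied in, the elaborate plot dominates the posterior.

```python
# Toy numbers (invented for illustration) for the mystery-story pattern.
priors = {"obvious_suspect": 0.90, "elaborate_plot": 0.10}

# P(clue | hypothesis) for each of three clues. The obvious suspect
# explains two clues well but one clue barely at all; the elaborate
# plot explains everything reasonably well.
likelihoods = {
    "obvious_suspect": [0.9, 0.9, 0.001],
    "elaborate_plot":  [0.8, 0.8, 0.9],
}

def joint(h):
    """Prior times product of per-clue likelihoods (clues assumed independent)."""
    p = priors[h]
    for lk in likelihoods[h]:
        p *= lk
    return p

total = sum(joint(h) for h in priors)
posteriors = {h: joint(h) / total for h in priors}
```

Despite a 9:1 prior in its favor, the obvious-suspect hypothesis ends up with only a couple percent of the posterior: one sufficiently unexplainable clue outweighs a large prior advantage.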
My following queries are addressed to those who have experience using nicotine as a nootropic and/or have learned much about what taking nicotine as a nootropic is like. If you yourself don't match either of these descriptions, but have gained information from those who do, also please feel free to answer my queries. However, references, or citations, backing up the information you provide would be appreciated. If you're aware of another thread, or post, where my concerns, or questions, have previously been addressed, please let me know.
Gwern, appreciated on Less Wrong for the caliber of his analysis, makes the case for experimenting with using nicotine as a nootropic on an occasional basis. The most recent data I could find on usage rates within the Less Wrong community comes from the 2012 Less Wrong survey results:
NICOTINE (OTHER THAN SMOKING):
Never used: 916 (77.4%)
Rarely use: 82 (6.9%)
1x/month: 32 (2.7%)
Every day: 14 (1.2%)
No answer: 139 (11.7%)
I haven't used nootropics other than caffeine in the past, but when I was first reading about the promise they might hold for improving my cognition in various ways I was impre...
Given the importance of communication style in interpersonal relationships, I am looking to create an OkCupid question to determine if someone is an asker/teller or guesser. I'm having difficulty creating an unbiased question. Any way I've written the question makes ask/tell seem obviously better, e.g., here are two possibilities:
When you want someone to do something for you, do you prefer to ask them directly or do you prefer to mention something related and expect that they infer what you want?
Should your partner "just know" what you want without you ever saying so explicitly?
That perception might just be my own bias. Quite a few people I know would probably answer #2 as yes.
Unfortunately, this question probably won't be answered very often, so it's also useful to look for a proxy. Vaniver suggested a question about gifts when I mentioned this at a meetup, and I believe he meant the question "How often should your significant other buy you gifts, jewelry, or other things more expensive than, say, dinner, cards, or flowers?" This question is a reasonable proxy because many guessers I know seem to expect people to "just know" what sort of gifts ...
When you want someone to do something for you, do you prefer to ask them directly or do you prefer to mention something related and expect that they infer what you want?
You're gonna lose at least 20% of the OKC population and a much larger chunk of the general population with the complexity of your sentence structure and the use of words like "infer".
When you want something, do you
[pollid:614]
And there's another problem - the real answer will usually be "it depends on the situation". So an even better question would be
How often do you drop hints about what you want, instead of asking directly?
[pollid:615]
(Even now, my real answer is "it depends on what system I think the person I am talking to uses". I'm not sure ask/tell is actually a property attributable to individual people...it's more a mode of group interaction)
A possibly insurmountable problem is that loads of people want to think that they are Tell, or at least Ask, but in practice they are actually Guess, and you have no way of filtering for this. In my experience people are extremely bad at knowing "how they are" relative to other people.
Perhaps the questions should give concrete scenarios. Something like
Ann needed to visit Chicago to go to a conference, and asked her friend Beth, "Can I stay in your apartment Mon through Wed?" Beth answered, "No, it's too much trouble to have a houseguest." Was Beth unreasonable?
and
Your friend Ann sends you an email saying, "I need to go to Chicago for a conference; can I stay with you in your apartment Mon through Wed?" Is this an inconsiderate request?
Arthur Chu was discussed here previously for his success with Jeopardy using careful scholarship to develop strategies that he knew had worked in the past for other people.
In the comments section here he makes a much more extreme case against LessWrong's policy of not censoring ideas than Apophemi did a while back. Frankly he scares me*. But on a more concrete note, he makes a number of claims I find disturbing:
1) Certain ideas/world-views (he targets Reaction and scientific racism) are evil and therefore must be opposed at all costs even if it means using dishonest arguments to defeat them.
2) The forces that oppose social justice (capitalism, systematic oppression) don't play nice, so in order to overcome those forces it is necessary to get your hands dirty as well.
3) Sitting around considering arguments that are evil (he really hates scientific racism) legitimizes them giving them power.
4) Carefully considering arguments accomplishes nothing in contrast to what social justice movement is doing which at least is making progress. Hence considering arguments is contrary to the idea of rationality as winning. (This seems extreme, I hope I am misreading him)
5) Under consequentialism, ...
I mean this literally, I am actually physically frightened.
Why are you physically frightened of a random Internet blowhard?
The increase in knowledge doesn't even seem worth the sacrifice; we're talking about differences in average IQ between 95, 105, 110, 115. For one such as I, who's got an IQ of 168, this degree of difference seems unimpressive, and, frankly, worth ignoring/not worth knowing.
Come now, you know how normal distributions work. Small differences in means cause over-representation at the extreme ends of the scale. From your IQ I can predict a ~30-40% chance of you being Ashkenazi, despite them being a global minority, just because of a "slightly" higher mean of 110. This is an important thing.
(EDIT: This calculation uses sd=15, which may or may not be a baseless assumption)
Plus, maybe there's a reverse-"Level above mine" effect going on here. The difference between someone at 90 and someone at 110 might not seem big to you, but it might just be your provincialism talking.
(Agreed about the immigration rationalization though)
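The back-of-the-envelope calculation goes like this, assuming sd = 15 for both groups and a ~2% Ashkenazi base rate (both assumptions, as noted in the EDIT, rather than anything established above):

```python
import math

def normal_sf(x, mu, sigma):
    """Survival function P(X >= x) for a normal distribution."""
    z = (x - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# Assumed parameters: general population mean 100, Ashkenazi mean 110,
# sd 15 for both, base rate ~2% (roughly the US share; an assumption).
base_rate = 0.02
p_tail_ashkenazi = normal_sf(168, 110, 15)
p_tail_general   = normal_sf(168, 100, 15)

# Bayes: P(Ashkenazi | IQ >= 168).
posterior = (p_tail_ashkenazi * base_rate) / (
    p_tail_ashkenazi * base_rate + p_tail_general * (1 - base_rate))
print(round(posterior, 2))  # roughly 0.28 under these assumptions
```

The point survives the toy math: a 10-point shift in the mean multiplies the density at +4.5 sd by more than an order of magnitude, which is how a 2% minority ends up with a ~30% posterior at the far tail.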
Come now, you know how normal distributions work. Small differences in means cause over-representation at the extreme ends of the scale. From your IQ I can predict a ~30-40% chance of you being Ashkenazi, despite them being a global minority, just because of a "slightly" higher mean of 110. This is an important thing.
I think we have to be careful with our mathematics here.
By definition IQ is distributed normally. But if we use this definition of IQ then we don't know how IQ is distributed within each population. In particular even if we assume each population is normal, we don't know they all have the same variance. So I think there's little we can say without looking at the data themselves (which I haven't done).
In this instance it might be better to try to measure intelligence on an absolute scale, and do your comparisons with that scale. I don't know how well that would go.
(I'm using the anonymous account (Username and password are "Username" and "password") since I just want to make a statistical point and not associate myself with scientific racism.)
As it turns out, I'm a green-eyed, pale-skinned but tan-capable Arab from North Africa. I've got several uncles who look downright East Asian (round face, slanted eyes, pale skin), another side of my family looks South Asian, and another looks downright black; we have blue-eyed blondes, the traits skip generations and branches, and I find the whole notion of "race" to be laughably vague.
If, like in the US, you put a bunch of Scandinavians, Southwest Africans, and East Asians right next to each other, without miscegenation between their descendants, and with a very distinct social stratification between them, I can see how words like "Hispanic" might sound like they might be meaningful, but in lands like Brazil or Morocco where everyone got mixed with everyone and you got a kaleidoscope of phenotypes popping up in the most unexpected places, the "lines" start looking decidedly more blurry, and, in particular, no-one expects phenotype to be in any way correlated with personality traits, or intelligence, or competence.
And let us not get started on the whole notion of "Ashkenazi" from a genetic standpoint; in fact, the very fact that they get the highest IQ scores makes me bet on a nurture rather than a nature cause for the discrepancy. I'm willing to bet actual money on this outcome.
Eugine, at the risk of stating the obvious, I don't like that being known to have those true beliefs lowers my status and gets in the way of me doing good. I think it's unfair, and I find it frustrating.
Assuming this particular piece of knowledge matters, what are we supposed to do about it? Be more forgiving of teachers' inability to bring black students up to some average standard? Allocate Jewish and Asian kids fewer resources and demand that they meet higher standards? Should we treat kids differently, segregating them by race or by IQ? What practical use do we even have for scientific racism?
It's not even that we would need to use it, just that denying it would be harmful.
Without taking sides on the object-level debate of whether it's true or not, let me sketch out some ways that, if scientific racism were true, we would want to believe that it was true. In the spirit of not making this degenerate further, I'll ignore everything to do with eugenics, and with partisan issues like affirmative action.
(1) Racial differences tend to show up most starkly on IQ tests. This has led to the cultural trope that IQ is meaningless or biased or associated with racism. This has led to a culture in which it is unacceptable (borderline illegal depending on exactly how you do it) to use IQ tests in situations like employment interviews. But employers continue to want highly intelligent ...
The most convincing explanation I have heard for these problems is that inner cities massively overconcentrate lead, which is neurotoxic and causes crime/impulsivity. This is a highly solvable problem. But solving it would require us to say things like "the population of inner cities is neurologically disturbed", which would require discussing the problem, which is something that we have to prevent people from doing in order to discourage scientific racism.
The lead-crime link was brought to public attention by a prominent liberal journalist, writing in a prominent liberal/progressive magazine. As far as I'm aware, there was no huge outcry about this. In fact, the article was widely linked and praised in the liberal blogosphere. I am pretty sure that Drum and the editors at Mother Jones would denounce scientific racism quite vigorously if asked about it. So I think you are overestimating the "chilling effect" produced by a taboo against scientific racism.
The best geography/climate to develop a civilization is not necessarily the best geography/climate to produce high intelligence. Early civilizations arose in places where agriculture was productive enough to generate significant surplus.
Can one detect intelligence in retrospect?
Let me explain. Let's take the definition of an intelligent agent as an optimizer over possible futures, steering the world toward the preferred one. Now, suppose we look at the world after the optimizer is done. Only one of the many possible worlds, the one steered by the optimizer, is accessible to retrospection. Let's further assume that we have no access to the internals of the optimizer, only to the recorded history. In particular, we cannot rely on it having human-like goals and use pattern-matching to whatever a human would do.
Is there still enough data left to tell with high probability that an intelligent optimizer is at work, and not just a random process? If so, how would one determine that? If not, what hope do we have of detecting an alien intelligence?
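As a toy illustration of one possible detector (not a serious proposal), consider comparing how often the recorded objective improves from step to step under a pure random walk versus a greedy optimizer. The landscape, starting point, and step rule here are all invented for the sketch:

```python
import random

random.seed(0)

def trajectory(optimizer: bool, steps: int = 200) -> list[float]:
    """Record a walk on a 1-D landscape f(x) = -x**2 (peak at x = 0)."""
    x, history = 50.0, []
    for _ in range(steps):
        step = random.choice([-1.0, 1.0])
        # The greedy optimizer flips any move that would lower f.
        if optimizer and -(x + step) ** 2 < -x ** 2:
            step = -step
        x += step
        history.append(-x ** 2)
    return history

def improvement_rate(history: list[float]) -> float:
    """Fraction of steps on which the objective did not decrease."""
    ups = sum(b >= a for a, b in zip(history, history[1:]))
    return ups / (len(history) - 1)

# A random walk improves about half the time; the optimizer markedly more.
print(improvement_rate(trajectory(optimizer=False)))
print(improvement_rate(trajectory(optimizer=True)))
```

The catch, of course, is that this detector assumes we already know which quantity the optimizer cares about; with no access to its internals, choosing the candidate objective is the hard part.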
Mildly interesting challenge:
There is a new internet community game taking off called Twitch Plays Pokemon. The concept is simple: set up a server that takes the next properly formatted input ("up", "down", "a button") from a chat window, and apply it - in order, with no filtering - to a copy of Pokemon Red.
This is going about as well as can be expected, with 90,000 players, about a third of whom are actively attempting to impede progress.
So, a TDT-style challenge: beat the game in the fewest steps.
What do you mean by "communicate"? If I send a command, and you observe the result of that command on the game, we've communicated.
If that's allowed, the non-troll case is easy: Wait a random amount of time, then send a command. If yours was the first command to be sent, play the game. If someone else sends a command before you do, do nothing ever again.
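That wait-then-defer strategy can be sketched as a simulation (the player count and the use of continuous random delays are illustrative):

```python
import random

random.seed(42)

def play_round(n_players: int) -> int:
    """Each cooperator draws a random delay; whoever draws the smallest
    delay keeps sending commands, and everyone else goes silent forever.
    Returns how many players end up active."""
    delays = [random.random() for _ in range(n_players)]
    earliest = min(delays)
    # With continuous delays, ties have essentially zero probability,
    # so exactly one player ends up controlling the game.
    active = [d for d in delays if d <= earliest]
    return len(active)

print(play_round(90_000))
```

With real network latency the "delays" aren't continuous, so in practice you'd still get occasional collisions, but the symmetry-breaking idea is the same.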
Daniel Bell's introduction to The Year 2000: A Framework for Speculation on the Next Thirty-Three Years (1969) provides a handy half-prolegomenon for what Robin Hanson called "serious futurism":
...More than forty years ago, Kegan Paul in England and E.P. Dutton in New York published a series of small books, about eighty in number, entitled Today and Tomorrow, in which some outstanding minds of the time made predictions about the future. The titles were romantic and metaphorical, and this provided a clue to the style and contents of the series...
This app has been demonstrated to successfully improve visual acuity in baseball players and performance in game. (Works on the brain, not the eyes.)
Original paper:00005-0)
^ Link formatting is weird, so just copy-paste (Edit: fixed thanks to PECOS-9)
Thought this article on relationships was well-written and enlightening: How to Pick Your Life Partner.
I recently commented on one of my friends' Facebook posts in regards to the Bill Nye/Ken Ham debate. One of the issues I brought up was that Ham's Creationism lacked the qualities that we would usually associate with good explanations, namely what I called "precision", which I defined as:
...Good explanations exclude more possible evidence than bad explanations. Let’s say that you have two friends who collect marbles. One friend collects only black marbles while the other collects every single color marble he can get his hands on. If your plumbing
Reposting for visibility from the previous open thread as I posted on the last day of it (will not be reposting this anymore):
Speed reading doesn't register many hits here, but in a recent thread on subvocalization there are claims of speeds well above 500 WPM.
My standard reading speed is about 200 WPM (based on my eReader statistics, varies by content), I can push myself to maybe 240 but it is not enjoyable (I wouldn't read fiction at this speed) and 450-500 WPM with RSVP.
My aim this year is to get myself at 500+ WPM base (i.e. usable also for leisure rea...
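For intuition about what these rates demand in RSVP terms, per-word display time is just 60,000 ms divided by the WPM rate:

```python
def ms_per_word(wpm: int) -> float:
    """Milliseconds each word stays on screen at a given RSVP rate."""
    return 60_000 / wpm

for wpm in (200, 240, 500):
    print(wpm, "WPM ->", round(ms_per_word(wpm)), "ms per word")
```

So the jump from 200 to 500 WPM means going from 300 ms per word down to 120 ms, which is roughly where subvocalization reportedly starts to become the bottleneck.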
SciShow did a 4 minute YouTube clip on Bayes' Theorem. It could hold the attention of most eight-year-olds and is factually adequate considering the constraints of the medium.
I posted this open thread yesterday because I wanted to ask a question that belonged in an open thread, but the last open thread only ran through February 17th. Is there a policy for who posts open threads, or how they're posted? If there is such a policy, does it apply only to open threads, or to other special threads as well?
I noticed that different people post the open threads over the course of weeks or months when I was searching for them. I'm guessing that the policy is that if a given user notices there is no open thread for the week in which they would like to post, after searching for it, they create it themselves.
Is there a policy for who posts open threads, or how they're posted?
No and yes. And I'm sorry, but you did it badly (I'm only saying this because you asked). So, for the future:
120 gibberish papers were in journals for up to 5 years. They were found as a result of a test for one kind of gibberish created by a program called SCIgen.
For those interested, MITx is starting their intro to programming course today. It's the first part of a 7-course XSeries certificate program.
Is anyone else bothered by the word "opposite"?
It has many different usages, but there are two in particular that bother me: "The opposite of hot is cold"; "The opposite of red is green" — opposite of A is [something that appears to be on the other side of a spectrum from A]
"The opposite of hot is not-hot" "The opposite of red is not-red"
Opposite of A is ~A
These two usages really ought not to be assigned to the same word. Does anyone know if there are simple ways to unambiguously use one meaning and not the...
Does anyone have heuristics for when it's worthwhile to upvote, or downvote, a post? I've had an account on Less Wrong for a while now, but it's only recently that I've started using it on more than a weekly basis, so I suspect I'll be engaging with this online community more. So, I'm wondering what is the up-and-up on, i.e., courteous method of, upvoting/downvoting. I'm aware that this might be a controversial issue, so let's not use this thread for debates. I'm only looking for useful, or appropriate, heuristics for (understanding) voting I might have mi...
There is a lot of noise in voting, so don't overanalyze it. There is a correlation between good comments and upvotes, but unless you get at least -3 on a comment, or perhaps -1 on 5 comments in a row, you should probably just ignore it. Also, upvotes usually mean you did something right, but of course a comment made early in the debate gets more visibility and votes than a comment made late in the debate.
Generally, upvotes and downvotes mean "want more of this" and "want less of this". This is a community about rationality, so you should consider whether the given way of communicating contributes to rationality, or more specifically to building a good rationalist community. Use your own judgement.
Going against the majority opinion... I'd guess it depends on whether the argument brings something new to the discussion. Saying: "you are all wrong because you didn't consider X" (where X is something that makes sense and really wasn't mentioned on LW) will probably be welcome; saying "you are all wrong, because this is against my beliefs / against majority opinion" will not. But here I would expect even more noise than usual.
...By the time I get to
I need new T-shirts. I can never find ones I like, so I'm resorting to making my own slogan T-shirts on the usual design sites. So far I've ordered "NO POEMS FOR YOU, GNOMEKILLER!". What shall I get next?
My friends and I made a trolley problem shirt. (Also Plato's Cave and Prisoners Dilemma jokes)
I am looking into noise reduction options for sleeping - I'm a side sleeper, and the foam insert earplugs I've been using so far are extremely uncomfortable to sleep on. It is surprisingly hard to find a comprehensive guide for this that's not trying to sell you something. Do any of the sleep hackers around here have suggestions?
(If this is more appropriate for the stupid questions thread, let me know.)
How do I verify whether the air quality in a room is bad? I'm concerned that being in a particular room is causing me to sneeze.
Andy Weir's "The Martian" is absolutely fucking brilliant rationalist fiction, and it was published in paper book format a few days ago.
I pre-ordered it because I love his short story The Egg, not knowing I'd get a super-rationalist protagonist in a radical piece of science porn that downright worships space travel. Also, fart jokes. I love it, and if you're an LW type of guy, you probably will too.
Anyone seen that 'her' film yet, the one with Joaquin Phoenix in the lead and directed by Spike Jonze? It's a film about a guy falling in love with an AI. Is it any good?
Edit: to summarize, Robin Hanson thinks it works very well as a Pixar-ish whimsical sentimental movie, but not as a realistic interpretation of how a world with that kind of AI would work, despite getting a couple of things right. Other posters, having seen other Spike Jonze projects, and knowing the lead actor's antecedents, suspect the film might be a bit of a prank.
I feel even more int...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.