If it's worth saying, but not worth its own post, even in Discussion, it goes here.

[anonymous] · 9y · 43 points

The deadliest school massacre in US history was in 1927. Why its aftermath matters now

In the end, 38 children, two teachers, and four other adults lay dead at the school.

I’m not talking about the horrific shooting in Connecticut today. I’m talking about the worst school murder in American history. It took place in Michigan, in 1927. A school board official, enraged at a tax increase to fund school construction, quietly planted explosives in Bath Township Elementary. Then, the day he was finally ready, he set off an inferno. When crowds rushed in to rescue the children, he drove up his shrapnel-filled car and detonated it, too, killing more people, including himself. And then, something we’d find very strange happened.


No cameras were placed at the front of schools. No school guards started making visitors show identification. No Zero Tolerance laws were passed, nor were background checks required of PTA volunteers—all precautions that many American schools instituted in the wake of the Columbine shootings, in 1999. Americans in 1928—and for the next several generations—continued to send their kids to school without any of these measures. They didn't even drive them t

…

Since the perpetrator in 1927 was a school board official who had taken time to prepare the crime, it's unlikely that modern-style security would have helped. If the same thing happened today, it would be harder for people to demand important officials be searched every time they enter the school than to demand random adults be searched. This accounts for some of the difference in reaction.

These things are huge triggers for me. It drives me mad that society has the reaction it does when this event killed as many people as die every eleven seconds. If we had a proportional societal reaction to all deaths, maybe we'd have solved the problem by now.

Edit: I accidentally the math
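For scale, the comparison above can be sketched as a back-of-the-envelope calculation; the worldwide death rate (~56 million per year) and the toll figure of 28 are assumed round numbers for illustration:

```python
# Back-of-the-envelope: how many seconds of worldwide deaths equal the toll?
# Assumes ~56 million deaths per year globally and a toll of 28 (both
# assumed round figures, not authoritative statistics).
deaths_per_year = 56_000_000
seconds_per_year = 365.25 * 24 * 3600                   # ~31.6 million seconds
deaths_per_second = deaths_per_year / seconds_per_year  # ~1.8 per second

toll = 28
interval = toll / deaths_per_second
print(f"{interval:.0f} seconds")  # ~16 seconds
```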

Yes! I really liked this tweet from Thom Blake on the matter:
I did too. I think I retweeted that.
The other side of the coin: how this very distance can be put to sickening use with drone strikes. [http://gu.com/p/3cj6d]
By sickening use do you mean more pinpoint attacks that kill fewer people than conventional means?
By sickening use I mean that I see no way that a large-scale conventional operation in Northern Pakistan would've even been approved - nor a reason to start a military operation there, instead of the U.S. handling the real problem - the unstable, aggressive and brutally incompetent Pakistani government. In my opinion, it ought to have been pressured to provide good administration and good policing in the troubled areas, to eliminate the roots of insurgency and terrorism instead of continuing the cycle of violence. A regime that has nuclear weapons, a modern army and a huge "security" apparatus but can't prevent chaos, poverty and tribal warfare in its own backyard is being coddled just because it's politically expedient for the US. Unless the Americans are willing to blast every single inch of Pakistan, I see no way how drone strikes could be helping the whole mess. But oh, the CIA and the military have some dead insurgents to show for it; no more frags could mean no more promotions and no more huge budgets!
I think you are absolutely right about the optimal response / change in procedures (basically, don't change anything) given these awful events. Given the rarity of these types of events, the expected value of any security program is essentially identical to what it was two weeks ago. It's a crying shame that this kind of cost-benefit analysis is not the universal reaction to proposed changes in policy arising out of unusual events. But it is a logistical fact that faster killing devices kill more people per incident [http://www.slate.com/articles/news_and_politics/human_nature/2012/12/connecticut_school_shooting_semi_automatic_weapons_and_other_high_speed.html] .

This weekend I staged a Les Mis "One Day More" flashmob in our train station with about 20 people. (I'm Javert).

I realized I had been wishing that a flashmob would happen, and finally wised up and realized I should just do it myself. I think most of us underestimate how willing people are to go along with wacky plans, as long as they don't have to have any logistical responsibilities. It also helped to set point people for a couple of different social circles to recruit (alums, work colleagues, an improv group, church friends). This has definitely lowered my reluctance to stage other public spectacles.

Ha! That's the delightful little project [http://www.patheos.com/blogs/unequallyyoked/2012/12/trade-you-a-cookie-for-a-charitable-donation.html] , no?
This is my favorite Les Miserables song. And incredibly awesome.

I'd like to take a moment to boast about my site traffic: gwern.net recently passed 1.5m pageviews with 1m unique pageviews by 973.4k readers.

I began gwern.net on ~3 October 2010, so it took a little over 2 years to reach this point. (I have fewer pageviews and visitors than Overcoming Bias did when it broke 1m visits, but Eliezer and Hanson and co. are some pretty stiff competition!)

LW traffic has been indispensable all the while, both in building up numbers and in criticizing many of my pages and hopefully improving them, so thanks to everyone.

I've finished my potassium sleep experiment: http://www.gwern.net/Zeo#potassium It made my sleep much worse, and I didn't notice any daytime benefit.

EDIT: ditto for the followup

How does this affect the probability that the first superhuman AI to be implemented will be Friendly? I guess it increases it, but I don't know Kurzweil that well.

Why is there a Boing Boing article in Featured Articles on the front page? Who has edit access to the front page and why are they spamming?

Apparently the Featured Articles list comes from http://wiki.lesswrong.com/wiki/Template:Lesswrong:FeaturedArticles , which is maintained by Costanza. Looks like she added it during a routine weekly update, I guess to get some variety to the list? Anyway, I deleted it for now. (You may need to hit "Force reload from the wiki" to see the updated front page.)

And before anyone criticizes Costanza too harshly, I'd like to say that regularly updating the list of featured articles each week is a great thing for someone to do, and I'd like to thank Costanza for being the one to do it. :-)

Kudos to Costanza for doing this nearly every week for over a year. I'm curious, though: how are the front-page articles selected?
Seriously, this. I concede that the article is well-written and seems to fit well with LW principles and values, but why is it "Featured" (and by whom?) and why is it trolling the front page? It doesn't seem to be written by a prominent LW user that I know of, unless some alias shenanigans are happening or they're an "offline" LWer.
[anonymous] · 9y · 11 points

I've changed my mind on a few big things recently, or at least clarified my doubts. Somewhere along the way, I noticed that the correct side (as I judge) on controversial issues tends to use evidence and careful logical argument, and the incorrect side tends to use indignation, invocation of taboos, straw-manning, and scoffing.

I find logical arguments more convincing than social authority, so a lot of the fact that the "correct" arguments use logos (logical argument) instead of ethos (social argument) could be selection. This explains the above, but fails to explain the extreme polarization on some issues where one side uses mostly logos and the other uses mostly ethos.

So if logos has a reliable correlation with the truth (either because logos-tending people more reliably produce truth, or because only truth can be armed with logos), that has some interesting implications. For one, you can get a quick estimate of the correct side in a controversy just by surface-level syntactic analysis.

That idea has disagreed with previous beliefs in interesting ways that have not yet been resolved:

  • I ran into some holocaust deniers, who tended to cite "facts" (that may or may n

…
How people form their opinions matters. The bottom-line-writing process [http://lesswrong.com/lw/js/the_bottom_line/], after all, is not "how do you defend your conclusions?" but "how did you form your conclusions?" In other words, not "what do we hear from X's towards non-X's in arguments?" but "on what basis do people acquire X views in the first place?" (We do not live in a world where these are equivalent. If they were, then all rationalists would automatically become perfectly persuasive speakers — which is not the case.)

When I think of a person becoming a feminist, I think of a person who had a lot of previous personal experiences with gender relations, different treatment of men and women, etc.; who is then exposed to feminist ideas, and finds that these ideas offer ways to describe or explain experiences they could not relate (or explain the importance or value of) in non-feminist ways of talking about (e.g.) upbringing, relationships, workplaces, etc.

When I think of a person becoming a Holocaust denier ... well ... I don't really have much of a model of that, since I've never observed the process — but I don't get the sense that it has a lot to do with previous personal experiences. Rather, it might have to do with their having read some Holocaust anecdotes and history (in school) for which they are now accepting the explanation "these anecdotes were written by a fraudulent conspiracy" rather than "these anecdotes are pretty much accurate". I'm not sure what the motivation is, though, if it isn't either ① politics, or ② the usual "I know a secret" found in a lot of conspiracy theory.

There's a list of introductory feminism sources in the Feminism 101 article at Geek Feminism Wiki [http://geekfeminism.wikia.com/wiki/Feminism_101].
Not quite the same thing, but the post as a whole made me think of Yvain's Why I Defend Scoundrels part 1 [http://squid314.livejournal.com/333353.html] part 2 [http://squid314.livejournal.com/335286.html]. Edit: Fixed link.
Damn. Yvain nails it again. His model of the situation is spot-on, but I'm scratching my head about something: He argues that signaling, etc. cause the socially-dominant position to atrophy and drift to maximum absurdity, but then he simultaneously claims that the socially dominant position is reliably correct. Maybe it's just my prior from having recently had my faith in the Cathedral shaken, but this seems like one more good argument why the socially dominant position is bullshit. Curiosity is a wonderful feeling! I actually don't know which way this will go, and it feels great. I'm getting a lot of practice at changing my mind and getting very epistemically self-skeptical. Fuck I love being a rationalist... On to part two... (EDIT: your link has a stray trailing underscore that causes a 404)

How much should I care about online privacy?

As far as I can tell, most people's fears in relation to privacy are motivated by an intuitive 'ick' feeling, not by any projection of future harms. But as we know, reversed stupidity isn't intelligence, so the question of how much one should care about privacy, especially online, is still open.

What do you do in terms of preserving privacy, and why? E.g. do you keep your real-world and online (or different online) personas distinct, and if so, why?

Edited for typos and clarity

It's hard to judge. When I started seriously participating in stuff online, I kept heavily pseudonymous so I could disavow it later (I was still growing up) and out of interest in crypto & security issues. This came in handy when I earned some enemies on Wikipedia who sought to 'out' me; one group went so far as to call up universities looking for info on me in order to harass me and in the best-case scenario, get me fired from a job. Naturally, they failed. More recently, I learned of death threats; I believe the threat to be very minimal, but it's still not a particularly happy thought.

The point being that when I started, I didn't seriously expect stalkers and threats; but if I had started 'public', I wouldn't have had the option to retract all the relevant private info.

Seriously? What the hell? I forget how many crazy people there are on the internet. I'm feeling a little more paranoid now. How high do you estimate the risk is and does it link to any particular topics?
This isn't uncommon at all. A lot of bloggers who discuss anything controversial receive death threats, rape threats, or other threats of violence. Fortunately, the threateners are almost always Internet Tough Guys, all keyboard and no fists. I've been targeted by an online stalker once over stuff on Wikipedia — fortunately he was incompetent and the personal information he posted about me was obsolete. (Ironically enough, he thought of himself as a privacy activist. Self-righteousness is strong in that one.) An ex-girlfriend of mine who blogs about mental health issues has been repeatedly harassed, had private email messages leaked and posted online, and has been threatened repeatedly. And some organizations (e.g. Scientology) have quite a reputation for attacking people who criticize them online (or in print) ....
I was aware of this for specific things (e.g. blogging about gender and sexuality issues, or the weirder parts of reddit or 4chan), but I'd always thought of Wikipedia as a nice, friendly place. Basically I thought it was a risk you took on in certain areas, not a general background thing.
The vast majority of activity on Wikipedia is nice and friendly. But some of that minority, well... (More in high-conflict areas than elsewhere, yes, but crazy people are everywhere. Articles get written on obscure subjects because no matter what the topic is, someone is obsessive about it. But people go crazy about unexpected topics, because no matter what the topic is, someone is obsessive about it...)
Wikipedia doesn't have a culture that promotes being awful to people, the way that some sites do — but it's a high-value target.
I'd be comfortable with an estimate like <1/1,000. Well, there's always something which caused them to do it. But the topic isn't always useful. In one area, I was entirely unsurprised; in another area, I was completely blindsided and still find it hard to believe; in a third, I was moderately surprised.
On a scale of 'attempted privacy invasions/total posts on the internet'? Can I ask what the unexpected ones were? What did they intend to do with personal information? Just contact real life people and be rude about you? I'd guess if you have a good real life reputation already that would be ineffective.
No, that the death threat would result in any harms to me. No. You'd be surprised. One Wikipedia admin, Kate IIRC, quit Wikipedia entirely because she thought her stalkers like Daniel Brandt could get her fired. Another, PhilWelch IIRC (who's a LWer now), had some unfortunate encounters with the police, courtesy of his stalkers, demonstrating that even if they can't get you fired they can do a distressing amount remotely.
"Privacy" is a single word that people use to mean many, many different things. In "A Taxonomy of Privacy" [https://www.law.upenn.edu/journals/lawreview/articles/volume154/issue3/Solove154U.Pa.L.Rev.477\(2006\] .pdf), Daniel Solove identifies sixteen distinct sorts of harms that people group under the notion of harms to "privacy": * Surveillance — someone spying on you, tapping your calls, etc.; * Interrogation — someone making you answer questions, testify against yourself, etc.; * Aggregation — someone assembling dossiers or profiles about you; * Identification — someone tracking you from here to there, or making you do so yourself by carrying papers; * Insecurity — someone gaining illicit access to records legitimately kept about or for you; * Secondary use — someone taking records that were made for one purpose, and turning them to another purpose; * Exclusion — someone refusing to tell you what records are kept about you, or to correct mistakes; * Breach of confidentiality — someone leaking something that you told them in confidence; * Disclosure — someone leaking something that damages your reputation or your safety; * Exposure — someone intruding on you in activities that are conventionally private, such as defecation or grieving; * Increased accessibility — someone making it easier for others to get records about you that were hard to obtain before; * Blackmail — someone threatening to expose something about you in order to gain power over you; * Appropriation — someone using your image or name to promote a product or other goals; * Distortion — someone saying false or misleading things that hurt your reputation; * Intrusion — someone entering your home or private places without your consent; * Decisional interference — someone denying you information, or manipulating you, regarding personal matters such as sexuality and reproduction.
I was using it in the loose sense of "how much effort should I put into making it difficult to ascertain my real-life identity". That is a very useful taxonomy; I may reference it in another project I'm working on analysing why people value privacy (and whether they should). The main motivation seems to be the 'icky' feeling people get at the thought of being observed, not any expected harms. Which means they are unwilling to accept small violations of privacy that would have massive social benefits (e.g. supplying police with all citizens' fingerprints and DNA would massively reduce the number of unsolved crimes, with very minimal possibility for abuse).
This might be overselling it. Only about a quarter of criminal offences are recorded by the police [http://fds.oup.com/www.oup.com/pdf/13/9780199597376.pdf] , and they already solve some crimes despite incomplete fingerprint & DNA databases. So even if expanding the databases meant the police solved every crime they recorded, the number of unsolved crimes would fall by at most 20% (unless there were a reason to expect reporting/recording of crimes to rise as well).
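The bound in that reply can be checked with toy numbers; the ~25% recording rate is from the linked source, while the current clearance figure is an assumed placeholder:

```python
# Toy model: how much could universal fingerprint/DNA databases cut
# unsolved crime, given that only ~a quarter of offences are recorded?
total = 100            # all offences committed (normalized)
recorded = 25          # ~25% recorded by police (per the linked source)
already_solved = 5     # assumed placeholder for crimes solved today

unsolved_now = total - already_solved   # 95
unsolved_best = total - recorded        # 75, if every recorded crime were solved
reduction = (unsolved_now - unsolved_best) / unsolved_now
print(f"{reduction:.0%}")  # ~21% under these assumptions
```

The exact figure moves with the assumed current clearance rate, but it stays well under the "massively reduce" framing as long as most offences never reach the police at all.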
I act so that if someone linked my online persona to my real life, their estimation of my real life would improve (I only say cool or impressive things online, basically.)
If that's the case why aren't you using your real name?
doesn't want to taint his online persona with his boring real self
As drethelin pointed out, if my online persona would make my real life look better, then my real life would make my online persona look worse.
It's not zero-sum. It's not possible to find the things that you say in meatspace via Google. Let's say someone you know in real life doesn't know that you are the cool rationalist that you project on LessWrong. That person is like you. He is also on LessWrong. He also uses a nickname. You both know each other, but you don't look like cool rationalists to each other. If one of you becomes identifiable, then the other can say: "Hey, let's meet up to have some cool conversation about rationality." That real-life relationship can then also improve your online relationship. Your new real-life rationalist friend is now likely to give your posts on LessWrong more attention, both commenting on them and voting. Your online life also gets improved. There's synergy. Being willing to take real-life responsibility for the actions of your online persona can increase the amount of trust that your online persona gets.
Sure, it isn't. But I've been using the moniker shokwave for nearly ten years now, so there's some cost to changing. I will, however, examine those costs more closely because you make a convincing argument.
One thing I forgot: If you don't think you are seen as a cool person in real life, what's the reason? Are you hiding yourself to avoid rejection from the people around you? Do you think that you can only express yourself online? Is there something that you could change in meatspace to communicate your personality more effectively? If so, it would be worthwhile to investigate the costs of such a behavior change. Many geeks spend way too much energy on hiding their personality.
Oh, I'm pretty cool in real life too! It's not at all that I can only express myself online; it's precisely the opposite, that I can choose to not express myself online. This lets me craft my responses much more - it's like being given five minutes to come up with each line in a conversation.
The main practical reason I've found for caring about online privacy is to make it harder for people to obtain your passwords. For example, don't put your mother's maiden name on Facebook if it's one of the security questions someone can use to get into your bank account, but there are less dumb versions of this. The main difference between my meatspace and online personas is that I try not to say things online that I anticipate regretting in 20 years, and instead I try to say things that other people will think are cool so they will offer me jobs or something (one reason I use my real name). I'm less filtered in meatspace.
On LessWrong I use my first name and the first two letters of my last name as my nickname. That means that if someone Googles my name, he won't find my LessWrong posts. If someone who knows me reads LessWrong, though, I think he can deduce my identity. When possible I also use an avatar image of myself to make it easier to recognise me. I'm doing Quantified Self community building, and multiple people who are into Quantified Self participate on LessWrong. I might say things on LessWrong that offend somebody, but I think that people here are generally able to accept people with different viewpoints and won't hold something I write here against me outside of LessWrong.

My Quantified Self online identity is linked via speaking engagements and mainstream media interviews with my real life. Other links between my online identity and offline identity are found on Facebook. A bunch of my Facebook friends come from online sources. I was roommates for a few months with a guy where the first contact was online, and where Facebook was the medium that let me know he had moved to my city and needed a place to live. With another good friend of mine it was similar. We had minimal online contact; then we became Facebook friends. In physical space we met for the first time accidentally in a Toastmasters club and were surprised that we were Facebook friends. Today he's one of my best friends.

Especially if you have interests that aren't shared by most people, turning online relationships with people who share your interests into real-world interactions is very worthwhile. The fewer walls you build around yourself with privacy protections, the more likely you are to interact with people you know online in real life. So far I don't know of a relationship that I lost because of something that I wrote online. If something I write online offends someone I know offline to the extent that the person wants to end the relationship, we probably don't have much shared interest anyway. I wa

Is Kessler syndrome relevant to existential risk? Kessler syndrome is the danger that if low-earth orbit becomes sufficiently crowded, a bad collision could result in a cascade effect. This would in a few hours or days destroy all satellites in LEO and would possibly render LEO unusable for decades. While this isn't an existential risk by itself, it may be relevant. Many existential risks can be dealt with in part by simply not having all our eggs in one basket - nuclear war, nanotech, and many others fall into this category. Meanwhile, some existential risks such as large asteroids can only be detected and stopped by having well-developed space capabilities.

So is Kessler syndrome relevant to existential risk? Does this not substantially matter because there's already a fair bit of resources going into preventing it?
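The cascade intuition can be illustrated with a toy model: collisions scale roughly with the square of the object count, and each collision adds many trackable fragments. All constants below are made-up illustrative values, not real orbital data:

```python
# Toy Kessler-cascade model: a quadratic collision rate plus fragment
# multiplication produces slow growth followed by runaway.
def years_to_runaway(n0=1000.0, rate_coeff=3e-8, fragments=1000, limit=10):
    """Years until the debris population exceeds limit * n0.
    All parameters are invented illustrative values."""
    n, year = n0, 0
    while n < limit * n0:
        collisions = rate_coeff * n * n   # mean collisions this year
        n += collisions * fragments       # each collision adds ~1000 fragments
        year += 1
    return year

# Growth is nearly flat for decades, then the n**2 term takes over.
print(years_to_runaway())
```

In this toy model, halving the fragment yield per collision roughly doubles the time to runaway, which is one way to see why mitigation proposals focus so heavily on preventing fragmentation events rather than catching individual pieces.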

I'm not an engineer, but I find it hard to believe that we couldn't clean up LEO at reasonable cost - if we really wanted to. We don't now because it is expensive with no obvious pay-off, but returning LEO to usability is a pretty big pay-off.
Given the amount of debris, and recent trends in launch costs, launch rates, space project completion times and budgets... I'd say decades sounds entirely reasonable. If we had a more serious launch infrastructure in place, it might be done in a few years. Right now, we might get a useful pilot result in a few years if we started a high-priority project today. It would be more years before the project was operating at a useful rate, and yet more to finish cleanup. Part of the problem is that very small objects moving at high velocity are problematic, and there simply is no easy way to catch them without creating more such objects. At present, many serious proposals actually involve handling every piece of debris individually. That's expensive, to say the least. Other proposals involve complicated stuff with deploying large objects like thin films, threads, etc., which is an activity with a very poor success rate thus far.
My point is that taking those trends as a given after a Kessler syndrome catastrophe - when substantially all technology that relies on LEO satellites no longer works - is a strange presumption. Current trends contain very little information about what would happen if LEO were unusable. If GPS stopped working, the US government would fund its replacement solely to regain the military advantage it provides. Likewise, telecommunications companies would be irrational if they refused to devote a substantial portion of their potential revenue to re-building the capacity that earns them billions in revenue. In short, the lack of resources currently devoted to clean-up may not be maximizing expected value - but there are substantial resources that would become available if failure to provide them would leave LEO unusable for multiple decades. Taxes would be raised, private bonds would be floated, environmental regulations would be waived or amended, but LEO would be made usable.
The problem is not the funding, it's the lack of anyone to give it to. The Apollo program had von Braun. We don't have a replacement for him. Musk is close, but unproven. The answer is not any of the other existing companies, or today's NASA, or JPL. That's not an organization you can just create with money. No one today knows how to build what we need in that time frame.
So there are a few proposals for dealing with this sort of thing, but most serious proposals are more preventative in nature. There are good engineering solutions for picking up old satellites and large pieces of debris. But small pieces of debris are not as easily dealt with - there have been some proposed solutions, but no one is clear on whether any of them are actually viable; at most, right now there are a few toy models and back-of-the-envelope calculations that suggest some solution will work. Right now, large nets seem like a promising idea, but the problem of puncturing is serious.
At least, a lot of chunks would de-orbit, due to the energy loss.

This is a sockpuppet I have created. It is a ROT13ed version of my username, for comments that I would prefer not to have associated with a simple Google search of my real name - generally personal things that I don't mind random strangers on the internet knowing. I have only ever thought of two or three things that I would want to use this account for, and the alternative is simply not making those comments. As long as I never use this account for voting purposes, do you as a LW member feel this is an acceptable use of sockpuppetry?

Is this acceptable? [pollid:373]

I always thought "sockpuppetry" was only an issue if you used it to make it seem like more people agreed with you than actually do (e.g. by posting similar comments on the same article using multiple accounts). I see no problem with using multiple accounts as long as you don't use them for this sort of abuse.
Some LWers don't like Clippy [http://lesswrong.com/user/Clippy/] or other user-as-message accounts. I would predict some of those folks wouldn't like this incredibly-weak-anonymizer account either. The closest normative label in common usage for the disapproved behavior is "sock-puppet" - but you are correct that the ordinary meaning of that label does not cover the behavior at issue.
Hmm. I don't like Clippy because whoever maintains it isn't doing a good job (sorry, I mean a clippy job).
This is probably not effective. Similar attempts have been done in the past, and failed.

Can you give more details?

I think this paper (while mathematically interesting!) is rather oversold. A positive result to their proposed experiment says one of the following is true:

A) we're simulated on a cubic grid
B) we're not simulated, but True Physics has cubic structure
C) (other non-obvious cause of anisotropy)

Not only is it very difficult in my mind to distinguish between A and B; think what a negative result means - one of:

A) we're simulated on a non-cubic grid
B) we're simulated with a more complex discretization that deals with anisotropy
C) we're not simulated, and True Physics doesn't have a cubic structure

I think the only thing a cubic anisotropy can tell us about is the structure of True Physics, not whether or not that true physics is based on a simulation.
... unless cubic anisotropy is more likely in a simulation than in not-a-simulation. How could we know that, though?
[anonymous] · 9y · 6 points

Some perspectives from Prudentius

A delightful and short new post by Moldbug on Christianity and the Fall of Rome. The link to Peter Frost's take is very much worth reading as well. The post is also related to Frost's Genetic Pacification paper.

Did Christianity and evolution leave Late Antiquity Romans particularly vulnerable to bad consequences of the typical mind fallacy when dealing with non-pacified populations?

[anonymous] · 9y · 6 points

I often find myself rooting for the villain in pop-culture stuff, especially if he's the sort of villain that has been putting together a massive plan that obviously took time, dedication, and quite a lot of hard work and ambition. The hero is often, by contrast, a slacking power-of-friendship layabout who has power/talent due to an intrinsic property: "the Chosen One".

Villains are sympathetic to me even when they really shouldn't be; in Avatar: The Last Airbender (spoilers follow) Sver Ybeq Bmnv'f tbny jnf yvgrenyyl gb ohea gur jbeyq, erohvyq…

[This comment is no longer endorsed by its author]
I don't generally root for villains, but I do see a cultural problem of typically seeing virtue as passive. In a lot of fiction, virtuous people aren't doing much except for ordinary life and reacting to villains.
This goes back very far. In a world where there's little hope of improving the status quo, evil is the main thing that moves things. That's why, for example, the Ten Commandments have a lot more thou-shalt-nots than thou-shalts. This is probably connected to a general societal attitude of change being bad. Another example is how in Shakespeare a lot of the villains are more interesting than the heroes; Iago and Othello would be the prototypical example. And in the two cases where Shakespeare made somewhat competent villains protagonists (Richard III and arguably Lady Macbeth), there are scenes which almost feel inserted to remind the audience that these are bad people who will suffer for their actions.
[anonymous] · 9y · 6 points

There's an old Google tech talk by Geoff Hinton about restricted Boltzmann machines that was my introduction to a bunch of topics in machine learning, and it still strongly informs a lot of my thinking about cognition. The main example is about labeling images from the MNIST handwritten digit database. Then, starting around 23:20, he shows that his RBMs are also generative; he can fix the state of a high-level unit in the network (one that normally discriminated images of 2s), crank forward his inference algorithm, and get out images of handwritten numbers…
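The mechanic he demonstrates can be sketched in a few lines: clamp one hidden unit and run block Gibbs sampling to draw visible configurations. The weights below are random and untrained, and the layer sizes are assumptions, so the samples here are noise; with MNIST-trained weights this same procedure yields digit-like images:

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 784, 64      # 28x28 image; hidden size is an assumption
W = rng.normal(0, 0.01, (n_visible, n_hidden))  # untrained weights
b_v = np.zeros(n_visible)          # visible biases
b_h = np.zeros(n_hidden)           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(clamp_unit=0, steps=100):
    """Block Gibbs sampling with one hidden 'label' unit clamped on."""
    h = rng.integers(0, 2, n_hidden).astype(float)
    for _ in range(steps):
        h[clamp_unit] = 1.0                               # keep the unit fixed
        p_v = sigmoid(W @ h + b_v)                        # P(v = 1 | h)
        v = (p_v > rng.random(n_visible)).astype(float)   # sample visibles
        p_h = sigmoid(W.T @ v + b_h)                      # P(h = 1 | v)
        h = (p_h > rng.random(n_hidden)).astype(float)    # sample hiddens
    return v.reshape(28, 28)

img = generate()   # a 28x28 binary sample from the (untrained) model
```

The bipartite structure is what makes this cheap: given the hidden layer, all visibles are conditionally independent (and vice versa), so each Gibbs step is two matrix multiplies.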

Have you seen Prof. Hinton's neural networks class on Coursera, or equivalent? If you can't get access to the materials now (since I don't think there's a scheduled re-run of the class yet), I can make them available to you if you want them.

The Sherlock Game

For those of you who watch the BBC's Sherlock, here's a rationalist's game: try to come up with hypotheses other than those the protagonist proposes to explain the evidence, and compare them in terms of relative likelihood. To give you an example of how it's done, from Pratchett:

[Vimes] distrusted the kind of person who'd take one look at another man and say in a lordly voice to his companion, "Ah, my dear sir, I can tell you nothing except that he is a left-handed stonemason who has spent some years in the merchant na

... (read more)

I was reading the comic "Heavenly Nostrils" yesterday, and it was a pretty good illustration of the idea of Mundane Magic.

On luminosity

Through extensive self-observation, I finally understood that familial dynamics were reinforcing maladaptive thoughts that I was actively trying to remove. Thinking very carefully about what my situation would look like from someone else's perspective bolstered my suspicion that it was likely emotionally abusive. As such, I (with extensive assistance from my own social circles) have managed to remove myself from my childhood home.

While not necessarily useful in a carve-reality-at-the-joints sense (1), I have found the co... (read more)

Could anyone comment on the market for biomedical engineers? I'm specifically interested in regenerative medicine, so the common advice of "get a degree in MechE or EE and apply to biomedical companies" doesn't seem like it would apply.

I know Zvi has thought about this, as well as Scott Adams and probably some Berkeley LessWrongers: I am wondering if there is a name for the "field", which may or may not exist, that encompasses optimal modern housing and architecture, especially in the sense of communal/group living. If there's not an actual field, is there some sort of consensus or document similar to the NYLW Paleo document? I've found that I really enjoy living with other people and hanging out at houses with a bunch of roommates, so I want to look into doing this in the best possible way.

I don't know if there's a consensus, but this sounds vaguely like the concept of "intentional communities".
Also "cohousing", which sometimes entails adapting or remodeling architecture to better suit the community needs.
What/where is this "Paleo document"?
I can't seem to find it. I hope I didn't imagine it!

Commencement speech by Ira Glass-- specifically about reporting, but it's also rather a lot about how to be agenty-- find ways to do your work better, fight the pull of mediocrity, do enough things so that luck has a chance to work.

Does somebody happen to have an overview of the current consensus on dual n-back? My understanding is that the impact on IQ is not solidly established. What about working memory? Is there solid evidence for transfer? Is it worth a) learning more about it, or b) actually spending time on training if you have a cognitively demanding job (analysis/programming)? Thank you!

Gwern has a comprehensive FAQ on dual-n back [http://www.gwern.net/DNB%20FAQ].
Sorry, it wasn't clear from how I asked the question, but I wanted a two-sentence summary. Gwern's FAQ is a monumental piece of work, but the question is whether it's even worth reading a 50k-word document about it.
Not unless you have ADHD or another condition plausibly exacerbated by lack of WM, or an interest in meta-analysis. The appendix, however, is of interest to all LWers who read medical or psychological studies (which is a lot of them).
Thank you very much. I'll have a look at the appendix (the FAQ, along with more of your writings, is on my ever-expanding reading list...). Thank you for all the work and thought you put into it!

Canadatheism: gentle whimsy or mind-killing challenge?

An untitled Christmas poem, in the style of Dr. Seuss, by Yvain.

[-][anonymous]9y 2

I'm looking for two things I think I found on this website.

One is a philosophy paper that argued that deontology was merely the rationalization of unconscious moral heuristics (or something like that...). Meaning that people have unconscious moral instincts that they arrive at unconsciously, then they try to justify them, therefore deontology. I think I remember getting access to it through LessWrong, though I don't remember, nor do I remember who it's by (maybe Joshua Greene)? Anyway it was really interesting and I'd be grateful if someone pointed me to i... (read more)

One is probably The secret joke of Kant's soul [http://www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-KantSoul.pdf], which is indeed by Joshua Greene.
[-][anonymous]9y 1

Found this gem in a singularity thread on 4chan/b/:

If the first ultraintelligent machine is based in any way on human thought then it will just be retarded really really fast.

Quick thought experiment for you all: A machine has been invented that can view the past (type in place/time/date and you can see what happened then with perfect accuracy). Would you allow the police to use it to investigate crimes?

The easiest answer, the one that leapt to my mind with a moment's thought, is "Yes, if they get a warrant." That would probably be the answer that fits best into the current American legal system. (I have even less understanding of the laws of other countries, so I can't make any claim about that.)
If I have the power to allow or disallow such a thing, do I also have the power to arrange enforcement of rules saying that's all the police can do with it?
I think police investigation would be a very minor concern compared to the massive social upheaval that the device would cause. A very similar idea was explored in The Light of Other Days [http://en.wikipedia.org/wiki/The_Light_of_Other_Days] - not a great book plot-wise, though.
I read the original short story in one of the Asimov collections but didn't realise it had been made into a full length novel.
The classic short story E for Effort [http://en.wikipedia.org/wiki/E_for_Effort] also discusses this.
What's the time limit? Can you just view what's happening 5 seconds ago anywhere in the world?
Yes, why not?
Yes. Do we allow independent historians to use the device? What happens to conventional accounts of WWII, etc?
Yes, modulo the Fourth Amendment rights of the participants. Under most circumstances, using the evidence of the machine in court would require obtaining a warrant in advance of turning it on.
Are we to assume it has the many, many safeguards needed to prevent it being used for other purposes?

Why is it fun to be bad? I've heard actors say it's more fun playing the bad guy. Also, I find the thought of stealing millions of dollars and getting away with it thrilling, but not very nice.

It may be that goal-orientation without made-up rules is fun; as a good person you need to follow some of the more stupid moral norms that made sense a puny two hundred years ago.
The more enemies you have the more impressive it is when you win. Villains automatically have society as an enemy.

I hope that deathism remains a powerful ideology, such that if radical life extension becomes possible, I can opt out with minimal social consequence. I may be generalizing from one example, but strongly suspect that LW overestimates the number of people who would genuinely prefer to live forever relative to those who could be socially coerced into it. I also suspect this is already true in the case of ordinary life extension, for which no deathist safeguards are in place.

I for one am willing that those whose considered, sober preference is death be allowed to die. I don't think this requires deathism, just voluntarism; I think keeping deathism a powerful and current force in our memeplex would be overkill (so to speak) for this desideratum.

Sometime in January I will create "Best of Rationality quotes, 2012 Edition". Should I put it in Main or Discussion? The first two editions (2009-10, 2011) were in Main.

Go with Main. It's now a tradition, and is of general interest.

What are people's thoughts on the Sapir-Whorf hypothesis (that the nature of language affects how people think)?

If it is true, are there lessons for teaching rationality in different linguistic communities, or for modifying language to increase rationality?

I know of no reliable evidence that differences between human languages correlate significantly with, let alone cause, differences in human cognition. That said, I do expect it's easier to coordinate activities within a group that has language devoted to the nuances of such activities, and that's as true of rationality as anything else. But I also expect that sort of jargon is no harder to construct in one natural language as another.
It seems that weak formulations of it can be confirmed. Have you read "Through the Language Glass" by Deutscher? Choosing better words for some situations does train you in some skills. It looks like people distinguish colours more quickly if they have different names for them. For example, a Russian speaker will notice the difference between "closer to sky blue" and "closer to navy blue" faster than an English speaker, because of the habit of classifying them as different colours. Deutscher cites a few different studies of that kind. Apparently, language can also change your default reactions (how you interpret omissions): you can set up a scene on a table, lead a person to another room, and ask which table has the same scene as in the first room; whether their language uses north/south or left/right for path descriptions shows up in the answers. As for applications, it seems to suggest what you would try anyway: if you want to improve awareness of something, encourage saying it out loud every time.

There are a few short stories/essays that I'm pretty sure I came across on lesswrong but have been unable to find with my brief, lazy googling. If you know any of these off the top of your head it would be much appreciated.

  1. Story about a kingdom tormented by a dragon that demanded villagers be regularly sacrificed to it. People attempted to rationalize the situation and make excuses for the state of affairs. Eventually something is done about the dragon and the people lament that it took them so long. The dragon symbolized death.

  2. Essay about a brain in a

... (read more)

You may be thinking of Nick Bostrom's "The Fable of the Dragon-Tyrant", Daniel C. Dennett's "Where Am I?", and David Deutsch's "The Final Prejudice", respectively. Best wishes, the Less Wrong Reference Desk.

Thank you! Was just looking for #1 and wasn't sure what it was called. Searched LW for "dragon sacrifice". This came up. Wonderful.
Perfect, thanks!

Culture-toxic fundamentalism and the love for austerity

Where does that come from? The anti-hedonistic love of scarcity, spareness, composure, measure, rigour, effort, sacrifice, strain, toughness, drought, and grim-faced resolution? The hatred for art, for music, for wine and other drugs, for sex, for food? The notion that sloth, lust, gluttony, and pride are not just sins, but mortal sins? It's not exclusive to one culture; it comes and goes with the ages and pops up all over the planet. Entire cultural movements surge with the ... (read more)

I've just found out that this mentality has a "cool" Western-subculture counterpart: "straight edge". It seems to place a lot of focus on abstinence from certain things and "cleanliness". A form of hygiene? Many of them are active, alternative artists, so being "straight edge" is not the same as being a "stick in the mud", "party pooper", or "wet blanket"...
A Just So Story: Somewhat autistic, focused, competent leaders gain power, because they don't care about the same worldly temptations as neurotypicals. From their point of view, everyone else's failure to lead a moral or effective life is the fault of these "temptations", rather than a consequence of different values. They try to simultaneously simplify, purify, and optimize society by fighting these ills.
Okay. Well, then, why, precisely, is that a bad thing? I mean, we could do without the arts and the luxuries and all the vacuous, transient nonsense that people with free time like to do, but it seems this kind of rigourism also gets in the way of stuff that's actually useful, such as philosophical, scientific and historical/political investigation. Could it be that gratuitous fun things are intrinsically linked to curiosity, inquisitiveness, and a drive to know the truth? Are we looking at some sort of communicating vessels [http://en.wikipedia.org/wiki/Communicating_vessels] here?
I have a notion that asceticism is a pretty basic human drive-- the desire to feel that you're overcoming desires[1]. It seems to pop up in various forms in a lot of cultures.

I think there are a couple of things going on there. One is that the ability to forego pleasure is frequently useful. It's possible that asceticism is simply too much of a good thing.

Another angle is the notion that a basic tool of gaining power over people is to convince them you're so right about the world that they should take your orders about avoiding basic pleasures. Once you've pushed them that far, it's easy to control them in other ways. (Notion acquired from RAWilson's description of Reich's ideas.) I think there's something to this, and I also think we're so rich that various sorts of asceticism become ways of showing off.

[1] There's such a thing as meta-asceticism-- overriding the desire to resist desire. It's essential to recovery from anorexia.
Eric Hoffer had some insights about this: in The True Believer, he hypothesizes that the primary thing that causes mass movements to crystallize is a rejection of the present, brought on by feelings of intense frustration at the current state of the would-be fanatics' lives. Hoffer further hypothesizes that many early converts to mass movements are unsuccessful creative types (the whole "what if Hitler had gotten into art school?" thing), because nothing is more frustrating than watching other people's artistic and cultural endeavors flourish while your own attempts at self-expression languish. This causes them to reject the worldly pleasures of the present as meaningless; after all, who has time "for art, for music, for wine and other drugs, for sex and for food" when there's an entire world for your movement to conquer and purify? Hoffer also notes that the active phase of most mass movements corresponds with a "cultural dark age" situated in between flowerings of the arts.
More from Hoffer:
Tell me more.
I wish I could, but that's as much as I remember. If you want to do more research, that's Robert Anton Wilson and Wilhelm Reich. Unfortunately, Wilson's books tend to overlap each other a lot, so I don't remember which one explained Reich's ideas. When Reich was writing, the tight social controls were on sex. I have another notion that a similar process is going on now with food.
Did you know humans frown on weight variances? If you want to upset a human, just tell them their weight variance is above or below the norm. [http://tvtropes.org/pmwiki/pmwiki.php/Main/YouAreFat]
[-][anonymous]9y 0

Some cool new videos by Aurini, a fellow LessWrong user and reader:

Transhumanism, Mind Replication, Right Wing Attitudes, & Left Wing Opulence

An interesting take on Em World Scenario and Forager vs. Farmer distinction by Robin Hanson. It is obviously a political take however.

Third-rate stuff. I only listened to the end to have solid ground to criticize it. Well, that and a case of akrasia. Speaking of the low-quality luxuries of our era, extended half-assed pontificating is mental and emotional white sugar.

Onwards from the simple pleasure of insults to the more complex pleasure of addressing issues. Aurini splits the world into the left-wing and the right-wing minds, with all the conscientiousness on the right-wing side. He says that right-wingers mistrust pleasure and are dedicated to work and heroism, whereas the left thinks nothing is better than pleasure. He says the left wins because we're so temporarily wealthy that being the coolest person rather than the most useful person gets victory. This does not resemble the people I know-- there's conscientiousness on both sides of the political divide. Furthermore, in the spirit of Less Wrong, if you need to seem cool to win, and winning is important, then part of conscientiousness is learning how to look cool.

He talks about ems being happy to spend ten years doing nothing but investigating a mathematical theorem. Part of what's wrong here is that ems are pretty much unmodified human minds living in computers. They're going to need varied stimulation as much as carbon humans do, though the whole process will be cheaper, and Aurini is probably right that there will be selection for people who need less down time. On the other hand, they might not want difficult art-- it takes more processing to appreciate. Maybe the ability to really enjoy old jokes is what's going to survive.

The other thing is that he wants to preserve creativity, but that takes room to make mistakes-- not a world lived on the Malthusian margin.

I have a bunch of cards in an Anki 2.0 deck. How do I generate their reverses and add them to my deck? That is, if I have a card 'bestehen' -> 'to exist', I would expect to see a new card, 'to exist' -> 'bestehen'. There's a FAQ about it for 1.2, but I can't find anything on 2.0.

University of Southampton decided to stop teaching social work due to lack of credible research in the discipline. (May 2012) http://www.examiner.com/article/university-of-southampton-decides-to-stop-teaching-social-work

Beware! I think the blogger who wrote that article has an axe to grind,* and he's misinterpreted the University's letter. The key bit of the post is: But that "lack of credible and excellent international research" quotation doesn't appear in the university's letter — it's the blogger's own misleading paraphrase. The actual letter is rather more vague, but it looks like the university's closing the programme because it's not generating enough important-looking research to get a solid research ranking. This is very different to there being no credible social work research in general!

The tipoff for me is the letter's reference to "internationally excellent research", which is a UK academic buzzword [https://google.com/search?q=internationally excellent research] (buzzphrase?). It comes from the UK's Research Excellence Framework [https://en.wikipedia.org/wiki/Research_Excellence_Framework] (formerly the RAE [https://en.wikipedia.org/wiki/Research_Assessment_Exercise]), which tries to grade individual university departments on their research. Presumably the University of Southampton, expecting its social work department to get a substandard REF rating, has preemptively scrapped the department to focus resources on other departments ("We are therefore channelling our resources into the University's greatest research strengths").

* He writes that eugenics has become the objective of social services, and claims to have "created the discipline of social service crimes research". But Googling the phrase "social service crimes research" turns up no research, just chatter on Facebook, YouTube, and blogs. He's just dignified his disapproval towards social workers by putting a fancy label on it.
O.K., thanks.
[-][anonymous]9y 0

Is there a way to "sticky" a post on Google Groups? For example, if I want to (privately, for now) track local meetup attendees and topics for X period of time, I'd want to sticky a post for the entire duration, just in case anyone wants to retain anonymity or plausible deniability or whatever.

I believe so. If you look at https://groups.google.com/forum/?fromgroups#!forum/brain-training [https://groups.google.com/forum/?fromgroups#!forum/brain-training] you can see my FAQ is stickied.
Apparently only the group owner(s?) can do that [http://support.google.com/groups/answer/1046523?hl=en&ref_topic=2459438]. On a side note, "Display at the top", while easy to understand, was not anywhere close to a phrase I was looking for, and I definitely skipped right over it at least once.
Why should anyone but the group owners be permitted to do it?
You didn't actually say how to do it, only that it was possible. So I searched again with confidence, and that's what I found out.

About Newcomb's problem + something non-deterministic:

If the contents of box B are increased so that B > A, it seems that by basing the choice of one-boxing or two-boxing on a quantum coin toss, one could reduce Omega's predictive accuracy from 100% to a mere 50%.

Where A has $1000 and B has $2000, the average payoff from the coin toss would be $1500 (the four equally likely outcomes being $0, $1000, $2000, and $3000), versus $2000 by one-boxing and $1000 by two-boxing in a way Omega can predict.

Has something like this been considered as a possible resolution?
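The expected values in this variant can be sanity-checked with a quick calculation. This sketch assumes that the coin strategy makes Omega's prediction an independent fair coin, so all four (choice, prediction) pairs are equally likely:

```python
from itertools import product

# Variant from the comment: box A always holds $1000; box B holds $2000
# if Omega predicted one-boxing, else $0.
A, B = 1000, 2000

def payoff(two_box, predicted_two_box):
    b = 0 if predicted_two_box else B   # B is filled only on a one-box prediction
    return (A + b) if two_box else b

# Coin strategy: choice and prediction are independent fair coins,
# so average over all four equally likely outcomes.
outcomes = [payoff(c, p) for c, p in product([False, True], repeat=2)]
coin_ev = sum(outcomes) / len(outcomes)

# Predictable strategies: Omega's prediction matches the choice.
one_box_ev = payoff(False, False)
two_box_ev = payoff(True, True)
```

Here `coin_ev` comes out to $1500, against $2000 for predictable one-boxing and $1000 for predictable two-boxing, so randomizing beats predictable two-boxing but still loses to plain one-boxing.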

If you believe in MWI, both outcomes happen, though in different worlds, and Omega knows about both worlds, and stuffs the boxes accordingly.
That's a slightly different problem. How would it be a resolution to the original problem?
Yeah, it wouldn't be a way to win, since in the original problem you could throw a coin and base your decision on that. An average gain of $500,500 isn't so bad, but not nearly as good as $1,000,000 from one-boxing. You're right, it's not a resolution to the paradox, but if the situation is changed it's a possible way of winning. I guess I'm looking for ways to beat Omega, and I'm trying to figure out if this would be one of them. Something like "harnessing the power of random"?
It's called a mixed strategy Nash equilibrium [http://lesswrong.com/lw/2vs/mixed_strategy_nash_equilibrium/]. It's a very interesting topic on its own, but it doesn't have a whole lot to do with the decision theory paradoxes that Omega is used to show off.
Is it possible for Omega to develop a response to mixed strategies such that the original problem remains, pretty much unchanged?
I can't cite sources off-hand but this suggestion is reasonably standard but taken to be a bit of a cheat (it dodges the difficult question). For this reason it is often stipulated that no objective chance device is available to the agent or that the predictor does something truly terrible if the agent decides by such a device (perhaps takes back all the money in the boxes and the money in the agent's bank account).
Usually, it's just "choosing using a randomizing device will be treated the same as two-boxing."
In other words, the question becomes one of "Omega has two boxes box A and box B, which it fills based on what it thinks you will do. Box A has $1000 and box B has either $0 or $1000000 depending on whether Omega predicts you will take both boxes or only box B, respectively. If Omega predicts that you will do your best to be unpredictable, it will do something bad to you. Should you take box A, box B, or try to be unpredictable?" That question doesn't seem as interesting.
[-][anonymous]9y -2

Call supernatural metanatural then recycle old Catholic/Hindu/Taoist arguments in LW jargon. #WinningAtLessWrong #lifehacks

Well? Go ahead.
https://www.google.com/webhp?q=metanatural+site:lesswrong.com [https://www.google.com/webhp?q=metanatural+site:lesswrong.com] Not a whole lot of results. Or ... any.
Metanatural hasn't been used yet as far as I know. This is why I chose it.

On 15 December 2012 06:01:26PM, I received an unsolicited and badly-spelled PM from a "ZakaulRehman" (who does not appear to have sent any non private messages) on ... subatomic physics, I think. Has anyone else received similar messages?

I suggest using the report feature on the comment.
Oh, he's agreed not to contact me on the subject again. I was just curious if I was the only one or if this was some sort of sitewide spam.