I'm happy to see a push for increased empiricism and scientific effort on LW. But... I wish there were more focus on the word "how," and less focus on the word "we."
Three articles come to mind: To Lead You Must Stand Up, First, Try to Make it to the Mean, and Money: The Unit of Caring. (Only the first part of the second article will be directly relevant, but the latter parts are indirectly relevant.)
That is:
First, there's insufficient focus on what concrete steps you are taking to move the culture in that direction. (Writing blog posts exhorting action does not count for much. Do you think The Neglected Virtue of Scholarship would have shifted community actions as much if lukeprog hadn't followed it up by writing posts with massive reference lists?) The reference to yourmorals.org is fine, but what made that site important was a particular feature, not its goal or its structure. If you've thought of a similar feature that someone (ideally you) could code up, great! I will send as much karma as I can towards the person that makes that happen. But this is even more general than a call for better / easier rationality tests and exercises, and thus even less likely to cause co...
To Lead You Must Stand Up
A little over a week ago, two other LWers and I started doing research on the possibilities of an online rationality class. The goal of the project is to have an official proposal as well as a beta version ready in a few months. Besides hopefully spreading friendly memes and generating publicity, we aim to figure out whether this can be used as a tool to make progress on the difficult problems of teaching and measuring rationality. The best way to figure that out is to try using it that way as we iterate.
I name-dropped the proposal in the OP, but since we started so recently it felt odd to write an article about it first.
Third, why try to train citizen scientists when we could make better use of specialist scientists? Gary Drescher posted here, but hasn't in over a year. What would make LW valuable enough to him for him to post here? XiXiDu managed to attract the attention of some experts in AI. What would make LW valuable enough to them for them to post here?
I kind of meant this under "attracting the right crowd" but I should have made it explicit.
...But I don't see you addressing the engineering problems with moving from one culture to the ot
Huh, that's not a bad idea, actually. Has there ever been an attempt by the SIAI or CFAR (I'm not sure which is responsible for what, to be honest) to Kickstarter some project related to rationality or AI? Something with a clear, measurable goal, like, "give us a million dollars and we'll give you a Friendly Oracle-grade AI" -- though less ambitious than that, obviously.
I'm curious to hear an example or two of the sort of experiments you had in mind (and the models they'd be testing) when writing this article. A brief, none-too-thorough attempt on my own part kept hitting a couple of walls. I agree with your sentiment that simple surveys may fall prey to sampling biases, and I wonder how we would acquire the resources to conduct experiments methodologically sound enough that their results would be significant, the way CFAR is doing them, with randomized controlled trials and the like.
It seems to me that most of the trepidation I associate with this problem comes from the difficulty of designing good experiments that are actually worth performing and would actually discriminate between two plausible hypotheses, rather than from not knowing how to get enough research subjects.
What clearly can be useful is to create a list of models and ideas we've already assimilated that haven't been really tested or are based on research that still awaits replication.
I like the idea and agree this could be useful, but our first focus should be on reproducing well-established results to make sure that Less Wrong has the ability to do so. If we succeed, then it makes sense to proceed cautiously to replicating weak results or testing new hypotheses. If we can't even reproduce well-established results, then we lack the ability to investigate more speculative ideas.
We face numerous additional burdens over experimentalists in academia, two of which stand out to me: (1) a currently unknown sampling bias and response rate, and (2) a general inability to blind our Less Wrong subjects to the hypotheses and predictions of our experiments.
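The first burden, self-selection, is easy to underestimate. A minimal sketch of why it matters, using an entirely made-up response model (the logistic link between a trait and the probability of responding is an assumption for illustration, not data about LW readers):

```python
import math
import random

random.seed(0)

# Hypothetical population: a trait ("score") is standard-normally distributed.
population = [random.gauss(0, 1) for _ in range(100_000)]

def respond_prob(score):
    # Assumed response model: people with a higher score are more likely
    # to answer our survey (logistic in the score). Illustrative only.
    return 1 / (1 + math.exp(-score))

# Respondents self-select according to the response model above.
sample = [s for s in population if random.random() < respond_prob(s)]

true_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"true mean   = {true_mean:+.3f}")
print(f"sample mean = {sample_mean:+.3f}")
```

Under this toy model the naive sample mean lands noticeably above the population mean, even with a huge sample: more data doesn't fix a biased selection process. That's why knowing the response rate and who responds matters more than raw sample size here.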
It may be instructive to take a look at some of the recent papers on online psychometric assessment in personality psychology. They have worked on establishing the effects of online assessment (as opposed to traditional paper-and-pencil or face-to-face assessments) and proposed guidelines for doing experiments onlin...
LessWrong Science: We do what we must because we can.
For the good of all of us, except the ones who are dead and haven't opted for cryonics.
Incredibly good idea! Why:
I was told by a professional developmental psychologist that it's often the psychology assistants who have the creative ideas - the psychology training can get in the way. This may be due to problems like anchoring bias (anchoring on all the existing information from their education), bias blind spots (I've seen these caused by the ego that can come with a high-level degree), and confirmation bias (which can only happen if you have preconceptions to confirm), among others.
Specifically because many LessWron...
Let's see how the predictions made by your model hold up!
Name three examples. Or even one to start. Using a template like:
It seems that you are suggesting turning LW into some sort of alternative circuit for scientific publication.
That seems an inefficient thing to attempt. If you do publishable research, publish it in a peer-reviewed academic journal or at a conference.
You might use LW as a place to exchange preliminary ideas and drafts. Personally, I'm a bit skeptical that it would yield a significant increase in productivity: scientific research typically requires a high level of domain specialization, it's unlikely that you would find many people with the relevant expertise on LW, and the discussion would be uninteresting for anybody else. But you might want to give it a try.
It seems that you are suggesting turning LW into some sort of alternative circuit for scientific publication.
You can call it that. I call it refining the art of human rationality. I don't think building new knowledge is something that magically only happens in a box designated Academia. Remember, SI did years of research basically outside it; they only started publishing so they could attract more talent and as a general PR move, not because it was the most efficient way to do it. We are already an alternative circuit for scientific publication. This is exactly what we do every time we publish an article carrying some novel take on human rationality or some instrumentally useful advice. We are just bad at it.
That seems an inefficient thing to attempt. If you do publishable research, publish it in a peer-reviewed academic journal or at a conference.
You don't seem to have read the related articles I cited. I strongly suggest you do.
I would also recommend you read Why Academic Papers Are A Terrible Discussion Forum. As to your invocation of the somewhat broken formal peer review process that came into existence in the 20th century and is sadly still with us (I recommend you search Vl...
No, but Academia is optimized for that and has hundreds of years of demonstrated effectiveness and accumulated experience.
Is it perfect? No.
Actually, the features of academia I'm criticizing are much younger than that. The modern peer review system is something Einstein didn't have to deal with, for example. If you think hundreds of years of scientific progress are a good track record for a system, I have some news for you...
"Citizen science" is a fairly new term but an old practice. Prior to the 20th Century, science was often the pursuit of amateur or self-funded researchers such as Sir Isaac Newton, Benjamin Franklin, and Charles Darwin. By the mid-20th Century, however, science was dominated by researchers employed by universities and government research laboratories. By the 1970's, this transformation was being called into question. Philosopher Paul Feyerabend called for a "democratization of science."[33] Biochemist Erwin Chargaff advocated a return to science by nature-loving amateurs in the tradition of Descartes, Newton, Leibniz, Buffon, and Darwin—science dominated by "amateurship instead of money-biased technical bureaucrats."[34]
I'm guessing this post was downvoted because of its author, not its content, because I can't find anything wrong with the latter.
But the fact that he got his degree with a boring trivial paper, when he had several of his greatest papers in hand, suggests that there was no fixing them.
Yes, this is evidence that he wasn't sure those papers could be fixed.
Getting a group of people to function together so that their output is smarter than any one of them is hard: a deep and unsolved problem.
Exactly, coordination is hard. Perverse incentives, Goodhart's law, agency dilemma, etc.
The normal outcome is that their output is dumber than any one of them.
See most non-profit organizations ever.
The scientific community solved this problem from the late seventeenth century to the late nineteenth or early twentieth century. Although engineering continues to advance, and more powerful tools such as DNA readers continue to advance science, science itself seemed to run out of puff after Einstein.
While I think you are right for most fields, I would argue we see a relatively healthy culture and even functional institutions when it comes mathematics since they have be...
How about testing our ideas?
Actually judging clever articles by the rent they demonstrably pay in anticipated experience? This idea is too radical Konkvistador. Don't you know that hand waving or reading papers is fun and testing is like ... work?
A young and learning member calling reading papers "fun" without a second thought is already impressive progress when compared to the epistemic attitude of most people around us, I'd say.
LW posters have noticed many times that the most instrumentally rational people, hailed for making the world better or at any rate leaving a mark on it (Page & Brin, Warren Buffett, Linus Torvalds, maybe Thiel; among politicians either Gandhi, Churchill or Lee Kuan Yew - they wouldn't have got along! - and maybe some older ones like Alexander II of Russia or the people behind the Meiji Restoration...), rarely behave like Eliezer or Traditional Rationality would want them to. They exploited some peculiar factors, innate or unintentionally acquired advantages (genes, lucky upbringing, broad life experience) that LW attempts to emulate through some written advice and group meetings. Most haven't even heard of Bayes or can't name a couple of fallacies! :)
At this stage, if an LW user actually uses the letter and spirit of LW materials to gain rent in some complicated, important area (like education, career, interpersonal relations, "Luminosity", fighting akrasia) - well, that's ...
Maybe the result is that they stunt growth, but to infer intention from that is just an agency-fantasy. I would guess that even the bureaucrats who actually think about the result have good intentions.
Eh, the Prussian school system was explicitly designed to create soldiers, and stunting intellectual growth is a part of that. It's not much of a stretch to call it intentional.
I doubt that many school officials or politicians today know about the influences of the Prussian school system on e.g. the United States school system, or would guess that their present systems bear features deliberately designed to stunt intellectual growth.
I suspect that they mostly see the system that they were themselves educated in as normal by default, and only think to question the appropriateness of features that are specifically brought to their attention, and then only contemplate changing them in ways that are politically practical and advantageous from their positions. Expecting them to try to design and implement a school system that best meets their stated goals is like expecting a person to specify to a genie exactly how they want their mother removed from a burning building so as to save her life. The problem and its solution space simply don't fall within the realms that they're inclined to actually think about.
You are postulating quite the conspiracy, though.
Not really. To militaristic Prussia of the time, creating good soldiers was simply the same as creating good citizens, and was considered a worthy goal. No conspiracy required, just doing what seemed obviously correct at the time. And then the Prussian system was so 'advanced' and 'modern' and 'successful' that others copied it.
American experts did not all agree with the 'military' goal, but it was believed by the relevant experts that the same sorts of virtues applied to factory workers.
Now people try to actually educate children via this system. It's like making minor tweaks to a torture device and wondering why it is ineffective at relieving headaches. You put some ibuprofen on the screws, tighten them some more, and subjects report slightly less intense headaches than last time.
John Taylor Gatto won the New York State teacher of the year award in 1991 (New York state's education website). His ambition to be a great teacher led him to the realization that the system itself is broken and he was so disgusted with it that he resigned. The claims that John Taylor Gatto makes are much worse than that they're defaulting to the teacher's password. You have no idea. Consider this: You obviously value rational thought. Learning about things like logical fallacies and biases is a no-brainer to you, right? Why are so many people learning them here, at LessWrong, for the first time? From what I know of American public schools, most of them don't teach these. What could cause our school systems to teach us square dancing and rote memorization of thousands of spellings of words for the sake of polish, but leave out basic pieces required for rational thought? Ask yourself this:
If you were making the curriculum, and you knew the kids would be turned loose into the world, complete with the right to vote at 18, would you find any excuse good enough to let them out with no familiarity with logical fallacies, biases, etc.?
If your answer to this is "no" you alr...
Man, Gatto spurred off so much thought for me. That was in my early 20's so it's not all readily coming to mind right now, but wow. I feel like... he explained so much. I'm not sure why you say he's inspiring. So much of life that didn't make any sense began to make sense after that. But that was one of the worst existential crises I've ever experienced. To realize that your whole life you had been stifled by the thing you thought was teaching you: abominable. There are horrors worse than death. That is one of them.
When I was 17, I decided to tear my whole reality apart because I noticed that it contained too many flaws. This was excruciating and terrifying. When I was 18, I had the undignified experience of realizing I could not allow myself to vote because I wasn't taught to think critically and was still learning to. When I was in my early 20's, I discovered logical fallacies and went "SOMEBODY WROTE THIS ALL DOWN!!?!!?? Why didn't I know about this!?" I was a mess of a young woman - it took years of effort to put together a decently competent mind after all that.
Failing to teach reasoning skills in school is a crime against humanity.
My sense is that the education system struggles with the transition between learning-to-read and reading-to-learn.
Somewhat off-topic: high schools anywhere don't seem to explicitly teach the one essential skill a college student truly needs: learning how to learn:
How do I figure out what I need to know for a given class, how do I figure out what I do not know, and how do I go about learning it efficiently?
is not a question students learn to ask or answer. Everyone who completes a post-secondary education tends to come up with some sort of implicit heuristics that get them through; few do it consciously.
The bad of New Atheism: Children playing with memetic weapons, with the safety off.
Lack of patience, overconfidence, more about signalling intelligence than about persuading religious people, lack of empathy. Those are the problems that came immediately to mind when I thought about it. That's not to criticize all of New Atheism, though. I think I like the basic idea of it.
Possible use for this new thing: seeing as we have had much recent discussion about what behaviour is, and is not, creepy, we could create a long list of potential behaviours and study which ones people find creepy. And whether it makes a difference if you ask first. (Question 492: So is it okay if I ask you out on a Wednesday... while wearing a tutu, and there are exactly 3.9 people in the room?)
Once we had this useful dataset, we could evaluate potential rules for social interaction ("okay, under your plan, 42% of women under 25 say they'll hate you, but with my scheme, 50% say they'll like you!") etc.
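The "evaluate potential rules" step is just computing approval rates within subgroups. A toy sketch with entirely made-up survey rows (the field layout, the rule names `plan_A`/`plan_B`, and the numbers are all invented for illustration):

```python
# Hypothetical survey responses: (age, gender, rule_shown, reaction).
responses = [
    (22, "F", "plan_A", "dislike"),
    (24, "F", "plan_A", "like"),
    (23, "F", "plan_B", "like"),
    (21, "F", "plan_B", "like"),
    (24, "F", "plan_B", "dislike"),
    (30, "M", "plan_A", "like"),
]

def approval_rate(rows, rule, pred):
    """Share of respondents matching `pred` who reacted 'like' to `rule`."""
    hits = [r for r in rows if r[2] == rule and pred(r)]
    return sum(r[3] == "like" for r in hits) / len(hits)

def under_25_women(row):
    return row[0] < 25 and row[1] == "F"

rate_a = approval_rate(responses, "plan_A", under_25_women)
rate_b = approval_rate(responses, "plan_B", under_25_women)
print(f"plan_A approval among women under 25: {rate_a:.0%}")
print(f"plan_B approval among women under 25: {rate_b:.0%}")
```

With real data the interesting part would be the uncertainty on those proportions (a handful of respondents per cell tells you almost nothing), which is exactly why a large shared dataset would be valuable.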
There are many publicly available data sets and plenty of opportunities to mine data online, yet we see little if any original analysis based on them here. We either don't have norms encouraging this or we don't have enough people comfortable with statistics doing so.
In my case, I'm comfortable with statistics but don't know where to find the data for questions that interest me. The fact that much research is nearly inaccessible if you're not affiliated with a university or other large institution is also a problem.
You think you have a good map, what you really have is a working hypothesis
I most certainly don't think that. I'm not so sure many people on LW think that. The part of your map that talks about others' maps looks suspect.
Intellectual speculation isn't bad in itself. Actually, it's fun! It's only bad when people don't know that they're in fact merely speculating. I have nothing against people of LessWrong banding together to do experimental science but that wouldn't mean LessWrong as a whole was progressing on the path towards greater virtue. It would on...
Getting it done:
First, we could pre-test them in an inexpensive way to figure out which ones are worth the money for independent research. Then, because LessWrong gets several million hits a year, an appeal could be placed on LessWrong pages asking for donations to pay for high-quality research from an organization with credibility.
Related to: Science: Do It Yourself, How To Fix Science, Rationality and Science posts from this sequence, Cargo Cult Science, "citizen science"
You think you have a good map, what you really have is a working hypothesis
You did some thinking about human rationality, perhaps spurred by intuition or personal experience. Building it up, you did your homework and stood on the shoulders of other people's work, giving proper weight to expert opinion. You write an article on LessWrong; it gets upvoted, debated, and perhaps accepted and promoted as part of a "sequence". But now you'd like to do that thing that's been nagging you since the start - you don't want to be one of those insight junkies consuming fun, plausible ideas and forgetting to ever get around to testing them. Let's see how the predictions made by your model hold up! You dive into the literature in search of experiments that have conveniently already tested your idea.
It is possible there simply isn't any such experimental material, or that it is unavailable. Don't get me wrong: if I had to bet on it, I would say it is more likely than not that there is at least something similar to what you need. I would also bet that some things we wish were done haven't been so far and are unlikely to be for a long time. In the past I've wondered whether we can expect CFAR or LessWrong to do experimental work in the future to test many of the hypotheses we've come up with based on fresh but unreliable insight, anecdotal evidence, and long fragile chains of reasoning. This will not happen on its own.
With mention of CFAR, the mind jumps to them doing expensive experiments or administering long questionnaires to small samples of students and then publishing papers, like everyone else does. It is the respectable thing to do, and it is something that may or may not be worth their effort. It seems doable. The idea of LWers getting into the habit of testing their ideas on human rationality beyond the anecdotal seems utterly impractical. Or is it?
That ordinary people can band together to rapidly produce new knowledge is anything but a trifle
How useful would it be if we had a site visited by thousands or tens of thousands of people filling out forms or participating in experiments submitted by LessWrong posters or CFAR researchers? Something like this site. How useful would it be if we made such a data set publicly available? What if, in addition to this, we could mine data on how people use apps or an online rationality class? At this point you might be asking yourself whether building knowledge this way is even possible in fields that take years to study. A fair question, especially for tasks that require technical competence; the answer is yes.
I'm sure many at this point have started wondering about what kinds of problems biased samples might create for us. It is important to keep in mind what kind of sample of people you get to participate in the experiment or fill out your form, since this influences how confident you are allowed to be about generalizations. Learning things about very specific kinds of people is useful too. Recall that this is hardly a unique problem; you can't really get away from it in the social sciences. WEIRD samples aren't weird in academia. And I didn't say the thousands or tens of thousands of people would need to come from our own little corner of the internet; indeed, they probably couldn't. There are many approaches to getting them and making the sample as good as we can. Sites like yourmorals.org tried a variety of approaches, and we could learn from them. Even doing something like hiring people from Amazon Mechanical Turk can work out surprisingly well.
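One standard way survey researchers partially correct a skewed sample is post-stratification: reweight each demographic cell so it counts in proportion to its share of the target population. A minimal sketch with invented numbers (the age brackets, population shares, and the over-young sample are all hypothetical):

```python
# Assumed population shares by age bracket (illustrative, not real census data).
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

# Hypothetical online sample, skewed young: (age_bracket, yes/no answer) pairs.
sample = (
    [("18-29", 1)] * 60 + [("18-29", 0)] * 40 +   # 100 young respondents
    [("30-49", 1)] * 20 + [("30-49", 0)] * 30 +   # 50 middle-aged
    [("50+", 1)] * 5 + [("50+", 0)] * 15          # 20 older
)

n = len(sample)
sample_share = {g: sum(1 for s, _ in sample if s == g) / n
                for g in population_share}
# Weight = how under- or over-represented each cell is in the sample.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(a for _, a in sample) / n
weighted = (sum(weights[g] * a for g, a in sample)
            / sum(weights[g] for g, _ in sample))
print(f"raw estimate      = {raw:.4f}")
print(f"weighted estimate = {weighted:.4f}")
```

Here the raw estimate (0.5) drops to about 0.37 once the older, less enthusiastic cells get their proper weight. Of course this only corrects for the variables you stratify on; it can't fix self-selection on unmeasured traits, which is the harder part of the problem.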
LessWrong Science: We do what we must because we can
The harder question is whether the resulting data would be used at all. As we currently are? I don't think so. There are many publicly available data sets and plenty of opportunities to mine data online, yet we see little if any original analysis based on them here. Either we don't have norms encouraging this, or we don't have enough people comfortable with statistics doing it. Problems like this aren't immutable. The Neglected Virtue of Scholarship noticeably changed our community in a similarly profound way, with positive results. Feeling that more is possible, I think it is time for us to move in this direction.
Perhaps just creating a way to get the data will attract the right crowd; the quantified self people are not out of place here. Perhaps LessWrong should become less of a site and more of a blogosphere. I'm not sure how, and I think for now the question is a distraction anyway. What clearly can be useful is to create a list of models and ideas we've already assimilated that haven't really been tested, or are based on research that still awaits replication. At the very least this will help us be ready to update if relevant future studies show up. But I think that identifying any low-hanging fruit, designing some experiments or attempts at replication, and then going out there and trying to perform them can get us much more. If people have enough pull to get them done inside academia without community help, great; if not, we should seek alternatives.