“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”

--Michael Curzi, summer 2011 minicamp participant

Who: You and a class full of other aspiring rationalists and world-optimizers, from around the world.

What: Two 3-day weekend minicamps and one 8-day minicamp, filled with hands-on activities for applying rationality to your life, your goals, and the making of a better world.  (See details in the FAQ.)

When and where: We're running three camps, so that we can do this for three sets of participants: May 11-13 and June 22-24 for the 3-day camps, and July 21-28 for the eight-day camp, all in the San Francisco Bay Area.

Why: Because you’re a social primate, and the best way to jump into a new way of thinking, make friends, and accomplish your goals is often to spend time with other primates who are doing just that. 

Other reasons:

  • Hang out and explore the Bay Area with two dozen other people like you who are smart, interesting, and passionate about rationality.
  • Attend bonus sessions about style, body language, and confidence-building.
  • Get help charting out career paths; and, entirely optionally for those interested, connect with folks at the Singularity Institute about optimal philanthropy.


Instructors: Eliezer Yudkowsky, Anna Salamon, Julia Galef, Andrew Critch, Luke Muehlhauser, Michael Smith

Cost:  $650 for the three-day programs; $1500 for the week-long program.  This includes lodging[1], meals, and tuition.  

(Note that this *still* isn't quite enough to make running minicamps sustainable in the long run; lodging + meals at retreat centers start at around $90 per person per night, the "three-day camps" include four nights, and each workshop takes a staff of about 5 full-time people, most of us at $3k/month, for over a month beforehand, counting curriculum development time (plus miscellaneous expenses).  We are trying to strike a compromise between "charge enough that we can run more camps" and staying affordable, especially for our start-up phase; costs will probably go up in following years.)

Three days (or a week) isn’t long enough to learn rationality, but it's long enough to learn how to learn rationality, and to get some momentum toward doing so.

Come meet us, and see what you can do.

Apply now.

Frequently Asked Questions:

1.  I’m older.  Should I still apply?

Yes!  We're aiming for a more diverse crowd and would love to add your wider set of experiences and skills.

2.  I’d like to come, but I’m not sure you’ll accept me.  Should I still apply?

Absolutely!   You can fill out our form in as little as 10 minutes.  What’s the harm?[2]

3.  I’d like to come, but I can’t afford it.  Should I still apply?

Yes, you should definitely apply.  A limited number of scholarships will probably be available this time, and more may be available later.

(There's also an option on the application form if you want to apply but can't make any of the times - this just says that you want to be part of future minicamps and makes sure we have your application details.)

4.  What will we do, exactly?

We're still working out the details.  In our current model:

  • Daily schedule: Every day, you'll have five hours of core workshop sessions (mostly exercises, divided into morning and evening sessions), meals shared with other participants, and shared activities such as soccer, poker, karaoke, and trips to Bay Area sites.
  • Rationality: You'll practice many specific techniques (e.g. Fermi calculations, applying Bayes' theorem and knowledge of cognitive biases to daily life, seeing how using fungibility can boost your goal achievement); develop a map of your rationality strengths and gaps; and learn how to continue learning rationality after the program.
  • Social effectiveness:  Reading and using body language; developing a fashion sense; improving social courage; and understanding why social reality is important.
  • Individual meetings:  You'll be able to schedule one-on-one appointments to discuss career paths you may want to take (we can help with statistics on earnings in different professions, and strategy for getting in); how to start a LW meet-up or similar community; and, optionally for those interested, how to get involved in existential risks-reducing research and action.
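(To give a concrete flavor of the "applying Bayes' theorem to daily life" practice mentioned above: the core arithmetic fits in a few lines. The scenario and numbers below are invented purely for illustration, not taken from the curriculum.)

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Invented example: you start out 25% confident a colleague is annoyed
# with you.  They send a terse reply, which you judge 3x as likely if
# they're annoyed (0.6) as if they're not (0.2).
print(posterior(0.25, 0.6, 0.2))  # -> 0.5
```

Much of the exercise value is in the habit of asking "how much more likely is this evidence if my hypothesis is true than if it's false?", not in the arithmetic itself.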

5.  I’m new to all this.  Will it make sense?

If you’ve read at least fifteen posts from the core sequences, yes it will.  If you haven’t: why not read them now?

We’ll also aim for an atmosphere in which everyone is free to make mistakes and to try things, and in which people are receptive to a wide range of skill levels.

6.  I’ve already read the Sequences seventeen times, and also I’m a self-made billionaire with three PhDs.  Will I learn anything new?[3]

We hope so.  We’re covering a good range of material, with much more of a focus on *practice* and *exercise* than in the Sequences, incorporating new lessons learned since the LW material was written, and with some instructors who've developed their own takes on rationality.

7.  What evidence is there that I'll be glad I went?

After last year's minicamp, participants completed an anonymous exit survey.  (With the instructions: "We're asking you these questions to learn how to run camps; please be honest; it'll help us more if you're accurate than if you're positive.")  Here are their answers to the most relevant questions:

  • In answer to “Zero to ten, are you glad you came?”, the median answer was 10 (mean was 9.3).
  • In answer to “Zero to ten, will your life go significantly differently because you came to mini-camp?” the median answer was 7.5 (the mean was 6.9). [This was the response that most positively surprised me.]
  • In answer to “Zero to ten, has your epistemic rationality improved?”, the median answer was 7 (mean 6.9).
  • In answer to “Zero to ten, are you more motivated to learn epistemic rationality, than you were when you came?”, the median answer was 8.5 (mean 8.1).
  • In answer to “Zero to ten, have you become more skilled at modifying your emotions and dispositions?”, the median answer was 7 (mean 6.3).
  • In answer to “Zero to ten, are you more motivated to modify your emotions and dispositions, than you were when you came?”, the median answer was 9 (mean 8.3).
  • In answer to “Zero to ten, have you gained social skills since coming?”, the median answer was 7.5 (mean 7.2).
  • In answer to "Zero to ten, did you like spending time with the other participants?", the median answer was 9 (mean 8.8).

We also asked participants for testimonials -- statements designed to be shown to others, in case they wanted to recommend such camps.  They wrote:

“This was an intensely positive experience. This was easily the most powerful self-modification I've ever made, in all of the social, intellectual, and emotional spheres. I'm now a more powerful person than I was a week ago -- and I can explain exactly how and why this is true.

At mini-camp, I've learned techniques for effective self-modification -- that is, I have a much deeper understanding of how to change my desires, gather my willpower, channel my time and cognitive resources, and model and handle previously confusing situations. What's more, I have a fairly clear map of how to build these skills henceforth, and how to inculcate them in others. And all this was presented in such a way that any sufficiently analytical folk -- anyone who has understood a few of the LW sequences, say -- can gain in extreme measures.”

--Matt Elder / Fiddlemath

“I expected a week of interesting things and some useful tools to take away. What I got was 8 days of constant, deep learning, challenges to my limits that helped me grow. I finally grokked that I can and should optimize myself on every dimension I care about, that practice and reinforcement can make me a better thinker, and that I can change very quickly when I'm not constrained by artificial barriers or stress.

I would not recommend doing something like this right before another super-busy week, because I was learning at 100% of capacity and will need a lot of time to unpack all the things I learned and apply them to my life, but I came away with a clear plan for becoming better. It is now a normal and easy thing for me to try things out, test my beliefs, and self-improve. And I'm likely to be much more effective at making the world a better place as well, by prioritizing without fear.

The material was all soundly-researched and effectively taught, with extremely helpful supplemental exercises and activities. The instructors were very helpful in and out of session. The other participants were excited, engaged, challenging, and supportive.

I look forward to sharing what I've learned with my local Lesswrong meetup and others in the area. If that's even 1/4 as awesome as my time at the Mini-Camp, it will make our lives much better.”

--Ben Hoffman / Benquo

“I really can't recommend this camp enough! This workshop broke down a complex and intertwined set of skills labelled in my brain as "common sense" and distinguished each part so that I could work on them separately. Sessions on motivation, cognition, and what habits to build to not fool yourself were particularly helpful. This camp was also the first example that I've seen of people taking current cognitive science and other research, decoding it, and showing people what's been documented to work so that they can use it too. It feels to me now as though the coolest parts of the sequences have been given specific exercises and habits to build off of. This camp, and the people in it, have changed my path for the better.”

--David Jones / TheDave

You can also read the full testimonials from everyone who chose to give one.

Apply now

(You can totally fill out the application in just 10 minutes, so you might want to fill in the blanks right now -- we'd like to announce the first acceptances (for May) in the next week.)

[1] More exactly, we provide a bed in a shared room at a house or retreat center rented by SIAI.

[2] Sometimes people say they’re “afraid of wasting our time” by sending in an application.  In a word, no.  If you’re interested in us, we’re interested in you.  It takes just seconds to read someone’s form, and our experience shows that many of our highest-value people have been the ones who hesitated to apply.

[3] Okay, fine, this isn’t really a frequently asked question.  But seriously, we’ll be covering a lot that isn’t in the sequences -- and the flesh-and-blood experience of meeting other aspiring rationalists is hard to duplicate.

ETA:  CMR is still looking for good teachers and curriculum designers.  If you're interested, please especially consider coming to a minicamp; we're hoping to find some good hires there.

ETA2:  We will probably have answers to all applicants within about two weeks (i.e., by April 16 or so), with answers to the May folks probably earlier than the others.  If for some reason you need your application processed *faster* than this, please shoot me an email: annasalamon at gmail.


238 comments

I have a high opinion of the minicamp (after observing fiddlemath before and after the camp, anecdotally I'd say he "leveled up" in notable ways that would be worthwhile for me), and I'll probably apply. That being said:

This post gives off bad vibes to (my mental model of) outsiders- I wouldn't be comfortable showing it to a non-LessWrong person and saying "This is what I'll be doing". I'm holding the post to a pretty high standard, because signaling matters a lot for an event where you're asking money from people and putting them through an intensive program (it pattern-matches things that people are wary of, from "multilevel marketing seminar" to "Christian retreat").

Some suggestions:

  • Providing an estimated cost breakdown (from last year) rather than a vague statement ("most of it is meals and lodging") would go a long way toward showing that whatever this is, it's not an SIAI fundraiser.
  • A specific example of an exercise from last summer's minicamps would be much better than a description of how awesome the exercises are in general, both for reassuring people that there's content to it and making people excited (as I was when I
...

Here's another random idea:

When I read product or movie reviews, I tend to look for the negatives as much as (if not more than) the positives; I also pay attention to the rating distribution (especially if it's bimodal). If I can't find any negatives, I tend to assume that the product has been astroturfed, and move on.

So, did the SIAI ever receive any negative comments about the rationality minicamp? If so, where can I read them?

I posted earlier that the surveys were confidential, but actually, I just reread them, and there doesn't seem to be anything personal in the "Which parts of the camp didn't work particularly well for you, and what do you think we could do to improve?" column, which was basically the "what negative comments do you have?" column. So I pasted those answers into a new spreadsheet and posted them to the web; you can read participants' collected complaints here.

Paul Crowley:
This is very useful, thanks! These don't look too challenging to address - would be good to know more about what you've changed in response to some of the common themes there.
These are great, thanks!

If you or anyone wants to do a survey, I can give you the email addresses of the minicampers, and you can email them and see what you get and post it online (assuming you write in your email that you are looking for publishable comments, etc.). Let me know if you or anyone else seriously wishes to do this.

Many of the minicampers are on LW, also; the folks with testimonials above have linked LW accounts; but there is of course selection bias there.

Someone else above asked for the negatives as well. Didn't we all submit suggestions for improvement and criticisms last year? Are those publishable? If you don't have permission, you could just email people for permission to publish their criticisms. You can definitely publish any of my comments.
The survey was anonymous, so it is hard to ask permission to share individual comments, since I don't know who wrote them (and folks were assured that their submissions were confidential). You (since you're on the minicamps Google group) could email that Google group and collect criticisms, and publish them. Someone else could ask me for folks' email addresses and then do the same.

Anna says we're still looking at locations, but it's looking like around $115/person/night just for lodging + meals, and the 3-day camps actually include 4 nights, the way everyone counts things and we have to purchase them. Anna also notes that she and Julia and Michael get $3k/month and that this takes way more of their time than just the actual days. So definitely not a Singinst fundraiser. That data is available very easily, so I'm posting it right now.

A specific example of an exercise from last year's minicamp that a lot of people liked was "Value of Information" which included the technical details of how to calculate VoI and exercises in being sensitive to particular forms of scope (how much does it cost, how long does it last, how often does it happen).
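(For readers who haven't seen a Value of Information calculation: it compares your best expected payoff acting on current beliefs with the expected payoff if you could learn the true state before acting. Here's a minimal sketch with made-up payoffs and probabilities, not an exercise from the camp itself.)

```python
# Two actions ("act", "pass") and two states, with a prior over states.
p_state = {"good": 0.75, "bad": 0.25}
payoff = {
    ("act", "good"): 100, ("act", "bad"): -50,
    ("pass", "good"): 0, ("pass", "bad"): 0,
}

def expected_value(action):
    return sum(p * payoff[(action, s)] for s, p in p_state.items())

# Without information: commit to the single action with the best EV.
ev_without = max(expected_value("act"), expected_value("pass"))

# With perfect information: pick the best action separately in each state,
# then average over states.
ev_with = sum(p * max(payoff[("act", s)], payoff[("pass", s)])
              for s, p in p_state.items())

value_of_information = ev_with - ev_without
print(ev_without, ev_with, value_of_information)  # -> 62.5 75.0 12.5
```

Here learning the state before acting would be worth up to 12.5 payoff units, so you should pay for a test only if it costs less than that.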

We're still working out the program which is why it's not posted even tentatively (we were just in the middle of some agonizing about priorities).



Wait, what? Can I just stay in a hostel and eat from the grocery store?

Can I just stay in a hostel and eat from the grocery store?

To make a rational decision, more information is necessary, such as: How much does the hostel cost? Does it have decent beds? How far is it from the workshop venue? What are local food costs? Etc. (Don't forget to include facts like: if you sleep in a different place than the majority, you deprive yourself of the opportunity for some morning/evening informal chat.)

Generally: how much would I really save by "hostel & grocery store" and how much would it reduce my gains from the workshop?

Speaking for myself, I would like to cut some costs (together with the cost of flying it makes my salary for 2 months), but there is always a risk. Once I slept in a hostel with beds so bad that I really did not sleep much that night. Now if I imagine 9 such nights plus jet lag, and the resulting effect on my concentration and memory, I would get much less benefit per dollar spent.

Couldn't the SIAI at least provide this option? Then, people could decide for themselves whether they want cold cereal or gourmet meals.
I went to an Esperanto thing that was set up like this once.
I have friends and relatives who live in the area. How central to the camp is the communal living aspect? What would you charge to commute to it, if that is possible?

I guess we'd charge about 1/2 of the total (noting that you'd still be having meals with the rest of us)... but I suspect commuting is harder than you think, given how intensively scheduled it is. Err on the side of applying, and we can discuss.

Also, if anyone's unable to afford camp for whatever reason, apply anyhow and check the "needs scholarship" box and we can see what can be worked out.

The Bay Area is rather sprawling. It can take 1.5 hours to go from Berkeley to San Jose during rush hour. If they don't live near where the camp is held, I expect you would regret the commute and find the experience more taxing and less relaxing than the participants staying on site.

Agree but... if I knew where in the bay area it's being held I could tell whether it's just around the corner, or a 1.5 hour commute.

It's still being nailed down and it looks like it will be different locations for different camps, but for now it looks like the one week long one is at a retreat center in the deep East Bay between Walnut Creek and Danville, with one weekend camp in Berkeley and one in the South Bay. Still subject to change until we officially announce.

This post gives off bad vibes to (my mental model of) outsiders.

I had the same impression; the post makes the minicamp sound like your average, run-of-the-mill, self-help seminar scam -- complete with testimonials and everything.

That's not necessarily a bad thing. Lots of people pay for those. And such people are in need of rationality help!

This plan is so crazy, it just might work! :-)

Good to know.

I mean, it kind of is a standard workshop (like ones on public speaking, or Italian cooking, or, yes, self-help)... except that the content is about Bayesian math, and microeconomics, and the cognitive science of standard human error patterns and so on. And the people you get to network with are other LW-ers who are interested in actually applying this content to practical problems, and coming to embed Bayesian patterns into the details of one's day-to-day thoughts instead of just into the way one answers pen-and-paper questions.

But, yes, similar workshop format, different content. Maybe we should make the ad different too in some way. I wonder if my inclusion of the testimonials and survey data, in particular, may have been misleading -- I was trying to say "look, past participants (who were smart LW-ers a lot like you) liked it, so maybe you will too", but it may have come across as a stronger claim. I'd say come check it out if you're interested, or else wait for a subsequent year if you want to have seen "proof that this will definitively change your life" first or something (which we may or may not ever manage, though we're certainly working on it), and, meanwhile, whether you come or not, do keep contributing on LW, trying exercises yourself in your own life, and generally helping to figure out what rationality can be.

The difficulty stems from how much good stuff is mixed in with all the scams, making outside evaluation much harder. Most self help programs include a lot of the same basic items by necessity (fixing the big problems first). We also seem to understand implicitly that people's self judgement of the effects of these types of things is terrible, especially when those judgements are very close time-wise to the event itself (rationalization of expense and effort, unwillingness to signal disloyalty to new ingroup, even internally).
Thanks; this is helpful.

I attended the 2011 minicamp.

It's been almost a year since I attended. The minicamp has greatly improved me along several dimensions.

  1. I now dress better and have used techniques provided at minicamp to become more relaxed in social situations. I'm more aware of how I'm expressing my body language. It's not perfect control and I've not magically become an extrovert, but I'm better able to interact in random social situations successfully. Concretely: I'm able to sit and stand around people I don't know and feel and present myself as relaxed. I dress better and people have noticed and I've received multiple comments to that effect. I've chosen particular ways to present myself and now I get comments like 'you must play the guitar' (this has happened five times since minicamp haha). This is good since it loads the initial assumptions I want the person to load.

  2. I've intentionally hacked my affectation towards various things to better reach my goals. For years I never wanted to have children. My wife said (earlier this year, after minicamp) that she wanted to have kids. I was surprised and realized that given various beliefs (love for wife, more kids good for society, etc) I needed to

...
What about the cost? I would not call spending $1500 in a week insignificant. And as a baseline, I believe that being surrounded for a week by a group of people who believe strongly in some collection of ideas is a risk at least an order of magnitude higher than an economics lecture. I certainly expect that it would have a much stronger effect on me (as it seems it has had on you) than the lecture would, and I would most certainly not take a risk of this magnitude if I have any non-negligible doubts.

To address your second point first, the -attendees- were not a group who strongly shared common beliefs. Some attended due to lots of prior exposure to LW, a very small number were strong x-risk types, several were there only because of recent exposure to things like Harry Potter and were curious, many were strongly skeptical of x-risks. There were no discussions that struck me as cheering for the team -- and I was actively looking for them!

Some counter evidence, though: there was definitely a higher occurrence of cryonicists and people interested in cryonics than you'd find in any random sample of 30 people. I.e.: some amount >2 vs some amount close to 0. So we weren't a wildly heterogeneous group.

As for the instructors - Anna and Luke were both very open about the fact that the rationality-education process is in its infancy and that among the various SIAI members there is discussion about how to proceed. I could be wrong, but I interpreted Eliezer as being somewhat skeptical of the minicamp process. When he visited, he said he had almost no involvement related to the minicamp. I believe he said he was mainly a sounding board for some of the ideas. I'm interpreting his involvement in t...

I just wanted to say this (esp. the second part) is actually one of the most cogent posts about anything that I've read in quite some time, and as such, a self-referential example of the value of the camp. It should probably be more visible, and I recommend making it a discussion post about deciding whether/when to attend.
Paul Crowley:
Nitpick - cRYonics. Thanks!

Doh, I have no idea why my hands type c-y-r instead of c-r-y, thanks.

Paul Crowley:
You're not alone - it's a common mistyping!
David Althaus:
Rather off-topic, but I'm very interested in rational meditation-advice: Did they suggest specific techniques of meditation like e.g. vipassana or did they recommend some particular books on meditation?
Jasen Murray suggested specific techniques and specific resources, which I unfortunately cannot remember (I was not that interested in that part of RBC).
Thanks for that. It's fascinating to get a glimpse of what rationality looks like in the real world rather than just in online interchanges. As an aside, I'm a big fan of your work. It reassures me to know rationalists are on the team for Dota 2.

Applied. Looks good. Might decide it's not worth it, but you make a good case.

One thing. 0 to 10 ratings are utterly useless. The median is almost always around 7, for almost anything. Please give us calibrated statistics, not subjective pseudo-quantities where most of the contribution is from noise and offset.

Reminds me of business planning types ranking alternatives 1..n and then treating the indexes as utilities. ick. TYPE ERROR.

We've actually noticed in our weekly sessions that our nice official-looking yes-we're-gathering-data rate-from-1-to-5 feedback forms don't seem to correlate with how much people seem to visibly enjoy the session - mostly the ratings seem pretty constant. (We're still collecting useful data off the verbal comments.) If anyone knows a standard fix for this then PLEASE LET US KNOW.

I'd suggest measuring the Net Promoter Score (NPS) (link). It's used in business as a better measure of customer satisfaction than more traditional measures. See here for evidence, sorry for the not-free link.

  1. "On a scale of 0-10, how likely would you be to recommend the minicamp to a friend or colleague?"
  2.  "What is the most important reason for your recommendation?"

To interpret, split the responses into 3 groups:

  • 9-10: Promoter - people who will be active advocates.
  • 7-8: Passive - people who are generally positive, but aren't going to do anything about it.
  • 0-6: Detractor - people who are lukewarm (which will turn others off) or will actively advocate against you.

NPS = [% who are Promoters] - [% who are Detractors]. Good vs. bad NPS varies by context, but +20-30% is generally very good. The followup question is a good way to identify key strengths and high priority areas to improve.
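(If it helps anyone, the computation from raw 0-10 responses is tiny; this is just a sketch, with made-up example scores.)

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Made-up example: 5 promoters, 3 passives, 2 detractors -> NPS of +30.
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 6, 4]))  # -> 30.0
```

Note that the passives (7-8) drop out of the score entirely; they only matter through the denominator.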

NPS is a really valuable concept. Means and medians are pretty worthless compared to identifying the percentage in each class, and it's sobering to realize that a 6 is a detractor score.

(Personal anecdote: I went to a movie theater, watched a movie, and near the end, during an intense confrontation between the hero and villain, the film broke. I was patient, but when they sent me an email later asking me the NPS question, I gave it a 6. I mean, it wasn't that bad. Then two free movie tickets came in the mail, with a plea to try them out again.

I hadn't realized it, but I had already put that theater in my "never go again" file, since why give them another chance? I then read The Ultimate Question for unrelated reasons, and had that experience in my mind the whole time.)

Good anecdote. It made me realize that I had just 20 minutes ago made a damning non-recommendation to a friend based off of a single bad experience after a handful of good ones.
Here [http://dl.dropbox.com/u/29304719/Papers/The%20one%20Number%20You%20Need%20to%20Grow.pdf] is the evidence paper.
Right, I'd forgotten about that. I concur that it is used, and I work in market research sort of.
Another thing you could do is measure in a more granular way - ask for NPS about particular sessions. You could do this after each session or at the end of each day. This would help you narrow down what sessions are and are not working, and why. You do have to be careful not to overburden people by asking them for too much detailed feedback too frequently, otherwise they'll get survey fatigue and the quality of responses will markedly decline. Hence, I would resist the temptation to ask more than 1-2 questions about any particular session. If there are any that are markedly well/poorly received, you can follow up on those later.
One idea (which you might be doing already) is making the people collecting the data DIFFERENT from the people organizing/running the sessions. For example, if Bob organizes and runs a session, and everyone likes Bob, but thinks that the session was so-so, they may be less willing to write negative things down if they know Bob is the one collecting and analyzing data. If Bob runs the sessions, then SALLY should come in at the end and say something like "Well we want to make these better, so I'M gathering information of ways to improve, etc" Even if Bob eventually gets the negative information, I think people might be more likely to provide it to Sally (one step removed) than to Bob directly. (Even better: Nameless Guy organizes a session. Bob teaches session (making sure everyone knows this is NAMELESS' session, and Bob is just the mouthpiece.)
Also, I would say that verbal comments are generally MUCH more useful than Likert scale information anyways. It's better to be getting good comments, and bad Likert scores than vice versa.
Back when I did training for a living, my experience was that those forms were primarily useful for keeping my boss happy. The one question that was sometimes useful was asking people what they enjoyed most and least about the class, and what they would change about it. Even more useful was asking that question of people to their faces. Most useful was testing to determine what they had actually learned, if anything.
I've seen "rate from 1 to 5, with 3 excluded", which should be equivalent to "rate from 1 to 4" but feels substantially different. But there are probably better ones.
In this category of tricks, somebody (I forget who) used a rating scale where you assigned a score of 1, 3, or 9. Which should be equivalent to "rate from 1 to 3", but...
Eliezer Yudkowsky:
We weren't getting a lot of threes, but maybe that works anyway.
Then maybe "1 to 4, excluding 3" or "1 to 5, excluding 4", to rule out the lazy answer "everything's basically fine". That might force people to find an explanation whenever they feel the thing is good but not perfect. If you start getting 5s too frequently, then it's probably not a good trick.
Why not go all the way and just use a plus-minus-zero system like LW ratings (and much of the rest of the internet)? Youtube had an interesting chart [http://youtube-global.blogspot.com/2009/09/five-stars-dominate-ratings.html] before they switched from 5 star rating systems to the like-dislike system showing how useless the star ratings were. But that's non-mandatory so its very different.
You could have a rubric without any numbers, just 10 sentences or so where participants could circle those that apply. E.g. "I learned techniques in this session that I will apply at least once a week in my everyday life", "Some aspects of this session were kind of boring", "This session was better presented than a typical college lecture", etc.
You could try a variant of this [http://lesswrong.com/lw/71b/individual_deniability_statistical_honesty/] (give someone a d10 and a d6, hide roll from surveyor, if the d6 comes up 1 they give you a 1-10 rating based on the d10 and are otherwise honest) but this may not be useful in cases where people aren't deliberately lying to you, and is probably only worth it if you have enough sample size to wipe out random anomalies and can afford to throw out a sixth of your data. Or weight the die.
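(The unbiasing arithmetic for that dice protocol, if anyone wants it: the observed mean mixes 1/6 uniform noise, mean 5.5 on a d10, with 5/6 honest answers, so you can solve back for the honest mean. A sketch with invented function names:)

```python
import random

def reported_rating(honest, p_noise=1/6):
    """With probability 1/6 (the d6 shows 1) report a uniform d10 roll;
    otherwise report the honest 1-10 rating."""
    if random.random() < p_noise:
        return random.randint(1, 10)
    return honest

def estimate_honest_mean(observed_mean, p_noise=1/6):
    """Invert E[observed] = p_noise * 5.5 + (1 - p_noise) * E[honest]."""
    return (observed_mean - p_noise * 5.5) / (1 - p_noise)

# If everyone's honest rating were 8, the observed mean tends toward
# (1/6)*5.5 + (5/6)*8, and the estimator recovers 8:
observed = (1/6) * 5.5 + (5/6) * 8
print(estimate_honest_mean(observed))
```

As the comment says, this only buys you anything if respondents would otherwise shade their answers, and the extra noise means you need a decent sample size.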
I'm not a pro, but you probably want to turn the data into a z-score (e.g., this class is ranked 3 standard deviations above the ratings for other self-help classes). If you can't turn it into a z-score, the data is probably meaningless. Also, maybe use some other rating system. I imagine that people have a mindless cached procedure for doing these ratings that you might want to interrupt, to force them to actually evaluate (rank is a random variable with mean = 7 and stddev = 1).
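For what it's worth, the z-score itself is a one-liner; the hard part is having a reference distribution at all. A sketch with made-up reference ratings (nothing here is real data):

```python
def z_score(x, reference):
    """How many reference-population standard deviations x sits
    above the reference mean."""
    n = len(reference)
    mean = sum(reference) / n
    sd = (sum((r - mean) ** 2 for r in reference) / n) ** 0.5
    return (x - mean) / sd

# Hypothetical 0-10 ratings of other self-help classes (mean 7, sd ~0.63):
other_classes = [6, 7, 7, 8, 7, 6, 8, 7, 7, 7]
print(round(z_score(8.9, other_classes), 1))  # 3.0: ~3 sd above the pack
```

Without a reference sample like `other_classes`, an "8.9 out of 10" floats free of any calibration, which is the point being made above.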

The median is almost always around 7, for almost anything.

An anecdote on a related note...

There was once a long-term online survey about patterns of usage of a particular sort of product (specifics intentionally obscured to protect the guilty). One screen asks something like "Which of these have you used in the past year", and it shows 4 products of different brands in random order and "None of the above", and respondents can select multiple brands. Different respondents answer every week, but the results are pretty consistent from one week to the next. Most respondents select one brand.

One week, they took away one of the brands. If it were tracking real usage, you'd expect all of the responses for that brand to have shifted over to "None of the above". Instead, all of a sudden people had used the other 3 brands about 4/3 as often as the previous week. It was exactly the result one would expect if practically everyone were answering randomly. That pattern kept up for a few weeks. Then the question was changed back, and the usage of all 4 brands went back to 'normal'.
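A toy simulation (hypothetical brand names, assuming every respondent just picks one listed brand at random) reproduces the 4/3 pattern:

```python
import random

def brand_shares(brands, n=100_000):
    """Share of each brand if every respondent picks one at random."""
    counts = {b: 0 for b in brands}
    for _ in range(n):
        counts[random.choice(brands)] += 1
    return {b: c / n for b, c in counts.items()}

random.seed(1)
before = brand_shares(["W", "X", "Y", "Z"])  # each share ~ 1/4
after = brand_shares(["W", "X", "Y"])        # each share ~ 1/3
print(round(after["W"] / before["W"], 2))    # ~ 4/3
```

Removing one of four equally-likely random choices bumps each survivor from 1/4 to 1/3 of responses, which is exactly the 4/3 jump observed.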

Some of the effect could be accounted for by a substitution principle; instead of asking oneself for each option whether one's used it in the last year, it's easier to ask which of them one recalls using most recently (or just which of them seems most salient to memory), check that, and move on. If people do actually switch between products often enough, this would create that dynamic.

I tried to take that into account when reading.

I know, I did too, but that is really the sort of calculation that should be done by a large-scale study that documents a control distribution for 0-10 ratings that such ratings can be calibrated against.

treating the indexes as utilities

Please explain.

In my engineering school, we had some project-planning classes where we would attempt to calculate which design was best, based on the strength of our preference for performance on a variety of criteria (aesthetics, weight, strength, cost, etc.). Looking back, I recognize what we were doing as coming up with a utility function to compute the utilities of the different designs.

Unfortunately, none of us (including the people who had designed the procedure) knew anything about utility functions or decision theory, so they would do things like rank the different criteria, and the strength of each design on each criterion, and then use those ranks directly as utility weights and partial utilities.

(So, for example, strength might be most important (10), then cost (9), then weight (8), and so on; and then maybe design A would be best (10) in weight, worst (1) in strength, etc.)

I didn't know any de... (read more)
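For concreteness, the procedure being described might be sketched like this (all numbers hypothetical); the surrounding discussion is about why feeding raw rank indices into a weighted sum produces numbers that look meaningful but aren't calibrated to anything:

```python
# Criterion importance expressed as rank indices (10 = most important).
criterion_weight = {"strength": 10, "cost": 9, "weight": 8}

# Each design's rank in each criterion (10 = best, 1 = worst).
designs = {
    "A": {"strength": 1, "cost": 7, "weight": 10},
    "B": {"strength": 8, "cost": 5, "weight": 4},
}

def naive_utility(ranks):
    """Use rank indices directly as weights and partial utilities."""
    return sum(criterion_weight[c] * r for c, r in ranks.items())

for name, ranks in designs.items():
    print(name, naive_utility(ranks))  # A 153, B 157
```

The output ranks B above A, but since neither the weights nor the per-criterion ranks encode how *much* better one design is than another, the totals are pseudo-quantities in the sense discussed below.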

Despite anti-arbitrariness intuitions, there is empirical evidence that this is wrong. See The Robust Beauty of Improper Linear Models [http://citeseerx.ist.psu.edu/viewdoc/download?doi=]. (This is about something somewhat less arbitrary than using ranks as scores, but it seems like evidence in favor of that approach as well.)
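Dawes' point is easy to reproduce in simulation: as long as every attribute points the right way, an "improper" model with all weights set to 1 correlates strongly with the score from the unknown "true" weights. A toy version (random weights and attributes, not data from the paper):

```python
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(2)
true_w = [random.uniform(0.5, 2.0) for _ in range(5)]  # unknown importances
options = [[random.gauss(0, 1) for _ in range(5)] for _ in range(200)]

true_scores = [sum(w * x for w, x in zip(true_w, o)) for o in options]
unit_scores = [sum(o) for o in options]  # improper model: all weights = 1

print(round(pearson(true_scores, unit_scores), 2))  # high, typically above 0.8
```

The correlation stays high across reruns because getting the *signs* of the attributes right does most of the work; the exact weights matter far less.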
Dawes is not a reliable researcher; I have very little confidence in his studies. Check it [http://lesswrong.com/lw/3gv/statistical_prediction_rules_outperform_expert/3cu1]. (ETA: I also have other reasons to mistrust Dawes, but shouldn't go into those here. In general you just shouldn't trust heuristics and biases results any more than you should trust parapsychology results. (Actually, parapsychology results tend to be significantly better supported.) Almost all psychology is diseased science; the hypotheses are often interesting, the statistical evidence given for them is often anti-informative.)
Multicriteria objective functions are really hard to get right. Weighting features from 10 to 1 is actually a decent first approach; it should separate good solutions from bad solutions. But if you're down to narrow differences of the weighted objective function, it's typically time to hand off to a human decision-maker, or to spend a lot of time considering tradeoffs to elicit the weights. (Thankfully, a first pass should show you which features you need to value carefully and which features you can ignore.)
If you have relatively few choices and properties are correlated (as of course they are [https://www.gwern.net/Everything]), I'm not sure how much it matters. I did a simulation of this for embryo selection with n=10 [https://www.gwern.net/Embryo-selection#robustness-of-utility-weights], and partially randomized the utility weights made little difference.
I'm not sure I understand what you mean by pseudo-quantities. So the problem is that these attributes were given rankings from 10 down to 1, rather than their weights that corresponded to their actual importance?
Right; that can cause this problem [http://xkcd.com/937/]. (Not quite the same dynamic, but you get the idea.)
"Pseudo-quantity" is a term I just made up for things that look like quantities (they may even have units) but are fake in some way. Unlike real quantities, for which correct math is always valid, you cannot use math on pseudo-quantities without calibration (which is not always possible). Example: uncalibrated probability ratings ("I'm 95% sure") are not probabilities, and you cannot use them in probability calculations, even though they seem to be numbers with the right units. You can turn them into real probabilities by doing calibration (assuming they correlate well enough). More or less. Other rating systems could be calibrated to get actual utility coefficients, but rank indices lose information and cannot even be calibrated.
Probabilities can be empirically wrong, sure, but I find it weird to say that they're "not probabilities" until they're calibrated. If you imagine 20 scenarios in this class, and your brain says "I expect to be wrong in one of those", that just is a probability straight up. (This may come down to frequency vs belief interpretations of probability, but I think saying that beliefs aren't probabilistic at all needs defending separately.)
So the pseudo-quantities in your example are strength ratings on a 1-10 scale? I actually think that's acceptable, assuming the ratings on the scale are equally spaced, and the weights correspond to the spacing. For instance, space strengths out from 1 to 10 evenly, and space weights out from 1 to 10 evenly (where 10 is the best, i.e., lightest), where each interval corresponds to roughly the same level of improvement in the prototype. Then assign weights to go along with how important an improvement is along one axis compared to the other. For instance, if improving strength one point on the scale is twice as valuable as improving weight, we can give strength a weight of 2, and computations like:

  • Option A, strength 3, weight 6, total score 2(3) + 6 = 12
  • Option B, strength 5, weight 3, total score 2(5) + 3 = 13

make sense.
You still have one degree of freedom. What if you rated from 10 to 20? Or -5 to 5? As a limiting case, consider ratings of 100-110: the thing with the highest preference (strength) would totally swamp the calculation, becoming the only concern. Once you have scale and offset correctly calibrated, you still need to worry about nonlinearity. In this case (using rank indices), the problem is even worse. Like I said, rank indices lose information. What if the designs are all about the same weight, except one that is drastically lighter? The rankings are identical no matter how much difference there is. That's not right. Using something approximating a real-valued rating (rate from 1 to 10) instead of rank indices reduces the problem to mere nonlinearity. This is not as hard as FAI, but it's harder than pulling random numbers out of your butt, multiplying them, and calling it a decision procedure.
I agree that ranking the weights from 1 to N is idiotic, because it doesn't respect the relative importance of each characteristic. However, changing the ratings to 101-110 on every scale will just add a constant to each option's value:

  • Option A, strength 103, mass 106, total score 2(103) + 106 = 312
  • Option B, strength 105, mass 103, total score 2(105) + 103 = 313

(I changed 'weight' to 'mass' to avoid confusion with the other meaning of 'weight'.) I assume you mean using values for the weights that correspond to importance, which isn't necessarily 1-10. For instance, if strength is 100 times more important than mass, we'd need weights of 100 and 1. You're right that this assumes the final quality is a linear function of the component attributes: we could have a situation where strength becomes less important once mass passes a certain threshold, for instance. But using a linear approximation is often a good first step at the very least.
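The offset cancellation is easy to check directly (same numbers as the example above):

```python
def total(weights, ratings):
    """Weighted sum of an option's ratings."""
    return sum(w * r for w, r in zip(weights, ratings))

weights = [2, 1]  # strength counts double; mass counts once

option_a, option_b = [3, 6], [5, 3]  # (strength, mass) ratings
print(total(weights, option_a), total(weights, option_b))  # 12 13

# Add 100 to every rating: each total rises by 2*100 + 1*100 = 300,
# so the gap between the options, and hence the ranking, is unchanged.
a_shifted = [r + 100 for r in option_a]
b_shifted = [r + 100 for r in option_b]
print(total(weights, a_shifted), total(weights, b_shifted))  # 312 313
```

Note that the shared offset only cancels in the *comparison*; rescaling the weights themselves (the remaining degree of freedom) can still change which option wins.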
Remember that whenever you want a * for multiplying numbers together, you need to write \*.
Oops, I might have to look at that more closely. I think you are right: the shared offset cancels out. Using 100 and 1 for something that is 100 times more important is correct (assuming you are able to estimate the weights; 100x is awfully suspicious). The idiot procedures were using rank indices, not real-valued weights. Agreed, linearity is a valid assumption. The error is using uncalibrated ratings from 0-10, or worse, rank indices. A linear-valued rating from 0-10 has the potential to carry the information properly, but that does not mean people can produce calibrated estimates there.
This is a very good general point, one that I natively seem to grasp, but even so I'd appreciate it if you wrote a top-level post about it.

Since a couple of people want before/after information, here's some: Before minicamp: I was able to work around 5 hours per day on research.

After: 10 hours/day, sustainable for months.

After: Less afraid to try new professional directions than ever before, by a margin much wider than this trait has ever changed for me.

After: Secured $24,000 of grant money from DARPA to work on applications of algebraic geometry to machine learning, my first time trying out applied math. Loving it.

After: Difference in productivity was so noticeable that I'm volunteering my time as an instructor at the next few camps (I taught some at the last camp, too) because I expect it to have further positive, lasting effects on my professional / personal life.

After: Got a new dissertation advisor; many people around me seemed to think that was impossible or risky, but it has gone very well and been very refreshing, given my interests. (Before the camp I was more afraid to make what felt like a "sudden" change, which was actually something I had been thinking about for a year and was not sudden at all.)

Note: My experience at the camp may not have been typical, because I did teach a few sessions at t... (read more)

This is interesting to me, since we seem to be in about the same position academically (though you're a bit ahead of me). What was responsible for such a huge increase in productivity, or can that not be summarized? I need to research more myself, but I do not think I will be able to afford or attend the minicamp, so anything you'd be able to share would be appreciated.
If you want to attend but can't afford the fees, please do apply anyhow, and check the "need scholarship" box. Even if it turns out that we can't admit you this year, we'll at least know there are people out there who want to attend but can't afford it, and we can possibly take this information to potential donors as the Center for Modern Rationality gets on its own non-profit feet.
The particular changes I've made (like changing my advisor) have been very personalized for me, by me... but they have been fueled by a few root adjustments:

1) More curiosity about my life choices. Caused in part by being surrounded by a group of smart, similar people doing very different things with their lives.

2) More willingness and desire to make up my mind more quickly and effectively about Big Life Decisions. Caused in part by Anna Salamon generating, on the spot, a steady stream of helpful questions for me that I could ask and answer to myself about career choices. I never came to any conclusions that she suggested (which I consider a good sign; I wouldn't expect someone else to know what I should do with my life from a few conversations), but she gave me a sense that more is possible [http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/] in terms of how quickly a person can generate important, answerable questions.

3) More curiosity / motivation to experiment with productivity hacks, until I found some that work for me (Getting Things Done system + Pomodoro technique). Caused by being surrounded by productivity-obsessed people for a week, with lots of cool ideas, which helped me internalize a belief in the existence of popular productivity hacks that would work for me.

4) More desire to Be Successful (which I'd had very little of throughout most of my life), caused by feeling like I was part of a community that I cared about and who might benefit in some small way from my success.

Here are some points, as I think of them.

The Good

  • It was fun. On par with the best one-week vacations I've had, but less fun than Hawaii.
  • I look significantly better, directly caused by the fashion workshops. My sister was briefly jealous because while I usually won in the raw-g department, she had always handily won in the looking-good department, and this is no longer true.
  • I took to heart Try Things, (hypothesis) directly caused by in-person admonitions from high-status instructors. Previously I had focused far too much on exploitation over exploration. Concrete example: I went to a code retreat with Arlo Belshee last weekend and my roommate did not, while any normal reference class would have said he was more likely to go, and it was super-useful.
  • I actually applied (and am applying) Gendlin and Tarski to the scary object of my inner mental life. I recommend Internal Family Systems as very useful though I have no direct comparisons I can make. If it turns out that it's actively harmful or even significantly worse than average mainstream psychotherapy I will update strongly towards standard-retreat-rather-than-awesome.
  • Directly after the minicamp, I made several time-management c
... (read more)

I feel like most of the value I got out of the minicamp in terms of techniques came early. This is probably due to a combination of effects:

1) I reached a limit on my ability to internalize what I was learning without some time spent putting things to use.

2) I was not well mentally organized -- my rationality concepts were all individual floating bits, not well sewn together -- so I reached a point where new concepts didn't fit into my map very easily.

I agree things got more disorganized; in fact, I remember on a couple of occasions seeing the "this isn't the outcome I expected" look on Anna's face, and the attempt to update and try a different approach, or to go with the flow and see where things were leading. I marked this responsiveness as a good thing.

As for your Ugly, it's important to note that that was a casual discussion among attendees. I suppose this highlights the risk of a general increase in credibility-giving by close temporal association with other new ideas you're giving credibility to? Example: I talked to a lot of curious people that week about how Valve's internal structure works, but no one should necessarily run off and establish a Valve-like company without understanding Valve's initial conditions, goals, employee make-up, other institutions, and comparing them with their own initial conditions, goals, employees, institutions, etc.

Yes, this. Usually this risk is low, but here it was actually quite high. This particular instance was an Ugly example, because the category - ideas with close temporal association - was false. But there were many scary examples based on good categories. The most outlandish was meditation. Remember that other people's brains are part of evidence [http://lesswrong.com/lw/jl/what_is_evidence/], now witness quite a few people who have just spent the last few days on activities that convinced you they are pretty decent (compared to baseline, damn good) at doing their research, discarding bullshit, not strongly espousing ideas they don't strongly hold, examining the ideas they do hold, etc. etc... witness them say with a straight face that meditation, which you (I) assumed was a crock of mystic religion that just took a different turn than the Western religions you're familiar with... witness them say that meditation is super-useful. Then watch your brain say "Bull! Wait, they're good at things. Maybe not bull? Hey, argument from authority, bull after all! Wait, argument from authority is evidence... :S I... have to take this seriously..." IFS, NVC, nootropics? Guess I have to take them seriously too. (I exaggerate slightly, but my feelings were stronger than I think they should have been, so that story is in line with how I felt, if not precisely what my beliefs were)
I had a dim view of meditation because my only exposure to meditation prior was in mystic contexts. Here I saw people talk about it separate from that context. My assumption was that if you approached it using Bayes and other tools, you could start to figure out whether it was bullshit or not. It doesn't seem unreasonable to me that folks interested could explore it and see what turns up.

Would I choose to do so? No. I have plenty of other low-hanging fruit, and the amount of non-mystic guidance around meditation seems really minimal, so I'd be paying opportunity cost to cover unknown territory with unknown payoffs. I don't feel oddly attached to any beliefs here. Maybe I'll go search for some research. Right now I feel that if I found some good papers providing evidence for or against meditation, I would shift appropriately.

I don't see myself updating my beliefs about meditation (which are weak) unduly because of an argument from authority. They changed because the arguments were reasoned from principles, or with a process I accept as sound. Reasoning like: "Fairly credible sources like Feynman claim they can learn to shift the perception of the center of self-awareness to the left. (Feynman was also a bullshitter, but let's take this as an example...) What do we think he meant? Is what we think he meant possible? What is possible? Is that reproducible? Would it be useful to be able to do that? Should we spend time trying to figure out if we can do that?" This would be what I consider a discussion in the space of meditation-like stuff that is non-mystical and enjoyable. It isn't going to turn me into a mystic any more than Curzi's anecdotes about his buddy's nootropics overdoses will turn me into a juicer.

I didn't take away the message "meditation is super-useful." I took away the message "meditation is something some people are messing with to see what works." I'm less worried about that than if someone said "eating McDonalds every day for every meal is something some
Are you familiar with the study (studies) about meditation and brain health? I've seen one or two crop up, but I've not read the actual studies themselves - just summaries. IIRC, it appears to reduce the effects of aging. The other reason I consider meditation possibly worth pursuing is that it appears to be an effective "mindhack" in at least one respect: it can be used to reduce or eliminate unpleasant physical and mental sensations. For example, I believe it's been shown to be effective in reducing stress and anxiety, and - more impressively - chronic pain, or even sensations like "chilly". How useful this is is more debatable: while I'm waiting in line, shivering, I probably won't be able to meditate effectively, or have the time to.
Hm, "super-useful" was a bad term. The actual impressions I got were "obviously coherent and not BS, and with high enough mean + variance that the value of investigation is very high". Not necessarily the value of any one specific person investigating, but the value of it being investigated. So I went a bit further than you: I came to believe the top of the curve was a) grossly useful and b) of non-negligible likelihood.
It strikes me that you may want to take a step further and consider mysticism itself as a functionally useful brain-hack much like meditation. It's very possible that mystical texts [http://esr.ibiblio.org/?p=2596] could be used to bring out a mental stance conducive to rationality. The Litanies of Tarski and Gendlin are fairly obvious examples, and I'd even argue that HP:MoR seems to be fulfilling that role as a kind of shared mythology tapping into well-understood tropes, at least for the subset of rationalists who like Harry Potter fanfiction.
Metaphysical terminology is a huge bag of stupid and abstraction, but what I mean by mysticism is something like 'characteristic of a metaphysical belief system.' The mysticism tag tells me that a concept is positing extra facts about how the world works in a way that isn't consistent with my more fundamental, empirical beliefs. So in my mind I have 'WARNING!' tags (intentionally) attached to mysticism. When I see something that has the mysticism tag attached to it, I approach cautiously and with a big stick; or, to save time or avoid the risk of being eaten, I often don't approach at all. If I find that I have a metaphysical belief, or if I detect that a fact/idea may be metaphysical, then I attach the mystical tag to it and go find my stick. If something in my mind has the mysticism tag attached to it inappropriately, then I want to reclassify that thing -- slightly reduce the size of the tag, or create a branch through more specific concept definition and separation. So I don't really see value in attaching the mysticism tag to things that don't directly warrant it. What you call a mystical litany I'd call a mnemonic technique for reminding yourself of a useful process or dangerous bias. Religions have litanies, but litanies are not inherently religious concepts. So no, I won't consider mysticism itself as a useful brain hack. Mysticism is allocated the purpose of 'warning sign'. It's not the only warning sign, but it's a useful one.
I can see why you would consider what you call "mysticism", or metaphysical belief systems, a warning sign. However, the use of mystical text forms, which is what I was referring to in my comment, is quite unrelated to this kind of metaphysical and cosmological rigidity. Compare, say, Christian fundamentalists versus Quakers or Unitarian Universalists, or Islamic Wahabis and Qutbis versus Sufis: the most doctrinal and memetically dangerous groups make only sparing use of mystical practices, or forbid them outright. Atheists and agnostics are obviously a more challenging case, but it appears that at least some neopagans comfortably identify as such, using their supposed metaphysical beliefs as functionally useful aliefs, to be invoked through a ritual whenever the psychical effects of such rituals are desired. There is in fact an account of just such a ritual practice [http://lesswrong.com/lw/8x5/ritual_report_nyc_less_wrong_solstice_celebration/] on LW itself involving the Winter Solstice, which is often celebrated as a festival by neopagan groups. It's hard to describe that account as anything other than a mystical ritual aiming to influence the participants in very specific ways and induce a desirable stance of mind among them. In fact, that particular practice may be regarded as extremely foolish and memetically dangerous (because it involves a fairly blatant kind of happy-death-spiral) in a way that other mystical practices are not. I now see that post as a cautionary tale about the dangers of self-mindhacking, but that does not justify its wholesale rejection, particularly in an instructional context where long-term change is in fact desired.
This does sound plausible: 1. that the people who decompartmentalise crazy and do crazy stuff - fundies, cultists, fundie cultists - have a strong aversion to ambiguity, subtlety, irony; 2. that groups with weird ideas who are not averse to ambiguity, subtlety or irony are less likely to do crazy stuff. The first I think is obvious, the second as a positive result would be somewhat surprising and worthy of investigation. I also suspect that a lot of romantic objection to rationality and science is that they see science as an example of group 1 holding that anything that can't be measured doesn't exist and throwing away important detail. I wonder how we would meaningfully gather numbers on such things.
I think mysticism is inherently irrational, and thus seriously participating in "mysticism itself" is counter-productive if you wish you become more rational. But I say "seriously participating", because as you say, perhaps mystical aliefs can be used to produce useful mental states - as long as it is recognized that that's what you're doing, and you don't ascribe any special significance to the mystical aspects (i.e., you recognize that the same effect can probably be achieved without any such relics; it's just a matter of preference). Like those neopagans you mention, I am both an atheist and a Wodanist. I use Wodan as a symbol of various ideals, and the devotions, rituals, symbols, etc. involved to remind myself of these. My actual beliefs are entirely atheistic and materialistic, but I enjoy the trappings and history behind Germanic paganism of this sort; thus, the main reason behind my Wodanism is simply enjoyment. Useful? Yes, as a reminder or way to encourage yourself (e.g., "though I am tempted to waste my money, I will be self-disciplined like my patron god") - but that's entirely apart from any mystical aspects.
I agree with this as far as rational belief is concerned, and on a denotational level. But I'm not sure whether one can achieve the very tangible benefits of enacting rituals involving such "gods" as Pan, Wodan or Hermes/Thoth without alieving that the gods are really there at some level--if only as archetypes of one's unconscious psychology--so that one can relate to them on their own terms. As long as the "gods" are not literally considered as supernatural entities (whatever that might mean), believing in them need not be any more irrational than believing in any other features of our psychology. But successfully channeling a god might require us to connote that belief in ways that will seem quite foreign to a rationalistic, logically-oriented mental stance.
What are your criteria for this?
Well, that gets rather complicated. Think of it as the extent to which the religion appeals and encourages irrationality, and this causes its followers to be instrumentally irrational in verifiable ways. I'm not talking about self-identified moral or ethical systems here, but rather obviously crazy beliefs like "Our god will reward you with a heavenly garden and 42 virgins if you become a martyr" or "You need to purify yourself from the tiny spiritual beings which were brought to Earth by an all-powerful alien millions of years ago". Stuff like that will appeal to human utility/reward functions in fairly obvious ways, assuming that it is truly, fervently believed.
As an aside, what are IFS and NVC? Edit: Ah, found links. IFS: http://en.wikipedia.org/wiki/Internal_Family_Systems_Model NVC: http://en.wikipedia.org/wiki/Nonviolent_Communication
Relatedly, I wonder what minimum consecutive length of time you need to get a lot out of this. How would the returns from three spaced-apart day-long workshops compare to those from a single three-day workshop? (This would of course work better with a group of people who don't need to travel a significant distance.) Is the New York meetup group [http://lesswrong.com/lw/4ul/less_wrong_nyc_case_study_of_a_successful/] what happens if you take this sort of thing, break it into small chunks and spread it out over time? People who attended minicamp can probably provide more informed speculation on these matters than I can.
A couple of notes:

a) Two of the changes I made account for most of the gains: cutting the tail of my gaming (not just Dominion) and buying a car. There were other changes, but they were all an order of magnitude smaller.

b) The process I used does not require minicamp (but was caused by minicamp). You can do it now, in a couple hours. Write down everything you do in the 24*7 hours in a week. Look at the biggest chunks of time. There are two basic types: things you kinda have to do, and things you do because you want to. For those you have to do, ask (be curious!) how you can spend less time doing them, and then see if any of your methods are net positive. For those you want to do, ask how much better spending all of this time is than spending half of this time. Wherever the value starts to drop off sharply, just cut back to that amount.

This is one of those examples of trusting that something is well worth investigating because people you recently came to trust say it's well worth investigating. Finding out that it wasn't would cause me to take a step back and wonder again: "have I been brainwashed? are my defenses truly up like I feel they are? was the minicamp actually awesome, or just the standard glow of decently-run retreats? 'cause if it wasn't actually awesome, then the halo effect is an invalid bias, not a useful heuristic".
Anna et al. have been doing a lot of ongoing work on developing better rationality exercises. The focus of these camps will definitely shift: less stuff to learn (much less lecture format) and much more time spent practicing.
Thanks for this; it's detailed and doesn't shy from pointing out the Bad and the Ugly (though it seems like there isn't much of those!). One thing that made me curious, however: How did you determine this? Edit: Oh, I see you explain this below.

Request/advice: please consider taping the sessions. This will be useful to:

  • improve them in the future
  • package them as courseware, possibly for sale

I agree with this.

I would rather see them for free on YouTube or something. It would help me and others decide if it was something we'd want to try out ourselves.

Without having attended one, and as someone who has been reading OB/LW ever since Eliezer started posting at OB, it seems like the largest benefit I would get out of such a thing is the social networking. If I'm right, and if I'm typical, you wouldn't be removing motivation for most potential camp attendees, because videos wouldn't give them the biggest benefit: person-to-person networking and friendship-building.

I'd say it's likely that the few people whose motivation to attend was removed, by feeling they'd already gotten everything out of the camps by watching the videos, would be more than counteracted by the interest in the camps the videos would raise.

Unless the videos make the camp seem boring or not worthwhile, of course!

Videotapes aren't interactive, and it seems like the biggest benefit from the workshop would be actually engaging with the people and exercises involved.
We'll / I'll totally consider this. Though note that most of the session-minutes will be composed of practice and exercises, not of watching a lecture; and so the value of watching on YouTube would be circumscribed.
I realize these will not be very useful out of the box, but considering how a number of Stanford classes were successfully ported to long-distance format (with interactive exercises, quizzes, etc), this might be a good first step in the refinement process. I think analyzing your performance via video is underrated outside of sports.
I recently started drafting a post around this exact premise! Any interest in collaborating?
This is an interesting concept - I look forward to the post. Some quick notes: In my experience, people instinctively balk at being recorded, or listening/watching a recording of themselves. I think there's something unnerving about it, and in some cases probably indicates low self-confidence. Perhaps something to do with mirror-neurons as well?
I'm not bothered by being recorded (provided I know who is going to see the video), but I feel somewhat uncomfortable watching the video afterwards.
If it matters, we've been filming our Saturday test sessions for our own use (watching the videotapes to learn how to teach, after setting a webcam up across the room), but that's quite different from making usable video for YouTube.
I can't contribute much other than the raw observation :(. I've seen this done by a guy from Dale Carnegie who was teaching presentation skills, and noticed some benefit from watching a couple of presentations I recorded myself. I imagine the benefit would be multiplied if I were going to give this presentation again and again, like someone who is planning a curriculum (as above). Looking forward to your post!
Well, it's potentially one vector for folks to learn how to do the practice and exercises.

In the long run, we are working anyhow to port the exercises into a form that will work well at local LW meet-ups.


I attended minicamp last year, and I followed up with almost all of the attendees since then. I have had periodic Skype chats to see how it impacted their lives, so I can pretty confidently say that the minicamp:

  • Had a dramatic positive impact on some people
  • Had a significant (noticeable) positive impact on almost everyone
  • Had no noticeable negative effects on anyone

It definitely had a positive impact on me, but I represent more of a median result than an outlier. Since minicamp, I:

  • Sold my company, and am on track to make more money in the time since minicamp than I've made in the past few years. The decisions that led to this were a direct result of the systematic decision and planning process I implemented because of minicamp

  • Turned down a job offer at Google to work at an even more awesome company. I both learned about--and got an interview with--this company directly because of a contact from minicamp

  • Improved in a hundred small ways (again, directly attributable to notes I made at minicamp), from fashion (I now regularly get compliments where I got none before) to health (I use less time to exercise and eat, yet feel much better)

There were definitely parts of the... (read more)

I'm curious to hear more about that point. Do you mean to say that you explicitly implemented a system that designated how to make those kinds of decisions?
Sort of. I meant to say that I decided to make explicit long-term, medium-term, and short-term goals, regularly check their progress, estimate their difficulty and likelihood, and develop a better sense of the space of other opportunities, all as a direct result of minicamp (there was a session or two on goals, sessions on estimation and prediction calibration, and in general while there I realized that I sucked at seeing opportunity costs). After I did all those things, it effectively resulted in a systematic decision and planning process, since I had a much better sense of which tasks had the highest expected payoffs for my goals, and I simply work on those first.

In answer to “Zero to ten, has your epistemic rationality improved?”, the median answer was 7 (mean 6.9).

That's not something to ask people, that's something you ought to actually measure before and after, otherwise what kind of rationalists are you.

Would you like to help us develop our rationality metrics? It's a fairly difficult problem. We can't just give people the CRT before and after a camp.

The main problem is that a test tests ability to take the test, independently of what its makers intended. The more similar tests are to each other, the more taking the first is training for the second, and the easier it is to teach directly to the test rather than to the skill that inspired the test. The less similar the before and after tests are, the less comparable they are.

Rationality training is particularly tricky because one is to learn formal models of both straight and twisted thinking, recognize when real-life situations resemble those patterns, and then decide how much formal treatment to give the situation, as well as how much weight to give to one's formal model as against one's feelings, reflexive thoughts, and so on. Traditional classroom tests are set up to best test the first bit, knowledge of the formal models, if one did solve the problems inherent in testing. Even to the extent one can ask people about how one ought to react in the field, e.g. when to use which sort of calculation, that is still a question with a correct answer according to a formal model and one is still not testing the ability to apply it!

These problems resemble those the military has faced in its training and testing. They use indoctrination, simulations, and field tests. Decision making is tested under uncomfortable conditions, ensuring probable good decision making under most circumstances. In general, knowing what they do is likely to be helpful.

The problems with tests are not intractable. One can limit the gain on the second test from having taken the first test by saturating the test taker with knowledge of the test before it is taken the first time, though few would be motivated. One can try to make a test similar to the skill tested, so ability at the test is well correlated with the skill one intends to test. One can try to devise very different sorts of tests that measure the same thing (I doubt that will work here). One component of a useful classroom test m... (read more)
Ah, yes, that is indeed the first thing one should work on, otherwise the MW (Must Win) interpretation of Rationality is little better than the MW (Many Worlds) interpretation of Quantum Mechanics. I didn't realize that, after all this time, there are still no objective metrics to measure the success of the course. I wish I had good ideas as to how to experimentally measure rationality, but alas. Hopefully other forum regulars do. Or maybe EY can spend some time thinking about it.

I guess an obvious way to start is to score a particular behavior based on some objective criteria, like the pass/fail on those sunk cost situations Anna (?) linked here some time ago. It's not nearly as good as actually putting people into the circumstances where they have to apply their newly learned skills (such as detecting confusion, recognizing cognitive dissonance, what have you), but it's a start.

As a next step, my guess is that if you look through the standard psychological experiments (maybe something less drastic and notorious than the Stanford prison experiment), you will find quite a number of them that can be cheaply replicated in a controlled setting like a mini-camp. I'm sure that gwern can dig up a whole whack of them in no time flat. Or maybe you are already doing this, for all I know.

The important thing is that the participants should be inside the situations, not outside of them, and hopefully unaware that they are being tested. I guess it is sort of similar to giving two sets of CRTs, before and after.

In answer to “Zero to ten, will your life go significantly differently because you came to mini-camp?” the median answer was 7.5 (the mean was 6.9) [This was the response that was most positively surprising to me.].

How long after the camp ended did you ask that question? If not very long, the answers don't surprise me at all. Asking such a question a year after the camp would be more interesting.

The question was asked on the last day of minicamp. We'll include something similar in the upcoming re-survey.

This sounds great. A couple questions:

  1. Why do you ask for my LW username? Will I be judged for poorly thought out comments or misfired jokes?

  2. What is the difference between the 3 day and the week long? How do I decide?

I'm not sure what to say about 3-day vs. week-long. We know week-long worked last year; 3-day seems worth trying, because, if it works well, it'll allow the material to be accessible to a much larger set. We'll give you the best three days of stuff, but there'll be less total stuff and less chance to bond. I guess I'd come for the week if you have time for it, and would come to the 3-day if you can fit that but not the other? In any case, don't delay your application while considering. You can always email in later with a changed timing preference.
I'd like to second the second question. Should I be worried about the 3 day camp attempting to cram in too many useful techniques or the week long camp having filler?
The week-long camp will not have filler. At the minicamp, there were exercises that I suspected weren't useful -- the "rationalizing game" stands out in my mind -- but probably a bigger problem was trying to pack so many things into a week's span. I definitely had the feeling that I needed to be taking good notes, because I couldn't possibly absorb everything in the various sessions during those sessions.
They ran our camp for 10 weeks or so. Sure, some of that was filler. But they have enough there for at least a week.
That's great to hear. On the other extreme, could I get enough out of a three day retreat to bootstrap future learning?

Feedback: I'm interested in this but will not attend.

Why: I'm a professional in another city with limited vacation time. I could potentially go to the Bay Area for this, but it'd be expensive in money, time, and vacation not spent elsewhere. I believe it might still be worth doing, but am not convinced.

However, I AM convinced that if one were held in my city (in this case, Seattle) for a similar price, I would be very interested. The cost could be offset because lodging/travel by the instructors would be paid instead of the attendees. If the workshops were something like Thursday/Friday evening and all weekend, so much the better.

Suggestion for the future: Check interest for doing these locally in other major cities and run the numbers to see if it's worth it. It might not make sense, but if it did, count me in!

Eliezer Yudkowsky:
This may happen in the future, depending on how this year goes; we're pretty sure it's not happening this year, so it's not a good thing to wait on.

Thanks. I doubt I will go this year for the reasons I listed above. Next year when I have more vacation time built up I'd consider doing it.

Although if you'd like to include "read advance chapters of HPMOR" into the benefits, I'm in.

Eliezer Yudkowsky:
...I don't know if I'll have the next chapter by May 13, and if there are other advance chapters after that they'll probably be horrible cliffhangers, but I'll consider it.
I was mostly kidding (but I can't deny that it would be an awesome perk).
Eliezer Yudkowsky:
Actually, quick check - was it clear from the text that there are two 3-day weekend camps and one 1-week camp? Hopefully a 3-day camp wouldn't be that expensive in terms of vacation time not spent elsewhere. Go ahead and ignore this comment if it was already clear, but if it wasn't clear from the text let me know and I can try to emphasize it more strongly.
FWIW, it was not clear from a skim.
Eliezer Yudkowsky:
Okay, went in and fixed.
Yes, it was clear to me. I would prefer the weeklong one but am considering the weekends. The cost of airfare is the same.

Another question: Is there any particular reason why you are including the hotel costs in the fee? I can see the marketing value of a single number, but for those already in the Bay Area (or with friends/family there), reducing that cost a bit by staying elsewhere would be helpful. If activities/teambuilding are going late, that makes sense, but that was not clear on a single read-through (I could read through again to find out, but figure the feedback on this not being clear might be helpful).
Clarification: the weekend/weeklong distinction was NOT clear from the dates listed, I saw dates and skimmed for more information. It wasn't until I got to the costs section that I realized it and went back up to the dates.
It is clear now. However, I'd like to mildly dispute "Hopefully a 3-day camp wouldn't be that expensive in terms of vacation time not spent elsewhere". Some of us come from another country (quite a few, actually), and the travel time alone is equal to the 3 days we'd spend on the actual camp. Plus a day or so adjusting to jetlag enough that we could even stay awake for the first day. I'd estimate that for me to attend even the three-day camp would be, at minimum, a seven-day vacation. Factor in the significant cost of the airfares, and we're starting to talk a lot of vacation-opportunity-cost. Obviously, attending the longer camp would have a better signal-to-noise ratio, but it would still be a serious undertaking.

PS: that's not to say, of course, that it's not still worth the effort. Personally I'm looking into my options to see if I can pull it together to get to the week-long one this time. The main point is just not to forget that a significant number of LW readers are in a quite different situation to your own, and thus the difficulty of attending should not be inadvertently trivialised.

I attended the minicamp last summer, at more personal expense than most participants, since I flew in from europe (I did have other things to do in California, so the cost wasn't entirely for minicamp).

If you want an analogy with minicamp, think of an academic summer school. At the most important level, I think the only thing that really separates minicamp (or an academic summer school) from Christian camps is that the things they teach at minicamp (and summer schools) are mostly correct.

I go to summer schools to learn from people who have thought about things that I care about, in greater depth than I have. If you don't believe that will be true, don't go. You should be able to make a reasonable guess whether you think you have things to learn by looking at the instructors' posts on Less Wrong.

I definitely agree with many things that the other participants said. I found that minicamp gave me a sense that things that normal people consider insoluble are often not, and a well thought out series of actions can lead you to places that most people would not believe. I also found it inspiring to be around a group of people that really care about improving themselves - something that I h... (read more)

7b) Is there any evidence I'll be glad I went that a Christian retreat could not produce just as easily?

Edit: Okay, 15 seconds to this being downvoted was a little hasty.

I know that this is mere anecdote; and that after doesn't strictly imply because of. But, since the mini-camp, people who know me would probably agree that:

  • I am more likely to try new things; in particular, I now have the habit of trying new habits to see what works and what doesn't. This has helped in a handful of little ways:
    • I've stopped biting my nails.
    • I've stopped drinking soda.
    • I maintain a journal to get better information about myself.
    • I use Anki to memorize facts, instead of just thinking it's a good idea. This has made my work rather more efficient.
  • I have more time and energy for both my academic work and other activities I enjoy.
  • I meet people more easily, and have more friends.

To emphasize the last point, uncomfortably personally: I am no longer cripplingly unable to examine my own sexuality, ask women out, or engage in relationships. (I'm still inexperienced for my age, though this improves over time.) These changes are due to techniques I learned at mini-camp: not lessons of the form "how to pick up women", but "how to be right about yourself".

Also, I suspect my writing has improved.

There are also internal, mental changes; and I suspect that the rate at which my agency improves has increased. But you'd get the same report in different words from someone after a Christian brainwashing retreat, so I suppose these are pretty weak evidence for you.

Hey, I'm glad to hear that :)

Finding people who could converse at a high level about the most important topics in the world was more fulfilling than I could have imagined. You can get some of this at a meetup - and I've been to meetups in Chicago, St. Louis, and the Bay - but the level of fulfillment I got at the mini-camp was the greatest by far.

Again, forgetting all the rationality training - there were moments at mini-camp when everyone was hanging out and I would literally have trouble deciding where to stand in a room because every conversation going around me was so ridiculously interesting that I couldn't stand choosing where to place myself. I felt like a wealth of knowledge was being spilt around me, and if I didn't scramble to consume as much as possible I'd miss some lifechanging insight and regret it forever. It was so beautiful it hurt.


Wow. That's like the opposite of most parties.

Wei Dai:
Can you describe the difference between a typical conversation at the mini-camp, and a typical conversation on LW? (Would it be accurate to say that you're more impressed with the former than the latter? I'm curious to find out why if that's the case.)
It would be accurate to say I'm more impressed with the former than the latter. I think the majority of this effect is caused by a) the conversations being in person, which is a better format than this nested Reddit thing, and b) the fact that we were together so long. That said, the conversations were also more enjoyable and interesting than conversations I've had at meetups (which have often been fantastic). I'm not exactly sure why - perhaps experiencing the somewhat rigorous mini-camp generated a sense of camaraderie, and thus friendship? After trying to adjust for the above effects, it also does seem to me that any residual difference in quality could have to do with the group that was selected. Luke did mention to me that they tried to choose a relatively extroverted set of people for the first mini-camp. Also, the level of professional success at the mini-camp was higher than most other groups I've been in, including meetups. (I also think the median age of the mini-camp must have been higher than the median ages of the meetups I've attended. At 21, I was one of the youngest there.)
Wei Dai:
So it's more about the form of the conversations, and less about the content? A problem I have with in-person group conversations is that I'd occasionally find that whoever is speaking is rambling or just not being as interesting as I hope, and wish there was some way to politely signal the person to make their point quickly and give someone else a turn. And then when I get a chance to speak, I'd fear that I'm not being as interesting as I had expected to be when I decided to speak up, and other people are thinking that I should stop talking. I'm curious if other people have had this problem and how they dealt with it.
An experiment I tried once, when I was helping mediate a 60-person round-robin discussion group (1), was to give everyone in the room four colored index cards: red, blue, green, and white, and assign them meanings by convention:

  • red = "I disagree with what the speaker is saying"
  • green = "I agree with what the speaker is saying"
  • blue = "I have a question about what the speaker is saying"
  • white = "I do not care about what the speaker is saying"

My theory was that by establishing a communication channel that supported multiple simultaneous inputs, I could get the flow control to be a lot more efficient. The experiment mostly failed, in that people didn't use the cards, so I can't really speak to results. It still seems plausible to me, and I haven't seen it done elsewhere.

===

1 - Don't try this at home.
Wei Dai:
I think people already do something like this, using facial expressions and body language. Using your cards probably felt redundant, condescending (implying the speaker can't read the standard signals), weird, or too explicit (e.g., when you want to signal disagreement/disinterest but also want plausible deniability). So I guess I was hoping for some tips on how to read/send the usual signals, and what to do when someone rambles on despite sending the usual signals. Another idea I just thought of is to have a smartphone app that allows one to send a covert anonymous signal to the speaker (but it would probably take too much work to get everyone to set it up and use it).
Certainly. Those mechanisms weren't working terribly reliably in a conversation that involved 60 people, which is precisely why I'd been looking for ways to augment the normal mechanisms.
Basically. But I think the form of the conversations leads to much better content, more depth of exploration, and clearer / faster communication.

I honestly find that this is difficult. I think it's easier to learn how to politely interrupt, or just be careful about the groups one hangs out in, or speak in smaller groups.

That is interesting. I try to keep my points short, when possible. I think short points also facilitate better communication; shorter back-and-forth periods enable people to ask for the specific information they need, and close inferential gaps.

As an attendee, my personal data might be relevant:

I have gained practice deliberately acquiring new habits and soliciting useful feedback. Before camp I had no specific plans for self-improvement other than "work harder", and now I actually keep track of what works and what doesn't. For instance, I am deliberately improving my public speaking skills by giving talks on Minicamp material once a week to a limited audience. I would place a bet that the "alternate universe me" who instead attended Inspirational Retreat X (IRX) would not have had lasting effects nearly a year later.

I am deliberately extending my network of contacts. Speaking to new people was a skill that I didn't have pre-Minicamp. On this point, "alternate universe me" could have reasonably acquired similar skills from IRX, but I have relatively strong reason to believe that those skills would be much more of a black box than they are now. I usually leave workshops inspired, but I can tell if it's a poor workshop when I try to apply the skills I learned and discover that it's not as easy as it seemed to be according to the instructor's examples. There is a difference between "explain... (read more)

Paul Crowley:
People reporting back from a Christian retreat are likely to report effects that Christians approve of - that they're asking Jesus to help them decide in their daily life, that they feel a more full and whole relationship with God, etc. But those things (where they don't require the existence of a God) are likely to be true - they really are doing those things.
If you went to a Jehovah's Witness retreat, and were in an accident, and you were conscious enough to refuse a blood transfusion, you'd be glad for having learned what you did at the retreat, even if you knew the refusal would be fatal. In general, anything that is compelling and affects your decisions will make you glad for it, and its being compelling is probably not inversely related to its being true. So I'm not too concerned that my tentative answer to this question is "no."
I'm concerned, however, that the camp can't produce evidence of the kind, "Before the minicamp, Mary Sue was in rehab for crack. A year later, she's clean and has a successful web consultancy." (Exaggerating the expected magnitude of change, of course.) Religious retreats don't produce this, and tend to produce results more like, "Immediately after the retreat I felt really good, and a year later I do awesome on unobservable metrics!"

Before the bootcamp, I'd just barely managed to graduate college and didn't have the greatest prospects for finding a job. (Though to be fair, I was moving to SF and it was a CS degree.)

At the bootcamp, I founded (and then folded) a startup with other bootcampers, which was profoundly educational and cost a couple months of time and <$100.

Now, <1 year after the bootcamp, I'm doing programming and design work on the new SimCity, which is as close to a dream job for me as could reasonably be expected to exist.

I can't attribute all my recent success to the bootcamp, because I was pretty awesome beforehand, but it really did dramatically improve my effectiveness in a number of domains (my girlfriend is grateful for the fashion tips I picked up, for example). Other specific things I've found useful include meditation, value of information calculations, and rejection therapy.

Replace "glad I went" with a better criterion; that question deserves a good response.
"Is there evidence this will be worthwhile according to my values now, independently of how it might change my values?"

"Is there evidence that this is instrumentally useful for more than warm fuzzies?"

"Is there evidence that for the probable benefit of this event the costs are substantially optimized for it? I.e., if the benefit is substantially social, even if this would be worth flying around the world for, a program could actually be optimized for social benefits, and/or I could attend a closer/cheaper/shorter program with similar benefits to me."

"Regardless of anyone's intent, what is this program optimized for?"

"How's the food?"
Why our kind can't cooperate [http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/].
The cooperation has actually been happening; it's just that it was achieved by ostracizing the guy who asked if you were adhering to the principles expected of that kind.

Note that your original comment has positive and rising karma at this point. I have a high estimation of the minicamps (partially because I'm friends with fiddlemath, who really has appeared to level up since last summer in noticeable ways), but I'm glad that you're stubborn about making SIAI/CMR show you some good evidence.

There are ways of making that point without saying it sounds like a "Christian brainwashing retreat."

Sorry, "Christian retreat" didn't convey the idea, and in any case I gave a link to a better explanation of the part of conceptspace I was trying to refer to. I'll take it out since the link should suffice.

Thanks for your gracious apology. :)
If you don't care whether the cooperation is doing useful work, then sure. Otherwise, criticism seems to be a necessary evil.
For the purpose of causal inference / intervention evaluation, you must ask if a Christian retreat would have had this effect on those participants. Perhaps Christians feel closer after a Christian event, but I find Christian events somewhat alienating because I'm not Christian. I don't find aspiring rationalist events alienating, in part because I'm an aspiring rationalist. It's fun to hang out with people who have common interests, and depending on who you are, that group is a different group... for me, it's rationalists.

Part of the point of the camp is that it has a similar bonding effect that any coming together of people with a deep common interest or aspiration can have, and in this case, the common aspiration is rationality.

Plus, at the camp, I did internalize skills and attitudes that have helped me a lot over the past year (i.e., I've improved much more over the past year than I have in previous years), for example, looking more vigilantly for fungibility between my time and money, and looking more at the reasons I do things and finding more effective ways to pursue those reasons...

Those particular effects I wouldn't expect from a Christian camp, just as the particular effect of feeling close to Jesus is not an effect I'd expect from a rationality camp. I just happen to prefer the "rationality" effects, and these camps are for people with similar such preferences. Seriously, it's fun :)
If the primary motivation for attending is the emotional rewards of meeting others with an interest in rationality and feeling that you've learned how to be more rational, then yes, a Christian brainwashing retreat would make you glad you attended it in the same way, if and only if you are/became Christian (since non-Christians likely wouldn't enjoy a Christian brainwashing retreat). That said, as many of us have little/no data on changes in rationality (if any) of attendees, attending is the only real option you have to test whether it might. Confirmation bias would make a positive result weak evidence, but it'd be relatively important given the lack of other evidence. Luckily, even if the retreat doesn't have benefits to your objective level of rationality, it sounds worthwhile on the undisputed emotional merits.

I think what SilasBarta is trying to ask is: do we have any objective measurements yet from the previous minicamp that add weight to the hypothesis that this camp does in fact improve rationality or life achievement over either the short or long term? If not, then I'm still curious: are there any plans to attempt to study the rationality of attendees and non-attendees to establish such evidence?
Yes, that's an oft-repeated goal, and as Eliezer mentions in a sibling, there's a one-year follow-up planned but it has not yet been a year.
Right, it's been nearly a year since the last one. The long-term evidence is out there. How are attendees doing in their lives now vs how they were doing before? I'm pretty sure there's been enough time to find this information out by now.
It's hard to get objective evidence on this, because the participants were all pretty exceptional people to start off with, and there were so few of them, but there is an effort underway to collect what data we can from those that attended the longer Boot Camp - hopefully we'll be able to report back within a month.

It's April 16th today... Any update on when the acceptances will be sent out?

I received an email on the 19th asking for additional information about myself. So I'm guessing that as of the 19th they were still not done selecting.

Since several people have asked for negative reviews in addition to positive ones, I'll share my mixed review, starting with the bad:

The first minicamp was divided into two types of sessions: epistemic rationality, and social effectiveness.

The epistemic rationality sessions always seemed "right", but what we did didn't translate directly into making improvements in my day-to-day rationality. The exercises were fun and seemed like they were "on the right track," but I am still putting time and energy into figuring out how to turn "knowing" about rationality into actually making good decisions. (Other minicamp participants reported spectacular gains, so perhaps I'm just a slow learner in that respect.)

On the other hand, the instructors did and do seem quite serious about making things better along that axis, so I would expect this coming minicamp to be superior in some ways.

The social effectiveness training was much more concrete and I was able to apply it immediately. I've gotten measurable results - compliments, and most shockingly to me, strangers smile at me on the street. On the other hand, I don't think that should be Rationality Org's comparative ... (read more)

There are a lot of glowing recommendations from past participants here. In fact, I have not noticed a single criticism from a past participant. This reeks of self-censorship, because even if rationality mini-camp was overall an awesome experience, it is very unlikely that it was perfect.

In order to do my small part to counteract the social pressure not to criticize those with high status, I hereby pre-commit to upvoting any comment from a past participant that contains criticism of the mini-camp. This includes comments that are generally positive but include criticism of specific aspects.

In this comment, GuySrinivasan reports The Good, The Bad, and The Ugly.

I can definitely understand your perspective. I pretty much ONLY read the negative parts of reviews--if there is NOTHING bad, that is a bad sign in itself. I also commented positively below, but since you asked, here are my complaints about the last minicamp:

  • A little disorganized. Apparently knowing about the planning fallacy does not make you immune to it ;) I suspect this will be fixed for this year.
  • Large number of college students (graduate and undergraduate). I would have liked to see a wider range of attendees. Again, this was probably partly due to the short notice for last year.
  • Some sessions were not valuable to me. However, most of those were valuable to others, so I think this is due more to the fact that brains are different than that the sessions were poorly done.

Actually, I'm pretty sure we all gave detailed feedback afterward (including lots of suggestions for improvements). Could Anna or someone post links to those too? Perhaps seeing the minor details that were negative will help people get a better sense for how useful it was overall.

“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”

Sing karaoke...

Now I can't get this image out of my head of Eliezer singing 'I am the very model of a singularitarian'...

Fund it.

A question for Eliezer, Anna or Luke: under which circumstances would you prefer people came to minicamp, and under which circumstances would you prefer people just gave that money (inc. travel) to the SI?

Not that I'm necessarily going to follow this advice, just curious which factors you think are relevant.

Vaguely related question: I'm sure I remember there being a minicamp in New York in the past (EDIT: I was wrong). What's the expected length of time before the next east coast camp?

People seem to be getting the impression that these minicamps may be primarily fundraising opportunities for the Singularity Institute. So, by way of explanation:

  • At my request, our Board of Directors put Anna in charge of developing Rationality Group and gave her a separate budget for it.
  • Rationality Group has been developing lessons, testing them, iterating the lessons in response to feedback, researching how to run a successful non-profit in this space, networking, and hiring founding members for a couple months now.
  • Rationality Group hopes to learn how to run these kinds of camps without losing money. (Minicamp paid for itself as a result of future donations from minicamp participants who plausibly wouldn't have donated nearly as much had they not had this in-person encounter with us, but we'd like to learn how to run these kinds of minicamps without losing money while not counting future donations.)
  • The lessons are now more tested and practiced than what we put together for minicamp, but we always appreciate opportunities to get feedback on them from additional participants.
  • Who knows? Maybe we'll want to hire some of the participants to be part of our team.
  • Above all, we wan
... (read more)
Oops. I wasn't thinking along the lines of "the people we most want to come to minicamp are the people who are most easily brainwashed into giving us money". Sorry if I gave that impression. I was more thinking along the lines of "when should I buy utilons, and when should I buy self-improvement in order that I can acquire more resources and/or more effectively turn resources into utilons?"

Minor point: can you clarify whether Rationality Group is the same thing as the Center for Modern Rationality?

> can you clarify whether Rationality Group is the same thing as the Center for Modern Rationality?

They are the same. We're still testing names. I've talked with several marketing firms, and right now my assistant is tracking down additional companies who focus on market-testing organization names.

If in doubt, apply. You can always decide to not come after being accepted, gaining info, etc. (Applying is a 10-15 minute process.)

As to money vs. minicamp: come to minicamp. Especially if you've had little to no in-person contact with this community before. You'll learn more, acquire more money-making skills, make friends, and get into a better position for long-term influence. At least, that's my impression.

We haven't run any on the east coast; at some point we'd like to do this, probably after refining our curriculum, growing our instructor-set, etc., but probably not in the next 12 months.

I would definitely go to one of these again. Are you considering repeats? If so how much do you expect repeats to get out of it?

We're considering it, though I'm not sure yet how much overlap there will be. We're also considering letting some past folk volunteer at the upcoming sessions. Contact me (via email or the application form, either way) with your interests.

My primary partner is curious about this sort of thing, since unsurprisingly I talk about it all the time. Should I be thinking about going on my own or both of us going? We live in London UK.

Depending on your partner, I'd consider coming together. If you come with a close associate of any sort, you'll have someone in your life afterward who can help you remember to practice the techniques in daily life. Do mark on your application if you / your partner much prefer to come with a particular other person, so that we can evaluate your applications jointly.
0 Paul Crowley 11y
Thanks! She's reminded me of some practical reasons she can't go this time. Hopefully we'll be bringing this to London before long. When do you hope to answer applications? The sooner I can book flights the better :)
Someone at the last minicamp brought his partner, and she seemed to like it. She was highly educated (a PhD student at Harvard in a mathematical science), and wasn't much into LessWrong before coming.

Thanks for putting this together - I am intrigued by a mini-camp.

One of the questions in the application form is a tickbox for "Interested in potentially working for CMR".

Could someone give some more detail on that question?

A google of "site:lesswrong.com CMR" didn't give me anything useful.

Ah - I found some details on Eliezer's planned Center for Modern Rationality: http://hpmor.com/modern-rationality/

Out of curiosity, what does the menu look like? Is it based around a specific, dare I say "rational" diet? Paleo, Bulletproof, Shangri-la?

The food at RBC was awesome, mostly due to a small number of attendees being very good cooks. Many of us also had similar fitness goals with regards to gaining muscle, so we had a 25 pound tub of protein, and cooked high-protein meals. My diet has been nowhere near as good since leaving RBC.
I helped make some of the food last time. I would call that menu "college random" ;) It was basically left as a problem for us to solve. I assume that this time they will have it straightened out (which is probably part of the higher price), but I am also curious.
I'd love to know this - heck, I'd teach a diet course if anyone is interested (those at the last NYC mega-meetup listened to an hour lecture/Q&A on my nutritional philosophies :)

I understand that you are expected to have read at least part of the sequences, but what sort of general education is necessary, if any? What kind of math should a participant be able to do in order to get optimal utility out of the event? I am seriously considering flying out to attend, and would like to know if I need to review anything :)

Consider giving an example of the sort of decision making procedure that is taught in camp, with the subject of the example whether one should attend the camp.


Write down all the reasons you think you are considering on a sheet of paper, in pro and con columns. Circle those that do not refer to consequences of going or not going to camp. Then shut your eyes to think for two minutes and think of at least five alternatives that you are likely to do instead of camp. Make pro and con lists for the most likely three of these. Then circle non-consequences. G... (read more)
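Purely as an illustration (not anything taught at the camp, and with all names made up), the consequence-filtering step of the procedure above could be sketched in code:

```python
# Toy sketch of the procedure above: list pros and cons for each option,
# then keep only the reasons that describe actual consequences of the
# choice (circling the non-consequences and discarding them).

def filter_consequences(reasons):
    """Keep only reasons that refer to consequences of the decision."""
    return [r for r in reasons if r["is_consequence"]]

def score(option):
    """Crude proxy: net count of consequence-based pros minus cons."""
    pros = filter_consequences(option["pros"])
    cons = filter_consequences(option["cons"])
    return len(pros) - len(cons)

options = [
    {
        "name": "attend camp",
        "pros": [
            {"text": "learn techniques I can apply", "is_consequence": True},
            {"text": "meet potential collaborators", "is_consequence": True},
            {"text": "it sounds exciting", "is_consequence": False},
        ],
        "cons": [
            {"text": "costs $650 plus travel", "is_consequence": True},
        ],
    },
    {
        "name": "self-study at home",
        "pros": [
            {"text": "free", "is_consequence": True},
        ],
        "cons": [
            {"text": "no in-person feedback", "is_consequence": True},
        ],
    },
]

best = max(options, key=score)
print(best["name"])  # prints "attend camp" for this made-up data
```

Of course, counting reasons is a stand-in for actually weighing them; the point of the sketch is only the filtering step, which forces each listed reason to be a real consequence of the choice rather than an applause light.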

Is there somewhere on LW to report this kind of thing? A spam or admin notification thread?

See this ticket [http://code.google.com/p/lesswrong/issues/detail?id=272]. Currently, one of the moderators has to notice personally, though you could for example send a PM to me.
I see now that there's a "report" button in my inbox - I knew I'd seen it somewhere on this site. Cheers, I'll PM in future.
In practice, the report button doesn't seem to do much. There's a list of reported things, but I don't think any mods look at it.
Since "report" buttons currently don't appear in most places, the lists of reported items go without update for months, so there's not much incentive to look at them. The last reported item in Main is of 12 June 2011. But I've got the pages bookmarked, so if the feature is resurrected it'll serve its purpose.

Could you specify the location of the minicamp? Or suggest several possible locations? I just want to calculate the trip time from San Francisco International Airport; I also hope this will be useful information for all applicants from outside the USA, like me.

The previous camps were held in Berkeley, and I'm fairly sure these will be held there as well. They also picked up the camp participants from the airport last time.

As somebody about as far from the Bay Area as a USian can get, and who has been hearing great things from a friend in the area about the CMR workshops… when will there be online workshops and/or camps for the northeast or the east coast in general?

I'm not sure. Maybe a year from now for the east coast, and some longer period of time for online (since online exercises need more development before they work well stand-alone)? I probably wouldn't wait. Lots of people flew in last time from Europe.
Thanks for responding. I'm trying not to underestimate the value offered here, but the degree of non-monetary cost of going varies fairly widely. For those of us who would have to disrupt things fairly severely to go, would getting meetup groups, etc. trained on the curriculum be a means to train more people, sooner, even if removing the intensive camp setting means that the participants take more time overall?
We are working on that. The trouble is that it is easier to design exercises that work in a local setting, with skilled and knowledgeable instructors, than to design exercises that can be ported. We are giving some preference in minicamp admissions to folks who run or plan to run meet-ups, and we're interested in hiring people who will be more able to design exercises and port them to meet-ups; if you have friends in these categories, send them our way.

Regarding effectiveness skepticism - what do we expect the camps to achieve? Is it testable, and can we agree how to test it in advance?

Thanks for this post, I may be interested in attending.

I live in the Bay Area, however, and I would be happy to drive back and forth, so I would be more inclined to pay less and not receive lodging. Is there any chance that the option of doing so would be available to me?

EDIT: Oops, I just now read this comment which brings up the same question. OK.

Another question I realized is probably more relevant: What has been the median age for attendees of these events? Are they demographically young college age students, or what?

RBC was mostly young college-age students (the oldest participant was 29, if I recall correctly; I believe I was the first or second youngest at 21). The median age for the minicamps was probably a bit higher, but not substantially so.

Photos from the May Minicamp are up on Facebook :) Thanks to everyone for the amazing time, and new friendships :D

Just booked plane tickets - I will be there for the May one! :)

Is it too late to apply for this?

The possibility of flying out for this only became apparent to me very late on. I submitted an application as soon as I knew it was an option for me, which was the 22nd of April. Since it seems like applicants are already getting answers, I'm resigned to the possibility that my application was submitted too late to be considered.

Is that in fact the case? If so, it's probably worth modifying or removing the "Apply Now" links. If not though, I'll adjust my holiday plans accordingly.

Are applications still being accepted? I had been busy the previous days, so I only just submitted mine.

Apparently most people in the LW community don't know how to dress well. I read some testimonials, and part of them are in some sense about achieving goals like being more sociable and not being an anti-social nerd.

There are skeptics too, but to me the effectiveness of (decision theory + psychological tests) is clear, even from just the few comments I read. Maybe they are overly optimistic, but their sense of improvement and the evidence from their professions indicate that the cost/benefit ratio is superior to that of most camps/groups out there.

I applied, but accidentally submitted it with an obviously incomplete current/past employment field. What would be a sensible way to remedy this?

Email our new executive assistant, stephen p cole at gmail, with your details, and ask him to fix it for you.
Thank you.