This is just a guess, but I think CFAR and the CFAR-sphere would be more effective if they focused more on hypothesis generation (or "imagination", although that term is very broad). E.g., a year or so ago, a friend of mine in the Thiel-sphere proposed starting a new country by hauling nuclear power plants to Antarctica, and then just putting heaters on the ground to melt all the ice. As it happens, I think this is a stupid idea (hot air rises, so the newly heated air would just blow away, pulling in more cold air from the surroundings). But it is an idea, and the same person came up with (and implemented) a profitable business plan six months or so later. I can imagine HPJEV coming up with that idea, or Elon Musk, or von Neumann, or Google X; I don't think most people in the CFAR-sphere would; it's just not the kind of thing I think they've focused on practicing.
There's a difference between optimizing for truth and optimizing for interestingness. Interestingness is valuable for truth in the long run because the more hypotheses you have, the better your odds of stumbling on the correct hypothesis. But naively optimizing for truth can decrease creativity, which is critical for interestingness.
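As a toy illustration of the "more hypotheses, better odds" claim (the numbers are made up, and this deliberately ignores the cost of evaluating hypotheses, which comes up later in the thread):

```python
# Toy model of "more hypotheses -> better odds": if each independently
# generated hypothesis has probability p of being correct, the chance
# that at least one of n hypotheses is correct is 1 - (1 - p)**n.
# p and n below are made-up numbers for illustration only.
def chance_correct_is_included(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 100, 1000):
    print(n, round(chance_correct_is_included(0.01, n), 3))
# 1 -> 0.01, 10 -> 0.096, 100 -> 0.634, 1000 -> 1.0
```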
I suspect "having ideas" is a skill you can develop, kind of like making clay pots. In the same way your first clay pots will be lousy, your first ideas will be lousy, but they will get better with practice.
...creation is embarrassing. For every new good idea you have, there are a hundred, ten thousand foolish ones, which you naturally do not care to display.
If this is correct, this also gives us clues about how to solve Less Wrong's content problem.
Online communities do not have a strong comparative advantage in compiling and presenting facts that are well understood. That's the sort of thing academics and journalists are already paid to do. If online communities have a comparative advantage, it's in exploring ideas that are neglected by the mainstream--things like AI risk, or CFARish techniques for being more effective.
Unfort...
Definitely agree with the importance of hypothesis generation and the general lack of it–at least for me, I would classify this as my main business-related weakness, relative to successful people I know.
A few nitpicks on choice of "Brier-boosting" as a description of CFAR's approach:
Predictive power is maximized when Brier score is minimized
Brier score is the sum of squared differences between probabilities assigned to events and indicator variables that are 1 or 0 according to whether the event did or did not occur. Good calibration therefore corresponds to minimizing Brier score rather than maximizing it, and "Brier-boosting" suggests maximization.
What's referred to as "quadratic score" is essentially the same as the negative of Brier score, and so maximizing quadratic score corresponds to maximizing predictive power.
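To make the sign convention concrete, here is a minimal sketch (the numbers and function names are mine, purely illustrative, and not from CFAR's materials):

```python
# Brier score as described above: squared differences between forecast
# probabilities and 0/1 outcomes (averaged here; summing vs. averaging
# doesn't change which forecaster comes out better). Lower is better.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def quadratic_score(probs, outcomes):
    # Essentially the negative of the Brier score, so maximizing it
    # corresponds to maximizing predictive power.
    return -brier_score(probs, outcomes)

forecasts = [0.9, 0.2, 0.7]   # hypothetical forecast probabilities
outcomes  = [1,   0,   0]     # what actually happened
print(brier_score(forecasts, outcomes))      # 0.18
print(quadratic_score(forecasts, outcomes))  # -0.18
```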
Brier score fails to capture our intuitions about assignment of small probabilities
A more substantive point is that even though the Brier score is minimized by being well-calibrated, the way in which it varies with the probability assigned to an event does not correspond to our intuitions about how good a probabilistic prediction is. For example, suppose four observers A, B, C and D assigned probabilities 0.5, 0.4, 0.01 and 0.000001 (respectively) to an event E occurring and the event turns out to occur. Intuitively, B's prediction is on...
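Working that example through (a quick sketch; only the four probabilities come from the comment above):

```python
# Brier contribution for a single event E that occurred (outcome = 1),
# for the four hypothetical observers above: (p - 1)**2.
for name, p in [("A", 0.5), ("B", 0.4), ("C", 0.01), ("D", 0.000001)]:
    print(name, p, round((p - 1) ** 2, 6))
# A -> 0.25, B -> 0.36, C -> 0.9801, D -> 0.999998
```

Note how C and D differ by four orders of magnitude in the probability they assigned, yet their Brier contributions are nearly identical; a logarithmic score, by contrast, would separate them sharply.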
If CFAR will be discontinuing/de-emphasizing rationality workshops for the general educated public, then I'd like to see someone else take up that mantle, and I'd hope that CFAR would make it easy for such a startup to build on what they've learned so far.
We'll be continuing the workshops, at least for now, with less direct focus, but with probably a similar amount of net development time going into them even if the emphasis is on more targeted programs. This is partly because we value the existence of an independent rationality community (varied folks doing varied things adds to the art and increases its integrity), and partly because we’re still dependent on the workshop revenue for part of our operating budget.
Re: others taking up the mantle: we are working to bootstrap an instructor training; have long been encouraging our mentors and alumni to run their own thingies; and are glad to help others do so. Also Kaj Sotala seems to be developing some interesting training thingies designed to be shared.
Feedback from someone who really enjoyed your May workshop (and I gave this same feedback then, too): Part of the reason I was willing to go to CFAR was that it is separate (or at least pretends to be separate, even though they share personnel and office space) from MIRI. I am 100% behind rationality as a project but super skeptical of a lot of the AI stuff that MIRI does (although I still follow it because I do find it interesting, and a lot of smart people clearly believe strongly in it so I'm prepared to be convinced.) I doubt I'm the only one in this boat.
Also, I'm super uncomfortable being associated with AI safety stuff on a social level because it has a huge image problem. I'm barely comfortable being associated with "rationality" at all because of how closely associated it is (in my social group, at least) with AI safety's image problem. (I don't exaggerate when I say that my most-feared reaction to telling people I'm associated with "rationalists" is "oh, the basilisk people?")
I had mixed feelings towards this post, and I've been trying to process them.
On the positive side:
On the negative side:
Even the title does this! It's a slightly odd grammatical construction which looks an awful lot like "CFAR's new focus: AI Safety"; I think that without being more up-front about the alternative interpretation it will sometimes be read that way.
Datapoint: it wasn't until reading your comment that I realized that the title actually doesn't read "CFAR's new focus: AI safety".
I support this, whole-heartedly :) CFAR has already created a great deal of value without focusing specifically on AI x-risk, and I think it's high time to start trading the breadth of perspective CFAR has gained from being fairly generalist for some more direct impact on saving the world.
I am annoyed by this post because you describe it as, "we had a really good idea and then we decided to post this instead of getting to that idea".
I don't see the point of building anticipation. I like the quote, "start as close to the end, then go forward."
To coordinate we need a leader that many of us would sacrifice for. The obvious candidates are Eliezer Yudkowsky, Peter Thiel, and Scott Alexander. Perhaps we should develop a process by which a legitimate, high-quality leader could be chosen.
Edit: I see mankind as walking towards a minefield. We are almost certainly not in the minefield yet, at our current rate we will almost certainly hit the minefield this century, lots of people don't think the minefield exists or think that fate or God will protect us from the minefield, and competitive pressures (Moloch) make lots of people individually better off if they push us a bit faster towards this minefield.
I disagree. The LW community already has capable high-status people who many others in the community look up to and listen to suggestions from. It's not clear to me what the benefit is from picking a single leader. I'm not sure what kinds of coordination problems you had in mind, but I'd expect that most such problems that could be solved by a leader issuing a decree could also be solved by high-status figures coordinating with each other on how to encourage others to coordinate. High-status people and organizations in the LW community communicate with each other a fair amount, so they should be able to do that.
And there are significant costs to picking a leader. It creates a single point of failure, making the leader's mistakes more costly, and inhibiting innovation in leadership style. It also creates PR problems; in fact, LW already has faced PR problems regarding being an Eliezer Yudkowsky personality cult.
Also, if we were to pick a leader, Peter Thiel strikes me as an exceptionally terrible choice.
Leading a business and leading a social movement require different skill sets, and Peter Thiel is also the only person on the list who isn't even part of the LW community. Bringing in someone only tangentially associated with a community as its leader doesn't seem like a good idea.
If Alyssa Vance is correct that the community is bottlenecked on idea generation, I think this is exactly the wrong way to respond. My current view is that increasing hierarchy has the advantage of helping people coordinate better, but it has the disadvantage that people are less creative in a hierarchical context. Isaac Asimov on brainstorming:
If a single individual present has a much greater reputation than the others, or is more articulate, or has a distinctly more commanding personality, he may well take over the conference and reduce the rest to little more than passive obedience. The individual may himself be extremely useful, but he might as well be put to work solo, for he is neutralizing the rest.
I believe this has already happened to the community through the quasi-deification of people like Eliezer, Scott, and Gwern. It's odd, because I generally view the LW community as quite nontraditional. But when I look at academia, I get the impression that college professors are significantly closer in status to their students than our intellectual leadership is to the rest of us.
This is my steelman of people who say LW is a cult. It's not a cult, but large status differences might be a socio...
If anyone's mind is in a place where they think they'd be more productive or helpful if they sacrificed themselves for a leader, then, with respect, I think the best thing they can do for protecting humanity's future is to fix that problem in themselves.
Hi Anna, could you please explain how CFAR decided to focus on AI safety, as opposed to other plausible existential risks like totalitarian governments or nuclear war?
Is this an admission that CFAR cannot effectively help people with problems other than AI safety?
I intend to donate to MIRI this year; do you anticipate that upcoming posts or other reasoning/resources might or should persuade people like myself to donate to CFAR instead?
This post makes me very happy. It emphasizes points I wanted to discuss here a while ago (e.g. collective thinking and the change of focus) but didn't have the confidence to.
In my opinion, we should devote more time to hypothesis testing on both individual and collective rationality. Many suggestions to improve individual rationality have been advanced on LW. The problem is we don't know how effective these techniques are. Is it possible to test them at CFAR or at LW meetings? I've seen posts about rationality drugs - to take an example - and even though...
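One hedged sketch of what "testing a technique" could look like in practice (entirely illustrative; the outcome measure, group names, and numbers are my assumptions, not anything CFAR or LW actually does):

```python
# Illustrative sketch only: compare calibration (Brier scores, lower is
# better) between meetup attendees randomly assigned to practice a
# technique vs. a control group. All numbers below are made up.
from scipy import stats

technique_group = [0.18, 0.22, 0.15, 0.20, 0.17, 0.19]  # post-training Brier scores
control_group   = [0.25, 0.23, 0.27, 0.21, 0.26, 0.24]

t_stat, p_value = stats.ttest_ind(technique_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value would be weak evidence the technique helped; a real
# test would need preregistration and a much larger sample.
```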
Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together. And to do this among the relatively small sets of people tackling existential risk.
I get the impression that finding 'new ways of improving thinking skill' is a task that has mostly been saturated. The reasons people don't have great thinking skill might instead be that
1) Reality provides extremely sparse feedback on 'the quality of your/our thinking skills' so people don't see it as very important...
Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together.
I take it, then, that this effort is guided by an ideal that has already been outlined? Do you define "improving" in relation to, e.g., Bayesian reasoning?
The catch-22 I would expect with CFAR's efforts is that anyone buying their services is already demonstrating a willingness to actually improve his/her rationality/epistemology, and is looking for effective tools to do so.
The bottleneck, however, is probably not the unavailability of such tools, but rather the introspectivity (or lack thereof) that results in a desire to actually pursue change, rather than simply virtue-signal the typical "I always try to learn from my mistakes and improve my thinking".
The latter mindset is the one most urgently...
Do you believe that the Brier score is definitely the best way to model predictive accuracy, or do you just point to it because it's a good way to model predictive accuracy?
A bit about our last few months:
We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.
Also, we[1] believe such efforts are bottlenecked more by our collective epistemology, than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.
Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together. And to do this among the relatively small sets of people tackling existential risk.
OK, so I told you the other day that I find you a difficult person to have discussions with. I think I might find your comments less frustrating if you made an effort to think of things I would say in response to your points, and then wrote in anticipation of those things. If you're interested in trying this, I converted all my responses using rot13 so you can try to guess what they will be before reading them.
Oh yes. For example, Physical Review Letters is mostly interested in the former, while HuffPo -- in the latter.
UhssCb vf gelvat gb znkvzvmr nq erirahr ol jevgvat negvpyrf gung nccrny gb gur fbeg bs crbcyr jub pyvpx ba nqf. Gur rkvfgrapr bs pyvpxonvg gryyf hf onfvpnyyl abguvat nobhg ubj hfrshy vg jbhyq or sbe lbhe nirentr Yrff Jebatre gb fcraq zber gvzr trarengvat ulcbgurfrf. Vg'f na nethzrag ol nanybtl, naq gur nanybtl vf dhvgr ybbfr.
V jbhyq thrff Culfvpny Erivrj Yrggref cevbevgvmrf cncref gung unir vagrerfgvat naq abiry erfhygf bire cncref gung grfg naq pbasvez rkvfgvat gurbevrf va jnlf gung nera'g vagrerfgvat. Shegurezber, V fhfcrpg gung gur orfg culfvpvfgf gel gb qb erfrnepu gung'f vagrerfgvat, naq crre erivrj npgf nf n zber gehgu-sbphfrq svygre nsgrejneqf.
That's not true because you must also evaluate all these hypotheses and that's costly. For a trivial example, given a question X, would you find it easier to identify a correct hypothesis if I presented you with five candidates or with five million candidates?
Gur nafjre gb lbhe dhrfgvba vf gung V jbhyq cersre svir zvyyvba pnaqvqngrf. Vs svir ulcbgurfrf jrer nyy V unq gvzr gb rinyhngr, V pbhyq fvzcyl qvfpneq rirelguvat nsgre gur svefg svir.
Ohg ulcbgurfvf rinyhngvba unccraf va fgntrf. Gur vavgvny fgntr vf n onfvp cynhfvovyvgl purpx juvpu pna unccra va whfg n srj frpbaqf. Vs n ulcbgurfvf znxrf vg cnfg gung fgntr, lbh pna vairfg zber rssbeg va grfgvat vg. Jvgu n ynetre ahzore bs ulcbgurfrf, V pna or zber fryrpgvir nobhg juvpu barf tb gb gur evtbebhf grfgvat fgntr, naq erfgevpg vg gb ulcbgurfrf gung ner rvgure uvtuyl cynhfvoyr naq/be ulcbgurfvf gung jbhyq pnhfr zr gb hcqngr n ybg vs gurl jrer gehr.
Gurer frrzf gb or cerggl jvqrfcernq nterrzrag gung YJ ynpxf pbagrag. Jr qba'g frrz gb unir gur ceboyrz bs gbb znal vagrerfgvat ulcbgurfrf.
I would like to suggest attaching less self-worth and less status to ideas you throw out. Accept that it's fine that most of them will be shot down.
I don't like the kindergarten alternative: Oh, little Johnny said something stupid, like he usually does! He is such a creative child! Here is a gold star!
V pvgrq fbzrbar V pbafvqre na rkcreg ba gur gbcvp bs perngvivgl, Vfnnp Nfvzbi, ba gur fbeg bs raivebazrag gung ur guvaxf jbexf orfg sbe vg. Ner gurer ernfbaf jr fubhyq pbafvqre lbh zber xabjyrqtrnoyr guna Nfvzbi ba guvf gbcvp? (Qvq lbh gnxr gur gvzr gb ernq Nfvzbi'f rffnl?)
Urer'f nabgure rkcreg ba gur gbcvp bs perngvivgl: uggcf://ivzrb.pbz/89936101
V frr n ybg bs nterrzrag jvgu Nfvzbi urer. Lbhe xvaqretnegra nanybtl zvtug or zber ncg guna lbh ernyvmr--V guvax zbfg crbcyr ner ng gurve zbfg perngvir jura gurl ner srryvat cynlshy.
uggc://jjj.birepbzvatovnf.pbz/2016/11/zlcynl.ugzy
Lbh unir rvtugrra gubhfnaq xnezn ba Yrff Jebat. Naq lrg lbh unira'g fhozvggrq nalguvat ng nyy gb Qvfphffvba be Znva. Lbh'er abg gur bayl bar--gur infg znwbevgl bs Yrff Jebat hfref nibvq znxvat gbc-yriry fhozvffvbaf. Jul vf gung? Gurer vf jvqrfcernq nterrzrag gung YJ fhssref sebz n qrsvpvg bs pbagrag. V fhttrfg perngvat n srj gbc-yriry cbfgf lbhefrys orsber gnxvat lbhe bja bcvavba ba gurfr gbcvpf frevbhfyl.
I told you the other day that I find you a difficult person to have discussions with
Yes. This is unfortunate, but I cannot help you here.
if you made an effort to think of things I would say in response to your points, and then wrote in anticipation of those things
I think it's a bad idea. I can't anticipate your responses well enough (in other words, I don't have a good model of you) -- for example, I did not expect you to take five million candidate hypotheses. And if I want to have a conversation with myself, why, there is no reason to involve you ...