[ Question ]

Are there technical/object-level fields that make sense to recruit to LessWrong?

by Raemon · 15th Sep 2019 · 47 comments



LessWrong is about learning rationality, and applying rationality to interesting problems.

An issue is that solving interesting problems often requires fairly deep technical knowledge of a field. To use rationality to help solve problems (especially as a group), you need both people with probability/meta-cognition/other rationality skills, and people with the domain skills directly applicable to whatever problem is under discussion.

But if you show up on LW and post something technical (or even just "specialized") in a field that isn't already well represented on the forum, it'll be hard to have meaningful conversations about it.

Elsewhere on the internet there are probably forums focused on whatever-your-specialization is, but those places won't necessarily have people who know how to integrate evidence and think probabilistically in confusing domains.

So far the LW userbase has a cluster of skills related to AI alignment, some cognitive science, decision theory, etc. If a technical post isn't in one of those fields, you'll probably get better reception if it's somehow "generalist technical" (i.e. in some field that's relevant to a bunch of other fields), or if it somehow starts one inferential unit away from the overall LW userbase.

A plausibly good strategy is to recruit a number of people from a given field at once, to increase the surface area of "serious" conversations that can happen here.

It might make most sense to recruit from fields that are close enough to the existing vaguely-defined-LW memeplex that they can also get value from existing conversations here.

Anyone have ideas on where to do outreach in this vein? (Separately, perhaps: how to do outreach in this vein?) Or, alternatively, does anyone have a vague feeling-of-doom about this entire approach, and alternate suggestions or reasons not to try?


7 Answers

As I've been talking about on my shortform, I'd be excited about attracting more "programmer's programmers". AFAICT, a lot of LW users are programmers, but a large fraction of these users either are more interested in transitioning into theoretical alignment research or just don't really post about programming. As a small piece of evidence for this claim, I've been consistently surprised to see the relatively lukewarm reaction to Martin Sustrik's posts on LW. I read Sustrik's blog before he started posting here and consistently find his posts there and here pretty interesting (I am admittedly a bit biased because I was already impressed by Sustrik's work on ZeroMQ).

I think that's a bit of a shame because I personally have found LW-style thinking useful for programming. My debugging process has especially benefited from applying some combination of informal probabilistic reasoning and "making beliefs pay rent", which enabled me to make more principled decisions about which hypotheses to falsify first when finding root causes. For a longer example, see this blog post about reproducing a deep RL paper, which discusses how noticing confusion helped the author make progress (CFAR is specifically mentioned). LW-style thinking has also helped me stop obsessing over much of the debate around some of the more mindkiller-y topics in programming, like "should you always write tests first" and "are type-safe languages always better than dynamic ones". In my ideal world, LW-style thinking applied to fuzzier questions about programming would help us move past these "wrong questions".
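To make that concrete, here's a minimal sketch (in Python) of the kind of debugging triage I mean; the hypotheses, priors, and test costs below are made-up illustrations, not from any real debugging session:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    description: str
    prior: float         # subjective probability this is the root cause (illustrative)
    test_minutes: float  # rough cost of an experiment that could falsify it (illustrative)

def triage(hypotheses):
    """Order hypotheses by probability mass resolved per minute of testing,
    so cheap, likely explanations get falsified first."""
    return sorted(hypotheses, key=lambda h: h.prior / h.test_minutes, reverse=True)

candidates = [
    Hypothesis("stale build artifact", prior=0.30, test_minutes=2),
    Hypothesis("bad config value in staging", prior=0.35, test_minutes=5),
    Hypothesis("race condition in the worker pool", prior=0.25, test_minutes=60),
    Hypothesis("compiler bug", prior=0.10, test_minutes=240),
]

for h in triage(candidates):
    print(f"{h.prior / h.test_minutes:.3f}/min  {h.description}")
```

The point isn't the exact numbers; it's that writing the priors and costs down at all makes the choice of which experiment to run next fall out mechanically, instead of chasing whichever hypothesis feels most dramatic.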

Programming already has a few other internet loci such as Hacker News and lobste.rs, but I think those places have fewer "people who know how to integrate evidence and think probabilistically in confusing domains."

Assuming this seems appealing, one way to approach getting more people of the type I'm talking about would be to reach out to prominent bloggers who seem like they're already somewhat sympathetic to the LW memeplex and see if they'd be willing to cross-post their content. Examples of the sorts of people I'm thinking about include:

  • Hillel Wayne, who writes about empiricism in software engineering and formal methods.

  • Jimmy Koppel, who writes about insights for programming gleaned from his "day job" as a programming tools researcher (I think he has a LW account already).

  • Julia Evans, who writes about programming practice and questions she's interested in. A blog post of hers that seems especially LW-friendly is What does debugging a program look like?

Last, I do want to add a caveat to all this which I think applies to reaching out to basically any group: there's a big risk of culture clash/dilution if the outreach effort succeeds (see Geeks, MOPs, and sociopaths for one exploration of this topic). How to mitigate this is probably a separate question, but I did want to call it out in case it seems like I'm just recommending blindly trying to get more users.

In multiple LessWrong surveys, biorisk was ranked as a more probable existential risk than AGI. At the same time, there's very little written on LessWrong about biorisk. If we could recruit people into our community who could represent that topic well, I think it would be very valuable.

Minor conflict of interest disclaimer: I've recently become much more interested in computational biology and therefore have a personal interest in having more content related to biology in general on LW.

I'd be excited about having more representation on LessWrong from the experimental sciences, e.g. biology, chemistry, and certain areas of physics. I don't have a good sense of how many total LW users come from these fields, but it certainly doesn't seem like many prominent posters/commenters do. The closest thing to a prominent poster who talks about experimental science is Scott Alexander.

My sense from random conversations I've had over the years is that there's a lot of tacit but important knowledge about how to do experimental research and lab work well that isn't written down anywhere and could make for interesting complementary content to the wealth of content on LW about the connection between rationality and doing theory well. There's also an untapped treasure trove of stories about important discoveries in these areas that could make for good LW post series. I'd love to see someone take me through the history of Barbara McClintock's discoveries or the development of CRISPR from a rationalist perspective (i.e. what were the cognitive strategies that went along with discovering these things). There are books on discoveries like this of course, but there are also books on most of the material in the Sequences.

Having more LWers from experimental sciences could also provide a foundation for more detailed discussion of X-risks outside of transformative AI, bio-risks in particular.

In terms of attracting these sorts of people, one challenge is that younger researchers in these areas in particular tend to have long hours due to the demands of lab work and therefore may have less time to post on LW.

I would be interested in seeing more applied fields, and also specializations which operate at the intersection of multiple fields. Some examples include:

  • Operators, in the sense of people with executive responsibility. I have enjoyed reading the after-action reports from the organizing experiences and foundation-forming that have come out of this website and EA.
  • Finance, which is essentially the field of applied distribution of risk. We have finance people on here, but there seems to be little content in terms of top-level posts from them (the easiest way to tell there are finance people present is to look at the top-level finance posts and then look at the criticism in the comments).
  • Industrial or Systems Engineering, which are fields dedicated to integrating other fields and to the applied optimization of the system as a whole.

The adjacent memeplex of Effective Altruism seems to have a bunch of operations and finance people in it.

We might consider trying to target people who are connected to teaching or instruction in their area of expertise somehow. I expect the average level of engagement with a new subject is quite a bit deeper here than in most other communities, so we might be in a position to offer an audience of motivated learners as an enticement to them. Simultaneously, the instruction experience will help with the problem of technical posts having too high a threshold to engage with them.

I'd make an argument for 'soft sciences' and humanities. Philosophy, cultural anthropology, history, political science, sociology, literature, and maybe even gender studies. Computer science, mathematics, economics, and other STEM-heavy fields are already pretty well represented within the current LW community.

The focus on group rationality and developing a thriving community seems like it could benefit from the expertise these fields bring to the table. This might also reduce the amount of 'reinventing the wheel' that goes on (which I don't necessarily think is a bad thing but also consumes scarce cognitive resources).

Further, I think there's a case to be made that a lot of the goals of the rationalist movement could be furthered by strengthening connections to serious academic fields that are less likely to come into memetic contact with rationalist ideas. If nothing else, it would probably help raise the sanity waterline.

Aside from what's already here, I can think of a few "character profiles" of fields that would benefit from LessWrong infrastructure:

  • Hard fields that are in decent epistemic health but could benefit from outsiders and cross pollination with our memeplex (e.g. economics).
  • Object level things where outside experts can perform the skill but the current epistemological foundations are so shaky that procedural instructions work poorly (e.g. home cooking).
  • Things that are very useful where good information exists but finding it requires navigating a lemon market (e.g. personal finance).
  • Fields that have come up regularly as inputs into grand innovations that required knowledge from multiple areas (e.g. anything Elon needed to start his companies).

I don't think the bottleneck is lack of recruitment, though; the problem is that content has no place to go. As you rightly point out, things that aren't interesting to the general LW audience get crickets. I have unusual things I really want to show on LessWrong that are on their 5th rewrite, because I have to cross so many inferential gaps and somehow make stuff LW doesn't care about appealing enough to stay on the front page.


Finance. Trading specifically.