Forum participation as a research strategy

Previously: Online discussion is better than pre-publication peer review, Disincentives for participating on LW/AF

Recently I've noticed a cognitive dissonance in myself. On one hand, I can see that my best ideas have come from participating on various mailing lists and forums (such as cypherpunks, extropians, SL4, everything-list, LessWrong and the AI Alignment Forum), and I've received a certain amount of recognition as a result. On the other hand, when someone asks me what I actually do as an "independent researcher", I'm embarrassed to say that I mostly comment on other people's posts, participate in online discussions, and occasionally write down a new idea that pops into my head as a blog/forum post of my own. I guess that's because I imagine it doesn't fit most people's image of what a researcher's work consists of.

Once I noticed this, the tension was easy to resolve: in this post I'm going to proclaim/endorse forum participation (aka commenting) as a productive research strategy that I've managed to stumble upon, and recommend that others at least try it. Note that this is different from saying that forum/blog posts are a good way for a research community to communicate. It's about individually doing better as researchers.

Benefits of Forum Participation (FP)

FP takes little effort / will power

In other words, it feels more like play than work, which means I rarely have issues with not wanting to do something that I think is important to do (i.e., akrasia), the only exception being that writing posts seems to take more effort, so occasionally I spend my time writing comments when I perhaps should write posts instead. (This is the part of this post that I think may be least likely to generalize to other people. It could be that I'm an extreme outlier in finding FP so low-effort. However, it might also be the case that writing comments becomes low-effort for most people once they've had enough practice.)

FP is a good way to notice missing background knowledge and provides incentives to learn missing knowledge

If you read a post with the intention to question or comment on it, it's pretty easy to notice that it assumes some background knowledge that you lack. The desire to not ask a "stupid" question or make a "stupid" comment provides a powerful incentive to learn the missing knowledge.

FP is a good way to stay up to date on everyone else's latest research

It's often a good idea to stay up to date on other people's research, but sometimes one isn't highly motivated to do so. FP seems to make that easier. For example, I wasn't following Stuart's research on counterfactual oracles until the recent contest drew my attention and made me want to participate, and I ended up reading the latest posts on CO in order to understand the current state of the art on that topic, which turned out to be pretty interesting.

Arguments that are generated in reaction to some specific post or discussion can be of general value

It's not infrequent that I come up with an argument in response to some post or discussion thread, and later expand or follow up on that argument in a post of its own, because it seems to apply more generally than just to that post/discussion. Here is one such example.

FP generates new ideas via cross-fertilization

FP incentivizes one to think deeply about many threads of research, and often (at least for me) an idea pops into my head that seems to combine various partial ideas floating in the ether into a coherent or semi-coherent whole (e.g., UDT), or is the result of applying or analogizing someone else's latest idea to a different topic (e.g., "human safety problem", "philosophy as high complexity class").

FP helps prepare for efficiently communicating new ideas

FP is a good way to build models of other people's epistemic states, and also a good way to practice communicating with fellow researchers, both of which are good preparation for efficiently communicating one's own new ideas.

My Recommendations

Comment more

To obtain the above benefits, one just has to write more comments. It may be necessary to first overcome disincentives to participate. If you can't, please speak up and maybe the forum admins will do something to help address whatever obstacle you're having trouble with.

Practice makes better

If it seems hard to write good comments, practice might make it easier eventually.

Think of FP as something to do for yourself

Some people might think of commenting as primarily providing a service to other researchers or to the research community. I suggest also thinking of it as providing a benefit to yourself (for the above reasons).

Encourage and support researchers who adopt FP as their primary research strategy

I'm not aware of any organizations that explicitly encourage and support researchers to spend most or much of their time commenting on forum posts. But perhaps they should, if it actually is (or has the potential to be) a productive research strategy? For example this could be done by providing financial support and/or status rewards for effective forum participation.


I broadly agree, but there are good reasons for more traditional research as well:

  • In many research areas, ideas are common, and it isn't clear which ideas are most important. The most useful contributions come from someone taking an idea and demonstrating that it is viable and important, which often requires a lot of solitary work that can't be done in the typical amount of time it takes to write a comment or post.
  • FP often leads to long, winding discussions that may end with two researchers agreeing, but the resulting transcript is not great for future readers. In contrast, traditional research produces more crisp distillations of an idea that are useful for communicating with an entire field. (I anticipate people saying that academic papers are incomprehensible. They are incomprehensible to outsiders, but often are relatively easy to read for people in the field. I typically find academic papers in my area to be significantly better at telling me what I want to know than blog posts, though the ideal combination is to first read the blog post and then follow it up with the paper.)

Nevertheless, I do think that FP is a better strategy for intellectual progress in AI alignment, which at least currently feels more ideas-bottlenecked than paper-bottlenecked.

In many research areas, ideas are common, and it isn’t clear which ideas are most important. The most useful contributions come from someone taking an idea and demonstrating that it is viable and important, which often requires a lot of solitary work that can’t be done in the typical amount of time it takes to write a comment or post.

Agreed. My recommendations aren't meant to be universally applicable. (ETA: Also, one could marginally increase one's forum participation in order to capture some of the benefits, and not necessarily go all the way to adopting it as one's primary research strategy.)

FP often leads to long, winding discussions that may end with two researchers agreeing, but the resulting transcript is not great for future readers.

There's nothing that explicitly prevents people from distilling such discussions into subsequent posts or papers. If people aren't doing that, or are doing that less than they should, that could potentially be solved as a problem that's separate from "should more people be doing FP or traditional research?"

Also, it's not clear to me that traditional research produces more clear distillations of how disagreements get resolved. It seems like most such discussions don't make for publishable papers and therefore most disagreements between "traditional researchers" just don't get resolved in a way that leaves a public record (or at all).

There's nothing that explicitly prevents people from distilling such discussions into subsequent posts or papers. If people aren't doing that, or are doing that less than they should, that could potentially be solved as a problem that's separate from "should more people be doing FP or traditional research?"

FYI this is something the LW team thinks about a bunch and I expect us to have made some serious effort towards incentivizing and simplifying this process in the coming year.

Just got a calendar reminder to check if this happened - my impression is that any such efforts haven't really materialised on the site.

This is actually a major motivation for the wiki/tagging system we are building. Also, you might have noticed all the edited transcripts we've been publishing, and the debates we've started organizing, which are also part of this. I've experimented a lot over the last year with UI for directly distilling comment threads, but all of my attempts ended up too clunky and messy to ever make me excited about them. I still have some things I might want to give a shot, but overall I am currently thinking of tackling this problem in a slightly more indirect way.

Hm, I perceived Raemon to be referring more specifically to turning forum discussions into posts, or otherwise tidying them up. I think that's importantly different from transcribing a talk (since a talk isn't a discussion), or a debate (since you only have a short period of time to think about your response to the other person). I guess it's possible that the tagging system helps with this, but it's not obvious to me how it would. That being said, I do agree that more broadly LW has moved towards more synthesis and intertemporal discussions.

I'd add "The LessWrong 2018 Review" to the list of things that are "sort of exploring the same direction". I agree my particular prediction about mechanical tools for distilling comments didn't materialize, but we did definitely allocate tons of effort towards distillation as a whole.

Yeah, and I experimented a bunch with that (directly turning forum discussions into posts) and mostly felt like it didn't really work that well. I mostly updated that there needs to be a larger synthesis step, though I still have some guesses for more direct things that could work. Ben spent some hours distilling the discussion and comments on a bunch of posts, which we should get around to posting (I just realized we never published them). 

Re tagging: In general the tagging system that we are building has a lot in common with being a wiki (collaboratively editable descriptions, providing canonical definitions and references, and providing good summaries of existing content), and I expect it to grow into being more of a wiki over time (the tagging use-case was a specific narrow use-case that seemed easy to get traction on, but the mid-term goal is to do a lot more wiki-like stuff). And I think from that perspective it's more clear how it helps with distillation.

I went and published such a distillation (which attempts to summarise the post What Failure Looks Like and distill its comments).

There's nothing that explicitly prevents people from distilling such discussions into subsequent posts or papers. If people aren't doing that, or are doing that less than they should, that could potentially be solved as a problem that's separate from "should more people be doing FP or traditional research?"

  1. Doing these kinds of summaries feels like a good place to start out if you are new to doing FP. It is a fairly straightforward task, but provides a lot of value, and helps you grow skills and reputation that will help you when you do more independent work later.
  2. It might be useful for more experienced researchers/posters to explicitly point out when they are leaving this kind of value on the table. ("This was an interesting conversation, it contains a few valuable insights, and if I didn't have more pressing things to work on, I would have liked to distill it to make it more clear. If someone feels like doing that, I will happily comment on the draft and signal boost the post.")

There's nothing that explicitly prevents people from distilling such discussions into subsequent posts or papers. If people aren't doing that, or are doing that less than they should, that could potentially be solved as a problem that's separate from "should more people be doing FP or traditional research?"

Agreed. I'm mostly saying that empirically people don't do that, but yes there could be other solutions to the problem, it need not be inherent to FP.

Also, it's not clear to me that traditional research produces more clear distillations of how disagreements get resolved.

I agree you don't see how the disagreement gets resolved, but you usually can see the answer to the question that prompted the disagreement, because the resolution itself can be turned into a paper. This is assuming that the resolution came via new evidence. I agree that if a disagreement is resolved via simply talking through the arguments, then it doesn't turn into a paper, but this seems pretty rare (at least in CS).

In many research areas, ideas are common, and it isn't clear which ideas are most important. The most useful contributions come from someone taking an idea and demonstrating that it is viable and important, which often requires a lot of solitary work that can't be done in the typical amount of time it takes to write a comment or post.

Interesting. Definitely not an expert here, but I could imagine FP being a good tool in this case... if the forum is an efficient "marketplace of ideas", where perspectives compete and poke holes in each other and adapt to critics, and the strongest perspectives emerge victorious, then this seems like it could be a good way to figure out which ideas are the best? Some say AI alignment is like software security, and there's that saying "given enough eyeballs, all bugs are shallow". If security flaws tend to be a result of software designers relying on faulty abstractions or otherwise falling prey to blind spots, then I would expect that withstanding a bunch of critics, each critic using their own set of abstractions, is a stronger indicator of quality than anything one person is able to do in solitude.

(It's possible that you're using "important" in a way that's different than how I used it in the preceding paragraph.)

FP works well when it is easy to make progress on ideas / questions through armchair reasoning, which you can think of as using the information or evidence you have more efficiently. However, it is often the case that you could get a lot more high-quality evidence that basically settles the question, if you put in many hours of work. As an illustrative example, consider trying to estimate the population of a city via FP, vs. going out and doing a census. In ML, you could say "we should add such-and-such inductive bias to our models so that they learn faster", and we can debate how much that would help, but if you actually build it in and train the model and see what happens, you just know the answer now.

Hm, you think data soundly beats theory in ML? Why is HARKing a problem then?

HARKing does the right two steps in the wrong order -- it first gets the data, and then makes the hypothesis. This is fine for generating hypotheses, but isn't great for telling whether a hypothesis is true or not, because there are likely many hypotheses that explain the data and it's not clear why the one you chose should be the right one. It's much stronger evidence if you first have a hypothesis, and then design a test for it, because then there is only one result out of many possible results that confirms your hypothesis. (This is oversimplified, but captures the broad point.)
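To make this concrete, here is a minimal Monte Carlo sketch of the asymmetry (my construction, not anything from the thread; M, N, the trial count, and the 30% prior hit rate are all illustrative assumptions). There are M candidate hypotheses, one of which is true, and each experimental result is consistent with N of them:

```python
# Hedged sketch: M candidate hypotheses, one true; each result is
# consistent with the true hypothesis plus N-1 others chosen at random.
import random

M, N, TRIALS = 1000, 10, 10_000
random.seed(0)

hark_hits = prereg_hits = prereg_runs = 0
for _ in range(TRIALS):
    true_h = random.randrange(M)
    # The observed result is consistent with the true hypothesis and N-1 others.
    others = random.sample([h for h in range(M) if h != true_h], N - 1)
    consistent = set(others) | {true_h}

    # HARKing: look at the data first, then pick any consistent hypothesis.
    hark_hits += random.choice(sorted(consistent)) == true_h

    # Pre-registration with an informative prior (assumed: the researcher's
    # prior picks the true hypothesis 30% of the time); count only the runs
    # where the data then confirmed the registered hypothesis.
    registered = true_h if random.random() < 0.3 else random.randrange(M)
    if registered in consistent:
        prereg_runs += 1
        prereg_hits += registered == true_h

print(f"HARKed hypothesis true:          {hark_hits / TRIALS:.2f}")        # ~1/N = 0.10
print(f"Pre-registered + confirmed true: {prereg_hits / prereg_runs:.2f}")  # ~0.98
```

Under these assumptions, a HARKed hypothesis is right about 1/N of the time, while a pre-registered hypothesis that survives the test is right almost always, because a randomly chosen hypothesis rarely survives.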

I wouldn't say "data beats theory", I think theory (in the sense of "some way of predicting which ideas will be good", not necessarily math) is needed in order to figure out which ideas to bother testing in the first place. But if you are evaluating on "what gives me confidence that <hypothesis> is true", it's usually going to be data. Theorems could do it, but it seems pretty rare that there are theorems for actually interesting hypotheses.

I actually think there is an interesting philosophical puzzle around this that has not fully been solved...

If I show you the code I'm going to use to run my experiment, can you be confident in guessing which hypothesis I aim to test?

  • If yes, then HARKing should be easily detectable. By looking at my code, it should be clear that the hypothesis I am actually testing is not the one that I published.

  • If no, then the resulting data could be used to prove multiple different hypotheses, and thus doesn't necessarily constitute stronger evidence for any one of the particular hypotheses it could be used to prove (e.g. the hypothesis I preregistered).

To put it another way, in your first paragraph you say "there are likely many hypotheses that explain the data", but in the second paragraph, you talk as though there's a particular set of data such that if we get that data, we know there's only one hypothesis which it can be used to support! What gives?

My solution to the puzzle: Pre-registration works because it forces researchers to be honest about their prior knowledge. Basically, prior knowledge unencumbered by hindsight bias ("armchair reasoning") is underrated. Any hypothesis which has only the support of armchair reasoning or data from a single experiment is suspect. You really want both.

In Bayesian terms, you have to look at both the prior and the likelihood. Order shouldn't matter (multiplication is commutative), but as I said--hindsight bias.

Curious to hear your thoughts.

[There are also cases where given the data, there's only one plausible hypothesis which could possibly explain it. A well-designed experiment will hopefully produce data like this, but I think it's a bit orthogonal to the HARKing issue, because we can imagine scenarios where post hoc data analysis suggests there is only one plausible hypothesis for what's going on... although we should still be suspicious in that case because (presumably) we didn't have prior beliefs indicating this hypothesis was likely to be true. Note that in both cases we are bottlenecked on the creativity of the experiment designer/data analyst in thinking up alternative hypotheses.]

[BTW, I think "armchair reasoning" might have the same referent as phrases with a more positive connotation: "deconfusion work" or "research distillation".]

My solution to the puzzle is a bit different (but maybe the same?) Let's suppose that there's an experiment we could run that would come out with some result $R$. Each potential value of $R$ is consistent with $N$ hypotheses. There are $M \gg N$ potential hypotheses.

Suppose Alice runs the experiment and then chooses a hypothesis to explain it. This is consistent with Alice having a uniform prior, in which case she has a $1/N$ chance of having settled on the true hypothesis. (Why not just list all $N$ hypotheses? Because Alice didn't think of all of them -- it's hard to search the entire space of hypotheses.)

On the other hand, if Bob chose a hypothesis to test via his priors, ran the experiment, and then $R$ was consistent with that hypothesis... you should infer that Bob's priors were really good (i.e. not uniform) and the hypothesis is correct. After all, if Bob's hypothesis was chosen at random, he only had an $N/M$ chance of getting an $R$ that was consistent with it.

Put another way: When I see the first scenario, I expect that the evidence gathered from the experiment is primarily serving to locate the hypothesis at all. When I see the second scenario, I expect that Bob has already successfully located the hypothesis before the experiment, and the experiment provides the last little bit of evidence needed to confirm it.
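Here is the same point as a quick Bayes computation (my derivation under the model above; $q$ is an assumed parameter for how good Bob's prior is):

```latex
% Alice conditions on R and picks one of the N consistent hypotheses:
P(\text{Alice right} \mid R) = \frac{1}{N}
% Bob registers first. Let q = P(Bob's prior picks the true hypothesis).
% A false pick is consistent with the realized R w.p. (N-1)/(M-1), so:
P(\text{Bob right} \mid \text{his pick consistent with } R)
  = \frac{q}{q + (1-q)\,\frac{N-1}{M-1}}
% which approaches 1 whenever q >> N/M, i.e. whenever Bob's prior beats
% a uniform draw over the M hypotheses by a wide margin.
```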

Related: Privileging the hypothesis

If I show you the code I'm going to use to run my experiment, can you be confident in guessing which hypothesis I aim to test?

Under this model, I can't be confident in guessing which hypothesis you are trying to test.

My solution to the puzzle: Pre-registration works because it forces researchers to be honest about their prior knowledge. Basically, prior knowledge unencumbered by hindsight bias ("armchair reasoning") is underrated.

It's possible that Alice herself believes that the hypothesis she settled on was correct, rather than assigning it a $1/N$ probability. If that were the case, I would say it was due to hindsight bias.

Any hypothesis which has only the support of armchair reasoning or data from a single experiment is suspect.

Yeah, I broadly agree with this.

[BTW, I think "armchair reasoning" might have the same referent as phrases with a more positive connotation: "deconfusion work" or "research distillation".]

I definitely do not mean research distillation. Deconfusion work feels like a separate thing, which is usually a particular example of armchair reasoning. By armchair reasoning, I mean any sort of reasoning that can be done by just thinking without gathering more data. So for example, solving a thorny algorithms question would involve armchair reasoning.

I don't mean to include the negative connotations of "armchair reasoning", but I don't know another short phrase that means the same thing.

Interesting. I think you're probably right that our model should have a parameter for "researcher quality", and if a researcher is able to correctly predict the outcome of an experiment, that should cause an update in the direction of that researcher being more knowledgeable (and their prior judgements should therefore carry more weight--including for this particular experiment!)

But the story you're telling doesn't seem entirely compatible with your comment earlier in this thread. Earlier you wrote: "However, it is often the case that you could get a lot more high-quality evidence that basically settles the question, if you put in many hours of work." But in this recent comment you wrote: "the experiment provides the last little bit of evidence needed to confirm [the hypothesis]". In the earlier comment, it sounds like you're talking about a scenario where most of the evidence comes in the form of data; in the later comment, it sounds like you're talking about a scenario where most of the evidence was necessary "just to think of the correct answer - to promote it to your attention" and the experiment only provides "the last little bit" of evidence.

So I think the philosophical puzzle is still unsolved. A few more things to ponder if someone wants to work on solving it:

  • If Bob is known to be an excellent researcher, can we trust HARKing if it comes from him? Does the mechanism by which hindsight bias works matter? (Here is one possible mechanism.)

  • In your simplified model above, there's no possibility of a result that is "just noise" and not explained by any particular hypothesis. But noise appears to be a pretty big problem (see: the replication crisis). In current scientific practice, the probability that a result could have been obtained through noise is a number of great interest that's almost always calculated (the p-value). How should this number be factored in, if at all?

    • Note that p-values can be used in Bayesian calculations. For example, in a simplified universe where either the null is true or the alternative is true, p(alternative|data) = p(data|alternative)p(alternative) / (p(data|alternative)p(alternative) + p(data|null)p(null)). (A small numeric sketch of this calculation follows this list.)

    • My solution was focused on a scenario where we're considering relatively obvious hypotheses and subject to lots of measurement noise, but you convinced me this is inadequate in general.

  • I'm unsatisfied with the discussion around "Alice didn't think of all of them". I know nothing about relativity, but I imagine a big part of Einstein's contribution was his discovery of a relatively simple hypothesis which explained all the data available to him. (By "relatively simple", I mean a hypothesis that didn't have hundreds of free parameters.) Presumably, Einstein had access to the same data as other contemporary physicists, so it feels weird to explain his contribution in terms of having access to more evidence.

    • In other words, it feels like the task of searching hypothesis space should be factored out from the task of Bayesian updating. This seems closely related to puzzles around "realizability"--through your search of hypothesis space, you're essentially "realizing" a particular hypothesis on the fly, which isn't how Bayesian updating is formally supposed to work. (But it is how deep learning works, for example.)
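Regarding the sub-bullet on p-values above, here is the promised numeric sketch (mine; it treats the p-value as P(data | null), exactly as the simplified formula does, and the prior and power are made-up numbers):

```python
# Posterior on the alternative after a "significant" result, in the
# simplified two-hypothesis universe from the list above.
def posterior_alternative(p_value, power, prior_alt):
    """P(alternative | data), treating p_value as P(data | null)
    and power as P(data | alternative). All inputs are assumptions."""
    num = power * prior_alt
    return num / (num + p_value * (1 - prior_alt))

# p = 0.05 with 80% power and a 10% prior on the alternative:
print(posterior_alternative(0.05, 0.80, 0.10))  # ~0.64, far from certainty
```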

But the story you're telling doesn't seem entirely compatible with your comment earlier in this thread.

The earlier comment was comparing experiments to "armchair reasoning", while the later comment was comparing experiments to "all prior knowledge". I think the typical case is:

Amount of evidence in "all prior knowledge" >> Amount of evidence in an experiment >> Amount of evidence from "armchair reasoning".

If Bob is known to be an excellent researcher, can we trust HARKing if it comes from him?

I would pay a little more attention, but not that much more, and would want an experimental confirmation anyway. It seems to me that the world is complex enough, and humans model it badly enough (for the sorts of things academia is looking at), that past evidence of good priors on one question doesn't imply good priors on a different question.

(This is an empirical belief; I'm not confident in it.)

In your simplified model above, there's no possibility of a result that is "just noise" and not explained by any particular hypothesis.

I expect that if you made a more complicated model where each hypothesis $H$ had a likelihood $P(R \mid H)$, and $P(R \mid H)$ was high for $N$ hypotheses and low for the rest, you'd get a similar conclusion, while accounting for results that are just noise.

I know nothing about relativity, but I imagine a big part of Einstein's contribution was his discovery of a relatively simple hypothesis which explained all the data available to him.

I agree that relativity is an example that doesn't fit my story, where most of the work was in coming up with the hypothesis. (Though I suspect you could argue that relativity shouldn't have been believed before experimental confirmation.) I claim that it is the exception, not the rule.

Also, I do think it is often a valuable contribution to even think of a plausible hypothesis that fits the data, even if you should assign it a relatively low probability of being true. I'm just saying that if you want to reach the truth, this work must be supplemented by experiments / gathering good data.

In other words, it feels like the task of searching hypothesis space should be factored out from the task of Bayesian updating.

Bayesian updating does not work well when you don't have the full hypothesis space. Given that you know that you don't have the full hypothesis space, you should not be trying to approximate Bayesian updating over the hypothesis space you do have.

Bayesian updating does not work well when you don't have the full hypothesis space.

Do you have any links related to this? Technically speaking, the right hypothesis is almost never in our hypothesis space ("All models are wrong, but some are useful"). But even if there's no "useful" model in your hypothesis space, it seems Bayesian updating fails gracefully if you have a reasonably wide prior distribution for your noise parameters as well (then the model fitting process will conclude that the value of your noise parameter must be high).

Do you have any links related to this?

No, I haven't read much about Bayesian updating. But I can give an example.

Consider the following game. I choose a coin. Then, we play N rounds. In each round, you make a bet on whether the coin will come up Heads or Tails, at 1:2 odds, which I must take (i.e., if you're right I give you $2 and if I'm right you give me $1). Then I flip the coin and the bet resolves.

If your hypothesis space is "the coin has some bias b of coming up Heads or Tails", then you will eagerly accept this game for large enough N -- you will quickly learn the bias b from experiments, and then you can keep getting money in expectation.

However, if it turns out I am capable of making the coin come up Heads or Tails as I choose, then I will win every round. If you keep doing Bayesian updating on your misspecified hypothesis space, you'll keep flip-flopping on whether the bias is towards Heads or Tails, and you will quickly converge to near-certainty that the bias is 50% (since the pattern will be HTHTHTHT...), and yet I will be taking a dollar from you every round. Even if you have the option of quitting, you will never exercise it because you keep thinking that the EV of the next round is positive.

Noise parameters can help (though the bias b is kind of like a noise parameter here, and it didn't help). I don't know of a general way to use noise parameters to avoid issues like this.
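Here is a minimal simulation of this game (my sketch; the Beta(1,1) conjugate updating and the 100-round horizon are implementation choices). It shows both failures at once: the posterior over the bias pins to 0.5 while the bankroll drains:

```python
# Bettor does Bayesian updating over the misspecified space "fixed bias b";
# the adversary controls the coin and makes it land against each bet.
heads = tails = 0
bankroll = 0.0
for _ in range(100):
    # Beta(1,1) prior => posterior predictive P(next flip is heads):
    p_heads = (heads + 1) / (heads + tails + 2)
    bet_heads = p_heads >= 0.5
    # At 1:2 odds the bet looks positive-EV whenever max(p, 1-p) > 1/3,
    # which always holds, so the bettor never quits.
    flip_heads = not bet_heads            # adversary picks the opposite
    bankroll += 2.0 if bet_heads == flip_heads else -1.0
    heads += flip_heads
    tails += not flip_heads

print(f"Estimated P(heads): {(heads + 1) / (heads + tails + 2):.3f}")  # -> ~0.5
print(f"Bankroll: {bankroll:+.0f}")                                    # -100
```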

Thanks for the example!


John Maxwell and I went back and forth on this question a bunch. I was initially on the con side and updated in the direction that increasing lurker ratios and common knowledge generation are big enough considerations that forum participation is probably pretty good/helpful. He was initially on the pro side and I think updated in the direction that, without an anchor of scholarship (textbooks, research review, and structured notes) and longer/more in-depth conversations, online activity can acquire a veneer of productivity that is harmful to both real productivity and mental health.

Can you summarize the evidence/arguments that were brought up during your discussions?

Maybe I remember the conversation differently than Romeo, because I remember being on the pro side.

Participation inequality is a thing. Here is one estimate for LW. Here is a thread from 2010 where Kevin asked people to delurk which has over 600 comments in it. Anecdotally, I'm no longer surprised by experiences like:

  • Friend I didn't know very well recently came to visit. Says my ideas about AI are interesting. Friend is just learning to program. I didn't even know they read LW, much less that they knew anything about my AI ideas.

  • Write a comment on the EA Forum. Go to an event that evening. I talk to someone and they're like "oh I saw your comment on the EA Forum and it was good".

  • I visited Europe and met an EA from the Czech Republic. He says: "I've probably read more words by you than by William MacAskill because you post to the EA Forum so much."

My impression is this is a contrast to academia, where virtually no one reads most academic publications. I suspect acquiring an online following allows for greater total influence than ascending the academic ladder, though it's probably a different kind of influence (people are less likely to cite your work years later? In the spirit of that, here is a LW thread from years ago on papers vs forums.)

However:

  • With great power comes great responsibility. If you are speaking to a big audience, spreading bad info is harmful. And debunking bad info is valuable.

  • I agree that the "anchor of scholarship" is valuable. I view FP as a guilty pleasure/structured procrastination. I don't necessarily endorse it as the highest value activity, but if I'm going to be goofing off anyway it's a relatively useful way to goof off, and sometimes it feels like my contributions are pretty valuable/I get valuable ideas from others. (I suspect the best forum writing is not only academically rigorous and innovative, but also entertaining to read, with clickbaity titles and interesting anecdotes and such, so you can capture the cognitive surplus of forum users as effectively as possible. Additionally, the best questions to ask forums are probably those that benefit from crowdsourcing/multiple perspectives/creativity/etc.)

Seems like the ideal is a balance between FP and scholarship. If you're all FP, you're likely just a clueless person spreading cluelessness. If you're all scholarship, others don't get to share in your wisdom much.

Right, I'm saying I started off con, you started off pro, and both of us nudged towards the other along the dimensions mentioned. I probably updated more than you.

Which is 100% how I initially read your comment, but if you look at your comment it actually says you were both con :-)

This was more than 2 years ago so unfortunately I don't recall.

This is one of those "so obvious when someone says it" things that are not at all obvious until someone says them. Well done!

I would note that many of these factors apply as benefits of office-chat participation (OP) as well. The main benefit of FP absent from OP, I suppose, is preparing you for efficient written communication, but the rest seem to feature in both. The fact that their benefits overlap explains why remote researchers benefit so much more than others from FP.

What about "FP is a good way to stay up to date on everyone else’s latest research" and "FP generates new ideas via cross-fertilization"? It seems like FP allows someone to follow, participate in, and cross-fertilize among many more lines of research than OP. (I should clarify that I don't think everyone should do FP all the time. There are pros and cons to written vs verbal discussions so someone already in an office environment might be best served to do some of each, or some people within a research institute can do some of each in order to better spread and cross-fertilize ideas.)

I agree that some people can benefit from doing both, although getting everyone online is a hard collective action problem. I just claim that many researchers will be satisfied with OP. At MIRI/FHI/OpenAI there are ~30-150 researchers, who think about a wide range of areas, which seems broadly comparable to the researchers among LessWrong/AF's active users (depending on your definition of 'researcher', or 'active'). Idea-exchange is extended by workshops and people moving jobs. Many in such a work environment will find that FP has an unacceptably low signal-to-noise ratio and will inevitably avoid FP...

Many in such a work environment will find that FP has an unacceptably low signal-to-noise ratio and will inevitably avoid FP...

I think FP has a better signal-cost ratio than workshops I've been to, in part because people tend to be more willing to talk about half-baked ideas in private, and in part because if I see some content online that I'm not interested in, I can quickly skip over it, while directly signaling disinterest to someone IRL is liable to hurt their feelings and incur a social cost to myself.

(I still try to attend workshops once in a while, in part to physically meet people, in part to talk to people who rarely participate online, and in part to get people's private opinions that they don't share online.)

I do think there are other powerful disincentives for FP though, and agree that it's kind of an uphill battle to get more people online.

This feels like a key piece of "how to build a modern institution of intellectual progress."

I agree with this. Recently I started blogging about ML and (in future posts) AI safety. I intended this to primarily be a learning experience. I found teaching myself material without the aid of a public forum to be pretty boring; this way I feel much more engaged. It also adds an adversarial aspect, since I am forced to perform a mental check of "does what I wrote actually make sense" lest someone correct me. I hypothesize that this helps destroy a lot of beginner errors, and strengthens my ability to communicate in the process. Also, writing is just a lot of fun.

I actually found it pretty funny that you posted this the day I started blogging.

Completely agree.

This is an excellent post; I wonder if I should comment more on posts instead of being just a lurker. Since that is still somewhat undecided, at the very least this post helps me want to close that gap and "learn missing knowledge".

This has similarly been my approach. As best I can tell, writing papers for academic publication is nice but, especially in the AI safety space, not really the best way to convey and discuss ideas. Much more important seems to be being part of the conversation about technical ideas, learning from it, and adding to it so others can do the same. I put some small amount of effort into things outside FP, mostly because I believe it's a good idea for reputation effects and for spreading ideas outside the forum bubble, not because I think it's the best way to make intellectual progress.

It's also nice because the feedback loops are shorter. I can comment on a post or write my own, have a discussion, and then within weeks see the ripples of that discussion influencing other discussions. It helps me feel the impact I'm having, and motivates me to keep going.

Probably the only thing superior in my mind is doing practical work, e.g. building systems that test out ideas. Unfortunately many of the ideas we talk about in safety are currently ahead of the tech, so we don't know how to build things yet (and for safety's sake I think it's fine not to push on that too hard, since I expect it will come on its own anyway). So until we are closer to AGI, forum participation is likely one of the highest-impact activities one can engage in (I'm similarly positive about the face-to-face equivalent: talking at conferences and having conversations with interested folks).

I think that, among those who've done serious thinking about how intellectual progress happens, it was pretty well known that in some domains a lot of research happens on forums, and that forum participation as a research strategy can work. But in the broader world, most people treat forums as more like social spaces, and have a model of how research works that puts it in distant, inaccessible institutional settings. Many people think research means papers in prestigious journals, with no model of where those papers come from. I think it's worth making it common knowledge that getting involved in research can be as simple as tweaking your forum subscriptions.

I have the same feeling that commenting on posts takes very little will power but on the other hand writing ideas I care about into their own posts takes a lot of will power. I don't think that you are a unique case, and would expect that most people feel that commenting is much easier.

Long-time listener, first-time caller here! I think this is an interesting viewpoint, and I wonder how you decide if you're making forward progress. With standard publications, media presentations, or whatever, you can identify that you've contributed Idea X to Field Y when you're published, but it seems harder to know that for yourself when the main contribution is forum posts. I'm interested in your views on this. 

On a forum you can judge other people's opinions of your contributions by the karma (or the equivalent) of your posts, and by their comments. Of course there's a risk that people on some forum liking your posts might represent groupthink instead of genuine intellectual progress, but the same risk exists with academic peer review, and one simply has to keep this risk/uncertainty in mind.

I don't have that much experience with forums - when I was in research I learned mostly from reading scientific papers + googling stuff to understand them. But I definitely agree that being more active and engaged in the discussion is helpful.

Aside from the topic of research, I used to be very passive on my social nets and basically just consumed content created by others. But after I became more active I feel like I am getting more value out of it and at the same time spend less time there, as formulating questions or ideas takes effort. So it's a natural constraint.

I definitely agree! Historically, lots of scientists have made their best contributions in letters or in commentary. I'm trying to comment and reach out more too. :)