Context

1. This is the second in a series of internal LessWrong 2.0 team documents we are sharing publicly (with minimal editing) in an effort to help keep the community up to date with what we're thinking about and working on.

I suggest you first read this other document for context.

2. Caveat! This is an internal document and does not represent any team consensus or conclusions; it was written by me (Ruby) alone and expresses my own in-progress understanding and reasoning. To the extent that the models/arguments of the other team members are included here, they've been filtered through me and aren't necessarily captured with high fidelity or strong endorsement. Since it was written on March 18th, it isn't even up to date with my own thinking.

Epistemic status: Since the 18th, when I first wrote this, I have made many new lists and gathered a lot more information. Yet this one still serves as a good introduction to the questions to be asked about Q&A and what it can and should be.

Originally written March 18, 2019

Related: Q&A Review + Case for a Marketplace

  • HIGH LEVEL
    • Is it actually the case that Q&A for serious research is this big, new, different thing which requires a big shift for people? Maybe it's not such an adjustment?
    • How willing are people to do serious research work for others on the internet?
  • RESEARCH PROCESS (and suitability for collaboration) <tease these out by talking through their recent research>
    • Can "significant research" be partitioned into discrete questions?
      • Or is it more that there is a deeper, bigger question around which someone needs to become an expert, and that any question posed is downstream of the real question and can't be treated in isolation?
      • Perhaps talk to the Ought folk about this.
    • Do people have general open research questions they vaguely want answered and are willing to have sit unanswered for a relatively long period of time?
      • Or do they mainly have (and prioritize) research questions which are currently part of their work?
    • How much interaction between the research requester and research contributor is required?
      • Can someone take a research question and execute successfully on their own without too much feedback from the person requesting the research?
      • If necessary, does Q&A facilitate this adequately? Are back and forth comments good enough?
      • Are busy research requesters willing to put in the time to interact with people trying to contribute, contributors whom they don't know and haven't necessarily vetted?
    • What kind of research questions are amenable to the format of LessWrong's Q&A?
  • PERCEPTIONS AND PRIOR BELIEFS <should get answered semi-automatically in interviews>
    • Is the mix of research and less research-y questions on Q&A now causing people to not think of Q&A as a place for serious research questions?
    • What are people's current impressions, expectations, anticipations of LW's Q&A, segmented by level of exposure?
      • e.g. if I tell someone LessWrong has a Q&A with the goal of serious research progress, what do they imagine? What's their reaction?
      • Do people think that they could be helped by Q&A? Do they want to use it?
  • INCENTIVES, WILLINGNESS, & EXPERIENCE <get at these questions by talking through how interviewees might or might not use Q&A>
    • How much (and what kind of) incentive is needed for contributors to want to contribute?
      • Are bounties or cash prizes enough?
        • If yes, is it because the money makes the effort worth it, OR
        • Or is it just that a cash prize is a costly signal that the question is important, and once that's clear, people would be glad to help?
        • Is bounty complexity an actual issue?
        • Are people doing an EV calculation with bounties, such that even when a nominal bounty is $500, people don't necessarily think it's worth a lot of work because their EV is more like $50? (A toy version of this calculation is sketched after this outline.)
    • How good does the ROI need to be for question askers to want to use the platform?
    • How low does the time and attention cost need to be for question askers to want to use the platform?
    • How much effort are question answerers willing to invest already?
      • It does look like some StackOverflow questions are very involved, so some people are willing to take time to answer things.
      • A few of the questions/answers on Q&A right now are pretty involved. Not many, but a few.
  • ADOPTION, POPULATION (TAM)
    • What is the population of adequately skilled and available question answerers within the domains we care about? Is it enough to support a good Q&A ecosystem?
      • How many people believe they're qualified? <probably need more general polling>
        • What's the distribution of people in the 2x2 grid of "thinks they're qualified" x "actually qualified"?
    • What user base of contributors do we have to reach before the question asker experience is good enough to retain users?
  • OTHER <expect these to come up in talking through their use of Q&A>
    • Is privacy a major issue for potential question askers?
      • How do they feel if there are closed groups?
    • Is trust in research quality an issue for question askers?
      • What does it take to evaluate whether a research contribution is good?
        • How much can it be done just by reading the contribution or will it require redoing serious work?
        • Are question askers willing to do this?
        • Are third parties willing to do the evaluation?
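
As promised above, here is a minimal sketch of the expected-value reasoning a potential bounty contributor might run. All numbers and names here are hypothetical illustrations, not data about actual Q&A bounties:

```python
# Toy expected-value calculation a potential bounty contributor might run.
# All numbers are hypothetical illustrations, not real Q&A data.

def bounty_ev(bounty: float, p_win: float, hours: float, hourly_value: float) -> float:
    """Expected profit from attempting a bounty: expected payout minus time cost."""
    expected_payout = p_win * bounty
    opportunity_cost = hours * hourly_value
    return expected_payout - opportunity_cost

# A nominal $500 bounty looks like only ~$50 of expected payout to someone
# who thinks 1-in-10 attempts win the prize; ten hours of work valued at
# $20/hour then makes the attempt EV-negative.
print(bounty_ev(bounty=500, p_win=0.1, hours=10, hourly_value=20))  # -150.0
```

If contributors reason this way, raising the headline bounty may matter less than raising the perceived probability that good work actually gets paid.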

Curious how LessWrong sees its Q&A function slotting in amongst Quora, Stack Exchange, Twitter, etc.

(There are a lot of question-answering platforms currently extant; I'm not clear on the business case for another one.)


Good question. It's worth typing up the reasons that I/we think warrant a new platform:

  • The range of questions typically asked and answered on other platforms is relatively quick to ask and quick to answer. Most can be answered in a single sitting, and mostly those answering are using their existing knowledge. In contrast, LessWrong's Q&A hopes to be a more full-fledged research platform where the kinds of questions which go into research agendas get asked, broken down, and answered by people who spend hours, days, or weeks working on them. As far as I know, no existing platform is based around people conducting "serious" research in response to questions. You can see this fleshed out in my other document: Review of Q&A.
    • The LessWrong team is currently thinking, researching, and experimenting a lot to see which kinds of structures (especially incentives) could cause people to expend the effort for serious research on our platform in a way they don't elsewhere. (I am unsure right now; possibly people do a lot of work on MathExchange.)
  • Specialization around particular topics. The LessWrong (Rationalist + EA) community has particular interests in rationality, AI, X-risk, cause prioritization, and related topics. LessWrong's Q&A could be a research community with a special focus and expertise in those areas. (In a similar way, there are many different specialised StackExchanges.)
  • Better than average epistemic norms, culture, and techniques. LessWrong's goal is to be a community with especially powerful epistemic norms and tools. I expect well above-average research to come from researchers who have read the Sequences, think about beliefs quantitatively (Bayes), use Fermi estimates, practice double crux, practice reasoning transparency, use informed statistical practices, and generally expect to be held to high epistemic standards.
  • Coordinating the community's research efforts. Right now there is limited clarity (and much less consensus) within the rationalist/EA/x-risk community on which questions are the most important to work on. Unless one is especially well connected and/or especially diligent in reading all publications and research agendas, it's hard to know what people think the most important problems are. A vision for LessWrong's Q&A is that it would become the place where the community coordinates on which questions matter most.
  • Signalling demand for knowledge. This one's similar to the last point. Right now, someone wishing to contribute on LessWrong mostly gets to write about what interests them or might interest others. Q&A is a mechanism whereby people can see which topics are most in-demand and thereby write content for which they know there is an audience.
  • Surface area on the community's most important research problems. Right now it is relatively hard to do independent research (towards AI/X-risk/EA) outside of a research organization, and particularly hard to do so in a way that plugs into and assists the research going on inside organizations. Given that organizations are constrained in how many people they can hire (not to mention ordinary obstacles like mobility/relocation), it is possible that there are many people capable of contributing to intellectual progress who do not have an easy avenue to do so.
  • A communal body of knowledge. Seemingly, most of humanity's knowledge has come from people building on the ideas of others: writing, reading, the printing press, the journal system, Wikipedia. Right now, a lot of valuable research within our community happens behind closed doors (or closed Google Docs) where it is hard for people to build on it and where it likely won't be preserved over time. The hope is that LessWrong's Q&A / research platform will become the forum where research happens publicly, in a way that people can follow along with and build on.
  • The technological infrastructure matters. Conceivably we could attempt to have all of the above but do it on an existing platform such as Quora, or maybe create our own StackExchange. First, for reasons stated above, I think it's valuable that our Q&A is tightly linked to the existing LessWrong community and culture. And second, I think the particular design of the Q&A will matter a lot. Design decisions over which questions get curated, promoted, or recommended; design decisions over what kinds of rewards are given (karma rewards, cash rewards, etc.); interfaces which properly support all the features we might want (footnotes, LaTeX, etc.); easy interfaces for decomposing questions into related subquestions: these are all things better to have under our community's control than on a platform which is not specifically designed for us or our use-cases.
  • As a nonprofit, we don't have the same incentives as commercial companies and can more directly pursue our goals. The platforms you listed (Quora, Stack Exchange, Twitter) are all commercial companies which, at the end of the day, need to monetize their product. LessWrong is a nonprofit, and while we need to convince our funders that we're doing a good job, that doesn't mean chasing revenue or even eyeballs (the typical metrics commercial companies need to optimize for). As a result, we have much more freedom to optimize directly for our goals, such as intellectual progress. This leads us to do atypical things like not trying to make our platform as addictive as it could be.

There are two frames I'd answer this in: one is "business case for platform first" and the other is "feature case for LW first".

Business case / platform first:

  • Unlike StackExchange, one of the primary use cases is "making progress on questions that don't have a clear answer." We're thinking a lot about how to make this a tool that is useful for novel and messy research. This includes upcoming features like [note: all of this is subject to change; this is our current rough plan]
    • Related questions, for breaking questions into smaller parts (see the sketch after this list)
    • Making sure longterm, "Open Problem" style questions remain visible.
    • Clustering important, related questions together into something like a research agenda.
  • Unlike (current gen) Quora, which suggests "short and to the point questions", here you are encouraged to take a lot of time to write out the context for your question. Similarly, unlike Twitter... you actually have space to write out detailed answers. Our longterm goal is for writing a good answer to feel more like writing a post than a short reply.
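
To illustrate the "related questions" feature mentioned above, here is a minimal sketch of how questions and subquestions might be modeled as a tree. The class and field names are hypothetical illustrations, not LessWrong's actual data model:

```python
# Hypothetical sketch of a question-decomposition tree; the names here are
# illustrative and are not LessWrong's actual schema.
from dataclasses import dataclass, field

@dataclass
class Question:
    title: str
    is_open_problem: bool = False  # long-term "Open Problem" questions stay visible
    subquestions: list["Question"] = field(default_factory=list)

    def open_leaves(self) -> list["Question"]:
        """Return the undivided questions someone could pick up and work on directly."""
        if not self.subquestions:
            return [self]
        return [leaf for sub in self.subquestions for leaf in sub.open_leaves()]

# Breaking a messy research question into smaller, more tractable parts:
root = Question("How tractable is forecasting AI progress?", is_open_problem=True)
root.subquestions = [
    Question("What forecasting track records already exist?"),
    Question("Which reference classes apply to AI progress?"),
]
print([q.title for q in root.open_leaves()])
```

Clustering several such trees together would then be one natural way to represent something like a research agenda.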

LW-Feature-First: The primary lens I'm looking at this through is not "what Q&A platform does the world need?" but "what feature does the LW community need?"

  • Related to the business case: LessWrong has a culture that is uniquely good at thinking about certain kinds of problems. You can expect many people here to think probabilistically, and to have some background knowledge that clusters around particular issues (most notably human rationality and AI safety). So it makes sense to build a tool that makes use of that culture and expands on it.
  • Generating clearer demand for content. Right now on LW you might be vaguely interested in writing posts to contribute, but it's not clear what topics people are interested in. If you have a clear idea for a blogpost you certainly can write it, but the generator for such posts is "what things are you already thinking about?"
    • By contrast, the Q&A system gives you clear visibility into "what topics do people actually want to know more about?" The value is not just that you can answer specific questions, but that you learn about topics as you answer, which can lead to generating more content. This seems potentially valuable as a hedge against future years where "the people with lots of good ideas are mostly doing things other than writing blogposts" (such as what happened in 2016 or so). I'm hoping the Q&A system makes the LW community more robust.

Can you make a similar comment (or post) talking about incentive-focused vs communication-structure-focused features in this area? My intuition (less-well-formed than yours seems to be!) is that incentives are fun to work on and interesting to techies, and quite necessary for true scaling to tens of thousands to millions of people. But also that incentives are the smaller barrier to getting started with a shift from small, independent, lightweight interactions (which "compete with insight porn") to larger, more valuable, more durable types of research.

The hard part IMO is in identifying and breaking down problems that CAN be worked on by fungible LWers (smart, interested, but not already invested in such projects). My expectation is that if you can solve that, the money part will be much easier.

I'm not actually sure I parsed this properly, but here are some things it made me think of:

  • There's a range of outcomes I'm hoping for with Q&A.
    • I do expect (and hope) for a lot of the value to come from a small number of qualitatively-different "research questions". I agree that these require much more than an incentive shift. Few people will have the time or skills to address those questions.
    • But, perhaps upstream of "research questions", I also hope for it to change the overall culture of LW. "Small scale" questions might not be huge projects to answer, but they still shift LW's vibe from "a place where smart people hang out" to "a place where smart people solve problems." And at that scale, I do think nudges and incentives matter quite a bit. (And I think these will play at least some role in pushing people to eventually answer "hard questions", although that'd probably only result in 1-4 extra such people over a 5 year timeframe.)
  • I'm not 100% sure what you mean by communication structure. But: I am hoping for Q&A to be a legitimately useful exobrain tool, where the way that it arranges questions and subquestions and answers actually helps you think (and helps you to communicate your thinking with others, and collaborate). Not sure if that's what you meant.
    • (I do think that "being a good exobrain" is quite hard and not something LW currently does a good job at, so am less confident we'll succeed at that)

I was mostly hoping for an explanation of why you think compensation and monetary incentives are among the first problems you are considering. A common startup failure mode (and would-be technocrat ineffectual bloviating) is spending a bunch of energy on mechanism and incentive design to handle massive scale, before even doing basic functionality experiments. I hope I'm wrong, and I'd like to know your thinking about why I am.

I may well be over-focused on that aspect of the discussion - feel free to tell me I'm wrong and you're putting most of your thought into mechanisms for tracking, sharing, and breaking down problems into smaller pieces. Or feel free to tell me I'm wrong and incentives are the most important part.

Yeah, I think we're actually thinking much more broadly than it came across. We've been thinking about this for 4 months along many dimensions. Ruby will be posting more internal docs soon that highlight different avenues of thinking. What's left are things that we're legitimately uncertain about.

I had previously posted a question about whether questions should be renamed "confusions"; it didn't get much engagement, and I ultimately don't think it's the right approach, but I considered it potentially quite important at the time.


It might be worthwhile to define what you mean by "serious research" if you want to optimize for making it easier.

Examples and definitions are two different things.