Ought is building Elicit, a tool to automate and scale open-ended reasoning about the future. To date, we’ve collaborated with LessWrong to embed interactive binary predictions, share AGI timelines and the assumptions driving them, forecast existential risk, and much more.

We’re working on adding GPT-3-based research assistant features to help forecasters with the earlier steps in their workflow. Users create and apply GPT-3 actions by providing a few training examples; Elicit then scales that action to thousands of publications, datasets, or use cases.
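The mechanism described above is few-shot prompting: an action is defined by an instruction plus a handful of input/output examples, which are assembled into a prompt that the model completes for each new input. Here is a minimal sketch of how such a prompt might be assembled; the function name and format are illustrative assumptions, not Elicit’s actual implementation.

```python
def build_action_prompt(instruction, examples, query):
    """Assemble a few-shot prompt for a GPT-3 'action'.

    instruction: plain-language description of the action
    examples: list of (input, output) training pairs supplied by the user
    query: the new input the model should complete

    Returns a prompt string ending in 'Output:' so the model's
    completion supplies the answer. (Hypothetical format.)
    """
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")  # blank line between examples
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)


# Example: a "decompose a vague query" action defined with one training pair.
prompt = build_action_prompt(
    instruction="Decompose the question into concrete subquestions.",
    examples=[(
        "Will AI progress slow down?",
        "1. What drives current progress? 2. Are those drivers sustainable?",
    )],
    query="Will fusion power be commercially viable by 2040?",
)
```

Scaling the action to thousands of inputs then just means building and sending one such prompt per publication, dataset, or question.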

Here’s a demo of how someone applies existing actions:


And a demo of how someone creates their own action (no coding required):  

Some actions we currently support include:

  • Find relevant publications from think tanks
  • Find relevant datasets
  • Find forecasting questions from Metaculus, PredictIt, Foretell
  • Decompose a vague query into more concrete subquestions or factors

There’s no better community than LessWrong to codify and share good reasoning steps, so we’re looking for people to contribute to our action repository by creating actions like:

  • Suggest reasons for / against a claim
  • Suggest a potential analysis someone can do in natural language (a different take on converting a natural language question into a SQL query)
  • Find key players in an industry
  • Suggest hypotheses
  • Apply Occam’s razor / come up with the simplest explanation

If you’re interested in becoming a beta tester and contributing to Elicit, please fill out this form! Again, no technical experience required.


What I most want is a creativity mode that uses some of the best practices from structured creativity exercises to hit you with random prompts and elaborations. I think this is easily doable but might be its own side project.

Do you have any examples?