Ought is building Elicit, a tool to automate and scale open-ended reasoning about the future. To date, we’ve collaborated with LessWrong to embed interactive binary predictions, share AGI timelines and the assumptions driving them, forecast existential risk, and much more.
We’re working on adding GPT-3-based research assistant features to help forecasters with the earlier steps in their workflow. Users create and apply GPT-3 actions by providing a few training examples. Elicit then scales that action to thousands of publications, datasets, or use cases.
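Under the hood, an action defined by a few training examples amounts to few-shot prompting. Here’s a minimal sketch of that pattern, assuming an action is represented as input/output pairs; the function and variable names are illustrative, not Elicit’s actual implementation:

```python
def build_action_prompt(examples, new_input):
    """Assemble a few-shot prompt: each training example pairs an input
    with its desired output, and the new input is appended for the model
    to complete."""
    parts = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

# Hypothetical training examples for a "decompose a question" action:
examples = [
    ("Will AGI arrive before 2040?",
     "What benchmarks would indicate AGI-level capability?"),
    ("Will remote work remain dominant in 2030?",
     "What share of current job postings are remote?"),
]

prompt = build_action_prompt(
    examples, "Will fusion power be commercial by 2035?"
)
```

The resulting prompt would then be sent to a language model, which continues the pattern the examples establish; scaling the action just means rebuilding the prompt with each new input.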
Here’s a demo of how someone applies existing actions:
And a demo of how someone creates their own action (no coding required):
Some actions we currently support include:
- Find relevant publications from think tanks
- Find relevant datasets
- Find forecasting questions from Metaculus, PredictIt, Foretell
- Decompose a vague query into more concrete subquestions or factors
There’s no better community than LessWrong to codify and share good reasoning steps, so we’re looking for people to contribute to our action repository, creating actions like:
- Suggest reasons for / against
- Suggest a potential analysis someone can do in natural language (a different take on converting a natural language question into a SQL query)
- Find key players in an industry
- Suggest hypotheses
- Apply Occam’s razor / come up with the simplest explanation
If you’re interested in becoming a beta tester and contributing to Elicit, please fill out this form! Again, no technical experience required.