Ought is building Elicit, a tool to automate and scale open-ended reasoning about the future. To date, we’ve collaborated with LessWrong to embed interactive binary predictions, share AGI timelines and the assumptions driving them, forecast existential risk, and much more.
We’re working on adding GPT-3-based research assistant features to help forecasters with the earlier steps in their workflow. Users create and apply GPT-3 actions by providing a few training examples; Elicit then scales each action to thousands of publications, datasets, or use cases.
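For readers curious what a few-shot GPT-3 action might look like under the hood, here is a minimal sketch using the OpenAI completions API. The prompt format, the example pairs, and the `rewrite_as_forecasting_question` helper are illustrative assumptions, not Elicit's actual implementation (which, as noted below, requires no code from users).

```python
import openai  # pre-1.0 openai library; reads OPENAI_API_KEY from the environment

# Hypothetical few-shot "action": turn a vague topic into a concrete forecasting question.
# The examples below are placeholders; Elicit's prompts and models are not public.
FEW_SHOT_EXAMPLES = [
    ("US election", "Who will win the 2024 US presidential election?"),
    ("AGI timelines", "By what year will an AI system pass a rigorous Turing test?"),
]

def rewrite_as_forecasting_question(topic: str) -> str:
    # Build a prompt from the training examples, then ask GPT-3 to complete
    # the same pattern for the new input.
    prompt = "Rewrite each topic as a concrete forecasting question.\n\n"
    for example_topic, question in FEW_SHOT_EXAMPLES:
        prompt += f"Topic: {example_topic}\nQuestion: {question}\n\n"
    prompt += f"Topic: {topic}\nQuestion:"

    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=64,
        temperature=0.3,
        stop="\n",
    )
    return response.choices[0].text.strip()

if __name__ == "__main__":
    print(rewrite_as_forecasting_question("lab-grown meat"))
```

Scaling the action is then just a matter of mapping a function like this over many inputs; in Elicit, users do this through the interface rather than in code.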
Here’s a demo of how someone applies existing actions:
And a demo of how someone creates their own action (no coding required):
Some actions we currently support include:
There’s no better community than LessWrong to codify and share good reasoning steps, so we’re looking for people to contribute to our action repository by creating actions like:
If you’re interested in becoming a beta tester and contributing to Elicit, please fill out this form! Again, no technical experience required.