Ought will host a factored cognition “Lab Meeting” on Friday, September 16 from 9:30–10:30 AM PT. We'll share the progress we've made using language models to decompose reasoning tasks into subtasks that are easier to perform and evaluate. This is part of our work on supervising processes, not outcomes...
Ought is an applied machine learning lab. We’re building Elicit, the AI research assistant. Our mission is to automate and scale open-ended reasoning. To get there, we train language models by supervising reasoning processes, not outcomes. This is better for reasoning capabilities in the short run and better for alignment...
We can think about machine learning systems on a spectrum from process-based to outcome-based:

* Process-based systems are built on human-understandable task decompositions, with direct supervision of reasoning steps.
* Outcome-based systems are built on end-to-end optimization, with supervision of final results.

This post explains why Ought is devoted to...
Ought is working on building Elicit, a tool to automate and scale open-ended reasoning about the future. To date, we’ve collaborated with LessWrong to embed interactive binary predictions, share AGI timelines and the assumptions driving them, forecast existential risk, and much more. We’re working on adding GPT-3-based research assistant...
Ought’s mission is to automate and scale open-ended reasoning. Since wrapping up factored evaluation experiments at the end of 2019, Ought has built Elicit to automate the open-ended reasoning involved in judgmental forecasting. Today, Elicit helps forecasters build distributions, track beliefs over time, collaborate on forecasts, and get alerts when...
Context: This is a place to explore visions of how AI can go really well. Conversations about AI (both in this community and in mainstream media) focus on dystopian scenarios and failure modes. Even communities that lean techno-utopian (Silicon Valley) are having an AI hangover. More broadly, many people...