coo @ ought.org. By default, please assume I am uncertain about everything I say unless I say otherwise :)
It does - ty!
I think the Discord link is broken?
Alignment-focused policymakers / policy researchers should also be in positions of influence.
I'd add a bunch of human / social topics to your list, e.g.:
Research methodology / Scientific “rationality,” Productivity, Tools
I'd be really excited to have people use Elicit with this motivation. (More context here and here.)
Re: competitive games of introducing new tools, we ran an internal speed test of Elicit vs. Google to see which tool was more efficient for finding answers or mapping out a new domain in 5 minutes. We're broadly excited to structure and support competitive knowledge work and optimize research this way.
This is exactly what Ought is doing as we build Elicit into a research assistant using language models / GPT-3. We're studying researchers' workflows and identifying ways to productize or automate parts of them. In that process, we have to figure out how to turn GPT-3, a generalist by default, into a specialist that is a useful thought partner for domains like AI policy. We have to learn how to take feedback from the researcher and convert it into better results within a session, per person, per research task, and across the entire product. Another spin on it: we have to figure out how researchers can use GPT-3 to become expert-like in new domains.
We’re currently using GPT-3 for classification, e.g. “take this spreadsheet and determine whether each entity in Column A is a non-profit, government entity, or company.” Some concrete examples of alignment-related work that have come up as we build this:
We'd love to talk to people interested in exploring this approach to alignment!
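To make the spreadsheet-classification example above concrete, here's a minimal sketch of how a task like that can be framed as a few-shot GPT-3 completion call. This is illustrative only, not Elicit's actual implementation: the prompt wording, the `classify_entity` helper, and the label handling are assumptions, and it uses the old `openai` completions interface from the GPT-3 era.

```python
# Minimal sketch (not Elicit's actual code): framing the spreadsheet
# classification task as a few-shot GPT-3 completion call.
import openai

LABELS = ["non-profit", "government entity", "company"]

def classify_entity(name: str) -> str:
    """Ask GPT-3 which label best describes an entity from Column A."""
    prompt = (
        "Classify each organization as a non-profit, government entity, or company.\n\n"
        "Organization: World Wildlife Fund\nLabel: non-profit\n\n"
        "Organization: U.S. Department of Energy\nLabel: government entity\n\n"
        f"Organization: {name}\nLabel:"
    )
    response = openai.Completion.create(
        engine="davinci",   # GPT-3 base model
        prompt=prompt,
        max_tokens=5,
        temperature=0,      # deterministic labels
        stop="\n",
    )
    answer = response["choices"][0]["text"].strip().lower()
    # Fall back to "company" if the model returns something off-list.
    return answer if answer in LABELS else "company"

print(classify_entity("Ought"))
```

One nice property of framing it this way is that researcher corrections could be folded back in simply by appending them as additional few-shot examples in the prompt.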
Ought is building Elicit, an AI research assistant using language models to automate and scale parts of the research process. Today, researchers can brainstorm research questions, search for datasets, find relevant publications, and brainstorm scenarios. They can create custom research tasks and search engines. You can find demos of Elicit here and a podcast explaining our vision here.
We're hiring for the following roles:
Each job description contains sample projects from our roadmap.
Research is one of the primary engines by which society moves forward. We're excited about the potential language models and ML have for making this engine orders of magnitude more effective.
"Remember that you are dying."
Do you have any examples?
When Elicit has nice argument mapping (it doesn't yet, right?), it might be pretty cool and useful (to both LW and Ought) if that could be used on LW as well. For example, someone could make an argument in a post, and then have an Elicit map (involving several questions linked together) where LW users could reveal what they think of the premises, the conclusion, and the connection between them.
Yes, that's very aligned with the types of things we're interested in!!
Lots of uncertainty but a few ways this can connect to the long-term vision laid out in the blog post:
I see what you're saying. This feature is designed to support tracking changes in predictions primarily over longer periods of time, e.g. for forecasts with years between creation and resolution. (You can even download a CSV of the forecast data to run analyses on it.)
It can get a bit noisy, like in this case, so we can think about how to address that.
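If anyone wants to smooth out that kind of noise themselves, here's a rough sketch of what an analysis of the downloaded CSV could look like. The file name and column names (`timestamp`, `prediction`) are hypothetical; check the actual headers in the export.

```python
# Rough sketch: smoothing a downloaded forecast history with pandas.
# Column names ("timestamp", "prediction") are hypothetical; check the
# actual CSV header after exporting.
import pandas as pd

df = pd.read_csv("forecast_export.csv", parse_dates=["timestamp"])
df = df.sort_values("timestamp").set_index("timestamp")

# Resample to one point per day and take a 7-day rolling mean to
# damp short-lived swings in the forecast.
daily = df["prediction"].resample("D").mean().interpolate()
smoothed = daily.rolling(window=7, min_periods=1).mean()

print(smoothed.tail())
```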