coo @ ought.org. by default please assume i am uncertain about everything i say unless i say otherwise :)
When Elicit has nice argument mapping (it doesn't yet, right?) it might be pretty cool and useful (to both LW and Ought) if that could be used on LW as well. For example, someone could make an argument in a post, and then have an Elicit map (involving several questions linked together) where LW users could reveal what they think of the premises, the conclusion, and the connection between them.
Yes, that is very aligned with the types of things we're interested in!!
Lots of uncertainty, but a few ways this can connect to the long-term vision laid out in the blog post:
I see what you're saying. This feature is primarily designed to support tracking changes in predictions over longer periods of time, e.g. for forecasts with years between creation and resolution. (You can even download a CSV of the forecast data to run analyses on it.)
It can get a bit noisy, like in this case, so we can think about how to address that.
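To gesture at what I mean by running analyses on the export (the filename and column names here are just my guesses, not the actual CSV format), something like this could surface the trend underneath noisy, frequent updates:

```python
# Minimal sketch (not an official Elicit script): load an exported forecast CSV
# and smooth noisy prediction updates with a time-based rolling average.
# The column names "created_at" and "prediction" are assumed, not the real schema.
import pandas as pd

df = pd.read_csv("elicit_forecast_export.csv", parse_dates=["created_at"])
df = df.sort_values("created_at").set_index("created_at")

# 7-day rolling mean: shows the longer-term trend through day-to-day noise
smoothed = df["prediction"].rolling("7D").mean()
print(smoothed.tail())
```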
you mean because my predictions are noisy and you don't want to see them in that list?
try it and let's see what happens!
this is too much fun to click on
Haha I didn't find it patronizing personally but it did take me an embarrassingly long time to figure out what Filipe did there :) Resource allocation seems to be a common theme in this thread.
Yes! For example, I am often amazed by people who are able to explain complex technical concepts in accessible and interesting ways.
Yes-anding you: our limited ability to run "experiments" and easily get empirical results for policy initiatives seems to really hinder progress. Maybe AI can help us organize our values, simulate a bunch of policy outcomes, and then find the best win-win solution when our values diverge.
I love the idea of exploring different minds and seeing how they fit. Getting chills thinking about what it means for humanity's capacity for pleasure to explode. And loving the image of swimming through a vast, clear, blue mind design ocean.