ozziegooen

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences

Beyond Questions & Answers
Squiggle
Prediction-Driven Collaborative Reasoning Systems

Comments

I'm curious why this got the disagreement votes.
1. People don't think Holden doing that is significant prioritization?
2. There aren't several people at OP trying to broadly figure out what to do about AI?
3. There's some other strategy OP is following? 

Also, I should have flagged that Holden is now the "Director of AI Strategy" there. This seems like a significant prioritization.

It seems like there are several people at OP trying to figure out what to broadly do about AI, but only one person (Ajeya) doing AIS grantmaking? I assume they've made some decision, like, "It's fairly obvious what organizations we should fund right now, our main question is figuring out the big picture." 

Ajeya Cotra is currently the only evaluator for technical AIS grants.

This situation seems really bizarre to me. I know they have multiple researchers in-house investigating these issues, like Joseph Carlsmith. I'm really curious what's going on here.

I know they've previously had what seemed to me like talented people join and leave that team. The fact that it's so small now, given the complexity and importance of the topic, is something I have trouble grappling with.

My guess is that there are some key reasons for this that aren't obvious externally.

I'd assume it's really important for this team to become really strong, but I'd flag that when a situation is this strange, it's likely difficult to fix unless you really understand why it is the way it is. I'd also encourage people to try to help here, but it might be more difficult than it initially seems.

Thanks for clarifying! That really wasn't clear to me from the message alone. 

> Though if you used Squiggle to perform an existential risk-reward analysis of whether to use Squiggle, who knows what would happen

Yep, that's in the works, especially if we can have basic relative value forecasts later on.
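
As a toy illustration (all numbers here are placeholders I'm making up on the spot, not an actual analysis), a first pass at that kind of risk-reward estimate in Squiggle might look something like:

```
// Toy risk-reward estimate of adopting Squiggle.
// Every number below is an illustrative placeholder.
annualBenefit = 10 to 100 // value of better estimates, in arbitrary units
pNetHarm = 0.001 to 0.01 // chance that adoption is net harmful
harmIfBad = 500 to 5000 // damage in that scenario, same units
netValue = annualBenefit - pNetHarm * harmIfBad
mean(netValue)
```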

If you think the net costs of using ML techniques to improve our rationalist/EA tools outweigh the benefits, then there's an argument to be had there.

Many Guesstimate models now focus on making estimates about AI safety.

I'm really not a fan of the "Our community must not use ML capabilities in any form" position, though I'm not sure where others here would draw the line.

I assume that in situations like this, it could make sense for communities to have some devices for people to try out.

Given that some people didn't return theirs, I imagine potential purchasers could buy used ones.

Personally, I like the idea of renting one for 1-2 months, if that were an option. If there's a 5% chance it's really useful, renting could be a good value proposition. (I realize I could return it, but I feel hesitant to buy one if I think there's a 95% chance I'd return it.)
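
To make that expected-cost intuition concrete, here's a rough Squiggle sketch (the prices and rental terms are hypothetical placeholders, not real figures):

```
// Rent-vs-buy expected-cost sketch. Prices are made-up placeholders.
buyPrice = 400 // assumed headset price
rentTwoMonths = 100 // assumed cost to rent for 1-2 months
pKeep = 0.05 // my stated ~5% chance it's really useful
// Renting first means I only pay full price in worlds where it's useful
expectedCostRenting = rentTwoMonths + pKeep * buyPrice
expectedCostRenting // ~120, well under buying outright
```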

Happy to see experimentation here. Some quick thoughts:

  • The "Column" looked a lot to me like a garbage can at first. I like the "+" in Slack for this purpose, that could be good.
  • Checkmark makes me think "agree", not "verified". Maybe a badge or something?
  • "Support" and "Agreement" seem very similar to me?
  • While it's a different theme, I'm in favor of using popular icons where possible. My guess is that these will make the feature more accessible. I like the eyes you use, in part because they're close to the familiar emoji. I also like:
    • 🚀 or 🎉 -> This is a big accomplishment. 
    • 🙏 -> Thanks for doing this.
    • 😮 -> This is surprising / interesting. 
  • It could be kind of neat to later celebrate great rationalist accomplishments by having custom icons for them, to use when a post reminds people of that kind of work.
  • I like that it shows who reacted with what; that makes a big difference to me.

I liked this a lot, thanks for sharing.

Here's one disagreement/uncertainty I have on some of it:

Both of the "What failure looks like" posts (yours and Pauls) posts present failures that essentially seem like coordination, intelligence, and oversight failures. I think it's very possible (maybe 30-46%+?) that pre-TAI AI systems will effectively solve the required coordination and intelligence issues. 

For example, I could easily imagine worlds where AI-enhanced epistemic environments make low-risk solutions crystal clear to key decision-makers.

In general, the combination of AI plus epistemics, pre-TAI, seems very high-variance to me. It could go very positively, or very poorly. 

This consideration isn't enough to bring my p(doom) under 10%, but I'm probably closer to 50% than you are. (Right now, maybe 40% or so.)

That said, this really isn't a big difference; it's less than one order of magnitude.

Quick update:

Immersed now supports a beta "USB Mode". I just tried it with one cable, and it worked really well until it cut out a few minutes in. I'm getting a different USB-C cable that they recommend. In general, I'm optimistic.

(That said, there are of course better headsets/setups coming out, too.)

https://immersed.zendesk.com/hc/en-us/articles/14823473330957-USB-C-Mode-BETA-

Happy to see discussion like this. I've previously written a small bit defending AI friends on Facebook; there were some related comments there.

I think my main takeaway is "AI friends/romantic partners" are some seriously powerful shit. I expect we'll see some really positive uses and also some really detrimental ones. I'd naively assume that, like with other innovations, some communities/groups will be much better at dealing with them than others.

Relatedly, research to help encourage the positive sides seems pretty interesting to me.
