Ariel Kwiatkowski


Huh, whaddayaknow, turns out Altman was pushed out in the end, the new interim CEO is someone who's pretty safety-focused, and you were entirely wrong.

 

Normalize waiting for more details before dropping confident hot takes.

The board has backed down after Altman rallied staff into a mass exodus

[citation needed]
 

I've seen rumors and speculation, but if you're that confident, I hope you have some sources?

 

(for the record, I don't really buy the rest of the argument either on several levels, but this part stood out to me the most)

I'm never a big fan of this sort of... cognitive rewiring? Juggling definitions? This post reinforces my bias, since it's written from a place of very strong bias itself.

AI optimists think AI will go well and be helpful.

AI pessimists think AI will go poorly and be harmful.

It's not that deep.

 

The post itself borders on insulting anyone who holds a different opinion than the author (who, no doubt, would prefer the label "AI strategist" to "AI extremist"). I was thinking about going into the details of why, but honestly... productive discourse is unlikely when it starts from a place where the "other side" is immediately compared to nationalists (?!) or extremists (?!!!).

 

I'm an AI optimist. I think AI will go well and will help humanity flourish, through both capabilities and alignment research. I think things will work out. That's all.

In what sense do you think it will (or might) not go well? My guess is that it will not go at all -- some people will show up in the various locations, maybe some local news outlets will pick it up, and within a week it will be forgotten.

Jesus Christ, chill. I don't like playing into the meme of "that's why people don't like vegans", but that's exactly why.

And posting something insane, followed by an edit of "idk if I endorse comments like this", has got to be the most online rationalist thing ever.

There's a pretty significant difference here in my view -- "carnists" are not a coherent group or an ideology, and they don't have an agenda (unless we're talking about some very specific industry lobbyists, who no doubt exist). They're just people who don't care and eat meat.

Ideological vegans (i.e. not people who just happen not to eat meat but don't really care either way) are a very specific ideological group, and especially if we qualify them as in this post ("EA vegan advocates"), we can talk about their collective traits.

Is this surprising, though? When I read the title, I was thinking "Yeah, that seems pretty obvious."

Often academics justify this on the grounds that you're receiving more than just monetary benefits: you're receiving mentorship and training. We think the same will be true for these positions. 

 

I don't buy this. I'm actually going through the process of getting a PhD at ~40k USD per year, and one of the main reasons I'm sticking with it is that afterwards I'll have a solid credential that's recognized worldwide, backed by a recognizable name (i.e. my university and my supervisor). You can't provide either of those things.

This offer seems to combine the worst of both academia and industry, but if you actually find someone good at this rate, good for you, I suppose.

It's really good to see this said out loud. I don't necessarily have a broad overview of the funding field, just my own experience of trying to get into it - applying to established orgs, seeking funding for individual research and for alignment-adjacent work - and ending up at a capabilities research company instead.

I wonder if this is simply a result of the generally bad SWE/CS job market right now. People who would otherwise be in big tech or other AI work will be more inclined to do something with alignment. Similarly, if there's less money in tech overall (maybe outside of LLM-based scams), there may be less money for alignment.
