I agree, and this is why research grant proposals often feel very fake to me. I generally just write up my current best idea / plan for what research to do, but I don't expect it to actually pan out that way, and it would be silly to try to stick rigidly to a plan.
I will (once again!) be raising the bar for what gets included going forward to prevent that.
I'm confused by this because the bar for what gets included seems very low. I mostly don't read these posts because a large fraction of the "news" reported is just random tweets by people in the rationalist / adjacent sphere.
It might be nice to move all AI content to the Alignment Forum. I'm not sure the effect you're discussing is real, but if it is, it might be because LW has become a de facto academic journal for AI safety research, so many people are posting without significant engagement with the LW canon or any interest in rationality.
The current rules around who can post on the Alignment Forum seem a bit antiquated. I've been working on alignment research for over 2 years and I don't know off the top of my head how to get permission to post there. And I expect the relevant people to see stuff if it's on LW anyway.
The thread under this comment between yourself and Said seems to conflate two different questions, resulting in you talking past each other:
1. Can Aella predict that people will be offended by things?
2. Can Aella empathize with the offended people / understand why they are offended?
My guess would be that Aella can generally predict people's reactions, but she cannot empathize with their point of view.
The nearby pattern of internet bait that I dislike is when someone says "I don't understand how someone can say/think/do X" where this is implicitly a criticism of the behavior.
I think the reason I find these tweets of Aella's highly engaging and rage-baity is that they generally read as criticisms to me. Perhaps this is uncharitable, but I expect this is also how most others read them.
I hope I don't have to explain why some people would rather not go near X/Twitter with a ten foot pole.
Right. So for me the even bigger question is "why is someone like Eliezer on twitter at all?" If politics is the mind-killer, twitter is the mind mass-murderer.
It delivers you the tribal fights most tailored to trigger you based on your particular interests. Are you a grey-triber who can rise above the usual culture-war battles? Well then, here's an article about NIMBYs or the state of American education that's guaranteed to make you seethe with anger at people's stupidity.
When I read Eliezer's sequences I feel like I've learned some interesting ideas and spent my time well. When I read Eliezer's twitter I feel frustrated by the world and enraged by various out-group comments.
Or Aella, whose blog is exciting and interesting. But who posts comments on twitter like "I'm shocked that people are offended by [highly decoupled statement] / [extremely norm-breaking behavior]" when it is completely obvious that most highly-coupling / norm-following people would be triggered by that.
This discussion has been had many times before on LessWrong. I suggest taking "Why it's so hard to talk about Consciousness" as a starting point.
Anthropic is reportedly lobbying against the federal bill that would ban states from regulating AI. Nice!
Implicit in my views is that the problem would be mostly resolved if people had aligned AI representatives which helped them wield their (current) power effectively.
Can you make the case for this a bit more? How are AI representatives going to help people prevent themselves from becoming disempowered / economically redundant? (Especially given that you explicitly state you are skeptical of "generally make humans+AI (rather than just AI) more competitive".)
Mandatory interoperability for alignment and fine-tuning
Furthermore, I don't really see how fine-tuning access helps create AI representatives. Models are already trained to be helpful, and most people don't have very useful personal data that would make their AI work much better for them (that couldn't just be put in the context of any model).
The hope here would be to get the reductions in concentration of power that come from open source
The concentration of power from closed source AI comes from (1) the AI companies' profits and (2) the AI companies having access to more advanced AI than the public. Open source solves (1), but fine-tuning access solves neither. (Obviously your "Deploying models more frequently" proposal does help with (2)).
Yeah that would be great!
Downvoted. This post feels kinda mean. Tyler Cowen has written a lot and done lots of podcasts - it doesn't seem like anyone has actually checked? What's the base rate for public intellectuals ever admitting they were wrong? Is it fair to single out Tyler Cowen?