I have signed no contracts or agreements whose existence I cannot mention.
[epistemic status: way too ill to be posting important things]
hi fellow people-who-i-think-have-much-of-the-plot
you two seem, from my perspective as someone who has read a fair amount of content from both of you, to have a bunch of similar models and goals, but quite different strategies.
on top of both having a firm grip on the core x-risk arguments, you both call out similar dynamics where capabilities orgs capture the will to save the world and turn it into more capabilities progress[1], you both take issue with somewhat different but, i think, related parts of openphil's grantmaking process, you both have high p(doom) and not very comfortable timelines, etc.
i suspect that if connor explained why he's focusing on the things he is here, that would uncover the relevant difference. my current guess is that connor is doing a kind of political alliance-building which is colliding with some of habryka's highly active integrity reflexes.
maybe this doesn't change much; these strategies do seem at least somewhat collision-y as implemented so far, but i hope our kind can get along.
e.g. "Turning care into acceleration" from https://www.thecompendium.ai/the-ai-race#these-ideologies-shape-the-playing-field
e.g. https://www.lesswrong.com/posts/h4wXMXneTPDEjJ7nv/a-rocket-interpretability-analogy?commentId=md7QvniMyx3vYqeyD and lots of calling out Anthropic
If it's easy for submitters to check a box which says "I asked them and they said full post imports are fine", maybe?
No strong takes on default, just obvious considerations you'll have thought of.
Cool, in that case opt-in for full-post imports probably makes more sense, maybe with the ability to switch modes for all of an author's posts if they give permission?
I lean towards an opt-out system for whole-post imports? I'd expect the vast majority of relevant authors to be happy with it, and it would be less inconvenient for readers. Letting an author easily register a "no whole-text imports please" preference seems worthwhile, and maybe switching to opt-in if people aren't happy with that?
Great post! One correction:
AI Safety Info (Robert Miles)
Focus: Making YouTube videos about AI safety, starring Rob Miles
AI Safety Info is a project run by Rob Miles which mostly works on an extensive FAQ (300+ articles covering how to help, common objections and responses, introductory content, and more in-depth material, plus resources like the memes wiki), as well as some side projects like maintaining the Alignment Research Dataset and the RAG chatbot. While AI Safety Info is exploring producing videos, Rob's videos are not published under the AI Safety Info heading.
Elon Musk is top 20 in Diablo 4 in the world, one of only two Americans? WTF?
This is not an easy thing to do, and it’s definitely not a remotely fast thing to do. You have to put in the work. However many companies Elon Musk is running, there could have been at least one more, and maybe two, but he decided to play Diablo 4 instead. Can we switch him over to Factorio?
Isn't this just "someone good at Diablo used the name Elon Musk"? Any reliable evidence that it's him?
LW supports polls? I'm not seeing it in https://www.lesswrong.com/tag/guide-to-the-lesswrong-editor, unless you mean embedding a manifold market, which would work but adds an extra step between people and voting unless they're already registered on manifold.
Ayahuasca: Informed consent on brain rewriting
(based on anecdotes and general models, not personal experience)
Grantmaking models and bottlenecks
Depends on what they do with it. If they use it to do the natural and obvious capabilities research, as they currently are (mixed with a little hodgepodge alignment to keep it roughly on track), I think we just basically die for sure. If they pivot hard to solving alignment in a very different paradigm and... no, this hypothetical doesn't imply the AI can discover or switch to other paradigms.
I think doom is almost certain in this scenario.