LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
I have plenty of complaints about this piece and wish Dario's worldview/his-publicly-presented-stances were different.
But, holding those constant, overall I'm glad he wrote this. I'm glad autonomy risks are listed early on. One of my main hopes this year was for Dario and Demis to do more public advocacy in the sort of directions this points.
I also just... find myself liking some of the poetry of the section names. (I found the "Black Seas of Infinity" reference particularly satisfying)
Vaguely attempting to separate out "reasonable differences in worldview" from "feels kinda skeezy":
Skeezy
The way this conflates religious/totalizing-orientation-to-AI pessimism with "it just seems pretty likely for AI to be massively harmful". (I do think it's fair to critique a kind of apocalyptic vibe that some folk have, although there are similarly totalizing views of "AI will be our salvation/next phase of evolution", and if you're going to bother critiquing one you should be addressing both)
That feels pretty obviously like a political move to try to position Anthropic as "a reasonable middle ground." (I don't strongly object to them pulling that move. But, I think there are better ways to pull it)
Disagreement
Misuse/Bad-Actors. I have some genuine uncertainty whether it makes sense to be as worried about misuse as Dario is. Most of my beliefs are of the form "misalignment is real bad and real difficult" so I'm not too worried about bad actors getting AI, but, it's plausible that if we solved misalignment, bad actors would immediately become a problem and it's right to be concerned about it.
Unclear about skeezy vs just disagreeing
His frame around regulation, and it not-being-possible-to-slow-down, feels pretty self-serving and/or confusing.
I agree with his caution about regulating things we don't understand yet. I might agree with the sentence "regulations should be as surgical as possible" (mostly because I think that's usually true of regulations). But I don't really see a regime where the regulations aren't relatively extreme in some ways, and "surgical" implies something like "precise" and "minimal".
I find it quite weird that he doesn't explore at all the options for controlled takeoff. It sounds like he thinks export controls and a few simple trade-embargo things are the only way to slow down autocracies, and it's important to beat autocracies, and therefore we can only potentially slow down a teeny amount.
The options for slowing down are all potentially somewhat crazy or intense (not "Dyson Spheres" crazy, but, like, "go to war" level crazy), and I dunno if he's not saying them because he doesn't want to say anything too intense-sounding, or he honestly doesn't think they'll work.
He reads as something like "negative-utilitarian about accidentally doing costly regulations."
...
This document is clearly overall a kind of political document (trying to shape the zeitgeist) and I don't have that strong a take about what sort of political documents are good to write. But, in a world where political discourse was overall better, I'd have liked if he included notions of what would change his mind about the general vibe of "the way out of this situation is through it, rather than via slowdown/stopping." If you're going to be one of the billion dollar companies hurtling us towards unprecedented challenges, with some reasons for thinking that's correct, I think you should at least spell out the circumstances where you'd change your mind or stop or naturally pivot your strategy.
There's some related harder-to-track metric of "% code written by non-humans, which was a mistake." (i.e. the code is actually kinda bad and the human would have done better to write it themselves).
I don't feel very confident about any of this, but, I think it's just sort of fine if not all posts are for all people.
On any topic other than politics, I think it'd be fine to have a lower-effort meta post trying to get traction on how to think about the problem, with the people who are already following a topic, before writing higher-effort posts that do a better job of being a good canonical reference. It's totally fine for someone to write an agent foundations post that just assumes a lot of background while some people hash out their latest ideas, and people who aren't steeped in agent foundations just aren't the target audience.
It's possible politics should have different standards from that, such that basically every post should be accessible, but that's a fairly specific argument I'd need to hear.
I agree it'd be bad if there were only ever political posts like this. I don't know whether it'd be bad if 10%, 20%, or 50% of posts were like this; I'd need to think about it more.
Thinking out loud about next steps.
So, I agree with all the commenters saying the listed questions feel like an oddly specific, cherrypicked set. It's not obvious what to actually do instead.
One angle is to try for more of a "world map" rather than a "US map": asking general questions across history that a) make it easier to compare the US to other countries (which seems relevant) and b) force the mindset of "see what's interesting to notice across history" as opposed to "try to answer specific questions".
Which, like, I still have no idea how to do.
But, it occurs to me OurWorldInData is already kinda trying to be this thing. Taking a quick look there, it seems like often there's only relatively recent data (makes sense).
Their page on corruption does a decent job of laying out why the problem of asking "how corrupt are countries?" is hard, but, answers it a few different ways.
Nod. Agree with your object level take in the 3rd paragraph.
I think it'd have been dramatically more effort and mostly a different post to make the opening paragraphs to your satisfaction, and kinda the whole point of this post is to be able to write a second post that is more the type you want. (I also suspect you're an outlier in how much you're not already following Trump discourse; none of the opening paragraphs are supposed to be new information for the reader)
Yeah I went to try to write some stuff and felt bottlenecked on figuring out how to generate a character I connect with. I used to write fiction but like 20 years ago and I'm out of touch.
I think a good approach here would be to start with some serial webfiction since that's just easier to iterate on.
What is your concrete preference for what I had done with this post?
(this feels like a fairly generic response that's not particularly engaging with the situation or post, which is specifically asking "how to get grounded", with a description of my current ideas for doing so)
I think it's a bad framing to treat "unprecedented moves to expand executive power" and "natural extension of existing trends" as the same mental bucket. The two are not the same. A key problem in the US is that the existing trends over the last two decades have been bad when it comes to expanding executive power.
I'm confused about what you mean here; the specific existing trend I was imagining was "unprecedented moves to expand executive power," which looks different if it's continuing a steady trend vs. one guy radically doing much worse than trend.
Having sat on this for a night, I think basically yeah, this post's framing doesn't make sense as a way to engage with active Trump supporters.
Right now my main question is "should I spend more time thinking about this, or go back to ignoring it and hope it isn't too bad?". If I decided to think more about it, I'd probably expect "solve political polarization" to be a major piece of it, and yeah, I'd want to talk to a wider variety of people qualitatively.
I agree that baking in the framing into the initial question is bad, but, like, the framing is the reason why I'm even considering thinking more about this in the first place and I'm not sure how to sidestep that.
The point about "online arguments" vs "chatting with individual people" is well taken though.
I don't know that it's actually targeting the stuff you specifically say here (because I think a lot of this isn't actually the most useful version of rationality), but @Screwtape and I are working on a rationality training site. I'd compare it more to the older version of brilliant.org or codewars than to duolingo.
Can you say a bit more about what you'd have wanted out of such an app?