Most of my posts and comments are about AI and alignment. Posts I'm most proud of, which also provide a good introduction to my worldview:
I also created Forum Karma, and wrote a longer self-introduction here.
PMs and private feedback are always welcome.
NOTE: I am not Max Harms, author of Crystal Society. I'd prefer for now that my LW postings not be attached to my full name when people Google me for other reasons, but you can PM me here or on Discord (m4xed) if you want to know who I am.
> I would just evaluate your argument on my own and I would evaluate the counterargument in the comment on my own.
The precise issue is that a sizable fraction of the audience will predictably not do this, or will do it lazily or incorrectly.
On LessWrong, this shows up in voting patterns. For example, a controversial post will sometimes get some initial upvotes, and then the karma trend will swing around based on the comments and who had the last word. Or a long back-and-forth ends up getting far fewer votes (and presumably eyeballs) than the top-level post / comment.
My impression is that most authors aren't that sensitive to karma per se, but they are sensitive to the mental model of the audience that this swinging implies: many onlookers are letting the author and their interlocutor(s) do their thinking for them, with varying levels of attention span, where "highly upvoted" is often a proxy for "onlookers believe this is worth responding to (but won't necessarily read the response)". So responding often feels both high-stakes and unrewarding for someone who cares about communicating something to their audience as a whole.
Anyway, I like Duncan's post as a way of making the point about effort / implied obligation to both onlookers and interlocutors, but something else that might help is some kind of guide / reminder / explanation of the principles of being a good, high-effort onlooker.
What specifically do you think is obviously wrong about the village idiot <-> Einstein gap? This post from 2008 which uses the original chart makes some valid points that hold up well today, and rebuts some real misconceptions that were common at the time.
The original chart doesn't have any kind of labels or axes, but here are two ways you could plausibly view it as "wrong" in light of recent developments with LLMs:
I think it's debatable how much Eliezer was actually making the stronger versions of the claims above circa 2008, and it also remains to be seen how wrong they actually are when applied to actual superintelligence, instead of whatever you want to call the AI models of today.
OTOH, here are a couple of ways that the village idiot <-> Einstein post looks prescient:
> Maybe Einstein has some minor genetic differences from the village idiot, engine tweaks. But the brain-design-distance between Einstein and the village idiot is nothing remotely like the brain-design-distance between the village idiot and a chimpanzee. A chimp couldn't tell the difference between Einstein and the village idiot, and our descendants may not see much of a difference either.
(and something like a 4B parameter open-weights model is analogous to the chimpanzee)
Whereas I expect that e.g. Robin Hanson in 2008 would have been quite surprised by the similarity and non-specialization among different models of today.
Thanks for the report, should be fixed now.
The issue was that the LW GraphQL API has apparently changed slightly. The user query suggested here no longer works, but something like:
```graphql
{
  GetUserBySlug(slug: "max-h") {
    _id
    slug
    displayName
    pageUrl
    postCount
    commentCount
    createdAt
  }
}
```
works fine.
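For anyone scripting against the API, here's a minimal sketch of running that query from Python. The query itself is the one confirmed above; the endpoint URL and the use of the `requests` library are my assumptions about the usual setup, not something confirmed in this thread:

```python
# Minimal sketch: run the GetUserBySlug query against the LW GraphQL API.
# Assumes the endpoint is https://www.lesswrong.com/graphql (an assumption,
# not confirmed above) and uses the third-party `requests` library.
import requests

QUERY = """
{
  GetUserBySlug(slug: "max-h") {
    _id
    slug
    displayName
    pageUrl
    postCount
    commentCount
    createdAt
  }
}
"""

resp = requests.post(
    "https://www.lesswrong.com/graphql",
    json={"query": QUERY},
    timeout=30,
)
resp.raise_for_status()

user = resp.json()["data"]["GetUserBySlug"]
print(user["displayName"], user["postCount"], user["commentCount"])
```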
I prefer (classical / bedrock) liberalism as a frame for confronting societal issues with AGI, and am concerned by the degree to which recent right-wing populism has moved away from those tenets.
Liberalism isn't perfect, but it's the only framework I know of that even has a chance of resulting in a stable consensus. Other frames, left or right, have elements of coercion and / or majoritarianism that inevitably lead to legitimacy crises and instability as stakes get higher and disagreements wider.
My understanding is that a common take on both the left and right these days is that, well, liberalism actually hasn't worked out so great for the masses recently, so everyone is looking for something else. But to me, every "something else" on either side just seems worse. Scott Alexander wrote a bunch of essays about ten years ago on various aspects of liberalism and why they're good, and I'm not aware of any comprehensive rebuttal that includes an actually workable alternative.
Liberalism doesn't imply that everyone needs to live under liberalism (especially my own preferred version / implementation of it), but it does provide a kind of framework for disagreement and settling differences in a way that is more peaceful and stable than any other proposal I've seen.
So, for example, on protectionism: I think most forms of it (especially economic protectionism) are bad and counterproductive policy. But even well-implemented protectionism requires a justification beyond just "it actually is in the national interest to do this", because it infringes on standard individual rights and freedoms. These freedoms aren't necessarily absolute, but they're important enough that a government needs strong and ongoing justification for why it's allowed to do that kind of thing at all. AGI might be a pretty strong justification!
But at the least, I think anyone proposing a framework or policy position which deviates from a standard liberal position should acknowledge liberalism as a kind of starting point / default, and be able to say why the tradeoff of any individual freedom or right is worth making, each and every time it is made. (And I do not think right-wing frameworks and their standard bearers are even trying to do this, and that is very bad.)
I think it was fine for Nate to delete your comment and block you, and fine for you to repost it as a short form.
> But my anecdote is a valid report of the historical consequences of talking with Nate – just as valid as the e/acc co-founder's tweet.
"just as valid" [where validity here = topical] seems like an overclaim here. And at the time of your comment, Nate had already commented in other threads, which are now linked in a footnote in the OP:
By "cowardice" here I mean the content, not the tone or demeanor. I acknowledge that perceived arrogance and overconfidence can annoy people in communication, and can cause backlash. For more on what I mean by courageous vs cowardly content, see this comment. I also spell out the argument more explicitly in this thread.
So it's a bit of a stretch to say that any AI safety-related discussion or interpersonal interaction that Nate has ever had in any context is automatically topical.
I also think your description of Nate's decision to delete your comment as "not ... allowing people to read negative truths about his own behavior" is somewhat overwrought. Both of the comment threads you linked were widely read and discussed at the time, and this shortform will probably also get lots of eyeballs and attention.
At the very least, there is an alternate interpretation, which is that the comment really was off-topic in Nate's view, and given the history between the two of you, he chose to block + delete instead of re-litigating or engaging in a back-and-forth that both of you would probably find unpleasant and unproductive. Maybe it would have been more noble or more wise of him to simply let your comment stand without direct engagement, but that can also feel unpleasant (for Nate or others).
I gave YIMBYism as an example of a policy agenda that would benefit from more widespread support for liberalism, not as something I personally support in all cases.
A liberal argument for NIMBYism could be: people are free to choose the level of density and development that they want within their own communities. But they should generally do so deliberately and through the rule of law, rather than through opposition to individual developments (via a heckler's veto, discretionary review processes that effectively require developers to lobby local politicians and woo random interest groups, etc.). Existing strict zoning laws are fine in places where they already exist, but new laws and restrictions should be wary of treading on the rights of existing property owners, and of creating more processes that increase discretionary power of local lawmakers and busybodies.
Hmm, I'm not so pessimistic. I don't think the core concepts of liberalism are so complex or unintuitive that the median civically engaged citizen can't follow along given an amenable background culture.
And lots of policy, political philosophy, culture, big ideas, etc. are driven by elites of some form, not just liberalism. Ideas and culture among elites can change and spread very quickly. I don't think a liberal renaissance requires "wrestling control" of any particular institutions so much as a cultural shift that is already happening to some degree (it just needs slightly better steering IMO).
I don't personally feel screwed over, and I suspect many of the people in the coalitions I mentioned feel similarly. I am sympathetic to people who do feel that way, but I am not really asking them to unilaterally honor anything. The only real concrete ask in my post is that people who already broadly support liberalism, or whose preferred policy agendas would benefit from it, be more outspoken about their support.
(To clarify, I have been using "liberalism" as a shorthand for "bedrock liberalism", referring specifically to the principles I listed in the first paragraph - I don't think everything that everyone usually calls "liberalism" is broadly popular with all the coalitions I listed, but most would at least pay lip service to the specific principles in the OP.)
I don't really agree with the characterization of recent history as people realizing that "liberalism isn't working", and to the degree that I would advocate for any specific policy change, I support a "radical incrementalist" approach. e.g. maybe the endpoint of the ideal concept of property rights is pretty far from wherever we are right now, but to get there we should start with small, incremental changes that respect existing rights and norms as much as possible.
So for example, I think Georgism is a good idea in general, but not a panacea, and a radical and sudden implementation would be illiberal for some of the reasons articulated by @Matthew Barnett here.
I think a more realistic way to phase in Georgism that respects liberal principles would mainly take the form of more efficient property tax regimes: instead of complex rules and constant fights over property tax assessment valuations, there would be (hopefully slightly less complex) fights over land valuations, with phase-ins that keep the overall tax burden roughly the same. Some property owners with relatively low-value property on higher-value land (e.g. an old / low density building in Manhattan) would eventually pay more on the margin, while others with relatively high-value property on lower-value land (e.g. a newer / high density building in the exurbs) would pay a bit less. Lots of people in the middle of the property-vs-land value spectrum would pay about the same; the toy calculation below makes this concrete.
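Here's a sketch of that burden-shifting as a revenue-neutral calculation. The parcels and all dollar figures are hypothetical, purely for illustration:

```python
# Toy illustration of a revenue-neutral shift from a tax on total property
# value (land + improvements) to a tax on land value only.
# All parcels and numbers are hypothetical.

# parcel name -> (land value, improvement value), in $M
parcels = {
    "old low-rise on a Manhattan lot": (9.0, 1.0),  # valuable land, cheap building
    "new tower on an exurban lot":     (1.0, 9.0),  # cheap land, valuable building
    "mid-spectrum parcel":             (5.0, 5.0),
}

PROPERTY_TAX_RATE = 0.01  # status quo: 1% of land + improvements

# Revenue under the current property tax.
revenue = sum((land + imp) * PROPERTY_TAX_RATE for land, imp in parcels.values())

# Pick the land-value tax rate that keeps total revenue unchanged.
total_land = sum(land for land, _ in parcels.values())
lvt_rate = revenue / total_land  # works out to 2% here

for name, (land, imp) in parcels.items():
    before = (land + imp) * PROPERTY_TAX_RATE
    after = land * lvt_rate
    print(f"{name}: ${before:.2f}M -> ${after:.2f}M")
# The Manhattan lot pays more (0.10 -> 0.18), the exurban tower pays less
# (0.10 -> 0.02), and the mid-spectrum parcel pays the same (0.10 -> 0.10).
```

But this doesn't really get at the core philosophical objections you or others might have with current norms around the concept of property ownership in general.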
The screenshotted tweet says that you're required to install something like CrowdStrike, which is correct, and also seems consistent with the ChatGPT dialogue you linked?
There are long lists of computer security practices and procedures needed to pass an audit for compliance with standards like ISO 27001, PCI DSS, and SOC 2, which many firms large and small are subject to (sometimes, but not necessarily, by law - e.g. companies often need to pass a SOC 2 audit because their customers ask for it).
As you say, none of these standards name specific software or vendors that you have to use in order to satisfy an auditor, but it's often much less of a headache to use a "best in class" off-the-shelf product (like CrowdStrike) that is marketed specifically as satisfying specific requirements in these standards, vs. trying to cobble together a complete compliance posture using tools or products that were not designed specifically to satisfy those requirements.
A big part of the marketing for a product like CrowdStrike is that it has specific features which precisely and unambiguously satisfy more items in various auditor checklists than competitors.
So "opens up an expensive new chapter of his book" is colorful and somewhat exaggerated, but I wouldn't describe it as "misinformation" - it's definitely pointing at something real, which is that a lot of enterprise security software is sold and bought as an exercise in checking off specific checklist items in various kinds of audits, and how easy / convenient / comprehensive a solution makes box-checking is often a bigger selling point than how much actual security it provides, or what the end user experience is actually like.