Max H

Most of my posts and comments are about AI and alignment. Posts I'm most proud of, which also provide a good introduction to my worldview:

  • Without a trajectory change, the development of AGI is likely to go badly
  • Steering systems, and a follow up on corrigibility.
  • "Aligned" foundation models don't imply aligned systems
  • LLM cognition is probably not human-like
  • Gradient hacking via actual hacking
  • Concrete positive visions for a future without AGI

I also created Forum Karma, and wrote a longer self-introduction here.

PMs and private feedback are always welcome.

NOTE: I am not Max Harms, author of Crystal Society. I'd prefer for now that my LW postings not be attached to my full name when people Google me for other reasons, but you can PM me here or on Discord (m4xed) if you want to know who I am.

Posts

  • Max H's Shortform (5 · 2y · 7)

Comments
Shortform
Max H · 4h · 20

    The audit requirements Mark is talking about don't exist. He just completely made them up.

The screenshotted tweet says that you're required to install something like CrowdStrike, which is correct and also seems consistent with the ChatGPT dialogue you linked?

There are long lists of computer security practices and procedures needed to pass an audit for compliance with a standard like ISO 27001, PCI DSS, SOC 2, etc. that many firms large and small are subject to (sometimes but not necessarily by law - e.g. companies often need to pass a SOC 2 audit because their customers ask for it).

As you say, none of these standards name specific software or vendors that you have to use in order to satisfy an auditor, but it's often much less of a headache to use a "best in class" off-the-shelf product (like CrowdStrike) that is marketed specifically as satisfying particular requirements in these standards than to try to cobble together a complete compliance posture from tools or products that weren't designed with those requirements in mind.

A big part of the marketing for a product like CrowdStrike is that it has specific features which precisely and unambiguously satisfy more items in various auditor checklists than competitors.

So "opens up an expensive new chapter of his book" is colorful and somewhat exaggerated, but I wouldn't describe it as "misinformation" - it's definitely pointing at something real, which is that a lot of enterprise security software is sold and bought as an exercise in checking off specific checklist items in various kinds of audits, and how easy / convenient / comprehensive a solution makes box-checking is often a bigger selling point than how much actual security it provides, or what the end user experience is actually like.

Obligated to Respond
Max H · 5d* · 4334

    I would just evaluate your argument on my own and I would evaluate the counterargument in the comment on my own.

 

The precise issue is that a sizable fraction of the audience will predictably not do this, or will do it lazily or incorrectly.

On LessWrong, this shows up in voting patterns: for example, a controversial post will sometimes get some initial upvotes, and then the karma trend will swing around based on the comments and who had the last word. Or, a long back-and-forth ends up getting far fewer votes (and presumably, eyeballs) than the top-level post / comment.

My impression is that most authors aren't that sensitive to karma per se, but they are sensitive to the mental model of the audience that this swinging implies: many onlookers are letting the author and their interlocutor(s) do their thinking for them, with varying levels of attention span, and "highly upvoted" often serves as a proxy for "onlookers believe this is worth responding to (but won't necessarily read the response)". So responding often feels both high-stakes and unrewarding for someone who cares about communicating something to their audience as a whole.

Anyway, I like Duncan's post as a way of making the point about effort / implied obligation to both onlookers and interlocutors, but something else that might help is some kind of guide / reminder / explanation about principles of being a good / high-effort onlooker.

Buck's Shortform
Max H · 22d · 30

What specifically do you think is obviously wrong about the village idiot <-> Einstein gap?  This post from 2008 which uses the original chart makes some valid points that hold up well today, and rebuts some real misconceptions that were common at the time.

The original chart doesn't have any kind of labels or axes, but here are two ways you could plausibly view it as "wrong" in light of recent developments with LLMs:

  • Duration: the chart could be read as a claim that the gap in wall-clock time between the development of village-idiot-level AI and Einstein-level AI would be more like hours or days rather than months or years.
  • Size and dimensionality of mind-space below the superintelligence level: the chart could be read as a claim that the size of mind-space between village idiot and Einstein is relatively small, so it's surprising to Eliezer-200x that there are lots of current AIs landing in between them, and staying there for a while.

I think it's debatable how much Eliezer was actually making the stronger versions of the claims above circa 2008, and it also remains to be seen how wrong they actually are when applied to actual superintelligence instead of whatever you want to call the AI models of today.

OTOH, here are a couple of ways that the village idiot <-> Einstein post looks prescient:

  • Qualitative differences between the current best AI models and second-to-third tier models are small. Most AI models today are roughly similar to each other in terms of overall architecture and training regime, but there are various tweaks and special sauce that e.g. Opus and GPT-5 have that Llama 4 doesn't. So you have something like Llama 4 : GPT-5 :: village idiot : Einstein, which is predicted by:

    Maybe Einstein has some minor genetic differences from the village idiot, engine tweaks.  But the brain-design-distance between Einstein and the village idiot is nothing remotely like the brain-design-distance between the village idiot and a chimpanzee.  A chimp couldn't tell the difference between Einstein and the village idiot, and our descendants may not see much of a difference either.

(and something like a 4B parameter open-weights model is analogous to the chimpanzee)

Whereas I expect that e.g. Robin Hanson in 2008 would have been quite surprised by the similarity and non-specialization among different models of today.

  • Implications for scaling. Here's a claim on which I think the Eliezer-200x Einstein chart makes a prediction that is likely to outperform other mental models of 2008, as well as various contemporary predictions based on scaling "laws" or things like METR task time horizon graphs:

    "The rough number of resources, in terms of GPUs, energy, wall clock time, lines of Python code, etc. needed to train and run best models today (e.g. o4, GPT-5), are sufficient (or more than sufficient) to train and run a superintelligence (without superhuman / AI-driven levels of optimization / engineering / insight)."

    My read of task-time-horizon and scaling law-based models of AI progress is that they more strongly predict that further AI progress will basically require more GPUs. It might be that the first Einstein+ level AGI is in fact developed mostly through scaling, but these models of progress are also more surprised than Eliezer-2008 when it turns out that (ordinary, human-developed) algorithmic improvements and optimizations allow for the training of e.g. a GPT-4-level model with many fewer resources than it took to train the original GPT-4 just a few years ago.
Forum Karma: view stats and find highly-rated comments for any LW user
Max H · 1mo · 21

Thanks for the report, should be fixed now.

The issue was that the LW GraphQL API has changed slightly, apparently. The user query suggested here no longer works, but something like:

{
  GetUserBySlug(slug: "max-h") {
    _id
    slug
    displayName
    pageUrl
    postCount
    commentCount
    createdAt
  }
}

works fine.
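For reference, here is a minimal sketch (mine, not part of the original comment) of how one might send that query from Python. It assumes the public GraphQL endpoint lives at https://www.lesswrong.com/graphql and accepts unauthenticated read-only queries posted as JSON; the User-Agent string is a made-up placeholder.

import requests

# The GetUserBySlug query from above, trimmed to a few fields.
query = """
{
  GetUserBySlug(slug: "max-h") {
    _id
    slug
    displayName
    postCount
    commentCount
  }
}
"""

# Assumption: the endpoint accepts a standard GraphQL POST body of the form {"query": ...}.
resp = requests.post(
    "https://www.lesswrong.com/graphql",
    json={"query": query},
    headers={"User-Agent": "forum-karma-example/0.1"},  # placeholder UA
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["GetUserBySlug"]["displayName"])

If the schema changes again, the "errors" key in the JSON response is usually the quickest way to see which field or query name went away.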

Applying right-wing frames to AGI (geo)politics
Max H · 2mo · 219

I prefer (classical / bedrock) liberalism as a frame for confronting societal issues with AGI, and am concerned by the degree to which recent right-wing populism has moved away from those tenets.

Liberalism isn't perfect, but it's the only framework I know of that even has a chance of resulting in a stable consensus. Other frames, left or right, have elements of coercion and / or majoritarianism that inevitably lead to legitimacy crises and instability as stakes get higher and disagreements wider.

My understanding is that a common take on both the left and right these days is that, well, liberalism actually hasn't worked out so great for the masses recently, so everyone is looking for something else. But to me every "something else" on both the left and right just seems worse - Scott Alexander wrote a bunch of essays like 10y ago on various aspects of liberalism and why they're good, and I'm not aware of any comprehensive rebuttal that includes an actually workable alternative.

Liberalism doesn't imply that everyone needs to live under liberalism (especially my own preferred version / implementation of it), but it does provide a kind of framework for disagreement and settling differences in a way that is more peaceful and stable than any other proposal I've seen.

So for example on protectionism, I think most forms of protectionism (especially economic protectionism) are bad and counterproductive economic policy. But even well-implemented protectionism requires a justification beyond just "it actually is in the national interest to do this", because it infringes on standard individual rights and freedoms. These freedoms aren't necessarily absolute, but they're important enough that it requires strong and ongoing justification for why a government is even allowed to do that kind of thing. AGI might be a pretty strong justification!

But at the least, I think anyone proposing a framework or policy position which deviates from a standard liberal position should acknowledge liberalism as a kind of starting point / default, and be able to say why the tradeoff of any individual freedom or right is worth making, each and every time it is made. (And I do not think right-wing frameworks and their standard bearers are even trying to do this, and that is very bad.)

TurnTrout's shortform feed
Max H · 3mo · 1919

I think it was fine for Nate to delete your comment and block you, and fine for you to repost it as a short form.

    But my anecdote is a valid report of the historical consequences of talking with Nate – just as valid as the e/acc co-founder's tweet.

"just as valid" [where validity here = topical] seems like an overclaim here. And at the time of your comment, Nate had already commented in other threads, which are now linked in a footnote in the OP:

    By "cowardice" here I mean the content, not the tone or demeanor. I acknowledge that perceived arrogance and overconfidence can annoy people in communication, and can cause backlash. For more on what I mean by courageous vs cowardly content, see this comment. I also spell out the argument more explicitly in this thread.

So it's a bit of a stretch to say that any AI safety-related discussion or interpersonal interaction that Nate has ever had in any context is automatically topical.

I also think your description of Nate's decision to delete your comment as "not ... allowing people to read negative truths about his own behavior" is somewhat overwrought. Both of the comment threads you linked were widely read and discussed at the time, and this shortform will probably also get lots of eyeballs and attention. 

At the very least, there is an alternate interpretation, which is that the comment really was off-topic in Nate's view, and given the history between the two of you, he chose to block + delete instead of re-litigating or engaging in a back-and-forth that both of you would probably find unpleasant and unproductive. Maybe it would have been more noble or more wise of him to simply let your comment stand without direct engagement, but that can also feel unpleasant (for Nate or others).

Support for bedrock liberal principles seems to be in pretty bad shape these days
Max H · 3mo · 61

I gave YIMBYism as an example of a policy agenda that would benefit from more widespread support for liberalism, not as something I personally support in all cases.

A liberal argument for NIMBYism could be: people are free to choose the level of density and development that they want within their own communities. But they should generally do so deliberately and through the rule of law, rather than through opposition to individual developments (via a heckler's veto, discretionary review processes that effectively require developers to lobby local politicians and woo random interest groups, etc.). Existing strict zoning laws are fine in places where they already exist, but new laws and restrictions should be wary of treading on the rights of existing property owners, and of creating more processes that increase discretionary power of local lawmakers and busybodies.

Support for bedrock liberal principles seems to be in pretty bad shape these days
Max H · 3mo · 52

Hmm, I'm not so pessimistic. I don't think the core concepts of liberalism are so complex or unintuitive that the median civically engaged citizen can't follow along given an amenable background culture.

And lots of policy, political philosophy, culture, big ideas, etc. are driven by elites of some form, not just liberalism. Ideas and culture among elites can change and spread very quickly. I don't think a liberal renaissance requires "wrestling control" of any particular institutions so much as a cultural shift that is already happening to some degree (it just needs slightly better steering IMO).

Support for bedrock liberal principles seems to be in pretty bad shape these days
Max H · 3mo · 20

I don't personally feel screwed over, and I suspect many of the people in the coalitions I mentioned feel similarly. I am sympathetic to people who do feel that way, but I am not really asking them to unilaterally honor anything. The only thing in my post that's a real concrete ask is for people who do already broadly support liberalism, or who have preferred policy agendas that would benefit from liberalism, to be more outspoken about their support.

(To clarify, I have been using "liberalism" as a shorthand for "bedrock liberalism", referring specifically to the principles I listed in the first paragraph - I don't think everything that everyone usually calls "liberalism" is broadly popular with all the coalitions I listed, but most would at least pay lip service to the specific principles in the OP.)

Support for bedrock liberal principles seems to be in pretty bad shape these days
Max H · 3mo · 20

I don't really agree with the characterization of recent history as people realizing that "liberalism isn't working", and to the degree that I would advocate for any specific policy change, I support a "radical incrementalist" approach. e.g. maybe the endpoint of the ideal concept of property rights is pretty far from wherever we are right now, but to get there we should start with small, incremental changes that respect existing rights and norms as much as possible.

So for example, I think Georgism is a good idea in general, but not a panacea, and a radical and sudden implementation would be illiberal for some of the reasons articulated by @Matthew Barnett  here.

I think a more realistic way to phase in Georgism that respects liberal principles would mainly take the form of more efficient property tax regimes - instead of complex rules and constant fights over property tax assessment valuations, there would hopefully be slightly less complex fights over land valuations, with phase-ins that keep the overall tax burden roughly the same. Some property owners with relatively low-value property on higher value land (e.g. an old / low density building in Manhattan) would eventually pay more on the margin, while others with relatively high-value property on lower value land (e.g. a newer / high density building in the exurbs) would pay a bit less. Lots of people in the middle of the property-vs-land value spectrum would pay about the same. But this doesn't really get at the core philosophical objections you or others might have with current norms around the concept of property ownership in general.
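To make the revenue-neutral intuition concrete, here's a toy sketch with entirely made-up figures (my illustration, not something from the original comment): it picks a land-only tax rate that raises the same total revenue as a 1% tax on land plus buildings, and shows how the burden shifts between the kinds of parcels described above.

# Toy illustration: hypothetical parcels as (land value, building value).
parcels = {
    "old low-density building on expensive land": (9_000_000, 1_000_000),
    "new high-density building on cheap land": (1_000_000, 9_000_000),
    "typical parcel": (3_000_000, 3_000_000),
}

old_rate = 0.01  # 1% tax on land + building value
total_revenue = sum(old_rate * (land + bldg) for land, bldg in parcels.values())

# Pick a land-only rate that keeps total revenue exactly the same.
land_rate = total_revenue / sum(land for land, _ in parcels.values())

for name, (land, bldg) in parcels.items():
    old_tax = old_rate * (land + bldg)
    new_tax = land_rate * land
    print(f"{name}: {old_tax:,.0f} -> {new_tax:,.0f}")

With these numbers the low-density parcel on expensive land goes from 100,000 to 180,000, the high-density parcel on cheap land goes from 100,000 to 20,000, and the typical parcel stays at 60,000, with total revenue unchanged at 260,000.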

  • Support for bedrock liberal principles seems to be in pretty bad shape these days (32 · 3mo · 52)
  • Bayesian updating in real life is mostly about understanding your hypotheses (68 · 2y · 4)
  • Emmett Shear to be interim CEO of OpenAI (21 · 2y · 5)
  • Concrete positive visions for a future without AGI (41 · 2y · 28)
  • Trying to deconfuse some core AI x-risk problems (34 · 2y · 13)
  • An explanation for every token: using an LLM to sample another LLM (35 · 2y · 5)
  • Actually, "personal attacks after object-level arguments" is a pretty good rule of epistemic conduct (37 · 2y · 15)
  • Forum Karma: view stats and find highly-rated comments for any LW user (60 · 2y · 18)
  • 10 quick takes about AGI (35 · 2y · 17)
  • Four levels of understanding decision theory (12 · 2y · 11)