thenoviceoof

Posts

• [Linkpost] AI War seems unlikely to prevent AI Doom (7 points, 5mo, 6 comments)
• Is OpenAI losing money on each request? [Question] (8 points, 2y, 8 comments)

Comments

Visionary arrogance and a criticism of LessWrong voting
thenoviceoof · 4d

To be clear, I didn't downvote you: I did think "hmm, wasn't there a recent big discussion around downvote-without-commenting norms which didn't result in any changes?" and went and found it. I can see why you'd think I did downvote you; you specifically requested it! (Well, requested `if downvote then comment`)

Visionary arrogance and a criticism of LessWrong voting
thenoviceoof · 4d

You may be interested in a very similar discussion from several months ago: When you downvote, explain why.

Community Feedback Request: AI Safety Intro for General Public
thenoviceoof · 4mo

I was recently experimenting with extreme amounts of folding (LW linkpost); I'd be interested to hear from Chris whether he thinks this is too much folding.

[Linkpost] AI War seems unlikely to prevent AI Doom
thenoviceoof · 5mo

Hmm, "AI war makes s-risks more likely" seems plausible, but compared to what? If we were given a divine choice was between a non-aligned/aligned AI war, or a suffering-oriented singleton, wouldn't we choose the war? Maybe more likely relative to median/mean scenarios, but that seems hard to pin down.

Hmm, I thought I put a reference to the DoD's current Replicator Initiative into the post, but I can't find it; I must have moved it out. Still, yes, we're moving towards automated war-fighting capability.

[Linkpost] AI War seems unlikely to prevent AI Doom
thenoviceoof · 5mo

The post setup skips the "AIs are loyal to you" bit, but it does seem like this line of thought broadly aligns with the post.

I do think this does not require ASI, but I would agree that including it certainly doesn't help.

deleted
thenoviceoof · 6mo

Some logical nits:

  • Early on you mention physical attacks to destroy offline backups; these attacks would be highly visible and would contradict the dark forest nature of the scenario.
  • Perfect concealment and perfect attacks are in tension. The AI supposedly knows the structure and vulnerabilities of the systems hosting an enemy AI, but finding these things out for sure requires intrusion, which can be detected. The AI can hold off on attacking and work from suppositions, but then a perfect attack is not guaranteed and could fail due to unknowns.

Other notes:

  • Why do you assume that AIs will favor perfect, deniable strikes? An AI that strikes first can secure an early advantage; for example, if it can knock out all running copies of an enemy AI, restoring from backups will take time and leave the enemy AI vulnerable. As another example, if AI Alpha knows it is less capable than AI Bravo, but also that AI Bravo will wait to attack it perfectly, then AI Alpha attacking first (imperfectly) can force AI Bravo to abandon all its previous attack preparations to defend itself (see maneuver warfare).
    • "Defend itself" might be better put as re-taking and re-securing compromised systems; relatedly, I think cybersecurity defense is much less of an active process than this analysis seems to assume.
  • An extension of your game theory analysis implies that the US should have nuked the USSR in the 1950s, and should have been nuking all other nuclear nations over the last 70 years. This seems weird? At least, I expect it not to be persuasive to folks thinking about AI society.
  • The stylistic choice I disagree with most is the bolding: if a short paragraph has 5 different bolded statements, then... what's the point?
The Dissolution of AI Safety
thenoviceoof · 9mo

Let's say there's an illiterate man who lives a simple life, and in doing so just happens to follow all the strictures of the law, without ever being able to explain what the law is. Would you say that this man understands the law?

Alternatively, let's say there is a learned man who exhaustively studies the law, but only so he can bribe and steal and arson his way to as much crime as possible. Would you say that this man understands the law?

I would say that it is ambiguous whether the 1st man understands the law; maybe? kind of? you could make an argument I guess? it's a bit of a weird way to put it innit? Whereas the 2nd man definitely understands the law. It sounds like you would say that the 1st man definitely understands the law (I'm not sure what you would say about the 2nd man), which might be where we have a difference.

I think you could say that LLMs don't work that way, that the reader should intuitively know this, and that the word "understanding" should be treated as special in this context and not ambiguous at all; as a reader, I am saying I am confused by the choice of words, or at least that it is not explained in enough detail ahead of time.

Obviously, I'm just one reader, maybe everyone else understood what you meant; grain of salt, and all that.

The Dissolution of AI Safety
thenoviceoof · 9mo

This makes much more sense: when I was reading lines from your post like "[LLMs] understand human values and ethics at a human level", it was easy to read them as "because LLMs can output an essay on ethics, those LLMs will not do bad things". I hope you understand why I was confused; maybe you should swap "understand ethics" for something like "follow ethics"/"display ethical behavior"? And maybe try not to stick a mention of "human uploads" (which presumably do have real understanding) right before this discussion?

And responding to your clarification, I expect that old school AI safetyists would agree that an LLM that consistently reflects human value judgments is aligned (and I would also agree!), but they would say (1) this has not happened yet (for a recent incident, this hardly seems aligned; I think you can argue that this particular case was manipulated, that jailbreaks in general don't matter, or that these sorts of breaks are infrequent enough that they don't matter, but I think this obvious class of rejoinder deserves some sort of response), and (2) consistency seems unlikely to happen (as MondSemmel argues in a sibling comment).

The Dissolution of AI Safety
thenoviceoof · 9mo

I'd agree that the arguments I raise could be addressed (as endless arguments attest) and OP could reasonably end up with a thesis like "LLMs are actually human aligned by default". Putting my recommendation differently, the lack of even a gesture towards those arguments almost caused me to dismiss the post as unserious and not worth finishing.

I'm somewhat surprised, given OP's long LW tenure. Maybe this was written for a very different audience and just incidentally posted to LW? Except the linkpost tagline focuses on the 1st part of the post, not the 2nd, implying OP thought this was actually persuasive?! Is OP failing an intellectual Turing test or am I???

The Dissolution of AI Safety
thenoviceoof · 9mo

The post seems to equate LLMs understanding ethics with LLMs caring about ethics, which does not clearly follow (I can study Buddhist ethics without caring about following it). We could cast RLHF as training LLMs to care about some sort of ethics, but then jailbreaking becomes a bit of a thorny question. Alternatively, why do we assume that training the appearance of obedience is enough once you start scaling LLMs?

There are other nitpicks I will drop in short form: why assume "superhuman levels of loyalty" in upgraded LLMs? Why implicitly assume that LLMs will extend ethics correctly? Why do you think mechanistic interpretability is so much more promising than old school AI safetyists do? Why does self-supervision result in rising property values in Tokyo?

In short, you claim that old school AI safety is wrong, but it seems to me you haven't really engaged their arguments.

That said, the 2nd part of the post does seem interesting, even for old school AI safetyists - most everyone focuses on alignment, but there's a lot less focus on what happens after alignment (although nowhere close to none, even >14 years ago; this is another way that the "versus AI safety" framing does not make sense). Personally, I would recommend splitting up the post; the 2nd part stands by itself and has something new to say, while the 1st part needs way more detail to actually convince old school AI safetyists.
