If anyone wants to have a voice chat with me about a topic that I'm interested in (see my recent post/comment history to get a sense), please contact me via PM.
My main "claims to fame":
That's a good point. I hope Joe ends up focusing more on this type of work during his time at Anthropic.
What are the disagreement votes for[1], given that my comment is made of questions and a statement of confusion? What are the voters disagreeing about?
(I've seen this in the past as well, disagreement votes on my questioning comments, so figured I'd finally ask what people have in mind when they're voting like this.)
2 votes totaling -3 agreement, at the time of this writing
Sorry, you might be taking my dialog too seriously, unless you've made such observations yourself, which of course is quite possible since you used to work at OpenAI. I'm personally far from the places where such dialogs might be occurring, so I don't have any observations of them myself. It was completely imagined in my head, as a dark comedy about how counter to human (or most humans') nature strategic thinking/action about AI safety is, and partly a bid for sympathy for the people caught in the whiplashes, to whom this kind of thinking or intuition doesn't come naturally.
Edit: To clarify a bit more, B's reactions like "WTF!" were written more for comedic effect, rather than trying to be realistic or based on my best understanding/predictions of how a typical AI researcher would actually react. It might still be capturing some truth, but again just want to make sure people aren't taking my dialog more seriously than I intend.
The Inhumanity of AI Safety
A: Hey, I just learned about this idea of artificial superintelligence. With it, we can achieve incredible material abundance with no further human effort!
B: Thanks for telling me! After a long slog and incredible effort, I'm now a published AI researcher!
A: No wait! Don't work on AI capabilities, that's actually negative EV!
B: What?! Ok, fine, at huge personal cost, I've switched to AI safety.
A: No! The problem you chose is too legible!
B: WTF! Alright, you win, I'll give up my sunk costs yet again, and pick something illegible. Happy now?
A: No wait, stop! Someone just succeeded in making that problem legible!
B: !!!
Legible problems are pretty easy to give examples for. The most legible problem (in terms of actually gating deployment) is probably wokeness for xAI, and things like not expressing an explicit desire to cause human extinction, not helping with terrorism (like building bioweapons) on demand, etc., for most AI companies.
Giving an example for an illegible problem is much trickier since by their nature they tend to be obscure, hard to understand, or fall into a cognitive blind spot. If I give an example of a problem that seems real to me, but illegible to most, then most people will fail to understand it or dismiss it as not a real problem, instead of recognizing it as an example of a real but illegible problem. This could potentially be quite distracting, so for this post I decided to just talk about illegible problems in a general, abstract way, and discuss general implications that don't depend on the details of the problems.
But if you still want some explicit examples, see this thread.
maybe we could get some actual concrete examples of illegible problems and reasons to think they are important?
See Problems in AI Alignment that philosophers could potentially contribute to and this comment from a philosopher saying that he thinks they're important, but "seems like there's not much of an appetite among AI researchers for this kind of work" suggesting illegibility.
Yeah it's hard to think of a clear improvement to the title. I think I'm mostly trying to point out that thinking about legible vs illegible safety problems leads to a number of interesting implications that people may not have realized. At this point the karma is probably high enough to help attract readers despite the boring title, so I'll probably just leave it as is.
I guess I'm pretty guilty of this, as I tend to write "here's a new concept or line of thought, and its various implications" style posts, and sometimes I just don't want to spoil the ending/conclusion. Maybe I'm afraid people won't read the post if they can just glance at the title and decide whether they already agree or disagree with it, or think they know what I'm going to say? The Nature of Offense is a good example of the latter, where I could have easily titled it "Offense is about Status".
Not sure if I want to change my habit yet. Any further thoughts on this, or references about this effect, how strong it is, etc.?