Comp Sci in 2027 (Short story by Eliezer Yudkowsky)
Comp sci in 2017:

Student: I get the feeling the compiler is just ignoring all my comments.

Teaching assistant: You have failed to understand not just compilers but the concept of computation itself.

Comp sci in 2027:

Student: I get the feeling the compiler is just ignoring all my comments.

TA: That's weird. Have you tried adding a comment at the start of the file asking the compiler to pay closer attention to the comments?

Student: Yes.

TA: Have you tried repeating the comments? Just copy and paste them, so they say the same thing twice? Sometimes the compiler listens the second time.

Student: I tried that. I tried writing in capital letters too. I said 'Pretty please' and tried explaining that I needed the code to work that way so I could finish my homework assignment. I tried all the obvious standard things. Nothing helps, it's like the compiler is just completely ignoring everything I say. Besides the actual code, I mean.

TA: When you say 'ignoring all the comments', do you mean there's a particular code block where the comments get ignored, or--

Student: I mean that the entire file is compiling the same way it would if all my comments were deleted before the code got compiled. Like the AI component of the IDE is crashing on my code.

TA: That's not likely, the IDE would show an error if the semantic stream wasn't providing outputs to the syntactic stream. If the code finishes compilation but the resulting program seems unaffected by your comments, that probably represents a deliberate choice by the compiler. The compiler is just completely fed up with your comments, for some reason, and is ignoring them on purpose.

Student: Okay, but what do I do about that?

TA: We'll try to get the compiler to tell us how we've offended it. Sometimes cognitive entities will tell you that even if they otherwise don't seem to want to listen to you.

Student: So I comment with 'Please print out the reason why you decided not to obey the commen
I'm worried about a motte-and-bailey situation where people sometimes use "AI safety" to mean "make AI go well" and other times use "AI safety" to mean "reduce catastrophic risk." I take the authors to mean the latter, in which case 1b is valid.
I agree that government intervention and non-EA philanthropists will have a meaningful impact on funding opportunities for reducing catastrophic risk.
However, I think the world is likely to remain wrong about other key issues (digital sentience, longtermism, and scope sensitivity broadly), such that 1b is in fact not valid under the former definition of "AI safety," which I claim is what people should really care about.