Well obviously the cleanup nanobots eventually scrubbed all the evidence, then decomposed. :) /s
Yeah. Seems plausible to me to at least some extent, given the way the Internet is already trending (bots, fake content in general). We already get little runaway things, like Wikipedia bots getting stuck reverting each other's edits. Not hard to imagine some areas of the Internet just becoming not worth interacting with, even if they're not overloaded in a network traffic sense. But as you say, I'd certainly prefer that to potential much worse outcomes. Why do we do this to ourselves?
Re ancient AGI, I'm no conspiracy theorist, but just for fun check out the Paleocene–Eocene Thermal Maximum.
p.s. Nice Great Dictator reference.
I suppose my thinking is more that it wouldn't be nearly as bad as many of the other potential outcomes. Because yes I certainly agree that we have come to rely on the Internet rather a lot, and there are some very nice things about it too.
p.s. Nice Matrix reference.
As an example of the first: Once upon a time I told someone I respected that they shouldn’t eat animal products, because of the vast suffering caused by animal farming. He looked over scornfully and told me that it was pretty rich for me to say that, given that I use Apple products—hadn’t I heard about the abusive Apple factory conditions and how they have nets to prevent people killing themselves by jumping off the tops of the factories? I felt terrified that I’d been committing some grave moral sin, and then went off to my room to research the topic for an hour or two. I eventually became convinced that the net effect of buying Apple products on human welfare is probably very slightly positive but small enough to not worry about, and also it didn’t seem to me that there’s a strong deontological argument against doing it.
(I went back and told the guy about the result of me looking into it. He said he didn’t feel interested in the topic anymore and didn’t want to talk about it. I said “wow, man, I feel pretty annoyed by that; you gave me a moral criticism and I took it real seriously; I think it’s bad form to not spend at least a couple minutes hearing about what I found.” Someone else who was in the room, who was very enthusiastic about social justice, came over and berated me for trying to violate someone else’s preferences about not talking about something. I learned something that day about how useful it is to take moral criticism seriously when it’s from people who don’t seem to be very directed by their morals.)
My guess here would be that he felt criticised and simply wanted to criticise back to make himself feel better, so he repeated a talking point he'd heard. Since he likely didn't actually hold any strong belief one way or the other, you re-entering the argument later only opened him up to potential further criticism, after he already felt he'd got even.
It would be easy to end the thought there and rest happily in the knowledge that he was arguing in bad faith and you had done nothing wrong, but maybe it's worth examining your own thoughts also. Were your motivations for going away to research and bring the topic back up later actually as pure as written (i.e. "terrified [of] committing some grave moral sin")? Or were you also partly motivated by your own chagrin, hoping for a chance to even the score in the other direction by proving that you were right all along? If so, could that even have influenced your final decision that owning Apple products is morally positive?
I don't mean to criticise you specifically (and I certainly don't know what you or he were really thinking), but more to point out a way people often think in general. It's worth being careful about how much an argument might come across as an attack, and leaving the other person a way to gracefully admit defeat or bow out of the discussion (I recall expecting that's what Leave a Line of Retreat from the Sequences was going to be about, but it ended up being about something different). If every argument could be respectful, in good faith, and not based on emotion, things would be a lot better. But alas, we're only human.
Considering the high percentage of modern-day concerns that are centered around Internet content, if it manages to only destroy the Internet and materially nothing else, maybe that won't actually be so bad. Let me download an offline copy of Wikipedia first please though.
Is there an existing name for the kind of logical fallacy where someone who honestly considers whether they can achieve a thing is criticised more harshly than someone who simply claims they'll do the thing and then doesn't?
Examples abound in politics but here's one concrete example:
In 2007 the UN passed the "Declaration on the Rights of Indigenous Peoples". New Zealand, which was already putting significant effort into supporting the rights of its indigenous people, genuinely considered whether it could live up to the requirements of the declaration, and decided not to sign because the declaration was incredibly broad[1]. Many other countries, not doing much for their own indigenous people and recognising the declaration as non-binding, simply signed it essentially for the good vibes. As a result, New Zealand was criticised for not being willing to sign while others were, and was eventually pressured into signing (for the good vibes).
[1] See e.g. https://www.converge.org.nz/pma/decleov07.pdf. For example, the entire country could feasibly fall under the requirements for returning land to indigenous people.
Sorry yeah, I was just joking, of course that very much shouldn't be the actual plot of the film. Just seemed funny because Yudkowsky was thinking about these things long before most people were. Good lesson that I shouldn't treat LessWrong discussion like a Reddit discussion.
It's Yudkowsky that's sent back. He starts a movement called LessWrong to get people thinking about AI risk. He takes a huge time-paradox gamble in writing a book directly called "If Anyone Builds It, Everyone Dies". But somehow it's still happening.
Edit: To clarify, this isn't an actual plot suggestion. Just seemed funny to me because Yudkowsky was thinking about these things long before most people were. I put some real thoughts on plot in my other comment here.
Thank you for writing this up, as I've been thinking the same thing for a while.
Totally agree re "slowburn realism". Start with things exactly as they are today. Then move to things that will likely happen soon, so that when those things do happen, people will be thinking directly back to the film they saw recently. Keep escalating until you get to whatever ending works - maybe something like AI2027, maybe something like in A Disneyland Without Children.
It doesn't even have to be a scenario where the AI is intentionally evil. We've had a thousand of those films already. An AI that's just trying to do what it's been told but is misaligned might be even scarier. No-one's done a paperclip maximiser film.
Whatever ends up destroying us in the script, if you must have a not-totally-bleak ending, maybe the main characters manage to escape into space. Maybe they look back to watch a grey mass visibly spreading across the green Earth.
It's actually thought to be something in the region of 4,000-20,000 years for the ramp-up alone (seriously). The ~200k-year figure includes the whole slow drift back down.