I recall a comment here on LessWrong where this worked successfully.
I used to have really strong emotions that could be triggered by trivial things, which caused both me and the people I was around a lot of suffering.
I managed to permanently stop this, reducing my emotional suffering by about 90%! I did this by resolving to completely own and deal with my emotions myself, and told relevant people about this commitment. Then I was just pretty miserable and lonely feeling for about 3 months, and then these emotional reactions just stopped completely without any additional effort. I think I permanently lowered my level of neuroticism by doing this.
- Comment by user Adele Lopez, May 2020
The main reason I remember that comment is the follow-up by user Alex_Shleizer, which suggests why the approach might work and wonders whether it could even be a groundbreaking treatment:
There is research that claims that suffering might serve as an honest signal to get help from your group, and that humans suffer more than other animals for this reason.
You might have taught your system 1 that emotional suffering is useless for signaling purposes and it stopped using it.
If it's true it could be an extremely impactful and even groundbreaking intervention.
Reminds me of how 'pick-up artists' have that idea of 'negging'. Normal friendly people really do often flirt by saying negative things to each other, but never things that they think will actually hurt; most of the time the things aren't even really true, and are more likely sarcastic. It also needs to go both ways. Just straight-up insulting someone in the hope they'll try to prove themselves to you is a very different thing.
The proliferation of AI bots and content on Reddit, Twitter, YouTube, everywhere, is becoming more and more visible and detrimental to the platforms. But it also occurs to me that no AI is choosing to create a Twitter account and start posting, or upload Suno tracks to Spotify, or put AI videos on YouTube. All these choices are still made by humans, mostly driven by the same old perverse incentives.
It's actually thought to be something in the region of 4,000-20,000 years for the ramp-up (seriously). The 200k-year figure includes the whole slow drift back down.
Well obviously the cleanup nanobots eventually scrubbed all the evidence, then decomposed. :) /s
Yeah. Seems plausible to me to at least some extent, given the way the Internet is already trending (bots, fake content in general). We already get little runaway things, like Wikipedia bots getting stuck reverting each other's edits. Not hard to imagine some areas of the Internet just becoming not worth interacting with, even if they're not overloaded in a network traffic sense. But as you say, I'd certainly prefer that to potential much worse outcomes. Why do we do this to ourselves?
Re ancient AGI, I'm no conspiracy theorist, but just for fun check out the Paleocene–Eocene Thermal Maximum.
p.s. Nice Great Dictator reference.
I suppose my thinking is more that it wouldn't be nearly as bad as many of the other potential outcomes. Because yes I certainly agree that we have come to rely on the Internet rather a lot, and there are some very nice things about it too.
p.s. Nice Matrix reference.
As an example of the first: Once upon a time I told someone I respected that they shouldn’t eat animal products, because of the vast suffering caused by animal farming. He looked over scornfully and told me that it was pretty rich for me to say that, given that I use Apple products—hadn’t I heard about the abusive Apple factory conditions and how they have nets to prevent people killing themselves by jumping off the tops of the factories? I felt terrified that I’d been committing some grave moral sin, and then went off to my room to research the topic for an hour or two. I eventually became convinced that the net effect of buying Apple products on human welfare is probably very slightly positive but small enough to not worry about, and also it didn’t seem to me that there’s a strong deontological argument against doing it.
(I went back and told the guy about the result of me looking into it. He said he didn’t feel interested in the topic anymore and didn’t want to talk about it. I said “wow, man, I feel pretty annoyed by that; you gave me a moral criticism and I took it real seriously; I think it’s bad form to not spend at least a couple minutes hearing about what I found.” Someone else who was in the room, who was very enthusiastic about social justice, came over and berated me for trying to violate someone else’s preferences about not talking about something. I learned something that day about how useful it is to take moral criticism seriously when it’s from people who don’t seem to be very directed by their morals.)
My guess here would be that he felt criticised and simply wanted to criticise back to make himself feel better, so he repeated a talking point he'd heard. Since he likely didn't actually hold any strong belief one way or the other, you re-entering the argument later only opened him up to potential further criticism, after he already felt he'd got even.
It would be easy to end the thought there and rest happily in the knowledge that he was in the wrong and you were in the right, but maybe it's worth examining your own thoughts also. Were your motivations for going away to research and bring the topic back up later actually as pure as written (i.e. "terrified [of] committing some grave moral sin")? Or were you partly motivated also by your own chagrin, hoping for a chance to even the score in the other direction by proving that you were right all along? If so, could that even have influenced your final conclusion that owning Apple products is morally positive?
I don't mean to criticise you specifically (and I certainly don't know what you or he were really thinking), but more to point out a way people often think in general. It's worth being careful about how much an argument might come across as an attack, and leaving the other person a way to gracefully admit defeat or bow out of the discussion (I recall expecting that's what Leave a Line of Retreat from the Sequences was going to be about, but it ended up being about something different). If every argument could be respectful, in good faith, and not based on emotion, things would be a lot better. But alas, we're only human.
Considering the high percentage of modern-day concerns that are centered around Internet content, if it manages to destroy only the Internet and not much else materially, maybe that won't actually be so bad. Let me download an offline copy of Wikipedia first, though, please.
A similar post from earlier this year: Someone should fund an AGI Blockbuster.