I recall a comment here on LessWrong where this worked successfully.
I used to have really strong emotions that could be triggered by trivial things, which caused both me and the people I was around a lot of suffering.
I managed to permanently stop this, reducing my emotional suffering by about 90%! I did this by resolving to completely own and deal with my emotions myself, and told relevant people about this commitment. Then I just felt pretty miserable and lonely for about 3 months, and then these emotional reactions just stopped completely without any additional effort. I think I permanently lowered my level of neuroticism by doing this.
- Comment by user Adele...
Reminds me of how 'pick-up artists' have that idea of 'negging'. Normal friendly people really do often flirt by saying negative things to each other, but never things they think will actually hurt; most of the time they're not even true, more sarcastic than sincere. It also needs to go both ways. Just straight-up insulting someone in the hope they'll try to prove themselves to you is a very different thing.
The proliferation of AI bots and content on Reddit, Twitter, YouTube, everywhere, is becoming more and more visible and detrimental to the platforms. But it also occurs to me that no AI is choosing to create a Twitter account and start posting, or upload Suno tracks to Spotify, or put AI videos on YouTube. All these choices are still made by humans, mostly with the same old perverse incentives.
It's actually thought to be something in the region of 4,000–20,000 years for the ramp-up (seriously). The 200k years includes the whole slow drift back down.
Well obviously the cleanup nanobots eventually scrubbed all the evidence, then decomposed. :) /s
Yeah. Seems plausible to me to at least some extent, given the way the Internet is already trending (bots, fake content in general). We already get little runaway things, like Wikipedia bots getting stuck reverting each other's edits. Not hard to imagine some areas of the Internet just becoming not worth interacting with, even if they're not overloaded in a network traffic sense. But as you say, I'd certainly prefer that to potential much worse outcomes. Why do we do this to ourselves?
Re ancient AGI, I'm no conspiracy theorist, but just for fun check out the Paleocene–Eocene Thermal Maximum.
p.s. Nice Great Dictator reference.
I suppose my thinking is more that it wouldn't be nearly as bad as many of the other potential outcomes. Because yes, I certainly agree that we have come to rely on the Internet rather a lot, and there are some very nice things about it too.
p.s. Nice Matrix reference.
…As an example of the first: Once upon a time I told someone I respected that they shouldn’t eat animal products, because of the vast suffering caused by animal farming. He looked over scornfully and told me that it was pretty rich for me to say that, given that I use Apple products—hadn’t I heard about the abusive Apple factory conditions and how they have nets to prevent people killing themselves by jumping off the tops of the factories? I felt terrified that I’d been committing some grave moral sin, and then went off to my room to research the topic for an hour or two. I eventually became convinced that the…
Considering the high percentage of modern-day concerns that are centered on Internet content, if it manages to destroy only the Internet and materially nothing else, maybe that won't actually be so bad. Please let me download an offline copy of Wikipedia first, though.
Is there an existing name for the kind of logical fallacy where someone who honestly considers whether they can achieve a thing is criticised more harshly than someone who simply claims they'll do the thing and then doesn't?
Examples abound in politics, but here's one concrete case:
In 2007 the UN passed the "Declaration on the Rights of Indigenous Peoples". New Zealand, which was already putting significant effort into supporting the rights of its indigenous people, genuinely considered whether it could uphold the requirements of the declaration, and decided not to sign because the declaration is incredibly broad[1]. Many other countries, not doing much for their own indigenous people and recognising the…
I have a general prediction that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a roughly human level of ability to think and reason: breadth of knowledge concretely beyond human, but intelligence not far above it, and creativity maybe below. AI companies are predicting that next-gen LLMs will provide new insights and solve unsolved problems, but genuine insight seems to require an ability to internally regenerate concepts from lower-level primitives (as mentioned in Yudkowsky’s “Truly Part Of You”). An AI that took in data and learned to understand from inputs the way a human brain does might be able to keep advancing beyond human capacity for thought. I'm not sure that a contemporary LLM, working directly on existing knowledge as it does, will ever be able to do that. Maybe I'll be proven wrong soon.
A similar post from earlier this year: Someone should fund an AGI Blockbuster.