Honestly, this sounds stupid, but I would start a regular meditation practice if you don't already have one. Commit to fifteen minutes a day for a year, and if you don't see any changes you can just drop it with no harm done.
Don't expect anything to happen for a while, though; just do it every day with zero expectations. My guess is that within a few months you will notice positive changes in your life, including in love/relationships.
Good luck whether you try the meditation or not :)
This is so good. Meditation has helped me more than anything else at staring into the abyss: you are just there with the thoughts, so they are harder to ignore. It's amazing how much you can still ignore them, though!
It depends on what you mean by "go very badly" but I think I do disagree.
Again, I don't know what I'm talking about, but "AGI" is a little too broad for me. If you told me that you could more or less simulate my brain in a computer program and that this brain had the same allegiances to other AIs and itself that I currently have for other humans, and the same allegiance to humans that I currently have for even dogs (which I absolutely love), then yes I think it's all over and we die.
If you say to me, "FTPickle, I'm not going to define AGI. I'll just promise you that in 2027 an AGI emerges. Is it more likely than not that humanity is wiped out by this event?" I would gulp and pick 'no.'
The difference between "plausible" and "likely" is huge, I think. Again, huge caveat that AGI may be more specifically defined than I'm aware of.
Yeah, I totally agree with that article-- it's almost tautologically correct in my view, and I agree that the implications are wild.
I'm specifically pushing back on the people saying it is likely that humanity ends during my daughter's lifetime-- I think that claim specifically is overconfident. If we extend the timeline, then my objection collapses.
Hmmm. I don't feel like I'm saying that. This isn't a perfect analogy, but it's kind of like AI doomers are looking at an ecosystem and predicting that if you introduce wolves into the system, the wolves will become overpopulated and crush everything. There may be excellent reasons to believe this.
I just think that it's too complex to really feel confident, even if you have really excellent reasons to believe it will happen. Maybe wolves do horribly on hills, and we didn't know that before we let them loose in this environment, etc.
It's not on me to come up with reasons why the wolves won't take over-- simply saying "it's incredibly complex and we shouldn't be too confident about this, even though it seems reasonable" is enough in my view.
It's not symmetric, in my view: the person positing a specific non-baseline thing has the burden of proof, and the more elaborate the claim, the higher that burden.
"AI will become a big deal!" faces fewer problems than "AI will change our idea of humanity!" faces fewer problems than "AI will kill us all!" faces fewer problems than "AI will kill us all with nanotechnology!"
Thank you-- I love hearing pessimistic takes on this.
The only issue I'd take is that I believe most people here are genuinely frightened of AI. The seductive part, I think, isn't the excitement of AI but the excitement of understanding something important that most other people don't seem to grasp.
I felt this during COVID when I realized what was coming before my co-workers did. There is something seductive about having secret knowledge, even if you realize it's kind of gross to feel good about it.
My main hope, in terms of AGI being far off, is that there's some sort of circle-jerk going on on this website, where everyone is basing their opinion on everyone else's, and so on.
I mean, obviously the arguments themselves are good and compelling, and the true luminaries in the field have good reasons-- but take me, for instance. I'm genuinely frightened of AGI and believe there is a ~10% chance my daughter will be killed by it before the end of her natural life, but honestly all of my reasons for worry boil down to "other smart people seem to think this."
Like, I get the arguments for AGI doom. They make sense. But the truth is if Eliezer Y came out tomorrow and said "holy shit I was wrong we don't have to worry at all because of the MHR-5554 module theorem" and then Nick Bostrom said "Yup! Stop worrying everyone. Thank you MHR-5554! What a theorem!" I would instantly stop worrying.
I think (hope?) that many people on this site are in the same boat as me.
I don't know anything about this topic. My initial thought is "Well, maybe I'd move to Montana." Why is this no good?
Excellent article, in my view