Honestly, this sounds stupid, but I would start a regular meditation practice if you don't already have one. Commit to spending fifteen minutes a day for a year and if you don't see any changes you can just drop it with no harm done.
Don't expect anything to happen for a while though; just do it every day w/ zero expectations. My guess is that within a few months you will notice positive changes in your life, including in love/relationships.
Good luck whether you try the meditation or not :)
This is so good. Meditation has helped me more than anything else at staring into the abyss-- you are just there with the thoughts so they are harder to ignore. It's amazing how much you can still ignore them though!
It depends on what you mean by "go very badly" but I think I do disagree.
Again, I don't know what I'm talking about, but "AGI" is a little too broad for me. If you told me that you could more or less simulate my brain in a computer program and that this brain had the same allegiances to other AIs and itself that I currently have for other humans, and the same allegiance to humans that I currently have for even dogs (which I absolutely love), then yes I think it's all over and we die.
If you say to me, "FTPickle, I'm not going to define AGI. ...
Yeah I totally agree with that article-- it's almost tautologically correct in my view, and I agree that the implications are wild
I'm specifically pushing back on the ppl saying it is likely that humanity ends during my daughter's lifetime-- I think that claim specifically is overconfident. If we extend the timeline then my objection collapses.
Hmmm. I don't feel like I'm saying that. This isn't the perfect analogy, but it's kind of like AI doomers are looking at an ecosystem and predicting that if you introduce wolves into the system the wolves will become overpopulated and crush everything. There may be excellent reasons to believe this:
I just think that it's too complex to really feel confident, even if you have really excellent reasons to beli...
It's not symmetric in my view: The person positing a specific non-baseline thing has the burden of proof, and the more elaborate the claim, the higher the burden of proof.
"AI will become a big deal!" faces fewer problems than "AI will change our idea of humanity!" faces fewer problems than "AI will kill us all!" faces fewer problems than "AI will kill us all with nanotechnology!"
He who gets to choose which thing is baseline and which thing gets the burden of proof, is the sovereign.
(That said I agree that burden of proof is on people claiming AGI is a thing, that it is happening soon probably, and that it'll probably be existential catastrophe. But I think the burden of proof is much lighter than the weight of arguments and evidence that has accumulated so far to meet it.)
I'd be interested to hear your take on this article.
Thank you-- I love hearing pessimistic takes on this.
The only issue I'd take is that I believe most people here are genuinely frightened of AI. The seductive part I think isn't the excitement of AI, but the excitement of understanding something important that most other people don't seem to grasp.
I felt this during COVID when I realized what was coming before my co-workers etc did. There is something seductive about having secret knowledge, even if you realize it's kind of gross to feel good about it.
My main hope in terms of AGI being f...
I don't know anything about this topic. My initial thought is "Well, maybe I'd move to Montana." Why is this no good?
Oh my god this is so great. You may just be restating things that are obvious to anyone who studies and thinks about this stuff, but to me it is quite illuminating and I've only read a portion so far. I bookmarked this into my "Awesome Reads" folder
From my limited understanding, one concern is that an AI will more or less think to itself, "Well, let's see. I'm not currently powerful enough to overtake all humans, but I recognize that this should in fact be my ultimate goal. I'm going to basically wait here until either I come up with a better plan, or things develop technologically such that I will in fact be able to kill them all. For now, I'm going to keep hidden the fact that I'm thinking these thoughts. The humans have no idea I'm up to this!"
If I have this right, my quest...
I feel like I understand this topic reasonably well for a casual reader, and I'm trying to convince my friends that they should take the threat seriously and think about it. I haven't moved the needle on any of them, which actually surprises me. This isn't really so much a question as just putting it out there. This is usually where I get stuck when talking to bright people who haven't considered AGI before:
Them: OK but what is it going to do?
Me: Well I'm not totally sure, but if it's much more intelligent than us, whatever it will come up with cou...
The approach I often take here is to ask the person how they would persuade an amateur chess player who believes they can beat Magnus Carlsen because they've discovered a particularly good opening with which they've won every amateur game they've tried it in so far.
Them: Magnus Carlsen will still beat you, with near certainty
Me: But what is he going to do? This opening is unbeatable!
Them: He's much better at chess than you, he'll figure something out
Me: But what though? I can't think of any strategy that beats this
Them: I don't know, maybe he'll find a way...
If someone builds an AGI, it's likely that they want to actually use it for something and not just keep it in a box. So eventually it'll be given various physical resources to control (directly or indirectly), and then it might be difficult to just shut down. I discussed some possible pathways in Disjunctive Scenarios of Catastrophic AGI Risk; here are some excerpts:
DSA/MSA Enabler: Power Gradually Shifting to AIs
The historical trend has been to automate everything that can be automated, both to reduce costs and because machines can do things better than humans.
Twist: It's actually an AGI who made this post to lull me into one second spent on this god-forsaken website not gripped with fear and anti-AI sentiment.
Just kidding more juneberry content plz
"Knowing that a medium-strength system of inscrutable matrices is planning to kill us, does not thereby let us build a high-strength system of inscrutable matrices that isn't planning to kill us."
Maybe if people become convinced of the first clause, people will start destroying GPUs or a war will start or something?
Yeah-- for me the difference with an AI is that maybe they could make you live forever. I think it's trivially obvious that no scenario that ends in death, no matter how gruesome and inhumane, would be sufficient to make us consider suicide just to avoid its possibility. It's pretty dumb to consider killing yourself to avoid death 🙂
Living forever though might in theory change the calculation.
I 100% agree with you on the EV calculation (I'm still alive after all); it just struck me that I might rather be dead than deal with a semi-malevolent AI.
Not the place for this comment, but I'm just fully discovering this topic and thinking about it.
Just to say, I'm an extremely joyful and happy person with a baby on the way so I hope nobody takes this the wrong way-- I'm not serious about this, but I think it's interesting.
Doesn't the precautionary principle in some way indicate that we should kill ourselves? Everyone seems to agree that AGI is on the way. Everyone also seems to agree that its effects are unpredictable. Imagine an AI who calculates that the best way to keep humans ...
Long-term meditator here (~4400 total hours).
I actually think you may have it backwards here: "In the mental realm, the opposite may be true: the average person may be experiencing a pretty thorough mental workout just from day-to-day life"
In my view, mental "exercise" actually requires an absence of stimulation. This is increasingly difficult to find in the modern world, due to email, text, twitter etc.
Also in my view this may be why so many people are complaining of burnout. Boredom, I believe, may have benefits for mental health, and boredom is declining in our world.
Just my two cents-- great piece :)
One quick thing is to consider animals-- I bet my dog is conscious, but I'm not sure she has "thoughts" as we conceive of them.
I bet you can have thoughts without consciousness though. I'm imagining consciousness as something like a computer program. The program is written such that various sub-modules probabilistically pitch "ideas" based on inputs from the environment, etc. ("Pay more attention to that corner of the room!" "Start running!") Another module sort of probabilistically "evaluates" these ideas and either initiates beha...
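Just to make the toy model concrete, here's roughly what I have in mind in code. This is purely illustrative-- the module names, the environment, and the weighting scheme are all made up by me, not anything from the actual consciousness literature:

```python
import random

# Hypothetical sub-modules: each looks at the "environment" and pitches
# an idea, with a weight reflecting how strongly it pitches it.
def vision_module(env):
    if env.get("movement_in_corner"):
        return ("pay attention to that corner", 0.9)
    return ("nothing to see", 0.1)

def threat_module(env):
    if env.get("loud_noise"):
        return ("start running", 0.8)
    return ("stay put", 0.2)

def evaluate(pitches):
    """Probabilistically 'evaluate' the pitched ideas: pick one,
    weighted by how strongly each module pitched it."""
    ideas, weights = zip(*pitches)
    return random.choices(ideas, weights=weights, k=1)[0]

env = {"movement_in_corner": True, "loud_noise": False}
pitches = [vision_module(env), threat_module(env)]
action = evaluate(pitches)
print(action)  # usually "pay attention to that corner", given the 0.9 vs 0.2 weights
```

The point being: nothing in this loop obviously requires anything like subjective experience, which is why I suspect you can have "thoughts" without consciousness.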
You do a huge service to the world by writing these up. Thank you!