All of FTPickle's Comments + Replies

You do a huge service to the world by writing these up.  Thank you!

:)

Excellent article, in my view

Honestly, this sounds stupid, but I would start a regular meditation practice if you don't already have one.  Commit to spending fifteen minutes a day for a year and if you don't see any changes you can just drop it with no harm done.  

Don't expect anything to happen for a while though; just do it every day with zero expectations.  My guess is within a few months you will notice positive changes in your life, including in love/relationships

Good luck whether you try the meditation or not :)

This is so good.  Meditation has helped me more than anything else at staring into the abyss-- you are just there with the thoughts so they are harder to ignore.  It's amazing how much you can still ignore them though!

It depends on what you mean by "go very badly" but I think I do disagree. 

Again, I don't know what I'm talking about, but "AGI" is a little too broad for me.  If you told me that you could more or less simulate my brain in a computer program, and that this brain had the same allegiance to other AIs and to itself that I currently have to other humans, and the same allegiance to humans that I currently have even to dogs (which I absolutely love), then yes, I think it's all over and we die.

If you say to me, "FTPickle, I'm not going to define AGI. ... (read more)

2Daniel Kokotajlo5mo
I'm happy to define it more specifically -- e.g. if you have time, check out What 2026 Looks Like [https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like] and then imagine that in 2027 the chatbots finally become superhuman at all relevant intellectual domains (including agency / goal-directedness / coherence) whereas before they had been superhuman in some but subhuman in others. That's the sort of scenario I think is likely. It's a further question whether or not the AGIs would be aligned, to be fair. But much has been written on that topic as well.

Yeah I totally agree with that article-- it's almost tautologically correct in my view, and I agree that the implications are wild

I'm specifically pushing back on the people saying it is likely that humanity ends during my daughter's lifetime--  I think that claim specifically is overconfident.  If we extend the timeline then my objection collapses.

3Daniel Kokotajlo5mo
OK, fair. Well, as I always say these days, quite a lot of my views flow naturally from my AGI timelines. It's reasonable to be skeptical that AGI is coming in about 4 years, but once you buy that premise, basically everything else I believe becomes pretty plausible. In particular, if you think AGI is coming in 2027, it probably seems pretty plausible that humanity will be unprepared & more likely than not that things will go very badly. Would you agree?  

Hmmm.  I don't feel like I'm saying that.  This isn't the perfect analogy, but it's kind of like AI doomers are looking at an ecosystem and predicting that if you introduce wolves into the system the wolves will become overpopulated and crush everything.  There may be excellent reasons to believe this:

  1. Wolves are more powerful than any other animal
  2. They have a powerful hunting drive
  3. The other animals have never encountered wolves

etc etc

I just think that it's too complex to really feel confident, even if you have really excellent reasons to beli... (read more)

1ghostwheel5mo
I agree with you that we shouldn't be too confident. But given how sharply capabilities research is accelerating—timelines on TAI are being updated down, not up—and in the absence of any obvious gating factor (e.g. current costs of training LMs) that seems likely to slow things down much if at all, the changeover from a world in which AI can't doom us to one in which it can doom us might happen faster than seems intuitively possible. Here's a quote from Richard Ngo on the 80,000 Hours podcast that I think makes this point (episode link: https://80000hours.org/podcast/episodes/richard-ngo-large-language-models/#transcript):

"I think that a lot of other problems that we’ve faced as a species have been on human timeframes, so you just have a relatively long time to react and a relatively long time to build consensus. And even if you have a few smaller incidents, then things don’t accelerate out of control.

"I think the closest thing we’ve seen to real exponential progress that people have needed to wrap their heads around on a societal level has been COVID, where people just had a lot of difficulty grasping how rapidly the virus could ramp up and how rapidly people needed to respond in order to have meaningful precautions.

"And in AI, it feels like it’s not just one system that’s developing exponentially: you’ve got this whole underlying trend of things getting more and more powerful. So we should expect that people are just going to underestimate what’s happening, and the scale and scope of what’s happening, consistently — just because our brains are not built for visualising the actual effects of fast technological progress or anything near exponential growth in terms of the effects on the world."

I'm not saying Richard is an "AI doomer", but hopefully this helps explain why some researchers think there's a good chance we'll make AI that can ruin the future within the next 50

It's not symmetric in my view: The person positing a specific non-baseline thing has the burden of proof, and the more elaborate the claim, the higher the burden of proof.  

"AI will become a big deal!" faces fewer problems than "AI will change our idea of humanity!" faces fewer problems than "AI will kill us all!" faces fewer problems than "AI will kill us all with nanotechnology!"

He who gets to choose which thing is baseline and which thing gets the burden of proof is the sovereign.

(That said, I agree that the burden of proof is on people claiming that AGI is a thing, that it is probably happening soon, and that it'll probably be an existential catastrophe. But I think the burden of proof is much lighter than the weight of arguments and evidence that has accumulated so far to meet it.)

I'd be interested to hear your take on this article.

Thank you--  I love hearing pessimistic takes on this.

The only issue I'd take is that I believe most people here are genuinely frightened of AI.  The seductive part, I think, isn't the excitement of AI, but the excitement of understanding something important that most other people don't seem to grasp.

I felt this during COVID when I realized what was coming before my co-workers etc. did.  There is something seductive about having secret knowledge, even if you realize it's kind of gross to feel good about it.

My main hope in terms of AGI being f... (read more)

4niknoble7mo
Interesting point. Combined with the other poster saying he really would feel dread if a sage told him AGI was coming in 2040, I think I can acknowledge that my wishful thinking frame doesn't capture the full phenomenon. But I would still say it's a major contributing factor. Like I said in the post, I feel a strong pressure to engage in wishful thinking myself, and in my experience any pressure on myself is usually replicated in the people around me.    Regardless of the exact mix of motivations, I think this-- is exactly what's going on here. I have a lot of thoughts about when it's valid to trust authorities/experts, and I'm not convinced this is one of those cases. That being said, if you are committed to taking your view on this from experts, then you should consider whether you're really following the bulk of the experts. I remember a thread on here a while back that surveyed a bunch of leaders in ML (engineers at Deepmind maybe?), and they were much more conservative with their AI predictions than most people here. Those survey results track with the vibe I get from the top people in the space.

I don't know anything about this topic.  My initial thought is "Well, maybe I'd move to Montana."  Why is this no good?

3Just Learning7mo
After the rest of the USA is destroyed, a very unstable situation (especially taking into account how many people have guns) is quite likely. In my opinion, countries (and remote parts of countries) that will not be under attack at all are much better.
4jefftk7mo
Montana has many ICBM silos, so is a relatively likely target in a serious confrontation involving the US.

Oh my god this is so great.  You may just be restating things that are obvious to anyone who studies and thinks about this stuff, but to me it is quite illuminating and I've only read a portion so far.  I bookmarked this into my "Awesome Reads" folder

:)

From my limited understanding, one concern is that an AI will more or less think to itself, "Well, let's see.  I'm not currently powerful enough to overtake all humans, but I recognize that this should in fact be my ultimate goal.  I'm going to basically wait here until either I come up with a better plan, or things develop technologically such that I will in fact be able to kill them all.  For now, I'm going to keep hidden the fact that I'm thinking these thoughts.  The humans have no idea I'm up to this!"

If I have this right, my quest... (read more)

2Jay Bailey9mo
Technically, you can never be sure - it's possible an AI has developed and hidden capabilities from us, unlikely as it seems. That said, to the best of my knowledge we have not developed any sort of AI system capable of long-term planning over the course of days or weeks in the real world, which would be a prerequisite for plans of this nature. So, that would be my threshold - when an AI is capable of making long-term real-world plans, it is at least theoretically capable of making long-term real-world plans that would lead to bad outcomes for us.

I feel like I understand this topic reasonably well for a casual reader, and I'm trying to convince my friends that they should take the threat seriously and think about it.  I haven't moved the needle on any of them, which actually surprises me.  This isn't really so much a question as just putting it out there. This is usually where I get stuck when talking to bright people who haven't considered AGI before:

Them: OK but what is it going to do?

Me: Well I'm not totally sure, but if it's much more intelligent than us, whatever it will come up with cou... (read more)

1plex10mo
There's a related Stampy answer [https://ui.stampy.ai/?state=6968_], based on Critch's post [https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps].  It requires them to be willing to watch a video, but seems likely to be effective. That's the static version; see Stampy [https://ui.stampy.ai/?state=6968_] for a live one, which might have been improved since this post.
4Charlie Steiner10mo
If you really needed to get a piece of DNA printed and grown in yeast, but could only browse the internet and use email, what sorts of emails might you try sending? Maybe find some gullible biohackers, or pretend to be a grad student's advisor? The DNA codes for a virus that will destroy human civilization. The general principle at work is that sending emails is "physically doing something," just as much as moving my fingers is.

The approach I often take here is to ask the person how they would persuade an amateur chess player who believes they can beat Magnus Carlsen because they've discovered a particularly good opening that has won them every amateur game they've tried it in so far.

Them: Magnus Carlsen will still beat you, with near certainty

Me: But what is he going to do? This opening is unbeatable!

Them: He's much better at chess than you, he'll figure something out

Me: But what though? I can't think of any strategy that beats this

Them: I don't know, maybe he'll find a way... (read more)

If someone builds an AGI, it's likely that they want to actually use it for something and not just keep it in a box. So eventually it'll be given various physical resources to control (directly or indirectly), and then it might be difficult to just shut down. I discussed some possible pathways in Disjunctive Scenarios of Catastrophic AGI Risk; here are some excerpts:

DSA/MSA Enabler: Power Gradually Shifting to AIs

The historical trend has been to automate everything that can be automated, both to reduce costs and because machines can do things better than h

... (read more)
0mukashi10mo
Maybe they have a point
6Chris_Leong10mo
One thing that's worth sharing is that if it's connected to the internet, it'll be able to spread a bunch of copies, and these copies can pursue independent plans. Some copies may be pursuing plans that are intentionally designed as distractions, and this will make it easy to miss the real threats (I expect there will be multiple).
-5green_leaf10mo
5lc10mo
One particular sub-answer is that a lot of people tend to project human time preference onto AIs in a way that doesn't actually make sense. Humans get bored and are unwilling to devote their entire lives to plans, but that's not an immutable fact about intelligent agents. Why wouldn't an AI be willing to wait a hundred years, or start long-running robotics research programmes in pursuit of a larger goal?
2NickGabs10mo
While this doesn't answer the question exactly, I think important parts of the answer include the fact that AGI could upload itself to other computers, as well as acquire resources (minimally, money) entirely through the internet (e.g. by investing in stocks).  A superintelligent system with access to trillions of dollars and with huge numbers of copies of itself on computers throughout the world more obviously has a lot of potentially very destructive actions available to it than one stuck on one computer with no resources.
4JohnGreer10mo
Thanks for writing this out!  I think most writing glosses over this point because it'd be hard to know exactly how it would kill us and it doesn't really matter, but it hurts the persuasiveness of the discussion not to have more detailed, gamed-out scenarios.

Twist: It's actually an AGI who made this post to lull me into spending one second on this god-forsaken website without being gripped by fear and anti-AI sentiment.

Just kidding more juneberry content plz

Grunch but

"Knowing that a medium-strength system of inscrutable matrices is planning to kill us, does not thereby let us build a high-strength system of inscrutable matrices that isn't planning to kill us."

Maybe if people become convinced of the first clause, they will start destroying GPUs, or a war will start, or something?

Yeah-- for me the difference with an AI is that maybe they could make you live forever. I think it's trivially obvious that no scenario that ends in death, no matter how gruesome and inhumane, would be sufficient to make us consider suicide just to avoid its possibility. It's pretty dumb to consider killing yourself to avoid death 🙂

Living forever though might in theory change the calculation.

I 100% agree with you on the EV calculation (I'm still alive after all); it just struck me that I might rather be dead than deal with a semi-malevolent AI.

Point taken... (read more)

[This comment is no longer endorsed by its author]
3Rob Bensinger1y
Yeah, I agree that this can happen; my objection is to the scenario's probability rather than its coherence.

Not the place for this comment, but I'm just fully discovering this topic and thinking about it.  

Just to say, I'm an extremely joyful and happy person with a baby on the way so I hope nobody takes this the wrong way-- I'm not serious about this, but I think it's interesting.

Doesn't the precautionary principle in some way indicate that we should kill ourselves?  Everyone seems to agree that AGI is on the way.  Everyone also seems to agree that its effects are unpredictable.  Imagine an AI who calculates that the best way to keep humans ... (read more)

[This comment is no longer endorsed by its author]
4Rob Bensinger1y
Sounds like one of the many, many reductios of the precautionary principle to me. If we should kill ourselves given any nonzero probability of a worse-than-death outcome, regardless of how low the probability is and regardless of the probability assigned to other outcomes, then we're committing ourselves to a pretty silly and unnecessary suicide in a large number of possible worlds. This doesn't even have to do with AGI; it's not as though you need to posit AGI (or future tech at all) in order to spin up hypothetical scenarios where something gruesome happens to you in the future. If you ditch the precautionary principle and make a more sensible EV-based argument like 'I think hellish AGI outcomes are likely enough in absolute terms to swamp the EV of non-hellish possible outcomes', then I disagree with you, but on empirical grounds rather than 'your argument structure doesn't work' grounds. I agree with Nate's take [https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards?commentId=ikTdYJJ7HLKuoDu6S]:

Long-term meditator here (~4400 total hours).  

I actually think you may have it backwards here: "In the mental realm, the opposite may be true: the average person may be experiencing a pretty thorough mental workout just from day-to-day life"

In my view, mental "exercise" actually requires an absence of stimulation.  This is increasingly difficult to find in the modern world, due to email, text, Twitter, etc.

Also, in my view, this may be why so many people are complaining of burnout.  I believe boredom may have benefits for mental health, and boredom is declining in our world.

Just my two cents-- great piece :)  

4VipulNaik1y
Good point! It could be that both kinds of mental exercise (excess stimulation and lack of stimulation) are important for building mental strength; modern society provides the former in abundance (and particularly so for LessWrong readers!), so the form of exercise we're constrained on is the lack-of-stimulation kind (and that's where meditation helps). How far-fetched does that sound?

One quick thing is to consider animals-- I bet my dog is conscious, but I'm not sure she has "thoughts" as we conceive of them.  

I bet you can have thoughts without consciousness though.  I'm imagining consciousness as something like a computer program.  The program is written such that various sub-modules probabilistically pitch "ideas" based on inputs from the environment, etc. ("Pay more attention to that corner of the room!" "Start running!")  Another module sort of probabilistically "evaluates" these ideas and either initiates beha... (read more)
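To make that picture a bit more concrete, here is a tiny toy sketch in Python (purely illustrative; every module name and "idea" below is invented for the example, not a claim about how brains or any real system actually work) of sub-modules pitching ideas and a separate module probabilistically evaluating them:

```python
import random

# Toy illustration only: hypothetical sub-modules "pitch" candidate actions
# with a weight, and a separate evaluator module picks one probabilistically.

def vision_module(environment):
    if "movement in corner" in environment:
        return [("pay more attention to that corner of the room", 0.8)]
    return []

def threat_module(environment):
    if "loud noise" in environment:
        return [("start running", 0.9)]
    return [("keep doing the current task", 0.3)]

def evaluator(pitches):
    # Probabilistically "evaluate" the pitched ideas: higher-weight pitches
    # are more likely to win, but none is guaranteed to.
    if not pitches:
        return None
    actions, weights = zip(*pitches)
    return random.choices(actions, weights=weights, k=1)[0]

environment = {"movement in corner", "loud noise"}
pitches = vision_module(environment) + threat_module(environment)
print(evaluator(pitches))  # e.g. "start running"
```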

2Olitness2y
Thank you for this reply! As you mentioned, it is mostly about how we define thoughts. In this case, I would define it as you did. It does not have to be expressed in words; it can be mostly feelings. I think consciousness then seems to be a way of processing thoughts.