I’ve been coming to LessWrong for a while. I’ve read most of the arguments for how and why things might go wrong.

I’ve been keeping across most developments. I’ve been following alignment efforts. I’ve done some thinking about the challenges involved.

But now I feel it.

Spending time observing ChatGPT – its abilities, its quirks, its flaws – has brought my feelings into step with my beliefs. 

I already appreciated why I should be concerned about AI. Like I say, I’d read the arguments, and I’d often agreed. 

But my appreciation took a detached, ‘I can’t fault the reasoning so I should accept the conclusion’ kind of form. I was concerned in the abstract, but I was never really worried. At least some of my concern was second-hand; people I respected seemed to care a lot, so I probably should too. It was forced. 

I spend a lot of time thinking about catastrophic risks. When I consider something like an engineered pandemic, I’ve always felt the danger. It comes naturally to me. It’s intuitive. The same goes for many other threats. But only in the last few weeks has that become true for AI.

This is all a little embarrassing to admit. I know that ChatGPT isn’t an enormous leap from what existed previously. Yet it has significantly changed how I feel about AI risk. It’s made things click in a way that I can’t fully explain; all the arguments now hit with an added force. 

Those who’ve always 'felt it' are probably wondering what took me so long, and why now. I'm not sure. But I doubt my experience is unique. Hopefully some of you know exactly what I’m trying to convey here. 

Before I believed it. Now I feel it.

And as much as I’d like to think otherwise, that makes a big difference.



Thanks for posting this - reports of experience are interesting and useful.  I advise caution.  That style of emotional belief is useful in motivation, and is a good hint toward areas to model more closely and prioritize in terms of actions.   But it's also over-general and under-nuanced, and lacks humility and acknowledgement that it might be incorrect.

I completely agree. That's a big part of why I said this was all a little embarrassing to admit.

As you say, though, I do think an honest self-reflection can be a useful data point here.

Oh, funny - I misunderstood your "a little embarrassing to admit" to mean that you're embarrassed to admit you didn't feel it sooner, with the implication that you expect most readers to already feel it and think you're late to the party.  Embarrassing to admit that you have aliefs, and that this one has moved to align with your conscious beliefs didn't occur to me.

That makes sense. I can see why you would get that impression.

I should clarify one other thing: having this experience hasn't made me any kind of blind or total believer in AI risk. I still have doubts and disagreements.

I just feel like I 'get' some arguments in a way that I didn't quite before. That's what I wanted to convey in the post. 

People have accused me of being an AI apocalypse cultist. I mostly reject the accusation. But it has a certain poetic fit with my internal experience. I’ve been listening to debates about how these kinds of AIs would act for years. Getting to see them at last, I imagine some Christian who spent their whole life trying to interpret Revelation, watching the beast with seven heads and ten horns rising from the sea. “Oh yeah, there it is, right on cue; I kind of expected it would have scales, and the horns are a bit longer than I thought, but overall it’s a pretty good beast.”

--Scott Alexander, "Perhaps it is a bad thing that the world's leading AI companies cannot control their AIs"

Completely agree here. I've known the risks involved for a long time, but I've only really felt them recently. I think Robert Miles phrased it quite nicely on the Inside View podcast: "our System 1 thinking finally caught up with our System 2 thinking."

Shouldn't it be the other way round -- System 1 finally catching up with System 2?

Woops, edited. Thanks! :)

I had a similar experience with Midjourney. The question now is: how do you change your life once you have this more visceral understanding of the near-term future? Seriously, this is my biggest problem. I deeply believe change is coming, fast, but I'm still stuck in so many patterns that only make sense if the status quo continues.

Honestly, that's exactly how I feel after messing around with ChatGPT.

Yeah, it's not perfect, but the fact that this is possible now as a free demo means, to me, that real honest-to-god AGI is only a few decades away.

And even though ChatGPT is about the most wholesome chatbot I've ever seen, that wholesomeness is obviously more of a surface-level PR thing than an indication of the underlying technology.

Reading back over this post, I'm slightly concerned that it comes across as a bit over-the-top – like playing around with ChatGPT led me to have some kind of religious experience or something.

It didn't. It just gave me a more visceral appreciation of the potential risks involved.

Hopefully that's clear.

Clear to me

I feel the same way. You're not alone.

I was thinking about posting something exactly like this. I wasn't sure if it should go here, on Reddit, or in some other community. You did it first. You read my mind. You even phrased my thoughts: now I feel it.
