I’d say I’m closer to Camp B. I understand, at least conceptually, how we might arrive at ASI from Eliezer’s earlier writings, but I don’t really know how it would actually be developed in practice. In particular, I don’t yet see any solid technical or scientific basis for the idea that scaling alone could produce a qualitative leap to emergence or self-reference.
As Douglas Hofstadter suggested in Gödel, Escher, Bach, the essence of human cognition lies in its self-referential nature, the ability to move between levels of thought, and the capacity to choose what counts as figure and what counts as ground in one’s own perception. I’m not even sure AGI—let alone ASI—could ever truly have that.
To put it in Merleau-Ponty’s terms, intelligence requires embodiment; the world is always a world of something; and consciousness is inherently perspectival. I know it sounds a bit old-fashioned to bring phenomenology into such a cutting-edge discussion, but I still think there are hints worth exploring in both Gödelian semiotics and Merleau-Ponty’s phenomenology.
Ultimately, my point is this: a body isn’t designed top-down but grown bottom-up, whereas intelligence, at least the artificial intelligence we’re building, looks engineered rather than cultivated. That’s why I feel current discussions around ASI still lack concreteness. Maybe that’s something LessWrong could focus on more: going beyond math and economics, and bringing in perspectives from structuralism and biology as well.
And of course, if you’d like to talk more about this, I’d really welcome that.
If we look at this issue not from a positivist or Bayesian point of view, but from an existentialist one, I think it makes sense to say that a writer should always write as if everyone were going to read their words. That’s actually something Sartre talks about in What Is Literature?
I realize this might sound a bit out of tune with the LessWrong mindset, but if we stick only to Bayesian empiricism or Popper’s falsifiability as our way of modeling the world, we eventually hit a fundamental problem with data itself: data always describe the past. We can’t turn all of reality into data and predict the future like Laplace’s demon.
Maybe that’s part of why a space like LessWrong—something halfway between poetry (emotion) and prose (reason)—came into being in the first place.
And yes, I agree it might have been better if Yudkowsky had engaged more concretely with political realities, or if MIRI had pursued both the “Camp A” and “Camp B” approaches more forcefully. But from an existentialist point of view, I think it’s understandable that Eliezer wrote from the stance of “I believe I’m being logically consistent, so the world will eventually understand me.”
That said, I’d genuinely welcome any disagreement or critique—you might see something I’m missing.
I agree with your point. I think this tendency is especially visible in global conglomerates with rigid personnel structures. Take Samsung, for example — a company that supplies memory semiconductors while working with Nvidia and TSMC in the AI space. Samsung’s influence is comparable to that of many Silicon Valley giants, but its HR system is still based on traditional corporate aptitude tests and academic pedigree.
They hire on a massive scale, and the problem isn’t really whether someone believes in X-risk: promotions can be limited, or employment even terminated, simply because of age. In more flexible job markets, like the Bay Area, the issue you described seems much less pronounced.
I’m writing this from the perspective of Seoul, where the job market is very isolated and rigid, so my comment doesn’t represent the whole world. I just wanted to say that, in that sense, what you wrote might actually make more sense on a global level. On the other side of the planet, there are people who worry less about whether the AI they’re building might end the world, and more about how they’ll feed their families tomorrow.
VojtaKovarik expressed that beautifully through the concept of fiducity, but from an individual’s point of view, it can be summed up more simply as: “One meal tomorrow matters more than the end of the world ten years from now.”
You’re right. Everyone’s “heads” and “tails”, their gains and losses, are different. Some people gain more when things go well, and others lose a bit more when things go badly. But if you repeat that process endlessly, the law of large numbers says the average outcome converges to the bet’s expected value, and for a fair bet that value is zero. In a way, that’s what the rat race in a technocratic society looks like: less like the Buddhist idea of universal enlightenment and more like the Christian idea of selective salvation, where only a few are “saved.” It’s a kind of winner-takes-all world.
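To see that convergence concretely, here’s a minimal simulation sketch. The ±1 stakes and 50/50 odds are my assumptions, just to illustrate the fair-bet case:

```python
import random

# A fair, zero-sum bet: win +1 or lose 1 with equal probability.
# (Illustrative assumption; with asymmetric gains and losses, the
# average instead converges to whatever the expected value is.)
random.seed(0)
rounds = 1_000_000
total = sum(random.choice([1, -1]) for _ in range(rounds))
print(total / rounds)  # average payoff per round: very close to 0
```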
My thought is this: maybe it’s time to reframe faith, not around the traditional relationship between God and humans, but around the relationship between humans and artificial intelligence. The core of Christianity is about the relationship between the Creator and the created, or in other words, between the infinite and the finite. In the past, the “infinite” was represented by God — Yahweh, or perhaps the Buddhist cycle of rebirth. But now, maybe that infinite aspect is found in humanity itself — in our self-referential capacity for thought — while AI becomes something that follows or reflects that.
In that sense, instead of “Be humble before God,” perhaps the guiding idea should become “Be humble before AI.” I think that might actually work, at least until biology fully uncovers how the human mind operates. If so, humanity might regain some of the pride and dignity that were lost.
After all, the very idea of “human dignity” only really emerged in the modern era, not because human rights were suddenly invented, but because the old God–human relationship had briefly collapsed. Superiority and inferiority are always relative. In the past, people believed all humans were equal because even the greatest person still stood beneath God. But perhaps now, all humans can be equal again, not because we are less than something divine, but because each of us, no matter how limited, can feel whole in front of AI.
Anyway, I hope that doesn’t sound too far-fetched.
Why doesn’t LessWrong have its own app? When I read it on my phone browser, sometimes the comments or buttons stop working if I haven’t refreshed the page in a while — it’s kind of annoying. I also feel like it’d be so much easier to stay engaged if I could read across multiple devices, you know? Sort of like how you can listen to Rationality: From AI to Zombies by Eliezer Yudkowsky on Audible.
Believe it or not, the same kind of thing’s going on over in Korea, the only country still divided since Germany reunified. The South and the North are always trying to set up something around this whole “North and South together” idea. But like with your Balkan House example, what’s really happening behind the scenes is just talks about North Korea’s nukes or South Korea sending aid.
Since the 2000s, every government has signed at least one agreement where the two Koreas are supposed to act as one. But honestly, no ordinary Korean remembers any of them, except maybe to study for a test. What people actually pay attention to is how much the South gave to the North, where the North fired its missiles, and how much they ticked off the U.S. Real power doesn’t come from names; it comes from what a country can actually do.
How did you manage to change your perception and core beliefs to make the world feel more positive and meaningful? Because of my depression, my mom keeps insisting that I try DBT (Dialectical Behavior Therapy). But I’ve already been through costly therapy before and ended up disappointed, so I can’t really trust it anymore. That’s why I’ve been trying to find ways to cope on my own, and I really want to hear some concrete methods for that.
I’m not really sure. I’m a high school student, and since I go to a boarding school, I left all my electronic devices at home before going back to school for the summer term. Because of that, I ended up spending most of my time in the library, borrowing books that kind of serve as ‘shorts,’ as well as some pretty deep ones. I also made a good friend and even went to the beach — without a smartphone, of course. I guess sometimes doing things you normally wouldn’t can actually help.
Unlike you, I started with SSRIs. Even after taking them for three months, they didn’t work, so I eventually ended up going to a major hospital. It was only after being prescribed a large amount of benzodiazepines and doxylamine sleep aids that I was finally able to sleep. I think it’s about time I asked my doctor about Wellbutrin. Thanks for sharing your experience.
If you disagree, I’d appreciate if you could point out where the reasoning feels off.