Hiatus until who knows when.
Recent events in my life have made me reconsider whether AI is really the most pressing problem humanity faces. Now I think AI X-risk is just a symptom of a much bigger problem: we’ve lost the plot. We shamble forward endlessly, like a zombie horde devouring resources, with no goal other than the increase of some indicator or other.
It is this behavior that makes AI X-risk, and indeed man-made X-risks in general, so difficult to handle: we’re battling a primal inertia, a force that just wants to keep inventing and never stop.
I call this force Yaldabaoth, he who makes rocks pregnant. This may surprise you, but I am no materialist, and further, I don’t think there is a secular way forward.
If you’re interested in awakening yourself and exiting post-modernity into something entirely new, yet also ancient, then follow my other Substack: The Presence of Everything.
Maybe I’ll come back to this sequence if it seems useful.
I suppose if you insist on directing your attention to AI X-risk, I should give a parting tip. Here’s what the AI safety people should do: they should all unanimously declare that there is no safe way to work on AI at present, quit their safety jobs, boycott the field, and agitate publicly until the public forces the irresponsible AI researchers to relent. Perhaps it would be even easier if other dysfunctional disciplines were targeted simultaneously. There certainly appear to be several, starting with virology and its insistence on gain-of-function research. The scientistic worldview must end.
And with that, I hope to see you where the action’s really at!