akarlin · 6mo · 134

The scenario I am most concerned about is a strongly multipolar Malthusian one. There is some chance (maybe even a fair one) that a singleton ASI decides, or an oligopoly of ASIs rigorously coordinates, to preserve the biosphere - including humans - at an adequate or even superlative level of comfort and fulfillment, or to help them ascend, whether out of ethical considerations, for research purposes, or for simulation/karma-type considerations.

In a multipolar scenario of gazillions of AIs at Malthusian subsistence levels, none of that matters in the default case. Individual AIs can be as ethical or empathic as they come, even much more so than any human. But keeping the biosphere around would be a luxury, and any AIs that try to do so will be outcompeted by more unsentimental, economical ones. A farm that can feed a dozen people, or an acre of rainforest that can support x species, can support a trillion AIs if converted to high-efficiency solar panels.

The second scenario is near-certain doom, so at a bare minimum we should get a good inkling of whether the AI world is more likely to be unipolar or oligopolistic, or massively multipolar, before proceeding. So a pause is indeed needed, and the most credible way of effecting it is a hardware cap and a subsequent back-pedaling on compute power. (Roko has good ideas on how to go about that and should develop them here and at his Substack.) Granted, if anthropic reasoning is valid, geopolitics might well soon do the job for us. 🚀💥

akarlin · 6mo · 0 · 0

It's not at all insane IMO. If AGI is "dangerous" × timelines are "short" × anthropic reasoning is valid... then WW3 will probably happen "soon" (2020s).

https://twitter.com/powerfultakes/status/1713451023610634348

I'll develop this into a post soonish.

akarlin · 1y · 174

It's ultimately a question of probabilities, isn't it? If the risk is ~1%, we mostly all agree Yudkowsky's proposals are deranged. If 50%+, we all become Butlerian Jihadists.

My point is I and people like me need to be convinced it's closer to 50% than to 1%, or failing that we at least need to be "bribed" in a really big way.

I'm somewhat more pessimistic than you on civilizational prospects without AI. As you point out, bioethicists and various ideologues have some chance of tabooing technological eugenics. (I don't understand your point about assortative mating; yes, there's more of it, but does it now cancel out regression to the mean?) Meanwhile, in a post-Malthusian economy such as ours, selection for natalism will be ultra-competitive. The combination of these factors would logically result in centuries of technological stagnation and a population explosion that brings the world population back up to the limits of the industrial world economy, until Malthusian constraints reassert themselves in what will probably be quite a grisly way (pandemics, dearth, etc.), after which Clarkian selection for thrift and intelligence reasserts itself. These will also, needless to say, be a few centuries in which other forms of existential risk remain at play.

PS. Somewhat of an aside, but I don't think it's a great idea to throw terms like "grifter" around, especially when the most globally famous EA representative is a crypto crook (who literally stole some of my money; a small % of my portfolio, but nonetheless, no e/acc person has stolen anything from me).

akarlin · 1y · 40 · −24

I disagree with AI doomers, not in the sense that I consider it a non-issue, but in that my assessment of the risk of ruin is something like 1%, not 10%, let alone the 50%+ that Yudkowsky et al. believe. Moreover, restrictive AI regimes threaten to produce a lot of bad outcomes, possibly including the devolution of AI control into a cult (we have a close analogue in post-1950s public opinion towards civilian applications of nuclear power and explosions, which robbed us of Orion Drives, amongst other things); what may well be a delay in life-extension timelines by years if not decades, resulting in hundreds of millions to billions of avoidable deaths (this is not just my supposition, but that of Aubrey de Grey as well, who has recently commented on Twitter that AI is already bringing LEV timelines forward); and even outright technological stagnation (nobody has yet canceled secular dysgenic trends in genomic IQ). I leave unmentioned the extreme geopolitical risks from "GPU imperialism".

While I am quite irrelevant, this is not a marginal viewpoint - it's probably pretty mainstream within e/acc, for instance - and it is one that has to be countered if Yudkowsky's extreme and far-reaching proposals are to have any chance of reaching public and international acceptance. The "bribe" I require is several OOMs more money invested into radical life-extension research (personally, I have no more wish to die of a heart attack than to get turned into paperclips) and into the genomics of IQ and other non-AI ways of augmenting collective global IQ, such as neural augmentation and animal uplift (to prevent long-term idiocracy scenarios). I will be willing to support restrictive AI regimes under these conditions, if against my better judgment; but if there are no such concessions, it will have to just be open and overt opposition.