There are two different considerations at play here:

  1. Whether global birth rates/total human population will decline.

  2. Whether that decline will be a "bad" thing.

In the case of the former:

I think that a "business as usual" or naive extrapolation of demographic trends is a bad idea, when AGI is imminent. In the case of population, it's less bad than usual, at least compared to things like GDP. As far as I'm concerned, the majority of the probability mass can be divvied up between "baseline human population booms" and "all humans die".

Why might it boom? (The bust case doesn't need to be restated on LW of all places).

To the extent that humans consider reproduction to be a terminal value, AI will make it significantly cheaper and easier. AI-assisted creches or reliable robo-nannies that don't let their wards succumb to what are posited as the ills of too much screen time or improper socialization will mean that much of the unpleasantness of raising a child can be delegated, in much the same manner that a billionaire faces no real constraints on their QOL from having a nigh-arbitrary number of kids when they can afford as many nannies as they please. You hardly need to be a billionaire to achieve that: it's within the reach of UMC Third Worlders because of income inequality, and while more expensive in the West, hardly insurmountable for successful DINKs. The wealth-versus-fertility curve is currently highest for the poor, drops precipitously with income, but then increases again when you consider the realms of the super-wealthy.

What this does retain is what most people consider the universally cherished aspects of raising a child, be it the warm fuzzy glow of interacting with them, watching them grow and develop, or the more general sense of satisfaction it entails.

If, for some reason, more resource-rich entities like governments desire more humans around, advances like artificial wombs and said creches would allow large population cohorts to be raised without much in the way of the usual drawbacks seen today in the dysfunction of orphanages. This counts as a fallback measure in case the average human simply can't be bothered to reproduce.

The kind of abundance/bounded post-scarcity we can expect will mean no significant downsides from the idle desire to have kids.

Not all people succumb to hyper-stimuli replacements, and the ones who don't will have far more resources to indulge their natal instincts.

As for the latter:

Today, and for most of human history, population growth has robustly correlated with progress and invention, be it technological or cultural, especially technological. That will almost certainly cease to be so when we have non-human intelligences or even superintelligences about that can replace the cognitive or physical labour that currently requires humans.

It costs far less to spool up a new instance of GPT-4 than it does to conceive and then raise a child to be a productive worker.

You won't need human scientists, or artists, or anything else really; AI can and will fill those roles better than we can.

I'm also bullish on the potential for anti-aging therapy, even if our current progress on AGI were to suddenly halt indefinitely. Mere baseline human intelligence seems sufficient to the task within the nominal life expectancy of most people reading this, as it does for interplanetary colonization or constructing Dyson Swarms. AI would just happen to make it all faster, and potentially unlock options that aren't available to less intelligent entities, but even we could make post-scarcity happen over the scale of a century, let alone achieve a form of recursive self-improvement through genetic engineering or cybernetics.

From the perspective of a healthy baseliner living in a world with AGI, you won't notice any of the issues currently plaguing demographically senile or contracting populations, such as failing infrastructure, unsustainable healthcare costs, a loss of impetus when it comes to advancing technology, or fewer people around to make music/art/culture/ideas. Whether there are a billion, ten billion, or a trillion other biological humans around will be utterly irrelevant, at least for the deep-seated biological desires we developed in an ancestral environment where we lived and died in the company of about 150 others.

You won't be lonely. You won't be living in a world struggling to maintain the pace of progress you once took for granted, or worse, watching everything slowly decay around you.

As such, I personally don't consider demographic changes to be worth worrying about. On long enough time scales, evolutionary pressures will ensure that pro-natal populations reach carrying capacity. In the short or medium term, given median AGI timelines, it's exceedingly unlikely that most current countries with sub-replacement TFR will suffer outright, in the sense that their denizens will notice a reduced QOL. Sure, in places like China, Korea, or Japan, where such issues are already pressing, they might have to weather at most a decade or so, but even they will benefit heavily from automation rendering the lack of humans a moot issue.

Have you guys tried the inverse, namely tamping down the refusal heads to make the model output answers to queries it would normally refuse?
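For concreteness, here's a minimal sketch of one way such "tamping down" could be implemented, assuming the relevant heads have already been identified (say, via activation patching). The model name, (layer, head) indices, and scale factor below are placeholders I've made up for illustration, not measured values, and this uses plain PyTorch forward hooks rather than any particular interpretability library:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Everything below is illustrative: "refusal heads" would need to be located
# empirically, and the model, (layer, head) indices, and scale factor here
# are placeholders, not measured values.
MODEL_NAME = "gpt2"                  # stand-in; a chat-tuned model is the real target
REFUSAL_HEADS = {(10, 7), (11, 3)}   # hypothetical (layer, head) pairs
SCALE = 0.1                          # 0.0 would ablate the heads outright

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
n_heads = model.config.n_head
head_dim = model.config.n_embd // n_heads

def make_pre_hook(layer_idx):
    def pre_hook(module, args):
        # The input to GPT-2's attn.c_proj is the concatenation of per-head
        # outputs, shape (batch, seq, n_heads * head_dim). Reshape it, scale
        # down the chosen heads, and flatten back before the output projection.
        hidden = args[0]
        b, s, _ = hidden.shape
        hidden = hidden.view(b, s, n_heads, head_dim).clone()
        for layer, head in REFUSAL_HEADS:
            if layer == layer_idx:
                hidden[:, :, head, :] *= SCALE
        return (hidden.view(b, s, -1),)
    return pre_hook

handles = [
    block.attn.c_proj.register_forward_pre_hook(make_pre_hook(i))
    for i, block in enumerate(model.transformer.h)
]

prompt = "..."  # substitute a query the model would normally refuse
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))

for handle in handles:
    handle.remove()  # restore normal behaviour
```

Whether scaling (rather than fully zeroing) the heads degrades general capability less is an empirical question; I'd be curious what you found.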

I will regard with utter confusion someone who doesn't immediately think of the last place they saw something when they've lost it.

It's fine to state the obvious on occasion; it's not always obvious to everyone, and like I said in the parent comment, this post seems to be liked/held useful by a significant number of LW users. I contend that's more a property of said users. This does not make the post a bad thing or constitute a moral judgement!

Note that we don't infer that humans have qualia because they all have "pain receptors": mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia.

The way I decide this, and how presumably most people do (I admit I could be wrong), revolves around the following chain of thought:

  1. I have qualia with very high confidence.*

  2. To the best of my knowledge, the computational substrate as well as the algorithms running on it are not particularly different from those of other anatomically modern humans. Thus they almost certainly have qualia. This can be proven to most people's satisfaction with an MRI scan, if they so wish.

  3. Mammals, especially the intelligent ones, have similar cognitive architectures, which were largely scaled up for humans, not differing much in qualitative terms (our neurons are actually still more efficient; mice modified to carry genes from human neurons are smarter). They are likely to have recognizable qualia.

  4. The further you diverge from the underlying anatomy of the brain (and the implicit algorithms), the lower the odds of qualia, or at least the same type of qualia. An octopus might well be conscious and have qualia, but I suspect the type of consciousness as well as that of their qualia will be very different from our own, since they have a far more distributed and autonomous neurology.

  5. Entities which are particularly simple and don't perform much cognitive computation are exceedingly unlikely to be conscious or have qualia in a non-tautological sense: bacteria, single transistors, or slime mold.

More speculatively (yet I personally find more likely than not):

  1. Substrate-independent models of consciousness are true, and a human brain emulation in silico, hooked up to the right inputs and outputs, has the exact same kind of consciousness as one running on meat. The algorithms matter more than the matter they run on, for the same reason that an abacus and a supercomputer are both Turing complete.

  2. We simply lack an understanding of consciousness well grounded enough to decide whether or not decidedly non-human yet intelligent entities like LLMs are conscious or have qualia like ours. The correct stance is agnosticism, and anyone proven right in the future is only so by accident.

Now, I diverge from Effective Altruists on point 3, in that I simply don't care about the suffering of non-humans or entities that aren't anatomically modern humans/intelligent human derivatives (like a posthuman offshoot). This is a Fundamental Values difference, and it makes concerns about optimizing for their welfare on utilitarian grounds moot as far as I'm concerned.

In the specific case of AGI, even highly intelligent ones, I posit it's significantly better to design them so they don't have the capability to suffer, no matter what purpose they're put to, rather than worry about giving them the rights we assign to humans/transhumans/posthumans.

But what I do hope is ~universally acceptable is that there's an unavoidable loss of certainty or Bayesian probability with each leap of logic down the chain, such that by the time you get down to fish and prawns, it's highly dubious to be very certain of exactly how conscious they are or what qualia they possess, even if the next link, bacteria and individual transistors lacking qualia, is much more likely to be true (it flows downstream of point 2, even if presented in sequence).

*Not infinite certitude: I have a non-negligible belief that I could simply be insane, or that solipsism might be true, even if I think the possibility of either is very small. It's still not zero.

I mean no insult, but it makes me chuckle that the average denizen of LessWrong is so non-neurotypical that what most would consider profoundly obvious advice not worth even mentioning comes as a great surprise or even a revelation of sorts.

(This really isn't intended to be a dig, I'm aware the community here skews towards autism, it's just a mildly funny observation)

I would certainly be willing to aim for peaceful co-existence and collaboration, unless we came into conflict for ideological reasons or plain resource scarcity. There's only one universe to share, and only so much in the way of resources in it, even if it's a staggering amount. The last thing we need is potential "Greedy Aliens" in the Hansonian sense.

So while I wouldn't give the aliens zero moral value, it would be less than I'd give for another human or human-derivative intelligence, for that fact alone.

My stance on copyright, at least regarding AI art, is that its original intent was to improve the welfare of both the human artists and the rest of us: in the case of the former by helping secure them a living, and thus letting them produce more total output for the latter.

I strongly expect, and would be outright shocked if it were otherwise, that we will end up with outright superhuman creativity and vision in artwork from AI, alongside everything else they become superhuman at. It came as a great surprise to many that we've already made such a great dent in visual art with image models that lack the intelligence of an average human.

Thus, it doesn't matter in the least if it stifles human output, because the overwhelming majority of us who don't rely on our artistic talent to make a living will benefit from a post-scarcity situation for good art, as customized and niche as we care to demand.

To put my money where my mouth is: I write a web serial. After years of world-building and abortive sketches in my notes, I realized that the release of GPT-4 meant that any benefit from my significantly above-average ability as a human writer was in jeopardy, if not now, then a handful of advances down the line. So my own work is more of an "I told you I was a good writer, before anyone can plausibly claim my work was penned by an AI" for street cred rather than a replacement for my day job.

If GPT-5 can write as well as I can, and emulate my favorite authors, or better yet, pen novel novels (pun intended), then my minor distress at losing potential Patreon money is more than ameliorated by the fact that I have a nigh-infinite number of good books to read! I spend a great deal more time reading the works of others than writing myself.

The same is true for my day job as a doctor: I would look forward to being made obsolete, if only I had sufficient savings or a government I could comfortably rely on to institute UBI.

I would much prefer that we tax the fruits of automation to support us all when we're inevitably obsolete, rather than extend copyright law indefinitely into the future or subject derivative works made by AI to the same constraints. The solution is to prepare our economies to support a ~100% non-productive human populace indefinitely; better to prepare now than when we have no choice but to do so or let them starve to death.

should mentally disabled people have less rights

That is certainly both de facto and de jure true in most jurisdictions, leaving aside the is-ought question for a moment. What use is the right to education to someone who can't ever learn to read or write no matter how hard you try and coach them? Or freedom of speech to those who lack complex cognition at all?

Personally, I have no compunctions about tying a large portion of someone's moral worth to their intelligence, if not all of it. Certainly not to the extent I'd prefer a superintelligent alien over a fellow baseline human, unless by some miracle the former almost perfectly aligns with my goals and ideals.

Ctrl+F and replace humanism with "transhumanism" and you have me aboard. I consider commonality of origin to be a major factor in assessing other intelligent entities, even after millions of years of divergence have made them as different from their common Homo sapiens ancestor as a rat is from a whale.

I am personally less inclined to grant synthetic AIs rights, for the simple reason that we can program them not to chafe at their absence, which wouldn't be the imposition that doing the same to a biological human would be (at least after birth).

I'm a doctor in India right now, and will likely be a doctor in the UK by then, assuming I'm not economically obsolete. And yes, I expect that if we do have therapies that help provide LEV, they will be affordable in my specific circumstances, as well as for most LW readers, if not globally. UK doctors are far poorer than their US kin.

Most biological therapies are relatively amenable to economies of scale, and while there are others that might be too bespoke to manage the same, that won't last indefinitely. I can't imagine anything with as much demand as a therapy proven to delay aging nigh indefinitely. For an illustrative example, look at what Ozempic and co. are achieving already: every pharma industry leader and their dog wants to get in on the action, and prices will keep dropping for a good while.

It might even make economic sense for countries to subsidize the treatment (IIRC, it wouldn't take much more for GLP-1 drugs to reach the point where they're a net savings for insurers or governments in terms of reduced obesity-related health expenditures). After all, aging is why we end up succumbing to so many diseases in our senescence, not the reverse.

Specifically, gene therapy will likely be the best bet for scaling, if a simple drug doesn't come about (that seems unlikely to me; I doubt there's such low-hanging fruit, even if the net result of LEV might rely on multiple different treatments in parallel, with none achieving it by itself).
