Most parents may not want a robot to babysit their children.

Assuming that stays true, your friends and family, who also don't have jobs, can do that in an informal quid-pro-quo. And you'll need it less often. Seems unlikely to need any meaningful kind of money economy.

Art history and museums. There is a lot of physical work and non-text knowledge involved, and demand may remain. This includes art restoration (until clouds of nanobots can do it).

If the robots are fully embodied and running around doing everything, they'll presumably get that knowledge. There's a lot of non-text knowledge involved in plumbing, too, but the premise says that plumbing is done by machines.


I don't understand why everybody seems to think it's desirable for there to keep being jobs or to have humans "empowered". If AI runs the world better than humans, and also provides humans with material wealth and the ability to pursue whatever hobbies they feel like, that seems like a huge win on every sane metric. Sign me up for the parasitic uberpet class.

I am scared of the idea of very powerful AI taking orders from humans, either individually or through some political process. Maybe more scared of that than of simply being paperclipped. It seems self-evidently hideously dangerous.

Yet an awful lot of people seem obsessed with avoiding the former and ensuring the latter.


It definitely raises the bar, and it may very well raise it well out of reach of the average DNM seller, but I think you may be imagining the whole process to be harder than it has to be.

I have everything I'd need to seal ampoules lying around downstairs[1]. It's a few hundred dollars worth of stuff. More than a bag sealer, but not a fortune. You wouldn't even have to buy blanks; you could make them from plain tubing. You don't have to seal them under vacuum. There's not that much skill involved, either; closing a tube is about as simple as flameworking can get. The biggest learning investment would probably be learning how to handle the torch without unfortunate unintended consequences.

You don't have to avoid contamination; you just have to clean it off. One nice thing about glass is that you can soak it for as long as you like in a bottle of just about any noxious fentanyl-eating chemical you can identify. You can wash down the outside of the bottle with the same stuff. I doubt you'd have to resort to any exotic chemicals; one or another of bleach, peroxide, lye, or your favorite mineral acid would probably destroy it pretty rapidly.

It would be a really good idea to have a separate pack-and-ship facility, and an accomplice to run it, but you don't need to resort to a clean room.

Fingerprints (and stray hairs and the like) would actually be much harder to deal with, although of course they won't alert dogs.

  1. Doesn't everybody have basic glassblowing and welding equipment? Kids these days. ↩︎


DNM sellers are mostly not too bright[1], and the dreaded killer weed is relatively smelly, bulky, and low-value compared to something like fentanyl. If I remember right, they use, or at least used to use, heat-sealed mylar bags, which you'd expect would leak.

If I wanted to ship fentanyl past dogs, I'd research the possibility of sealing it in glass ampoules. A correctly sealed ampoule will hold a hard vacuum for decades. Assuming it was properly cleaned on the outside, I don't believe a dog would detect it even from sniffing it directly. And I do know some of the very impressive things dogs can detect. Welded metal vessels can also be pretty tight.

A bit off topic, but dogs are also unreliable enough that they shouldn't really be used at all, even if you think mass-searching shipments is OK to begin with. It's not that the dog doesn't know what's there; it's that the communication between the dog and the handler is suspect.

  1. Smarter than buyers, but still not smart. ↩︎


What does "value" mean here? I seriously don't know what you mean by "total loss of value". Is this tied to your use of "economically important"?

I personally don't give a damn for anybody else depending on me as the source of anything they value, at least not with respect to anything that's traditionally spoken of as "economic". In fact, I would prefer that they could get whatever they wanted without involving me, and I could get whatever I wanted without involving them.

And power over what? Most people right this minute have no significant power over the wide-scale course of anything.

I thought "extinction", whether for a species or a culture, had a pretty clear meaning: It doesn't exist any more. I can't see how that's connected to anything you're talking about.

I do agree with you about human extinction not necessarily being the end of the world, depending on how it happens and what comes afterwards... but I can't see how loss of control, or value, or whatever, is connected to anything that fits the word "extinction". Not physical, not cultural, not any kind.


I also meant existing life sentences. At any point, a political change could end them, and once that happens, it's as much a matter of law as the original sentence was.

I can't see any given set of laws or constitution, or the sentences imposed under them, lasting more than a few hundred years, and probably much less.

I could see a world where they didn't get the treatments to begin with, though.


How much does the rest of the world change?

Suppose that things in general are being run by pervasive AI that monitors everything, with every human being watched by many humans-worth of intelligence, and fast enough, ubiquitous enough robotics to stop most or all human actions before they can be completed. Why would you even have prison sentences of any kind?

If you hold everything constant and just vastly extend everybody's life span, then maybe they stay in prison until it becomes unfashionable to be so punitive, and then get released. Which doesn't mean that kind of punitiveness won't come back into fashion later. Attitudes like that can change a lot in a few centuries. For that matter the governments that enforce the rules have a shelf life.


One obvious question, as someone who loves analyzing safety problems through near-term perspectives whenever possible, is what if the models we currently have access to are the most trusted models we'll ever have? Would these kinds of security methods work, or are these models not powerful enough?

My reasonably informed guesses:

  1. No, they are not close to powerful enough. Not only could they be deliberately fooled, but more importantly they'd break things all the time when nobody was even trying to fool them.
  2. That won't stop people from selling the stuff you propose in the short term... or from buying it.

In the long term, the threat actors probably aren't human; humans might not even be setting the high-level goals. And those objectives, and the targets available, might change a great deal. And the basic software landscape probably changes a lot... hopefully with AI producing a lot of provably correct software. At that point, I'm not sure I want to risk any guesses.

I don't know how long the medium term is.


You seem to be privileging the status quo. Refraining from doing that has equally large effects on your peers.


effective-extinction (a few humans kept in zoos or the like, but economically unimportant to the actual intelligent agents shaping the future)

Do you really mean to indicate that not running everything is equivalent to extinction?