This analogy falters a bit if you consider the research proposals that use advanced AI to police itself (i.e., tigers controlling tigers). I hope we can scale robust versions of that.
A simple analogy for why the "using LLMs to control LLMs" approach is flawed:
It's like training 10 mice to control 7 chinchillas, who will control 4 mongooses, controlling 3 raccoons, which will rein in 1 tiger.
A lot has to go right for this to work, and you'd better hope there aren't any capability jumps akin to raccoons controlling tigers.
I just wanted to release this analogy into the wild for any public- or political-facing people to pick up if useful for persuasion.
To summarize this risk: if we build God, we build the real possibility of Hell.
It feels like there's a huge blind spot in this post, and it saddens (and scares) me to say it. The possible outcomes are not utopia for billions of years or bust. The possible outcomes are utopia for billions of years, dystopia for billions of years, or bust. Without getting into the details, I can imagine s-risks in which the AGI turns out to care too much about engagement from living humans, and things getting dark from there.
Short of outright torture for eternity, the "keep humans around but drug them to increase their happiness" scenarios are also dystopian and may also be worse than death. Are there good reasons to expect utopia is more likely than dystopia (with extinction remaining most likely)?
Also, I'm feeling some whiplash reading my reply because I totally sound like an LLM when called out for a mistake. Maybe similar neural pathways for embellishment were firing, haha.
Thank you both for calling this out, because I was clearly incorrect. I was trying to recall my wife's initial calculation, which I believe included maintenance, insurance, gas, and repairs.
I think this is one of those things where I was so proud of not owning a car that the amount saved morphed from $8k to $10k to $15k in the retelling. I need to stop doing that.
As the father of 2 kids (a 5 y/o and 2 y/o) in Palo Alto, I can confirm that childcare costs a lot: $2k per kid per month at our subsidized academic-affiliation rate. At $48k per year, it's almost the entirety of my wife's PhD salary. Fortunately, I have a well-paying job and we are not strapped for money.
We also got by with just an e-bike for 6 years, saving something like $15k per year in car insurance and gas (save for 9 months when we had the luxury of borrowing a car from family) [Incorrect, see below]. We got a car recently due to a longer commute, but even then, I still use the e-bike almost every day because the car is not much faster and overlapping with exercise time is valuable (plus the 5 y/o told me he likes fresh air).
For clothes/toys/etc., we've used Facebook Marketplace, "Buy Nothing" groups, and our neighbors to source pretty much everything. The best toys have just been cardboard, masking tape, and scissors, which are very cheap.
[Edit: As comments below point out, the figure for no-car savings was incorrect. It's closer to $8k, taking into account gas, insurance, maintenance, and repairs. Apologies for the embellishment—I think it was from a combination of factors including (i) being proud of previously not owning a car, (ii) making enough not to track it closely, and (iii) deferring to my spouse for most of our household payments/financial management (which is not great on my part—she is busy and household management is a real burden).
To shore up my credibility on child care, I pulled our receipts, and we're currently at $2,478 per month for the toddler, and $1,400 per month for the kindergartener's after-school program (though cheaper options were available for the after-school program).]
Richard Rorty argued that stories, rather than ethical principles, are at the heart of morality. For Rorty, the basic question of morality is which groups to recognize as persons entitled to respect. Stories about women and slaves made privileged people recognize them as people who matter.
Within Rorty's framing, it feels like The Wild Robot, Wall-E, and stories like that prime us to (eventually) recognize the personhood of robots. I suppose those would be important stories if we succeeded in creating conscious entities that desire to continue living*, but since there are (VERY!) good reasons not to build these entities now, we need stories that highlight the risks as you've discussed.
* And we retained full power over them. (Edit was to add this)
In graduate school, I wrote a paper on selecting for tallness and how that could lead to adverse public health outcomes. Here are a few interesting findings:
At least one researcher believes the longevity differences between Japan and the U.S., and between men and women, are explained by height differences alone. I wish there were more research on this. Of course, the research would have to take into account caloric/nutrient deficiencies in early development, which can lead to lower height but also worse health outcomes.
edit: typo + last bullet
It seems like the peasants used to eat lots of animal organs that now fall by the wayside (the wayside being kibble). A related fact is that the art of preparing those organ meats has been lost within family lines.
My grandmother remembers her grandmother preparing liver, but she herself never picked up the skill. Thankfully, my wife and I were able to turn to the internet for preparation strategies.