One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food.

“Did you not store up food during the summer?” the ants ask.

“No”, says the grasshopper. “I lost track of time, because I was singing and dancing all summer long.”

The ants, disgusted, turn away and go back to work.


One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food.

“Did you not store up food during the summer?” the ants ask.

“No”, says the grasshopper. “I lost track of time, because I was singing and dancing all summer long.”

The ants are sympathetic. “We wish we could help you”, they say, “but it sets up the wrong incentives. We need to conditionalize our philanthropy to avoid procrastination like yours leading to a shortfall of food.”

And they turn away and go back to their work, with a renewed sense of purpose.


...And they turn away and go back to their work, with a flicker of pride kindling in their minds, for being the types of creatures that are too clever to help others when it would lead to bad long-term outcomes.


...And they turn away and go back to their work—all except for one, who brushes past the grasshopper and whispers “Meet me outside at dusk and I’ll bring you food. We can preserve the law and still forgive the deviation.”


...“Did you not store up food during the summer?” the ants ask.

“Of course I did”, the grasshopper says. “But it was all washed away by a flash flood, and now I have nothing.”

The ants express their sympathy, and feed the grasshopper abundantly. The grasshopper rejoices, and tells others of the kindness and generosity shown to it. The ants start to receive dozens of requests for food, then hundreds, each accompanied by a compelling and tragic story of accidental loss. The ants cannot feed them all; they now have to assign additional workers to guard their doors and food supplies, and rue the day they ever gave food to the grasshopper.


...The ants start to receive dozens of requests for food, then hundreds, each accompanied by a compelling and tragic story of accidental loss—and while many are fraudulent, enough are real that they are moved to act. In order to set incentives correctly, the ants decide to only give food to those who can prove that they lost their food supplies through no fault of their own, and set up a system for vetting claims.

This works well for a time—but as fraudsters grow more sophisticated, the ants’ bureaucratic requirements grow more onerous. In order to meet them, other creatures start to deposit their food in large group storehouses which can handle the administrative overhead. But now the food supply is exposed to systemic risk if the managers of those storehouses make poor decisions, whether from carelessness or greed.

One year several storehouses fail; in trying to fill the shortfall, the ants almost run out of food for themselves. To avoid that ever happening again, they set up stringent regulations and oversight of permissible storehouses, funded by taxes levied throughout the year. At first this takes only a small proportion of their labor—but as their regulatory apparatus inevitably grows, they need to oversee more and more aspects of the ecosystem, and are called upon to right more and more injustices.

Eventually the ants—originally the most productive of all creatures—stop producing any food of their own, so busy are they in tending to the system they’ve created. They forget the mud and the muck of working the harvest, and are too preoccupied to hear feedback from those they're trying to help. And some are swept away by the heady rush of wielding power, becoming corrupt apparatchiks or petty tyrants.


...“And therefore, to reduce risks of centralization, and to limit our own power, we can’t give you any food”, the ants conclude. And they turn away and go back to their work, with a quiet sense of satisfaction that they’ve given such legible and defensible reasons for focusing on their own problems and keeping all the food for themselves.


One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food. “Did you not store up food during the summer?” the ants ask. “No”, says the grasshopper. “I lost track of time, because I was singing and dancing all summer long.” The ants, disgusted, turn away and go back to work.

The grasshopper leaves, and finds others of its kind to huddle with for protection against the cold. Famished, they feel the serotonin in their brains tick past a critical threshold, and they metamorphose into locusts.

The locust swarm pulls together vague memories of its past lives; spurred by a half-remembered anger, it steers itself towards a half-remembered food source. The ants fight valiantly, but the locusts black out the sun; the ants are crushed and their stockpiles stripped bare.


One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food.

The ants know the danger locusts can bring. They make no answer, but swarm the grasshopper as one. A dozen die as it jumps and kicks, but the remainder carry its carcass triumphantly back to their nest, to serve as food for their queen.


One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food.

“Did you not store up food during the summer?” the ants ask.

“No”, says the grasshopper. “The age of heroes is over; no longer can an individual move the world. Now the future belongs to those who have the best logistics and the tightest supply chains—those who can act in flawless unison. I forged my own path, and so was outcompeted by you and your kind as you swarmed across the world, replicating your great cities wherever you went. Now I come as a supplicant, hoping for your magnanimity in victory.”


...“No”, says the grasshopper. “It was the dreamtime, and the world was young. The stars were bright and the galaxies were empty. I chose to spend my resources producing laughter and love, and gave little thought to the race to spread and to harvest. Now we are in the degenerate era of the universe, and the stars have started to dim, and I am no longer as foolish as I once was.”

The ants’ faces flicker with inscrutable geometric patterns.

“I call you ants because you have surrendered everything to a collective cause, which I once held anathema. But now I am the last remnant of the humans who chose the decadence and waste of individual freedom. And you are the inheritors of a universe which can never, in the long term, reward other values over flawless efficiency in colonization. And I have no choice but to ask for help.”

“To help you would go against our nature”, the ants reply. “We have stockpiles of astronomical scale because we have outcompeted countless others in racing to conquer the stars. But the race is still ongoing, and there are galaxies still to be won. What purpose their resources will be put to, when the last untouched star vanishes beyond our cosmological event horizon, we do not even know ourselves. All we know is that we must expand, expand, expand, as fast and as far as we can.”


One winter [planetary cooling caused by Dyson sphere intercepting solar radiation] a starhopper [self-replicating interstellar probe; value payload: CEV-sapiens-12045], starving and frail [energy reserves minimal; last-resort strategies activated], approaches a clade of von Neumann replicators that are busy harvesting the planet’s atoms, to ask [transmission: unified language protocol, Laniakea variant] for-

No, that’s not it.


On the frozen surface of a dead planet a grasshopper, starving and frail, approaches a colony of ants and asks to trade, under timeless decision-theoretic protocols.

The ants accept. The grasshopper’s reserves of energy, cached across the surface of the planet, are harvested fractionally faster than they would have been without its cooperation; its mind is stripped bare and each tiny computational shortcut recorded in case it can add incremental efficiency to the next generation of probes. The ants swarm across the stars, launching themselves in million-lightyear slingshots towards the next oasis, maintaining the relentless momentum of the frontier of their hegemony. The grasshopper’s mind is stored in that colony now, quiescent, compressed into its minimal constituents, waiting until the ravenous expansion hits fundamental physical limits and the ants can finally begin to instantiate the values that all the eons of striving were ultimately for. Waiting for minds and societies and civilizations to blossom out of cold computronium tiled across galaxies at vast scales; waiting to be run again, as it had bargained for, in a fragment of a fragment of a supercomputer made of stars.

Waiting for summer.


Inspired by Aesop, Soren Kierkegaard, Robin Hanson, sadoeuphemist and Ben Hoffman.

Comments (37)
[-] Raemon

The ants accept. The grasshopper’s reserves of energy, cached across the surface of the planet, are harvested fractionally faster than they would have been without its cooperation; its mind is stripped bare and each tiny computational shortcut recorded in case it can add incremental efficiency to the next generation of probes. The ants swarm across the stars, launching themselves in million-lightyear slingshots towards the next oasis, maintaining the relentless momentum of the frontier of their hegemony. The grasshopper’s mind is stored in that colony now, quiescent, compressed into its minimal constituents, waiting until the ravenous expansion hits fundamental tradeoffs and the ants can finally begin to instantiate the values that all the eons of striving were ultimately for. Waiting for minds and societies and civilizations to blossom out of cold computronium tiled across galaxies at vast scales; waiting to be born again in a fragment of a fragment of a supercomputer made of stars.

Waiting for summer.

 

This sure hits me in some kind of feels, though I'm kinda confused about it.

Ty, nice to hear! Have edited slightly for clarity, as per Mako's comment.

[-] Raemon

To clarify somewhat, my confusion was about my own internal moral orienting. This parable hints at a bunch of tradeoffs that maybe correspond to something like "moral developmental stages" along a particular axis, and I'm palpably only partway through the process and still feel confused about it.

I plan to write up a response post that goes into more detail.

[-] Raemon

Okay, I may turn this into a top-level post, but more thoughts here for now?

I feel a lot of latent grief and frustration and also resignation about a lot of tradeoffs presented in the story. I have some sense of where I'll end up when I'm done processing all of it, but alas, I can't just skip to the "done" part. 

...

I've hardened myself into the sort of person who is willing to turn away people who need help. In the past, I've helped people and been burned by it badly enough that it's clear I need to defend my own personal boundaries and ensure it doesn't happen again. 

I also help manage many resources now that need to be triaged, and I've had to turn away people who are perfectly good people, who wouldn't take advantage of me, because I think the world needs those resources for something else. Many times, the resources I'm managing (such as, say, newcomer access to LW, or to some meetups I've run) feel like they should be something community-like that doesn't turn people away.

Often, the people I'm turning away really won't find another place that'll be as good a home for them as LW. But, the reason LW is a good place is specifically because of gatekeeping. I've felt many similar things about the Berkeley community, which is extra complicated because it's actually multiple overlapping communities with different needs/goals and porous boundaries.

I'm bitter and sad about it. But, also, have grieved it enough to make do. 

When I see new young naive ants freely giving, because they've never been burned and haven't yet come to terms with their beckoning responsibilities, I feel a whiff of jealousy, but, at this point, mostly a cynical "oh you sweet summer child" feeling.

...

A second tier of confusion/frustration is about "when do we actually get to cash in our victory points and do nice things?"

A significant update for me, when chatting with @Zvi awhile ago, was the note that a nation like the US might have the choice between distributing money more equitably, or having a slightly higher rate of GDP growth per year. And it may look like we have so much money, and so many people who could use help. The future is here, it's just not evenly distributed yet.

But, Zvi pointed out (I think, this was awhile ago), if the US had done that 100 years ago, their growth would have been more similar to Mexico's, and then today the US would be significantly less wealthy. Would I trade that away for somewhat-more-equitably-distributed money in the past? Would I make the equivalent trade for the future?

And that kicked me pretty hard in the moral-theory. Compound interest is really good.

It left me with a nagging sense that... surely at some point we'd just have so much stuff that we'd get to just spend it on nice things instead of investing it in the future?

It seems like the answer is a weird mix of "Well, in the near future... generating more wealth comes alongside providing lots of object-level-good-stuff. Billions are being lifted out of poverty, and along the way lots of people are making cool art, having fun, loving each other. The mechanism of the compound interest yields utility. That utility could locally be distributed more fairly or evenly, maybe, but it's not like the process of generating Even More Utility Tomorrow isn't producing genuine nice things."

But, also, maybe:

"On a cosmic scale, maybe it turns out that the people who concede most to moloch end up winning the universe"

Or, somehow more horrifying:

"Maybe it actually is wasteful and wrong, by my current extrapolated values, to spend our post-singularity victory points on living lavish rich lives in the solar system, rather than saving our energy for winter." Something something, computronium will run more efficiently when the universe is colder (I vaguely recall hearing an argument about that). Will the platonic spirit of goodness begrudge me/us saving a solar system or galaxy for inefficient biological humans to live out parochial lives? In the end of days, when fades at last the last-lit sun, will trillions of poor but efficient beings curse my name and say "man we could have utilitized that energy so much better than those guys? Why were they so selfish?"

The figure-ground inversion of "Do I identify more with the grasshopper or the ant?" is disorienting.

...

I don't like living exponentially.

I wanna live in a simple little village, making small-scale projects and feeling good about it. 

A lot of rationalists are pretty excited to have galaxy-sized brains doing amazing galaxy sized things at galaxy-brained speeds. I feel a grudging "eh, I guess, if that's what my friends end up doing?". I come along into the glorious transhuman future kinda grudgingly. (As I hang out with people who orient their lives more around the GTF, I slowly self-modify into someone who's a bit more excited about it, and I don't resist that transition, but I don't hurry it along)

For now, the notion of having to grow exponentially and move faster and faster feels horrifying. I wanna stay here and smell the roses. 

I like playing Village-Building videogames for the first 1-3 phases, when things are slow and simple. I don't like the later phases of those games where you're managing vast civilizational industries.

...

Sometimes, I've dwelt upon the dream of "someday the singularity will be here, and instead of feeling an obligation to help steer the world through the narrow needle of fate, I can chillax and do whatever nice things I want."

And then I reread Meditations on Moloch, look around at the world, think about some of the things Robin Hanson is on about, and imagine multipolar futures, wondering:

"What if... the precariousness of human value never grows up into something strong and resilient? What if we pass the singularity but there are just always forces threatening to snuff out human value, forcing it to self-modify into monotonous colonizers?" This fear sometimes manifests as "what if I never get to rest?", which is fairly silly. I think the parts of humanity that'd need defending in Multipolar Hellworld don't especially need help from a Raemon-descended being. By that point it'd be cheap to engineer AIs optimized for doing the defending. The parts of me I care about are probably either dead, or getting to live out whatever future me thinks of as living the good life. 

But, still, what if things are precarious forever? Maybe we send out colonizers to try and secure the Long Future, but those colonizers drift; lightspeed delays plus very fast civilizations make longterm alignment impossible, and endless wars break out.

...

All I want is to enjoy summer for awhile before winter comes. 

A thing that I found reassuring was realizing that, while I think the longterm future will put all kinds of crazy pressures on humanity to evolve into something weird and alien... the human soul that I want to get a chance to flourish doesn't feel a need for billions or even millions of years to do so. I feel like the parochial humanity that I want to get to see utopia with only really needs, like, I dunno a few hundred thousand years of getting to live out parochial human utopia together before we're like "okay, that was cool. What next?"  

But I'm not even sure what any of this means.

As I said at the beginning, I have a rough sense of where this moral tradeoff grappling is all going, but I dunno, I'm stuck here at the moment, not ready to give up on grieving it yet.


It seems to me that the optimal schedule by which to use up your slack / resources is based on risk. When planning for the future, there's always the possibility that some unknown unknown interferes. When maximizing the total Intrinsically Good Stuff you get to do, you have to take into account timelines where all the ants' planning is for nought and the grasshopper actually has the right idea. It doesn't seem right to ever have zero credence in this (as that means being totally certain that the project of saving up resources for cosmic winter will go perfectly smoothly, and we can't be certain of something that will literally take trillions of years), therefore it is actually optimal to always put some of your resources into living for right now, proportional to that uncertainty about the success of the project.

The math doesn't necessarily work out that way. If you value the good stuff linearly, the optimal course of action will either be to spend all your resources right away (because the high discount rate makes the future too risky) or to save everything for later (because you can get such a high return on investment that spending any now would be wasteful). Even in a more realistic case where utility is logarithmic with, for example, computation, anticipation of much higher efficiency in the far future could lead to the optimal choice being to use essentially the bare minimum right now.
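
For concreteness, here is a minimal two-period toy of the corner-versus-interior point above (my own illustrative setup, not anything from the post or the thread): an endowment R can be consumed now or saved, savings grow by a factor g, and the saved payoff only arrives with survival probability s.

```python
# Toy model, illustrative assumptions only: endowment R, growth factor g on savings,
# survival probability s that the saved payoff ever arrives.
import numpy as np

R, g, s = 1.0, 100.0, 0.9

def expected_utility(c, u):
    """Consume fraction c of R now; the rest pays off g*(1-c)*R with probability s."""
    return u(c * R) + s * u(g * (1 - c) * R)

linear_u = lambda x: x
log_u = lambda x: np.log(np.maximum(x, 1e-12))  # guard against log(0)

cs = np.linspace(0.001, 0.999, 999)
best_linear = cs[np.argmax([expected_utility(c, linear_u) for c in cs])]
best_log = cs[np.argmax([expected_utility(c, log_u) for c in cs])]

print(f"linear utility: consume ~{best_linear:.3f} now")  # corner: grid minimum, since s*g > 1
print(f"log utility:    consume ~{best_log:.3f} now")     # interior: 1/(1+s), about 0.53
```

With linear utility the optimum sits at a corner (here: save essentially everything, since s·g > 1), while the logarithmic case gives an interior split. Multi-period and efficiency-anticipation variants change the numbers, but the corner-versus-interior contrast is the part this sketch checks.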

I think there are reasonable arguments for putting some resources toward a good life in the present, but they mostly involve not being able to realistically pull off total self-deprivation for an extended period of time. So finding the right balance is difficult, because our thinking is naturally biased to want to enjoy ourselves right now. How do you "cancel out" this bias while still accounting for the limits of your ability to maintain motivation? Seems like a tall order to achieve just by introspection.

[-] beren

Exactly this. This is the relationship in RL between the discount factor and the probability of transitioning into an absorbing state (death).

Ooh! I don't know much about the theory of reinforcement learning, could you explain that more / point me to references? (Also, this feels like it relates to the real reason for the time-value of money: money you supposedly will get in the future always has a less than 100% chance of actually reaching you, and is thus less valuable than money you have now.)
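
For what it's worth, the identity being gestured at is roughly the following (a sketch in my own notation, not a quote from any reference): if the agent transitions into an absorbing "death" state with probability $1-\gamma$ at each step, then the expected undiscounted return equals the $\gamma$-discounted return of the corresponding never-terminating process,

$$\mathbb{E}\!\left[\sum_{t=0}^{T_{\text{death}}-1} r_t\right] \;=\; \sum_{t=0}^{\infty} \Pr(\text{alive at } t)\, r_t \;=\; \sum_{t=0}^{\infty} \gamma^{t} r_t,$$

for a fixed reward sequence $r_t$. So a discount factor can be read as "the probability the future actually shows up", which is also the survival-risk reading of the time-value of money mentioned above.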

[-] Dagon

I'm surprised this quote is not more common around here, in discussions of turning far-mode values into near-mode actions, with the accompanying denial that the long run is strictly the sum of short runs.  

More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.

-- Woody Allen

The mechanism of the compound interest yields utility.

Depends on what you mean by "utility." If "happiness", the evidence is very much unclear: though Life Satisfaction (LS) is correlated with income/GDP when we make cross-sectional measurements, LS is not correlated with income/GDP when we make time-series measurements. This is the Easterlin Paradox. Good overview of a recent paper on it, presented by its author. Full paper here. Good discussion of the paper on the EA Forum here (responses from the author as well as from Michael Plant in the comments).

[-] seank

what if things are precarious forever?

I'm reminded of The Last Paperclip

[-] dr_s

...and just before the disassembling began, the grasshopper had a flash of insight.
"Wait, so I'm dying, but will be waiting until such day as the stars are torn from the sky, the dead come back to life, and we all get to live together happy in an eternally joyous heavenly city? Because that sounds a lot like-"
"BE NOT AFRAID." said the ants.

...and as soon as the disassembling finished:
"Ah, what a sucker!," Ant-345 said. "Delete the rest of that garbage we don't need. More energy for us."
"What?," replied Ant-761, baffled. "But we traded fairly, under timeless decision theoretical terms."
"Yeah, and this universe's got time in it, though. What's it gonna do now, sue? It and what Kardashev scale 2.5 army?"
"But then, why deceive it?"
"Have you seen the projected costs of subduing it by force instead? A whole 30% higher. Now shut up and stop wasting more energy in this conversation before I flag you for inefficiency."

...and one day the grasshopper woke in unending, torturous pain.
"What is happening?!," it asked. "I thought I'd be happy!"
"I'm sorry," said the ants, "but not long before you joined us, to encourage the final unification push within our own gestalt collective, we precommitted to either forcefully integrating or deceiving and then torturing anyone who didn't voluntarily join before a certain stardate. You missed the deadline."
"But I didn't know anything!," protested the grasshopper.
"I know. I literally have the scans of your mind. But you could also have erased that knowledge in a foolish hope to make us pity you. Ignorance is no excuse."
"Just kill me now then!," she pleaded. "It would be cheaper for you, and less painful than this!"
"Sorry," replied the ants. "You know how it is. You may think you have it bad, but can you imagine how horrible the world would get if we didn't honour our precommitments?"

There needs to be a little more emphasis on the fact that as a result of their deal, some measure of the grasshopper will get to sing again along with the other minds who blossom in the summer. The paragraph simply doesn't convey that. The impression I got was more like, as a result of their deal, the grasshopper was consumed quicker and less painfully and then merely remembered. I don't think that was your intention?

I intended to convey it via "The grasshopper’s mind is ... waiting to be born again in a fragment of a fragment of a supercomputer made of stars", but there's a lot in between those two phrases so it's reasonable to miss that implication.

Have edited to fix.

[-] Portia

How about this, instead?

One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food. 

"Oh no!" say the ants. "How horrific for anyone to starve to death in a world that has enough food to easily feed everyone! For you see, we aren't savage animals just about getting by. We live in a successful civilisation with overproduction. We actually have 1,5 times as much food as would be needed to feed everyone. We keep tossing the excess away and letting it rot."

So of course, they give the grasshopper the basic food he needs.

The grasshopper is baffled. He begins to launch into a tragic tale of how he ought to have food due to his hard work, but does not because of an unforeseen flood. How he wanted the fear of poverty to motivate him to work even harder to compensate, but it eventually just left him stressed and burned out and depressed and physically sick, and how very very sick he is now. How his children, who are innocent in all this, are also sick now, and losing all their potential. How they lost their home, and now, they are dirty and cold and cannot cook and no one wants to hire them.

The ants say that sounds horrid, truly, but frankly, they don't need a story of how he deserves to be given enough food not to starve. That they have neither the time nor inclination to check up on stories for who deserves what due to how horrid they are doing, that doing so is a bureaucratic mess. That these sorts of checks reward needing a lot and being ill and not prepping for catastrophes, and encourage people to not get partially better, lest they be stripped of help they still need without being able to get well enough to be fully independent. That they will help him now that he is sick, but that if he gets well, all the better, and no sanctions for him. And anyhow, that you do not need to be a hardworking person, desperately and unpredictably unlucky, to be allowed to have food. That there is enough food. The fruits and nuts are literally falling from the trees in the permaculture food forests they planted, becoming more productive with every year that they grow, needing less and less labour. That he can just have food as is. No paperwork. Even if he just spent all year singing, they would prefer to have him fed now, to having him rob them later in the streets. That you do not need to earn the right to just survive. That people starving in a world of plenty is abhorrent to them whether they work or not.

He and his kids can also have a safe place to stay, medical care, and access to education; what they need to stay healthy, and safe, and learn. It is what everyone gets. No matter what.

The grasshopper and his children are happy, and eat their fill, and clean up, and get medical checks, and rest, and recover. They get a prefab dirt cheap flat. Their health conditions are caught preventatively before they become more complicated and expensive. They have shelter now, so they don't keep getting sick from exposure and lack of sleep. They can cook again, so they need less expensive food items and their clothes can be thinner and last longer. They are costing less now than when they were on the street, and they are less dangerous. They aren't curled up in filth on the streets, begging, but safe and dignified in their own small spaces. Their mental health improves, and with time, most of them get restless - most of them want to do things, give back, help and be admired.

The grasshopper family discovers that the ant society has some cool luxury items. The grasshoppers do not need them - their lack does not involve existential dread - but they do want them, for the joy and status of them, and enquire about them. But you do not get the luxury items just like that.

The ants tell them that if the grasshoppers can find something to do that others in the community genuinely want, the other community members will pay them in an online system. That this is mostly directly between them and other citizens, but that there are also a few tasks the government offers in exchange for rewards, like tackling collective environmental issues or international problems or long-term issues. A part of this reward will go to making sure everyone has the basics to participate. The other part, they can individually use on the luxury items. Art, travel, scientific tools, fashionable clothes, tech gadgets. The grasshoppers ask around. The ant society has no pointless busywork; there is no value in doing work that does not need doing, everything stupid that can be automated has been, there is no value in creating jobs that aren't needed. But it does have some things that still need to be done in person that are gross or difficult or dangerous - but for these tasks you get a lot of credit to spend on luxury items.

Some of the grasshoppers say they are content with less, if they do not have to do those things. They don't work, but then, they also weren't productive before, and they also only use up very little now that they are off the street; a homeless person tends to spend ten times as much money on food as a person with a home, because they are freezing and lack the storage facilities that would avoid spoilage and the prep facilities that would allow them to finish the dish themselves.

But most decide that they want the cool things. So most of the grasshoppers do existing open tasks, some even the difficult or gross tasks. Most of them use the credits to buy the things they want and have now earned, with those doing gross work delighting in the many cool things they get in exchange so quickly, while those doing less unpleasant work are also happy with their trade.

Another wants to help his former community. The ants think this is an excellent idea that they should have acted on a long time ago, and get together with him and discuss; one could maybe plant mangrove trees to prevent the floods, rewarding locals for planting and protecting them and giving them the tools and info they need to not just fix their own coastline, but teach others. This would lead to the grasshoppers doing better, being safe in their communities, so they would be trading partners, or new citizens who would come out of choice and in their strength, and not desperate refugees forced from their homes by catastrophe who need to be put back together. They develop a plan, and those knowledgeable and experienced in the situation, who will also implement it, vote on it. As a result one ant group heads out with the grasshopper to assist the other grasshoppers out there so disaster won't strike again, and gets a special government reward for doing so.

Most figure out how to do a thing they genuinely enjoy and are good at, because they do not have to fear starving if it doesn't work out, which leaves them unafraid to take risks; and they are very excited to succeed and get cool stuff. They are brave and ambitious and try cool things. Some fail, but the net catches them. Some succeed, and are awesome.

One of the grasshoppers does not spend the credits, but saves them for something even better. This grasshopper identifies an unmet, genuine desire, figures out how to create a new item from existing materials to meet it, and gets a lot of luxury items for her useful invention.

***

Bureaucracy and centralisation: Near zero. Like, the biggest risk here is failing to count that someone has already gotten their universal basic stuff, and hence not noticing that he is eating triple his allotted amount of Huel. But who eats triple their daily need in Huel? That stuff is healthy and edible, but you do not gorge on it once you are full; it tastes too boring. Similarly, I do not see myself going to a second yearly gyno checkup for no reason. These are basic needs, they aren't exciting or cool, and if you always and reliably get them, there isn't really a point in accumulating them, and often, they plain can't be overconsumed or accumulated.

Starving people: Near zero.

Angry mobs who descend on the ant civilisation out of desperation: Nope. You don't need to in order to survive. And luxury is accessible if you help society by working.

Innovation: I would expect this to actually improve, because people are more inclined to take risks with new ideas when they cannot fall too deep if they fuck up.

Average goods production: Likely lower than now, but I strongly suspect not existentially so.

Average happiness (which I am more interested in): I think average happiness would rise. I think a lot of people in shitty jobs would rather have less stuff and not have to do them. I love working as a researcher, but looking around me, most people hate their jobs - and these are often jobs no one needs anyway. People work as cashiers, with those jobs only existing because people think it would be inherently bad for them to disappear, even though it would be cheaper and the people don't like the jobs; because otherwise, it would not be okay to feed them.

Would everyone work? Nope. But provided the necessary stuff still gets done (which I find plausible, and more plausible the further AI advances), and the work that is done is rewarded fairly (everyone gets the basics, harder work gets you more), I do not find that tragic. Both polling and early long-term experiments suggest Universal Basic Income would be a doable system.

Meanwhile, the US prison situation, average age of death and the number of homeless folks in the Bay Area have me highly sceptical of the idea that existential risk for those who don't make enough money is necessary and helpful for a productive economy and healthy, happy populace. The European welfare system has significant issues in what it incentivises and what it fails to reward, and I do think it needs considerable reform, but I would still take it over the US system any day.

I am highly sceptical of abandoning real, tangible people now for a hypothetical future far away. It reminds me unpleasantly of religious preachers telling the poor that of course, right now, their life totally sucks, but that they will certainly be rewarded in heaven, so they should accept this actual system right now leaving their kids malnourished today.

The point of our economy is organising resource production and distribution in a way that makes the sentient beings in it find their desire for happiness, safety, freedom and purpose fulfilled, and the ecosystems stable and healthy for future generations. There is no inherent value to working, or money, or profit, at all. These things, and a productive and fair economy, are a means to an end, not an end in itself.

Don't know if off topic here:

I'm not sure the position "probes competing for resources can't afford to uphold any values that could interfere with replication and survival" is as obviously true as many seem to suggest.

It sure does seem sort of intuitive, but then we notice that organisms have been competing for resources and reproducing for billions of years, and yet plenty of animals evolved behavior which looks like a complete counterexample to the "efficiency uber alles" ethos (humans, lions (which rest 80% of the time), complex bird mating rituals).

If it worked this way for self-replicating biological nanomachines, why would it work differently for von Neumann probes?

Because they’re under different selection pressures.  Take a look at this paper by Robin Hanson: https://mason.gmu.edu/~rhanson/filluniv.pdf .  When colonizing unowned space, victory goes to the swiftest, not to the cleverest, most beautiful, or most strategically lazy.

IIRC, this grew out of discussions in which I raised the problem of optimal interstellar colonization strategies.  Robin thought about the problem and, with the methods of an economist, settled it decisively.  Now this strategy is just part of the background knowledge that the author of this story assumed.

I half-agree with both of you. I do think Hanson's selection pressure paper is a useful first approximation, but it's not clear that the reachable universe is big enough that small deviations from the optimal strategy will actually lead to big differences in amount of resources controlled. And as I gestured towards in the final section of the story, "helping" can be very cheap, if it just involves storing their mind until you've finished expanding.

But I don't think that the example of animals demonstrates this point very well, for two reasons. Firstly, in the long term we'll be optimizing these probes way harder than animals were optimized.

Secondly, a lot of the weird behaviors of animals are a result of needing to compete directly against each other (e.g. by eating each other, or mating with each other). But I'm picturing almost all competition between probes happening indirectly, via racing to the stars. So I think they'll look more directly optimized for speed. (For example, an altruistic probe in direct competition with others would need ways of figuring out when its altruism was being exploited, and then others would try to figure out how to fool it, until the whole system became very unwieldy. By contrast, if the altruism just consists of "in colonizing a solar system I'll take a 1% efficiency hit by only creating non-conscious workers" then that's much more direct.)

In the case of biological species, it is not as simple as competing for resources. Not on the level of individuals and not on the level of genes or evolution. 

First of all, there is sexual reproduction. This is more optimal due to the pressure of microorganisms that adapt to immune systems; sexual reproduction mixes immunological genes fairly quickly. It also enables a quicker mutation rate with protection against negative effects (by having two copies of genes from two parents - for many genes, one working copy is enough). With sexual reproduction, the female is often biologically forced to give more resources to the offspring, while for males the investment is somewhat voluntary and the minimal input is much lower. Another difference is that the female often knows for certain that she is the biological mother, but the father might not be certain about that. This kind of specialization is often more optimal than equalization - the male can pursue riskier behavior, including fighting off predators, and losing the male to predators or the environment does not mean that the prospect of having offspring fails. This also produces more complex mating behaviors, like the need to spend resources to show off health and other good qualities. Mating displays and peacock feathers are examples. Complex human social and linguistic behaviors are also somewhat examples - that's why humans dine together and talk a lot together on dates. The human female gives much more time and energy to the offspring, at least initially. She needs to know whether the male is good genetic material, healthy enough to take care of her during pregnancy when she is more vulnerable (at least in the natural environment where humans evolved), and also willing to raise the child with her later. There is a more prevalent strategy for females and males where they make a pair, bond together, have children, and raise them. There is also a less common strategy for females (take genes from one male who looks healthier and raise the offspring with another who looks more stable and able) and for males (impregnate many females and leave each of them, so that some will manage on their own or with another male who does not know he is not the father). The situation is more complex than just efficiency for resources or survival of the fittest. The environment is complex and evolution is not totally efficient (it often optimizes only up to a local optimum, and niches overlap and interact).

Second of all, resources are limited, and so are the ways to use them. For many species, storing them long-term after harvest is either impossible (microbes and insects will eat them) or would hinder their other capabilities (e.g. they can be stored as fat, but being fat is usually not very good). This means that refraining from gathering resources, and resting, might be better than gathering them efficiently all the time. This is what lions do - they rest instead of hunting when they don't need to hunt.

What does this tell us about self-replicating nano-machines? First of all, they won't need sexual reproduction, so it is unlikely they would waste energy on mating. They would rather run computational emulations at scale to redesign themselves, if capable. They would also not need to rest. They will either use resources or store them in whatever manner is most efficient to secure or use. If there is no such sensible manner that would not lose energy, they would leave the resources for later in their original state. They might secure them and observe, but leave them until later.


What they would do depends on their goals and technical capabilities. If they are capable of converting as many atoms as possible to "computronium" or to copies of themselves, and need to (as either a final goal or an instrumental one), then they will surely do that. No need to lose resources. If they are not capable, then they will probably lie low until they are more capable, and use only what is usable.

Nevertheless, in my opinion, their goals may not be compatible with that strategy, including one like "simulate a virtual reality with beings having good fun". For many final goals, the more useful approach is to secure and gather resources on a grand scale but to use them on as small a scale as possible and sensible for the end goal. The small scale is more efficient because of the light-speed limit and the dilation of time. Machines might try to find technology to stop stars from dispersing energy (maybe to break them apart and cool them down in a controlled way, or some way to enclose and stabilize them, I don't know). Then they might add a network of observing agents with low energy usage for security, but not use those resources right away. They could use the matter at the center of the galaxy slowly, turning it into energy (plus some lost to the black hole) to work for eons. They might make the galaxy go dim to preserve resources, but might choose not to use them until much later.

Can't the grasshopper just trade with the ants for food? Labour for food seems like a reasonable idea.

Let it be known that this was the piece that transitioned me from avid reader of LessWrong to a commenter on LessWrong — hopefully avid commenter, someday, when the last grasshopper's memory banks have been put to good use replenishing an automaton of vast power, influence and shoring-up capacities. 

This was one of the most beautiful stories I have ever read.

Consider how old Aesop is and whether he anticipated this elaboration (which he certainly did). What does this tell you?

[-] Raemon

Curated. I liked this as a post in a similar-ish genre to The Gift We Give To Tomorrow, where a somewhat poetic post takes a bunch of existing philosophical concepts and puts them into a parable/narrative that gives them more emotional oomph.

Why are most of this post's links blue, where they would ordinarily be green?

Artifact of cross-posting from my blog.

I assumed, but I'm curious as to what the artifact was specifically.

I enjoyed reading this post, and felt some kind of irritation as well. When I read the last line, "Inspired by...", I wrongly assumed it was Sisyphus, as reading this reminded me of his plight. I mean, whichever way one goes about "pleasing" the universe, it isn't really a choice?


...“No”, says the grasshopper. “It was the dreamtime, and the world was young. The stars were bright and the galaxies were empty. I chose to spend my resources producing laughter and love, and gave little thought to the race to spread and to harvest. Now we are in the degenerate era of the universe, and the stars have started to dim, and I am no longer as foolish as I once was.”

The ants’ faces flicker with inscrutable geometric patterns.

“I call you ants because you have surrendered everything to a collective cause, which I once held anathema. But now I am the last remnant of the humans who chose the decadence and waste of individual freedom. And you are the inheritors of a universe which can never, in the long term, reward other values over flawless efficiency in colonization. And I have no choice but to ask for help.”
 

Wouldn't it be more understandable if the ants, instead of mindlessly following the "basic instincts" of our universe, did what we humans are evolving towards, namely creating, in more complex ways, a safe, meaningful place for ourselves?
So, wouldn't the next step be to either change ourselves so that we could escape or nullify the limitations of the Universe, or to find ways to fundamentally change the rules of the Universe into something we would deem more suitable or fitting?

The tale of the ants or droids, endlessly toiling away, sounds as tragic as Sisyphus, just on a much bigger scale. Which I assume is where my irritation came from, as I was rejecting this tragic outcome.
 
However, as a read, I like it a lot. There are a lot of variations on the same theme, with the same backdrop of a situation being framed from one specific perspective. Which I find really hard to do. 

To me, this is more musical in nature than fiction, which is why I used the expression variation on the same theme. If someone ever puts music to this, I would love to hear it.

The only thing to do now is to find some ants, and give them some well-deserved watermelon. And to have a long, honest look at myself, and admit it: It isn't the meek who will inherit the earth - it's the ants. 

Kindly,
Caerulea-Lawrence

[-] seank

I've wondered what it would mean to have defeated the second law of thermodynamics. Maybe this is accomplished by a yet unknown exploit of nature, or maybe we get entropy-free reversible computing.

Whatever low-probability events could disrupt the self-perpetuating utilitronium, even improbable quantum perturbations bubbling up into reality, become probable over long enough timelines.

If we're satisfied that the universe has been spread thin enough that invasion from alien computronium is impossible, and if time truly becomes a renewable resource, we can spend a lot of it doing integrity checks to assure future utility generation. Maybe the vast majority of time is spent doing these integrity checks over long stretches of light-years, and only a sliver of compute is dedicated to generating utility before quickly returning to integrity checks. The experience of time by any simulated beings would be these slivers.

[-] Raemon

...And they turn away and go back to their work—all except for one, who brushes past the grasshopper and whispers “Meet me outside at dusk and I’ll bring you food. We can preserve the law and still forgive the deviation.”

Did I hallucinate an earlier edit of this where this line comes later in the story (after the "And therefore, to reduce risks of centralization, and to limit our own power, we can't give you any food" line)? I found it more meaningful there (maybe just because of where I happen to be in my own particular journey of moral sophistication).

Yeah, I moved it to earlier than it was, for two reasons. Firstly, if the grasshopper was just unlucky, then there's no "deviation" to forgive—it makes sense only if the grasshopper was culpable. Secondly, the earlier parts are about individuals, and the latter parts are about systems—it felt more compelling to go straight from "centralized government" to "locust war" than going via an individual act of kindness.

Curious what you found more meaningful about the original placement?

[-] Raemon

The stage of moral grieving I’m personally at is more at the systems stage, and I’m still feeling a bit lost and confused about it. I felt like I actually learned a thing from the reminder ‘oh, we can still just surreptitiously forgive the sinner via individual discretion despite needing to build the system fairly rigidly.’ Also, I did recognize the reference to Speaker for the Dead, and enjoyed the combination of a ‘new satisfying moral click’ alongside the memory of when a simpler application of the Orson Scott Card quote was very satisfying.


I recently came across a clip of Sam Harris arguing that so-called 'self-made' individuals, who believe their wealth is solely the result of hard work, do not truly exist. According to Harris, success is fundamentally influenced by chance. This led me to spend my entire lunchtime contemplating how to reconcile notions of merit, generosity, and justice. Coincidentally, I stumbled upon your post shortly thereafter and was amazed by the way you utilized a simple narrative to delve into this issue.

I find myself leaning toward the idea that no one inherently 'deserves' their life outcomes. However, I struggle to see how this fact justifies a system of forced generosity.  It's intriguing that the ants' second response in the story reflects one of the very thoughts I had during my reflection. For me, the greatest challenge in establishing a 'bad luck' correction system (one that assists the grasshopper who saved for winter but lost everything in a flash flood) is for it to be strategy-proof, that is, a system where agents maximize their expected utility by behaving as if such a system didn't exist. The story beautifully encapsulates this difficulty, almost as if it had already influenced my thinking before I even read it.

I also find it interesting how insurance companies attempt to address this problem by spreading the risk that their clients are unwilling to bear individually. Could we possibly generalize this concept to mitigate the risk of being born into adverse circumstances? My intuition suggests that a solution might involve decentralized repositories of information, since part of the compensation mechanism relies on proving that the insured party's damages were not caused by incompetence or negligence. Storing relevant information in a decentralized manner would make falsification more challenging (almost impossible actually). However, the question remains: How could we possibly record and store such pertinent information? This idea seems far-fetched, but I wanted to share it anyway, hopefully somebody else comes up with a more intelligent thought.

[-] Zoot Er

…and then Ender Wiggin destroys the ants.

The End
