Each person alone is powerless, a fleshy mammal that would quickly succumb to the elements. Networked together into factions, we devise ways to benefit “us” or defeat “them”.

Social platforms like Facebook, Twitter, and TikTok have brought this human story to unprecedented scale, connecting billions. Whatever the platforms might have done with this power, the chosen business model has been to monetize access to our minds. They divide us into groups and sell access to each group of “us” to the highest bidder, often pitting us against a chosen group of “them”. Dictators and demagogues have grown particularly adept at using these systems to bend our minds and fracture society. With 15 years of hindsight, it’s clear that great power should have come with greater responsibility.

Perhaps humanity’s communications will always be rooted in surveillance-based business models. But let’s indulge for the moment the ideal of a platform that networks people together for a novel purpose: not for monetizing access to our minds, but rather with the express purpose of elevating humanity.

Regardless of how it’s implemented (whether via an AI, an oversight board, or some other means), what moral framework would be “best” for humanity?

Can this question be answered? It is bold to try, but the status quo is clearly unacceptable. Let’s zoom out to find an anchor among universal truths.

The universe started as nothing. Then somehow, matter flashed into existence. Out of nothingness there was suddenly Creation.

Over time, the matter clumped together in stars and galaxies. Relative to the empty void of space, the mind perceives these more complex structures as “interesting”.

On Earth, matter formed ever more complex structures, eventually to include living creatures with minds, most notably humans. We formed complex societies, so much more interesting and dynamic than just the sum of the parts.

This has somehow happened despite the overwhelming tendency of the universe toward more dispersion, more randomness, more entropy, as scientists would say. At a cosmic scale, “stuff” is moving apart and becoming less interesting over time. Eventually, atoms will just be randomly spread out, will no longer interact, and the universe will be dead.

Yet life, and particularly complex life, bucks this trend. If the universe were a TV screen, it went from an all-black nothingness (nihilism), to every pixel lit at Creation (conformity), and is headed toward a long-term outcome of random static (meaninglessness). Somehow, at least for now, complex life has filled the screen with the most interesting patterns the universe has ever created. We are gifted to inherit a dynamic and interesting world that is profoundly alive. Our rich mental lives, as we appreciate that interestingness and engage with it dynamically, multiply interestingness further still.

Given this, it seems logical that a universal morality should value higher levels of total interestingness. To maximize total interestingness is to appreciate and fulfill the gift our Creator has provided, respite from relentless entropy. We are not void, nor singularity, nor static; we are interesting, and we perceive interestingness.

Interestingness is difficult to measure in absolute terms, but higher or lower levels are easily recognized. A fish is complex and interesting. A school of fish is even more so. An ecosystem with schools of fish of different species is yet more interesting. When that ecosystem is observed by one diver, total interestingness has increased further, partly owing to their interaction but even more so to the mental experience of the diver. If that diver is a scientist or videographer who publishes her work, total interestingness has grown further still. Measuring the total would now include the mental experience of everyone who experienced the article or video.

A solitary person, a complex human with a rich mental experience and the ability to shape the surrounding environment, is interesting. A community of people working, playing, and inventing together is more so.

But suppose those same individuals come under the rule of a sadist: their heads are shaved, they’re stripped down, they’re assigned hard labor and given little food, they’re forbidden from speaking, and they’re tortured for any violation. Their once rich mental lives and their interesting social dynamics are all shut down, as they can focus only on survival. We recognize this horror innately, without the need for new measures of morality. But if we did measure using our TV metaphor, there would be a combination of random-static meaninglessness, all-white-screen conformity, and, as the men and women succumb, the black screen of nihilism.

Suppose we could somehow measure the change in total interestingness as this situation unfolds. The complex and interesting relationships, mental experiences, daily routines, and belief systems are destroyed, and that destruction of total interestingness is experienced as pain, loss, and suffering.

Happiness, beauty, pain, and suffering are all subjective experiences. They are also tricky optimization metrics, since they are subject to perverse manipulation (as with a brainwashed cult member or the wire-headed brain-in-a-vat of sci-fi dystopia). But “interestingness”, despite being impossible to measure in absolute terms, is at least based on measurable concepts described by complexity theory and information science, rooted in the second law of thermodynamics. The thriving civilization is measurably more complex and interesting, especially accounting for the rich mental lives of its people, than the dystopia. Consider how much more information it would take to describe everything that each individual is experiencing (including their thoughts) when the society is healthy versus when it is in chains.
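
To make that last point slightly more concrete, here is a toy sketch (not a serious metric) that uses compressed length as a crude stand-in for how much information a description requires; the function name and sample data are purely illustrative.

```python
import zlib

def description_length(experiences: list[str]) -> int:
    """Crude proxy for total interestingness: the compressed size, in bytes,
    of a description of what everyone is currently experiencing. Rich, varied
    experiences compress poorly and so score higher; repetition compresses away."""
    text = "\n".join(experiences).encode("utf-8")
    return len(zlib.compress(text))

# Toy data: a thriving community vs. the shaved-head dystopia.
thriving = [
    "Ana is sketching the reef fish she filmed on yesterday's dive.",
    "Ben is arguing with Ana about which cut of the video is funnier.",
    "Chloe is inventing a board game about tide pools.",
]
dystopia = ["Head shaved. Silent. Hungry. Focused only on survival."] * 3

print(description_length(thriving))  # larger: more to describe
print(description_length(dystopia))  # smaller: the same line repeated
```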

Speech certainly requires (and receives) legal protection. But just because certain speech is legal doesn’t mean it deserves amplification. An ideal system amplifies speech that furthers the aims of humanity, rather than amplifying speech that generates the most outrage or click revenue.

Suppose there’s a social media influence campaign. It uses a combination of standard posts, targeted ads, and message amplification from influencers and bots. The campaign’s manager wants to target a particular group of people with a particular message; in fact, each individual can be micro-targeted based on knowledge of their hopes, likes, and fears. The campaign is clearly a form of speech.

The campaign’s message could be to buy a particular shampoo. But that’s not today’s campaign. This one is designed to denigrate and dehumanize a particular class of people. The more this message is viewed, the less human the opponent group will seem. Maybe the campaign will accuse them of acts against children, of acts against God, or of plans to destroy the nation. If successful, the target group hasn’t bought shampoo; it has been prepped to commit atrocities. The shaved-head dystopia described earlier could easily begin with “speech” like this. Whatever the marginalized group, society is splintered apart violently and slides into fascism and dystopia.

Like hate speech, misinformation and disinformation are also forms of speech. By law, we are generally allowed to share untrue statements. This makes sense. Some of the most important truths were once heresy, considered so obviously false that it was a crime to utter them (that the Earth revolves around the Sun, for example). In a healthy system, falsities are increasingly laid bare as failing to explain the natural world.

Ideally, the more likely a claim is to be true, the more it is amplified. Relative to truth, which explains the real world, offers predictive power, and provides a foundation for discovering additional truths, false statements are inherently less interesting. They not only fail to provide signal; their noise drowns out signal, resulting in chaotic, nonsensical, and less interesting outcomes. Misinformation and disinformation should therefore be relatively dampened by amplification algorithms and media minders. We can subjectively measure their harm by probabilistically gauging the interestingness of the system (society) with or without the message’s amplification. The message can still be spoken, but its amplification should be appropriate.
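
Read as an algorithm, “dampened but not silenced” might look something like the sketch below. The function name, the floor parameter, and the idea of feeding in an estimated probability of truth are all assumptions made for illustration, not a description of any real ranking system.

```python
def amplification_weight(base_reach: float, p_true: float, floor: float = 0.05) -> float:
    """Scale a message's amplification by the estimated probability that
    its claims are true, without ever silencing it entirely.

    base_reach: the reach the ranking system would otherwise give the post
    p_true:     estimated probability the claim is true, in [0, 1]
    floor:      minimum multiplier, so the message can still be spoken
    """
    return base_reach * max(floor, p_true)

# A likely-true claim keeps most of its reach; a likely-false one is dampened.
print(amplification_weight(base_reach=1000, p_true=0.9))   # 900.0
print(amplification_weight(base_reach=1000, p_true=0.02))  # 50.0
```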

By describing moral calculations in terms of interestingness, accounting for the mental lives of the people affected, the health of ecosystems, and the impact on the future (measuring changes vs. the status quo so that destruction has a negative weight), we can weigh moral outcomes such as moderation decisions without having to think in terms of “left” and “right” or other labels. To be sure, the absolute numerical value cannot be calculated, but the framework could at least point in the right relative direction. Amplification and moderation decisions relate to the relative ranking (boosting or dampening) within a social stream.
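
Because the framework only claims to point in a relative direction, a stream could in principle be ordered by the estimated change in total interestingness versus the status quo, with destructive items carrying negative weights and sinking rather than being deleted. A minimal sketch under that assumption follows; the estimator itself is the hard, unsolved part and is simply supplied as a number here.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    delta_interestingness: float  # estimated change vs. the status quo;
                                  # destruction carries a negative weight

def rank_stream(posts: list[Post]) -> list[Post]:
    """Boost posts expected to raise total interestingness and
    dampen (but do not remove) those expected to lower it."""
    return sorted(posts, key=lambda p: p.delta_interestingness, reverse=True)

stream = [
    Post("Dive footage of a reef ecosystem, with field notes", +0.8),
    Post("Shampoo ad", +0.1),
    Post("Campaign dehumanizing a minority group", -0.9),
]
for post in rank_stream(stream):
    print(f"{post.delta_interestingness:+.1f}  {post.text}")
```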

Perhaps someday an AI will be able to perform the calculations. Perhaps we’ll arrive at a governance system where online moderators and jury systems do the hard work. But at a minimum, we can hopefully move on from specious debates that amount to “freedom for me but not for thee” toward a framework rooted in a universal morality.

Amplification and moderation are not the only ways that social platforms can drive total interestingness to higher levels. Collective action problems, questions of how to reach a critical mass toward a societally beneficial action, can benefit from coordination. For humanity to achieve its true potential, social platforms should ideally build in ways to commit: “I will if you will” or “I will if X,000 others will”. Platforms such as Kickstarter run along similar lines and provide inspiration. These commitments could be layered into social streams, with amplification based on expected impact on societal interestingness. In this way, humanity can avoid “race to the bottom” spirals of “every man for himself”, “I’ll pollute because he probably will anyway”, and so on. These systems for planetary-scale collective action can lift society to outcomes that we and our grandchildren would find most interesting… which is to say, dynamic. beautiful. ALIVE!
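
Kickstarter-style pledges are essentially assurance contracts. As a rough sketch of how an “I will if X,000 others will” commitment might be modeled inside a platform (the class and method names here are hypothetical):

```python
class AssuranceContract:
    """'I will if N others will': pledges only take effect once the threshold is met."""

    def __init__(self, action: str, threshold: int):
        self.action = action
        self.threshold = threshold
        self.pledgers: set[str] = set()

    def pledge(self, user: str) -> None:
        self.pledgers.add(user)

    @property
    def activated(self) -> bool:
        # No one is committed until enough others commit too, which removes
        # the "I'll defect because he probably will anyway" excuse.
        return len(self.pledgers) >= self.threshold

contract = AssuranceContract("bike to work every Friday", threshold=3)
for user in ["ana", "ben", "chloe"]:
    contract.pledge(user)
print(contract.activated)  # True once three people have pledged
```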

3 comments

> Perhaps humanity’s communications will always be rooted in surveillance-based business models. But let’s indulge for the moment the ideal of a platform that networks people together for a novel purpose: not for monetizing access to our minds, but rather with the express purpose of elevating humanity.

Library Genesis

> Regardless of how it’s implemented (whether via an AI, an oversight board, or some other means), what moral framework would be “best” for humanity?

Huh. A goal seems more specific, and easier to orient towards. But less defined stuff can be important as well.

> The thriving civilization is measurably more complex and interesting, especially accounting for the rich mental lives of its people, than the dystopia.

I'm not sure it's the right thing to optimize for. Dystopia: the Matrix, where everyone gets a different unique world, with very different cultures. This might 'be more interesting' and also not be what we want.


The rest covers a few different things:

- Interesting
- True
- Moderation?
- Universal morality (a call for fair rules applied evenly)


Overall it mentions interestingness more than truth. The usual classification might be:

- Nonfiction (part of truth)
- Fiction and other stuff (Interesting)

Both can be interesting, though fiction (and art) seems to go there more often. I think building a better world needs to be a more specific goal, and kind of seems like a third thing.


The call for more platforms, and for solutions to collective action problems, is interesting.

> systems for planetary-scale collective action

Thanks. Planetary scale collective action is the Big Goal. Right now, the dominant social platforms that form the public square are so far from that vision that I was trying to start with the question of “collective action in what direction”? For that, I wanted to make a moral argument without invoking religion or left/right bias. Anyhow, that was the goal. Thanks again. Minus 3 so far, so I guess I need to learn this forum better. Just trying to stand up to Moloch. Peace. 

A huge concern I have is "longtermism".

https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

Someone could read what I've written and say "yes, fulfill humanity's potential" in the longtermist sense. I don't think those are the same thing. Longtermism looks thousands or millions of years ahead, caring much less about the present day unless what happens now results in total extinction or the equivalent. As we'd say in finance, the "discount rate" matters (weighing present vs. future). I believe the metric should be measured (a) accounting for *changes* to state (so democracies falling to tyranny matters, curing tropical diseases matters, etc.) and (b) placing nearly all value on those currently living and their children and grandchildren. On (b), we all want to leave a better world for our children and grandchildren; and we want them to leave a better world for their children, and so on. But we'd place a higher value on our own (already alive) children than on our distant potential descendants 20 generations down the line. I think that same instinct needs to be preserved when considering the weights of current vs. future.
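
As a toy illustration of that instinct (the discount factor below is arbitrary), weights can fall off steeply by generation, so that those alive now, their children, and their grandchildren carry nearly all of the value without zeroing out the far future:

```python
def generation_weights(num_generations: int, discount: float = 0.35) -> list[float]:
    """Weight placed on each generation's outcomes, normalized to sum to 1.
    A steep discount puts nearly all value on the living and their children
    and grandchildren, without zeroing out the far future."""
    raw = [discount ** g for g in range(num_generations)]
    total = sum(raw)
    return [w / total for w in raw]

weights = generation_weights(20)
print([round(w, 3) for w in weights[:4]])  # the first four generations carry ~98% of the weight
```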

One aspect that longtermists and I would agree on is that collective action problems (described in the last paragraph) need to be solved if we are to create a better world (or even preserve it) for ourselves and future generations.