I was once in conversation with a group of 70-year-olds and I posed the question “What should you do when the right and the good are in conflict?”
One of the men, white haired, took me seriously and answered “Choose the good… What's considered right are heuristics and conventions that efficiently expedite the good. They are important but secondary.”
I do not know his name.
Valueism starts with the premise that one of the greatest goods to pursue is value creation. As long as you are pursuing that goal, valueism has something to say.
Valueism is a philosophical compass that guides you towards value creation.
As actors in the world, as agents, we create value and harm often at the same time. When a parent takes away their child's phone as punishment there is a value component, teaching them that actions have consequences, and a harm component, the child’s internalization of the disrespect. That duality might be uncomfortable, but it really shouldn't be controversial. In case it is, let's simplify our lives down to good advice and bad advice.
We can all admit that we sometimes give bad advice, though what fraction of our advice it makes up remains a suspicious mystery. Let’s assume it is a fixed percentage rather than a mere handful of unconnected occurrences.
Here we have two people A and B, and this is what their aggregated advice looks like:
How do we interpret this?
We have two major categories: value and harm. Or, for our example, good advice and bad advice.
From these we can derive three types of value.
Absolute Value, which is equal to |value|. This is the total value created, and is measured by the magnitude of the blue bar. In a personal context this value is referred to as your effectiveness.
Then there is Relative Value, which is calculated as |value| / |harm|. Ideally it is a number greater than 1, and it is a measure of your efficiency.
There is Net Value, which is calculated as |value| - |harm|. This is value creation. From this, net value / you = your generativity, which is the focus of this framework.
There is one last second order concept, which is ( |value| + |harm| ) / you = your impact.
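To make the four metrics concrete, here is a minimal sketch in Python. The numbers for A and B are hypothetical, chosen only so that the comparisons come out as described in the discussion that follows.

```python
# A minimal sketch of the four metrics defined above.
# "value" and "harm" are the magnitudes |value| and |harm|.

def metrics(value, harm):
    return {
        "absolute_value": value,         # effectiveness
        "relative_value": value / harm,  # efficiency
        "net_value": value - harm,       # generativity
        "impact": value + harm,
    }

# Hypothetical numbers: A gives more good advice and more bad advice.
a = metrics(value=10, harm=4)
b = metrics(value=8, harm=2)

# A is more effective (10 > 8) and more impactful (14 > 10),
# B is more efficient (4.0 > 2.5), yet both have the same
# net value (generativity) of 6.
```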
So let's look at the graph again.
Person A creates greater absolute value, but also creates greater absolute harm. He gives more good advice, and also more bad advice. The graph shows him to have greater impact, some of which is negative. He also has greater effectiveness, he creates more value for his community, not counting the harm.
Person B creates less |value| and less |harm|. She creates more relative value than person A because she is more reliable when asked for advice and says “I don’t know” more often. Person B is more efficient yet less impactful.
The most important part is that A and B create the same Net Value. After |value| - |harm|, once their respective bad advice is subtracted, each of them is helping those around them equally. They have the same generativity, so under valueism they have the same moral standing.
Note that Person A may be a more controversial figure, and Person B may be more widely liked. Valueism is agnostic about the love and hate in this case and believes they should be respected equally.
Now let's look at Person C and D:
C and D create the same relative value. They both create 4 units of value for every 1 unit of harm. They are equally efficient, and yet under valueism, with value creation as the goal, they do not have the same moral standing.
D has greater effectiveness and impact. D gives much more good and helpful advice than C. Even though the relative value of their advice is the same, D is a better valuist.
Note that D creates more harm than C, 3.3 times as much actually. Yet in this framework, that is no grounds to dismiss the greater net value D created. D is more generative and that is how valueism decides who is worthy of more respect.
Now let's look at person E and F:
Persons E and F create the same absolute value; they are equally effective at creating value. Let’s say they are similar people, who give similar advice, but person F creates more harm. He reaches more people and gives bad advice more often, while E holds herself to a higher standard.
Person F is more impactful, yet too much of that impact is harm. Impact can be negative, which isn’t always acknowledged.
Person E is more efficient, which then makes her more generative. She creates more net value by minimizing the harm she does. She is the better valuist.
Let's take a detour.
There may still be a place for co-beliefs such as that every person is equal before God or equal before the law. These may not be derivable from valueism, but there is something that can be.
Potential.
Every human being has the potential to create great value and therefore should be given great consideration.
So valueism can still be moral even if it compares how much value individuals are creating.
A valuist is not dismissive of others, because anyone could become a great contributor. Therefore, things like mentorship, education, and support systems that unlock the potential in others, create immense value.
And let me suggest that the future’s uncertainty does not allow a valuist to be complacent.
There is one more interesting case to look at:
Here G and H are equally harmful: their bad advice creates the same amount of harm. Yet at the same time, H gives more good advice and is more effective at creating value.
We could even think about G and H as the same person at different points in time. H being a more mature and effective valuist. H creates more net value by being more effective.
Summarizing these four graphs: A and B were worthy of equal respect even if different in their approach. In the second comparison, D was the better valuist because of greater impact. In the third example, E was the better valuist because she was more efficient. In the last group, H was the better valuist because he was more effective. Ultimately, the measure of all of them was their generativity.
Why isn’t effectiveness the goal of valueism?
Well, it is one of them.
It seems to me that greater generativity is the direction, and greater effectiveness is one of the paths.
Maybe this will become a point of confusion if I don’t address it, but maximizing generativity (net value) is better at value creation than maximizing effectiveness.
Improving your effectiveness 10X may just barely be possible, but what about 100X or more? At some point there are forces pushing in the other direction, like personal flaws and limitations.
With effectiveness as the goal, what are you really pursuing?
If it is to maximize effectiveness, then the amount of harm you create ceases to matter, and the morality becomes incoherent.
If it is to minimize ineffectiveness, then you are chasing efficiency and forward progress quickly becomes untenable.
With the latter definition, an ‘effective valuist’ would have to stop at some point in their journey, the moment efficiency regresses, while the ‘generative valuist’ would keep going.
This is a point of interest, but for another time.
Before we explore what these ideas imply there is an important question that needs to be answered.
How do we measure net value creation?
This seems like an intractable problem, but it’s not.
The real world seems to have already created mechanisms, namely, markets and status hierarchies.
Markets efficiently allocate capital towards value creation. Corporations operating in markets are rewarded with profits in large part by creating a product or service that others value. They must produce something valuable for their consumers, and return value to their shareholders. Employees are mini agents within the corporation, parts of the larger agentic organization. So lastly, the corporation can create value for itself and, by extension, its employees.
Instead of economic value, status hierarchies reward social value creation. There is a saying “All is fair in love, war and status” which is a minor exaggeration of an imperfect correlation. We seem to inherently ‘respect’ those who create social value, rewarding them with status in our communities.
Not all value creation has economic reward. Think of a loving parent, or a groundbreaking scientist. So it’s a given that monetary income as a signal of economic value creation is imperfect, perhaps a 0.3 to 0.5 correlation.
Yet, rewards can also be intrinsic, such as the feeling of safety from earning enough money for your family. Or extrinsic but non-monetary: the love and respect of your wife, her good humor, her happiness.
We often care more about these intangibles than about money, so valueism is about both.
What other mechanisms exist, or could be created? How do we strengthen the signal?
These are all great paths of inquiry and action, but let’s change direction.
I’d like to use Uber as a way to ground what value really is. Money is a proxy and not the only way to understand value, we will get to that.
I will assume corporations create value and that they create harm. Yet, it is not binary and it is not equal. Corporations on net, create positive value for the world. Let’s not engage in all-or-nothing thinking or whataboutism.
One interpretation of value created is consumer surplus which is hard to measure but has been done at least once before with Uber. This paper does just that:
USING BIG DATA TO ESTIMATE CONSUMER SURPLUS: THE CASE OF UBER
ABSTRACT: Estimating consumer surplus is challenging because it requires identification of the entire demand curve. We rely on Uber’s “surge” pricing algorithm and the richness of its individual level data to first estimate demand elasticities at several points along the demand curve. We then use these elasticity estimates to estimate consumer surplus. Using almost 50 million individual-level observations and a regression discontinuity design, we estimate that in 2015 the UberX service generated about $2.9 billion in consumer surplus in the four U.S. cities included in our analysis. For each dollar spent by consumers, about $1.60 of consumer surplus is generated. Back-of-the-envelope calculations suggest that the overall consumer surplus generated by the UberX service in the United States in 2015 was $6.8 billion.
So returning to valueism, we will combine producer and consumer surplus into a proxy for ‘value’. We’ll join things like taxi owners losing their jobs, vehicle emissions and worker injuries into a ‘harm’ category, which we will make up a number for.
Uber in 2015 might look something like this.
The previous sections show that none of effectiveness, efficiency, or impact is the right measure of value creation. So with Uber we will skip all the way to generativity (net value).
Value per ride dollar spent is consumer surplus ($1.60) + producer surplus ($1.00), which combine into our absolute value. Harm per dollar is a made-up number ($0.80), which represents income lost by displaced taxi drivers and worker injuries, among other things. The generativity is $1.80 per dollar spent on a ride, and Uber in 2015 gave tens of millions of rides.
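The per-dollar arithmetic is simple enough to write out as a sketch; remember that the $0.80 harm figure is invented, as noted.

```python
# Uber in 2015, per dollar spent on a ride.
consumer_surplus = 1.60  # estimated by the paper cited above
producer_surplus = 1.00  # the dollar itself, captured by driver and platform
harm = 0.80              # made-up figure: displaced taxi income, injuries, etc.

absolute_value = consumer_surplus + producer_surplus  # 2.60
net_value = absolute_value - harm  # 1.80 of generativity per dollar spent
```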
Under valueism, Uber is a net good for our society and should go on to great things.
But isn’t Uber also doing harm?
Yes they are. And yet their positive impact has been profound.
Let’s move away from money as our measure of value.
Absolute value, what does it represent? Money is just a proxy, consumer surplus is just an approximation.
What does it look like?
It’s the moment you landed in a new city at 2:00 AM with no one to pick you up, and a ride was a few taps away. That is valuable.
It’s the moment your friend leaves a bar drunk. If she drives, she may kill someone or die herself, but she does not have to. That is valuable.
It’s convenience, availability, and optionality, among many other things. It’s the problem you had, that was solved.
Use your imagination to make value concrete.
If Uber had focused solely on reducing harm, they would have likely reduced their positive impact (generativity), or perhaps never even started.
Valueism does not particularly like or dislike the goody-two-shoes who does very little harm, yet not much good either.
Agents are imperfect, limited and incompetent in many ways. Yet as valuists we seek to create net value for the world and the people in it. Action is a way to create value, even when it is suboptimal.
What does valueism have to say about intent?
Valueism focuses on the outcomes of your intent and actions.
A politician or voter who intends well with a policy that is later found to have perverse and unintended consequences is not viewed favorably.
On the other hand, a child who accidentally solves a mystery of great importance is viewed very favorably.
Valueism can accept harm at some level, though not thoughtlessly. Sometimes you may need to increase your impact even if the harm you create also increases; other times the best action is to decrease your harm and increase your efficiency.
These two examples, of the voter and the child, are close to gambling or rolling the dice so to speak.
Valueism pushes you to find an edge.
Boldness and action increase your impact. Caution and wisdom increase your efficiency. Curiosity and knowledge increase your effectiveness. Virtues help us navigate external risk and uncertainty and give internal meaning to our lives, like wonderful dual purpose coins.
Strategic thinking is a more effective way at creating value than rolling the dice. Social intelligence, emotional intelligence and so on help as well.
This is a growth oriented philosophy.
What are the proper units of time?
Valueism works at the interaction level and when aggregating many interactions, over days, years or lifetimes. At whatever scale, the valuist seeks to maximize net value creation.
A hug might have a positive net value and an infinite relative value. No harm done. Gossip might be a two-sided interaction, with much of the social value cancelled out by the social harm. Violence is almost always net harmful. Even when taking a life to protect a life, that action is not seen as completely good, and non-lethal methods would be seen as more efficient.
What are some odd conclusions made by valueism?
Here is one that neither you nor I will like. Astrology, as judged by the market, has positive net value. Asking yourself why, leads to some interesting territory.
Another is that no crime is unrecoverable (except x-risk crimes). Valueism does not rule out punishment for someone who is likely to cause more harm, but it does rule out death sentences in a world of radical life extension.
Another is that when following the law is net harmful, you should break it. Think Germans who hid Jews during WWII. This aligns with the intuition that laws should conform to morality, and not the other way around.
What about addictive substances?
That does seem to change the equation somewhat and is an interesting area of exploration.
Perhaps the difference lies in whether you choose to create value for others or for yourself which brings up altruism.
Unlike objectivism, there is a place for altruism in valueism. You can create value for your own sake because you will be rewarded by markets and status hierarchies, or by the intrinsic and non-monetary rewards. Alternatively, you can create value for their sake which is altruism. Either way valueism pushes you towards creating value.
How is the sum of actions calculated?
This system could surely be improved, but I am treating each action as a unit: a harm and value percentage pair, with a significance weight attached multiplicatively. The weight represents the significance of that action. From there it is simple addition.
IF harm >= 0 && harm <= 1
IF value >= 0 && value <= 1
IF harm + value == 1
IF significance > 0
[harm, value] * significance = action
action_1 + action_2 + action_3 + …
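The pseudocode above can be turned into a runnable sketch; the two example actions and their numbers are invented purely for illustration.

```python
# A runnable sketch of the action model: each action is a
# [harm, value] pair (summing to 1) scaled by a positive significance.

def action(harm, value, significance):
    assert 0 <= harm <= 1 and 0 <= value <= 1
    assert abs((harm + value) - 1) < 1e-9
    assert significance > 0
    return (harm * significance, value * significance)

def aggregate(actions):
    """Simple addition of weighted [harm, value] pairs."""
    total_harm = sum(h for h, _ in actions)
    total_value = sum(v for _, v in actions)
    return total_harm, total_value

# Two invented actions: a minor one and a far more significant one.
acts = [action(harm=0.2, value=0.8, significance=1.0),
        action(harm=0.1, value=0.9, significance=5.0)]
total_harm, total_value = aggregate(acts)
net = total_value - total_harm  # the net value of the aggregate
```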
Significance is a rather important concept, but intuitively understandable.
Different actions can have wildly different importance: changing cities versus changing clothes. The same action can also have vastly different significance: hugging your daughter before school versus hugging her at her grandma’s funeral.
All actions have consequences, but those consequences are not the same.
To calculate significance abstractly, place two actions on the positive side of a number line to establish their relative weight. Then, based on that, add any additional actions you are considering to the line in relation to the previous numbers. Significance is always positive, even if the action is net harmful.
Let’s move on to another important question.
Are value and harm really commensurable?
The core formula of Valueism is net value = |value| - |harm|. This assumes that value and harm exist on the same scale and can be directly subtracted from one another.
That is a necessary prior for valueism.
Under valueism there is no harm so sacred it invalidates creating value.
Yet there is still a place for shame and guilt. There are harms so significant they outweigh all the value someone has so far created.
Condemnation of actions makes sense in a simple way, and condemnation of people makes sense considering the practical limitations of human capability and lifespan. Maybe they will never make up for it in their lifetime. Maybe they won’t or can’t change their ways.
These may turn out to be true, but their potential for future value creation remains.
Harm is already antithetical to value, and within the framework of value creation we deal with it as wisely as possible.
What we get from letting go of sacred harm is redemption.
The years I spent teasing my baby sister out of love and fun. The teasing that caused her legit anger issues, and minor trauma. At this point I'd say my good intent is less important than the outcome. But yes, that can be redeemed by adding value to her life.
And in more significant cases, yes, you can redeem great harm with greater amounts of value.
Redemption is a rather important thing to have in a philosophy. By including that path we may inspire more people to create value.
There is another topic I’d like to get to.
Namely how valueism is in opposition to many cognitive distortions, those heuristics that feel oh-so-natural but are actually destructive. I will go over a few.
How is valueism in opposition to all-or-nothing thinking?
Actions and their aggregations are usually two sided instead of binary. We focus on net value which incorporates both value and harm. We don’t dismiss someone’s positive impact, because some of the impact was negative.
How is valueism in opposition to discounting the positive?
We count the harm, we count the value, not one or the other. We don’t downplay another person’s value creation for moral or social failings. We are not hyper focused on harm.
How is valueism in opposition to labeling and mislabeling?
We view people as agents in a continuous process of value creation over time. We believe in redemption and we reject that a person can be unforgivable. We believe in potential and reject that the value so far created is the final measure of a human being. We resist labeling people as "a failure" or "evil", instead we assess their actions.
So now with what we have covered, I’d like to explore conflicting motivations. Let’s return to altruism and self interest.
Say you're faced with a dilemma. Your wife calls you and asks you to pick up your son. It was her responsibility this time, but something came up. But this isn’t trivial to you either, it’s game night. You had prepared the whole week for this.
Say you as a person are 40% altruistic and 60% self-interested. We can take the action “picking up your son” and split it by those two motivations. You can pick your son up out of altruism or out of self interest.
We’ll use the framework of action = [value, harm] * significance, for the comparison.
Under altruism this is a 100% valuable action with 0% harm. The significance might be a 10.
Under self-interest this action is 80% harmful to you, and 20% valuable because you enjoy helping your wife. The game is important to you and under self interest leaving it behind might be slightly more significant, say a 12. Lastly, we can flip the harm and value by not going.
Altruism says go, while self-interest says stay. We have a choice to make and we don’t know what to do.
Putting this all together we get this table.
The easiest way to think through our choice is net value: the action ‘go’ is better, a 10 versus a 7.2.
That is one way to do it but there is another.
internal net value = net value * nature
Nature is the percentage of your will that a motivation takes up; the sum of your motivations should be less than or equal to 1 and greater than or equal to -1.
When factoring our conflicting motivations with the weights of our nature, the internal net value says stay > go. Just compare the numbers and you're done.
I pick up the phone and tell my wife no… and then get yelled at. She’s trying to change the equation.
Jokes aside, when internal net value is so close, that might be a moment of moral anguish.
This method allows us to have choices with more than 1 motivation, and also works for sums of actions as in the chain of actions it takes to achieve a goal. Just add up the actions, convert into net value and stop there or multiply by the weights in your nature.
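As a sketch of the game-night arithmetic, reading each option under the motivation that favors it, with the numbers given above:

```python
# Sketch of the game-night dilemma using the numbers from the text.

def net_value(value, harm, significance):
    return (value - harm) * significance

nature = {"altruism": 0.4, "self_interest": 0.6}

# "Go" under altruism: purely valuable, significance 10.
go = net_value(value=1.0, harm=0.0, significance=10)    # 10.0
# "Stay" under self-interest: the flip of going, significance 12.
stay = net_value(value=0.8, harm=0.2, significance=12)  # 7.2

# Plain net value says go (10 > 7.2)...
go_internal = go * nature["altruism"]           # 4.0
stay_internal = stay * nature["self_interest"]  # 4.32
# ...but internal net value narrowly says stay (4.32 > 4.0).
```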
Reflecting on the math, it seems altruism as a motivation pulls us towards creating more net value. It may also be a force that pushes us away from causing harm.
Let’s give one more example.
Say your young daughter pulled the cat's tail and you are considering whether or not to take action.
Do you put her on a time out by the stairs?
Have a look.
We are looking at the same action, so the harm, value and significance are all the same.
If we decide by net value the choice is simple, because it is positive, but what about if we have conflicting motivations again.
Say you value justice; it’s in your nature. Meanwhile, you strongly dislike hypocrisy; it is -20% of your nature.
It turns out that you often pull the cat's tail too, but you do it gently. You fear that with this timeout, you may teach your daughter hypocrisy so you are weighing it against your other values.
Note that your nature does not add up to 1, but the conflict between these two motivations is what you are considering.
The internal net value interpreted through the lens of justice is a 1.2, but viewed through hypocrisy it is a -0.8. What do we do? We sum them.
The internal sum is 0.4, which means it has almost no net value to us. We are reluctant, but we eventually overcome our hesitation and put our daughter on the time out.
She cries, of course, which is that 0.1 harm number we feared, but she’ll be fine.
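The timeout arithmetic can be reconstructed as a sketch. Only the -20% hypocrisy weight and the 0.1 harm are given in the text; the 0.9 value, the significance of 5, and the 30% justice weight are inferred so the stated 1.2 and -0.8 come out.

```python
# Reconstructed timeout arithmetic (several inputs inferred, see above).

def net_value(value, harm, significance):
    return (value - harm) * significance

# One action, the timeout. Value 0.9 and significance 5 are inferred.
nv = net_value(value=0.9, harm=0.1, significance=5)  # 4.0

nature = {"justice": 0.3, "hypocrisy": -0.2}  # justice weight inferred
internal = {k: nv * w for k, w in nature.items()}  # justice 1.2, hypocrisy -0.8
internal_sum = sum(internal.values())  # 0.4: barely worth doing, but positive
```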
Do we choose net value or internal net value?
Internal net value allows for freedom and diversity. Net value is factored in, but the good of the agent is as well. Our actions have consequences on the world but also on us, so we factor both. With internal sums, virtues and motivations become as important as they intuitively seem to us. By optimizing them we can do more good, while still being diverse and free to create what seems valuable to us. Morality is about doing good, but agents need to be able to navigate conflicts and uncertainty wisely.
From this we may be able to derive a human being's inherent dignity and sovereignty.
Now let’s talk about problems.
When faced with a problem, sometimes action does not even take place. Two people talk past each other because they can’t agree if it's a problem at all.
That is the foundation of how we consider problems and how I suggest a valuist calculates them.
IF(problem) == TRUE, then we have a problem.
Some problems also have greater significance, which I trust makes intuitive sense.
Lastly, problems can be solved by other people and under consequentialism this is known as contingency. We valuists are basically children of that parent philosophy so we will adopt it.
In this framework contingency is the probability anyone else will solve the problem.
IF significance > 0
IF contingency > 0 && contingency < 1
We get the formula…
TRUE * significance / contingency = urgency
The explanation is that we are multiplying our problem by its significance and dividing by the chance that others will solve it.
This allows us to compare problems with other problems. Take a look:
Here we have two problems A and B.
Problem B is more than 2.5 times as significant as problem A, and both have low contingency, meaning that few people can solve these problems for us.
But problem A has a 0.1 contingency versus B’s 0.3. Note that since contingencies are less than 1, dividing by them amplifies urgency. This matches the intuition that problems are worth solving even when other people could solve them.
When you do the math TRUE * significance / contingency, for both these problems, we see that problem A, even if less significant, has a greater urgency. Problem A is a 50 vs a 43. This problem will likely go unsolved if we don’t personally take action, so it takes priority.
More significance pushes up a problem’s urgency. Less contingency, meaning fewer people can or will do it, also pushes it up. And more contingency pushes it down.
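The urgency comparison above can be sketched directly, using the significances implied by the text’s figures (5 for A and 13 for B):

```python
# Urgency sketch; significances are inferred from the stated results.

def urgency(significance, contingency):
    assert significance > 0
    assert 0 < contingency < 1
    return significance / contingency  # the TRUE factor is just 1

a = urgency(significance=5, contingency=0.1)   # 50.0
b = urgency(significance=13, contingency=0.3)  # about 43.3

# A is less significant but far less contingent, so it wins on urgency.
```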
Since contingency is always less than 1 and significance is always positive, we can rank our problems and solve them in descending order. There is no hard line between personal, professional and worldly problems and valuists like solving them even given our limitations and specific strengths.
Let’s dig into contingency.
We intuitively understand that many problems will go on to be solved without our help. Also many small day to day problems are ours alone to handle.
Who is going to fill up the gas in my car? Almost certainly me. Who is going to do the dishes? Me or my wife, probably me.
Let’s dig into those examples using our intuition.
Imagine someone facing these two problems during the week, say a lawyer working on an important class action lawsuit. Why does he spend even a minute filling up his car?
Because when the gas gets low, IF(problem) flips from FALSE to TRUE, and even though filling up his car is one of the least significant problems there is, no one else is going to do it. The contingency is too low, so he takes an extended lunch and goes to the station.
The next one is interesting.
Say my wife and I have the same problem, in this case dirty dishes. How we view the same problem might differ. She cares more about order and cleanliness than I do, so this problem has a higher significance for her. Yet dishes are usually my responsibility, so I have a lower contingency, which means she might do them but I’m more likely to. The math depends on those factors and it could go either way.
So with this method we can compare problems, and compare the same problem as viewed by multiple people.
To recap we have:
The prior that value creation is one of the greatest goods to pursue.
The prior that actions have a value and harm component, which can be binary or anywhere in between.
The prior that value and harm are commensurable.
Outcomes judged by net value = |value| - |harm|
Actions judged by internal sums and internal net value, derived from [value, harm] * significance, motivations and nature
Problems judged by urgency = TRUE * significance / contingency
The flow of valueism is from problem to action to outcome.
So we’ve covered a lot of ground.
We talked about value and harm, generativity, effectiveness, efficiency, impact, respect, potential, redemption, reward, punishment, altruism, self-interest, justice, hypocrisy, motivations, nature, choices, problems, actions, and outcomes.
Seems like a good start.
I’ll leave by saying valueism is about creating value, not extracting nor destroying it.
If these ideas moved you, please help develop them.