I remember that you posted some variant of this idea as a short form or a post some time ago. I can see that you feel the idea is very important, and I want to respond to it on its terms. My quick answer is that even under the "same" morals, people can undertake quite destructive actions to others, because most choices are made under a combination of perceived values (moral beliefs) and perceived circumstances (factual beliefs). Longer answer follows:
C. S. Lewis once tried to create a rough taxonomy of worldwide moral standards, showing that ideas such as the golden rule (do unto others what you would have others do unto you) and variants like the inverse golden rule (do not do unto others what you would not have others do unto you) were surprisingly popular across cultures. This was part of a broader project which is actually quite relevant to discussions of transhumanism. He was arguing that what we would call eugenics and transformative technology would annihilate "moral progress", but we can set aside the transhumanist argument for now and just focus on the shared principles---things like "family is good" or "killing is bad".
First of all, it should be a bad sign for your plan that such common principles can be identified at all, since it suggests that people might already have similar morals but still come to different conclusions about what is to be done. Second, it quickly becomes clear that some shared moral principles might lead to quite strong conflicts: I'm thinking about morals like "me/my family/my ethnic group is superior and should come first/take priority in situations of scarcity and danger". If four different nations are led by governments with that same belief, and the pool of resources becomes limited, fighting will almost certainly break out. This is true even if cooperation would prevent the destruction of valuable limited resources, increasing total resources available to the nations as a whole!
From a broader perspective, what I see in the steelman of your idea is something like "if we get people to discuss, they will quickly realise that their ideas about what is moral are insufficient, and a better set of morals will emerge". So they might converge on something like "we are all one global family, and we should all stick together even when we disagree because conflict is terrible". However, this is where the circumstances part of the choice comes in. I can agree in principle that unity is good and that life is sacred. But if I believe that someone else does not share those ethics, and is (for example) about to kill me or rob me, I might act in self-defence. Most of us would call that justified, even though it violates my stated values. Today many leaders pay lip service to respecting human rights and international norms... but it's just that those evil evildoers are so evil that we need to do something serious to stop them. My values are pure, but circumstances forced my hand. And so on, and so forth.
Now, if you can truly convince everyone that everyone else is also a reasonable and nice human being, then maybe some progress can be made, but this is a very, very difficult thing, especially when there are centuries of conflict to deal with and legacies of complex and multilayered traumas in many parts of the world. So all in all I think this proposal is very unlikely to succeed. I hope this makes sense.
Yea, it'd be a bonus to convince/inform folks that, if this works out, other people won't be evil,
& if we don't do that then some folks still might do bad things because they think other folks are bad.
But as long as one doesn't see a way this idea makes things actively worse, it's still a good idea!
Thanks for pointing that out tho. Will add that ("that" being "Making sure folks understand that, if this idea is implemented, other folks won't be as evil, and you can stop being as bad to them") to the idea.
Thanks!
Genuinely be willing to change their values/goals, if they hear a goal they think is more moral.
Regardless of all the other details, why exactly do you think world leaders care about what is moral?
Like, for example, I imagine Putin as a person who wants to be remembered as someone who made Russia bigger, stronger and more respected. I don't think he ever considers the moral dimension of this goal. Why think he does?
Yep!
It's just that "making Russia bigger, stronger and more respected" is what counts as his "moral" goals. They're his morals/goals/values. I'm using those words as synonyms.
That part's really just a tiny word-choice thing, and I appreciate that you have enough attention to detail to call me out for using the slightly wrong word there. Definitely a good thing that you pointed that out.
Regardless of the content, the presentation is very disorganized. It gives me the impression that these are schizophrenic ramblings, not a serious idea.
Thanks for the feedback!
To me this looks organized...
...which means I can't tell if something looks organized or not.
Which means if I were to write it again & thought the new version was "organized", it still wouldn't be organized.
If you're so sure my writing was bad, then you'd do a much better job writing it.
So if you don't mind, I'd love for you to write a more organized one. (Like genuinely I'd super appreciate it if you rewrote this better than my sloppy job, please do that)
Otherwise it'll stay super disorganized.
(Also just FYI, that was genuinely rude)
People here can be a bit blunt. Compare it to presenting to a room full of annoyingly honest scientists, their non-scientist nerd friends, and a few non-scientist nerds who aren't really friends with anyone and are tolerated because their critiques reveal actual flaws.
If that's the group you want to talk to, and your point would hold up to scrutiny if made well, I recommend asking a language model with memory disabled and in a fresh context to "think through the post's claims somewhat generously initially, but then rigorously, and then critique the post's claims and structure like an interested but skeptical LessWronger", and ask for "readability and structure advice as long as it's not in grammar or engagement optimization directions". Do not ask the model to rewrite your post; folks can usually tell, and it's against the rules. Instead, edit your post manually, then start a new conversation with the model and try again. If you have a post that the model reviews as well structured and doesn't have major critiques of, it's somewhat less likely to immediately fail on LessWrong.
It's hard to be both correct and novel. My suggested prompt there is a tall order; if it starts feeling too easy you might have messed up.
You will probably still get a negative to neutral reaction because most posts do. But less so.
Have you read the sequences or equivalent material by chance? I haven't, admittedly, but I know most things they contain. Recommend if not.
If it starts feeling too easy you might have messed up.
... it immediately felt too hard. :')
Also, since I don't have a good measure on what's organized & what's not, I genuinely think ME doing this would result in an even messier article with a ton of random edits scattered around that I think looks cleaner.
I just simply can't tell whether something is better than this current version, & thus if I tried to make a better version, I fundamentally couldn't tell if I'd made a better version or a worse one.
I guess what I'm trying to say is I'd super appreciate it if you tried to iteratively edit it a bit, to help this idea, which is worth doing, reach its full potential & not just be a messy article hinting at a good idea where everyone who reads it gets hung up on the messiness.
Thanks, super appreciate the help! :DD
There’s a real chance that in the next 10 years we’ll all be dead because this year we didn’t get our act together. For the past 2 years, I’ve been researching & working on risks to Humanity's future, & this paper is about How Humanity Wins.
There are so many issues out there, so many that could kill millions,
So many that DO kill millions,
Did you know there are trains carrying more explosive power than the Hiroshima bomb travelling around the United States right now that could tip over at any moment?
Apparently, the New World screwworm is a tiny fly larva that devours the flesh of 1 BILLION mammals and birds & some humans every year, and that's another problem I need to worry about.
Just this week, you probably heard a news story that would have shaken you to your core just 6 years ago.
I'm sick of it.
I can't stand it anymore!
It's all too much!
For all of us!
Humanity deserves better!
This project is the sunshine future, and that bullcrap we had to go through ends right here, right now!
See, all world leaders want to do what they think are good things. Their values are to do the most good. They just disagree on what the most good is. A lot. Putin might think it's "good" for Russia to be bigger, and the president of Ukraine might think it's good for Ukraine to stay independent.
Otherwise, one side wouldn’t want X, and they wouldn’t both work to get it.
So here’s my idea: What if we had an international summit to get world leaders to agree on values/morals?
Based on these fundamental laws of geopolitics, the world would be better in some deep ways if we did this simple summit!
It’s time to live in harmony as a shared humanity under a common goal to always do the moral, just thing. That’s no longer just a dream. It is now our future, and it’s a real thing that just needs an international summit.
All you need to do is make it happen!
(So that's what I'm asking for! Can you help me save the world by helping me figure out how to make this happen? I tried a lot of stuff with the UN, but it's just so unclear who to email there, and currently, Oct 2025, I'm sending this idea to the G20!)
The rest is just extra about why in the world this crazy idea would actually work.
(I spoke to experts at the UN, and they think it's a good idea.)
Only read it if you're not convinced.
You already read the important part.
Before I continue, consider reading "How Minds Change" (specifically chapters 8-9). I read it; it's an incredible book that gives you the actual, applicable psychology and directions on how to convince people.
The main argument against this project is:
"There's no way we could convince Putin to be moral." That's true until you read "How minds change".
Ok, now back to the paper:
Now, what came to mind when I first thought of this was “Why would world leaders like Putin ever go to this kind of summit? Wouldn’t they just reject it?”, and I almost gave up on the idea,
but then I thought about it some more, and I realized there are several fundamental reasons why, no matter what, any world leader would want to both
Here’s the case I would make to Putin if he wasn’t going to the summit:
And hey, if you hear a convincing case to change your values, that means you’ll get even better values! That’s a free improvement! So listen for whether anyone makes a good case that some other goal is more moral.”
Every other country that attended and advanced to more moral values will hate you,
your people will hate you, and you’ll hate yourself. Don’t be that person.”
These points convinced me, so I figured I’d try bringing the idea to some actual experts.
And I did! Some at the UN, some international relations experts outside the UN, and they all said it’s a generally good idea!
Think on my idea a little while, and if you can’t think of any ways it could cause harm, go ahead & do it!
To a moral humanity!
Yours truly,
Wes Reisen