The pithier (and thus easier to remember) version: Reflection -> What just happened? Abstraction -> What does that mean? Experimentation -> Test it!
I don't think you need to specify "have the experience", since "Test it" will already get you to do something.
I agree with your general point, but I have a different specific suggestion: the things that come to mind as "moralizing" (itself a negatively connoted term, to me) are not the same as talking about morality, nor even passionately talking about morality. The thing that is counterproductive is trying to pressure the other person into feeling bad about their behavior with impassioned pleas, as opposed to getting more buy-in from them before making your defense.
Alice: "I don't see the problem with telling white lies. Why not lie about the size of the fish I caught for the sake of a story?" Bob: "I care a lot about honesty and think white lies are bad. Can I describe what it looks and feels like to me?" Alice: "Sure, go ahead." Bob: [passionate defense of honesty]
Imagine if Bob instead replied:
Bob: "Excuse me? You want to sabotage my world model? Do you want to lobotomize me too? What if I go fishing, expecting to catch big fish, and then am disappointed when it's much harder than you made it out to be? And once you've been caught lying, I can't trust you! Why should I even care if you compliment me, if I know you'll sometimes lie when asked whether you think I look fat in this shirt?"
That probably won't go as well.
"Do you constantly look back and ask "How could I have thought that faster?"
I frequently ask myself that. The problem for me is that a lot of the time I either don't know, or am too lazy to think harder about figuring it out. As an example, it's very useful to do this after figuring out how to prove a math theorem (or after looking at the answer when I fail to figure it out), both because it makes you faster next time and because it has you understand the proof at a deeper level, so that you can compress it.
However, the feeling of reward from understanding often kicks in at a lower level than the one I should be shooting for, and so I'll fundamentally feel like it's not worth it to think harder. Compounding this is the fact that before you come up with an idea, it often feels like there's no way you'll be able to and that you're not making any progress (especially since sometimes there may not be any clever way to think about the proof that lets you understand and reproduce the whole thing easily).
I am trying to adjust my expectation of the chance that I'll come up with something if I try. However, that alone might not be motivating enough - it feels like looking for a jewel that you're not even sure is in this cave instead of going on to the next new problem.
Even if I fixed my laziness issue there, I'd still have the problem of often not knowing how to change my thinking to be faster next time.
I think "pair of pants" is a great name - it's very intuitive (it looks like a pair of pants!). If you want to go shorter you can just think "pants".
Well, yeah, dath ilan is populated by people whose median is Eliezer Yudkowsky - part of the whole premise is that the homeworld is made of people like that. The only problem with typical-minding is if lsusr is wrong about what he's like in different environments, what he was like as a kid, or what people who are human_range_of_variation_recentered different from him are like.
But this isn't how Green would see Green? A justification rooted in Blue and Black instrumental motives is not what's actually going on for Green. To the extent that I get something I like from Green, it's only to the extent that I think Green things are instrumentally useful - the way one of the other colors would see it. For example, I wouldn't wantonly cut down a rare big tree, but only for the same reason I don't make big irreversible decisions regarding rare artifacts without careful consideration. It's like dropping a quest item in a videogame to me.
If there's some reason I should take on Green's actual justifications, I don't think the post really explained it - I simply am not compelled by the feeling that the tree should be respected, and if I have some feeling in the Green direction that isn't just instrumental, it's very small. Telling me that some people have feelings they can't justify about why the tree should be respected, feelings that aren't about instrumental utility for beings that have qualia... is not very convincing.
This is (mostly) a crosspost of my (pending review? and so I can't link to it?) comment from the EA Forum, replying to a commenter also asking for actual uses of Shapley values.
The first real-world example that comes to mind... isn't about agents bargaining. Namely: statistical models. The idea is that you have some subparts that each contribute to the prediction and you want to know which are the most important, so you can calculate Shapley values ("how well does this model do if it only uses age and sex to predict life expectancy, but not race?", and so on for the other coalitions).
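For concreteness, here's a minimal sketch of that computation (the performance numbers are made up, and the names shapley_values / coalition_value are my own; this brute-force enumeration is only feasible for a handful of features, and practical attribution tools use approximations or model-specific shortcuts instead):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, coalition_value):
    """Exact Shapley values by brute force over all coalitions.

    players: list of player (here: feature) names.
    coalition_value: maps a frozenset of players to a number, e.g. how well
        the model predicts when restricted to just those features.
    """
    n = len(players)
    result = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = coalition_value(s | {i}) - coalition_value(s)
                total += weight * marginal
        result[i] = total
    return result

# Made-up performance numbers for a toy life-expectancy model:
performance = {
    frozenset(): 0.00,
    frozenset({"age"}): 0.50,
    frozenset({"sex"}): 0.10,
    frozenset({"age", "sex"}): 0.65,
}

print(shapley_values(["age", "sex"], lambda s: performance[s]))
# -> roughly {'age': 0.525, 'sex': 0.125}; they sum to the full model's 0.65.
```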
Here's a microeconomics Stack Exchange question that asks a similar thing to you. The only non-stats answer states that a bank used Shapley values to determine capital allocation in investments. It sounds like they didn't have a problem using a 'time machine', because they had the performance of the investments and so could simply evaluate what returns they would've gotten had they invested differently. But I haven't read it thoroughly, so for all I know they stopped using it soon after, or had some other way to evaluate counterfactuals, etc.
Also, the Lightcone (so, including you?) fundraising post mentioned trying to charge for half the surplus produced when setting Lighthaven prices (i.e. the 2-player special case of the Shapley value).
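Just to spell out that special case (writing $v(S)$ for the value coalition $S$ can produce on its own; the notation is mine):

$$\phi_{\text{you}} = v(\{\text{you}\}) + \tfrac{1}{2}\Big(v(\{\text{you},\text{them}\}) - v(\{\text{you}\}) - v(\{\text{them}\})\Big)$$

i.e. your BATNA plus half of the surplus the two of you create together, which is exactly the "charge for half the surplus" rule.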
Of course, the 2-player case is much easier than even the 3-player case, because you only need to know the other person's willingness to pay (that is, their value over BATNA) and can then estimate your own costs (in total, only one advantage-over-BATNA that doesn't just involve you needs to be determined). For 3 players you need 3*2 = 6 such comparisons, and for n players you need $\sum_{k=2}^{n} k\binom{n}{k}$ total comparisons (each player giving the benefit they get if that coalition occurred), of which $\sum_{k=2}^{n} \binom{n-1}{k-1}$ are your comparisons (which, to be clear, aren't trivial, but at least you know your own preferences and situation and don't have to ask others about them). The first sum is $n(2^{n-1}-1)$, which is faster than exponential growth, while the second sum is only $2^{n-1}-1$, which means that discounting the comparisons that are about the value you get doesn't make the asymptotics better. This suggests that even just the communication costs get pretty high pretty fast, unless you have a compact way to encode how much value you get out of the interactions (like in the bank example, where I think you only need to be told the individual performance history and can then compute the value in each investment counterfactual). So if there are nonlinear relationships between people (read: real life, most of the time), my intuition is that you are screwed?
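As a quick sanity check of that counting, under my assumption that each member of every coalition of size >= 2 reports the benefit they personally get from it:

```python
from itertools import combinations

def report_counts(n):
    """Count benefit reports needed for n players, and how many are 'yours'
    (made by player 0), assuming every member of each coalition of size >= 2
    reports the benefit they'd get from that coalition."""
    players = range(n)
    total = yours = 0
    for k in range(2, n + 1):
        for coalition in combinations(players, k):
            total += k              # each member reports once
            if 0 in coalition:
                yours += 1
    return total, yours

for n in (2, 3, 4, 8):
    total, yours = report_counts(n)
    print(n, total, yours, total - yours)
# n=2:    2 total,   1 yours,   1 from others
# n=3:    9 total,   3 yours,   6 from others
# n=4:   28 total,   7 yours,  21 from others
# n=8: 1016 total, 127 yours, 889 from others  (total = n * (2^(n-1) - 1))
```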
I think you need a decision theory (+ a theory of counterfactuals, which is basically going to have to be a theory of logical counterfactuals if you want to prevent extortion from Omega, and uhh good luck figuring that out) for this. We compare to counterfactuals where the other agents aren't destroying value for the sake of extortion because agents with a good decision theory will refuse to give in in those cases. Now let's imagine that Greg, for genuinely unrelated reasons, will lead to the project's downfall (say, he usually mows his lawn in the morning, and the project requires quiet at that time). If Greg chooses to not mow his lawn to help the project, I'd call that "participating in the coalition", and he should get some value from doing so. The point, after all, is to incentivize people to contribute to the project and also to be resistant to extortion.
They definitely aren't cartel-independent! Let's take your example and imagine that our "cartel" is Alice and Bob, forming a combined coalition player ("AlicoBob, sitting in a tree, k-i-s-s-i-n-g").
AlicoBob by themself can make $100. Bert by himself can make $0. AlicoBob + Bert can make $100. The synergy is $0, so the Shapley value is that AlicoBob gets everything and Bert goes home sad.
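As a quick check with the 2-player Shapley formula (standalone value plus half the synergy; the variable names are mine):

```python
# v(AlicoBob) = 100, v(Bert) = 0, v(AlicoBob + Bert) = 100
v_ab, v_bert, v_both = 100, 0, 100
synergy = v_both - v_ab - v_bert                   # 0
print(v_ab + synergy / 2, v_bert + synergy / 2)    # 100.0 0.0 -- Bert gets nothing
```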
However, Alice could've also formed a cartel with Bert, and then Bob would go home sad. So there's an equilibrium thing going on here, where both Bob and Bert want to be Alice's sole partner and leave the other one out. What I expect would happen in real life is that if, for some reason, one of the two got there first, they would naturally form a coalition with Alice and demand more concessions from the excluded party, while Alice would then also demand more concessions from Bob, because she can threaten to go and collude with Bert instead.
This is basically the "stand alone core" property that is sometimes logically impossible to satisfy, so I guess it's not too sad that the Shapley value doesn't live up to it.
I would not normally vote on this post, as the technique of "How could I have thought that faster?" seems extremely obvious to me - but it's also very important if you are not in fact trying to improve your thinking after being surprised (or after any other shortcoming). Since this post has 241 upvotes and multiple comments from people disagreeing with the framing (for example Said Achmiz, who is not an idiot!), I have review-upvoted this post.
I think the framing of "think it faster" is specifically something you should track, beyond just "What did I learn here, really?" (which I see as an important subskill that helps you figure out how to think it faster) or "How could I have thought that with less information?" (which I see as fully subordinate to thinking it faster, because you get later info later). By focusing on thinking it faster, you focus on cognitive strategies - on how you could've approached the issue differently with what you knew at the time, or on whether you should've put more or less stock in a certain kind of evidence.
The main problem with this post is that it gives no guide for how to go about learning how to think faster. Maybe you can't come up with a good guide, but for this sort of thing a list of examples is itself useful.
Here's a list of examples (they're too abstracted - next time I encounter something where I see how I could've thought it faster, I'll write it down, and when I've gotten a bunch I'll either post about it or add to this comment):
Say I am trying to prove a theorem. I will pursue a couple approaches, and then finally get something that works. When I look back on what I did, I will often find that I should've known better. Common problems:
Likewise, for more mundane life stuff:
Or when I'm surprised by e.g. the news or a factoid, I might've:
The best updates are more general, but unfortunately those are harder to discover.