Money can be thrown at my Patreon here: https://www.patreon.com/reflectivealtruist
Hey Ron, I am working on my own version of this (inspired by this Sequence), and would love to get your advice! Right now I am focusing on crowdfunding via dominant assurance contracts on Ethereum.
How did you / would you verify that someone did something? What are specific examples of that happening for different actions? What kinds of evidence can be provided? I have a fuzzy sense of what this looks like right now. The closest sites I can think of off the top of my head that involve verification are CommitLock (on which I made a successful $1000 commitment contract to get myself to take swim lessons) and DietBet, which requires a photo of your scale (it also has that 'split the pot' feature you mentioned, which I am pretty excited about).
I am very interested in practicing steelmanning/Ideological Turing Test with people of any skill level. I have only done it once conversationally and it felt great. I'm sure we can find things to disagree about. You can book a call here.
I’ve mentioned previously that I’ve been digging into a pocket of human knowledge in pursuit of explanations for the success of the traditional Chinese businessman. The hope I have is that some of these explanations are directly applicable to my practice.
Here’s my current bet: I think one can get better at trial and error, and that the body of work around instrumental rationality holds some clues as to how you can get better.
I’ve argued that the successful Chinese businessmen are probably the ones who are better at trial and error than the lousier ones; I posited that perhaps they needed fewer cycles to learn the right lessons to make their businesses work.
I think the body of research around instrumental rationality tells us how they do so. I’m thankful that Jonathan Baron has written a fairly good overview of the field in the fourth edition of Thinking and Deciding. And I think both Ray Dalio’s and Nassim Nicholas Taleb’s writings have explored the implications of some of these ideas. If I were to summarise the rough thrust of these books:
Don’t do trial and error where error is catastrophic.
Don’t repeat the same trials over and over again (aka don’t repeat the same mistakes over and over again).
Increase the number of trials you can do in your life. Decrease the length and cost of each trial.
In fields with optionality (i.e. your downside is capped but your upside is large), the more trials you take and the cheaper each trial is, the more likely you’ll eventually win. Or, as Taleb says: “randomness is good when you have optionality.”
Write down your lessons and approaches from your previous successful trials, so you may generalise them to more situations (Principles, chapter 5).
Systematically identify the factor that gives positive evidence, and vary that to maximise the expected size of the impact (Thinking and Deciding, chapter 7).
Actively look for disconfirming evidence when you’ve found an approach that seems to work. (Thinking and Deciding, chapter 7, Principles, chapter 3).
Wearing a mask in a pandemic. Not putting ALL of your money on a roulette wheel. Not balancing on a tightrope without a net between two skyscrapers unless you have extensive training. Not posting about controversial things without much upside. Not posting photos of meat you cooked to Instagram if you want to have good acclaim in 200 years when eating meat is outlawed. Not building AI because it's cool. Not falling in love with people who don't reciprocate.
The unknown-unknown risks that haven't been considered yet; not having enough slack dedicated to detecting them.
If you've gone on OkCupid for the past 7 years and still haven't got a date from it, maybe try a different strategy. If messaging potential tenants on a 3rd-party site doesn't work, try texting them. If asking questions on Yahoo Answers doesn't get good answers, try a different site.
Talk to 10x the number of people; message using templates and/or simple one-liners. Invest with Other People's Money if asymmetric upside. Write something for 5 minutes using Most Dangerous Writing App then post to 5 subreddits. Posting ideas on Twitter instead of Facebook, rationality content on LessWrong Shortform instead of longform. Yoda Timers. If running for the purpose of a runner's high mood boost, try running 5 times that day as fast as possible. Optimizing standard processes for speed.
Posting content to 10x the people 10x faster generally has huge upside (YMMV). Programming something useful, open-sourcing it, and sharing it.
Roam is good for this, perhaps SuperMemo. Posting things to social media and coming up with examples of the rules is also a good way of learning content. cough
Did messaging or posting to X different places work? Try 2X, 5X, etc. 1 to N after successfully going 0 to 1.
Stating assumptions strongly and clearly so they are disconfirmable, then setting a Yoda Timer to seek counter-examples of the generalization.
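The optionality point above (capped downside, large upside, lots of cheap trials) can be sketched with a quick simulation. All parameters here are invented purely for illustration:

```python
import random

def run_trials(n_trials, win_prob=0.05, cost=1.0, payoff=100.0, seed=0):
    """Simulate n_trials cheap, independent trials where the downside is
    capped (lose `cost`) and the upside is large (gain `payoff`).
    Returns the net outcome across all trials."""
    rng = random.Random(seed)
    net = 0.0
    for _ in range(n_trials):
        if rng.random() < win_prob:
            net += payoff  # rare big win
        else:
            net -= cost    # common small loss
    return net

def prob_at_least_one_win(n_trials, win_prob=0.05):
    """With independent trials, P(at least one win) = 1 - (1 - p)^n,
    which rises quickly as n grows."""
    return 1 - (1 - win_prob) ** n_trials
```

With a 5% per-trial win probability, the chance of at least one win rises from roughly 40% at 10 trials to over 99% at 100 trials, which is the sense in which more and cheaper trials make randomness work in your favour.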
Any updates on this in the past six months?
Mati, would you be interested in having a friendly and open (anti-)debate on here (as a new post) about the value of open information, both for life extension purposes and otherwise (such as Facebook group moderation)? I really support the idea of lifelogging for various purposes such as life extension, but I strongly disagree with the general stance that universal access to information is more or less always a public good.
Sure thing. What would you recommend for learning management?
(I count that as an answer to my other recent question too.)
Warning: TVTropes links
When should I outsource something I'm bad at vs leveling up at that skill?
How would you instruct a virtual assistant to help you with scheduling your day/week/etc?
Great post! It's like the "what if an alien took control of you" exercise but feels more playful and game-y. I started a Google doc to plan the month of April from Gurgeh's perspective.
See also: Outside.
Why does CHAI exclude people who don't have a near-perfect GPA? This doesn't seem like a good way to maximize the amount of alignment work being done. High GPA won't save the world; in fact, it selects for obedience to authority and years of status competition, which lead to poor mental health to work in, decreasing the total amount of cognitive resources being thrown at the problem.
(Hypothesis 1: "Yes, this is first-order bad but the second-order effect is we have one institutionally prestigious organization, and we need to say we have selective GPA in order to fit in and retain that prestige." [Translator's Note: "We must work with evil in order to do good." (The evil being colleges and grades and most of the economic system.)])
(Hypothesis 2: "GPA is the most convenient way we found to select for intelligence and conscientiousness, and those are the traits we need the most.")
(Hypothesis 3: "The university just literally requires us to do this or we'll be shut down.")
Won't somebody think of the grad students!