ooooooh actual Hamming spent tens of minutes asking people about the most important questions in their field and helping them clarify their own judgment, before asking why they weren't working on this thing they clearly valued and spent time thinking about. That is pretty different from demanding strangers at parties justify why they're not working on your pet cause.
He also didn't ask them both questions on the same day.
EA/rationality has this tension between valuing independent thought and the fact that most original ideas are stupid. But the point of independent thinking isn't necessarily coming up with original conclusions. It's that no one else can convey their models fully, so if you want a model with fully fleshed-out gears you have to develop it yourself.
There's a thing in EA where encouraging someone to apply for a job or grant gets coded as "supportive", maybe even a very tiny gift. But that's only true when [chance of getting job/grant] × [value of job/grant over next best alternative] > [cost of applying].
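A minimal sketch of that inequality, with purely hypothetical numbers (the function and figures here are mine, for illustration):

```python
# Illustrative only: every number below is made up.
def encouragement_is_a_gift(p_success: float,
                            value_over_alternative: float,
                            cost_of_applying: float) -> bool:
    """Encouraging an application is only a gift when expected value beats cost."""
    return p_success * value_over_alternative > cost_of_applying

# A long-shot grant: 3% chance, worth $20k over the next best option,
# but the application burns ~$2k worth of the applicant's time.
print(encouragement_is_a_gift(0.03, 20_000, 2_000))  # False: the nudge has negative EV
```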
One really clear case was when I was encouraged to apply for a grant my project wasn't a natural fit for, because "it's quick and there are few applicants". This seemed safe, since the deadline was in a few hours. But in those few hours the number of applications skyrocketed (I want to say 5x, but my memory is shaky), presumably because I wasn't the only person the grantmaker encouraged. I ended up wasting several hours of my own and my co-founders' time before dropping out, because the project really was not a good fit for the grant.
[if the grantmaker is reading this and recognizes themselves: I'm not mad at you personally].
I've been guilty of this too, defaulting to encouraging people to try for something without considering the costs of making the attempt, or the chance of success. It feels so much nicer than telling someone "yeah you're probably not good enough".
A lot of EA job postings encourage people t...
I have a friend who spent years working on existential risk. Over time his perception of the risks increased, while his perception of what he could do about them decreased (and the latter was more important). Eventually he dropped out of work in the normal sense to play video games, because the enjoyment was worth more to him than what he could hope to accomplish with regular work. He still does occasional short-term projects, when they seem especially useful or enjoyable, but his focus is on generating hedons in the time he has left.
I love this friend as a counter-example to most of the loudest voices on AI risk. You can think p(doom) is very high and have that be all the more reason to play video games.
I don't want to valorize this too much, because I don't want retiring to play video games to become the cool new thing. The admirable part is that he did his own math and came to his own conclusions in the face of a lot of social pressure to do otherwise.
As of October 2022, I don't think I could have known FTX was defrauding customers.
If I'd thought about it I could probably have figured out that FTX was at best a casino, and that I should think seriously before taking their money or encouraging other people to do so. I think I failed in an important way here, but I also don't think my failure really hurt anyone, because I am such a small fish.
But I think in a better world I should have had the information that would lead me to conclude that Sam Bankman-Fried was an asshole who didn't keep his promises, and that this made it risky to make plans that depended on him keeping even explicit promises, much less vague implicit commitments. I have enough friends of friends who have spoken out since the implosion that I'm quite sure that in a more open, information-sharing environment I would have gotten that information. And if I'd gotten that information, I could have shared it with other small fish who were considering uprooting their lives based on implicit commitments from SBF. Instead, I participated in the irrational exuberance that probably made people take more risks on the margin, and left them more vulnerable to...
None of my principled arguments against "only care about big projects" have convinced anyone, but in practice Google reorganized around that exact policy ("don't start a project unless it could conceivably have 1b+ users; kill it if it's ever not on track to reach that"), and they haven't home-grown an interesting thing since.
My guess is the benefits of immediately aiming high are overwhelmed by the costs of less contact with reality.
GET AMBITIOUS SLOWLY
Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat to me, and I consider myself one of the lucky ones. The worst case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems, but you lack the tools to meaningfully follow up.
Faced with big dreams but unclear ability to enact them, people have a few options.
The first three are all very costly, especially if you repeat the cycle a few times.
My preferred version is the ambition snowball, or "get ambitious slowly". Pick something b...
Much has been written about how groups tend to get more extreme over time. This is often attributed to evaporative cooling, but I think there's another factor: it's the only way to avoid the geeks->mops->sociopaths death spiral.
An EA group of 10 people would really benefit from one of those people being deeply committed to helping people but hostile to the EA approach, and another person who loves spreadsheets but is indifferent to what they're applied to. But you can only maintain the ratio that finely when you're very small. Eventually you need to decide if you're going to ban scope-insensitive people or allow infinitely many, and lose what makes your group different.
"Decide" may mean consciously choose an explicit policy, but it might also mean gradually cohere around some norms. The latter is more fine-tuned in some ways but less in others.
Having AI voices read my drafts back to me feels like it's seriously leveled up my writing. I think the biggest, least replaceable feature is that I'm less likely to leave gaps in my writing: places where something is obvious to me but I still need to spell it out. It also catches bad transitions, and I suspect it's making my copy editor's job easier.
Are impact certificates/retroactive grants the solution to grantmaking corrupting epistemics? They're not viable for everyone, but for people like me who:
They seem pretty ideal.
So why haven't I put more effort into getting retroactive funding? The retroactive sources tend to be crowdsourced. Crowdfunding is miserable in general, and leaves you open to getting very small amounts of money, which feels worse than none at all. Right now I can always preserve the illusion that I would get more money, which seems stupid. In particular, even if I could get more money for a past project by selling it better and doing some follow-up, that time is almost certainly better spent elsewhere.
A person's skill level has a floor (what they can do with minimal effort) and a ceiling (what they can do with a lot of thought and effort). Ceiling raises come from things we commonly recognize as learning: studying the problem, studying common solutions. Floor raises come from practicing the skills you already have, to build fluency in them.
There's a rubber-band effect: the farther your ceiling is from your floor, the more work you have to put in to raise the ceiling further. At a certain point the efficient thing to do is to grind until you have raised your floor, so that further ceiling raises are cheaper, even if you only care about peak performance.
My guess for why that happens is your brain has some hard constraints on effort, and raising the floor reduces the effort needed at all levels. E.g. it's easier to do 5-digit multiplication if you've memorized 1-digit times tables.
My guess is the pots theory of art works best when a person's skill ceiling is well above their floor. This is true because it means effort is likely the limiting reagent, the artist will have things to try rather than flailing at random, and they will be able to assess how good a given pot is.
It's weird how hard it is to identify what is actually fun or restorative, vs. supposed to be fun or restorative, or used to be fun or restorative but no longer is. And "am I enjoying this?" should be one of the easiest questions to answer, so imagine how badly we're fucking up the others.
There's a category of good things that can only be reached with some amount of risk, and that are hard to get out of once you start. All of romance risks getting your heart broken. You never have enough information to know a job will always and forever be amazing for you. Will anti-depressants give you your life back, or dull your affect in hard-to-detect ways?
This is hard enough when the situation is merely high variance with incomplete information. But often the situations are adversarial: abusive partners and jobs camouflage themselves. Or the partner/job might start out good and get bad, as their finances change. Or they might be great in general but really bad for you (apparently other people like working for Google? no accounting for taste).
Or they might be genuinely malicious, telling you the issue is temporary, or that their ex wasn't a good fit but you are.
Or they might not be malicious, it might genuinely be the situation, but the situation isn't going to get better so it's damaging you badly.
You could opt out of the risk, but at the cost of missing some important human experiences and/or food.
How do you calculate risks when the math is so obfuscated?
My sink is way emptier when my to-do list item is "do a single dish" than "do all the dishes".
A repost from the discussion on NDAs and Wave (a software company). Wave was recently publicly revealed to have made severance dependent on non-disparagement agreements, cloaked by non-disclosure agreements. I had previously worked at Wave, but negotiated away the non-disclosure agreement (but not the non-disparagement agreement).
But my guess is that most of the people you sent to Wave were capable of understanding what they were signing and thinking through the implications of what they were agreeing to, even if they didn't actually have the conscientious...
Problems I am trying to figure out right now:
1. breaking large projects down into small steps. I think this would pay off in a lot of ways: lower context-switching costs, work that's generally easier, greater feelings of traction and satisfaction instead of "what the hell did I do last week? I guess not much". This is challenging because my projects are, at best, ill-defined knowledge work, and sometimes really fuzzy medical or emotional work. I strongly believe the latter have paid off for me on net, but individual actions are often lottery tickets with payouts...
"Do or Do Not: There is No Try"
Like all short proverbs, each word is doing a lot of work, and you can completely flip the meaning by switching between reasonable definitions.
I think "there is no try" often means "I want to gesture at this but am not going to make a real attempt" in sentences like "I'll try to get to the gym tomorrow" and "I'll try to work on my math homework tonight".
"there is no try" means "I am going to make an attempt at this but it's not guaranteed to succeed" in sentences like "I'm going to try to bench 400 tomorrow", "I'm t... (read more)
People talk about sharpening the axe vs. cutting down the tree, but chopping wood and sharpening axes are things we know how to do and know how to measure. When working with more abstract problems there's often a lot of uncertainty in:
Actual axe-sharpening rarely turns into intellectual masturbation be...
Some things are coordination problems. Everyone* prefers X to Y, but there are transition costs and people can't organize to get them paid.
Some things are similar to coordination problems, plus the issue of defectors. Everyone prefers X (no stealing) to Y (constant stealing), but too many people prefer X' (no one but me steals). So even if you achieve X, you need to pay maintenance costs (see the toy payoff sketch below).
Sometimes people want different things. These are not coordination problems.
Sometimes people endorse a thing but don't actually want it. These are not coordination pro...
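To make the defector case concrete, here's a toy payoff table. The function and every number in it are invented for illustration, not taken from anywhere:

```python
# Toy model: payoff to one person in a 10-person group, given whether they
# steal and how many of the other 9 steal. All numbers are made up.
def payoff(i_steal: bool, n_other_thieves: int) -> int:
    base = 100 - 20 * n_other_thieves      # everyone suffers from surrounding theft
    return base + (30 if i_steal else 0)   # but stealing yourself still pays

print(payoff(False, 0))  # X  (no one steals):   100
print(payoff(True, 0))   # X' (only I steal):    130, so X is unstable
print(payoff(True, 9))   # Y  (everyone steals): -50, worse than X for everyone
```

Everyone prefers X to Y, but each individual prefers X', which is exactly why X needs ongoing maintenance (enforcement) spending to survive.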
That's a pretty reasonable guess, although I wasn't quite that dumb.
I'm temporarily working a full-time gig. The meetings are quite badly run. People seemed very excited when I introduced the concept of memo meetings, but it kept not happening, or the organizer would implement it badly. People (including the organizer) said nice things about the concept, so I assumed this was a coordination problem, or at least "everyone wants the results but is trying to shirk".
But I brought it up again when people were complaining about the length of one part of a meeting, and my boss said "no one likes reading and writing as much as you". Suddenly it made sense: people weren't failing to generate the activation energy for a thing they wanted, they were avoiding a thing they didn't want but endorsed (or I pressured them into expressing more enthusiasm than they actually felt, but it felt like my skip boss genuinely wanted to at least try it, and god knows they were fine shooting down other ideas I expressed more enthusiasm over).
So the problem was that I took at face value people's statements that they wanted memo meetings but kept getting distracted by something urgent, when actu...
I have a new project for which I actively don't want funding for myself: it's too new and unformed to withstand the pressure to produce results for specific questions by specific times*. But if it pans out in ways other people value, I wouldn't mind retroactive payment. This seems like a good fit for impact certificates, which is a tech I vaguely want to support anyway. Someone suggested that if I was going to do that I should mint and register the cert now, because that norm makes IC markets more informative, especially about the risk of very negative proje...
I have friends who, early in EA or rationality, did things that look a lot like joining Nonlinear. 10+ years later they're still really happy with those decisions. Some of that is selection effects of course, but I think some of it is that the reasons they joined were very different.
People who joined early SingInst or CEA by and large did it because they'd been personally convinced this group of weirdos was promising. The orgs maybe tried to puff themselves up, but they had almost no social proof. Whereas nowadays saying "this org is EA/rationalist" gives you a b...
Sometimes different people have different reactions to the same organization simply because they want different things. If you want X, you will probably love the organization that pushes you towards X, and hate the organization that pushes you away from X.
If this is clearly communicated at an interview, the X person probably will not join the anti-X organization. So the problem is when they figure it out too late, when changing jobs again would be costly for them.
And of course it is impossible to communicate literally everything, and sometimes things change. I think a reasonable rule of thumb would be to communicate the parts where you differ significantly from the industry standard. Which leads to the question of what the industry standard is. Is it documented explicitly anywhere? There does seem to be a consensus, if you go to e.g. Workplace Stack Exchange, about what is normal and what is not.
(...getting to the point...)
I think the "original weirdos" communicated their weirdness clearly.
Compared to that, the EA community is quite confusing for me (admittedly, an outsider). On one hand, they handle tons of money, write grant applications, etc. On the other hand, they sometim...
In the spirit of this comment on lions and simulacra levels, I present: simulacra and Halloween decorations.
Level 1: this is actually dangerous. Men running at you with knives, genuinely poisonous animals.
Level 2: this is supposed to invoke genuine fear, which dissipates quickly when you realize it's fake. Fake poisonous spiders that are supposed to look real, a man who jumps out with a fake knife but doesn't stab you, monsters in media that don't exist but hit primal fear buttons in your brain.
Level 3: reminds people of fear without eve...
I know we hate the word content, but sometimes I need a single word to refer to history books, long-running horror podcasts, sitcoms, a Construction Physics blog post, and theme-park-analysis YouTube essays. And I don't see any other word volunteering.
Let's say there's a drug that gives people 20% more energy (or just cognitive energy). My intuition is that if I gave it to 100 people, I would not end up with 120 people's worth of work. Why?
I'm convinced people are less likely to update when they've locked themselves into a choice they don't really want.
If I am excited to go to Six Flags and get a headache that will ruin the rollercoasters for me, I change my plans. But if I'm going out of FOMO, or to make someone else happy, getting a headache doesn't trigger an update to my plans. The utilitarian math on this could check out, but my claim is that's not necessary: once I lock myself in, I stop paying attention to pain signals and can't tell whether I should leave or not.
AFAICT, for novel independent work:
genuine backchaining > plan-less intuition or curiosity > fake backchaining.
And most attempts to move people from intuition/curiosity to genuine backchaining end up pushing them towards fake backchaining instead. This is bad because curiosity leads you to absorb a lot of information that will either naturally refine your plans without conscious effort or support future backchaining, while fake backchaining makes you resistant to updating, so it's a very hard state to leave. Also, curiosity is fun and fake backch...
I feel like it was a mistake for Hanson to conflate goodharting, cooperative coordination, accurate information transfer, and extractive deception.
[good models + grand vision grounded in those models] > [good models + modest goals] > [mediocre model + grand vision]
There are lots of reasons for this, but the main one is: good models imply skill at model-building, and thus carry a measure of self-improvement. A grand vision without a good model implies skill at building grand visions unconnected to reality, which induces more error.
[I assume we're all on board that a good, self-improving model combined with a grand vision is great, but in short supply]
Ambition snowballs/Get ambitious slowly works very well for me, but some people seem to hate it. My first reaction is that these people need to learn to trust themselves more, but today I noticed a reason I might be unusually suited for this method.
Two things keep me from aiming at bigger goals: laziness and fear. Primarily fear of failure, but also fear of doing uncomfortable things. I can overcome this on the margin by pushing myself (or having someone else push me), but that takes energy, and the amount of energy never goes down the whole time I'm working...
I think it's weird that saying a sentence containing a falsehood that doesn't change its informational content is sometimes considered worse than saying nothing, even if it leaves the person better informed than they were before.
This feels especially weird when the "lie" creates a blank space on the map that you are capable of filling in (e.g. changing irrelevant details in an anecdote to anonymize a story with a useful lesson), rather than creating a misrepresentation on the map.