Listening to people demand more specifics from If Anyone Builds It, Everyone Dies gives me a similar feeling to when a friend’s start-up was considering a merger.
Friend got a bad feeling about this because the other company clearly had different goals, was more sophisticated than them, and had an opportunistic vibe. Friend didn't know specifically how the other company would screw them, but that was part of the point: their company wasn't sophisticated enough to defend itself from the other one.
Friend fought a miserable battle with their coworkers over this. They were called chicken little because they couldn’t explain their threat model, until another employee stepped in with a story of how they'd been outmaneuvered at a previous company in exactly the way friend feared but couldn't describe. Suddenly, co-workers came around on the issue. They ultimately decided against the merger.
“They’ll be so much smarter I can’t describe how they’ll beat us” can feel like a shitty argument because it’s hard to disprove, but sometimes it’s true. The debate has to be about whether a specific They will actually be that smart.
If I told someone 'I bet Stockfish could beat you at chess,' I think it is very unlikely they would demand that I provide the exact sequence of moves it would play.
I think the key differences are that (1) the adversarial nature of chess is a given (a company merger could or should be collaborative), and (2) people know it is possible to 'win' chess; forcing a stalemate is not easy. In noughts and crosses, getting a draw is pretty easy: no matter how smart the computer is, I can at least tie. For all I (or most people) know, company mergers that become adversarial might look more like noughts and crosses than chess.
So I think what people actually want is not so much a sketch of how they will lose, but a sketch of the facts that (1) it is an adversarial situation and (2) it is very likely someone will lose (not tie). At that point you are already pretty worried (50% loss chance) even if you think your enemy is no stronger than you.
When I am well rested and exercised and just had a great lunch (but not too much) with a good friend, there is no social or emotional issue I can't handle. Everything is just a problem to be solved, and I'm sure I can do it.
For a long time I worked on increasing my problem solving under perfect conditions, and I succeeded... under perfect conditions. But this was not very helpful for when I was feeling hungry, or lonely, or, god forbid, underslept, which is when most problems happened.[1] Worse, my fragile skills gave me the impression I was very good at handling [problem], so when I struggled I would either have to violate my vision for myself or blame the other person for not cooperating with my excellent social skills.
I can only really know that this problem existed in me, but looking around my social circle, I see people who sure look like they're experiencing the exact same problem. I'd like to nudge the culture towards focusing more on "what you can reliably do under strain?" rather than "what's the tallest tower you can build?"
The more complicated version would say that working on the skill ceiling can pay dividends in your skill floor, and maybe I was in t
In the spirit of reverse advice, just today I found myself struggling with the thought of "I felt surprisingly good after that morning run, somehow it feels like my body has gotten better at actually enjoying exercise - but I feel reluctant to start doing more of it, because what if I only end up feeling good because I'm exercising and then I might lose those gains if I ever get a disability or become too old and frail to go on runs".
This opposite extreme doesn't feel very reasonable either. If I feel good and am more capable, then that's not "wasted" just because I'm unable to sustain it indefinitely, and a refusal to ever have perfect conditions is not a good way to build capacity for the non-perfect conditions.
For personal relationships, mitigating my worst days has been more important than improving the average.
For work, all that's really mattered is my really good days, and it's been more productive to try and invest time in having more great days or using them well than to bother with even the average days.
According to a friend of mine in AI, there's a substantial contingent of people who got into AI safety via Friendship is Optimal. They will only reveal this after several drinks. Until then, they will cite HPMOR. Which means we are probably overvaluing HPMOR and undervaluing FIO.
But did it inspire them to try to stop CelestAI or to start her? I guess you might need some more drinks for that one...
I tried to invite Iceman to LessOnline, but I suspect he no longer checks the old email associated with that account. If anyone knows up to date contact info, I’d appreciate you intro-ing us or just letting him know we’d love to have him join.
I think it's also "My Little Pony Fanfics are more cringe than Harry Potter fanfics, and there is something about the combo of My Little Pony and AIs taking over the world that is extra cringe."
You will always oversample from the most annoying members of a class.
This is inspired by recent arguments on Twitter about how vegans and poly people "always" bring up those facts. I contend that it's simultaneously true that most vegans and poly people are not judgmental, and that it doesn't matter, because they're not who people remember. Omnivores don't notice the 9 vegans who quietly ordered an unsatisfying salad, only the vegan who brought up factory farming conditions at the table. Vegans who just want to abstain from animal products remember the omnivore who ordered the veal on purpose and made little bleating noises.
And then it spirals. A mono person who had an interaction with an aggro poly person will be quicker to hear judgement in the next poly person's tone, and vice versa. This is especially bad because lots of us are judging others a little. We're quiet about it, we place it in context instead of damning people for a single flaw, but we do exercise our right to have opinions. Or maybe we're not judging the fact, just the logistical impact on us. It is pretty annoying to keep your mouth shut about an issue you view as morally important or a claim on your time, only to have someone demand you placate them about their own choices.
AFAICT this principle covers every single group on earth. Conservatives hear from the most annoying liberals. Communists hear from the most annoying libertarians. Every hobby will be publicly represented by its members who are least deterred by an uninterested audience.
Every hobby will be publicly represented by its members who are least deterred by an uninterested audience.
I'd distinguish between oversampling the annoying members of a class (yes), and a class being publicly represented by its most annoying members (not necessarily). A class that's non-evangelical, that actively strategizes on how to control its evangelizers so that they'll be less annoying, or that has a limited moral component, will tend not to establish an annoying public image.
Consider Mormons. They're intensely moral and highly evangelical, but they have established a careful approach to evangelism that lets them do an enormous amount of it while having its public image be nothing worse than a couple of formally dressed young men politely knocking on your door.
Jews are also moral, but they do not attempt to convert non-Jews. What Jews often find intensely annoying (to say the least) about other Jews is when more conservative Jews tell typically less conservative Jews that they're "not really Jewish" (i.e. because they don't have an unbroken maternal chain of Jewish ancestry, even if they have been going to synagogue their entire life, etc).
Gardeners...
I mean, I'm pretty sure animosity towards rationalists on Hacker News is older than the existence of OpenAI and probably even DeepMind. Also most people on Hacker News don't work in AI. So I don't really know why this hypothesis is coming to mind, I don't think it's relevant for most of what's gone on.
I'd be more inclined to put it down to Hacker News having many standard online pathologies for bullying easy targets, and rationalists historically being a lot of weird and outcast kinds of people, along with some strains of anti-intellectualism in the tech/startup world.
This is one of those things where there's a lot of space between the central example of something and peripheral members, and the equivocation causes a lot of stress.
The central example of being judgmental involves being loud about the other person being bad in full generality, for whatever thing you're judging. Making a judgement about a particular action, quietly, while still fully respecting the other party as a person doing the best they can, is a peripheral example of being judgmental.
Sometimes people confuse the latter for the former, or feel entitled to the judger's good opinion on every aspect of themselves. Sometimes they aren't confused but use confusing phrasing to get people on their side.
"Do you want to vent or to problem solve?" was useful tech when it was invented but I hate it now. Here's why:
I have a model that:
Most of medicine focuses on problems with root causes. Even if you go to alternative modalities, they usually sell themselves as being better at finding root causes. But sometimes there either is no root cause, or it's not directly fixable, and you can still get tremendous gains by moving yourself to a better equilibrium.
EA organizations frequently ask for people to run criticism by them ahead of time. I’ve been wary of the push for this norm. My big concerns were that orgs wouldn’t comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data.
I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic. This doesn’t quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.
Of those 14: 10 had replied by the start of the next day. More than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive.
It’s hard to say how sending an early draft changed things. Austin Chen joked about being anxious because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard fleshing out the positive mentions ahead of time). Turns out they were fine, but then I was w...
Sometimes people deliberately fill their environment with yes-men and drive out critics. Pointing out what they're doing doesn't help, because they're doing it on purpose. However there are ways well intentioned people end up driving out critics unintentionally, and those are worth talking about.
The Rise and Fall of Mars Hill Church (podcast) is about a guy who definitely drove out critics deliberately. Mark Driscoll fired people, led his church to shun them, and rearranged the legal structure of the church to consolidate power. It worked, and his power was unchecked until the entire church collapsed. Yawn.
What's interesting is who he hired after the purges. As described in a later episode, his later hiring was focused on people who were executives in the secular world. These people were great at executing on tasks, but unopinionated about what their task should be. Whatever Driscoll said was what they did.
This is something a good, feedback-craving leader could have done by accident. Hiring people who are good at the tasks you want them to do is a pretty natural move. But I think the speaker is correct (alas I didn't write down his name) that this is anti-correlated at the ta...
I somehow went the entire research project without seeing a picture of Mark Driscoll. Wow, what a strong yet trustworthy face
however some of his younger, unbearded photos look super douchey, can't see why anyone would follow him.
ooooooh actual Hamming spent 10s of minutes asking people about the most important questions in their field and helping them clarify their own judgment, before asking why they weren't working on this thing they clearly valued and spent time thinking about. That is pretty different from demanding strangers at parties justify why they're not working on your pet cause.
reposting comment from another post, with edits:
re: accumulating status in hope of future counterfactual impact.
I model status-qua-status (as opposed to status as a side effect of something real) as something like a score for "how good are you at cooperating with this particular machine?". The more you demonstrate cooperation, the more the machine will trust and reward you. But you can't leverage that into getting the machine to do something different- that would immediately zero out your status/cooperation score.
There are exceptions. If you're exceptionally strategic you might make good use of that status by e.g. changing what the machine thinks it wants, or co-opting the resources and splintering. It is also pretty useful to accumulate evidence that you're a generally responsible adult before you go off and do something weird. But this isn't the vibe I get from people I talk to with the 'status then impact' plan, or from any of 80K's advice. Their plans only make sense if either that status is a fungible resource like money, or you plan on cooperating with the machine indefinitely.
So I don't think people should pursue status as a goal in and of itself, especially if there isn't a clear sign for when they'd stop and prioritize something else.
I was fired from my first job out of college, and in retrospect that was a gift. It taught me that new jobs were easy to get (as a programmer in the late 00s) and took away my fear of job hunting, which otherwise would have been enormous. I watched so many programmer friends stay in miserable jobs when they had a plethora of options, because job hunting was too scary. Being fired early rescued me from that.
Everyone who waits longer than me to publicly share their ideas is a coward, afraid to expose their ideas to the harsh light of day. Everyone who publicly shares their ideas earlier than me is a maniac, wasting other people's time with stream-of-consciousness bullshit.
Brandon Sanderson is a bestselling fantasy author. Despite mostly working with traditional publishers, there is a 50-60 person company formed around his writing[1]. This podcast talks about how the company was formed.
Things I liked about this podcast:
The only non-Sanderson content I found was a picture book from his staff artist.
HEROIC/REACTIVE VS RESPONSIBLE/PROACTIVE AGENCY
A few months ago, Twitter's big argument was about this AITA, in which a woman left a restaurant to buy ranch dressing. Like most viral AITAs this is probably fake, but the discourse around it is still revealing. The arguments were split between "such agency! good for her for going after what she wants" and "what is she, 3?". I am strongly on the side of people doing what they want with their own food, but in this case I think the people praising her have missed the point, and the people criticizing her have focused on the wrong thing.
I think it's weird but harmless to drown all your food in ranch dressing. But it is, at best, terribly rude to leave a date for 20 minutes to run an errand. If it is so important to you to have ranch on all your food, either check with the restaurant ahead of time or just bring a small bottle by default.
So this woman is agentic in the sense of "refusing to accept the environment as it is, working to bring it more in line with her preferences". But it's a highly reactive form of agency that creates a lot of negative externalities.
I see this a lot in the way rationalists talk about agency. What...
Abstract issues raised by the Nonlinear accusations and counter-accusations
EA/rationality has this tension between valuing independent thought, and the fact that most original ideas are stupid. But the point of independent thinking isn't necessarily coming up with original conclusions. It's that no one else can convey their models fully so if you want to have a model with fully fleshed-out gears you have to develop it yourself.
Retractions should get more votes than other types of posts. It is good to incentivize retractions IMHO.
GET AMBITIOUS SLOWLY
Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst case scenario is big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.
Faced with big dreams but unclear ability to enact them, people have a few options.
The first three are all very costly, especially if you repeat the cycle a few times.
My preferred version is ambition snowball or "get ambitious slowly". Pick something b...
Why does Hollywood make movies "based on" books that have extraordinarily little to do with the book? Not just simplifying for time or the lowest common denominator, but having nothing in common besides a few names and a title?
According to Brandon Sanderson (link lost) and some other guy (link very lost), this comes about when a writer has an original script. Original scripts written on spec barely get purchased in Hollywood (although they do get commissioned), especially from young writers. But book adaptations do get purchased. So writers or producers will find a book that has an element in common with their story, buy the rights, and then write a script with the story they actually wanted.
It is also fairly common for directors/writers to use a book as an inspiration but not care about the specific details, because they want to express their own artistic vision. Hitchcock refused to adapt books that he considered 'masterpieces', since he saw no point in trying to improve them. When he adapted books (such as Daphne du Maurier’s The Birds) he used the source material as loose inspiration and made the films his own.
...François Truffaut: Your own works include a great many adaptations, but mostly they are popular or light entertainment novels, which are so freely refashioned in your own manner that they ultimately become a Hitchcock creation. Many of your admirers would like to see you undertake the screen version of such a major classic as Dostoyevsky’s Crime and Punishment, for instance.
Alfred Hitchcock: Well, I shall never do that, precisely because Crime and Punishment is somebody else’s achievement. There’s been a lot of talk about the way in which Hollywood directors distort literary masterpieces. I’ll have no part of that! What I do is to read a story only once, and if I like the basic idea, I just forget all about the book and start to create cinema. Today I woul
There's a thing in EA where encouraging someone to apply for a job or grant gets coded as "supportive", maybe even a very tiny gift. But that's only true when [chance of getting job/grant] x [value of job/grant over next best alternative] > [cost of applying].
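The inequality above can be sketched with made-up numbers (every value here is hypothetical, chosen only to illustrate the shape of the trade-off, not to describe any real job or grant):

```python
# All numbers are hypothetical: encouragement to apply is only a
# "gift" when the expected value of applying exceeds its cost.
p_get = 0.05              # chance of getting the job/grant
value_over_alt = 40.0     # value (hours-equivalent) over next best alternative
cost_of_applying = 6.0    # hours spent on the application

expected_value = p_get * value_over_alt  # 2.0
is_supportive = expected_value > cost_of_applying
print(expected_value, is_supportive)  # here, encouragement isn't a gift
```

With these numbers the applicant loses four hours in expectation, so the "supportive" nudge is actually a small tax.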
One really clear case was when I was encouraged to apply for a grant my project wasn't a natural fit for, because "it's quick and there are few applicants". This seemed safe, since the deadline was in a few hours. But in those few hours the number of applications skyrocketed (I want to say 5x, but my memory is shaky), presumably because I wasn't the only person the grantmaker encouraged. I ended up wasting several hours of my and my co-founders' time before dropping out, because the project really was not a good fit for the grant.
[if the grantmaker is reading this and recognizes themselves: I'm not mad at you personally].
I've been guilty of this too, defaulting to encouraging people to try for something without considering the costs of making the attempt, or the chance of success. It feels so much nicer than telling someone "yeah you're probably not good enough".
A lot of EA job postings encourage people t...
Why ketamine is always used with an adjunct when anesthetizing animals, but often without in humans:
As a follow up on my previous poll: If you've worked closely with someone who used stimulants sometimes but not always, how did stimulants affect their ability to update? Please reply with emojis <1% for "completely trashed", 50% for neutral, >99% for "huge improvement".
Comments with additional details are welcome.
(Mods: Consider having these number thingies sorted separately from other reacts, and by the number thingy rather than by # of votes.)
(Oh number thingy = percentage)
PSA: If you are older than ~30 you may have only received 1 dose of MMR vaccine (which includes measles), and should consider a second one. I have not done the EV math on this.
In 1989 the Advisory Committee on Immunization Practices (ACIP), the American Academy of Pediatrics (AAP), and the American Academy of Family Physicians (AAFP) all shifted from recommending 1 dose of the MMR vaccine to 2, with the second dose coming between ages 4-6, because of outbreaks in adults who received only 1 dose. This means almost everyone who was 7 or older in 1989 received only one shot, people between 1 and 6 in 1989 may or may not have received a second, and I'm unclear how fast pediatricians adopted the standard, so I'm not sure what the chances of a second shot are for people between ~30 and 35.
A third shot is generally considered harmless but if you're being cautious it's possible to check your childhood vaccine records. I was able to get mine from my school district; if your pediatrician's office is still around you can also try them.
I wanted to publish this with actual math on the costs and benefits, but the post was suffering from serious feature creep and I at least wanted to get this one fact out there quickly.
I have a friend who spent years working on existential risk. Over time his perception of the risks increased, while his perception of what he could do about them decreased (and the latter was more important). Eventually he dropped out of work in a normal sense to play video games, because the enjoyment was worth more to him than what he could hope to accomplish with regular work. He still does occasional short term projects, when they seem especially useful or enjoyable, but his focus is on generating hedons in the time he has left.
I love this friend as a counter-example to most of the loudest voices on AI risk. You can think p(doom) is very high and have that be all the more reason to play video games.
I don't want to valorize this too much because I don't want retiring to play video games becoming the cool new thing. The admirable part is that he did his own math and came to his own conclusions in the face of a lot of social pressure to do otherwise.
Being fired from my first programming job was a counterintuitive gift. Also counterintuitive: I probably would have been better off with worse interviewing skills. My interview skills outstripped my programming skill, which meant I got jobs I was underqualified for, which ultimately went badly. I would have been better off with interview skills better correlated with my actual job skills.
TBC I didn't spend my whole career like this, just the last 3-4 years. And yeah, for those years I felt I was bad at my job and underqualified, and it was extremely frustrating that when I reached out for help people would say "oh you just have imposter syndrome" without checking my actual performance. And when they finally caught up to reality, there was never any acknowledgement that I'd been trying to get this addressed for months.
Last week I got nerdsniped with the question of why established evangelical leaders had a habit of taking charismatic narcissists and giving them support to found their own churches[1]. I expected this to be a whole saga that would teach lessons on how selecting for one set of good things secretly traded off against others. Then I found this checklist on churchplanting.com. It’s basically “tell me you’re a charismatic narcissist who will prioritize growth above virtue without telling me you’re a…“. And not charismatic in the sense of asking reasonable object-level questions that are assessed by a 3rd party and thus vulnerable to halo effects[2].
The first and presumably most important item on the checklist is "Visioning capacity", which includes both the ability to dream that you are very important and the ability to convince others to follow that dream. Commitment to growth has its own section (7), but it's also embedded in section 4 (skill at attracting converts). Section 12 is Resilience, but the only specific setback mentioned is ups and downs in attendance. "Can you create a grand Faith" is the last item on the 13-point list...
None of my principled arguments against "only care about big projects" have convinced anyone, but in practice Google reorganized around that exact policy ("don't start a project unless it could conceivably have 1b+ users; kill it if it's ever not on track to reach that") and they haven't home-grown an interesting thing since.
My guess is the benefits of immediately aiming high are overwhelmed by the costs of less contact with reality.
Lord grant me the strength to persevere when things are hard
The courage to quit when things are impossible
And the wisdom to know the difference
(original post)
I've stuck with Amazing Marvin longer than any other project management/todo app. That includes some pretty awful times when 95% of my tasks were medical or basic home care, times I was firing on all cylinders and cranking through ambitious projects, and times in between.
Some things I like about Marvin
Hank Green on the worst things about no longer having cancer:
As of October 2022, I don't think I could have known FTX was defrauding customers.
If I'd thought about it I could probably have figured out that FTX was at best a casino, and I should probably think seriously before taking their money or encouraging other people to do so. I think I failed in an important way here, but I also don't think my failure really hurt anyone, because I am such a small fish.
But I think in a better world I should have had the information that would lead me to conclude that Sam Bankman-Fried was an asshole who didn't keep his promises, and that this made it risky to make plans that depended on him keeping even explicit promises, much less vague implicit commitments. I have enough friends of friends that have spoken out since the implosion that I'm quite sure that in a more open, information-sharing environment I would have gotten that information. And if I'd gotten that information, I could have shared it with other small fish who were considering uprooting their lives based on implicit commitments from SBF. Instead, I participated in the irrational exuberance that probably made people take more risks on the margin, and left them more vulnerable to...
Recently Timothy TL and I published a podcast on OpenPhil and GoodVentures. As part of this, we contacted 16 people and organizations asking for comment. They were given access to the full recording as well as a searchable transcript.
Last time I did this the results were impressive, with over half of respondents answering in <24 hours and only 2 non-responders. I walked away thinking asking for comments was a surprisingly cheap norm with a lot of upside. This time it felt like a slog, with the process dragging on for weeks beyond what I felt was a quite generous initial deadline.
How do stimulants affect your ability to update or change your mind? @johnswentworth and I are debating stimulant usage in an unpublished dialogue, and one crux is how stimulants affect one's ability to update.
People who have used stimulants, please percent-emoji with how they affect your ability to update- <1% for "completely trashed", 50% for neutral, >99% for "huge improvement". Comments with additional details are welcome.
Check my math: how does Enovid compare to humming?
Nitric Oxide is an antimicrobial and immune booster. Normal nasal nitric oxide is 0.14ppm for women and 0.18ppm for men (sinus levels are 100x higher). journals.sagepub.com/doi/pdf/10.117…
Enovid is a nasal spray that produces NO. I had the damnedest time quantifying Enovid, but this trial registration says 0.11ppm NO/hour. They deliver every 8h and I think that dose is amortized, so the true dose is 0.88ppm. But maybe it's more complicated. I've got an email out to the PI but am not hopeful about a response clinicaltrials.gov/study/NCT05109…
So Enovid increases nasal NO levels somewhere between 75% and 600% compared to baseline, which is not shabby. Except humming increases nasal NO levels by 1500-2000%. atsjournals.org/doi/pdf/10.116….
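Spelling out the arithmetic behind those percentages, under my own assumptions (the trial's 0.11ppm/hour figure, an 8-hour amortization, and the female baseline of 0.14ppm):

```python
# Assumptions: 0.14ppm female baseline; the trial's 0.11ppm NO/hour
# figure may be either the whole dose or amortized over the 8h interval.
baseline = 0.14           # ppm, normal nasal NO for women (men: 0.18)
per_hour = 0.11           # ppm NO/hour from the trial registration
dose_low = per_hour       # if the registered figure is the whole dose
dose_high = per_hour * 8  # if it's amortized over 8h: 0.88ppm

pct_low = dose_low / baseline * 100    # increase over baseline, low case
pct_high = dose_high / baseline * 100  # increase over baseline, high case
print(round(pct_low), round(pct_high))  # ~79% and ~629%
```

That reproduces roughly the 75-600% range quoted, with the spread driven entirely by the amortization question.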
Enovid stings and humming doesn't, so it seems like Enovid should have the larger dose. But the spray doesn't contain NO itself, only compounds that react to form NO. Maybe that's where the sting comes from? Cystic fibrosis and burn patients are sometimes given stratospheric levels of NO for hours or days; if the burn from Enovid came from the NO itself then those patients would be in agony.
I'm not fi...
Much has been written about how groups tend to get more extreme over time. This is often based on evaporative cooling, but I think there's another factor: it's the only way to avoid the geeks->mops->sociopaths death spiral.
An EA group of 10 people would really benefit from one of those people being deeply committed to helping people but hostile to the EA approach, and another person who loves spreadsheets but is indifferent to what they're applied to. But you can only maintain the ratio that finely when you're very small. Eventually you need to decide if you're going to ban scope-insensitive people or allow infinitely many, and lose what makes your group different.
"Decide" may mean consciously choose an explicit policy, but it might also mean gradually cohere around some norms. The latter is more fine-tuned in some ways but less in others.
"Rocks for jocks" isn't a stereotype because geology is easy. It's a stereotype because rocks are heavy and field sites are far away.
Having AI voices read my drafts back to me feels like it's seriously leveled up my writing. I think the biggest, least replaceable feature is that I'm less likely to leave gaps in my writing: places where something is obvious to me but I still need to spell it out. It also catches bad transitions, and I suspect it's making my copy editor's job easier.
Toy model:
A person's skill level has a floor (what they can do with minimal effort) and a ceiling (what they can do with a lot of thought and effort). Ceiling raises come from things we commonly recognize as learning: studying the problem, studying common solutions. Floor raises come from practicing the skills you already have, to build fluency in them.
There's a rubber band effect where the farther your ceiling is from your floor, the more work you have to put in to raise it further. At a certain point the efficient thing to do is to grind until you have raised your floor, so that further ceiling raises are cheaper, even if you only care about peak performance.
My guess for why that happens is your brain has some hard constraints on effort, and raising the floor reduces the effort needed at all levels. E.g. it's easier to do 5-digit multiplication if you've memorized 1-digit times tables.
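A throwaway numeric sketch of the rubber-band claim (the cost function and every number here are invented, purely to show why grinding the floor first can win even if you only care about peak performance):

```python
# Invented cost model: raising the ceiling one unit costs more the
# farther the ceiling already is from the floor (the rubber band).
def ceiling_step_cost(floor, ceiling):
    return 1 + (ceiling - floor)

FLOOR_STEP_COST = 2  # flat cost to raise the floor one unit

# Path A: push the ceiling from 5 to 10 while the floor stays at 3.
cost_a = sum(ceiling_step_cost(3, c) for c in range(5, 10))

# Path B: raise the floor 3 -> 5 first, then push the ceiling 5 -> 10.
cost_b = 2 * FLOOR_STEP_COST + sum(ceiling_step_cost(5, c) for c in range(5, 10))

print(cost_a, cost_b)  # 25 vs 19: grinding the floor first is cheaper
```

Under these made-up costs, detouring through floor practice reaches the same ceiling for less total effort, which is the "efficient thing to do" claim above.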
My guess is the pots theory of art works best when a person's skill ceiling is well above their floor: effort is likely the limiting reagent, the artist will have things to try rather than flailing at random, and they will be able to assess how good a given pot is.
Has anyone gotten good results with Gemini doc editing? I have found it outrageously useless, but maybe I'm doing it wrong.
Are impact certificates/retroactive grants the solution to grantmaking corrupting epistemics? They're not viable for everyone, but for people like me who:
They seem pretty ideal.
So why haven't I put more effort into getting retroactive funding? The retroactive sources tend to be crowdsourced. Crowdfunding is miserable in general, and leaves you open to getting very small amounts of money, which feels worse than none at all. Right now I can always preserve the illusion I would get more money, which seems stupid. In particular even if I could get more money for a past project by selling it better and doing some follow up, that time is almost certainly better spent elsewhere.
[cross-posted from What If You Lived In the Least Convenient Possible World]
I came back to this post a year later because I really wanted to grapple with the idea I should be willing to sacrifice more for the cause. Alas, even in a receptive mood I don't think this post does a very good job of advocating for this position. I don't believe this fictional person weighed the evidence and came to a conclusion she is advocating for as best she can: she's clearly suffering from distorted thoughts and applying post-hoc justifications. She's clearly confused about what convenient means (having to slow down to take care of yourself is very inconvenient), and I think this is significant and not just a poor choice of words. So I wrote my own version of the position.
Let's say Bob is right that the costs exceed the benefits of working harder or suffering. Does that need to be true forever? Could Bob invest in changing himself so that he could better live up to his values? Does he have an ~obligation[1] to do that?
We generally hold that people who can swim have obligations to save drowning children in lakes[2], but there's no obligation for non-swimmers to make an attempt that will in...
It's weird how hard it is to identify what is actually fun or restorative, vs. supposed to be fun or restorative, or used to be fun or restorative but no longer is. And "am I enjoying this?" should be one of the easiest questions to answer, so imagine how badly we're fucking up the others.
Projects I might do or wish someone else would do:
A very rough draft of a plan to test prophylactics for airborne illnesses.
Start with a potential superspreader event. My ideal is a large conference, most of whose attendees travelled to get there, held in enclosed spaces with poor ventilation and air purification, in winter. Ideally >=4 days, so that people infected on day one are infectious while the conference is still running.
Call for sign-ups for testing ahead of time (disclosing all possible substances and side effects). Split volunteers into control and test group. I think you need ~500 sign ups in the winter to make this work.
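For what it's worth, a standard two-proportion power calculation lands in that ballpark. Assuming (my invented numbers) ~15% of the control group gets sick over the conference and the prophylactic halves that, 80% power at alpha=0.05 needs roughly 275 people per arm, i.e. ~550 sign-ups:

```python
from math import ceil
from statistics import NormalDist

def per_arm_n(p_control, p_treat, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for comparing two proportions."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = z.inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return ceil((z_a + z_b) ** 2 * variance / (p_control - p_treat) ** 2)

# Hypothetical winter attack rates: 15% in controls, halved by treatment.
print(per_arm_n(0.15, 0.075))  # 275 per arm, ~550 total sign-ups
```

Lower attack rates or smaller treatment effects push the number up fast, which is part of why the winter timing matters.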
Splitting controls is probably the hardest part. You'd like the control and treatment groups to be identical, but there are a lot of things that affect susceptibility: age, local vs. air travel, small children vs. not, sleep habits... it's hard to draw the line.
Make it logistically trivial to use the treatment. If it's lozenges or liquids, put individually packed dosages in every bathroom, with a sign reminding people to use them (color code to direct people to the right basket). If it's a nasal spray you will need to give everyone their own bottle, but make it trivial to get more if someone l...
This is a trial balloon for a longer post. Please let me know which parts you're interested in, if any.
The standard (North American) story of plate tectonics is one of accidental discovery: continental drift was rejected for lack of evidence or a mechanism. 50 years later, the US Navy discovered an anomaly on the sea floor that eventually led to the discovery of plate tectonics. But before that accidental discovery, geologists were very close to codifying plate tectonics on purpose.
Some definitions:
Wegener proposed Continental Drift in 1912. The past is a foreign country, but when I look at the evidence, it sure looks plausible to me. He didn't just point to the jigsaw-puzzle shorelines between Africa and South America, but to fossil evidence that made no sense without adjoining land, geological features...
things I found interesting about this video:
There's a category of good thing that can only be reached with some amount of risk, and that is hard to get out of once you start. All of romance risks getting your heart broken. You never have enough information to know a job will always and forever be amazing for you. Will anti-depressants give you your life back or dull your affect in hard-to-detect ways?
This is hard enough when the situation is merely high variance with incomplete information. But often the situations are adversarial: abusive partners and jobs camouflage themselves. Or the partner/job might start out good and get bad, as their finances change. Or they might be great in general but really bad for you (apparently other people like working for Google? no accounting for taste).
Or they might be genuinely malicious and telling you the issue is temporary, or that their ex wasn't a good fit but you are.
Or they might not be malicious, it might genuinely be the situation, but the situation isn't going to get better so it's damaging you badly.
You could opt out of the risk, but at the cost of missing some important human experiences and/or food.
How do you calculate risks when the math is so obfuscated?
When I did my vegan nutrition write-ups, I directed people to Examine.com's Guide to Vegan+Vegetarian Supplements. Unfortunately, it is paywalled. Fortunately, it is now possible to ask your library to buy access, so you can read that guide plus their normal supplement reviews at no cost to yourself.
Library explainer: https://examine.com/plus/public-libraries/
Ven*n guide: https://examine.com/guides/vegetarians-vegans/
A repost from the discussion on NDAs and Wave (a software company). Wave was recently publicly revealed to have made severance dependent on non-disparagement agreements, cloaked by non-disclosure agreements. I had previously worked at Wave, but negotiated away the non-disclosure agreement (but not the non-disparagement agreement).
...But my guess is that most of the people you sent to Wave were capable of understanding what they were signing and thinking through the implications of what they were agreeing to, even if they didn't actually have the conscientious
Problems I am trying to figure out right now:
1. Breaking large projects down into small steps. I think this would pay off in a lot of ways: lower context-switching costs, generally easier work, greater feelings of traction and satisfaction, instead of "what the hell did I do last week? I guess not much". This is challenging because my projects are, at best, ill-defined knowledge work, and sometimes really fuzzy medical or emotional work. I strongly believe the latter have paid off for me on net, but individual actions are often lottery tickets with payouts ...
"Do or Do Not: There is No Try"
Like all short proverbs, each word is doing a lot of work, and you can completely flip the meaning by switching between reasonable definitions.
I think "there is no try" often means "I want to gesture at this but am not going to make a real attempt" in sentences like "I'll try to get to the gym tomorrow" and "I'll try to work on my math homework tonight".
"there is no try" means "I am going to make an attempt at this but it's not guaranteed to succeed" in sentences like "I'm going to try to bench 400 tomorrow", "I'm t...
"have one acceptable path and immediately reject anyone who goes off it" cuts you off from a lot of good things, but also a lot of bad things. If you want to remove that constraint to get at the good weirdness, you need to either tank a lot of harm, or come up with more detailed heuristics to prevent it
repurposed from my comment on a FB post on an article criticizing all antidepressants as basically placebos
epistemic status: kind of dreading comments on this because it's not well phrased, but honing it is too low a priority. Every time you criticize insufficient caveating an angel loses its wings.
medical studies are ~only concerned with the median person. Any unusual success or failure is written off as noise, instead of replicable variability. As conditions get defined they narrow that to "median person with condition X" rather than "median person...
People talk about sharpening the axe vs. cutting down the tree, but chopping wood and sharpening axes are things we know how to do and know how to measure. When working with more abstract problems there's often a lot of uncertainty in:
Actual axe-sharpening rarely turns into intellectual masturbation be...
Some things are coordination problems. Everyone* prefers X to Y, but there are transition costs and people can't organize to get them paid.
Some things are similar to coordination problems, plus the issue of defectors: everyone prefers X (no stealing) to Y (constant stealing), but too many prefer X' (no one but me steals). So even if you achieve X, you need to pay maintenance costs.
Sometimes people want different things. These are not coordination problems.
Sometimes people endorse a thing but don't actually want it. These are not coordination pro...
That's a pretty reasonable guess, although I wasn't quite that dumb.
I'm temporarily working a full time gig. The meetings are quite badly run. People seemed very excited when I introduced the concept of memo meetings[1], but it kept not happening or the organizer would implement it badly. People (including the organizer) said nice things about the concept so I assumed this was a problem with coordination, or at least "everyone wants the results but is trying to shirk".
But I brought it up again when people were complaining about the length of one part of a meeting, and my boss said[2] "no one likes reading and writing as much as you", and suddenly it made sense that people weren't failing to generate the activation energy for a thing they wanted, they were avoiding a thing they didn't want but endorsed (or I pressured them into expressing more enthusiasm than they actually felt, but it felt like my skip boss genuinely wanted to at least try it and god knows they were fine shooting down other ideas I expressed more enthusiasm over).
So the problem was I took people's statements that they wanted memo meetings but got distracted by something urgent to be true, when actu...
I have a new project for which I actively don't want funding for myself: it's too new and unformed to withstand the pressure to produce results for specific questions by specific times*. But if it pans out in ways other people value I wouldn't mind retroactive payment. This seems like a good fit for impact certificates, which is a tech I vaguely want to support anyway.
Someone suggested that if I was going to do that I should mint and register the cert now, because that norm makes IC markets more informative, especially about the risk of very negative proje...
In this video essay, Patrick Willems talks about George Lucas and Francis Ford Coppola. Both of them took a huge risk in the early 80s to self-finance their own films (Empire Strikes Back and One From The Heart). Their goal was to make enough money to gain independence from the studio system and make the movies they wanted to make.
In the short term, George Lucas was the obvious winner here, in that Empire Strikes Back is one of the most popular movies of all time and it indeed granted him complete independence from the studio system. He used that fre...
I have friends who, early in EA or rationality, did things that look a lot like joining Nonlinear. 10+ years later they're still really happy with those decisions. Some of that is selection effects of course, but I think some of it is that the reasons they joined were very different.
People who joined early SingInst or CEA by and large did it because they'd been personally convinced this group of weirdos was promising. The orgs maybe tried to puff themselves up, but they had almost no social proof. Whereas nowadays saying "this org is EA/rationalist" gives you a b...
Sometimes different people have different reaction to the same organization simply because they want different things. If you want X, you will probably love the organization that pushes you towards X, and hate the organization that pushes you away from X.
If this is clearly communicated at an interview, the X person probably will not join the anti-X organization. So the problem is when they figure it out too late, when changing jobs again would be costly for them.
And of course it is impossible to communicate literally everything, and also sometimes things change. I think a reasonable rule of thumb would be to communicate the parts where you differ significantly from the industry standard. Which leads to the question of what the industry standard is. Is it documented explicitly anywhere? There does seem to be a consensus about what is normal and what is not, if you e.g. go to Workplace Stack Exchange.
(...getting to the point...)
I think the "original weirdos" communicated their weirdness clearly.
Compared to that, the EA community is quite confusing for me (admittedly, an outsider). On one hand, they handle tons of money, write grant applications, etc. On the other hand, they sometim...
In the spirit of this comment on lions and simulacra levels I present: simulacra and halloween decorations
Level 1: this is actually dangerous. Men running at you with knives, genuinely poisonous animals.
Level 2: this is supposed to invoke genuine fear, which will dissipate quickly when you realize it's fake. Fake poisonous spiders that are supposed to look real, a man who jumps out at you with a fake knife but doesn't stab you, monsters in media that don't exist but hit primal fear buttons in your brain.
Level 3: reminds people of fear without eve...
I know we hate the word content but sometimes I need a single word to refer to history books, longrunning horror podcasts, sitcoms, a Construction Physics blog post, and themepark analysis youtube essays. And I don't see any other word volunteering.
Let's say there's a drug that gives people 20% more energy (or just cognitive energy). My intuition is that if I gave it to 100 people, I would not end up with 120 people's worth of work. Why?
Possibilities:
I'm convinced people are less likely to update when they've locked themselves into a choice they don't really want.
If I am excited to go to Six Flags and get a headache that will ruin the rollercoasters for me, I change my plans. But if I'm going out of FOMO or to make someone else happy and I get a headache, it doesn't trigger an update to my plans. The utilitarian math on this could check out, but my claim is that's not necessary: once I lock myself in I stop paying attention to pain signals and can't tell if I should leave or not.
4 months ago I shared that I was taking sublingual vitamins and would test their effect on my nutrition in 2025. This ended up being an unusually good time to test because my stomach was struggling and my doctor took me off almost all vitamins, so the sublinguals were my major non-food source (and I've been good at extracting vitamins from food). I now have the "after" test results. I will announce results in 8 days- but before then, you can bet on Manifold. Will I judge my nutrition results to have been noticeably improved over the previous results?...
I've heard that a lot of skill in poker is not when to draw or what to discard, it's knowing how much to bet on a given hand. There isn't that much you can do to improve any given hand, but folding earlier and betting more on good hands are within your control.
feels like a metaphor for something.
Which of the following research reports would you find most useful? Feel free to elaborate in comments. It's especially useful to know what your thresholds are for information changing a decision: how safe does it have to be, and how certain do we need to be about that?
- on ketamine for depression
- for a prescription sleeping pill (probably gabapentin, trazodone, or Seroquel; I'd love to do all 3 but the unit of comparison is 1 report)
Whatever I do is likely to focus on the costs. I'll give a very rough sketch of the upside, but these are all things where the...
Is there a lesswrong canon post for the quantified impact of different masks? I want to compare a different intervention to masks and it would be nice to use a reference that's gone through battle testing.
AFAICT, for novel independent work:
genuine backchaining > plan-less intuition or curiosity > fake backchaining.
And most attempts to move people from intuition/curiosity to genuine backchaining end up pushing them towards fake backchaining instead. This is bad because curiosity leads you to absorb a lot of information that will either naturally refine your plans without conscious effort, or support future backchaining. Meanwhile fake backchaining makes you resistant to updating, so it's a very hard state to leave. Also curiosity is fun and fake backch...
I feel like it was a mistake for Hanson to conflate goodharting, cooperative coordination, accurate information transfer, and extractive deception.
[good models + grand vision grounded in that model] > [good models + modest goals] > [mediocre model + grand vision]
There are lots of reasons for this, but the main one is: Good models imply skill at model building, and thus have a measure of self-improvement. Grand vision implies skill at building grand vision unconnected to reality, which induces more error.
[I assume we're all on board that a good, self-improving model combined with a grand vision is great, but in short supply]
Difficulties with nutrition research:
The best way through I see is to use population studies to find responsive markers, so people can run fast experiments on themselves. But it's still pretty iffy.
Ambition snowballs/Get ambitious slowly works very well for me, but some people seem to hate it. My first reaction is that these people need to learn to trust themselves more, but today I noticed a reason I might be unusually suited for this method.
Two things that keep me from aiming at bigger goals are laziness and fear. Primarily fear of failure, but also fear of doing uncomfortable things. I can overcome this on the margin by pushing myself (or having someone else push me), but that takes energy, and the amount of energy never goes down the whole time I'm working...
I think it's weird that saying a sentence with a falsehood that doesn't change its informational content is sometimes considered worse than saying nothing, even if it leaves the person better informed than they were before.
This feels especially weird when the "lie" is creating a blank space in a map that you are capable of filling in ( e.g. changing irrelevant details in an anecdote to anonymize a story with a useful lesson), rather than creating a misrepresentation on the map.