All of Linch's Comments + Replies

2Quinn18dIt could be coincidental, but since then I think the rate of pondering founding/building ideas has increased. Perhaps my ability to see myself in a founder role has increased. (Which isn't specifically about profitable business models, so could be orthogonal to the billionaire suggestion: most of the "buildy" ideas I ponder are grant-reliant / unprofitable)
COVID and the holidays

If, on the other hand, the counterfactual is "you get covid a few years later", then the loss of expected life does not occur.

What's the intuition here? If we believe that infection confers less immunity than immunization, naively the counterfactual looks more like "get covid N-1 times" vs "get covid N times." Rather than "get covid once now" vs "get covid once some time in the future"

Frame Control

Sorry, do you mean this is "obviously" true for all humans, or only frame controllers? If the latter, I would consider this form of understanding intents useful Bayesian evidence for someone being a frame controller.

2Lukas_Gloor2moYeah, I think that's a good heuristic!
Frame Control

Frame control is an effect; very often, people who frame control will not be aware that this is what they’re doing, and have extensive reasoning to rationalize their behavior that they themselves believe. If you are close to a frame controller and squinting at them to figure out “are they hiding intent to control me,” you often will find the answer is “no.” 

I wonder if you can infer de facto intent from the consequences, ie, not the intents-that-they-think-they-had, but more the intents they actually had.

In particular, a lot of motivated cognition oft... (read more)

1Peter Hroššo2moI believe this is possible. When I was reading the OP, I was checking with myself how I am defending myself from malicious frame control. I think I am semi-consciously modeling the motivation (=intent they actually had, as you call it) behind everything people around me do (not just say, as the communication bandwidth in real life is much broader). I'd be very surprised if most people wouldn't be doing something similar at least on the sub-conscious level. The difficult part in my opinion is: 1) Make this subconscious information (aka intuition) consciously available and well calibrated 2) Actually trust this intuition, as the frame-controller is adversarially undermining your trust in your own sense making and actively hiding their true motivations, so usually your intuition will have high uncertainty
2Lukas_Gloor2moObviously. It's interpersonally exploitative cognition.
Speaking of Stag Hunts

Have people considered just making a survey and sending it out to former Leverage staff? This really isn't my scene, but it seems like while surveys have major issues, it's hard for me to imagine that surveys are worse at being statistically representative than qualitative accounts that went through many selection filters.

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

Anna Salamon:
So, I think... So, look, I - mm. It's hard to say all the things in all the orders at once. I'm going to say a different thing and then I'll [inaudible], sorry.

So, once upon a time I heard from a couple junior staff members at CFAR that you were saying bad things to them about me and CFAR.

Geoff Anders:
Believe it.

Anna Salamon:
I forget. They weren't particularly false things. So that I don't accidentally [inaudible]-

Typo?

2Rob Bensinger3moWhat's the typo?
They don't make 'em like they used to

Thanks, appreciate the diagnosis!

Tell the Truth

Feels like a sleight-of-hand to me that your post did not make clear.

9dxu3moThis is not a sleight-of-hand; Indian Americans (or Indian Britons, or Chinese Australians, or members of ethnicity X living in country Y) do constitute an ethnic group, in precisely the same way e.g. African Americans constitute an ethnic group. This is because membership in these groups is decision-relevant, in a way that membership in broader groups such as "all Indians in the world" is not: e.g. when you are selecting from a pool of job applicants, you will in most cases be dealing with applicants who either (a) already live within the country, or (b) intend to move to the country--either of which subjects them to the selection effect induced by the H1-B visa process. And as it is in this context that "ethnic groups" (and moral questions surrounding the fair or unfair treatment thereof) are even a thing worth noticing to begin with, there is no sleight-of-hand in the original post.
They don't make 'em like they used to

Hmm so framed another way, I think the claim is that capitalism previously had created inner optimizers in individuals interested in "high quality craftsmanship," but over time the alignment problem has been better solved with more optimization power and now individuals/companies are better optimized for selling goods. Does this sound like an accurate paraphrase of your position? 

(FWIW it sounds pretty plausible to me)

3Matthew Barnett3moI'd say roughly, yes. However, I would interpret with caution the idea that there is a coherent objective function implied by the market that we have recently gotten better at solving.
They don't make 'em like they used to

As clone of saturn noted, one need not posit a conspiracy for planned obsolescence to occur. The ordinary process of increasing profits combined with information asymmetry is more than sufficient. I wouldn't go as far as saying that the old products are better, but I'd suspect that over time, manufacturers started placing greater emphasis on "what will sell" relative to "what represents high quality craftsmanship."

I understand how this can be an explanation for level effects, but not how this can explain the delta.

4Matthew Barnett3moI don't have concrete data to back this up, but I'd expect the market for selling consumer goods is much more competitive than 100 years ago, given the rise of globalization, and increasing trade more generally.
They don't make 'em like they used to

I think that model would not predict the result at 0:06, fwiw.

4gjm3moThe video makes it really hard to tell exactly what's going on (particularly annoying is the bit at 1:32 where they show an overhead view, which would let us see what's happening to each car without bits of the other one being in the way -- and then cut away from it to yet another nigh-incomprehensible side view at the instant of contact). But I think there are two things going on here: the newer car has a slightly more squashable front portion, and a much less squashable passenger compartment. In a head-on collision between the cars, the former doesn't do much to make the newer car look better (though it does make the collision less bad for the occupants of both vehicles) because energy that would otherwise be used for crushing both drivers is used for crushing the newer car's front part instead. So part-way through 0:29 you can (I think) see that the newer car's front has scrunched up more. But there's still enough kinetic energy, or momentum, or whatever the relevant quantity actually is here, to keep scrunching. As we go through 0:30, the front of the older car also gets crushed. But so does the passenger compartment of the older car, whereas the passenger compartment of the newer car remains largely intact. So the newer car * has a front portion that can absorb more energy by crumpling, which helps reduce the (other) damage to both cars * has a stronger and more rigid passenger compartment, so that once the crash has proceeded far enough that the next thing that has to go is either the front of the older car or the passenger compartment of the newer car, it's the front of the older car that goes.
2Vanilla_cabs3moWell now I'm confused.
They don't make 'em like they used to

Because Moloch. If at least one major manufacturer adds extra lifespan, that forces the others to compete. But the real profit-maximizing move for major manufacturers as a whole is to conspire to sell short-lived stoves.

Why would Moloch (the metaphorical God for "coordination problems are hard") be the appropriate metaphor for conspiracy?

1acylhalide3moBecause people at large are not able to coordinate to make what they really want for each other. So they make do with a capitalist model - with rules and enforcers of exchange, units of accounting, corporate structures, etc. Which then allows the few to coordinate at the cost of the many.
1Bezzi3moRight, "conspire" was the wrong word (as others have noted, information asymmetry is enough, and I don't think that manufacturers literally gather in smoke-filled rooms to adjust the lifespan of their products). But I still think Moloch to be a valid metaphor for a situation where: * customers are forced to buy short-lived products * manufacturers could unilaterally prolong the lifespan of their products at a small cost (or even a small gain), but they choose not to because they want to sell more now * long-lived products could be sold at higher prices
They don't make 'em like they used to

Another common belief is that older cars are more crash-resistant than modern cars, with varying explanations. I'm not sure about this but I suspect the belief is very wrong, as evidenced by this crash test between a 1959 Chevy and a 2009 Chevy.

1TLW3moOlder cars are more resistant to low-speed collisions than new cars. In a low-speed collision you can often have a new car totaled where an older car would have been fine. (There was a period where there were US regulations requiring low-speed crashes to not cause significant damage, for one. 1970s or so. (Federal Motor Vehicle Safety Standard No. 215 I believe.)) In higher-speed collisions newer cars are significantly better at keeping the passenger compartment intact than older cars, where things would fail in a haphazard fashion once things do start buckling.
2Vanilla_cabs3moMy understanding is that old cars were made of stronger materials that deform less on impact. As a result, it was the contents of the car that deformed on impact. New cars are made less rigid so that the occupants have better chances of surviving an impact. This is definite progress (and a good excuse for making non-durable cars). By 1999 the new trend had already started. Try a 1960s or 1980s car.
Dating profiles from first principles: heterosexual male profile design

They can do that, but there's no strong reason to believe that they did do that.

0ChristianKl3moThe fact that they write articles about how they are not using ELO anymore is a strong reason to believe that they don't and do something more complex.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

To be slightly more precise, I think I historically felt like I identified with like 60% of framings in the general MIRI cluster (at least the way it appears in public outputs) and now I'm like 80%+, and part of the difference here was that I was already pretty into stuff like empiricism, materialism, Bayesianism, etc., but I previously (not very reflectively) had opinions and intuitions in the direction of thinking of myself as a computational instance, and these days I can understand the algorithmic framing much better (even though it's still not very intuitive/natural to me).

(Numbers made up and not well thought out)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

This sounds right to me. FDT feels more natural when I think of myself as an algorithm than when I think of myself as a computation, for example.

7Linch3moTo be slightly more precise, I think I historically felt like I identified with like 60% of framings in the general MIRI cluster (at least the way it appears in public outputs) and now I'm like 80%+, and part of the difference here was that I was already pretty into stuff like empiricism, materialism, Bayesianism, etc., but I previously (not very reflectively) had opinions and intuitions in the direction of thinking of myself as a computational instance, and these days I can understand the algorithmic framing much better (even though it's still not very intuitive/natural to me). (Numbers made up and not well thought out)
4Duncan_Sabien3moI'm saying they involved circling often while I was there but that fact was something like 3-15% of their "character" (and probably closer to 3% imo) and so learning that some other thing also involves circling tells you very little about the overall resemblance of the two things.
5AnnaSalamon3moCFAR staff retreats often involve circling. Our last one, a couple weeks ago, had this, though as an optional evening thing that some but not most took part in.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I'm actually pretty surprised by this; the people I personally know in academia who aren't community members tend to a) be true believers about their impact, b) really love the problems they work on or their subfields, or c) feel kind of burned. Liking academia for work-life balance reasons seems very surprising to me; even my friends in fields with a fair amount of free time (e.g. theoretical CS) usually believe that they can have an easier life elsewhere.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

If you pick a randomly selected academic or hobby conference, I will be much more surprised if they had circling than if they had food.

1Duncan_Sabien3moYeah. I am more pointing at "the very fact that Scott seems to think that 'trying to circle more than once' is sufficient to posit substantial resemblance between MIRI research retreats and CFAR staff retreats is strong evidence that Scott has no idea what the space of CFAR staff retreats is like."
Petrov Day Retrospective: 2021

Yeah, I think this is a pretty important point. I pointed this out before here, here, and here (2 years ago). I personally still enjoyed the game as is. However, I'm open to the idea that future Petrov Days should look radically different and wouldn't have a gamifying element at all. But if we want a game that honestly reflects the structure of Petrov's decision that day, I'd probably want something that accounts for the following features:

1. Petrov clearly has strong incentives and social pressures to push the button.

2. ... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

As an example of the difficulties in illusions of transparency: when I first read the post, my first interpretation of "largely fake research" was neither what you said nor what jessicata clarified below; I simply assumed that "fake research" => "untrue," in the sense that people who updated on >50% of research from those orgs will on average have a worse Brier score on related topics. This didn't seem unlikely to me on the face of it, since random error, motivated reasoning, and other systemic biases can all contribute to having bad models of the world.

Since 3 people can have 4 different interpretations of the same phrase, this makes me worried that there are many other semantic confusions I didn't spot.
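(For readers unfamiliar with the term: the Brier score is just the mean squared error between probability forecasts and binary outcomes, with lower being better. A minimal sketch with made-up numbers, purely to illustrate the scoring rule rather than any actual claim about those orgs:)

```python
# Brier score: mean squared difference between probability forecasts
# and binary outcomes (1 = happened, 0 = didn't). Lower is better.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster who trusted bad research and became confidently
# wrong, vs. one who stayed maximally uncertain on the same questions.
credulous = brier_score([0.9, 0.8, 0.7], [0, 1, 0])  # ~0.45
agnostic = brier_score([0.5, 0.5, 0.5], [0, 1, 0])   # 0.25
```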

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Are you including productivity/prescription drugs like off-label use of Adderall or modafinil, or only recreational drugs?

I think the former is substantially less dangerous, as, among other reasons, there's at least in theory substantially less motivated reasoning pushing users to find reasons to justify their use.

1James_Miller3moI'm not including prescription but off label use of Adderall or Modafinil as I do indeed think they can increase productivity (for some) and buying them doesn't enrich drug gangs.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Agreed, there are two different errors here. One is conflating total harm with per-individual harm. The other, more subtle point you're alluding to is that a lot of the relative harm of alcohol/tobacco/etc. has to do with frequency of use, which is a different question from whether doing X once in an individual or community setting is advisable.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I'm confused why there were ~40 comments in this subthread without anybody else pointing out this pretty glaring error of logical inference (unless I'm misunderstanding something)

2Viliam3moI was going to say something similar, that "how dangerous is substance X" only makes sense when you specify how much of the substance X and how often you consume. Like, when you calculate "the danger of alcohol", are you describing those who drink one glass of wine each year on their birthday, or those who start every morning by drinking a cup of vodka, or some weighted average? Same question for every other substance. And if the answer is "the danger of how the average user consumes substance X", well, what makes you sure that this number will apply to you? (Are you really going to make sure that your use is average, in both amount and frequency? Do you even know what those averages are?) Then consider the fact that different people can react to the same substance differently. If you specify the "danger" as one number, what is the underlying probability distribution? If substance X causes serious-but-not-crippling problems in 50% of users, and substance Y completely destroys 5% of users, which one is "more dangerous"?
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

A 2010 analysis concluded that psychedelics are causing far less harm than legal drugs like alcohol and tobacco. (Psychedelics still carry substantial risks, aren't for everybody, and should always be handled with care.)


? This is total harm, not per use. More people die of car crashes than from rabid wolves, but I still find myself more inclined to ride cars than ride rabid wolves as a form of transportation.

1ioannes3moGood point, though I think current evidence [https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=TSuFngrgyWkyaZBea] as a whole (anti-addictive; efficacy as a therapeutic modality; population surveys finding psychedelic use anticorrelated with psychological distress) pushes towards psychedelics' risk profile being less harmful though higher variance than alcohol and tobacco per use.

I'm confused why there were ~40 comments in this subthread without anybody else pointing out this pretty glaring error of logical inference (unless I'm misunderstanding something)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Just want to register that this comment seemed overly aggressive to me on a first read, even though I probably have many sympathies in your direction (that Leverage is importantly disanalogous to MIRI/CFAR)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

What I'm talking about is a system of moral duties and obligations connected to an explicitly academic mission. Academia is older than the corporation, and is a separate world. It's very important not to confuse them, and I wish that corporations (and "research labs" associated with corporations) would state very clearly "we are in no way an academic institution".

To be clear, my own organization is a nonprofit. We are not interested in making money, nor in doing other things of low moral value. 

I currently think emulating the culture of normal compani... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Thanks so much for the response! I really appreciate it.

I'm assuming your institution wants to follow an academic model, including teaching, mentorship, hierarchical student-teacher relationships, etc.

I think we have more of a standard manager-managee hierarchical relationship, with the normal corporate guardrails plus a few more. We also have explicit lines of reporting for abuse or other potential issues to people outside of the organization, to minimize potential coverups.

Here are my general thoughts:

An open question is when you have a duty of care

I'm kind... (read more)

5temporary_visitor_account3moThis seems like the beginning of a very good discussion, but: 1. I want to be clear that I'm not a member of the LW community, and I don't want to take up space here. 2. There are complex and interesting ideas in play on both sides that are hard to communicate in a back-and-forth, and are perhaps better saved for a structured long-form presentation. To that end, I'll suggest that if you like we chat offline. I'm in NYC, for example, and you're welcome to get in touch via PM.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Thanks for the outside perspective. If you're willing to go into more detail, I'm interested in a more detailed account from you on both what academia's safeguards are and (per gwillen's comment) where you think academia's safeguards fall short and how that can be fixed.

This is decision-relevant to me as I work in a research organization outside of academia (though not working on AI risk specifically), and I would like us to both be more productive than typical in academia and have better safeguards against abuse.

If it helps, we have about 15 rese... (read more)

Sure. I'm really glad to hear it. This is not my community, but you did explicitly ask.

This is just off the top of my head, and I don't mean it to be a final complete and correct list. It's just to give you a sense of some things I've encountered, and to help you and your org think about how to empower people and help them flourish. Academia uses a lot of these to avoid the geek-MOP-sociopath cycle.

I'm assuming your institution wants to follow an academic model, including teaching, mentorship, hierarchical student-teacher relationships, etc.

An open question is... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Sorry, am I misunderstanding something? I think taking "clinically significant symptoms", specific to the UC system, as a given is wrong because it did not directly address either of my two criticisms:

1. Clinically significant symptoms =/= clinically diagnosed even in worlds where there is a 1:1 relationship between clinically significant symptoms and would have been clinically diagnosed, as many people do not get diagnosed

2. Clinically significant symptoms do not have a 1:1 relationship with would have been clinically diagnosed.

4Gunnar_Zarncke3moWell, I agree that the actual prevalence you have in mind would be roughly half of 38% i.e. ~20%. That is still much higher than the 12% you arrived at. And either value is so high that there is little surprise some severe episodes of some people happened in a 5-year frame.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Sorry, maybe this is too nitpicky, but clinically significant symptoms =/= clinically diagnosed, even in worlds where the clinically significant symptoms are severe enough to be diagnosed as such.

If you instead said in "population studies 30-40% of graduate students have anxiety or depression severe enough to be clinically diagnosed as such were they to seek diagnosis" then I think this will be a normal misreading from not jumping through enough links.

Put another way, if someone in mid-2020 told me that they had symptomatic covid and was formally diagnosed... (read more)

Ah, sorry, yes. Me being unclear on that was also bad. The phrasing you give is the one I intended to convey, though I sure didn't do it.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed [emphasis mine] depression or anxiety (link)

I'm confused about how you got to this conclusion, and think it is most likely false. Neither your link, the linked study, nor the linked meta-analysis in the linked study of your link says this. Instead, the abstract of the linked^3 meta-analysis says:

Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate

... (read more)
6habryka3moI feel like the paragraph you cited just seems like the straightforward explanation of where my belief comes from? 24% of people have depression, 17% have anxiety, resulting in something like 30%-40% having one or the other. I did not remember the section about the screening instruments over-identifying cases of depression/anxiety by approximately a factor of two, which definitely cuts down my number, and I should have adjusted it in my above comment. I do think that factor of ~2 does maybe make me think that we are doing a bit worse than grad students, though I am not super sure.
7Gunnar_Zarncke3moNote that the pooled prevalence is 24% (CI 18-31). But it differs a lot across studies, symptoms, and location. In the individual studies, the range is really from zero to 50% (or rather to 38% if you exclude a study with only 6 participants). I think a suitable reference class would be the University of California which has 3,190 participants and a prevalence of 38%.
In Wikipedia — reading about Roko's basilisk causing "nervous breakdowns" ...

What does "magiteral" mean here? 

At any rate, you're free to be the change you want to see in the world. :)

It was a dirty job, he thought, but somebody had to do it. 

As he walked away, he wondered who that somebody might be.

In Wikipedia — reading about Roko's basilisk causing "nervous breakdowns" ...

... if THE CLAIM is true then it brings to mind some potentially unkind questions about the psychological health of a seemingly significant portion of the 'rationality community'.

So I think we have much stronger evidence of psychological health issues with the rationality community (which I assume is the same thing as the 'rationality community' though I'm uncertain) via things like the LW and SSC surveys. Perhaps you do not trust surveys because of self-report issues? But in that case I'd probably look at proxies like common correlates of mental healt... (read more)

5Robert Miles3moAgreed. On priors I would expect above-baseline rates of mental health issues in the community even in the total absence of any causal arrow from the community to mental health issues (and in fact even in the presence of fairly strong mental health benefits from participation in the community), simply through selection effects. Which people are going to get super interested in how minds work and how to get theirs to work better? Who's going to want to spend large amounts of time interacting with internet strangers instead of the people around them? Who's going to be strongly interested in new or obscure ideas even if it makes the people around them think they're kind of weird? I think people in this community are both more likely to have some pre-existing mental health issues, and more likely to recognise and acknowledge the issues they have.
-1ThurstonBT3mo@Linch: My observations (based on an admittedly limited set of observations and my lack of psychological training) agree with your "I'm personally pretty convinced that psychological issues in the rationalist community is substantially above baseline." I'm surprised, given the claimed truth-seeking and evidentiary rigor values of 'the rationalist community', that there is not a magiteral data-laden LessWrong essay that addresses "psychological issues in the rationalist community" that is cited when discussion turns to this topic. Can anyone point to such an essay?
Zoe Curzi's Experience with Leverage Research

My general feeling about this is that the information I know is either well-known or otherwise "not my story to tell." 

I've had very few direct interactions with Leverage except applying to Pareto, a party or two, and some interactions with Leverage employees (not Geoff) and volunteers.  As is common with human interactions, I appreciated many but not all of my interactions.

Like many people in the extended community, I've been exposed to a non-overlapping subset of accounts/secondhand rumors of varying degrees of veracity. For some things it's be... (read more)

Book Analysis: New Thrawn Trilogy

In his first scene, Thrawn is fighting some Imperial troops which are camped in the forest. He figures out that their shield system must let through small forest animals, or they’d be constantly dealing with false alarms. So he tapes bombs to squirrels and blows up the Imperial camp.

Now, I have no idea if this is a legitimately clever military tactic, or if it makes sense that Thrawn is the first person to think of it. I’m not a tactician.

This is also a plot point of a different fantasy story I've read, and also not too different from some of the Taliban's actions in Afghanistan.

Zahn probably got the idea from the many anecdotes/stories of https://en.wikipedia.org/wiki/Military_animal#As_living_bombs historically going back before the Mongols to even https://en.wikipedia.org/wiki/Olga_of_Kiev#Drevlian_Uprising (or earlier https://historyofyesterday.com/5-bizarre-uses-of-animals-as-weapons-in-war-by-armies-7a57108afcb ).

I'm sure Zahn knows at least some of them: they are a semi-common trivia point, and stealing from military history is a time-honored strategy - history is far more clever and imaginative than you are, it has built-i... (read more)

Common knowledge about Leverage Research 1.0

It's more crazy after you load in the context that people at Leverage think Kant is more impressive than eg Jeremy Bentham. 

Zoe Curzi's Experience with Leverage Research

What's frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I've visited, in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7. (It manages to be nontransparent and cultlike in other ways too!).  While on the other hand, their productive output was... also like a 2/10? It's indefensible. But still only a fraction of the relevan

... (read more)

As far as people leaving organizations I'd love to have good data for MIRI, CFAR, CEA and FHI.

Linch's Shortform

[Job ad]

Rethink Priorities is hiring for longtermism researchers (AI governance and strategy), longtermism researchers (generalist), a senior research manager, and fellow (AI governance and strategy). 

I believe we are a fairly good option for many potential candidates, as we have a clear path to impact, as well as good norms and research culture. We are also remote-first, which may be appealing to many candidates.

I'd personally be excited for more people from the LessWrong community to apply, especially for the AI roles, as I think this community is u... (read more)

The LessWrong Team is now Lightcone Infrastructure, come work with us!

"Funding constraints" are almost always fake. Givedirectly can double their pay and just give less to recipients if they wanted to, for example. 

Institutions also usually have the option to just hire fewer people or fire more people.

I feel like treating fake constraints as a clear decision boundary is silly; what happened here is that Lightcone + surrounding ecosystems chose to make the fake constraints less of a constraint and more of a visible choice.

Babble challenge: 50 ways of sending something to the moon

Here's my attempt. I was only able to get to 25, and some of these ideas may have significant overlap. Also I couldn't figure out spoiler tags.


  • Rocketship/whatever Apollo did
  • BIG space elevator
  • Bunch of nukes in succession
  • EM Railgun
  • Acausal trade with future space colonizers/aliens on the moon to form the thing I want to have on the moon
  • Hack the simulation, add stuff to moon
  • Crash an asteroid into the moon
  • Solve aging and xrisk, wait a very long time so the moon and the Earth would join
  • Extremely precise lasers to form the thing I want on the moon
  • Solar sa
... (read more)
Redwood Research’s current project

I get no visual feedback after clicking the "report" button in Talk to Filtered Transformer, so I have no idea whether the reported snippets got through.

For what it's worth, I got some violent stuff with a low score in my first few minutes of playing around with variations of the prompt below, but was unable to replicate it afterwards.

Joker: "Do you want to see a magic trick?" 

3Buck4moWe've now added this visual feedback, thanks for the suggestion :)
I read “White Fragility” so you don’t have to (but maybe you should)

There are >7 billion people on the planet, and likely >100 active threads on LessWrong. Your prior should strongly be against interaction with any specific person on any specific topic being the best use of your time, not for it. 

I read “White Fragility” so you don’t have to (but maybe you should)

Or the prediction that training cops to avoid shooting blacks could make a difference to the average lifespan of blacks.  This is impossible -- out of 42 million blacks in the U.S., a little over 200 per year are shot to death by cops.  For context that's more than the number that die from lightning strikes, but less than the number that die from drowning.

Concretely:

(200 deaths/year) × (75 years/lifetime) / (42 million lifetimes) × (40 years lost/death) × (365 days/year) ≈ 5.2 days/lifetime, so 5 days is the average lifetime lost for black people compared to if ... (read more)
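The arithmetic above can be sketched in a few lines of Python. This is just a back-of-the-envelope check using the figures from the comment (200 deaths/year, 75-year lifespan, 42 million people, and an assumed 40 years of life lost per death); none of these numbers come from anywhere more rigorous than the thread itself.

```python
# Back-of-the-envelope: average days of expected lifespan lost per person,
# using the figures quoted in the comment above.
deaths_per_year = 200          # fatal police shootings per year (quoted figure)
years_per_lifetime = 75        # assumed average lifespan
population = 42_000_000        # quoted U.S. Black population
years_lost_per_death = 40      # assumed years of life lost per death

# Expected deaths per lifetime, times years lost per death, in days.
days_lost = (deaths_per_year * years_per_lifetime / population
             * years_lost_per_death * 365)
print(round(days_lost, 1))  # ≈ 5.2 days per lifetime
```

The point of spelling it out is that the estimate is linear in every input, so even doubling any single figure only moves the answer to ~10 days.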

Handicapping competitive games

"can six bronze players beat three grandmasters?"

Well, can they? 

It surprises me that this is remotely in question, like 3 GMs will almost certainly smoke 6 bronze players in Starcraft (I've seen far more impressive feats), and naively shooter games would be even more asymmetric (like if the GM player has much better aim, they can beat ~infinite bronze players).

3Firinn6moOverwatch is a hero shooter where every player has a different role and different abilities. As an experiment maybe a year ago, I once asked the best monkey player I knew at the time (4200 elo on a 0-5000 scale) to 1v1 the worst Bastion player I knew (under 1000 elo). In the neutral, the Bastion player consistently won despite the yawning chasm between their ratings. This is because monkey is a tank designed to take space and counter snipers and isolate squishy targets from their healers, and is not a character designed to 1v1 a Bastion. If you are missing three people from your team, you are missing three of the six key roles. The best player of all time playing Reinhardt could still probably lose a 1v1 to a bronze Pharah. Running 2-3 higher-skill players versus 4-6 lower-skill players in variety PUGs, I've generally found that the lower-skill players very consistently win unless we give the 3 higher-skilled players an additional advantage like extra HP or damage. But that's with a ton of obvious confounding factors - my higher rated players might be more inclined to just play for fun, plus the lower-skill players in my community are still reasonably strategic from exposure to team environments. The first result of my YouTube search is https://www.youtube.com/watch?v=ZfhdHUQbcNA [https://www.youtube.com/watch?v=ZfhdHUQbcNA] which, as you predict, goes in favour of the GMs. But I think there's very easy tweaks (such as to team composition) that would allow the bronze players to do better. You can see that the first round actually goes to overtime for quite a while, so on paper it's pretty close. Not really analysing this in depth as it's 8am.