LessWrong

Reasons against donating to Lightcone Infrastructure
by Mikhail Samin · 2nd Nov 2025 · 7 min read
Community · Personal Blog
37 comments, sorted by top scoring
[-] Liron · 21h

Seems like the rapid-fire nature of an InkHaven writing sprint is a poor fit for a public post under a personally-charged summary bullet like “Oliver puts personal conflict ahead of shared goals”.

High-quality discourse means making an effort to give people the benefit of the doubt when making claims about their character. It’s worth taking time to carefully follow our rationalist norms of epistemic rigor, productive discourse, and personal charity.

I’d expect a high-evidence post about a very non-consensus topic like this to start out in a more norm-calibrated and self-aware epistemic tone, e.g. “I have concerns about Oliver’s decisionmaking as leader of Lightcone based on a pattern of incidents I’ve witnessed in his personal conflicts (detailed below)”.

[-] Mikhail Samin · 21h

I don't particularly want to damage the interests of Lightcone Infrastructure; I want people who'd find this information important for their decision-making to be aware of it, and most of the value is in putting the information out there. People can make their own inferences about whether they agree with my and my friends' conclusions, and I don't think that spending a lot of resources on a better-argued post presenting the same information more strongly is a very important thing.

I'm not particularly satisfied with the quality of this post, but that reflects my aesthetic preferences much more than it is a judgement on the importance of putting this post out there.

(I would also feel somewhat bad about writing this post well after deriving better writing skills from Inkhaven, which means I wanted to publish it early on.)

[-] habryka · 1d

A lot of the claims about me, and about Lightcone, in this post are false, which is sad. I left a large set of comments on a draft of this post, pointing out many of them, though not all of them got integrated before the post was published (presumably because it was published in a rush: Mikhail is part of Inkhaven, decided to make this his first Inkhaven post, and only had about 2 hours to get and integrate comments).

A few quick ones, though this post has enough errors that I mostly just want people to really not update on this at all:  

Oliver said that Lightcone would be fine with providing Lighthaven as a conference venue to AI labs for AI capabilities recruiting, perhaps for a higher price as a tax.

This is technically true, but of course the whole question lies in the tax! I think the tax might be quite large, possibly enough to cover a large fraction of our total operational costs for many months (like a 3-4x markup on our usual cost of hosting such an event, or maybe even more). If you are deontologically opposed to Lighthaven ever hosting anything that has anything even vaguely to do with capability companies, no matter the price, then yeah, I think that's a real criticism, but I also think it's a very weird one. Even given that, at a high enough price, the cost to the labs would be virtually guaranteed to be more than they would benefit from it, making it a good idea even if you are deontologically opposed to supporting AI companies.

he said he already told some people and since he didn’t agree to the conditions before hearing the information, he can share it, even though wouldn’t go public with it.

The promise that Mikhail asked me to make was, as far as I understood it, to "not use any of the information in the conversation in any kind of adversarial way towards the people who the information is about". This is a very strong request, much stronger than confidentiality (since it precludes making any plans on the basis of that information that might involve competing or otherwise acting against the interests of the other party, even if they don't reveal any information to third parties). This is not a normal kind of request! It's definitely not a normal confidentiality request! Mikhail literally clarified that he thought that it would only be OK for me to consider this information in my plans, if that consideration would not hurt the interests of the party we were talking about.

And he sent the message in a way that somehow implied that I was already supposed to have signed up for that policy, as if it's the most normal thing in the world, and with no sense that this is a costly request to make (or that it was even worth making a request at all, and that it would be fine to prosecute someone for violating this even if it had never been clarified at all as an expectation from the other side).

He just learned that keeping secrets is bad in general, and so he doesn’t by default, unless explicitly agrees to.

This is not true! My policy is simply that you should not assume that I will promise to keep your secrets after you tell me, if you didn't check with me first. If you tell me something without asking me for confidentiality first, and then you clarify that the information is sensitive, I will almost always honor that! But if you show up and suddenly demand that I promise to keep something a secret, without any kind of apology or understanding that this is the kind of thing you ask in advance, of course I am not going to just do whatever you want. I will use my best judgement!

My general policy here is that I will promise to keep things secret retroactively, if I would have agreed to accept the information with a confidentiality request in advance. If I would have rejected your confidentiality request in advance, you can offer me something for the cost incurred by keeping the secret. If you don't offer me anything, I will use my best judgement and not make any intense promises, but broadly try to take your preferences into account insofar as it's not very costly, or offer you some weaker promise (like "I will talk about this with my team or my partner, but won't post it on the internet", which is often much cheaper than keeping a secret perfectly).

Roughly the aim here is to act in a timeless fashion and to not be easily exploitable. If I wouldn't have agreed to something before, I won't agree to it just because you ask me later, without offering me anything to make up the cost to me!

And to repeat the above again, the request here was much more intense! The request, as I understood it, was basically "don't use this information in any kind of way that would hurt the party the information is about, if the harm is predictable", which I don't even know how to realistically implement at a policy level. Of course if I end up in conflict with someone I will use my model of the world which is informed by all the information I have about someone! 

And even beyond that, I don't think I did anything with the relevant information that Mikhail would be unhappy about! I have indeed been treating the information as sensitive. This policy might change if at some point the information looks more valuable to communicate. Mikhail seems angry only about me not fully promising to do what he wants, without him offering me anything in return, and despite me thinking that I would not have agreed to any kind of promise like this in the first place if I had been asked before receiving the information (and would have just preferred to never receive the information in the first place).

I ask Oliver to promise that he’s not going to read established users’ messages without it being known to others at Lightcone Infrastructure and without a justification such as suspected spam, and isn’t going to share the contents of the messages.

We've had internal policies here for a long time! We never look at DMs unless one of the users in the conversation reports a conversation as spam. Sometimes DM contents end up in error logs, but I can't remember a time where I actually saw any message contents instead of just metadata in the 8 years that I've been working on LW (but we don't have any special safeguards against it).

We look at drafts that were previously published. We also sometimes look at early revisions of posts that have been published for debugging purposes (not on-purpose, but it's not something we currently have explicit safeguards or rules about). We never look at unpublished drafts, unless the user looks pretty clearly spammy, and never for established users.

It shouldn’t cost hundreds of thousands of dollars to keep a website running and moderated and even to ship new features with the help from the community.

Look, we've had this conversation during our fundraiser. There is zero chance of running an operation like LW 2.0 long-term without it somehow costing at least $200k/yr. Even if someone steps up and does it for free, that is still them sacrificing at least $200k in counterfactual income, if they are skilled enough to run LessWrong in the first place. I think even with a minimal skeleton crew, you would be looking at at least $300k of costs.

The cost of running/supporting LessWrong is much lower than Lightcone Infrastructure’s spending.

This is false! Most of our spending is LessWrong spending these days (as covered in our annual fundraiser post). All of our other projects are much closer to paying for themselves. Most of the cost of running Lightcone is the cost of running LessWrong (since it's just a fully unmonetized product).


IDK, I am pretty sad about this post. I am happy to clarify my confidentiality policies and other takes on honoring retroactive deals (which I am generally very into, and have done a lot of over the years), if anyone ends up concerned as a result of it. 

I will be honest in that it does also feel to me like this whole post was written in an attempt at retaliation when I didn't agree with Mikhail's opinions on secrets and norms. Like, I don't think this post was written in an honest attempt at figuring out whether Lightcone is a good donation target.

[-] Ben Pace · 1d

He just learned that keeping secrets is bad in general, and so he doesn’t by default, unless explicitly agrees to.

This is not true! My policy is simply that you should not assume that I will promise to keep your secrets after you tell me, if you didn't check with me first.

I can confirm; Oliver keeps many secrets from me that he has agreed to keep for others, and often keeps information secret based on implicit communication (i.e. nobody explicitly said that it was secret, but his confident read of the situation is that it was communicated with that assumption). I sometimes find this frustrating because I want to know things that Oliver knows :P

[-] Ben Pace · 1d

And he sent the message in a way that somehow implied that I was already supposed to have signed up for that policy, as if it's the most normal thing in the world, and with no sense that this is a costly request to make (or that it was even worth making a request at all, and that it would be fine to prosecute someone for violating this even if it had never been clarified at all as an expectation from the other side).

Speaking generally, many parties get involved in zero-sum resource conflicts, and sometimes form political alliances to fight for their group to win zero-sum resource conflicts. For instance, if Alice and Bob are competing to get the same job, or Alice is trying to buy a car for a low price and Bob is trying to sell it to her for a high price, then if Charlie is Alice's ally, she might hope that Charlie will take actions that help her get more/all of the resources in these conflicts.

Allies of this sort also expect that they can share with each other information that would be easy to use adversarially against them, with the expectation that it will consistently be used either neutrally or in their favor.

Now, figuring out who your allies are is not a simple process. There are no forms involved, there are no written agreements; it can be fluid, and it gets picked up in political contexts from implicit signals. Sometimes you can misread it. You can think someone is an ally, tell them something sensitive, then realize you tricked yourself and just gave sensitive information away. (The opposite error also occurs, where you don't realize someone is your ally, don't share info, and don't pick up all the value on the table.)

My read here is that Mikhail told Habryka some sensitive information about some third party "Jackson", assuming that Habryka and Jackson were allied. Habryka, who was not allied with Jackson in this way, was simply given a scoop, and felt free to share/use that info in ways that would cause problems for Jackson. Mikhail said that Habryka should treat it as though they were allies, whereas Habryka felt that he didn't deserve it and that Mikhail was saying "If I thought you would only use information in Jackson's favor when telling you the info, then you are obligated to only use information in Jackson's favor when using the info." Habryka's response is "Uh, no, you just screwed up."

(Also, after finding out who "Jackson" is from private comms with Mikhail, I am pretty confused why Mikhail thought this, as I think Habryka has a pretty negative view of Jackson. Seems to me simply like a screw-up on Mikhail's part.)

[-] Joern Stoehler · 21h

I don't know how costly/beneficial this screw-up concretely was to humanity's survival, but I guess the total cost would've been lower if Habryka, as a general policy, were more flexible about when the sensitivity of information has to be negotiated.

Like, with all this new information I now am a tiny bit more wary of talking in front of Habryka. I may blabber out something that has a high negative expected utility if Habryka shares it (after conditioning on the event that he shares it) and I don't have a way to cheaply fix that mistake (which would bound the risk).

And there isn't an equally strong opposing force afaict? I can imagine blabbering out something that I'd afterwards negotiate to keep between us, where Habryka cannot convince me to let him share it, and yet it would've been better to allow him to share it.

Tbc, my expectations for random people are way worse, but Habryka now seems below average among famous rationalists? Right now I see and feel, on average, zero pull to adjust my picture of the average famous rationalist up or down, but this estimate seems high-variance, since I never tried to learn what policies rationalists follow wrt negotiating information disclosure. I definitely didn't expect them to use policies mentioned in planecrash outside fun low-stakes toy scenarios.

[-] habryka · 20h

Like, with all this new information I now am a tiny bit more wary of talking in front of Habryka.

Feel free to update on "Oliver had one interaction ever with Mikhail in which Oliver refused to make a promise that Mikhail thought reasonable", but I really don't think you should update beyond that. Again, the summaries in this post of my position are very far away from how I would describe them.

There is a real thing here, which you should know if you don't already: I do really think confidentiality and information-flow constraints are very bad for society. They are the cause of, as far as I can tell, a majority of major failures in my ecosystem in the last few years, and mismanagement of e.g. confidentiality norms has been catastrophic in many ways, so I do have strong opinions about this topic! But this post's summary of my positions on the topic is really very far from my actual opinions.

[-] Joern Stoehler · 19h

Thx, I think I got most of this from your top-level comment & Mikhail's post already. I strongly expect that I do not know your confidentiality policy right now, but I also expect that once I do, I'd disagree that it is the best policy one can have, just based on what I heard from Mikhail and you about your one interaction.

My guess is that refusing the promise is plausibly better than giving it for free? But I guess that there'd have been another solution where 1) Mikhail learns not to screw up again, and 2) you get to have people talk more freely around you to a degree that's worth losing the ability to make use of some screw-ups, and 3) Mikhail compensates you in case 1+2 is still too far away from a fair split of the total expected gains.

I expect you'll say that 2) sounds pretty negative to you, and that you and the community should follow a policy where there's way less support for confidentiality, which can be achieved by exploiting screw-ups and by sometimes saying no if people ask for confidentiality in advance, so that people who engage in confidentiality either leave the community or learn to properly share information openly.

[-] habryka · 18h

I mostly just want people to become calibrated about the cost of sharing information with strings attached. It is quite substantial! It's OK for that coordination to happen based on people's predictions of each other, without needing to be explicitly negotiated each time.

I would like it to be normalized and OK for someone to signal pretty heavily that they consider the cost of accepting secrets, or, even more intensely, the cost of accepting information that can only be used to the benefit of another party, to be very high. People should therefore model that kind of request as likely to be rejected; and if you just spew information onto the other party and also expect them to keep it secret, or to use it only for your benefit, the other party is likely to stop engaging with you, or to tell you that they aren't planning to meet your expectations.

I think marginally the most important thing to do is to just tell people who demand constraints on information, without wanting to pay any kind of social cost for it, to pound sand. 

[-] Mikhail Samin · 18h

(A large part of the goal of this post is to communicate that Oliver considers the cost of accepting information to be very high, and to make people aware that they should be careful around Oliver and predict him better on this dimension, not repeating my mistake of expecting him not to do so much worse than a priest of Abadar would.)

[-] habryka · 17h

I think you could have totally written a post that focused on communicating that, and it could have been a great post! Like, I do think the cost of keeping secrets is high. Both I and other people at Lightcone have written quite a bit about that. See for example "Can you keep this confidential? How do you know?"

[-] Mikhail Samin · 12h

This post focuses on communicating that! (+ being okay with hosting ai capabilities events + less important misc stuff)

[+] [comment deleted] · 20h
[-] David Joshua Sartor · 19h

I agree that promise is overly restrictive.
'Don't make my helping you have been a bad idea for me' is a more reasonable version, but I assume you're already doing that in your expectation, and it makes sense for different people to take the other's expectation into account to different degrees for this purpose.

[-] Mikhail Samin · 18h

Yep, that request would be identical, and is what I meant.

[-] habryka · 18h

Don't make my helping you have been a bad idea for me

Yeah, I think this is a good baseline to aspire to, but of course the "my helping you" is the contentious point here. If you hurt me, and then also demand that I make you whole, then that's not a particularly reasonable request. Why should I make you whole, I am already not whole myself! 

Sometimes interactions are just negative-sum. That's the whole reason why it usually makes sense to check in before doing things that could easily turn out to be negative-sum, which this situation clearly turned out to be!

[+] Mikhail Samin · 1d
[-] Isaac King · 21h

After a while in a conversation that involved me repeatedly referring to Lawfulness of the kind exhibited by Keltham from Yudkowsky’s planecrash, he said that he didn’t actually read planecrash.

Is this supposed to be a negative thing? I don't think there is any obligation that people read any particular work of fiction in order to run an infrastructure project...

[-] Mikhail Samin · 21h

i feel like if you're running a lightcone infrastructure project, you're lowkey supposed to have read the existing literature on decision theory (sorry)

[-] Vaniver · 1d

I don't think the decision theory described here is correct. (I've read Planecrash.)

Specifically, there's an idea in glowfic that it should be possible for lawful deities to follow a policy wherein counterparties can give them arbitrary information, on the condition that information is not used to harm the information-provider. This could be as drastic as "I am enacting my plan to assassinate you now, and would like you to propose edits that we both would want to make to the plan"!

I think this requires agreement ahead of time, and is not the default mode of conversation. ("Can I tell you something, and you won't get mad?" is a request, not a magic spell to prevent people from getting mad at you.) I think it also is arguably something that people should rarely agree to. Many people don't agree to the weaker condition of secrecy, because the information they're about to receive is probably less valuable than the costs of partitioning their mind or keeping information secret. In situations where you can't use the information against your enemies (like two glowfic gods interacting), the value of the information is going to be even lower, and situations where it makes sense to do such an exchange even rarer. (Well, except for the part where glowfic gods can very cheaply partition their minds and so keeping secrets or doing pseudohypothetical reasoning is in fact much cheaper for them than it is for humans.)

That is, I think this is mostly a plot device that allows for neat narratives, not a norm that you should expect people to be expecting to follow or get called out.

[This is not a complete treatment of the issue; I think most treatments of it only handle one pathway, the "this lets you get information you can use for harm reduction" pathway, and in fact in order to determine whether or not an agent should do it, you must consider all relevant pathways. But I think the presumption should not be "the math pencils out here", and I definitely don't think the math pencils out in interacting with Oli. I think characterizing that as "Oli is a bad counterparty" instead of something like "Oli doesn't follow glowfic!lawful deity norms" or "I regret having Oli as a counterparty" is impolite.]

[-] Mikhail Samin · 21h

I think characterizing that as "Oli is a bad counterparty" instead of something like "Oli doesn't follow glowfic!lawful deity norms" or "I regret having Oli as a counterparty" is impolite

I see your point; I initially agreed that "Oliver is a bad counterparty" is indeed not polite and intended to change it, but then saw that I had actually written "Oliver is not a good counterparty".

That was produced by "Oliver is the kind of counterparty you might regret having dealt with, as I have".

It's less of an impolite judgement than "Oliver is a bad counterparty", but if you think it reads the same, I'll try to change that to be more polite while still expressing that I think it often makes sense for people to be careful around him.

[+] Mikhail Samin · 1d
[-] Liron · 22h

Maybe Lightcone Infrastructure can just allow earmarking donations for LessWrong, if enough people care about that criticism.

[-] Simon Lermen · 20h

Hearing a secret can create moral, legal, or strategic costs. Once you know it, you may be forced to act or to conceal, both of which can carry risk. You could tell me something that makes it awkward for me to interact with some people, or that forces me to lie. I don't necessarily want such secrets. So why should people accept retroactive secrecy? I don't know the truth here, but on a charitable reading he had already told someone else the information before you asked for secrecy, or before he read that part.

As someone who donated to Lightcone in the past, I think LessWrong and Lighthaven are great places which provide enormous value. It seems worth a few million: they have permanent engineers on staff, and you can get feedback on your posts from real people for free.

After I posted Current Safety Training Techniques Do Not Fully Transfer to Frontier Models, I happened to see a Meta AI researcher using a screenshot from it in a conference presentation. I had no contact with them, so that reach was entirely organic. It showed how LessWrong helps safety research circulate beyond its own circle. I also found Lighthaven unusually productive during my MATS work this summer, with good focus. Like you, I am doing Inkhaven right now and will see how useful I find it in the end. The physical environment genuinely seems optimized for deep work, and I also think being here makes me feel mentally good compared to other co-working spaces.

There is a very small number of organizations that actually support valid reasoning and are trying to help save the world. Only a very tiny number of people actually support sane AI safety in the sense of stopping the current race and not building superintelligence with anything close to current techniques. I think this place existing should probably be worth a lot more to humanity.

When I saw that sama had visited and given a talk at Lighthaven, I felt it was a good thing. Religiously cutting all connection to OpenAI does not seem helpful; for what it is worth, sama might be an AI CEO that the safety community can hope to influence a little bit despite all his flaws. Maintaining some ties here could be useful, though I don't particularly expect anything to come out of this.

About DMs, I didn't have the impression that the messages here would be encrypted or specifically protected from admins. I think it would be weird to share some secret in the chat function of LessWrong; it seems like a minimalist feature for sharing feedback on posts or perhaps exchanging other information. I think it's probably good it exists. I certainly don't see any reason to think they are acting in bad faith here.

I never really interacted much with habryka myself, but from what I know of the other Lightcone staff, they seem like great people.

Still would like to talk about your views on AI at some point during inkhaven.

[-] Isaac King · 21h

the website of the venue literally says:

Whatever is supposed to show up here, isn't.

[-] Mikhail Samin · 21h

Huh, it's displayed this way to me:

[-] james oofou · 19h

The text "the website of the venue literally says" appears twice in your post. The first time it appears seems to be a mistake and isn't followed by a quotation. 

Should you donate to Lightcone Infrastructure? If your goal is to spend money on improving the world the most: no. In short:

  • The resources are used, and can be used, in ways the community wouldn’t endorse; I know people who regret their donations now that they know about these policies.
  • The org is run by Oliver Habryka, who puts personal conflicts above shared goals, and is fine with being the kind of agent others regret having dealt with.
  • (In my opinion, the LessWrong community has somewhat better norms, design taste, and standards than Lightcone Infrastructure.)
  • The cost of running/supporting LessWrong is much lower than Lightcone Infrastructure’s spending.

Lightcone Infrastructure is fine with giving platform and providing value to those working on destroying the world

Lighthaven, a conference venue and hotel run by Lightcone Infrastructure, hosted an event with Sam Altman as a speaker.

When asked about it, Oliver said that Lightcone would be fine with providing Lighthaven as a conference venue to AI labs for AI capabilities recruiting, perhaps for a higher price as a tax.

While it’s fine for some to consider themselves businesses that don’t discriminate and platform everyone, Lighthaven is a venue funded by many who explicitly don’t want AI labs to be able to gain value from using the venue.

Some of my friends were sad and expressed regret donating to Lightcone Infrastructure upon hearing about this policy.

They donated to keep the venue existing, thinking of it as a place that helps keep humanity existing, perhaps also occasionally rented out so it can keep helping good things. And it’s hard to blame them for expecting the venue not to host events that damage humanity’s long-term trajectory: the website of the venue literally says:

Lighthaven is a space dedicated to hosting events and programs that help people think better and to improve humanity's long-term trajectory

They wouldn’t have made the donations if they had understood it as more of an impartial business that provides value to everyone who pays for this great conference venue, including those antithetical to the expressed goals of Lightcone Infrastructure, and only occasionally uses it for actually important things.

I previously donated to Lightcone because I personally benefited from the venue and wanted to say thanks (in the fuzzies category of spending, not the utilons category); but now I somewhat regret even that, as I wouldn’t donate to, say, an abstract awesome Marriott venue that hosted an EAG but would also be fine with hosting AI capabilities events.

Oliver Habryka is not a good counterparty

Lightcone Infrastructure is an organization run by Oliver Habryka.

My impression is that his previous experiences at CEA and Leverage taught him that secrets are bad: if you’re not propagating information about all the bad things someone’s doing, they gain power and keep doing bad things, and that’s terrible.

I directionally agree: it’s very important to have norms about whistleblowing, about making sure people are aware of people who are doing low-integrity stuff, about criticisms propagating rather than being silenced.

At the same time, it’s important to separate (1) secrets that are related to opsec and being able to do things that can be damaged by being known in advance and (2) secrets related to some people doing bad stuff which they don’t want to be known.

One can have good opsec and at the same time have norms on whistleblowing: norms about telling others, or going public, with what’s important to propagate, while not sharing what people tell you because you might need to know it, with the expectation that you don’t share it unless it relates to some bad thing someone’s doing that should be propagated.

I once came to Lighthaven to talk to Oliver about a project that he, a third party, and I were all helping with or related to; we chatted about this project and related things, and I shared information about some related plans of that third party, to enable Oliver to coordinate with them. Oliver and that third party have identical declared goals related to making sure AI doesn’t kill everyone; but Oliver dislikes the third party, wants it not to gain power, and wouldn’t coordinate with it by default.

So: I shared information with Oliver about some plans, hoping it would enable coordination. This information was in no way the kind that can be whistleblown about; it was just the kind that can be used to damage plans relating to a common goal.

Immediately after the conversation, I messaged Oliver asking him to, just in case, use this information only to coordinate with the third party, since I had shared it to enable coordination between two entities I perceived as having basically common goals, except that one doesn’t want the other to have much power, all else being equal (for reasons I thought were misguided and hoped could partly be resolved if they talked; they did talk, but Oliver didn’t seem to change his mind):

(Also, apparently, just in case, please don’t act on me having told you that [third party] are planning to do [thing] outside of this enabling you to chat to/coordinate with them)

Oliver did not reply for a week. After a week, his reply started with “lol, no”; he said he had already told some people, and that since he didn’t agree to the conditions before hearing the information, he can share it, even though he wouldn’t go public with it.

After a while in a conversation that involved me repeatedly referring to Lawfulness of the kind exhibited by Keltham from Yudkowsky’s planecrash, he said that he hadn’t actually read planecrash. (A Keltham, met with a request like that relating to a third party with very opposite goals, would sigh, say the request should’ve been made in advance, and then not screw someone over, if they’re not trying to screw you over and it’s not an incredibly important thing.)

My impression of him is that he has no concept of being the kind of entity that others, even enemies with almost opposite goals, are not worse off for having dealt and coordinated with; and he has no qualms about doing this to those who perceive him as generally friendly. I think he just learned that keeping secrets is bad in general, and so he doesn’t keep them by default, unless he explicitly agrees to.

My impression is that he regrets being told the information, given the consequences of sharing it with others. But a smart enough agent (a smart enough human) should be able to simply not use the information in ways that make you regret having heard it. Like, you’re not actually required to share information with others, especially if the information is not about someone doing bad stuff and is instead about someone you dislike doing good stuff that could be somewhat messed up by being known in advance.

I am very sympathetic to the idea that people should be able to whistleblow and not be punished for it (I think gag orders should not exist, I support almost everything that FIRE is doing, etc.); yet Oliver’s behavior is not the behavior of someone who’s grown-up and can be coordinated with on the actually important stuff.

Successful movements can do opsec when coordinating with each other, even if they don’t like each other; they can avoid defecting when coordination is useful, even if it’s valuable to screw the other part of the movement over.

There are people at Lightcone I like immensely; I agree with a lot of what Oliver says, and I am plausibly the person who has upvoted the most of his EA Forum comments; and yet I would be very wary of coordinating with him in the future.

It’s very sad to not feel safe to share/say things that are useful to people working on preventing extinction, when being around Oliver.

(A friend is now very worried that his LessWrong DMs can be read and used by Oliver, who has admin access.

I ask Oliver to promise that he’s not going to read established users’ messages without it being known to others at Lightcone Infrastructure and without a justification such as suspected spam, and isn’t going to share the contents of the messages. I further ask Lightcone to establish policies about situations in which DMs and post drafts can be looked at, and promise to follow these policies.)

(Lightcone Infrastructure’s design taste is slightly overrated and the spending seems high)

This is not a strongly held view; I greatly enjoy the LessWrong interface, reactions, etc., but I think the design taste of the community as a whole is better than Lightcone Infrastructure’s.

One example: for a while, on Chrome on iOS, it was impossible to long-press a link on the frontpage to open it in a new tab, because posts opened at the beginning of the tap to reduce delays.

While handling events at the beginning of taps and clicks, and loading the data to display on hover, is awesome in general, it did not work in this particular case, because people (including me) really want to be able to open many posts from the frontpage in multiple tabs.
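To illustrate the tradeoff, here is a minimal TypeScript sketch of the two approaches; the function names and structure are my own for illustration, not Lightcone's actual code. Navigating on touchstart fires before the browser can tell a quick tap from a long press, so the "Open in New Tab" menu never gets a chance to appear; navigating on a completed click preserves it.

```typescript
// Hypothetical sketch, not LessWrong's actual implementation.

// Eager navigation: fires the moment the finger touches the screen,
// before the browser can distinguish a quick tap from a long press,
// so the long-press context menu ("Open in New Tab") never appears.
function attachEagerNavigation(link: HTMLAnchorElement): void {
  link.addEventListener("touchstart", () => {
    window.location.href = link.href;
  });
}

// Click-based navigation: waits for a completed click, so a long
// press still falls through to the browser's context menu.
function attachClickNavigation(link: HTMLAnchorElement): void {
  link.addEventListener("click", (event) => {
    event.preventDefault();
    window.location.href = link.href;
  });
}
```

One common middle ground (again, just a sketch of the idea, not what Lightcone actually did) is to prefetch the post's data on touchstart but navigate only on click, which keeps most of the perceived speed without breaking long-press.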

It took raising this in the Slack, and other people agreeing, to get this design decision changed.

Lightcone Infrastructure seems to be spending much more than would’ve been sufficient to keep the website running. My sense, though I could be wrong here, is that it shouldn’t cost many hundreds of thousands of dollars to keep a website running and moderated, and even to ship new features with help from the community.

Have I personally derived much value from Lightcone Infrastructure? 

The website’s and community’s existence is awesome, and has depended first on Yudkowsky and later on LW 2.0. I have derived a huge amount of value from it: I found friends, had engaging and important conversations, and had an incredible amount of fun. Even though I now wouldn’t feel particularly good about donating to Lightcone to support Lighthaven, I wouldn’t feel particularly bad about donating to the part of their work that supports the website, as a thanks, from the fuzzies budget.

But is it an effective donation?

I really doubt that and would not donate to Lightcone Infrastructure from the budget of donations to improve the world.