habryka

Running Lightcone Infrastructure, which runs LessWrong. You can reach me at habryka@lesswrong.com

At least for the coming year, our expenses are pretty entangled between all the different projects in a way that makes differentially funding things hard. I do take preferences of our donors on how to focus our efforts into account, so donating and just telling us that you would prefer us to work more on one kind of thing vs. another will have some effect. 

My guess is you will mostly just have to average our impact across different areas and decide whether the whole portfolio is above your bar.

When we were looking for office space in Berkeley earlier this year we were seeing list prices between $3.25-$3.75/month per square foot, or $780k-$900k/year for 20,000 square feet. I'd expect that with negotiation you could get somewhat better pricing than this implies, especially if committing to a longer time period.

Yep, if you commit for longer time periods you can definitely get better deals, and there are definitely other ways to save on office space costs. I didn't mean to imply this was the minimum you could rent office space for.

The $1.2M/yr estimate was roughly what we were paying at the time, and as such was the central comparison point we had. Comparing it to something more like $800k-$900k a year also seems reasonable to me, though I have less experience with the exact tradeoffs faced by doing that. One reason for that comparison is that the price estimate I did in the comment above included utilities and servicing the space, and I don't have a ton of experience with how much cost that adds to an unserviced office lease, though I still expect it to be a bunch lower than the WeWork prices.

With that in mind, I was surprised by the lack of information in this funding request. I feel mixed about this: high-status AIS orgs often (accurately) recognize that they don't really need to spend time justifying their funding requests, but I think this often harms community epistemics (e.g., by leading to situations where everyone is like "oh X org is great-- I totally support them" without actually knowing much about what work they're planning to do, what models they have, etc.)

Sorry about that! I've drafted like 3-4 different fundraising posts over the last few months, most of which were much longer and had more detail, but I repeatedly ran into the problem that when I showed them to people, they ended up misunderstanding how Lightcone relates to our future work: they thought about our efforts in a very narrow way by overfitting to all the details I put into the post, while missing something important about the way we expect to approach things over the coming years. 

I ended up deciding to instead publish a short post, expecting that people would write a lot of questions in the comments, and then to engage straightforwardly and transparently there, which felt more likely to end up with shared understanding. I'm not sure whether this was the right call, and definitely a good chunk of my decision here was also driven by finding fundraising to be the single most stressful aspect of running Lightcone; I just find navigating that stress easier if I respond to things in a more reactive way.

What are Lightcone's plans for the next 3-6 months? (is it just going to involve continuing the projects that were listed?)

We are going to be wrapping up renovations and construction for the next month, and will then host a bunch of programs over the summer (like SERI MATS and a bunch of workshops and conferences). During that time I hope to reconnect a bit more with the surrounding AI-Alignment/X-Risk/Rationality/EA/Longtermist-adjacent diaspora, which I intentionally took a pretty big step away from after the collapse of FTX. 

I will also be putting a bunch of effort into Lightspeed Grants. We will see how much traction we get here, but I definitely think there is a chance it blows up into the primary project we'll be working on for a while, since I think there is a lot of value in diversifying and improving the funding ecosystem, which currently seems to drive a lot of crazy status dynamics and crazy epistemics among people working on AI risk stuff.

After that, I expect to focus a lot more on online things. I probably want to do a major revamp of the AI Alignment Forum, as well as focus a lot more of my attention on LessWrong again. I am particularly excited about finally properly launching the dialogues feature and driving adoption of it, probably in part by me and other Lightcone team members participating in a lot of dialogues while we continue developing the technology on the backend.

How is Lightcone orienting to the recent rise in interest in AI policy? Which policy/governance plans (if any) does Lightcone support?

I've been thinking a lot about this, and I don't yet have a clear answer. My tentative guess is something like "with a lot of the best people that I know going into AI policy stuff and the hype/excitement around that increasing, the comparative advantage of the Lightcone team is actually even more strongly pointing in the direction of focusing on research forums that ground the epistemic health of the people jumping headfirst into policy stuff". This means I currently expect not to get super deeply involved, but to interface a lot with people who are jumping into the policy fray, moving to DC, etc., and to figure out what infrastructure can allow those people to stay sane and grounded, since I do really expect that as we get more involved in advocacy, politics and policy-making, thinking clearly will become a lot harder. 

But again, I don't know yet, and at the meta-level I might just organize a bunch of events to help people orient to the shifting policy landscape while I am orienting myself to it as well.

What is Lightcone's general worldview/vibe these days? (Is it pretty much covered in this post?) Where does Lightcone disagree with other folks who work on reducing existential risk?

This sure seems hard to answer concisely. Hopefully you can figure out our vibe from my comments and posts. I still endorse a lot of the post you linked, though I've also changed my mind on a bunch of stuff. I might write more here later; I think this is a valid question, but this comment is already getting very long and I don't have an immediate good cached answer. 

What are Lightcone's biggest "wins" and "losses" over the past ~3-6 months?

In my books, by far the biggest win is that we relatively successfully handled a really major shift in our relationship to the rest of our surrounding community, as an organization whose lifeblood is building infrastructure. My sense is that in any previous organization I've worked in, I would have rather left or shut the organization down when FTX collapsed, because I wouldn't have been able to think clearly, see things with fresh eyes, and reorient with my organization to the changing landscape. I think Lightcone successfully reoriented together, and I think this is really hard (and probably the result of us staying consistently very small in our permanent staff headcount, which is currently just 8 people).

We also built an IMO really amazing campus that I expect to utilize and get a lot of value out of for the next few years. I am also proud of having written a lot of things publicly on the EA Forum during the FTX collapse and afterwards. I think it helped the rest of the ecosystem orient better, and a lot of what I said were things that nobody else was saying and seemed quite important.

Will much of that $3-6M go into renovating and managing the Rose Garden Inn, or to cover work that could have been covered by existing funding if the Inn wasn't purchased?

Thinking about the exact financing of the Inn is a bit messy, especially if we compare it to doing something like running the Lightcone Offices, because of stuff like property appreciation, rental income from people hosting events here, and the hard-to-quantify costs of tying up capital in real estate as opposed to more liquid assets like stocks.

If you assume something like 5% property appreciation per year going forward, and you amortize the part of our construction costs that didn't increase the property value over the next 7 years (which seems like a reasonable estimate for how long the venue is going to get used), I get an annual cost of running the Rose Garden of about $1.15M (property value of ~$19MM, ~$3.5M in uncapitalized construction costs amortized over 7 years, plus $700k in annual upkeep and maintenance costs). 
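The arithmetic behind the ~$1.15M figure isn't fully spelled out above; here is a rough sketch using the stated inputs. How exactly appreciation nets against financing costs is my own assumption, since the stated components alone sum to $1.2M:

```python
# Rough reconstruction of the ~$1.15M/yr estimate above. The inputs are
# the ones stated in the comment; the netting of appreciation against
# financing costs is an assumption, not something the comment spells out.

construction_costs = 3_500_000   # uncapitalized construction costs (stated)
amortization_years = 7           # expected years of use (stated)
upkeep = 700_000                 # annual upkeep and maintenance (stated)

amortized_construction = construction_costs / amortization_years
gross_annual_cost = amortized_construction + upkeep

print(f"amortized construction: ${amortized_construction:,.0f}/yr")  # $500,000/yr
print(f"construction + upkeep:  ${gross_annual_cost:,.0f}/yr")       # $1,200,000/yr
# The stated ~$1.15M presumably reflects a further ~$50k/yr of expected
# appreciation on the ~$19M property, net of financing costs.
```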

This is pretty cost-effective compared to other options of even just providing office space, and I think is my preferred way of thinking about the cost of doing this construction project (most of which came from a loan that we took out specifically to finance the purchase and renovation of the place, which is secured against the Rose Garden property itself, so it didn't straightforwardly funge against our other funding). 

To compare this to other costs, renting two floors of the WeWork, which we did for most of the summer last year, cost around $1.2M/yr for 14,000 sq. ft. of office space. The Rose Garden has 20,000 sq. ft. of floor space and 20,000 additional sq. ft. of usable outdoor space for less implied annual cost than that.
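To make the comparison concrete, here are the implied monthly rates per indoor square foot, using the figures above (a rough sketch: outdoor space and servicing differences are ignored, which makes the comparison conservative in the Rose Garden's favor):

```python
# Implied $/sqft/month from the numbers in the comment above. Outdoor
# space is excluded from the Rose Garden side, so the real gap is
# larger than this sketch shows.

wework_annual, wework_sqft = 1_200_000, 14_000   # two WeWork floors
rose_annual, rose_sqft = 1_150_000, 20_000       # estimated annual cost, indoor floor space

wework_monthly_psf = wework_annual / 12 / wework_sqft
rose_monthly_psf = rose_annual / 12 / rose_sqft

print(f"WeWork:      ${wework_monthly_psf:.2f}/sqft/month")   # ~$7.14
print(f"Rose Garden: ${rose_monthly_psf:.2f}/sqft/month")     # ~$4.79
```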

In order to fund this initial capital investment, we did definitely cut into the funding we received in 2022, and we would have a bunch more money in our bank accounts right now if we hadn't done this construction project. On the other hand, our spending over the next few years would also be a bunch larger, since we would have to pay more in rent if we wanted to do anything in-person community-shaped, for both events and office space, than we will have to pay in interest and upkeep (minus appreciation) for the Rose Garden Inn. 

This doesn't take into account the cost of having Jaan Tallinn's capital tied up in real estate with 5% annual interest from us. In one sense it seems good for Jaan to diversify his funds away from crypto. On the other hand he has probably been making closer to 10%-20% annual returns over the last 10-15 years with his investments. My current guess is the diversification benefits here outweigh the higher potential returns (because man, x-risk efforts sure still are quite tech-stock and crypto loaded and I am very glad about more diversification), but I am really not confident, and this could easily dominate the relevant cost-calculations. 

It's also a bit unclear how to think about rental income from people working on existential risk stuff that we wouldn't otherwise support, which I expect to also make us back some of these funds.

I could write more about how to think about financing the Rose Garden, but I'll leave it at this. I think it mostly makes sense for people to assume that roughly $1.5M of our annual budget going forward will be used for the Rose Garden Inn, with ~$350k of that going into very illiquid long-term savings for the organization.

=====

By far the biggest source of uncertainty about our budget is the potential for FTX clawbacks, which explains most of the wide range in our budget estimates. I currently think we owe back about $1.5M to FTX creditors, though the exact game theory here is a bit messy (and they might request up to $4M back from us). We don't know if and when they will ask for that money.

If so, I'm curious to hear more about the strategy behind buying and renovating the space, since it seems like a substantial capital investment, and a divergence from Lightcone Infrastructure's previous work and areas of expertise.

The purchase of the Rose Garden Inn was made, and the whole renovation project really set in motion, before November last year, and as such before the collapse of FTX. As is probably apparent from my comments and posts over the last few months (as well as our reasoning for closing down the Lightcone Offices), a lot of our strategy and perspective has shifted since then. 

That said, I am actually still quite excited about our plans for the Rose Garden, though it's an important disclaimer that a lot of those plans changed substantially when FTX collapsed and have become a bunch less straightforward (the original reasoning was something much closer to "this makes sense from a finance perspective even if we just compare it to renting office space at our current location"). 

Some concrete things that continue to drive my excitement for our Rose Garden Inn investment: 

  • I continue to think that in-person collaboration is really essential, especially for mentoring relationships. I think there continues to be a pretty substantial bottleneck in people getting oriented to working on AI Alignment that goes a lot better with in-person peers and via in-person mentoring from people who have made real contributions to the field, and investing in infrastructure for that is one of the best ways to drive progress on reducing existential risk from AI.
  • I really want to create a more distinct and intentionally separate culture, both on LessWrong and at the Rose Garden Inn, and I think owning a physical space hugely helps with that. FTX, various experiences I've had in the EA space over the past few years, and a lot of safetywashing in AI Alignment more recently have made me much more hesitant to build a community that can easily get swept up in respectability cascades or get exploited by bad actors, and I really want to develop a more intentional culture in what we are building here. Hopefully this will enable the people I am supporting to work on things like AI Alignment without making the world overall worse, displaying low-integrity behavior, or getting taken advantage of. 
  • I am very excited about the setup where we get to have both long-term office space and short-term events in one location. I am hopeful that this will enable us to have a lot of people flowing through the space, and will allow us to interface and experiment with a lot of different approaches to AI alignment and rationality, and then slowly grow by picking a small number of the people coming through to work more long-term from our office space.

We are just wrapping up renovations, so not much yet (though we will be done very soon). This summer we are likely hosting a good chunk of the SERI MATS scholars, as well as providing space for various other retreats and events (like the Singular Learning Theory workshop; we are also talking to Manifold about maybe running a 100+ person forecasting conference here). 

In parallel, we are also providing office space to a small number of people, which I expect to slowly grow over time, trying to build a tight-knit community of people working to reduce existential risk and develop an art of rationality. Currently this includes John Wentworth, Adam Scholl, Sydney von Arx and a few other people. 

I am currently holding this part of our plan kind of lightly, and we might end up mostly just running events here, since I think a bunch of things didn't work amazingly well with the way we ran the Lightcone Offices, and I don't want the Rose Garden to develop similar dynamics, where working here could end up feeling like some kind of status marker or like a necessary component of helping with existential risk while living in the Bay Area. 

Those names do seem like at least a bit of an update for me.

I really wish that having someone EA/AI-Alignment affiliated who has expressed some concern about x-risk were a reliable signal that a project will not end up primarily accelerationist, but alas, history has really hammered it in for me that this is not reliably true. 

Some stories that seem compatible with all the observations I am seeing: 

  • The x-risk concerned people are involved as a way to get power/resources/reputation so that they can leverage it better later on
  • The x-risk concerned people are involved in order to do damage control and somehow make things less bad
  • The x-risk concerned people believe that mild-acceleration is the best option in the Overton window and that the alternative policies are even more accelerationist
  • The x-risk concerned people genuinely think that accelerationism is good, because this will shorten the period for other actors to cause harm with AI, or because they think current humanity is better placed to navigate an AI transition than future humanity

I think it's really quite bad for people to update as hard on the involvement of AI-X-Risk-adjacent people as you seem to be doing here. I think it has hurt a lot of people in the past, and it also makes it a lot harder for people to do things like damage control, because their involvement will be seen as an overall endorsement of the project.

I am confused why you are framing this in a positive way? The announcement seems to primarily be that the UK is investing $125M into scaling AI systems in order to join a global arms race to be among the first to gain access to very powerful AI systems.

The usage of "safety" in the article seems to have little to do with existential risk, and indeed seems mostly straightforward safety-washing. 

Like, I am genuinely open to this being a good development, and I think a lot of recent development around AI policy and the world's relationship to AI risk has been good, but I do really have trouble seeing how this announcement is a positive sign. 

If you get funding from other funds, it would be best if you update your application (you can edit your application any time before the evaluation period ends), or withdraw your application. We'll get notifications if you make edits and make sure to consider them. 

Yep, just paying a person a salary works, though the person needs to do enough things that somewhat legibly serve the public benefit, to justify their salary to the IRS.
