All of jacobjacob's Comments + Replies

The majority of tasks we do don't involve any memos. We write them occasionally: 1) when thinking through some big, hard-to-reverse decision, 2) when setting goals for 2-week sprints, and 3) when someone wants to call an all-hands meeting, since we usually require them to write a memo first (which is a nice rate-limit on how many all-hands meetings you get). 

I think the rate of memos written is maybe 0.5 per person per month, on average, but counting it is a bit messy.

Scott Worland (2mo):
Got it, thanks! 


Thanks for sharing these! I can also attest to witnessing you hustle for items fast :) I especially like your point 1.

Hard to give a general answer, but I think 2x someone's normal salary (especially if it's cash) is usually quite sufficient to get the job done, and it has reliably helped me in the past when I've tried to find people happy to work night shifts.

This is why LessWrong needs the full suite of emoji reacts.

I was setting up a retreat venue, and they were pretty weird and special beds -- such that if they would've actually worked, it would've pivoted our strategy for setting up the space in a somewhat major way. 

Do you have a link to the bed in question?
Tobias H (6mo):
I was thinking of money. :)
Both would be interesting. 

I think it depends on scale. If Ford produces cars in batches of 100(? 1000? more?) they probably can't rejigger the factory. In this case it was a local ironworker who probably had 10 or fewer guys working his shop, so a bit more flexibility. 

Yes, show up uninvited. That happens a lot in our slack. Our team is small enough that most people read most channels.

Can confirm Lightcone is very chaotic and sometimes works heroic hours, and it seems tangled up in our way of working for reasons that are not super clear to me. 

So when reading your comment I was asking myself why the above template couldn't be run by a project that wanted to work closer to 40 hours rather than 80 hours per week? One answer is that "Well, if people are importantly blocking elements, they must be available to reply on slack and unblock other people whenever", which is mostly true for us, except that 1) we almost never wake up people w... (read more)

I agree that most of these rules could smoothly transition to a less intense team, and nonetheless believe a less chaotic org would write a fairly different document, which is why I think it would be useful for them to do so. One thing I can speak to a tiny bit is a software company I worked at that had a lot of the chaotic/hero-ness in certain roles, but absolutely had to be cross-continental, and thus was also remote and heavily asynchronous. It built up really great practices for documentation and async communication to make that work. Alas, it's been too long since I worked there for me to remember specifics, so I can't say anything useful.

For what it's worth, I don't consider this essay to be about "ops" that much. Also, lots of people keep calling much of what Lightcone does "ops", but we often really don't think of ourselves as doing ops that much :) Some words that sound to me more like the thing I think of Lightcone as doing, most of the time: "uncertainty reduction", "product discovery", "trying to do things fast". 

I might write up rants about hiring at some point, though I don't think I'm particularly experienced or great at it :) For now I'll just say I like YCombinator's content on this. Not sure how to find all the relevant essays and videos, but this might help

Things I've read / advice I've gathered that influenced me a lot, are: 

  • Paul Graham's essays and YCombinator's startup philosophy
  • lean manufacturing philosophy
  • Elon Musk's operational principles (there's a ton of junk content about this online, but I've been able to find some good first-hand accounts from people who work at Tesla and SpaceX, as well as probably the single best piece of content, which is the 2-3h Starbase tour). Tesla also has a "25 Guns" team that runs on a philosophy that seems very similar to me
  • first-h
... (read more)

I don't have that much experience, so don't want to say too much. But I think it should apply well to things like startups searching for product-market fit (you're rate-constrained by how fast you can figure out what people want), or a factory increasing how many widgets it can output per day, but less well to e.g. teams that are trying to maintain a system, like a gardener or a janitorial service. 

I think this post makes sense from a "theory of constraints" perspective: if your mission can move no faster than some identifiable step, then the best thing you can do each week to move faster toward your mission is mostly about speeding up that step. 

The lame answer: yeah, it does mess with deep work, and I'm not super sure how to balance them. 

The spicy answer: I have an unwritten polemic entitled "Against Deep Work". I can't share it though since I have not written it. Fortunately, much of what I hope to say in that post is captured in Chapter 9 of the Lean Startup, which has a section that resonates a lot with my experience. I'll just go ahead and quote it because it's so damn good (starting on page 191 in the linked PDF).

Imagine you’re a product designer overseeing a new product and you need t

... (read more)

There's at least one hotel in Berkeley with rooms for $500/night or more, and I claim that for the better hotels it is quite rare that you can get them for <$200. As evidence, you can select some dates and look at the hotels here:

I'm not sure, but I think I might have seen a sign in a rose garden room with a $500 rack rate. Second floor, building B. I found it quite funny given how far from the current state it was; it read like a decades-old relic of what the Inn once was.


There are really many things I found outstanding about this post. The key one, however, is that after reading this, I feel less confused when thinking about transformer language models. The post had that taste of deconfusion where many of the arguments are elegant and simple, like suddenly tilting a bewildering shape into place. I particularly enjoyed the discussion of ways agency does and does not manifest within a simulator (multiple agents, irrational agents, non-agentic processes), the formulation of the prediction orthogonality thesis, ways i... (read more)

Thank you for this lovely comment. I'm pleasantly surprised that people were able to get so much out of it. As I wrote in the post, I wasn't sure if I'd ever get around to publishing the rest of the sequence, but the reception so far has caused me to bump up the priority of that.

At first glance, this reminds me of the safety questionnaires I had to fill out as part of running a study when taking experimental psychology classes in undergrad. It was a lot of annoyance and mostly a box-ticking exercise. Everyone mostly did what they wanted to do anyway, and then hurriedly gerrymandered that questionnaire right before the deadline, so the faculty would allow them to proceed. Except the very conscientious students, who saw this as an excellent opportunity to prove their box-ticking diligence. 

As a case in point, I migh... (read more)

[I work for Dan Hendrycks but he hasn't reviewed this.]

It seems to me like your comment roughly boils down to "people will exploit safety questionnaires." I agree with that. However, I think they are much more likely to exploit social influence, blog posts, and vagueness than specific questionnaires. The biggest strengths of the x-risk sheet, in my view, are:

(1) It requires a specific explanation of how the paper is relevant to x-risk, which cannot be tuned depending on the audience one is talking to.  You give the example from the forecasting paper and... (read more)

If someone asks what the rock is optimizing, I’ll say “the actions” - i.e. the rock “wants” to do whatever it is that the rock in fact does.

This argument does not seem to me like it captures the reason a rock is not an optimiser? 

I would hand wave and say something like: 

"If you place a human into a messy room, you'll sometimes find that the room is cleaner afterwards. If you place a kid in front of a bowl of sweets, you'll soon find the sweets gone. These and other examples are pretty surprising state transitions, that would be highly unlikely i... (read more)

Exactly! That's an optimization-at-a-distance style intuition. The optimizer (e.g. human) optimizes things outside of itself, at some distance from itself. A rock can arguably be interpreted as optimizing itself, but that's not an interesting kind of "optimization", and the rock doesn't optimize anything outside itself. Throw it in a room, the room stays basically the same.

An update on this: sadly, I underestimated how busy I would be after posting this bounty. I spent 2h reading this and Thomas's post the other day, but didn't manage to get into the headspace of evaluating the bounty (i.e. making my own interpretation of John's post, and then deciding whether Thomas's distillation captured that). So I will not be evaluating this. (Still happy to pay if someone else I trust claims Thomas's distillation was sufficient.) My apologies to John and Thomas about that.

Cool, I'll add $500 to the distillation bounty then, to be paid out to anyone you think did a fine job of distilling the thing :)  (Note: this should not be read as my monetary valuation for a day of John work!)

(Also, a cooler pay-out would be basis points, or less, of Wentworth impact equity)

Needing to judge submissions is the main reason I didn't offer a bounty myself. Read the distillation, and see if you yourself understand it. If "Coherence of Distributed Decisions With Different Inputs Implies Conditioning" makes sense as a description of the idea, then you've probably understood it. If you don't understand it after reading an attempted distillation, then it wasn't distilled well enough.
Thomas Kwa (1y):
I will attempt to fill this bounty. Does the fact that I'm on a grant preclude me from claiming it?

Thanks for being open to suggestions :) Here's one: you could award half the prize pool to compelling arguments against AI safety. That addresses one of John's points. 

For example, stuff like "We need to focus on problems AI is already causing right now, like algorithmic fairness" would not win a prize, but "There's some chance we'll be able to think about these issues much better in the future once we have more capable models that can aid our thinking, making effort right now less valuable" might. 

Thomas Kwa (1y):
It seems better to award some fraction of the prize pool to refutations of the posted arguments. IMO the point isn't to be "fair to both sides", it's to produce truth.

That idea seems reasonable at first glance, but upon reflection, I think it's a really bad idea. It's one thing to run a red-teaming competition, it's another to spend money building rhetorically optimised tools for the other side. If we do that, then maybe there was no point running the competition in the first place as it might all cancel out.

Sidney Hough (1y):
Thanks for the idea, Jacob. Not speaking on behalf of the group here - but my first thought is that enforcing symmetry on discussion probably isn't a condition for good epistemics, especially since the distribution of this community's opinions is skewed. I think I'd be more worried if particular arguments that were misleading went unchallenged, but we'll be vetting submissions as they come in, and I'd also encourage anyone who has concerns with a given submission to talk with the author and/or us. My second thought is that we're planning a number of practical outreach projects that will make use of the arguments generated here - we're not trying to host an intra-community debate about the legitimacy of AI risk - so we'd ideally have the prize structure reflect the outreach value for which arguments are responsible. I'm potentially up to opening the contest to arguments for or against AI risk, and allowing the distribution of responses to reflect the distribution of the opinions of the community. Will discuss with the rest of the group.

How long would it have taken you to do the distillation step yourself for this one? I'd be happy to post a bounty, but price depends a bit on that.

Short answer: about one full day. Longer answer: normally something like this would sit in my notebook for a while, only informing my own thinking. It would get written up as a post mainly if it were adjacent to something which came up in conversation (either on LW or in person). I would have the idea in my head from the conversation, already be thinking about how best to explain it, chew on it overnight, and then if I'm itching to produce something in the morning I'd bang out the post in about 3-4 hours. Alternative paths: I might need this idea as background for something else I'm writing up, or I might just be in a post-writing mood and not have anything more ready-to-go. In either of those cases, I'd be starting more from scratch, and it would take about a full day.

Bought the hat. I will wear it with pride.

Curious if you have suggestions for a replacement term for "grabby" that you'd feel better about?

'Expansionist'? I hadn't thought about it, particularly. It just struck me how silly it was making everything sound, even to me. I can only imagine how quickly it would be dismissed by my more weirdness-resistant friends, who unfortunately are representative of a lot of the world, including many of the people we need to persuade to take these things seriously.

"likely evolved via natural selection"

My default expectation would be that it's a civilization descended from an unaligned AGI, so I'm confused why you believe this is likely.

A guess: you said you're optimistic about alignment by default -- so do you expect aligned AGI acting in accordance with the interests of a natural-selection-evolved species?

I found a low six figure donor who filled 25% of their funding gap. The rest has not been filled. EDIT: They also got an ACX grant, but not enough to fill the whole funding gap I believe. I can intro donors who want to fill it to the radvac team. Email me at jacob at

Original thread and credit to ChristianKl who picked up on and alerted me to rumours of the funding gap:

Regarding the remaining funding gap you mention, did you take the recent $100,000 ACX grant into account?

Alex Hoekstra, $100,000, for the Rapid Deployment Vaccine Collaborative (RaDVaC) to make open-source modular affordable vaccines.
... RaDVaC still needs more funding ...

I'm very happy to hear that my cries resulted in at least some funding being moved to them, even if it's sad that there's still a funding gap.

In addition to what Eliezer already said about RaDVaC being good because it uses technology that can easily be scaled up, RaDVaC has the potential to reduce infection by COVID-19 when given in addition to traditional vaccines, because it potentially creates a mucosal immune response in the upper respiratory system, which the vaccines we currently have don't. 

While there's some chance that Omicron will end COVID-19, that is optimistic. We might need something like RaDVaC to finally end this pandemic. 

This will be a bit of a disappointing answer (sorry in advance), but I indeed think UI-space is pretty high-dimensional and that there are many things you can do that aren't just "remove options for all users". Sadly, the best I way I know of how to implement this is to just do it myself and show the result; and I cannot find the time for that this week. 

I also tried and failed to get my family to use it :( Among other things, I think they bounced off particularly hard on the massive drop-down of 10 different risk categories of people and various levels of being in a bubble.

I don't think the blocker here was fundamentally quantitative -- they think a bunch about personal finance and budgeting, so that metaphor made sense to them (and I actually expect this to be true for a lot of non-STEM people). Instead, I think UX improvements could go a long way.

My guess is that the focus on bubbles no longer makes sense, since almost no one is doing that now. Beyond that, I struggle to know what trade-offs could make the microcovid UI more approachable without making it not microcovid. A number of people (including me) already complain it's too restrictive, and cutting down on options makes that worse. It's really not obvious to me that the value generated by doing existing microcovid, but simpler, outweighs the loss of configurability. Also, I literally don't know how to make it simpler or more inviting beyond tossing out options. I don't mean it can't be done, I mean I'm a terrible UI designer who literally can't think of anything. So I'd be really interested in:

  • arguments that microcovid is at the wrong place on the pareto frontier
  • ways to improve usability that don't trade off against specificity for power users
  • other numerical tools that could be useful to your family that aren't microcovid

For the last one: Raemon has suggested a unitless assessment of "how risky is today compared to other days?" (and maybe location comparisons as well), created using microcovid, local prevalence numbers, and a single default human. 
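That last suggestion could be sketched roughly as follows. This is a toy illustration of my own, not anything from microcovid itself; the function names are hypothetical and the multipliers are placeholder numbers, not microcovid.org's actual model parameters. The idea: estimate a fixed reference activity's microCOVIDs under today's local prevalence, normalized so that a baseline day scores 1.0.

```python
def microcovids(prevalence_per_100k, hours, person_count,
                activity_multiplier=0.14, mask_multiplier=1.0):
    """Very rough microCOVID-style estimate for one activity, expressed as a
    chance-in-a-million of infection. All multipliers here are illustrative
    placeholders, not the real microcovid.org parameters."""
    p_infectious = prevalence_per_100k / 100_000   # fraction of people currently infectious
    hourly_transmission = activity_multiplier * mask_multiplier
    risk = person_count * p_infectious * hourly_transmission * hours
    return risk * 1_000_000

def daily_risk_index(prevalence_today, prevalence_baseline):
    """Unitless "how risky is today vs. a typical day?" score for a single
    default person doing a fixed reference activity; a baseline day = 1.0."""
    reference = dict(hours=2, person_count=5)
    return (microcovids(prevalence_today, **reference)
            / microcovids(prevalence_baseline, **reference))

# If local prevalence doubles relative to baseline, the index reads 2.0.
index = daily_risk_index(400, 200)
```

Because this toy model is linear in prevalence, the index collapses to a prevalence ratio; a real version would also fold in variant transmissibility and vaccination status, which is where microcovid's existing machinery would earn its keep.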

I voted disagree, because at this point there have been plenty of COVID forecasting tournaments hosted by Good Judgement, Metaculus and several 3rd parties. Metaculus alone has 400 questions in the COVID category, a lot of which have 100+ predictions. I personally would find it quite easy to put together a group of forecasters with legibly good track record on COVID, but from working in this space I also do have a sense of where to start looking and who to ask.

Ah, good to know.

Do you mean that if one would like to go to such a bootcamp but thinks they won't be able to get visa in time, they should apply now to get invited to a future cohort?

Apply now -- there's a question at the end asking if you're interested in future cohorts. And you can say in the final textbox that you're not available for Round 1. 

You could imagine an experimental design where you train worms to accomplish a particular task (e.g. learning which scent means food, if indeed they use scent; I don't know). You then upload both trained and untrained worms. If trained uploads perform better from the get-go than untrained ones, it's some evidence it's the same worm. To make it more granular, there's a lot of learning tasks from behavioural neuroscience you could adapt. 

You could also do single neuron studies: train the worm on some task, find a neuron that seems to correspond to a partic... (read more)

Optogenetics was exactly the method proposed by David; I just updated the article and included a full quote. I originally thought my post was already a mere summary of the previous LW posts by jefftk, that excessive quotation could make it too unoriginal, and that interested readers could simply read more by following the links. But I just realized giving sufficient context is important when you're restarting a forgotten discussion.

Jaan/Holden convo link is broken :(

Following up: as a result of this thread, radvac will likely get a $100k donation (from a donor who was not considering them before). This does not fill their funding needs however, and they're looking to raise another $300k this year. 

For any interested funders, PM me and I can share detailed call notes. 

I'll pay at least $75 for this comment. If nothing else, alerting me to RaDVaC's funding gap is clearly worth that much. I think it offered some interesting considerations beyond that. E.g. the search term polyethylene glycol seems useful, though I haven't looked into it much at all and definitely don't have strong models of that domain. 

(I also think the fact that this comment bundled together a lot of different arguments and considerations caused the karma to take a downward hit.)

I'll pay at least $150 $100 for this, might increase later. And yes, it will go to John if he accepts it. 

Thanks for signal-boosting, I had missed this. I'll pay at least $300 for it. (The fact that it already had been written 12 days ago seems like a point in its favour!)

Josh Jacobson (2y):
That’s awesome, thanks!

Do you know which, if any, risk-reducing precautions they were following? 

Some standard ones like masks, but not at all times. They probably were in close or indoor contact with infected people without precautions.

I have some good leads, will check in with them tomorrow.

(If I stop working on this/don't make any progress I'll post about that here, so as not to make this funding gap erroneously appear filled.)

Following up: as a result of this thread, radvac will likely get a $100k donation (from a donor who was not considering them before). This does not fill their funding needs however, and they're looking to raise another $300k this year.  For any interested funders, PM me and I can share detailed call notes. 
I'm very happy to hear this. 

According to a facebook discussion one person involved with RaDVaC said that RaDVaC is heavily cash constrained.

Sounds like a state of affairs that should not be allowed to persist. Very interested in more details/screenshots if possible without violating any privacy norms -- I'll send you my email in PM.

It was a public discussion on Robert Wiblin's feed. Given that they are actually searching for funding, it feels like a good utilitarian idea to quote here (if someone thinks it shouldn't be quoted, just tell me):

Me: Is funding a problem holding RaDVaC back? If so, it might be worth making the case for EA funds going to RaDVaC on the EA-forum or seeking a grant from OpenPhil. I expect that it would be possible to raise high six figures or low seven figures for RaDVaC by seeking EA donations.


Alex Hoekstra:

Christian Kleineidam funding is very much a bottl

... (read more)

I'm excited about mechanism design in this space. Like, if you have a prediction market (or forecasting question with a good aggregation algorithm), you can sort of selectively throw out pieces of information, and then reward people based on how much those pieces moved the market.  (And yes, there are of course lots of goodhart-y failure modes to iron out to make it work.) 
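For the binary-question case, that payout rule could be sketched like this (my own illustrative framing with hypothetical names; the comment deliberately leaves the mechanism informal): reward each revealed piece of information by how much it improved the market's log score on the eventual outcome.

```python
import math

def log_score(p, outcome):
    """Log score of forecast probability p for a binary outcome (True = it happened)."""
    return math.log(p) if outcome else math.log(1.0 - p)

def info_reward(prob_before, prob_after, outcome, scale=100.0):
    """Pay out in proportion to how much revealing one piece of information
    improved the market's log score on the realized outcome. The reward is
    negative if the information moved the market the wrong way."""
    return scale * (log_score(prob_after, outcome) - log_score(prob_before, outcome))

# Market at 40%; revealing one trader's evidence moves it to 70%; the event occurs.
reward = info_reward(0.4, 0.7, outcome=True)   # positive: the move was informative
```

One Goodhart-y wrinkle this inherits: payouts depend on the order in which pieces of information are revealed, so a fuller version might average each piece's reward over random reveal orders, Shapley-value style.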

In this case I'm not going to be quite so formal. I don't have that strong of an initial view, so it might often be more of rewarding "provided a very useful write-up" than "provide a compelling counterargument to a thoroughly considered belief". 


I think this post strikes a really cool balance between discussing some foundational questions about the notion of agency and its importance, as well as posing a concrete puzzle that caused some interesting comments.

For me, Life is a domain that makes it natural to have reductionist intuitions. Compared to say neural networks, I find there are fewer biological metaphors or higher-level abstractions where you might sneak in mysterious answers that purport to solve the deeper questions. I'll consider this post next time I want to introduce some... (read more)

I like this point. 

One important nuance, though, is that some of your intense work can be investing in things that decrease the likelihood of getting stuck in a bad attractor. 

That way, you have a shot at jumping to high-output equilibria that you can actually sustain. 

From personal experience, I needed at least 4 different things to go right at the same time before I could start doing 60-80h weeks that didn't burn me out: 

  1. using a Freewrite
  2. building a custom GTD system in Roam that used the API to tailor it very heavily to my preferences
  3. us
... (read more)
Agreed! Burst work can be most effective when done on things that amplify or aid steady work (e.g. setting up new software, process automation, learning a skill).

Curated. I enjoyed how this post was a little journey of deconfusion from the inside. It went through some of the actual cognitive motions one might make when trying to understand economics. (Or, rather, when trying to become less confused about questions like "Why do everyone's lives today seem so much better than those of people I read about in history books?" or "How is it that the guy at Papa John's down the street can spend a few days making pizza, and then go to the store... and return with a little all-in-one pocket camera-computer-telephone-thing more powerf... (read more)
