Related: https://en.wikipedia.org/wiki/Gemba
Thanks for sharing these! I can also attest to witnessing you hustle for items fast :) I especially like your point 1
Hard to give a general answer, but I think 2x someone's normal salary (especially if it's cash) is usually quite sufficient to get the job done, and it has fairly reliably helped me in the past when I've tried to find people happy to work night shifts
I was setting up a retreat venue, and they were pretty weird and special beds -- such that if they had actually worked, it would have pivoted our strategy for setting up the space in a somewhat major way.
You mean money or social capital?
I think it depends on scale. If Ford produces cars in batches of 100(? 1000? more?) they probably can't rejigger the factory. In this case it was a local ironworker who probably had 10 or fewer guys working his shop, so a bit more flexibility.
Yes, show up uninvited. That happens a lot in our slack. Our team is small enough that most people read most channels.
Can confirm Lightcone is very chaotic and sometimes works heroic hours, and it seems tangled up in our way of working for reasons that are not super clear to me.
So when reading your comment I was asking myself why the above template couldn't be run by a project that wanted to work closer to 40 hours rather than 80 hours per week? One answer is that "Well, if people are importantly blocking elements, they must be available to reply on slack and unblock other people whenever", which is mostly true for us, except that 1) we almost never wake up people w...
For what it's worth I don't consider this essay to be about "ops" that much. Also lots of people keep calling much of what Lightcone does "ops", but we often really don't think of ourselves as doing ops that much :) Some words that sound to me more like the thing I think of Lightcone as doing, most of the time: "uncertainty reduction", "product discovery", "trying to do things fast".
I might write up rants about hiring at some point, though I don't think I'm particularly experienced or great at it :) For now I'll just say I like YCombinator's content on this. Not sure how to find all the relevant essays and videos, but this might help https://www.ycombinator.com/library?categories=Hiring
Things I've read / advice I've gathered that influenced me a lot, are:
I don't have that much experience, so I don't want to say too much. But I think it should apply well to things like startups searching for product-market fit (you're rate-constrained by how fast you can figure out what people want), or a factory increasing how many widgets it can output per day, but less well to e.g. teams that are trying to maintain a system, like a gardener, or a janitorial service.
I think this post makes sense from a "theory of constraints" perspective: if you're pursuing a mission that can move no faster than some identifiable step, then the best thing you can do each week to move faster toward your mission is mostly about speeding up that step.
The lame answer: yeah, it does mess with deep work, and I'm not super sure how to balance them.
The spicy answer: I have an unwritten polemic entitled "Against Deep Work". I can't share it though since I have not written it. Fortunately, much of what I hope to say in that post is captured in Chapter 9 of the Lean Startup, which has a section that resonates a lot with my experience. I'll just go ahead and quote it because it's so damn good (starting on page 191 in the linked PDF).
...Imagine you’re a product designer overseeing a new product and you need t
There's at least one hotel in Berkeley with rooms for $500/night or more, and I claim for the better hotels it is quite rare that you can get them for <$200. As evidence, you can select some dates and look at the hotels here: https://maps.app.goo.gl/pMwuNQoZVJzV9Kx77
I'm not sure, but I think I might have seen a sign in a rose garden room with a $500 rack rate. Second floor building B. I found it quite funny given how far from the current state it was, it read like a decades old relic of what the Inn once was.
Curated.
There are really many things I found outstanding about this post. The key one, however, is that after reading this, I feel less confused when thinking about transformer language models. The post had that taste of deconfusion where many of the arguments are elegant, and simple; like suddenly tilting a bewildering shape into place. I particularly enjoyed the discussion of ways agency does and does not manifest within a simulator (multiple agents, irrational agents, non-agentic processes), the formulation of the prediction orthogonality thesis, ways i...
At first glance, I'm reminded of the safety questionnaires I had to fill out as part of running a study when taking experimental psychology classes in undergrad. It was a lot of annoyance and mostly a box-ticking exercise. Everyone mostly did what they wanted to do anyway, and then hurriedly gerrymandered that questionnaire right before the deadline, so the faculty would allow them to proceed. Except the very conscientious students, who saw this as an excellent opportunity to prove their box-ticking diligence.
As a case in point, I migh...
[I work for Dan Hendrycks but he hasn't reviewed this.]
It seems to me like your comment roughly boils down to "people will exploit safety questionnaires." I agree with that. However, I think they are much more likely to exploit social influence, blog posts, and vagueness than specific questionnaires. The biggest strengths of the x-risk sheet, in my view, are:
(1) It requires a specific explanation of how the paper is relevant to x-risk, which cannot be tuned depending on the audience one is talking to. You give the example from the forecasting paper and...
If someone asks what the rock is optimizing, I’ll say “the actions” - i.e. the rock “wants” to do whatever it is that the rock in fact does.
This argument does not seem to me like it captures the reason a rock is not an optimiser?
I would hand wave and say something like:
"If you place a human into a messy room, you'll sometimes find that the room is cleaner afterwards. If you place a kid in front of a bowl of sweets, you'll soon find the sweets gone. These and other examples are pretty surprising state transitions, that would be highly unlikely i...
An update on this: sadly I underestimated how busy I would be after posting this bounty. I spent 2h reading this and Thomas' post the other day, but didn't manage to get into the headspace of evaluating the bounty (i.e. making my own interpretation of John's post, and then deciding whether Thomas' distillation captured that). So I will not be evaluating this. (Still happy to pay if someone else I trust claims Thomas' distillation was sufficient.) My apologies to John and Thomas about that.
Sorry for late reply: no, it does not.
Cool, I'll add $500 to the distillation bounty then, to be paid out to anyone you think did a fine job of distilling the thing :) (Note: this should not be read as my monetary valuation for a day of John work!)
(Also, a cooler pay-out would be basis points, or less, of Wentworth impact equity)
Thanks for being open to suggestions :) Here's one: you could award half the prize pool to compelling arguments against AI safety. That addresses one of John's points.
For example, stuff like "We need to focus on problems AI is already causing right now, like algorithmic fairness" would not win a prize, but "There's some chance we'll be able to think about these issues much better in the future once we have more capable models that can aid our thinking, making effort right now less valuable" might.
That idea seems reasonable at first glance, but upon reflection, I think it's a really bad idea. It's one thing to run a red-teaming competition, it's another to spend money building rhetorically optimised tools for the other side. If we do that, then maybe there was no point running the competition in the first place as it might all cancel out.
How long would it have taken you to do the distillation step yourself for this one? I'd be happy to post a bounty, but price depends a bit on that.
Bought the hat. I will wear it with pride.
Curious if you have suggestions for a replacement term for "grabby" that you'd feel better about?
"likely evolved via natural selection"
My default expectation would be that it's a civilization descended from an unaligned AGI, so I'm confused why you believe this is likely.
A guess: you said you're optimistic about alignment by default -- so do you expect aligned AGI to act in accordance with the interests of a natural-selection-evolved species?
That's awesome, I didn't know this.
I found a low six figure donor who filled 25% of their funding gap. The rest has not been filled. EDIT: They also got an ACX grant, but not enough to fill the whole funding gap I believe. I can intro donors who want to fill it to the radvac team. Email me at jacob at lesswrong.com
Original thread and credit to ChristianKl who picked up on and alerted me to rumours of the funding gap: https://www.lesswrong.com/posts/fBGzge5i4hfbaQZWy/usd1000-bounty-how-effective-are-marginal-vaccine-doses?commentId=XwA8mtvK8YCob2pLK#comments
Regarding the remaining funding gap you mention, did you take the recent $100,000 ACX grant into account?
Alex Hoekstra, $100,000, for the Rapid Deployment Vaccine Collaborative (RaDVaC) to make open-source modular affordable vaccines.
... RaDVaC still needs more funding ...
I'm happy to hear that my cries resulted in at least some funding being moved to them, even if it's still sad that there's still a funding gap.
In addition to what Eliezer already said about RaDVaC being good because it uses technology that can easily be scaled up, RaDVaC has the potential to reduce infection by COVID-19 when given in addition to traditional vaccines, because it potentially creates a mucosal immune response in the upper respiratory system, which the vaccines we currently have don't.
While there's some chance that Omicron will end COVID-19, that is optimistic. We might need something like RaDVaC to finally end this pandemic.
This will be a bit of a disappointing answer (sorry in advance), but I indeed think UI-space is pretty high-dimensional and that there are many things you can do that aren't just "remove options for all users". Sadly, the best way I know of to implement this is to just do it myself and show the result; and I cannot find the time for that this week.
I also tried and failed to get my family to use it :( Among other things, I think they bounced off particularly hard on the massive drop-down of 10 different risk categories of ppl and various levels of being in a bubble.
I don't think the blocker here was fundamentally quantitative -- they think a bunch about personal finance and budgeting, so that metaphor made sense to them (and I actually expect this to be true for a lot of non-STEM ppl). Instead, I think UX improvements could go a long way.
I voted disagree, because at this point there have been plenty of COVID forecasting tournaments hosted by Good Judgement, Metaculus and several 3rd parties. Metaculus alone has 400 questions in the COVID category, a lot of which have 100+ predictions. I personally would find it quite easy to put together a group of forecasters with legibly good track record on COVID, but from working in this space I also do have a sense of where to start looking and who to ask.
This has now resolved false.
Do you mean that if one would like to go to such a bootcamp but thinks they won't be able to get a visa in time, they should apply now to get invited to a future cohort?
Apply now -- there's a question at the end asking if you're interested in future cohorts. And you can say in the final textbox that you're not available for Round 1.
You could imagine an experimental design where you train worms to accomplish a particular task (e.g. learn which scent means food. If indeed they use scent. I don't know.) You then upload both trained and untrained worms. If trained uploads perform better from the get-go than untrained ones, it's some evidence it's the same worm. To make it more granular, there are a lot of learning tasks from behavioural neuroscience you could adapt.
You could also do single neuron studies: train the worm on some task, find a neuron that seems to correspond to a partic...
Jaan/Holden convo link is broken :(
Following up: as a result of this thread, radvac will likely get a $100k donation (from a donor who was not considering them before). This does not fill their funding needs however, and they're looking to raise another $300k this year.
For any interested funders, PM me and I can share detailed call notes.
I'll pay $425 for this answer, will PM you for payment details.
I'll pay $50 for this answer, will message you for payment details.
I'll pay at least $75 for this comment. If nothing else, alerting me to RaDVaC's funding gap is clearly worth that much. I think it offered some interesting considerations beyond that. E.g. the search term polyethylene glycol seems useful, though I haven't looked into it much at all and definitely don't have strong models of that domain.
(I also think the fact that this comment bundled together a lot of different arguments and considerations caused the karma to take a downward hit.)
I'll pay at least $150 $100 for this, might increase later. And yes, it will go to John if he accepts it.
Thanks for signal-boosting, I had missed this. I'll pay at least $300 for it. (The fact that it already had been written 12 days ago seems like a point in its favour!)
Do you know which, if any, risk-reducing precautions they were following?
I have some good leads, will check in with them tomorrow.
(If I stop working on this/don't make any progress I'll post about that here, so as not to make this funding gap erroneously appear filled.)
According to a facebook discussion one person involved with RaDVaC said that RaDVaC is heavily cash constrained.
Sounds like a state of affairs that should not be allowed to persist. Very interested in more details/screenshots if possible without violating any privacy norms -- I'll send you my email in PM.
It was a public discussion on Robert Wiblin's feed. Given that they are actually searching for funding, it feels like a good utilitarian idea to quote here (if someone thinks it shouldn't be quoted, just tell me):
Me: Is funding a problem holding RaDVaC back? If so, it might be worth making the case for EA funds going to RaDVaC on the EA-forum or seeking a grant from OpenPhil. I expect that it would be possible to raise high six figures or low seven figures for RaDVaC by seeking EA donations.
...
...Alex Hoekstra:
Christian Kleineidam funding is very much a bottl
I'm excited about mechanism design in this space. Like, if you have a prediction market (or forecasting question with a good aggregation algorithm), you can sort of selectively throw out pieces of information, and then reward people based on how much those pieces moved the market. (And yes, there are of course lots of goodhart-y failure modes to iron out to make it work.)
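To make the idea concrete, here's a minimal sketch of one way such rewards could be computed. Everything here is illustrative and hypothetical (the function name, the "tip" labels, the numbers): it credits each piece of information with the log-score improvement the market made when that piece arrived, which is one simple way of measuring "how much it moved the market", and it deliberately ignores the ordering and Goodhart issues mentioned above.

```python
import math

def info_rewards(p_start, updates, outcome):
    """Credit each information source with the log-score improvement
    the market made when its information arrived.

    p_start: market probability before any of the updates.
    updates: list of (source_name, new_probability) in arrival order.
    outcome: True if the event happened, False otherwise.
    """
    rewards = {}
    p = p_start
    for name, p_new in updates:
        # Log score of the market after vs. before this piece of info.
        # Positive reward means the info moved the market toward the truth.
        gain = (math.log(p_new if outcome else 1 - p_new)
                - math.log(p if outcome else 1 - p))
        rewards[name] = gain
        p = p_new
    return rewards

# Hypothetical example: the market starts at 50%, "tip_a" moves it to
# 70%, "tip_b" to 90%, and the event then happens.
rewards = info_rewards(0.5, [("tip_a", 0.7), ("tip_b", 0.9)], outcome=True)
```

Note that because rewards depend on the market state when each piece arrives, whoever controls the ordering of updates partly controls the credit assignment; that's one of the failure modes that would need ironing out.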
In this case I'm not going to be quite so formal. I don't have that strong of an initial view, so it might often be more a matter of rewarding "provided a very useful write-up" than "provided a compelling counterargument to a thoroughly considered belief".
Curated.
I think this post strikes a really cool balance between discussing some foundational questions about the notion of agency and its importance, as well as posing a concrete puzzle that caused some interesting comments.
For me, Life is a domain that makes it natural to have reductionist intuitions. Compared to say neural networks, I find there are fewer biological metaphors or higher-level abstractions where you might sneak in mysterious answers that purport to solve the deeper questions. I'll consider this post next time I want to introduce some...
Thanks, fixed!
I like this point.
One important nuance, though, is that some of your intense work can be investing in things that decrease the likelihood of getting stuck in a bad attractor.
That way, you have a shot at jumping to high-output equilibria that you can actually sustain.
From personal experience, I needed at least 4 different things to go right at the same time before I could start doing 60-80h weeks that didn't burn me out:
Curated. I enjoyed how this post was a little journey of deconfusion from the inside. It went through some of the actual cognitive motions one might make when trying to understand economics. (Or, rather, when trying to become less confused about questions like "Why does everyone's lives today seem so much better than people I read about in history books?" or "How is it that the guy at Papa John's down the street can spend a few days making pizza, and then go to the store... and return with a little all-in-one pocket camera-computer-telephone-thing more powerf...
The majority of tasks we do don't involve any memos. We write them occasionally when 1) thinking through some big, hard-to-reverse decision, 2) when setting goals for 2-week sprints, and 3) if someone wants to call an all-hands meeting, we usually require them to write a memo first (which is a nice rate-limit on how many all-hands meetings you get).
I think the rate of memos written is maybe 0.5 per person per month, on average, but counting it is a bit messy.