I'll pay at least $75 for this comment. If nothing else, alerting me to RaDVaC's funding gap is clearly worth that much. I think it offered some interesting considerations beyond that. E.g. the search term polyethylene glycol seems useful, though I haven't looked into it much at all and definitely don't have strong models of that domain.
(I also think the fact that this comment bundled together a lot of different arguments and considerations caused the karma to take a downward hit.)
I'll pay at least $150 for this, might increase later. And yes, it will go to John if he accepts it.
Thanks for signal-boosting, I had missed this. I'll pay at least $300 for it. (The fact that it already had been written 12 days ago seems like a point in its favour!)
Do you know which, if any, risk-reducing precautions they were following?
I have some good leads, will check in with them tomorrow.
(If I stop working on this/don't make any progress I'll post about that here, so as not to make this funding gap erroneously appear filled.)
According to a facebook discussion one person involved with RaDVaC said that RaDVaC is heavily cash constrained.
Sounds like a state of affairs that should not be allowed to persist.
Very interested in more details/screenshots if possible without violating any privacy norms -- I'll send you my email in PM.
It was a public discussion on Robert Wiblin's feed. Given that they are actually searching for funding, it feels like a good utilitarian idea to quote it here (if someone thinks it shouldn't be quoted, just tell me):
Me: Is funding a problem holding RaDVaC back? If so, it might be worth making the case for EA funds going to RaDVaC on the EA-forum or seeking a grant from OpenPhil. I expect that it would be possible to raise high six figures or low seven figures for RaDVaC by seeking EA donations.
Alex Hoekstra: Christian Kleineidam, funding is very much a bottleneck...
I'm excited about mechanism design in this space. Like, if you have a prediction market (or forecasting question with a good aggregation algorithm), you can sort of selectively throw out pieces of information, and then reward people based on how much those pieces moved the market. (And yes, there are of course lots of goodhart-y failure modes to iron out to make it work.)
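To make the "reward by market movement" idea concrete, here's a toy sketch (my own construction, not any existing system): each revealed piece of information moves the market probability, and once the question resolves, each piece is rewarded by how much it improved the log score. The rewards telescope, so they sum to the market's total improvement.

```python
import math

def piece_rewards(prob_path, outcome):
    """Reward each revealed piece of information by how much it moved the
    market probability toward the realized outcome, measured as log-score
    improvement. This decomposition telescopes: the rewards sum to the
    total log-score change from the first probability to the last."""
    rewards = []
    for prev, new in zip(prob_path, prob_path[1:]):
        if outcome:
            rewards.append(math.log(new) - math.log(prev))
        else:
            rewards.append(math.log(1 - new) - math.log(1 - prev))
    return rewards

# Market starts at 50%; three pieces of evidence move it to 70%, 60%, 90%;
# the event then occurs.
path = [0.5, 0.7, 0.6, 0.9]
rewards = piece_rewards(path, outcome=True)
# The second piece moved the market *away* from the truth, so its reward
# is negative.
```

One nice property of this decomposition is that information which moved the market away from the truth earns a negative reward; one of the goodhart-y failure modes to iron out is that rewards then depend on the order in which pieces are revealed.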
In this case I'm not going to be quite so formal. I don't have that strong of an initial view, so it might often be more a matter of rewarding "provided a very useful write-up" than "provided a compelling counterargument to a thoroughly considered belief".
Curated. I think this post strikes a really cool balance between discussing some foundational questions about the notion of agency and its importance, as well as posing a concrete puzzle that caused some interesting comments.
For me, Life is a domain that makes it natural to have reductionist intuitions. Compared to, say, neural networks, I find there are fewer biological metaphors or higher-level abstractions where you might sneak in mysterious answers that purport to solve the deeper questions. I'll consider this post next time I want to introduce some...
I like this point.
One important nuance, though, is that some of your intense work can be investing in things that decrease the likelihood of getting stuck in a bad attractor.
That way, you have shot at jumping to high-output equilibria that you can actually sustain.
From personal experience, I needed at least 4 different things to go right at the same time before I could start doing 60-80h weeks that didn't burn me out:
Curated. I enjoyed how this post was a little journey of deconfusion from the inside. It went through some of the actual cognitive motions one might make when trying to understand economics. (Or, rather, when trying to become less confused about questions like "Why do everyone's lives today seem so much better than those of the people I read about in history books?" or "How is it that the guy at Papa John's down the street can spend a few days making pizza, and then go to the store... and return with a little all-in-one pocket camera-computer-telephone-thing more powerf...
I have put your text inside spoiler tags, since comments appear in recent discussion. In the linked post you'll learn how to do it for future discussion. :)
(I lived in this house) The estimate was largely driven by fear of long covid + a much higher value per hour of time, which also factored in altruistic benefits from housemate's work that aren't captured by the market price of their salary.
There were also about 8 of us, and we didn't assume everyone would get it conditional on infection (household attack rates are much lower than that, and you might have time to react and quarantine). We assumed maybe like 2-3 others.
I totally expect we would have paid $84,600 to prevent a random one of us getting covid -- and it would've even looked like a pretty cheap deal compared to getting it!
I moved this post to drafts :)
A lot depends on the details, but the practical upshot for me is that it is helpful to remember that the right thing in one placetime is not always the right thing everywhere or forever.
However in real life there is substantial variation in cultures and preferences and logistical challenges and coordinating details and so on.
Martin Sustrik's "Anti-Social Punishment" post is a great real-life example of this.
This model makes explicit something I’ve had intuitions about for a while (though I wasn’t able to crystallise them nearly as perspicaciously or usefully as UnexpectedValues). Beyond the examples given in the post, I'm reminded of Zvi’s discussion of control systems in his covid series, and also am curious about how this model might apply to valuing cryptocurrencies, which I think display some of the same dynamics.
The post is also very well-written. It has the wonderful flavour of a friend explaining something to you by a whiteboard, building up a ...
Have you been meaning to buy the LessWrong Books, but not been able to due to financial reasons?
Then I might have a solution for you.
Whenever we do a user interview with someone, they get a book set for free. Now one of our user interviewees asked that their set instead be given to someone who otherwise couldn't afford it.
So, well, we've got one free set up for grabs!
If you're interested, just send me a private message and briefly describe why getting the book was financially prohibitive to you, and I might be able to send a set your way.
Strange indeed... but, here is a working version:
I had nudging cached in my memory as, more or less, a UX movement.
Want to increase charity donation at your company? Make it opt-out, rather than opt-in. Want to increase completion rates of your survey? Make it shorter.
And so forth.
So I was surprised by Jacob Falkovich claiming that nudgerism caused the elaborate psychological theorising used to inform covid policy. Many such policies mostly seemed to be about oddly specific, second-order claims. Like, in the case of expected resistance to challenge trials, or vaccine hesitancy. Those argument...
Habryka, is the reasoning that politicians have a real incentive to accurately predict public response -- because it entirely determines whether they remain in power -- whereas behavioral scientists have a much weaker incentive, compared to the dominant incentive of publishing significant results?
I haven't looked at the links, but making problem lists like this seems really cool. I'm glad they tried it, and then followed up.
I'm curious whether you know anything about why they tried it?
Hamming's original lecture talks about how most scientists he had lunch with sort of flinched away from their field's Hamming problems. He asked why they weren't working on them. It's implied that the conversation usually didn't go down very well, and the next day he had to eat lunch with someone else.
Why were things different for the Accounts of Chemical Research people? Unusual amounts of curiosity, courage, accident, or something else?
There is an argument that the use of willpower is undesirable.
Would be good to add a source.
I'm currently on vacation, but I'd be interested in setting up a call once I'm back in 2 weeks! :) I'll send you my calendly in PM
I appreciate you following up on this!
The sad and honest truth, though, is that since I wrote this post, I haven't thought about it. :( I haven't picked up on any key new piece of evidence -- though I also haven't been looking.
I could give you credences, but that would mostly just involve rereading this and loading up all the thoughts.
gum disease represents a very large and growing cause of both morbidity and economic burden for people in all economic situations.
Curious if you have some links for data/calculations on the disease burden?
Also, do we have a reason to believe this is an area where peptide vaccines would be especially helpful?
I went down the neoantigen rabbithole, and it was quite interesting.
I liked this talk on "Developing Personalized Neoantigen-Based Cancer Vaccines".
It seems a core part of their methodology is using machine learning to predict which peptides will elicit a T-cell response, based on sequencing the patient's tumour. (Discussed starting from around 11 minutes in.)
They use this algorithm, which seems to be a neural network with a single hidden layer just ~60 neurons wide, and some amount of handcrafting of input features (based on papers from 2003 and 200...
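For intuition about how small that model is, here's a generic single-hidden-layer network of roughly the shape described (~60 hidden units over a handcrafted feature vector). This is an illustrative stand-in, not their actual architecture: the feature count and all weights below are made up, and the real model is trained rather than random.

```python
import math
import random

random.seed(0)

N_FEATURES = 20   # stand-in for their handcrafted peptide features
N_HIDDEN = 60     # the talk describes a single hidden layer ~60 neurons wide

# Small random weights: an untrained placeholder for the real, trained model.
W1 = [[random.gauss(0, 0.1) for _ in range(N_HIDDEN)] for _ in range(N_FEATURES)]
b1 = [0.0] * N_HIDDEN
W2 = [random.gauss(0, 0.1) for _ in range(N_HIDDEN)]
b2 = 0.0

def forward(x):
    """Feature vector for one candidate peptide -> predicted probability
    that it elicits a T-cell response."""
    hidden = [math.tanh(sum(x[i] * W1[i][j] for i in range(N_FEATURES)) + b1[j])
              for j in range(N_HIDDEN)]
    z = sum(h * w for h, w in zip(hidden, W2)) + b2
    return 1 / (1 + math.exp(-z))  # sigmoid output in (0, 1)

p = forward([random.gauss(0, 1) for _ in range(N_FEATURES)])
```

The striking thing is the parameter count: on the order of a thousand weights, tiny by modern deep-learning standards, which suggests most of the work is in the handcrafted features and the sequencing pipeline rather than the network itself.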
I'm updating fairly hard on the four RaDVaC team members who found antibodies using custom-built ELISA assays (rather than commercial tests). I wasn't super compelled by arguments that those might be false positives, but I do find it important that we don't know the denominator of how many of them took that test.
It maybe moved my probability from 17% to 45% that it would work for me (so still less optimistic than Wentworth!)
Though I think even a 5% chance of it working would make the original question worth asking. As they say: huge if true :)
(Also, the m...
<1%, because RaDVaC team has tried it and didn’t manage to get any positive result.
That's false; they got several positive antibody results in ~June or so last year. See a comment elsewhere on this post.
Curious if anyone ended up running this process, and, if so, what your results were?
This actually flies against my sense that Bell Labs was able to build the transistor because of its resources and the particular knowledge and expertise it had built up over 20 years. Possibly their ideas were just getting spread around via their external contacts, or actually solid-state physics was taking off generally.
Woah, this was striking to me. It seems like pretty big evidence against Bell Labs actually having a secret sauce of enabling intellectual progress. I would have to look into it more, though. (Also the update is tempered by the ...
@Davidmanheim you're a pretty big outlier here, and this is also the kind of question where I'd trust your judgement a fair bit:
So curious if you wanted to elaborate a bit on your model?
First, base rates are critical. Looking at potential drugs overall, the rate of approvals due to safety alone - i.e. "Investigational New Drugs" proceeding to phase-II efficacy trials - is very low. Phase 1 trials are typically 80-100 people, and most don't manage to make it past that stage. It would take much stronger evidence than I have seen to think that this vaccine is going to be outside of the norm.

Second, even if the process as done was safe, I can't imagine that greater than 99% of people manage to do this without screwing up in some serious way. That's less ...
Well, this post was just crying out for some embedded predictions! So here we go:
Yep, this is indeed a reason proper scoring rules don't remain proper if 1) you only have a small sample size of questions, and 2) utility of winning is not linear in the points you obtain (for example, if you really care about being in the top 3, much more than any particular amount of points).
Some people have debated whether it was happening in the Good Judgement tournaments. If so, that might explain why extremizing algorithms improved performance. (Though I recall not being convinced that it was actually happening there). When Metaculus ran its c...
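A quick Monte Carlo makes the failure mode concrete (toy numbers of my own choosing): with a single question and a winner-take-all prize, an extremizing rival beats an honest forecaster whenever the event occurs, even though honesty maximizes expected log score.

```python
import math
import random

random.seed(1)

TRUE_P = 0.7  # assumed shared true probability of the event (toy setting)

def log_score(p, outcome):
    """Proper log scoring rule: log of the probability assigned to what happened."""
    return math.log(p if outcome else 1 - p)

honest_wins = 0
trials = 20_000
for _ in range(trials):
    outcome = random.random() < TRUE_P
    # One question, two forecasters: honest reports 0.7, rival extremizes to 0.95.
    if log_score(0.7, outcome) > log_score(0.95, outcome):
        honest_wins += 1

win_rate = honest_wins / trials
# Honest reporting maximizes *expected* log score, but under a winner-take-all
# prize the honest forecaster only wins when the event fails (~30% of the time);
# the extremizer wins whenever it occurs (~70%).
```

With many independent questions the law of large numbers pulls the ranking back toward expected score, which is why this mostly bites in small-sample tournaments with convex prizes.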
Curated! And in doing so, I feel proud to assume the role of Patron Saint of LessWrong Challenges, and All Those Who Test Their Art Against the Territory.
Some reasons I'm excited about this post:
1) Challenges help make LessWrong more grounded, and build better feedback loops for actually testing our rationality. I wrote more about this in my curation notice for The Darwin Game challenge, and wrote about it in the various posts of my own Babble Challenge sequence.
2) It was competently executed and analysed. There were nice control groups used; th...
Nice, this is interesting!
You need your business partners but they don't need you
I don't understand what this means and what it's measuring.
Makes sense! An intro paragraph could be good :)
Congratulations on your first LessWrong post! :) (Well, almost first)
As a piece of feedback, I will note that I found the "Rosenberg's crux" section pretty hard to read, because it was quite dense.
I feel like if I had read the original letter exchange, I could then have turned to this post and gone "a-ha!" In other words, it felt like a useful summary, but it didn't give me the original generators/models, such that I could pass the intellectual Turing test of what Dennett and Rosenberg actually believe.
By comparison, I think the section...
The forecasters were only quite loosely selected for "some forecasting experience". Some of them I know are very able forecasters; others are much less experienced, and I don't think they're affiliated that much with the rationality or effective altruism communities.
I failed to meet all my commitments.
Operationalize three forecasting questions
Smashed this one and created 20+ questions.
Run one MTurk/Positly survey
I have a beginning draft of a survey for the Secret of Our Success. I hoped I could finish it up yesterday, but instead I had to work on shipping the LessWrong Books. Will see if I can get it out later this week.
Have at least one 2h conversation about a particular post, and write up a review after, almost regardless of how I feel the conversation went
Didn't happen and didn't really come close.
Reviews seem to me to have a lower karma on average than either posts, or comments on currently popular posts.
Following up on this: how did it go?
Author here: I think this post could use a bunch of improvements. It spends a lot of time on tangential things (e.g. the discussion of Inadequacy and why this doesn't come through in textbooks, or spending a while initially setting up a view only to then tear it down).
But really what would be nice is to have it do a much better job at delivering the core insight. This is currently just done in two bullets + one exercise for the reader.
Even more important would be to include JenniferRM's comment which adds a core mechanism (something like "cultural learn...
Ah, woe is me! Fixed now, thanks!
Yeah I thought about that. I'm curious whether one could operationalise the field-picking into an interesting poll question.
Formulations are basically just lifted from the post verbatim, so the response might be some evidence that it would be good to rework the post a bit before people vote on it.
I thought a bit about how to turn Katja's core claim into a poll question, but didn't come up with any great ideas. Suggestions welcome.
As for whether the claims are true or not --
The "broken parts" argument is one counter-argument.
But another is that it matters a lot what learning algorithm you use. Someone doing deliberate practice (in a field where that's possible)...
Here are prediction questions for the predictions that TurnTrout himself provided in the concluding post of the Reframing Impact sequence.
Ey, awesome! I've updated the post to include them.
Reading the OP quickly, I wasn't entirely sure what I was supposed to babble about... "100 ways to light a candle" is easier than "...anything" :)
Consider giving some prompts that people could default to, unless they have something else in mind already?
I made some prediction questions for this, and as of January 9th, there interestingly seems to be some disagreement with the author on these.
Would definitely be curious for some discussion between Matthew and some of the people with low-ish predictions. Or perhaps for Matthew to clarify the argument made on these points, and see if that changes people's minds.
(You can find a list of all 2019 Review poll questions here.)