I went down the neoantigen rabbit hole, and it was quite interesting.
I liked this talk on "Developing Personalized Neoantigen-Based Cancer Vaccines".
It seems a core part of their methodology is using machine learning to predict which peptides will elicit a T-cell response, based on sequencing the patient's tumour. (Discussed starting from around 11 minutes in.)
They use this algorithm, which seems to be a neural network with a single hidden layer just ~60 neurons wide, and some amount of handcrafting of input features (based on papers from 2003 and 2009). I wonder what one could accomplish with more modern tools (though I haven't yet read the papers deeply enough to have a model of how big of a bottleneck this is to creating an effective treatment, and how much room for improvement there is).
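To make the described architecture concrete, here is a minimal sketch of that kind of model: a single hidden layer of ~60 units over handcrafted peptide features. The feature encoding below (one-hot amino acids over a 9-mer) and all names are illustrative assumptions on my part, not the actual published featurisation.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PEPTIDE_LEN = 9  # assumed peptide length for illustration

def encode(peptide):
    """One-hot encode a 9-mer peptide into a flat feature vector."""
    x = np.zeros(PEPTIDE_LEN * len(AMINO_ACIDS))
    for i, aa in enumerate(peptide):
        x[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return x

class TinyMLP:
    """One hidden layer of 60 units; sigmoid output as a binding-like score."""
    def __init__(self, n_in, n_hidden=60, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0, 0.1, n_hidden)
        self.b2 = 0.0

    def predict(self, x):
        h = np.tanh(x @ self.W1 + self.b1)  # single hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ self.w2 + self.b2)))

model = TinyMLP(n_in=PEPTIDE_LEN * len(AMINO_ACIDS))
p = model.predict(encode("SIINFEKLV"))  # score in (0, 1), untrained here
```

By modern standards this is a very small model, which is what makes me curious how much headroom there is.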
I'm updating fairly hard on the four RaDVaC team members who found antibodies using custom-built ELISA assays (rather than commercial tests). I wasn't super compelled by arguments that those might be false positives, but I do find it important that we don't know the denominator of how many of them took that test.
It maybe moved my probability from 17% to 45% that it would work for me (so still less optimistic than Wentworth!)
Though I think even a 5% chance of it working would make the original question worth asking. As they say: huge if true :)
(Also, the more competent version of me who solved it in a month would need to be competent on many other dimensions as well, not just knowing about peptide vaccines. Thinking about it, just the peptide delivery time could be longer than a month, as could the vaccine booster schedule. I do think there are worlds where it's actually a month, but I'll update the question to say "a few")
<1%, because the RaDVaC team has tried it and didn't manage to get any positive result.
That's false, they got several positive antibody results in ~June or so last year. See a comment elsewhere on this post.
Curious if anyone ended up running this process, and, if so, what your results were?
This actually flies in the face of my sense that Bell Labs was able to build the transistor because of their resources and the particular knowledge and expertise they had built up over 20 years. Possibly their ideas were just getting spread around via their external contacts, or solid-state physics was simply taking off generally.
Woah, this was striking to me. It seems like pretty big evidence against Bell Labs actually having a secret sauce of enabling intellectual progress. I would have to look into it more, though. (Also the update is tempered by the fact that another argument for Bell Labs' greatness is the sheer number of inventions, like UNIX, satellites, lasers, information theory, and other stuff.)
@Davidmanheim you're a pretty big outlier here, and this is also the kind of question where I'd trust your judgement a fair bit:
So curious if you wanted to elaborate a bit on your model?
Well, this post was just crying out for some embedded predictions! So here we go:
Thanks johnswentworth for help with some of the operationalisations!
I included many different ones, as I think it is often good to try to triangulate high-stakes questions via different operationalisations. This reduces some "edge-case noise" stemming from answering vague questions in overly specific ways.
Yep, this is indeed a reason proper scoring rules don't remain proper if 1) you only have a small sample size of questions, and 2) utility of winning is not linear in the points you obtain (for example, if you really care about being in the top 3, much more than any particular amount of points).
Some people have debated whether it was happening in the Good Judgment Project tournaments. If so, that might explain why extremizing algorithms improved performance. (Though I recall not being convinced that it was actually happening there.) When Metaculus ran its crypto competition a few years ago, they also did some analysis to check whether this phenomenon was present, but they couldn't detect it.
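For readers unfamiliar with extremizing, here is a minimal sketch of one common variant: pool forecasters' probabilities in log-odds space, then scale by a factor a > 1 to push the aggregate toward the extremes. The factor a = 2.5 is an illustrative choice, not a recommended value.

```python
import math

def extremized_mean(probs, a=2.5):
    """Average forecasts in log-odds space, then extremize by factor a."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-a * pooled))  # back to probability

forecasts = [0.7, 0.75, 0.8]
plain = extremized_mean(forecasts, a=1.0)    # ordinary log-odds pooling
extreme = extremized_mean(forecasts, a=2.5)  # pushed further toward 1
```

The intuition is that if forecasters share much of their evidence but report independently, the honest aggregate understates the group's combined confidence; extremizing compensates for that.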
Curated! And in doing so, I feel proud to assume the role of Patron Saint of LessWrong Challenges, and All Those Who Test Their Art Against the Territory.
Some reasons I'm excited about this post:
1) Challenges help make LessWrong more grounded, and build better feedback loops for actually testing our rationality. I wrote more about this in my curation notice for The Darwin Game challenge, and wrote about it in the various posts of my own Babble Challenge sequence.
2) It was competently executed and analysed. There were nice control groups; the choice of scoring rule was thought through (including what the results would have been under other scoring rules); and the data was analysed in a bunch of different ways that managed to be comprehensive while still maintaining my curiosity and being very readable.
Furthermore, I can imagine versions of this challenge that would either feel butchered, in such a way that I felt like I didn't learn anything from reading the results, or needlessly long and pedantic, in such a way that getting the insight wouldn't have been worth the trek for most people. Not so with this one. Excellent job, UnexpectedValues.
3) I want to celebrate the efforts of the participants, some of whom devised and implemented some wonderful strategies. The turtle graphic fingerprints, gzip checks, mean-deviation scatter, and many others were really neat. Kudos to all who joined, and especially the winners, Jenny, Reed, Eric, Scy, William, Ben, Simon, Adam and Viktor!
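As one example of the strategies above, here is a sketch of the "gzip check" idea: truly random bit strings are nearly incompressible, so a submission that compresses well is likely human-generated rather than drawn from fair coin flips. The string lengths and comparison here are illustrative assumptions, not the participants' actual implementation.

```python
import gzip
import random

def compression_ratio(bits: str) -> float:
    """Compressed size over raw size; lower means more structure."""
    raw = bits.encode()
    return len(gzip.compress(raw)) / len(raw)

random.seed(0)
truly_random = "".join(random.choice("01") for _ in range(300))
human_like = "0101100101" * 30  # repetitive, so it compresses well

r_random = compression_ratio(truly_random)
r_human = compression_ratio(human_like)
# Expect r_human to be noticeably smaller than r_random.
```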
I would love to see more activities like these on LessWrong. If you want to run one and would like help with marketing, funding for prizes, or just general feedback -- do send me a message!
Nice, this is interesting!
You need your business partners but they don't need you
I don't understand what this means and what it's measuring.