The early COVID response on LW was a generalized "this is a big deal." I can't find the post that originally caught my eye, but I remember hitting the supermarkets in Buenos Aires, stocking up on masks and hand sanitizer, and two weeks later seeing the city freak the hell out. Jacob's "Seeing the Smoke" was a strong early signal, and Zvi's updates often worked through explicit numbers.
I'm glad you like it!
Fixed the footnotes. They were there at the end, but unlinked. Most likely some mixup when switching between LW's Markdown and Docs-style editors.
I see... so trolling by patenting something akin to convolutional neural networks wouldn't work because you can't tell what's powering a service unless the company building it tells you.
Maybe something along the lines of "a service that does automatic text translation" or "a car that drives itself" (obviously not these, since a patent with so much prior art would never get granted) would be something you could fight over?
You're welcome! I'd like to hear a bit about how it helped, if you're ok with sharing.
Hi! I wrote a summary with some of my thoughts in this post as part of an ongoing effort to stop sucking at researching stuff. This article was a big help, thank you!
I'm glad you enjoyed it! I agree that more should be done. Just listing the specific search advice on the new table of contents would help a lot.
I'm gonna do the work, I promise. I'm just working up the nerve. Saying, in effect, "this experienced professional should have done his work better, let me show you how" is scary as balls.
First of all: thank you for setting up the problem, I had lots of fun!
This one reminded me a lot of D&D.Sci 1, in that the main difficulty I encountered was the curse of dimensionality: the space had lots of dimensions, so I was data-starved when considering complex hypotheses (the performance of individual decks, for instance). Contrast with Voyages of the Grey Swan, where the main difficulty is that broad chunks of the data are explicitly censored.
I also noticed that I'm getting less out of active competitions than I was from the archived posts. I'm so concerned with trying to win that I don't write up and share my process, which I believe is a big mistake. Carefully composed posts have helped me get my ideas in order, and I think they were far more interesting to observers. So I'll step back from active competitions for a bit. I'll probably do the research summaries I promised ("Monster Carcass Auction", then maybe "Earwax"), then come back to active competitions.
Thank you for doing the work of correcting this usage; precision in language matters.
I made some progress (right in the nick of time) by...
Massaging the data into a table of every deck we've seen and whether that deck won or lost its match (the code is long and boring, so I'm skipping it here), then building the following machinery to quickly analyze restricted subsets of deck-space.
q = "1 <= dragon <= 6 and 1 <= lotus <= 6"
decks.query(q)["win"].agg(["mean", "sum", "count"])
q is used to filter us down to decks that obey the constraint. We then check the correlation of each card with winrate (that step isn't in the snippet above; a sketch follows). Finally, we show how many decks were kept (count) and what the winrate actually is (mean).
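A minimal sketch of what the correlation step might look like, assuming `decks` has one count column per card plus the `win` column (the exact code isn't shown in the original):

```python
# Sketch only: correlate each card's count with winning, within the
# filtered subset. Assumes `decks` and `q` from the snippet above.
sub = decks.query(q)
card_cols = sub.columns.drop("win")
sub[card_cols].corrwith(sub["win"]).sort_values()
```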
q can be pretty complicated, with expressiveness limits defined by `pd.DataFrame.query`. A few things that work:
```python
(angel + lotus) == 0
1 <= dragon and 1 <= lotus and 4 <= (dragon + lotus)
1 <= dragon and lotus == 0
(pirate - 1) <= sword <= (pirate + 1)
```
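Putting the pieces together on toy data (a sketch only; the card names and values are made up, not the real dataset):

```python
import pandas as pd

# Toy stand-in for the real deck table: one row per deck,
# card counts plus a 0/1 "win" column. Values are invented.
decks = pd.DataFrame({
    "dragon": [3, 1, 5, 0],
    "lotus":  [2, 3, 0, 4],
    "win":    [1, 0, 1, 1],
})

q = "1 <= dragon and 1 <= lotus and 4 <= (dragon + lotus)"
print(decks.query(q)["win"].agg(["mean", "sum", "count"]))
# mean 0.5, sum 1, count 2: two decks match the constraint, one of them won
```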
My deck submission (PvE and PvP) is:
See response to Ben Pace for counterpoints.