Reflections on my performance:
I failed to stick the landing for PVE; looking at gjm’s work, it seems like what I was most missing was feature-engineering while/before building ML models. I’ll know better next time.
For PVP, I did much better. My strategy was guessing (correctly, as it turned out) that everyone else would include a Professor, noticing that Professors are weak to Javelineers, and making sure to include one as my backmidline.
Reflections on the challenge:
I really appreciated this challenge, largely because I got to use it as an excuse to teach myself to build Neural Nets, and try out an Interpretability idea I had (this went nowhere, but at least failed definitively/interestingly).
I have no criticisms, or at least none which don’t double as compliments. The ruleset was complicated and unwieldy, increasing the rarity of “aha!” moments and natural stopping points during analysis, and making it hard to get an intuitive sense of how a given matchup would shake out (even after the rules were revealed) . . . but that’s exactly what made it such a useful testing ground, and such valuable preparation for real-world problems.
Just recording for posterity that yes, I have noticed that
Rangers are unusually good at handling Samurai, so it might make sense to have one on my PVE team.
However, I've also noticed that
Rangers are unusually BAD at handling Felons, to a similar or greater degree.
As such,
I think it makes more sense to keep Pyro Professor as my mid-range heavy-hitter in PVE.
(. . . to my surprise, this seems to be the only bit of hero-specific rock-paper-scissors that's relevant to the PVE challenge. I suspect I'm missing something here.)
Threw XGBoost at the problem and asked it about every possible matchup with FRS; it seems to think
my non-ML-based pick is either optimal or close-to-optimal for countering that lineup.
(I'm still wary of using ML on a problem instead of thinking things through, but if it confirms the answer I got by thinking things through, that's pretty reassuring.)
Therefore, I've decided
to keep HLP as my PVE team.
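The "train a model, then score every possible matchup" search described above can be sketched as follows. This is a hypothetical reconstruction, not the author's actual code: the roster letters, encoding, and stand-in predictor are all my assumptions, and the `predict_win` callback is where a fitted XGBoost model's `predict_proba` would plug in.

```python
from itertools import combinations

# Hypothetical hero roster; the single letters only loosely echo the
# author's abbreviations (H = Hurler, L = Legionary, P = Professor, ...).
ROSTER = ["H", "L", "P", "F", "R", "S", "C", "G", "J"]

def encode(team, opponent):
    """One-hot encode a matchup: one binary feature per hero per side."""
    return [int(h in team) for h in ROSTER] + [int(h in opponent) for h in ROSTER]

def best_counter(predict_win, opponent, team_size=3):
    """Enumerate every possible team and keep the one the model rates highest.

    `predict_win` stands in for a fitted classifier; with XGBoost it
    would be something like: lambda x: model.predict_proba([x])[0, 1].
    """
    best_team, best_p = None, -1.0
    for team in combinations(ROSTER, team_size):
        p = predict_win(encode(team, opponent))
        if p > best_p:
            best_team, best_p = team, p
    return best_team, best_p

# Toy stand-in model: pretends Rangers ("R") counter Samurai ("S").
def toy_predict(x):
    team_has_r = x[ROSTER.index("R")]
    opp_has_s = x[len(ROSTER) + ROSTER.index("S")]
    return 0.5 + 0.3 * team_has_r * opp_has_s

team, p = best_counter(toy_predict, opponent=("F", "R", "S"))
```

With only nine heroes the exhaustive search is trivial; the real dataset's combinatorics are what made "optimize over inputs" expensive.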
And I've DM'd aphyer my PVP selection.
My main finding thus far:
There's a single standard archetype which explains all the most successful teams. It goes like this: [someone powerful from the MPR cluster, ideally P], [a frontman, selected from GLS], [someone long-ranged, selected from CHJ]. In other words, this one is all about getting a good spread of effective ranges in your team.
My tentative PVE submission is therefore:
Hurler, Legionary, Professor
However:
It'll take me a while to decide on my PVP allocation, and I'm reserving the right to change my PVE one.
Reflections x3 combo:
Just realized this could have been a perfect opportunity to show off that modelling library I built, except:
A) I didn't have access to the processing power I'd need to make it work well on a dataset of this size.
B) I was still thinking in terms of "what party archetype predicts success", when "what party archetype predicts failure" would have been more enlightening. Or in other words . . .
. . . I forgot to flip the problem turn-ways.
Reflections on my performance:
This stings my pride a little; I console myself with the fact that my "optimize conditional on Space and Life" allocation got a 64.7% success rate.
If I'd allocated more time, I would have tried a wider range of ML algorithms on this dataset, instead of just throwing XGBoost at it. I'm . . . not actually sure if that would have helped; in hindsight, trying the same algorithms on different subsets ("what if I built a model on only the 4-player games?") and/or doing more by-hand analysis ("is Princeliness like Voidliness, and if so, what does that mean?") might have provided better results.
Reflections on the challenge:
I found this one hard to get started with because it had, de facto, 144 explanatory columns ("does this party include a [Class] of [Aspect]?") along with its 1.4m rows, and the effect of each column was mediated by the effects of every other column. This made it difficult - and computationally intensive! - to figure out anything about which classpect combinations affected the outcome.
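The 144-column encoding described above might look like the sketch below. The class and aspect lists are the canonical Homestuck twelves (which is my guess at what yields exactly 144 columns); the column naming and party representation are assumptions for illustration.

```python
# 12 classes x 12 aspects = 144 binary "does the party include a
# [Class] of [Aspect]?" columns. Lists assumed from Homestuck canon.
CLASSES = ["Maid", "Page", "Mage", "Knight", "Rogue", "Sylph",
           "Seer", "Heir", "Bard", "Prince", "Thief", "Witch"]
ASPECTS = ["Time", "Space", "Mind", "Heart", "Hope", "Rage",
           "Breath", "Blood", "Life", "Doom", "Light", "Void"]
COLUMNS = [f"{c} of {a}" for c in CLASSES for a in ASPECTS]

def encode_party(party):
    """Map a party (a list of (class, aspect) pairs) to a 144-bit row."""
    members = {f"{c} of {a}" for c, a in party}
    return [int(col in members) for col in COLUMNS]

row = encode_party([("Prince", "Void"), ("Sylph", "Life"), ("Heir", "Space")])
```

Each row is extremely sparse (at most a handful of 1s out of 144), which is part of why per-column effects were so hard to tease apart from 1.4m rows.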
That said, I appreciated this scenario. The premise was fun, the writing was well-executed, and the challenge was fair. Also, it served as a much-needed proof-by-example that "train one ML model, then optimize over inputs" isn't a perfect skeleton key for solving problems shaped like this. If it was a little obtuse on top of that . . . well, I can chalk that up to realism.
The jankiness here is deliberate (which doesn't preclude it from being a mistake). My class on Bayesianism is intended to also be a class on the limitations thereof: that it fails when you haven't mapped out the entire sample space, that it doesn't apply 'cleanly' to any but the most idealised use cases, and that once you've calculated everything out you'll still be left with irreducible judgement calls.
(I have the "show P(sniper)" feature always enabled to "train" my neural network on this data, rather than trying to calculate this in my head)
That's among the intended use cases; I'm pleased to see someone thought of it independently.
>You link to index C twice, rather than linking to index D.
Whoops! Fixed now, thank you.