I think Nuno's time-capped analysis is good.
Thanks for the post. I would recommend reading the original blog post by Noam Brown as it has the proper level of exposition and more details/nuances.
Overall, it seems that Pluribus is conceptually very similar to Libratus; sadly, it offers no new insights about >2-player games. My impression is that because poker players don't collude/cooperate much, playing something close to an equilibrium against them will make you rich.
If one has 2 possible models to fit a data set, by how much should one penalize the model which has an additional free parameter?
Penalization might not be necessary if your learning procedure is stochastic and favors simple explanations. I encourage you to take a look at the nice poster/paper "Deep learning generalizes because the parameter-function map is biased towards simple functions" (PAC-Bayesian learning theory + empirical intuitions).
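For a concrete baseline answer to the question: one classical rule is the Akaike Information Criterion (AIC), which charges 2 units of log-likelihood per extra free parameter. A minimal sketch, assuming Gaussian noise and a toy polynomial-fitting setup (the data and models here are illustrative, not from the original discussion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear relationship.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

def aic(y, y_pred, k):
    """AIC for a Gaussian-noise model with k free parameters.

    Up to an additive constant: n * log(RSS / n) + 2 * k,
    so each extra parameter costs 2 units and must "buy"
    a corresponding drop in residual sum of squares.
    """
    n = y.size
    rss = np.sum((y - y_pred) ** 2)
    return n * np.log(rss / n) + 2 * k

# Model A: line (2 free parameters); Model B: quadratic (3 free parameters).
for degree, k in [(1, 2), (2, 3)]:
    coeffs = np.polyfit(x, y, deg=degree)
    y_pred = np.polyval(coeffs, x)
    print(f"degree={degree}  AIC={aic(y, y_pred, k):.2f}")
```

If the extra parameter barely improves the fit, the quadratic's AIC comes out higher, i.e. the fixed +2 penalty outweighs the gain; this is the standard frequentist counterpart to the stochastic/simplicity-bias argument above.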
Rohin, thank you for the especially long and informative newsletter.
When there are more samples, we get a lower validation loss [...]
I guess you meant a higher validation loss?
Alexey, happy birthday to your podcast! I've just subscribed and hope you will post consistently in the future. How many subscribers do you have?
If you are curious why the Russian chatroom is so big, I encourage you to read about Kocherga. With 174 karma and 54 votes, it is the highest-rated non-curated LW post at the moment.
I would like to highlight the Russian LessWrong Slack, which has 2000+ registered users, ~150 weekly active users (of whom ~50 post), and ~80 daily active users (~25 posting).
Said Achmiz's LessWrong Diaspora Map lists 12 chatrooms.
I augment Pomodoros with
EA Forum: Donating effectively is usually better than impact investing.