All of asdfrty6's Comments + Replies

If you can find any high-level coaches of 1v1 games who are interested in running experiments, that's great. I don't have the option of just becoming a pro Starcraft coach in order to run a 'better' experiment. 

I'm also curious why you think this; skills of communication/teamwork are pretty central to what I'm thinking. We already have lots of information about how good smart people are at chess and how smart pro chess players are, too, so it's just a matter of figuring out where individual games lie on the spectrum from something like chess (very strategic) to something like Smash (very twitchy). We have much less information about FPS, so to me it's a much more interesting experiment.

I mean, it's easier to find two people willing to play than ten. So you'll get more data. With one or two teams it will be hard to draw any conclusions at all.

I would prefer not to take on people with a history of being gold players, because it seems like bad science. However, I don't have a ton of interest at the moment, so I may have to reconsider whether it's worth it anyway.

"Picking only smart people is like a school accepting only good students and then miraculously having good grades for their students." - I don't think this is true. Yes, if I picked a bunch of smart students and then my students all turned out to be good at mathematics or programming or Greek, it wouldn't be surprising. However, if I picked people purely...

I am not surprised that a gold background is an undesirable trait. However, this is how we get high rates of side effects for women in drugs sold at stores: testers prefer male subjects over female ones. If humans in the wild have a 20% trait rate and your sample has 1% or 0%, that is going to produce a bad result in its own way. A WEIRD sample is not particularly representative. If a discipline supports multiple frameworks and you recruit based on resonance with one particular framework, then the result tells you less about the frameworks' properties. For example, one could try to prove that chess is an endurance game of bothering to check enough positions, and recruit based on stamina, in order to "prove" it is not a game of intellect.

I remember when balancing away dive was a talking point. At the time, a lot of the teams were squeamish about scrimming other strategies. If teams need to redo the whole strategy stack instead of just adjusting the top layers, they will eventually do it, but it can take a long while.

If you tell a high-rank player to push, they will know to still refrain from being mindlessly suicidal, to not push all the way through spawn, etc. If you describe something's color in grue and bleen, it helps if the receiver of the communication has existing support for those concepts. Even without explicit culture sharing, the learning curve could make some fundamentals evident at the onset, and then, once those are taken into account, more fine-grained concepts can start to make sense. But part of the point is that the incentive gradient to make the distinction doesn't exist at all stages. This can be seen as an aspect of the "smiley face maximiser" error state of the alignment problem: the definitions and concepts that humans actually use don't exist in a neat, context-free way. Telling a human to go "make people smile" results in sensible action, while a literal-minded AI will tile things destructively with inappropriate patterns.

Yeah, this definitely doesn't explain my gold players who spend hours every day in Kovaaks.

No, Elo is not a flat distribution. Roughly 2-3% of accounts are in Grandmaster (4000+), the next 5% in Master (3500-4000), the next ~10% in Diamond (3000-3500), the next 30% in Platinum (2500-3000), and the next 30% in Gold (2000-2500)... but this is skewed for a few reasons. Casual players are more likely to stick to Quick Play and not rank in Competitive, and higher-level players are significantly more likely to have multiple accounts, so the percentage of accounts in...
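Turning those bracket shares into cumulative "top X% or above" cutoffs is just a running sum. A minimal sketch, taking the quoted ranges at rough midpoints (so the exact figures are assumptions, and this ignores the account-skew caveats above):

```python
# Rough rank-bracket shares quoted above (percent of ranked accounts);
# 2.5 for Grandmaster is the midpoint of the quoted "2-3%".
brackets = [
    ("Grandmaster", 2.5),
    ("Master",      5.0),
    ("Diamond",    10.0),
    ("Platinum",   30.0),
    ("Gold",       30.0),
]

cumulative = []  # ("rank", top-X% cutoff for that rank and above)
total = 0.0
for name, share in brackets:
    total += share
    cumulative.append((name, total))
    print(f"{name} and above: top {total:.1f}% of ranked accounts")
```

Under these rough shares, 3500+ (Master and up) works out to around the top 7-8% of ranked accounts, before adjusting for the multi-account and Quick-Play skews mentioned above.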

Huh, my experience doesn't support this. I run an organization that has lower-ranked teams as well as higher-ranked teams. Many of my lower-ranked players have been attending scrims and reviews for years (definitely far more work than the equivalent of casting 100 games) and are still below average. I find that a lot of them don't have good mental tools for integrating information and applying it, or don't signal to me when they've fundamentally misunderstood something, or quickly forget things and reverse improvements, or aren't good at introspecting abou...

I actually think the fun part explains it even more. I have a buddy I game with all the time. I always end up better than they are. They ask for help. I point out something I've identified as a fundamental in the game (the equivalent of aiming/positioning in FPS games, or building workers in RTS games) and some little practice method that I went away and did for 2 or 3 hours one day to get better at that fundamental. Then, every time, they say "that would make it not fun" and just spam some games. Because there's a fun, inefficient way to practice, they do that instead of the less fun, efficient way.

Just to clarify, we're still talking about getting above 3500 when the average is 2500 and pro is 4500? So, getting to the top 20-25% or so of the game? What do you find to be the limiting factor for the people stuck below 3500?

It's my impression that at that sort of rank we're still talking about people who haven't gotten down the basic fundamentals and haven't gotten to the point where higher-level strategy is super important. At the equivalent rank in SC2 you can still pick any random strat you want and work on your fundamentals. It wasn't until around the top 2% that I felt the need to learn actual strategy instead of just "spend all your money as fast as possible". Tons of top-20% players would say "I spent all week practicing this new strategy I saw someone do in the last tournament" but still be floating tons of minerals, because practicing spending money faster is boring.