Some general thoughts from a former Masters-level SC2 player who has also been fairly highly ranked in many other games
There was a famous StarCraft caster who was constantly asked how to become a StarCraft caster by people who said it was their dream. He told them all, "Go record yourself trying to cast 100 games, then send me a message." Literally only one person took him up on that, and now they're a famous StarCraft caster.
My prediction is that people willing to do the work can get good insanely quickly, and people who aren't won't. I think "most people say they are willing to do the work but aren't" explains the vast majority of the phenomenon you call out. You can train a dedicated person to be good really fast, but most coaches find 95% of their clients are people looking to get good quick with no work. Being willing to put in that effort is a far more important variable than raw intellect. If someone is willing to spend 1,000 hours deliberately practicing aiming, but isn't smart enough to keep up with the pros at thinking presciently and can only handle a few very common scenarios, then I still expect them to reach an obscenely high ranking (much like how "never stop building workers" usually gets you to the diamond equivalent in any RTS game without having to practice any other skill).
Huh, my experience doesn't support this. I run an organization that has lower-ranked teams as well as higher-ranked teams. Many of my lower-ranked players have been attending scrims and reviews for years (definitely far more work than the equivalent of casting 100 games) and are still below average. I find that a lot of them don't have good mental tools for integrating information and applying it, or don't signal to me when they've fundamentally misunderstood something, or quickly forget things and reverse improvements, or aren't good at introspecting about how/why they make mistakes.
I think most-people-don't-try-very-hard explains why people are bad at many skills, but it struggles to explain why people are bad at video games. Video games are fun, so it's not difficult to find someone willing to put in 1000 hours. I know lots of people who have put in over 1000 hours and are still bad.
I actually think the fun part explains it even more. I have a buddy I game with all the time, and I always end up better than they are. They ask for help. I point out something I've identified as a fundamental of the game (the equivalent of aiming/positioning in FPS games, or building workers in RTS games) and some little practice method that I went away and did for 2 or 3 hours one day to get better at that fundamental. Then, every time, they say "that would make it not fun" and just spam games instead. Because there's a fun, inefficient way to practice, they do that rather than the less fun, efficient way.
Just to clarify, we're still talking about getting above 3500 when the average is 2500 and pro is 4500? So, getting to the top 20-25% or so of the game? What do you find to be the limiting factor for the people stuck below 3500? My impression is that at that sort of rank we're still talking about people who haven't gotten down the basic fundamentals and haven't reached the point where higher-level strategy is super important. At the equivalent rank in SC2 you can still pick any random strat you want and just work on your fundamentals. It wasn't until around the top 2% that I felt the need to learn actual strategy instead of just "spend all your money as fast as possible". Tons of top-20% players would say "I spent all week practicing this new strategy I saw someone do in the last tournament" but still be floating tons of minerals, because practicing spending money faster is boring.
Yeah, this definitely doesn't explain my gold players who spend hours every day in Kovaaks.
No, Elo is not a flat distribution. Roughly 2-3% of accounts are in Grandmaster (4000+), the next 5% in Master (3500-4000), the next ~10% in Diamond (3000-3500), the next 30% in Platinum (2500-3000), the next 30% in Gold (2000-2500)... but this is skewed for a few reasons. Casual players are more likely to stick to Quick Play and not rank in Competitive, and higher-level players are significantly more likely to have multiple accounts, so the percentage of accounts in higher ranks represents a smaller percent of actual players. Sometimes the top 10 accounts in a region (Europe / Americas / Asia) are held by the same 3 people, playing on several accounts each. So 3500+ is much more of an achievement than top 20-25%.
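The cumulative arithmetic behind those tier shares can be sketched like this (the percentages are the rough figures quoted above, not official numbers; the real distribution varies by season):

```python
# Cumulative share of ranked accounts per tier, using the approximate
# figures from the comment above (illustrative only).
tiers = [
    ("Grandmaster", 4000, 0.03),  # ~2-3% of accounts
    ("Master",      3500, 0.05),
    ("Diamond",     3000, 0.10),
    ("Platinum",    2500, 0.30),
    ("Gold",        2000, 0.30),
]

cumulative = 0.0
for name, floor_sr, share in tiers:
    cumulative += share
    print(f"{name} ({floor_sr}+ SR): top {cumulative:.0%} of ranked accounts")
```

By this rough accounting, 3500+ already puts an account in the top ~8%, before correcting for smurf accounts and unranked casual players.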
I genuinely think that the limiting factor for lots of people stuck below 3500 is related to conceptual understanding, learning or cognition. They can have fundamental concepts explained to them, but they don't really understand them, or they understand what you're telling them about one specific situation but can't generalise it to future situations. I also see lots of players with issues with tilt, mentality, attitude, multitasking, communication and general 'thinking speed'. I know a lot of people who will make the right decision on a 30-second delay, by which point it's a bad decision - that's not "reflexes"; it's how well you can offload concepts to System 1 so you see things faster. Keep in mind this is from my perspective as mainly a scrim/tourney coach; I don't really see individual ladder games, so the play I tend to look at is significantly more strategic and less mechanical. There's a reason I specified I think they could scrim at 3500. I see consistently poor group decision-making from teams below 3500.
If you can find any high-level coaches of 1v1 games who are interested in running experiments, that's great. I don't have the option of just becoming a pro Starcraft coach in order to run a 'better' experiment.
I'm also curious why you think this; skills of communication/teamwork are pretty central to what I'm thinking. We already have lots of information about how good smart people are at chess and how smart pro chess players are, too, so it's just a matter of figuring out where individual games lie on the spectrum from something like chess (very strategic) to something like Smash (very twitchy). We have much less information about FPS, so to me it's a much more interesting experiment.
I mean, it's easier to find two people willing to play than ten. So you'll get more data. With one or two teams it will be hard to draw any conclusions at all.
I have a significant history of being a gold player, so that makes me think I wouldn't be eligible for this thing. A/B testing between "natural learning" and "proper learning" could still be relevant.
If the ability of the good players consisted of factors that could be communicated or transferred, and people had the motivation to do so, the good players would lose their edge. Different routes might have different conveyance limits. For example, it is very hard to give verbal instructions on how to ride a bike effectively, but bike skills are still common, as a little experimental practice quickly acquires them. It is not a competitive market in the sense that everybody does the same thing, as there are actual barriers to entry. Some of the barriers might play larger or smaller roles, but everybody doesn't collapse to a single rating.
Picking only smart people is like a school accepting only good students and then miraculously having good grades for their students. If the point is to measure the impact of coaching, it might make sense to avoid being overly selective. However, if the focus is on the minimum time and effort to hit a highish bar, that might make sense.
With regard to "advice sink time", I could also describe that as "low cognitive autonomy", "high suggestibility" or "meta-monkeying". There is also the issue of whether a communication succeeds or not, and whether that is due to the success or failure of the transmitter or the receiver. Concepts made by 4000s to be consumed by 4000 people might be hard for others to adopt, not because of cognitive domination but because they are relevant to that style and culture. What I have seen opinion leaders do is say that some advice for pros should not be followed by low-SR people, and that some people actively hurt themselves by trying. "Under gold, just get your aim correct and don't even think about anything else."
There are probably bad memes about being "super good at reaction speed". But there are also differences within anticipation. There is at least the distinction between remembering and calculating what is going to happen: being habituated to what happens in situations like these (i.e. memory) versus extrapolating the current situation into the future. I think, for example, high-level chess players lose the ability to articulate particular reasons why moves are good or bad. So for games there might be situations where extrapolation counterintuitively gives a bad result and a pure associative link can get past this.
There is also a difference between being able to execute a strategy or tactic that is good in the current scene and being able to adapt and come up with such things.
A lot of people play games to be entertained, to have fun. Some pros can gain pleasure from being good, but it seems to tend to have a "harsh practice, big payoff" structure. The problem for the casual player is that learning "properly lethal" techniques is fun-negative in the first half. This prevents people from randomly fluctuating into them. The problem is even worse if "playing crappy" produces actively more entertaining games. In a game where you are fairly matchmade, it is always a challenge, but the entertainment gained from different styles of play might not be similar. That is, pro-like games can be more fragile in their entertainment payout than unskilled-versus-unskilled games. This can form a phenomenon where a player learns that if they improve, they just get put into games where they have more chances to make unfun mistakes, which can effectively make learning punishing.
I am also interested in the hypothesis of how much it would help if we took randoms and artificially made them scrim and be deliberate for, say, 2 weeks, but didn't provide coaching.
"You should not expect to get anything out of this other than ~80-100 hours of fun video game coaching." vs "I cannot guarantee you will have fun." - these two are contradictory. In agreeing to a group setting and a schedule, you can have experiences which would not be possible playing solo against the random wrath of matchmaking. It would make sense to me that if the participants are expected to commit to putting in the time, there could/would be a symmetric commitment from the coaches. Currently it seems you would bail out the second you think you are wrong. If you don't actively sabotage the fun, it can probably be expected to be net fun, but the challenge of coaching is going to be how to do things that are indifferent or contrary to the fun-gradient.
I would prefer not to take on people with history of being gold players because it seems like bad science. However, I don't have a ton of interest at the moment, so I might consider whether it's a good idea?
"Picking only smart people is like a school accepting only good students and then miraclously having good grades for their students." - I don't think this is true. Yes, if I picked a bunch of smart students and then my students all turned out to be good at mathematics or programming or Greek, it wouldn't be surprising. However, if I picked people purely on IQ and then it turned out they were all very good marathon runners, it actually would be very surprising! My point is that many people think that esports is a similar domain to marathon running (you primarily need genetics/reflexes and lots of time) whereas I think esports is in a similar domain to mathematics (smart people can become good at it quickly). This is precisely the point I am trying to prove; I am not trying to prove that I am a good coach or measure the impact of coaching or anything along those lines. That would be sort of egotistical.
"Concepts made by 4000s to be consumed by 4000 people might be hard for others to adopt not because of cognitive domination but it being relevant to that style and culture." - This is occasionally true, but primarily because of the way other people play in lower SR games. For instance, I might tell a 4400 player to do something which relies on the assumption that they will be backed up and supported by others, whereas in gold your teammates will leave you to die and you cannot demand so many resources. I think if you train an entire team simultaneously, this effect is wholly nullified. 4400 players are just better than gold players, and they play the strategies they play because they win, not because of a stylistic difference.
"There is also a difference of being able to execute" - yes, but this is not relevant for the goal of reaching ~3500. It is not even relevant if your goal was 4200. This is relevant if you wanted to become a 4500+ coach, and basically never relevant as a player.
On learning being less fun - I suspect this is significantly less true for LW-y nerdy types, who will enjoy a game more (not less) if it's a tricky intellectual strategy game rather than a mindless spray-and-pray aiming adventure (which is the primary way I see the 'but doing it properly wouldn't be fun' thing ever come up).
I can expect that you will have fun without guaranteeing that you will have fun. I think there is a high probability that you will have fun.
I am not planning on bailing out the second I think I am wrong. However, I will cancel the "mandatory activities that all six of you have to do as a team" part if I think the amount of fun/success you will have is not worth the mandatory-attendance-activity-time. In such a scenario I'm still more than happy to do optional one-on-one work if someone just wants to be better at video games for their own sake.
I am not surprised that a gold background is an undesirable trait. However, this is how we get higher side-effect rates for women in over-the-counter drugs: because testers prefer male subjects over female. If humans in the wild have a 20% trait rate and your sample has 1% or 0%, that is going to lead to a bad result in its own way. A WEIRD sample is not particularly representative.
If you have a discipline that supports multiple frameworks and recruit based on resonance with a particular framework, then the result tells you less about the framework's properties. For example, one could try to prove that chess is an endurance game of bothering to check enough positions, and recruit based on stamina in order to "prove" it is not a game of intellect.
I remember when balancing away dive was a talking point. Then a lot of the teams were squeamish about scrimming other strategies. If you need to redo the whole strategy stack instead of just adjusting the top layers, teams will eventually do it, but it can take a long while.
If you tell a high-rank player to push, they will know to still refrain from being mindlessly suicidal, to not push all the way through spawn, etc. If you describe something's color in grue and bleen, it helps if the receiver of the communication has existing support for those concepts. Even if there is no explicit culture-sharing, the learning curve could provide a way for some fundamentals to be evident at the onset, and then, once those are taken into account, more fine-grained concepts can make sense. But part of the point is that the incentive gradient to make the distinction doesn't exist at all stages. This can be seen as an aspect of the "smiley face maximiser" error state of the alignment problem: the definitions and concepts that humans actually use don't exist in a neat, context-free way. Telling a human to "make people smile" results in sensible action, while a literal-minded AI will tile things destructively with inappropriate patterns.
I am interested in conducting an experiment involving learning & teaching gaming. I am recruiting either 6 or 12 participants to attempt to learn a video game intensively for around 2 weeks. This is not paid either way (I am not paying for volunteers, and you are not paying for coaching) and far from a professional academic thing; more of a bet between friends. It's getting published to Twitter, not published to a journal. We'll be learning Overwatch because that's my personal area of expertise.
Conventional wisdom is that gamers take months or years of practice to get good - and that unchangeable factors like in-built 'reaction times', 'natural talent' or 'instincts' are extremely important. This would suggest that, if I took 6-12 people who haven't previously played many FPS games and tried to teach them Overwatch for a couple of weeks, they wouldn't be able to reach a very high level; perhaps they might be average at best by the end of it. My hypothesis is that a team built by a good coach could scrim at 3500+ SR within a couple of weeks. (SR, or Skill Rating, is an Elo-like system.) For context, the average is around 2500, I currently work with players around 4300-4500, and pro players will typically peak around 4600+. I agree that this is a very ambitious and daring hypothesis, which is why I should test it and lose Bayes points (and also coaching points) if I'm wrong. On the other hand, if I'm right, I totally deserve more Twitch followers.
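For readers unfamiliar with rating systems: Overwatch's actual SR formula is proprietary, but the classic Elo update it resembles is simple. This is a generic sketch of standard Elo (the 400-point scale and K-factor are the conventional chess defaults, not anything Overwatch-specific):

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score (win probability) for player A vs player B under classic Elo."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0) -> float:
    """A's new rating after one game (score_a: 1 = win, 0.5 = draw, 0 = loss)."""
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# Under standard Elo scaling, a 3500 player is overwhelmingly favoured
# against the 2500 average:
print(round(elo_expected(3500, 2500), 3))  # 0.997
```

The point of the sketch: under any Elo-like system, a 1000-point gap is not "a bit better", it's near-certain victory, which is why "3500 within two weeks" is a strong claim.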
(For the people going "video gamers have coaches???" - yes, the good ones do, and for team games there is a lot of theory and concepts to learn. Talking shop with high-level coaches and analysts involves just as much jargon and verbal chess as arguing with smart scientists.)
Some beliefs that are influencing my hypothesis here:
Some things that push against my beliefs:
This is mostly aimed at people who are students, on holiday, not working, or on pandemic-related breaks from work. You might be able to balance learning with a full-time job, but I don't think the experiment will work if you're getting insufficient sleep. I wouldn't plan on beginning until after my team finishes some important tournaments, so we're looking at perhaps late March or early April. If I don't get enough interested people, I may not do the experiment at all, or may work with some individuals rather than a team of 6. If it becomes obvious that I'm wrong within a few days of starting (like if participants just can't figure out how to pilot their characters at all), I can abort. I am currently not working due to the pandemic, but would have to cancel if that changed.
There will probably be a daily schedule involving 2-4 hours of team practice, 2-ish hours of theory, and an expectation of some individual or 1-on-1 work beyond that. As part of the whole data-recording-because-I-swear-this-is-for-science thing, I'd quite like it if everyone involved kept some kind of journal/blog about what they're learning. This is overall a ~100-120 hour experiment, which I honestly wouldn't expect anyone to be interested in doing unpaid (self included), except that right now there's a decent contingent of people sitting at home bored due to COVID. If there's no interest in this, but interest in a more stretched-out 2-hours-a-day-for-8-weeks version, I might consider doing that instead.
You should not expect to get anything out of this other than ~80-100 hours of fun video game coaching. You will not be a pro player after this. You might not even be an average player after this. I am not a pro and I don't have an official certificate of Knowing How To Coach Video Games because those aren't a thing. All my gamer friends are kind of feminist communists, so I reserve the right to reject you if you don't get along well with the guest coaches and mentors I want to bring in. I cannot guarantee you will have fun. You should be able to use a mouse and keyboard. I might be able to help with acquiring a copy of Overwatch but you will need a computer that can run it with 30+ fps (frames per second).
I think this website has a private message feature? So just write me there if interested and I can put together a Discord server. This is super tentative so please don't like, promote it or anything, I don't really know how this site works, I was just told it'd be a good place to post. I hope I've set this up as a personal blog post correctly...
I am also happy to take questions about esports generally or Overwatch theory.