[Shoutout to the LW team for very helpful (and free!) feedback on this post.]

I. Prelude

When my wife first started playing the wonderful action roguelike Hades, she got stuck in Asphodel. Most Hades levels involve dodging arrows, lobbed bombs, melee monsters, and spike traps, all whilst hacking and slashing as quickly as possible, but Asphodel adds an extra twist: in this particular charming suburb of the Greek underworld, you need to handle all of the above whilst also trying not to step in lava. Most of the islands in Asphodel are narrower than your dash is long, so it’s hard not to dash straight off solid ground into piping-hot doom.

I gave my wife some pointers about upgrade choices (*cough* Athena dash *cough*) and enemy attack patterns, but most of my advice was marginally helpful at best. She probably died in lava another half-dozen times. One quick trick, however, had an instant and visible effect.

"Stare at yourself."

Watch your step.

By watching my wife play, I came to realize that she was making one fundamental mistake: her eyes were in the wrong place. Instead of watching her own character Zagreus, she spent most of her time staring at the enemies and trying to react to their movements and attacks.

Hades is almost a bullet hell game: avoiding damage is the name of the game. Eighty percent of the time your eyes need to be trained on Zagreus's toned protagonist butt to make sure he dodges precisely away from, out of, or straight through enemy attacks. Meanwhile, most of Zagreus's own attacks hit large areas, so tracking enemies with peripheral vision is enough to aim your attacks in the right general direction. Once my wife learned to fix her eyes on Zagreus, she made it through Asphodel in only a few attempts.

This is a post about the general skill of focusing your eyes, and your attention, to the right place. Instead of the standard questions "How do you make good decisions based on what you see?" and "How do you get better at executing those decisions?", this post focuses on a question further upstream: "Where should your eyes be placed to receive the right information in the first place?"

In Part II, I describe five archetypal video games, each distinguished in my memory by the answer to "Where do your eyes go?" that I learned from it, and derive five general lessons about attention-paying. Part II can be safely skipped by those allergic to video games.

In Part III, I apply these lessons to three specific minigames that folks struggle with in graduate school: research meetings, seminar talks, and paper-reading. In all three cases, there can be an overwhelming amount of information to attend to, and the name of the game is to focus your eyes properly to perceive the most valuable subset.

II. Lessons from Video Games

Me or You?

Hades and Dark Souls are similar games in many respects. Both live in the same general genre of action RPGs, both share the core gameplay loop "kill, die, learn, repeat," and both are widely acknowledged to be among the best games of all time. Their visible differences are mostly aesthetic: for example, Hades' storytelling is more lighthearted, Dark Souls' more nonexistent.

But there is one striking difference between my experiences of these two games: in Hades I stared at myself, and in Dark Souls I stared at the enemy. Why?

One answer is obvious: in Dark Souls, the camera follows you around over your shoulder, so you're forced to stare at the enemies, while in Hades the isometric camera is centered on your own character. This is good game design because the camera itself gently suggests the right place for your eyes to focus, but it doesn't really explain why that place is right.

The more interesting answer is that your eyes go where you need the most precise information.

In both games, gameplay centers around reacting to information to avoid enemy attacks, but what precisely you need to react to is completely different. Briefly, you need spatial precision in Hades, and temporal precision in Dark Souls. 

In Hades, an enemy winds up and lobs a big sparkly bomb. The game marks where it'll land three seconds later as a big red circle. You don't need to know precisely when the bomb was lobbed and by whom - getting out of the red circle one second early is fine. But you do need to see precisely where it'll land so you can dash out of the blast zone correctly. When there are dozens of bombs and projectiles flying across the screen, there might be only a tiny patch of safe ground for you to dash to, and being off by an inch in any direction spells disaster. So you center your vision on yourself and the ground around you, to get the highest level of spatial precision about incoming attacks.

In Dark Souls, a boss winds up and launches a three-hit combo: left swipe, right swipe, pause, lunge. As long as you know precisely when it's coming, where you're standing doesn't matter all that much - the boss’s ultra greatsword hits such a huge area that you won’t be able to dash away in time regardless. Instead, the way to avoid damage is to press the roll button in the right three 0.2-second intervals and enjoy those sweet invincibility frames. The really fun part, though? The boss actually has five different attack patterns, and whether he's doing this particular one depends on the direction his knees move in the wind-up animation. So you better be staring at the enemy in Dark Souls to react at precisely the right time.
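To make the temporal-precision point concrete, here is a toy model of roll timing in Python (the 0.2-second window is the figure above; nothing here is actual Dark Souls frame data):

```python
# Toy model of timing-based dodging: a roll grants invincibility for a
# short window, and a hit is avoided iff it lands inside that window.
# The 0.2-second window is illustrative, not real frame data.

def dodges(hit_time, roll_time, iframes=0.2):
    """Return True if a roll started at roll_time avoids a hit at hit_time."""
    return roll_time <= hit_time <= roll_time + iframes

print(dodges(hit_time=1.55, roll_time=1.5))  # -> True (inside the window)
print(dodges(hit_time=1.9, roll_time=1.5))   # -> False (rolled too early)
```

The point of the sketch: where you stand never enters the calculation, only when you press the button - which is exactly why your eyes belong on the boss's wind-up animation.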

Human eyes have a limited amount of high-precision central vision, so make it count. Don’t spend it where peripheral vision would do just as well.

Present or Future?

Rhythm games have been popular for a long time, so you’ve probably played one of the greats: Guitar Hero, Beat Saber, Osu!, piano. Let's take Osu! as a prototypical example. The core gameplay is simple: circles appear on the screen to the beat of a song, and to earn the most points, you click them accurately and at the right rhythm. Harder beatmaps have smaller circles that are further apart and more numerous; a one-star beatmap might have you clicking every other second along gentle arcs, while a five-star map for the same song forces your cursor to fly back and forth across the screen eight times a second.

There’s one key indicator that I’m mastering a piece in a rhythm game: my eyes are looking farther ahead. When learning a piano piece for the first time, I start off just staring at and trying to hit the immediate next note. But as I get better at the piece, instead of looking at the very next note I have to play, I can look two or three notes ahead, or even prepare for an upcoming difficulty halfway down the page. My fingers lag way behind the part I’m thinking about.

Exercise: head on over to https://play.typeracer.com/ and play a few races, paying attention to how far ahead you can read compared to what you’re currently typing. I predict that with more practice, you’ll read further and further ahead of your fingers, and your typing will be smoother for it. It’s a quasi-dissociative experience to watch yourself queue up commands for your own body two seconds in advance.

Act on Information

As a weak Starcraft player (and I remain one), I went into every game with the same simple plan: build up my economy to a certain size, then switch over to producing military units. When my army hit the supply cap, I’d send it flooding towards the enemy base.

At some point, I heard that “Scouting is Good,” so at the beginning of each match I’d waste precious resources and mental energy sending workers to scout out what my enemy was doing. Unfortunately, acquiring information was as far as my understanding of scouting extended. Regardless of what I saw at the enemy base, I’d continue following my one cookie-cutter build order. At very best, if I saw a particularly dangerous army coming my way, I’d react by executing that build order extra urgently. This amounted to the veins in my forehead popping a bit more while nothing tangible changed about my gameplay.

To put your eyes in the right place is to gather the right information, and the point of information-gathering is to improve decision-making. Conversely, the best way to improve at information-gathering is to act on information. If you don’t act on information, not only do you not benefit from gathering it, you also never learn to gather it better. If I went back to learn scouting in Starcraft again, I’d start by building a flowchart of what choices I’d change depending on the information I received.
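Such a flowchart can be as simple as a lookup table. Here is a hypothetical sketch in Python (the observations and reactions are made up for illustration, not real build orders):

```python
# Hypothetical sketch of "act on information": map what the scout sees
# to a concrete build-order adjustment, so scouting changes decisions.
# All observation names and reactions here are illustrative.

SCOUT_REACTIONS = {
    "early_military": "cut workers, build defenses now",
    "fast_expansion": "expand too, or punish with early pressure",
    "tech_rush": "scout again soon; prepare anti-air or detection",
    "standard": "continue the default build",
}

def react_to_scout(observation):
    # Fall back to the standard plan when the scout sees nothing conclusive.
    return SCOUT_REACTIONS.get(observation, SCOUT_REACTIONS["standard"])

print(react_to_scout("early_military"))  # -> cut workers, build defenses now
```

The table itself matters less than the discipline it enforces: every piece of scouted information has a branch attached, so none of it is gathered for nothing.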

Filter Aggressively

I was introduced to Dota 2 and spent about four hundred hours on it in the summer of 2020 (of course, this makes me an absolute beginner, so take this part with a healthy pinch of salt). Dota is overwhelming because of the colossal amount of information and systems presented to you - hundreds of heroes and abilities, hundreds of items and item interactions, and multitudes of counterintuitive but fascinating mechanics that must have gone through the “it’s not a bug, it’s a feature” pipeline.

To play Dota is to be constantly buffeted by information. I watch the health bars of the enemy minions to last hit them properly, or I won’t make money. I watch my own minions to deny them from my opponent. I track my gold to buy the items I need as soon as I can afford them. I pay attention to the minimap to make sure nobody from the enemy team is coming around to gank. I watch my team’s health and mana bars, and the enemy team’s, to look for a weak link or opportunity to heal. I can click on the enemy heroes to look at their inventories, figure out who is strong and who is weak, and react accordingly. And this might all be extraneous information: maybe the only important information is the timer at the top of the screen, which says the game started 1 minute and 50 seconds ago.

The clock might be the most important information on this screen.

To understand why the game timer might be the most decision-relevant information out of all of the above, you have to understand a particularly twisted game mechanic in Dota, the jungle monster respawn system. You see, the jungle monsters in Dota each spawn in a rectangular yellow box that you can make visible by holding ALT. The game is coded such that, every minute, a monster respawns if its spawn box is empty. You read that right - the monsters don’t have to die to respawn, they just have to leave the box. Exploiting this respawn mechanic to make many copies of the same monster is called “stacking,” and is a key job for support players: if you attack the jungle monsters about 7 seconds before the minute, they’ll chase you just far enough that a duplicate copy of the monster spawns. This means that near the beginning of the game, a good support player can “stack” two, three, or even four copies of a jungle monster for his teammates to kill later, even if nobody on the entire team is strong enough to fight a single one directly. Fifteen minutes later, leveled up teammates can come back and kill the entire stack for massive amounts of gold and experience.
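The timing rule above reduces to a one-line calculation. Here is a sketch in Python (the 7-second offset is the post's rough figure; real pull timings vary by camp and are not exact game values):

```python
# Sketch of the stacking timing rule: the game respawns a jungle monster
# on each minute mark if its spawn box is empty, so you aggro the camp
# roughly 7 seconds beforehand. The offset here is illustrative.

def stack_pull_times(minutes, pull_offset_seconds=7):
    """Return mm:ss timestamps at which to aggro a camp so it has left
    its spawn box by the on-the-minute respawn check."""
    times = []
    for m in range(1, minutes + 1):
        total = m * 60 - pull_offset_seconds  # aggro ~7s before the minute
        times.append(f"{total // 60}:{total % 60:02d}")
    return times

print(stack_pull_times(3))  # -> ['0:53', '1:53', '2:53']
```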

Stacking is further complicated by an endless litany of factors, but the most interesting is probably this: the enemy can easily disrupt your stacks. The game code only checks if the yellow spawn box is empty, not what’s inside the box. A discerning opponent can foil your whole stacking game-plan just by walking into the appropriate box at the 1:00 mark and standing there for a second. More deviously yet, he might buy an invisible item at the beginning of the game to drop in the box before you even reach the area.

Anyhow, some support players have a job to do every minute: either stacking or a closely related task called “pulling.” While they’re away doing this job, the hero they’re supporting is left vulnerable to a two-on-one. This is where the game timer comes in: the enemy support leaving the lane might be my best opportunity to be aggressive and land an early kill. And so, out of all the fancy information on the screen, I need to be checking the game timer frequently. Some early game strategies hinge on correctly launching all-out attacks at 1:50, and not, say, 1:30.

Treat the world as if it’s out to razzle dazzle you, and your job is to get the sequins out of your eyes. Filter aggressively for the decision-relevant information, which may not be obvious at all. 

Looking Outside the Game

There is a certain class of video games that are difficult, if not impossible, to play without a wiki or spreadsheet open on a second monitor. This can be due to poor game design, but just as often it’s the way the game is meant to be played for good reason, and it’s the mark of an inflexible mind to refuse to look outside the game when this is necessary.

Consider Kerbal Space Program. You can learn the basics by playing through the tutorials, and can have plenty of fun just exploring the systems. But unless you’re a literal rocket scientist you’ll miss many of the deep secrets to be learned through this game. There’s no way you’ll come up with the optimal gravity turn trajectory yourself. If you don’t do a little Googling or a lot of trial-and-error, your attempts at aerobraking will probably devolve into unintentional lithobraking. You’ll have a nightmarish time building a spaceplane without knowing the correct relationship between center of mass, center of lift, and center of thrust, and it’s highly unlikely you’ll figure out that a rocket imparts a fixed amount of momentum per unit of fuel, not a fixed amount of energy, or that you can exploit this fact by accelerating at periapsis. And god forbid you try to eyeball the perfect transfer window in the next in-game decade to make four sequential gravity assists like Voyager 2.
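That remark about momentum versus energy is the Oberth effect, and a one-line calculation (standard orbital mechanics, not from the game itself) shows why burning at periapsis pays:

```latex
% Kinetic energy gained by a burn of fixed \Delta v at current speed v:
\Delta E \;=\; \tfrac{1}{2}\,(v + \Delta v)^2 - \tfrac{1}{2}\,v^2
         \;=\; v\,\Delta v \;+\; \tfrac{1}{2}\,\Delta v^2 .
```

The cross term \(v\,\Delta v\) means the same burn buys the most orbital energy where the craft is already moving fastest - that is, at periapsis.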

Some games are meant to be played on multiple monitors.

The right place to look can be outside the game entirely. Whether it’s looking up guides or wikis, plugging in numbers to a spreadsheet to calculate optimal item builds, or using online tools to find the best transfer windows between planets, these can be the right place to put your eyes instead of on the game window itself.

III. Applications to Research

In this last part of the post, I apply the principles above to three core minigames in academic mathematics: talks, papers, and meetings. For each of these minigames, we’ll try to figure out the best places for our eyes to go, informed by the following questions.

  1. Should I focus on myself or the other person?
  2. How far into the future should I be looking?
  3. How can I act on the information I receive?
  4. Out of all the information being thrown at me, what is decision-relevant?
  5. Might the best way to get better at the game be outside the game itself?

Talks


When giving a talk, self-consciousness is akin to keeping your eyes on yourself. Telling yourself not to be self-conscious is about as useful as trying not to think about the polar bear; negative effort rarely helps. Move your eyes elsewhere: to the future and to other people. Rehearse your presentation and anticipate the most difficult parts to explain. Pay attention to your audience and actually look at them. See if you can figure out who is engaged and who is daydreaming. Find one or two audience members with lively facial expressions, study them, and act on that information - their furrowed brows will tell you whether you’re going too fast.

When listening to a talk, realize that there will typically be more information than any one audience member can digest. Sometimes this is the fault of the speaker, but just as often, this information overload is by design and functions similarly to price segmentation. Noga Alon recently joked to me, “Going to a talk is difficult for everyone because nobody understands the whole thing, but it’s especially difficult for undergraduates because they still expect to.” Information at a variety of levels of abstraction is presented in the same talk, so that audience members with widely varying backgrounds can all get something from it. An undergraduate student might understand only the opening slide, and a graduate student the first ten minutes, while that one inquisitive faculty member will be the only person who understands the cryptic throwaway remarks about connections to class field theory at the very end. Filter aggressively for the parts of the talk aimed at you in particular.

Remember that the topic of the talk itself is rarely as interesting as the background material mentioned in passing in the first ten minutes. These classics - the core theorems and examples mentioned time and time again, the simplest proof techniques that appear over and over - are the real gems if you don’t know them already. Sometimes you can learn a whole new subfield by sitting in on a number of talks in the area and only listening to the first ten minutes of each. Bring something discreet to keep yourself occupied for the other fifty.

Remember also: information you never act on is useless information. A couple years back, I was taking a nap in a computer science lecture about a variation on an Important Old Theorem. As I was nodding off, I noticed to my surprise that my adviser, sitting nearby, was quite engaged with the talk. I was very curious what caught his attention, and he enlightened me on our walk back to the math department: instead of listening to the talk, he’d spent most of the hour looking for a better proof of the Important Old Theorem introduced in the first five minutes. From this, I learned that the most important information in a talk might be an unsolved problem, because it is certainly the easiest information to act on.

My PhD adviser after a seminar talk.

This conversation with my adviser had a great effect on me, and every so often I practice this perspective by going to a talk for the sole purpose of hearing new problems. As soon as I hear an interesting problem, I zone out and try to solve it immediately. Anecdotally, it worked a couple times.

Papers


Most of this section was already covered in Of Math and Memory (part I, part II, part III), but I’ll reiterate the relevant bits here. Mathematical proofs are rarely meant to be written or read linearly. Instead, they are ideally arranged as a collection of outlines of increasing detail: a five-word title, a paragraph-long abstract, two pages of introduction, a four-page technical outline, and only then the complete 20-page proof. Each outline is higher-resolution than the last, giving readers the chance to pick the level of understanding that suits their needs.

This organization is meant to solve one basic difficulty: it’s very hard to follow a proof without knowing where it’s going. Without reading the proof outline, you can’t tell which of the ten lemmas are boilerplate and which are critical innovations. Without running through a calculation at a high level, there’s no way to know which of alpha, n, epsilon, x, and y are important to track, and which are throwaway error terms. Reading through a paper line-by-line without knowing where you’re going, it’s easy to get lost in the weeds and dragged down endless rabbit holes - black boxes from previous papers to unpack, open problems to mull over, lightly explained computations which might contain typos - and while these rabbit holes might be worth exploring, you would do well to map them all out before picking one to dive into. 

When reading a paper, orient your eyes towards the future whenever possible, like reading several words ahead in a game of TypeRacer. Scan the paper at a high level to understand the big picture, then read all the theorem and lemma statements to see how they fit together, and only then decide which weeds to get into. Only check a difficult computation once you already know what its payoff will be.

Research Meetings


In research, the vast majority of your time is spent in one of two ways: bashing your head against a wall alone, or bashing your head against a wall with company. These two activities can be traded more or less freely for each other to suit your level of introversion, and I’ve found that I usually prefer working on research with others over working alone.

One of the pitfalls of working with others, especially when you are young and underconfident, is that you can naturally slide into the role Richard Hamming calls a “sound absorber”:

For myself I find it desirable to talk to other people; but a session of brainstorming is seldom worthwhile. I do go in to strictly talk to somebody and say, “Look, I think there has to be something here. Here's what I think I see ...” and then begin talking back and forth. But you want to pick capable people. To use another analogy, you know the idea called the ‘critical mass.’ If you have enough stuff you have critical mass. There is also the idea I used to call ‘sound absorbers’. When you get too many sound absorbers, you give out an idea and they merely say, “Yes, yes, yes.” What you want to do is get that critical mass in action; “Yes, that reminds me of so and so,” or, “Have you thought about that or this?” When you talk to other people, you want to get rid of those sound absorbers who are nice people but merely say, “Oh yes,” and to find those who will stimulate you right back.

~ Richard Hamming, You and Your Research

On top of underconfidence, I suspect that the chief mistake “sound absorbers” make is that they have the wrong idea about where their eyes should be in a research meeting. I think a “sound absorber” is completely fixated on personally solving the problem. Not having generated interesting ideas for solving the problem, they contribute nothing at all. Again, this mistake is akin to being too self-conscious, and keeping your eyes on yourself when there is no useful information to be had there.

Personally solving the problem is certainly a great outcome for a research meeting, but it’s by no means the only goal. First of all, there’s a world of difference between personally solving the problem and getting the problem solved. If your collaborators are any good, they are just as likely to come up with the next crucial idea as you are, so truly optimizing for getting the problem solved involves spending a substantial fraction of your time supporting the thought processes of others. Repeat their thoughts back to them, write down and check their calculations, draw a nice picture or analogy for what they’re doing on the blackboard, project your enthusiasm for their insights. You can do all this without generating a single original thought, and still help with getting the problem solved.

Getting the problem solved is a higher value than personally solving the problem, but higher still is the value of improving at problem-solving in general, and this holds doubly if you’re still performing your Gravity Turn. Especially when meeting with your PhD adviser or another senior mentor, focus a substantial minority of your attention on modelling the thought processes of your mentor. Figure out and note down what examples and lemmas they pull out of their toolbox time and time again, what calculations and simplifications they do instinctively, and how they react when stuck on a problem. Learn the particularities of how they perform literature searches, who they ask for help about what, and how they decide if and when to give up. None of these decisions are arbitrary; they form an embodied model of the terrain in your field. Watching the other person can often be a better use of your time than staring blankly at the problem.

In hurried conclusion, research is like a simpler version of Dota: we are bombarded by information on all fronts, most of which we don’t even notice, and tasked to make complicated, heavy-tailed decisions. A fundamental skill in any such game is orienting your eyes - literally and figuratively - at the most valuable and decision-relevant information. Reacting to and executing on this information comes later, but you can never act properly if you don’t even see what you need to do.

23 comments

Neat! I've never figured out the trick of getting value from a math paper but maybe this will help!

When reading a paper, orient your eyes towards the future whenever possible, like reading several words ahead in a game of TypeRacer. Scan the paper at a high level to understand the big picture, then read all the theorem and lemma statements to see how they fit together, and only then decide which weeds to get into. Only check a difficult computation once you already know what its payoff will be.

The way I read the papers that I can get something out of (which are all experimental papers that collect givens from the physical world) is: (1) read the abstract to figure out if downloading the PDF is worth the effort, (2) look at the figures and if the figures tell a story that might be important if true, then (3) double check by reading the conclusions to see if the authors know the story they are themselves trying to justify and then (4) read the methods to figure out if the experiment had subtle bullshit hiding in it that grossly invalidates the central idea, and finally (5) read the methods carefully to figure out how to copy their techniques.

In the olden days (maybe still true, I stopped bothering to try to verify this) computer science papers were totally garbage for the most part, because they never included the code or a link to the code, which is the only part of them that would help with step four or five "for really reals" and so all of academic CS was basically just a bunch of "real science" larping :-(

I'm interested in trying your technique on a few math papers. Do you know of any classics that would be good to practice on?

Here are two recentish papers I really enjoyed reading, which I think are fairly reasonable to approach. Some of the serious technical details might be out of reach.



I enjoyed this quite a bit. Vision is very important in sports as well, but I hadn't thought to apply it to other areas, despite generally being into applying sports lessons to research (i.e. https://bounded-regret.ghost.io/film-study/).

In sports, you have to choose between watching the person you're guarding and watching the ball / center of play. Or if you're on offense, between watching where you're going and watching the ball. Eye contact is also important for (some) passing.

What's most interesting is the second-level version of this, where good players watch their opponent's gaze (for instance, making a move exactly when the opponent's gaze moves somewhere else). I wonder if there's an analog in video games / research?

I love the film study post, thanks for linking! This all reminds me of a "fishbowl exercise" they used to run at the MIRI Summer Fellows program, where everyone crowded around for half an hour and watched two researchers do research. I suppose the main worry about transporting such exercises to research is that you end up watching something like this.

Sub-example for music games:  where you look might depend on your level of skill at the game!  Beginner DDR players have to look down from time to time to re-center themselves on the dance pad, because they don't know how to feel where they're stepping.  Intermediate DDR players need to turn the scroll rate fast enough to read the patterns, but want them slow enough that they have time to read ahead and process the pattern  ("is this upcoming pattern a crossover?").  Advanced DDR players have no problem decoding patterns, and are more focused on accurate timing.  So, "visual" players will turn the scroll rate even faster, and focus on where the arrows overlap the outlines as they step, using the slight offsets in their steps to adjust.

Your intuition for Starcraft is good (adapt based on info and branch plans), but you might be surprised where the biggest gain for attention will be: the bottom bar of the screen with your own production buildings selected!  Good players will hotkey their production buildings and constantly cycle through them (even as their screen focuses on the army) to see which buildings are ready for the next round of production.  Why?  Losing a battle due to bad position and control puts you behind.  Having a ton of money in the bank and no replacement army ready will lose you the game.  It's better to slightly lose one battle if you have a replacement army ready to go and your opponent doesn't.  This is also why good players will try and harass.  It's not just the direct benefit, you're also costing your opponent's attention.     

Curated. I'd never given so much thought to where I place my attention in a research context; it's great to get such clear directions. Pedagogically, the commonalities with video games make it easier to both understand and remember the lessons.

I've played the Touhou bullet hell series (credentials: beat Perfect Cherry Blossom on Phantasm but haven't played the more recent ones as much) and viewpoint on that:

When you're first learning, your eyes are on your character. As you get better, your eyes are actually on a broader area that isn't focused on your character, most commonly somewhat above them (since most bullets come from above).

While you're learning on low difficulties, it suffices to keep your eyes on your character, see every bullet that gets close enough to be a threat, and reactively dodge it. On higher difficulties, you need to be thinking ahead, dodging towards areas of low incoming bullet density, reacting to high-level patterns, or manipulating shot patterns to not get walled off, and you can't do this if you are tunnel-visioned on the small bit of screen around your character. Most commonly your eyes will be a few inches above your character watching incoming bullets, and you'll be executing dodges based on a kind of peripheral vision and memory of where the bullets that passed through your focus one second ago were going. (It's hard to describe).

I tried Touhou Perfect Cherry Blossom at one point and never got past any difficulty, so I defer to your expertise here. There's a general skill of getting better at focusing one's attention in tandem with getting better at execution and this post is only a first approximation.

This makes me connect with the post the other day about research speedruns. Those posts interested me because it was a little like looking over the writer’s shoulder, and seeing how they approached the challenge. It seems to me like this could be another useful rationalist training program. I imagine I could learn from both roles, and suspect many others could too.

Yea, I think there's some general pattern of the form:

  1. Research is weird and mysterious.
  2. Instead of studying research, why don't we study the minds that do research?
  3. But minds are equally weird and mysterious!
  4. Ah yes, but you are yourself possessed of a mind, which, weirdly enough, can imitate other minds at a mysteriously deep level without consciously understanding what they're doing.
  5. Profit.

Forwarded to my computer game playing sons. I wonder what your recommendation for Minecraft would be. 

Eye patterns are interesting. For example, when walking outside, I've always been more focused on the big static overall view. But recently I discovered that it's pretty fun to do "motion detection" instead - look at each moving person / car / bird for a split second to register which way they're going and how fast, then flit to the next one, and ignore non-moving objects completely. It's quite relaxing.

As further reading The Emprint Method: A Guide to Reproducing Competence by Leslie Cameron-Bandler et al is good at describing the different ways people can do things and providing notation for it. With that framework you can go and study how successful people do what they do. 

I've noticed when I'm getting stressed in a difficult situation in games (including Hades) I tend to look away from the screen, perhaps subconsciously believing that what I can't see won't hurt me. This is of course the total opposite of what you want to do in those situations. Consciously trying to fight this helps somewhat.

Great post! A few months ago I realized that when playing League of Legends, I have a problem losing sight of my character in chaotic 5v5 teamfights. At the same time, I never had this problem in the casual ARAM mode. It took me some time to realize that in ARAM my camera was fixed on the character, while the regular mode had it floating free. Nowadays when a teamfight is coming, I lock my camera on my character so I can play the game like it's Hades.

I had some thoughts about what I was calling 'visual schema' as a way of talking about deliberate practice, using tetris as an example for learning where your eyes should go. It seems like a useful lead in for talking about mindfulness of attention in meditation. The move is the same between visual attention and more general attention voluntarily vs involuntarily moving and what and how it moves between objects.

osu! should be written in lowercase. a tweet from osu!

Our attention is one of the most valuable resources we have, and it is now through recent AI developments in NLP and machine vision that we are realizing that it might very well be a fundamental component of intelligence itself. 

This post brings this point to attention (pun intended) by using video games as examples, and encourages us to optimize the way we use this limited resource to maximize information gain and to improve our cooperation skills by avoiding being 'sound absorbers'. 

Maybe this isn't the most productive comment but I just wanted to say that this was a really good post. It's right down my alley with video games and academics at the same time and I would therefore like to declare it a certified hood classic. (Apparently Grammarly thinks this comment is formal, which is pretty funny.)

Reading the prelude, I was already thinking about my experiments with deliberately moving my eyes to attend to different things while playing Osu!. Much to my delight, I discover a few paragraphs later that YOU ACTUALLY PLAY OSU!

Anyway, here's my profile. Feel free to play with me!

Profile: https://osu.ppy.sh/users/18771571

Haven't played Osu! for many years now unfortunately. I only got into it briefly to practice mouse accuracy for FPS games, but that motivation has dried up. I suspect Osu! would still be damn good fun without it, so I'll let you know if it gets to the top of my gaming queue. :)

Definitely eye tracking, else people wouldn't have given me so much shit about my picture folder. I mean I don't even block my front facing camera on my phone anymore. Well good for them with their camera technology.
