Last month, mobile gaming superstar Angry Birds was outsold in some countries by DragonBox, a kids' game in which players solve algebra equations.
How does the game work? Jonathan Liu explains:
There are five “worlds,” each with twenty levels, and as you progress through the levels the “dragons” hatch and grow into their full-sized versions. While this in itself has nothing to do with algebra, I mention this because my kids love this. It’s a very tiny incentive (along with earning stars) but they really want to beat the next level to watch the dragon grow into its next form. I was told that the dragons were all drawn by a fourteen-year-old girl, and they’re a lot of fun. (They aren’t all typical dragons — One starts off more like a fish, one looks like a squid, and so on.)
You are presented with a big screen with two trays, each containing a number of “cards” with different images on them. Somewhere on the screen there will be a little box with a star on it, sparkling and glowing. The app gives very minimal instructions in a hand-written font with arrows pointing to relevant spots on the screen, but it tells you to get the box by itself. At first you do this simply by tapping the green spirally cards, which vanish when you tap them. Then you’ll start to get some “night” versions of cards — drag these onto the “day” versions and they become green swirls, which you already know how to handle.
After you’ve gotten past several levels of moving cards around and tapping on swirls, you’ll get a few cards down at the bottom which you can drag onto the trays — but whenever you drag a card onto one side, you have to also drag a copy to the other side as well. (This, of course, simulates adding the same number to both sides.) And then, a few levels on, you learn that you can flip these extra cards from day to night (and vice versa) before dragging them onto the trays.
As the game progresses, you’ll start seeing cards that are above and below each other, with a bar in the middle — and you’ll learn to cancel these out by dragging one onto the other, which then turns into a one-dot. And you’ll learn that a one-dot vanishes when you drag it onto a card it’s attached to (with a little grey dot between them). These, of course, are fractions — multiplication and division — but you don’t need to know that to play the game, either.
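The mechanics quoted above map one-to-one onto algebra operations. Here is a minimal Python sketch of the two core moves; the representation and function names are my own, not the game's:

```python
# Each side of the equation is a dict mapping symbols to coefficients,
# e.g. {"x": 1, "": -3} represents x - 3 ("" is the constant term).

def add_to_both_sides(left, right, symbol, amount):
    """Dragging a card onto one tray forces a copy onto the other tray:
    adding the same quantity to both sides preserves the equality."""
    for side in (left, right):
        side[symbol] = side.get(symbol, 0) + amount
    return left, right

def cancel(side, symbol):
    """A 'night' card dragged onto its 'day' version becomes a green
    swirl (zero), which can then be tapped away."""
    if side.get(symbol) == 0:
        del side[symbol]
    return side

# Solve x - 3 = 5 the DragonBox way:
left, right = {"x": 1, "": -3}, {"": 5}
add_to_both_sides(left, right, "", 3)   # drag a "3" card onto both trays
cancel(left, "")                        # -3 and +3 swirl away
# left is now {"x": 1}, right is {"": 8}: the box stands alone.
```

The point of the sketch is that each legal drag is a legal algebraic move, so a player who can only do the drags is still, mechanically, doing algebra.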
The key to DragonBox's success is not that it's the best algebra tutorial available, but rather that it's actually fun for its target audience to play.
Others have noticed the potential of "computer-assisted education" before. Aubrey Daniels writes:
When you analyze [video games], you will see that the player is clear about what is expected of him, that [the] player's behavior is continually measured, and [that] the player is provided with feedback so he knows what the measurements reveal about his performance. Finally, and most importantly, as he plays the game, the player receives high rates of reinforcement which motivate him to play the game over and over again. In fact, reinforcement occurs up to 100 times a minute.
Remember what works in reinforcement: Small reinforcements are fine, but the reinforcer should immediately follow the target behavior, and it should be conditional on the specific behavior you want to strengthen.
Video games are perfect for that! Little hits of reinforcement can be given many times a minute, conditional on exactly the kind of behavior you want to reinforce.
DragonBox is just a particularly successful implementation of this insight.
One of the goals for the Center for Applied Rationality is to develop rationality games and apps. But it's tricky to think of how to make addictive games that actually teach rationality skills. So I'd like to provide a place for people to brainstorm ideas about what would make an addictive and instructive rationality game.
See also: Rationality and Video Games, Gamification and Rationality Training, Raytheon to Develop Rationality-Training Games.
This was (more or less) the discussion topic at the last Toronto meetup. Here's what we discussed (note: these are minutes of an LW meetup, so don't expect them to be 100% on-topic). Also see the wiki page with previous discussion threads.
No definitive game idea yet, but lots of interesting suggestions came up.
Harry Potter & the Methods of Rationality: The Game.
Real-time strategy game where you recruit units rather than building them
Epistemic rationality: the game
Bayes' Theorem game
Game teaches you real world stuff incidentally.
NPCs prone to different biases
Fix moral system
Game that starts off like the Sims and ends up like Civilization
Existing games which we like and/or which came up in the discussion:
Pseudo-Intrade. Use a fake currency, but otherwise keep it basically identical. Players can use their not-money to buy worthless-but-pretty trophies and whatnot for their Throne Rooms of Rationality. Probably include some occasional free money to keep the game above zero-sum, since zero-sum could be discouraging (e.g. create bots and give them free money to place on bad bets).
A while ago I thought it'd be pretty neat to have a MMORPG based around in-game maths/physics/programming/logic puzzles. Kind of like a dynamic Project Euler, but with a prettier steampunk-hipster-narrative front-end, and instead of levelling up by gaining some abstracted measure of experience, you levelled up by actually getting good at doing stuff.
I'm mostly throwing this idea out there so someone else doesn't have to, not because I think it's actually a good idea. I struggle to think up a way a rationality-based MMORPG would be fun to play.
Product rule skill level 2 achieved! New skill unlocked: Chain rule!
A wild composite function appears!
*dies* Didn't you hear everyone yelling at you to stop using the chain rule? If you'd read the wiki page before the fight, you'd have known that it draws aggro like mad in this dungeon. That's why everyone but the tank was practicing with limits last night.
Calibrated Project Euler
(for programmers only)
Have a set of little programming/maths exercises, like Project Euler or some programming challenges: things like "return the number of prime numbers in a list", "find the longest increasing subsequence in a list", etc.
First you give an estimate for how long it will take to write a solution, then you write your solution (in the app itself), then you give an estimate of how likely it is that your solution is correct, and then your solution is executed and you see whether it works or not.
This could help mitigate the planning fallacy and overconfidence and allows pretty quick iterations, but only works for programmers.
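The estimate-then-execute loop described above is easy to pin down in code. A minimal Python sketch, with a hypothetical exercise (all names and the record format are my own assumptions):

```python
import time

def run_calibrated(task_name, est_minutes, est_p_correct, solution, tests):
    """Run one calibrated exercise: the player has already committed to a
    time estimate and a probability of correctness; we check the solution
    against the tests and log everything for later calibration stats."""
    start = time.time()
    passed = all(solution(inp) == out for inp, out in tests)
    elapsed = (time.time() - start) / 60
    return {"task": task_name, "est_minutes": est_minutes,
            "est_p_correct": est_p_correct, "passed": passed,
            "elapsed_minutes": elapsed}

# Hypothetical exercise: count the primes in a list.
def count_primes(xs):
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))
    return sum(is_prime(x) for x in xs)

record = run_calibrated("count primes", est_minutes=5, est_p_correct=0.8,
                        solution=count_primes,
                        tests=[([2, 3, 4, 5, 9], 3), ([], 0)])
# Over many records, compare the mean est_p_correct with the actual
# pass rate, and est_minutes with elapsed_minutes, to measure calibration.
```

In a real app the timer would of course run while the player writes the solution, not while it executes.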
This is a good idea, but might be generalized and simplified. Simply put down a task and the time you think it will take you to complete it. This is then published publicly or semi-publicly - the key thing is that someone (probably another user) can now verify when you've completed a task.
Simpler, applicable to any kind of task, and calibrated by a third-party.
True, but that requires dedication; the trivial inconveniences of publishing the task, getting it verified, etc. mean you'll often just skip it. Though it is the kind of thing that could be integrated into a todo list; for example, todo-list software that detects when an item has been on the list for, say, two days, and asks you for a confidence interval for when you expect to do it.
An additional benefit of a dedicated app/game is that you can integrate things like leaderboards, you can compare yourself to others, etc. - competition can be quite the motivator.
I hadn't thought about leaderboards - definitely an advantage for any uniform-task game. I'm still averse to doing programming with anything that's not vim and cli tools, though.
Have you ever done Google Code Jam? A backend like that would be good for solution submission and checking, while still allowing you to program with whatever the hell you want.
I wonder if it would be possible to build a wagering/probabilistic Zendo crossbreed. That is, the computer is willing to be Dutch-booked, if you can only correctly estimate the probabilities given some examples. You might even be able to make scenarios representing various failures of rationality, like the Linda example ("green is more likely than red; stars than triangles; smiling than frowning; bouncing than glowing -- now, which is more likely: the star, or the green, bouncing, smiling star?"), or the 2-4-6 case, or maybe even the availability heuristic (the system will be inclined to show you examples where you made a lot of money, in contexts where betting on them would lose you money).
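The Linda pattern in particular is mechanically easy to check: a conjunction can never be more probable than its least probable conjunct, so any player who violates that is offering the house a guaranteed profit. A sketch of the check (function name is my own):

```python
def conjunction_violation(p_a, p_b, p_a_and_b):
    """The conjunction fallacy (the 'Linda' pattern): a conjunction can
    never be more probable than either conjunct alone.  Returns True if
    the stated probabilities are incoherent and therefore Dutch-bookable."""
    return p_a_and_b > min(p_a, p_b)

# A player who rates "green bouncing smiling star" as more likely than
# "star" alone is incoherent, and the game can cash in on it:
assert conjunction_violation(p_a=0.4, p_b=0.6, p_a_and_b=0.5)
assert not conjunction_violation(p_a=0.4, p_b=0.6, p_a_and_b=0.2)
```

The same shape of check works for any probability axiom: the game only needs to detect incoherence, not know the true probabilities.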
Recording one's set of past games would help a lot with counteracting the availability heuristic.
A game where players race to design a self-improving AI. Winner gets free paperclips.
Missiles keep attacking your base. You can block each missile by clicking on the correct shield. Each missile contains the words of a fallacious argument and the shield is either the formal name of the fallacy being committed, a rational counter-argument to the fallacious argument, or the proper reason why you should reject the fallacy.
Missile: "You shouldn't be a vegetarian because Hitler was a vegetarian." Correct shield: "Reverse stupidity isn't intelligence." Incorrect shield: "Hitler lied about so many things that we shouldn't believe his claims of being a vegetarian." Incorrect shield: "Applause light."
That reminds me a little of the Objection! games. I've played at least one of them and it was quite fun and addictive.
From a review:
- Have users fill out a survey when they first use the game to get a sense of their beliefs. The game will then pick a mixture of assertions that they likely agree/disagree with.
- Also, have some 'missiles' actually be valid arguments, which give a bonus to whichever missile base they hit if you don't mistakenly try to block them with a shield.
A demonstration of DragonBox.
This is brilliant, and has kind of been a dream of mine for a long time.
There is no reason why any kind of mathematics could not, in principle, be made into a video game.
Alligator eggs is a similar concept that could potentially be used to teach lambda calculus.
Without actual game levels I can't decide whether I'd enjoy Alligator eggs or not. In Manufactoria you have to build Turing machines. It doesn't sound like fun, but I loved it immensely.
Yep, though for some "advanced" maths the number of people learning it may drop low enough that it's not worth the effort.
Heck, most of the content of classes at school could be better taught through games.
A different kind of mental training via gaming is available through Lumosity.
Is there evidence of these games working?
Luminosity ≠ lumosity.
Consider offering free versions of the kinds of games available on Lumosity.
First thing that popped into my head:
A farming game that teaches expected outcomes and dealing with sunk costs.
Phase 1 is "Make a Plan." You choose what to plant, what upgrades you want to achieve, etc. If you want to reinforce the value of planning, other elements (expanding your house, what kind of farm you want to have) should also have you make a plan. Give opportunities for people to prepare for risks (storm-proofing, crop diversity), and if the risk happens, remind them that they should have prepared.
Phase 2 is where the mindless clicking would normally go in a farming game - but now the idea is to replace some of it with sunk cost type decisions. You planted corn, but someone offers you a deal on grape vines that would require losing a field of corn. You planned to hire a plant scientist, but now they're more expensive than they were. Will you go against the plan?
Has something in common with Agricola. Although I think Agricola and Seven Wonders are best at making you think about opportunity costs - there are lots of good things you want to do, and you can't do them all.
If the rules of the game changed once, what is the chance of them changing again? If I decide to remove the field of corn, is there a chance I could later get an even better deal on corn? If the scientists are more expensive than I thought, so I replace them with workers, is there a chance I could later find that the workers are less efficient than advertised, so I would have had a better deal with the scientists?
We should make certain that what the game perceives as a sunk cost fallacy is really a sunk cost fallacy and not something else, for example a rational update on the fact that the game sometimes changes the rules while playing.
One solution is that the game would announce that the change happens exactly once per level. It could be emphasised by having a "SECRET" card that turns over in the middle of the game and reveals a hidden rule. (The fact that there are no more "SECRET" cards on the screen should make us feel safe about no more hidden rules.) The game should not judge the player for what they did before the card was shown; perhaps they had some estimate about the hidden rule, and already optimized on this estimate. But after the rule is shown, the game should reward the player for how well they played the rest of the game.
For example: You need 5000 credits to win the game. After 2000 credits a "SECRET" card is played and the existing situation is saved. At the end the game shows you the alternative ending from the saved point.
To condense what I see as your point: We don't want to change something and then quickly change it back, or it punishes people for changing.
But then again, the goal isn't to teach people to change - it's to teach people to make correct decisions. If something feels like punishment, that's a game design flaw - you want to make people's choices feel interesting, informed and impactful. The real culprit seems to be either withholding information about changes from the player (which could be counteracted by giving notice ahead of time and being clearer about what sorts of things can happen), or making a system with lots of cost changes too complicated (counteracted by limiting the choices presented, breaking possible cost changes into sensible categories, and introducing the player carefully).
Perhaps certain variables (like the value of corn vs. grapes, or the cost of hiring a researcher) are either strictly increasing or strictly decreasing during the level, so you can see what is coming.
25 unreliable questions. The secret-holder may lie once.
I just tried out DragonBox, and it is quite fun. It's a pretty poor instructional aid, though. Sure, it teaches you how to balance equations by following the correct rules, but it doesn't explain where the rules come from. Algebra is more than rote memorization; a student who truly understood algebra would be able to re-create all the rules, such as "if you add something to the left side, add it to the right side as well", completely from scratch. DragonBox does not teach this level of understanding.
Just because something doesn't teach everything about a subject doesn't make it a poor instructional aid. Algebra is more than rote memorization, but having the rules memorized does help a lot in getting to the stage where you truly understand it. Even if you do understand it and could in principle recreate the rules from scratch, recalling them from memory is faster than deriving them each time you need them - and since algebra is pretty much the foundation of all advanced math, you'll be needing them a lot if you want to study math at all. (Though if you end up struggling with the harder topics because you didn't have the rules of algebra appropriately memorized, you might never want to study more of it...)
I disagree. I think that memorizing the rules first, without understanding where they come from, discourages the student from attempting to understand anything to begin with. After all, his goal is to balance an equation, and look, he just balanced it... so what else is there to know? Thus, the memorization approach creates the impression that math (or whatever subject you're studying) is all about arbitrary rules that make no sense; it's all about "guessing the teacher's password", and that's boring.
Contrast this with the approach of treating an equation like a puzzle. If "2x - 3 = 5", and we want to know what x is, there are many ways to approach the solution. We could ask, "someone did a bunch of stuff to x to get 5, how can we undo it?", or we could say, "the equation is like a pair of scales that are balanced, so what can we do to get x by itself without unbalancing the scales?", etc. Some possible partial answers are, "someone took away 3, so let's add it back", or "if we add 3 to both sides, the scales will still be balanced but we'll be one step closer to a solution". But "add 3 to both sides because that's how the game is programmed and you won't get the high score otherwise" isn't much of an answer. High scores don't mean anything; algebra does.
Well, I can't speak for others, but my personal experience with math tends to be that I only start properly learning why something works once I have the rules pretty well memorized. Before that, my working memory is so occupied with trying to just remember how to apply the rules that I don't have the space to remember why they work. Or alternatively, I can learn why the rules work - but in that case I don't have the memory capacity left for remembering how to apply them.
Of course, this is complicated by the fact that during the process of trying to memorize the rules, I often stop to think about why they work in an attempt to rederive them and make sure I'm not misremembering them. So it's not pure rote memorization, like the way it seems to be with DragonBox. But I would still expect that if somebody first learned them as meaningless rules in the game, and was then later taught math and the reasons for the rules, they'd have a good chance of being delighted at discovering where the rules came from, and could spend all of their cognitive capacity on developing an actual understanding.
Fair enough; it's possible that you and I simply think in different ways. I personally find it very difficult to memorize (seemingly) arbitrary rules, and I found it very difficult to un-teach the "guess the teacher's password" mentality to people. But it's quite likely that I'm making an unjustified generalization from a very small number of examples.
I wonder if there's any layman-accessible literature on this topic...
Time estimation app
At the beginning of each week, enter the things you plan to do during the week, your estimated probability of doing each of them, and the percentage of time you'll spend on various distractions (e.g. if you use RescueTime to count hours of distracted browsing, or the number of movies/TV series you'll watch during the week), again with confidence intervals.
Then, by the end of the week, you can check your score, and compare it to others!
(This doesn't require a dedicated app, it could be a spreadsheet in google docs shared between a few people, with their email/contacts so they can remind each other to fill in the table once the week is over)
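Even the spreadsheet version needs a scoring rule. One common option (not specified in the proposal above) is the Brier score, which is simple enough to compute in a single cell or a few lines of Python:

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes.
    0 is perfect (confident and right every time); always answering
    50% scores 0.25; confident and wrong scores near 1."""
    return sum((p - (1 if done else 0)) ** 2
               for p, done in predictions) / len(predictions)

# Hypothetical week: (probability you gave, whether it actually got done)
week = [(0.9, True), (0.7, True), (0.6, False), (0.3, False)]
score = brier_score(week)   # lower is better; compare across players
```

Because the Brier score is a proper scoring rule, the best strategy is to report your honest probabilities, which is exactly the habit the game is trying to train.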
I try to do this by hand, but this would be much more convenient. Also, in addition to using it as a rationality training tool I would be interested in a program that collects this data and then uses it to directly help me make predictions about future projects.
I already do this on PredictionBook.
Much to everyone else's chagrin.
See also here for a list of previous discussions
I have a bunch of more or less well-thought-out ideas for rationality game concepts; I'll post a few here. I also did a bit of research on what seems teachable by games and what is not, and talked a bit with Anna about it, but haven't heard back from her :P
In general, a big chunk of debiasing seems to be about teaching economics and statistics (and applying them in everyday life).
One problem for rationality games in general is that an incorrect judgment isn't punished much because you just "start the level over" or something. One solution would be to penalize poor decisions more heavily, e.g. by making people start further back in the game than would be expected when, say, missing a jump in a platformer.
Is that really a problem? It's my understanding that getting rapid feedback, and an opportunity to retry the failed attempt while your previous failure is still fresh in your memory, is much more useful for skill acquisition than having each failure be maximally frustrating.
Hmm... perhaps you could just tighten the time/# of moves limit? To give fast, appropriate updating an advantage over a brute-force approach?
The problem would need to change a bit each time you tried it, so that you had to learn within the round.
Zoombinis actually had some great examples of this, where you had to learn what test was being applied, and how to pass the test, by induction. The exact criteria were randomly determined, so you had to solve the problem anew each time. And you were penalized for taking too long to learn, so being good at the game involved coming up with an efficient learning algorithm, rather than just an adequate one.
Oh man, I played Zoombinis when I was a kid, I loved that game and haven't thought about it in forever.
Zoombinis pro tip: The game allows you to create up to two zoombinis of each hair/eye/nose/transport configuration. You can make things way easier by making your party consist of 8 twinned zoombini pairs as opposed to the usual 16 distinct zoombinis. (For puzzles that depend on zoombini features, which is pretty much all of them, the solution for a given zoombini and its twin will be the same.)
I love how the game's Wikipedia page has a fairly detailed explanation of every puzzle...
So did I!
And, as far as I can remember, not only were the puzzles well designed, but there was a reasonably good overarching story with a definite goal, which meant the puzzles had purpose and there was a little sense of adventure too.
(I should really find the disk and play it again... hehe.)
Another common solution is to randomly generate problems so that you can never learn "by rote" how to pass the level.
They can still "grind" by trying random things until they happen to succeed by chance, but the wider context of the game can discourage that (for example, by just showing how many tries it took you).
This is science rather than rationality, but I'd like to see Maxwell's Demon the game, a Jezzball-like game where balls are bouncing at various speeds around a room with a divider in the middle. You open & close (or move) a gap in the divider to let balls through, with the goal of getting the faster-moving balls on one side and the slower balls on the other side. The hot side would get redder as it "heats up" (the average kinetic energy increases) while the cold side gets bluer, and the win condition would be a difference in temperature. More difficult levels would have more balls or would require a larger temperature difference.
An even more physics-y version could incorporate the ideal gas law and make it so that the divider gets pushed over a little bit every time a ball collides with it, so that the side of the room with more balls and/or higher average speed would tend to expand as the divider got pushed away. The win condition could involve both a temperature difference and a location for the divider.
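The win condition described above reduces to a small amount of physics. A sketch with unit-mass balls, where "temperature" is just average kinetic energy; everything beyond that (the ball representation, the perfect-demon baseline) is my own assumption:

```python
import random

def temperature(balls):
    """Average kinetic energy of one side; with unit masses this is
    proportional to the mean of v^2."""
    return sum(vx * vx + vy * vy for vx, vy, _ in balls) / len(balls)

def demon_score(left, right):
    """Win condition: the temperature difference between the two sides."""
    return abs(temperature(left) - temperature(right))

# Hypothetical setup: balls are (vx, vy, x-position) tuples; in the real
# game the player's gate decides which balls end up on which side.
rng = random.Random(0)
balls = [(rng.uniform(-2, 2), rng.uniform(-2, 2), rng.random())
         for _ in range(40)]

# A perfect demon sorts by speed; a player's score is graded against this.
fast = [b for b in balls if b[0]**2 + b[1]**2 > 2]
slow = [b for b in balls if b[0]**2 + b[1]**2 <= 2]
```

The redder/bluer coloring in the proposal then just maps `temperature()` of each side onto a color scale.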
There is already at least one Maxwell's Demon game out there, but the one I found isn't very good (as a game or as physics instruction). The balls are just red and blue - they don't vary in speed - and it's only one level where you have to get 100% separation by color.
The main point is that video games with really hard levels teach the skill of deliberate practice, and paying attention to the things you aren't good at in order to get better.
That seems it could be useful.
Graphical calibration game
1) Show the player an image with a bunch of simple shapes (rectangles, smiley faces, circles of various colors, etc.) for about 5 seconds.
2) Hide the image, and ask the player to give a 90% confidence interval for a value such as the average size of a figure that he saw.
3) The correct answer is then shown, along with the image.
Variety can be added by asking for median size, or average width, or whether there are more red or blue circles, or correlation between width and height, or between size and color (if the circles all vary from light blue to dark blue), or the average size of red circles, etc.
This allows for very tight feedback loops between guessing and seeing the answer, and the game can be replayed pretty much infinitely.
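The guess-and-check loop in steps 1-3 is only a few lines of code. A sketch, with placeholder choices for the shape generation and size range (none of this is specified in the proposal):

```python
import random

def play_round(rng, n_shapes=8):
    """One round: generate shape sizes, then (in the real game) flash
    them at the player and ask for a 90% interval on the average size.
    Here we just return the true average to check the interval against."""
    sizes = [rng.uniform(10, 100) for _ in range(n_shapes)]
    return sum(sizes) / len(sizes)

def calibration(results):
    """Fraction of rounds where the true value fell inside the player's
    stated 90% interval; should converge to 0.9 if well-calibrated."""
    hits = sum(lo <= truth <= hi for lo, hi, truth in results)
    return hits / len(results)

rng = random.Random(1)
truth = play_round(rng)
results = [(40, 70, truth)]   # (interval low, interval high, true value)
# After many rounds, display calibration(results) next to the 90% target.
```

The same `calibration()` function works unchanged for all the variants listed (median size, average width, correlations), since each just produces a different `truth`.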
This could be important. Some card games teach calibration, such as Bridge and Spades. (Although it's not quite the same, because after you guess how many tricks you'll take, you have some control over it - if you were underconfident, you can throw tricks away, if you were overconfident you can take unusually large risks.) But they just ask for a single number, and later you see how close you were but you can't look back and see how confident you were. If you give a confidence interval, it's much easier to see whether you're well-calibrated.
Not sure how well it fits, but this was an awesomely done game for teaching division.
The basis is an Angry-Birds-like game, where you have a stock of 10 cannonballs, which you shoot to knock down stuff (with dedicated targets, blocks with various shapes and properties, etc.).
BUT, before you start playing, you must estimate how many cannonballs it will take you to pass this particular level. At the end of the level, you score 2 points for each remaining cannonball, minus one for each cannonball you planned but didn't shoot, or shot but didn't plan.
The general idea is to help face the planning fallacy and calibrate accordingly; the same idea can be applied to all kinds of games (on a platformer: how many lives will it take you to pass this level, etc.).
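The scoring rule described above is simple enough to write down exactly; a sketch:

```python
def level_score(planned, shot, stock=10):
    """Scoring rule from the proposal: 2 points per cannonball left in
    stock, minus 1 for each ball you planned but didn't shoot or shot
    but didn't plan - i.e. minus the size of your estimation error."""
    return 2 * (stock - shot) - abs(planned - shot)

# Estimating accurately beats both over- and under-promising:
assert level_score(planned=4, shot=4) == 12   # accurate plan
assert level_score(planned=2, shot=4) == 10   # overconfident plan
assert level_score(planned=8, shot=4) == 8    # sandbagged plan
```

Note the incentive structure: frugality still pays (2 points per saved ball), but a miscalibrated estimate eats into it from either direction, which is what makes this a planning-fallacy exercise rather than just a skill test.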
Would have to be randomly generated levels with no restart so that players can't just set their estimate low and play until they achieve that estimate.
Another well-regarded educational app is Math Evolve.
That kind of game is relatively easy, because the problems are trivial ("2 + 6 = ?"); the task is only to present them in an interesting way. You could do a World of Warcraft clone where you travel across a 3D map and people give you quests in the form of "2 + 6 = ?" and then you have to fight a dragon with "8" written on its belly (there are also other dragons available, but those will kill you). There is no relation between a dragon and the number 8; therefore you have complete freedom in designing such games.
The only drawback is that without care, this could degenerate into "guessing the teacher's password", because the underlying model is just question/answer without a mechanism explaining why the given answer is correct. You could replace texts "2 + 6 = ?" and "8" with texts "Which religion is the true one?" and "Christianity", and maybe you wouldn't even need to recompile the binaries. -- OK, this is exaggerated, because with math the game can generate many new question/answer pairs instead of having a fixed database of them, so it is easier to learn some algorithm to determine the correct answer instead of memorizing them, which is exactly what we want to do. But the point is that this game does not show you why for a question "2 + 6 = ?" the answer "8" is more correct than "9". It just rewards the former and punishes the latter. (Could be fixed by adding an animation that displays 2 red and 6 yellow spheres on one side, 9 green spheres on other side, then moves them to pairs and shows that on one side there is more.)
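The generation idea is the easy part to sketch. A minimal version (the distractor scheme of offering the neighbors of the true answer is my own invention, not from the comment above):

```python
import random

def make_question(rng, lo=1, hi=9):
    """Generate a fresh question/answer pair plus distractor dragons, so
    the player must learn the addition algorithm rather than memorize a
    fixed list of question/answer pairs."""
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    answer = a + b
    dragons = sorted({answer - 1, answer, answer + 1})  # one safe, two deadly
    return f"{a} + {b} = ?", answer, dragons

rng = random.Random(2)
question, answer, dragons = make_question(rng)
# On a wrong answer, the 'why' animation: show a red and b yellow spheres
# on one side, the chosen number of green spheres on the other, and pair
# them off to make the mismatch visible.
```

The key difference from a fixed database is that `make_question` samples from the whole problem space, so password-guessing (memorizing specific pairs) stops working.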
Make a Phoenix-like shooter game with different kinds of ammunition that do different amounts of damage, and can be changed into each other at some (changing?) ratio. Give the players a limited amount of ammunition that can be replenished by killing enemy ships, along with occasional packets of additional ammo.
Have the enemy ships have different amounts of health, then contrive their order so that the player does significantly better by successfully getting the maximum amount of damage, given the ammunition they currently have.
The game should be rigged so that you run out of ammo and need to dodge things for a while if you don't do the arbitrage, and so that you have a little extra left over if you do.
I probably made this a little too complicated somewhere.
It seems like a challenge of doing this is that a lot of rationality training is about encouraging people to use different mental processes to accomplish their goals than the ones they normally use. In a game, the tasks are often straightforward in a way that seems to cut out a lot of rationality.
Like, whether or not I'm being specific or applying fungibility is difficult to measure, if all you have is my actions in Angry Birds. Whether or not I'm framing problems well (kill the pigs vs. get this tactical implementation of knocking down the blocks) might be noticeable based on the similarity of my moves, but it would be tricky to notice and reward that.
More complicated games like Civilization could probably train some rationality skills, but it's also totally easy to just get stuck in the game. Also, they tend to be waaay longer than I'd like.
It seems like a basic issue here is how to make it so that you need to directly use a rationality subskill in order to play the game well.
Guessing the teacher's password
The player is presented with a small scenario (a paragraph, possibly illustrated), and has a text field in which he can enter the name of a concept that applies - the name of a cognitive bias, or a logical fallacy, or a useful concept from economics or psychology or statistics, or a LW catchphrase. The text field has auto-complete (you don't need to enter the whole darn phrase), and the player is scored by the time he took (if you guess wrong, the box blinks red for a couple of seconds and you can guess again).
A bunch of small scenarios could be made by getting a list of biases and other lesswrongy concepts, and a bunch of categories of real-life situations (family, work, school, health ...) and trying to come up with an illustration of the concept in each category - heck, even generating that list would be a valuable exercise.
This game wouldn't teach people how to use any skill, but it might help create a mental association between real-life situations and some applicable concepts, which would be a step towards making that kind of thinking automatic in everyday life.