It's a card game, how could it possibly cost this much? If you publish the pdf I'll print it for myself.
Honestly it just comes down to manufacturing and the quantity in the print run. For this first print, I made a pretty limited number just to test the waters. I'm just a student right now and didn't want to print thousands of these things, so the cost is a bit higher than it would be if I were to print, say, 2000 copies, which would drop the per-unit cost down a lot.
you're getting fleeced, go to a print shop and print some cards on thick paper and cut them with their cutting tools. if you print 9 decks side-by-side on a stack of A4 paper you might not even need to measure.
Your question cuts to the heart of our movement: it's been over two decades and (afaik, afaict) we still don't have a robust & general way to test whether people are getting more rational.
Yeah, it's certainly an interesting one. Since I've spent weeks working on the deck and just talking about these biases, I've found myself, in moments, recognising certain biases and fallacies emerge when people talk, or even when I talk. Like I'll catch myself thinking "I think I'm using an authority bias here", and I would say I do this 'more' after having worked on the game than before it.
The key problem is that this is a process metric and not an outcome metric. With rationality we do care about outcomes, and it's not clear that the process metric actually relates to them.
With cognitive bias training there's the risk that it just makes it easier to rationalize whatever you want to believe for other reasons. As far as I understand, CFAR experimented early on with teaching cognitive biases and then decided against it because they believed it wouldn't actually help with what they are trying to accomplish.
From academia we also don't seem to have studies that show that you can improve people's real decision making by teaching them cognitive biases.
Fallacymania is a thing, is freely available (you can print it), and has very similar gameplay at its core.
I have an open question about how many cards there are and what size they are, which doesn't seem to be addressed in your description. I think briefly describing the physicality of what you are selling would be a good idea.
You make a great point here. I will update this.
There are 187 biases in the deck, and 45 cards related to the game (so player cards, scenarios, things like that).
Maybe you could do something with LLM sentiment analysis of participants' conversations (e.g. when roleplaying a discussion about the best thing to do for the company, genuinely trying to do a good job, both before and after).
Though for such a scenario, an important thing I imagine is that learning about fallacies has only a limited relation to better decisions, and only if people learn to notice them in themselves, not just in someone they already disagree with.
We have been playing around with this stuff (for another project): recording conversations and then trying to mark out the instances of the fallacies and biases. It's not highly accurate right now, but we're trying to turn it into something a bit more fun / usable. But like you said, the biases / fallacies are just a small, discrete part of the whole story really. We wanted to start somewhere.
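To make that concrete, here's a minimal sketch of the kind of annotation pass I mean, assuming an OpenAI-style chat completions API; the model name, prompt, label set, and example transcript are all placeholders rather than what we actually use.

```python
# Minimal sketch only: assumes the openai Python client and an OPENAI_API_KEY
# in the environment. The label set, prompt, and model name are illustrative.
import json
from openai import OpenAI

client = OpenAI()

LABELS = ["ad hominem", "authority bias", "confirmation bias",
          "illusory correlation", "none"]

def annotate_turn(turn_text: str) -> dict:
    """Ask the model to tag one speaker turn with any bias/fallacy it detects."""
    prompt = (
        "You are annotating a workplace discussion for cognitive biases and "
        f"logical fallacies. Allowed labels: {', '.join(LABELS)}.\n"
        'Return JSON: {"label": "<one label>", "evidence": "<short quote>"}.\n\n'
        f"Turn: {turn_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Hypothetical transcript turns, split by speaker.
transcript = [
    "Alice: Our CTO prefers microservices, so that settles it.",
    "Bob: Every late release followed a Friday deploy, so Friday deploys cause delays.",
]
for turn in transcript:
    print(turn, "->", annotate_turn(turn))
```

Counting labelled instances per hour of conversation, before and after playing, would be one crude pre/post measure, though as noted above that's still a process metric rather than an outcome metric.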
Your game's name reads 'homo', which is 'human' in Latin. Your game's cover art shows a gun. Why is it so?
So the gun is one of the images from one of the cognitive biases in the deck, the one relating to illusory correlations. It comes from the often-cited example of correlational misunderstanding that goes something like this:
In Chicago, gun violence tends to go up in summer. Ice-cream sales also increase in summer. Therefore, ice-cream sales cause gun violence.
The gun, in this instance, if you look carefully, is made of ice-cream.
Feedback: The cover image and choice of font are bizarre and off-putting to me. Bubbly font with a giant HOMO and a weird diseased-looking pink gun give me more vibes of homosexuality than rationality.
I spent several years working as the chief product officer of a software company in Melbourne, and during this time there was a point where I raised some money for the company with the aim of using that money to double the size of the company and hire more engineers. During this period, as we were growing faster and more and more cross functional teams came online, there was this behaviour I noticed emerging where I would get asked to chime in on things (should we build this thing this way or that?) and, as was my sentiment at the time, I believed that decisions were best made closer to the action. I was not close. So, I wanted to encourage people to make decisions on their own.
So eventually I made this proclamation. I said, you can make whatever decisions you want moving forward, so long as they have the highest probability of getting us to our goals. The actual details of this were a bit more nuanced than that, but generally speaking that was the picture.
Someone asked: how do we know that what we’re doing has the highest probability of reaching our goals?
I said I didn’t know. But I'd find out.
At the time, I hired a computational neuroscientist named Brian Oakley who had completed his PhD a few years earlier on communication between visual cortical areas. He was very clever, and had a tendency to come up with answers to things I thought were relatively unanswerable. So I asked him…
Would it be possible to start to measure decision quality in the organisation?
He said he didn’t know. But he’d find out.
What Brian went on to do (which subsequently became the focus of a bunch of decision intelligence consulting I ended up doing as a part-time job while I went back to university to study neuropsychology) motivated me to try and think of ways to improve decision making, particularly in cross functional teams, a space I knew well, in a way that wasn’t overly complex.
I’d been a fan of this group of individuals called Management 3.0 who had designed these very simple and accessible card games which sort of turned management training from these long, three day seminars where people generally forgot the content days later, into tactical games that could be run in 20 minutes. They have one game called delegation poker, which is a way to really fine tune who can decide what in a relationship between a manager and an employee. It helps clarify the undefinable, and I had started to wonder if it might be possible to build a game that could do that for decision quality.
Decision science is a very broad topic indeed, and so when I started working on this idea, I knew it would be impossible to cover everything. I had come up with a few ideas, and Brian had given me some thoughts on how he thought it could be improved. What I ended up with was focusing on a simple aspect of decision making, and where all good (and poor) decisions tend to start.
In cross functional teams, I often found individuals—who would describe themselves as very logical—debate which direction to take certain projects in ways that were quite illogical or heavily biased. There were always these little corporate vendettas people were engaging in, small emotional infractions which had become capital crimes, and past experiences which never ceased to influence what software people should build and why. When I reflected on this and watched teams debate ideas, I started to wonder if we could hire a debate coach to just work on very basic things like removing logical fallacies from arguments. I found nobody who could do such work.
So I started trying to build a game that might simulate this learning. In essence, the goal was to do several things.
I’d hoped this kind of format would expand the bias and logical fallacy vocabularies individuals had, well beyond confirmation bias, which was often the only bias cited when debates emerged, especially at the most dramatic apex of corporate arguments. Equally, I hoped the game might train individuals to really listen to arguments as they were unfolding, focusing them on the key points, and not the individuals (ad hominem).
So I made one version of this game and started giving it to friends to play around with. I called it Homo Rationalis and included a bunch of pre-loaded scenarios that people could discuss; these were the conversational simulations that allow people to role play the biases in question.
I eventually got some feedback, made some tweaks to the game, and also just made it more affordable to produce. Initially, I was running down to a local Snap Printing and producing copies which would cost me around $300 for one deck, hardly a sustainable way of helping improve decision making if the primary decision in making it would send me broke. With a bit more time up my sleeve in between semesters, I started making a much more professional version. It comes in a box now with a better set of instructions, and it can be produced and re-ordered when I run out of copies.
After I began selling these though, I realised I actually cared a lot more about this space than I did when I went into it. I was attending lectures and seminars at university (as a 40 year old mature age student) and noticed just how much time was spent trying to ‘teach critical thinking’ to students, which was, quite frankly, pretty poor. There was a lot of asking students to reflect on things which were quite obvious, but not a lot of real meaningful training going on. What I’d found was that my game was a form of elaborative learning; as students role play biases, and have to pick sides of an argument, occasionally sides they do not agree with, it forces them to re-learn how to listen, and how to make arguments for things and avoid specific fallacies and biases in real time.
To be fair, the game is not easy, because it is frequently not easy to role play biases, which are highly varied, in scenarios in which you don't commonly see them appear. For instance, it is difficult to act out a social desirability bias when debating whether Pho is better than Ramen. But my suspicion is that this difficulty layer actually adds to the learning process, by having individuals process the concepts at deeper levels of cognitive deliberation (levels-of-processing effect). I do think it is effective, and with time I’m hoping to run a study around it to see if I can actually improve, maybe with a kind of randomised trial, the decision making abilities of individuals exposed to the game.
Firstly, it would obviously be great if anyone wanted to try the game and tell me what they think. You can purchase it here (it's priced as cheaply as I can make it) and it comes in a fancy box. It has one flat rate for shipping, and in reality some locations people order from will result in me losing money, but on average, so long as I don't sell all my copies in Montenegro, it should average out okay. My intention is to use the profits from the sale of this game to fund a superior form of critical thinking training for high-school and university students that is actually effective, and turn this whole thing into a social enterprise. My experiences at university were quite formative, and I wanted to try and help in this way, by using the game profits as a funding mechanism to help younger people in these dimensions at a time when the internet, more broadly, seems to be eroding one cognitive faculty after another.
Secondly, does anyone have any suggestions about some dependent variables that I might be able to track with an experiment like this game? The problem I’ve found when looking at doing this is that certain biases lend themselves to certain experiments, but broader decision making abilities (related to biases and fallacies) are a bit trickier to operationalise. For instance, implicit bias tests do exist, but they only test that particular faculty. There are tests for things like the Hawthorne effect (observer effect). I can test, individually, how susceptible someone is to certain biases, but what I'm really trying to do is measure how susceptible, overall, they are to cognitive biases and logical fallacies.
The only way I figured it could be done is to show a video, for example, of two podcasters arguing some point (vaccines cause autism, say) and have individuals identify the cognitive biases and fallacies, if they exist, in those clips. This would be a kind of 'exam' at the end of the program.
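To show what I mean by operationalising that, here's a rough sketch of how such an exam could be reduced to a single number, assuming each clip comes with an expert-agreed answer key; the particular scoring rule (hit rate minus a penalty for false alarms) is just one option among many, and the clip names and labels are made up.

```python
# Rough sketch only: assumes each clip has an expert-agreed set of
# biases/fallacies, and each participant submits a set of labels per clip.
from typing import Dict, Set

def clip_score(expert: Set[str], participant: Set[str]) -> float:
    """Score one clip: reward hits against the key, penalise false alarms."""
    if not expert and not participant:
        return 1.0  # correctly reported 'no fallacies present'
    hits = len(expert & participant)
    false_alarms = len(participant - expert)
    hit_rate = hits / len(expert) if expert else 1.0
    fa_penalty = false_alarms / (false_alarms + 1)  # caps the cost of guessing everything
    return hit_rate - fa_penalty

def exam_score(answer_key: Dict[str, Set[str]],
               answers: Dict[str, Set[str]]) -> float:
    """Average clip score across the exam; the pre/post difference is the DV."""
    scores = [clip_score(answer_key[clip], answers.get(clip, set()))
              for clip in answer_key]
    return sum(scores) / len(scores)

# Toy example with made-up clips and labels.
key = {"clip1": {"illusory correlation"}, "clip2": set()}
pre = {"clip1": set(), "clip2": {"ad hominem"}}
post = {"clip1": {"illusory correlation"}, "clip2": set()}
print(exam_score(key, post) - exam_score(key, pre))  # improvement after playing the game
```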
I figured, given the demographic and interests of LessWrong, someone might be able to suggest a DV I could use to try and run some experiments.