In principle, any game where the player has a full specification of how the game works is immune to this specific failure mode, whether it's multiplayer or not.  (I say "in principle" because this depends on the player actually using the info; I predict most people playing Slay the Spire for the first time will not read the full list of cards before they start, even if they can.)

The one-shot nature makes me more concerned about this specific issue, rather than less.  In a many-shot context, you get opportunities to empirically learn info that you'd otherwise need to "read the designer's mind" to guess.

Mixing in "real-world" activities presumably helps.

If it were restricted only to games, then playing a variety of games seems to me like it would help a little but not that much (except to the extent that you add in games that don't have this problem in the first place).  Heuristics for reading the designer's mind often apply to multiple game genres (partly, but not solely, because approx. all genres now have "RPG" in their metaphorical DNA), and even if different heuristics are required it's not clear that would help much if each individual heuristic is still oriented around mind-reading.

I have an intuition that you're partly getting at something fundamental, and also an intuition that you're partly going down a blind alley, and I've been trying to pick apart why I think that.

I think that "did your estimate help you strategically?" has a substantial dependence on the "reading the designer's mind" stuff I was talking about above.  For instance, I've made extremely useful strategic guesses in a lot of games using heuristics like:

  • Critical hits tend to be over-valued because they're flashy
  • Abilities with large numbers appearing as actual text tend to be over-valued, because big numbers have psychological weight separate from their actual utility
  • Support roles, and especially healing, tend to be under-valued, for several different reasons that all ultimately ground out in human psychology

All of these are great shortcuts to finding good strategies in a game, but they all exploit the fact that some human being attempted to balance the game, and that that human had a bunch of human biases.

I think if you had some sort of tournament about one-shotting Luck Be A Landlord, the winner would mostly be determined by mastery of these sorts of heuristics, which mostly doesn't transfer to other domains.

However, I can also see some applicability for various lower-level, highly-general skills like identifying instrumental and terminal values, gears-based modeling, quantitative reasoning, noticing things you don't know (then forming hypotheses and performing tests), and so forth.  Standard rationality stuff.

 

Different games emphasize different skills.  I know you were looking for specific things like resource management and value-of-information, presumably in an attempt to emphasize skills you were more interested in.

I think "reading the designer's mind" is a useful category for a group of skills that is valuable in many games but that you're probably less interested in, and so minimizing it should probably be one of the criteria you use to select which games to include in exercises.

I already gave the example of book games as revolving almost entirely around reading the designer's mind.  One example at the opposite extreme would be a game where the rules and content are fully-known in advance...though that might be problematic for your exercise for other reasons.

It might be helpful to look for abstract themes or non-traditional themes, which will have less associational baggage.

I feel like it ought to be possible to deliberately design a game to reward the player mostly for things other than reading the designer's mind, even in a one-shot context, but I'm unsure how to systematically do that (without going to the extreme of perfect information).

Oh, hm.  I suppose I was thinking in terms of better-or-worse quantitative estimates--"how close was your estimate to the true value?"--and you're thinking more in terms of "did you remember to make any quantitative estimate at all?"

And so I was thinking the one-shot context was relevant mostly because the numerical values of the variables were unknown, but you're thinking it's more because you don't yet have a model that tells you which variables to pay attention to or how those variables matter?

I'm kinda arguing that the skills relevant to the one-shot context are less transferable, not more.

It might also be that they happen to be the skills you need, or that everyone already has the skills you'd learn from many-shotting the game, and so focusing on those skills is more valuable even if they're less transferable.

But "do I think the game designer would have chosen to make this particular combo stronger or weaker than that combo?" does not seem to me like the kind of prompt that leads to a lot of skills that transfer outside games.

OK.  So the thing that jumps out at me here is that most of the variables you're trying to estimate (how likely are cards to synergize, how large are those synergies, etc.) are going to be determined mostly by human psychology and cultural norms, to the point where your observations of the game itself may play only a minor role until you get close-to-complete information.  This is the sort of strategy I call "reading the designer's mind."

The frequency of synergies is going to be some compromise between what the designer thought would be fun and what the designer thought was "normal" based on similar games they've played.  The number of cards is going to be some compromise between how motivated the designer was to do the work of adding more cards and how many cards customers expect to get when buying a game of this type. Etc.

 

As an extreme example of what I mean, consider book games, where the player simply reads a paragraph of narrative text describing what's happening, chooses an option off a list, and then reads a paragraph describing the consequences of that choice.  Unlike other games, where there are formal systematic rules describing how to combine an action and its circumstances to determine the outcome, in these games your choice just does whatever the designer wrote in the corresponding box, which can be anything they want.

I occasionally see people praise this format for offering consequences that truly make sense within the game-world (instead of relying on a simplified abstract model that doesn't capture every nuance of the fictional world), but I consider that to be a shallow illusion.  You can try to guess the best choice by reasoning out the probable consequences based on what you know of the game's world, but the answers weren't actually generated by that world (or any high-fidelity simulation of it).  In practice you'll make better guesses by relying on story tropes and rules of drama, because odds are quite high that the designer also relied on them (consciously or not).  Attempting to construct a more-than-superficial model of the story's world is often counter-productive.

And no matter how good you are, you can always lose just because the designer was in a bad mood when they wrote that particular paragraph.

 

Strategy games like Luck Be A Landlord operate on simple and knowable rules, rather than the inscrutable whims of a human author (which is what makes them strategy games).  But the particular variables you listed aren't the outputs of those rules, they're the inputs that the designer fed into them.  You're trying to guess the one part of the game that can't be modeled without modeling the game's designer.

I'm not quite sure how much this matters for teaching purposes, but I suspect it matters rather a lot.  Humans are unusual systems in several ways, and people who are trying to predict human behavior often deploy models that they don't use to predict anything else.

What do you think?

I feel confused about how Fermi estimates were meant to apply to Luck Be A Landlord.  I think you'd need error bars much smaller than 10x to make good moves at most points in the game.

I came to a similar conclusion when thinking about the phenomenon of "technically true" deceptions.

Most people seem to have a strong instinct to say only technically-true things, even when they are deliberately deceiving someone (and even when this restriction significantly reduces their chances of success).  Yet studies find that the victims of a deception don't much care whether the deceiver was being technically truthful.  So why the strong instinct to do this costly thing, if the interlocutor doesn't care?

I currently suspect the main evolutionary reason is that a clear and direct lie makes it easier for the victim to trash your reputation with third parties.  "They said X; the truth was not-X; they're a liar."

If you only deceive by implication, then the deception depends on a lot of context that's difficult for the victim to convey to third parties.  The act of making the accusation becomes more costly, because more stuff needs to be communicated.  Third parties may question whether the deception was intentional.  It becomes harder to create common knowledge of guilt:  Even if one listener is convinced, they may doubt whether other listeners would be convinced.

Thus, though the victim is no less angry, the counter-attack is blunted.

Some concepts that I use:

Randomness is when the game tree branches according to some probability distribution specified by the rules of the game.  Examples:  rolling a die; cutting a deck at a random card.

Slay the Spire has randomness; Chess doesn't.

Hidden Information is when some variable that you can't directly observe influences the evolution of the game.  Examples: a card in an opponent's hand, which they can see but you can't; the 3 solution cards set aside at the start of a game of Clue; the winning pattern in a game of Mastermind.

People sometimes consider "hidden information" to include randomness, but I more often find it helpful to separate them.

However, it's not always obvious which model should be used.  For example, I usually find it most helpful to think of a shuffled deck as generating a random event each time you draw from the deck (as if you were taking a randomly-selected card from an unordered pool).  But it's also possible to think of shuffling the deck as having created hidden information (the order that the deck is in), and it may be necessary to switch to this more-complicated model if there are rules that let players modify the deck (e.g. peeking at the top card, or inserting a card at a specific position).
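The equivalence of the two models (up until some rule lets you interact with the deck's order) can be sketched in a few lines of Python.  This is an illustrative toy of my own, not anything from a specific game: for the first draw, the "fresh random event" model and the "hidden shuffled order" model produce the same distribution.

```python
from collections import Counter
import random

CARDS = ["A", "B", "C"]

def first_draw_random_event(rng):
    # Model 1: each draw is a fresh random event --
    # pick uniformly from the unordered pool.
    return rng.choice(CARDS)

def first_draw_hidden_order(rng):
    # Model 2: shuffling created hidden information (a secret order);
    # drawing merely reveals the top of that order.
    order = CARDS[:]
    rng.shuffle(order)
    return order[0]

rng = random.Random(0)
n = 30_000
c1 = Counter(first_draw_random_event(rng) for _ in range(n))
c2 = Counter(first_draw_hidden_order(rng) for _ in range(n))

# Both models give (approximately) the same uniform distribution over the
# first card.  They only come apart once rules let you peek at or rearrange
# the deck's hidden order.
for card in CARDS:
    assert abs(c1[card] / n - 1/3) < 0.02
    assert abs(c2[card] / n - 1/3) < 0.02
```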

Similar reasoning applies to a PRNG:  I usually think of it as a random event each time a number is generated, though it's also possible to think of it as a hidden seed value that you learn a little bit about each time you observe an output (and a designer may need to think in this second way to ensure their PRNG is not too exploitable).
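To illustrate the exploitability point, here's a toy sketch (my own example, with arbitrary constants) of a linear congruential generator whose entire state is emitted as its output.  An observer who sees one output learns the "hidden" seed completely and can predict everything that follows, which is exactly the hidden-information view of a PRNG:

```python
def lcg(state):
    # Toy LCG with arbitrary illustrative constants; a real PRNG would
    # conceal most of its state rather than emitting it all.
    return (1103515245 * state + 12345) % (2**31)

secret_seed = 42
first_output = lcg(secret_seed)

# Seeing first_output reveals the hidden state exactly, so every future
# output is now predictable.
predicted_next = lcg(first_output)
actual_next = lcg(lcg(secret_seed))
assert predicted_next == actual_next
```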

Rule of thumb:  If you learn some information about the same variable more than once, then it's hidden info.  For instance, a card in your opponent's hand will influence their strategy, so you gain a little info about it whenever they move, which makes it hidden info.  If a variable goes from completely hidden to completely revealed in a single step (or if any remaining uncertainty has no impact on the game), then it's just randomness.

Interesting Side Note:  Monte Carlo Tree Search can handle randomness just fine, but really struggles with hidden information.

A Player is a process that selects between different game-actions based on strategic considerations, rather than a simple stochastic process.  An important difference between Chess and Slay the Spire is that Chess includes a second player.

We typically treat players as "outside the game" and unconstrained by any rules, though of course in any actual game the player has to be implemented by some actual process.  The line between "a player who happens to be an AI" and "a complicated game rule for selecting the next action" can be blurry.

A Mixed Equilibrium is when the rules of the game reward players for deliberately including randomness in their decision process.  For instance, rock-paper-scissors proceeds completely deterministically for a given set of player inputs, yet there remains an important sense in which RPS is random and Chess is not: RPS rewards players for acting randomly.
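The sense in which RPS rewards randomness can be made concrete with a small expected-value check (an illustrative sketch of my own): against the uniform 1/3-1/3-1/3 mix, every opposing move has an expected payoff of zero, so no pure strategy can exploit it.

```python
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    # Payoff to the player choosing `a`: 1 for a win, -1 for a loss, 0 for a tie.
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def expected_payoff(my_mix, opp_move):
    # Expected payoff of a mixed strategy against a fixed opposing move.
    return sum(p * payoff(m, opp_move) for m, p in my_mix.items())

uniform = {m: 1 / 3 for m in MOVES}

# Uniform mixing makes the opponent indifferent between all their moves,
# which is what makes it the equilibrium strategy.
for move in MOVES:
    assert abs(expected_payoff(uniform, move)) < 1e-9
```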

 

I have what I consider to be important and fundamental differences in my models between any two of these games:  Chess, Battleship, Slay the Spire, and Clue.

Yet, you can gain an advantage in any of these games by thinking carefully about your game model and its implications.

If your definition of "hidden information" implies that Chess has it, then I think you will predictably be misunderstood.

Terms that I associate with (gaining advantage by spending time modeling a situation) include:  thinking, planning, analyzing, simulating, computing ("running the numbers")

I haven't played it, but someone disrecommended it to me on the basis that there was no way to know which skills you'd need to survive the scripted events except to have seen the script before.
