
Comments

This comment contains my thoughts.

  1. If you have N situations, it does not automatically mean they all have the same probability. I call the mistake of not recognising this the "equiprobability mistake".

  2. Outcomes have to be mutually exclusive. So people make the mistake at the very beginning, when constructing the Ω set. Two of those situations are not exclusive: one of them literally guarantees, with 100% certainty, that the other will happen. When you have a correct Ω, the probability of one outcome given any other is zero. To check whether outcomes are exclusive, draw the branching-universe graph, imagine a single slice at a much later point in time (Sunday), and count how many parallel universes reached that point. You will find that only two did, but thirders count the second one twice. No matter what situation you study, the nodes you take as outcomes CAN NEVER BE CONSECUTIVE. If that were not a rule, I would be able to add "I throw a die" to the set of possible numbers the die shows at the end, and I would get nonsense which is not an Ω: {I throw the die, die shows 1, die shows 2, shows 3, 4, 5, 6}. Thirders literally construct such an Ω and thus get 1/3 for an outcome, just like I would get a "1/7 chance" of getting a 6 if I also used a corrupted Ω set.

  3. There is a table. I place on it two apples, a jar, a bin, and a box. I put the first apple into the jar. I put the jar into the box. I put the second apple into the bin. A thirder comes and starts counting: "How many apples in the jar? One. How many apples in the box? One. How many apples in the bin? One. So, there are 3 apples." And he forgets that the apple in the jar and the apple in the box are THE SAME apple.

P(Monday|Tails) = P(Tuesday|Tails) is technically true, not "because two entities are equal", but because an entity is compared to itself! It is a single outcome, phrased in two different ways by using consecutive events of that single outcome.

When the apple is in the jar, that guarantees it is also in the box, in the same way that the <Monday and tails> situation guarantees <Tuesday and tails>.

In terms of graphs, both situations are literally just the node sliding along the branch, never reaching any branching point.
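To make the counting from point 2 concrete, here is a minimal simulation sketch (my own illustration with made-up trial counts, not anyone's canonical code). It counts each tails-universe once at the Sunday slice, and separately counts every awakening the way thirders do, so both the 1/2 and the 1/3 are visible side by side.

```python
import random

def sleeping_beauty_counts(trials=100_000, seed=0):
    """Count branches at the 'Sunday slice' vs. counting every awakening."""
    rng = random.Random(seed)
    heads_branches = 0        # universes where the coin landed heads
    heads_awakenings = 0      # awakenings that happen in a heads-universe
    total_awakenings = 0
    for _ in range(trials):
        coin = rng.choice(["heads", "tails"])
        if coin == "heads":
            heads_branches += 1
            heads_awakenings += 1   # heads: a single awakening (Monday)
            total_awakenings += 1
        else:
            total_awakenings += 2   # tails: Monday and Tuesday awakenings
    print("P(heads), counting each branch once  :", heads_branches / trials)                 # ~1/2
    print("P(heads), counting every awakening   :", heads_awakenings / total_awakenings)     # ~1/3

sleeping_beauty_counts()
```

The second number is what you get if <Monday and tails> and <Tuesday and tails> are treated as two separate members of Ω; the first is what the Sunday slice gives when each tails-universe is counted once.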

The evolution is meant to quickly reduce the search space. If every perk in the pool starts with just 1 ticket, then most perks will only be tested once (because they lead to a loss and their population immediately drops to zero). If the very first run is a loss, then the true (unknown) winrate of the perk is probably not 90%, so we should not regret throwing it away.

The synergies will affect the population sizes later. Pairs that synergise are slightly more likely to lead to a win and to increase the tickets of both perks in the pool. After some weak perks have fallen off and the total number of species in the pool has shrunk, we expect potential synergy-makers to "meet" more often. That is a step in the right direction. If their win was just a lucky coincidence and the perks are not consistently good, they will die out a bit later.

Of course, if the very best build relies purely on synergy and is a combination of perks that are very bad on their own, it will not be found. I acknowledge there is no way to find the true best combination; that search would require brute-force playing every possible combination 20+ times. The aim is to find a manageable algorithm which does not rely on personal evaluation at all (because opinions are part of the reason for the "stagnation of the meta").
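For clarity, here is a minimal sketch of the ticket mechanism I have in mind (the perk names and numbers are placeholders, and the match result is faked with a biased coin flip): every perk starts with one ticket, builds are drawn in proportion to tickets, a win adds a ticket to each perk in the build, and a loss removes one.

```python
import random

def draw_build(tickets, slots=4, rng=random):
    """Sample a build of distinct perks, weighted by current ticket counts."""
    pool = {p: t for p, t in tickets.items() if t > 0}
    build = []
    for _ in range(slots):
        perks, weights = zip(*pool.items())
        pick = rng.choices(perks, weights=weights, k=1)[0]
        build.append(pick)
        del pool[pick]                      # a perk cannot repeat within one build
    return build

def update(tickets, build, won):
    """A win adds a ticket to every perk in the build; a loss removes one (floor at zero)."""
    for perk in build:
        tickets[perk] = max(0, tickets[perk] + (1 if won else -1))

# Toy loop: 116 perks, every perk starts with a single ticket.
tickets = {f"perk_{i}": 1 for i in range(116)}
for match in range(200):
    alive = [p for p, t in tickets.items() if t > 0]
    if len(alive) < 4:
        break                               # pool collapsed; the toy loop stops
    build = draw_build(tickets)
    won = random.random() < 0.3             # stand-in for the result of a real match
    update(tickets, build, won)

print(sum(1 for t in tickets.values() if t > 0), "perks still have tickets")
```

In a real run the `won` flag would come from an actual match, and the perks whose tickets never recover are exactly the ones that get dropped from the search space.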

http://neilsloane.com/doc/cent4.html According to this page, all the tables have an unsuitable "orientation". For example, there is a table with 99 factors and 2 levels, but the task would need 4 factors and 116 levels. The number of levels must be much bigger.

Part 1. Splitting into groups and then synthesizing the build is a good analytical approach. But let me give an example of what an unexpected/unconventional synergy looks like.

Perk 1: You see blood drops on the ground better. Perk 2: You get a notification whenever a bird is disturbed by an enemy, and you can pinpoint the direction.

Both of these perks would fall into the general and quite large bin of "information perks". Neither description gives any hint that they would work together. Here is a scenario which happened a lot in my games: you damage one opponent and leave immediately without tracking him down. The opponents communicate among themselves that you changed your plans, and they react accordingly by changing positions. Their relocation likely disturbs a bird, so you can quickly find and damage the second opponent. The first one will likely find a hiding spot to heal, but you switch targets again and track him by his blood drops. Switching targets again pushes the opponents to take strategically better spots and disturb birds, giving you information again. That creates a loop where you flip-flop between reacting to the info those two perks give, AND create a bit of confusion and inefficiency in the opponent team, AND they waste time (which is a limited resource) responding to your changes of target. (If you were hunting just the first player, as if you had no info perks and were afraid to lose track of the injured guy, all three other players would sit on their tasks and feel safe.)

Many perks in the game are not of the form "get +x% to stat y"; they introduce a new mechanic. If all perks gave some numerical benefit to the stats, I think it would be possible to use the Wolfram language to solve a system of 100+ equations.
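As a purely made-up illustration of what I mean (the stats, formulas, and numbers below are invented, and I am using Python/SymPy in place of Wolfram): if perks only shifted stats, a build question could be reduced to a small system of equations and handed to a solver.

```python
import sympy as sp

# Hypothetical stat model: a speed perk shortens chase time, a cooldown perk
# raises ability uptime. We ask which bonuses hit a target chase time while
# contributing equally. All numbers are placeholders, not game values.
speed_bonus, cooldown_bonus = sp.symbols("speed_bonus cooldown_bonus", positive=True)

base_chase = 30                                  # seconds, made-up base value
chase_time = base_chase / (1 + speed_bonus)      # faster movement -> shorter chase
uptime_gain = 1 + cooldown_bonus                 # shorter cooldown -> more uptime

solution = sp.solve(
    [sp.Eq(chase_time, 25), sp.Eq(uptime_gain, 1 + speed_bonus)],
    [speed_bonus, cooldown_bonus],
    dict=True,
)
print(solution)   # [{speed_bonus: 1/5, cooldown_bonus: 1/5}]
```

With 100+ stat perks the system would just be bigger but still mechanical; the problem is exactly that perks which add new mechanics do not fit into equations like these.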

Part 2. Thanks for the video, it is right on topic. I did not know the problem is called full factorial analysis.

The video shows a problem with 3 slots and 2 possible values for each slot, and those values can repeat (LHH). In my problem there are 4 slots and 116 possible unique categorical values, which cannot repeat from slot to slot. I do not understand the principles of scaling here. So, I build a table such that any column contains an equal number of observations with and without each perk. I don't quite understand how the table is generated, but thanks for the food for thought; I'll dig in this direction.
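While I dig into how such tables are generated properly, here is a rough sketch of the simpler thing I can already do (my own construction, not the one from the video): generate 4-perk builds while always picking the least-tested perks, so every perk ends up with the same number of observations.

```python
import random
from collections import Counter

def balanced_builds(n_perks=116, slots=4, n_builds=580, seed=0):
    """Generate builds so every perk is tested the same number of times.

    n_builds * slots should be a multiple of n_perks for perfect balance
    (here 580 * 4 = 2320, i.e. 20 appearances per perk).
    """
    rng = random.Random(seed)
    appearances = Counter({p: 0 for p in range(n_perks)})
    builds = []
    for _ in range(n_builds):
        # prefer the perks that have been used least so far, break ties randomly
        candidates = sorted(range(n_perks), key=lambda p: (appearances[p], rng.random()))
        build = candidates[:slots]
        for p in build:
            appearances[p] += 1
        builds.append(build)
    return builds, appearances

builds, appearances = balanced_builds()
print(min(appearances.values()), max(appearances.values()))  # 20 20 -> perfectly balanced
```

This only balances how often each individual perk is tested; it does not balance pairs of perks, which, as far as I understand, is what proper factorial/orthogonal designs are actually for.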

I am new to the website and to fanfic culture, so I have a couple of questions.

  • May I get permission to translate "Luna ..." into Russian?
  • Do I need to get permission when I want to popularise something from this website by translating it?
  • If I committed to the translation, would it be suitable to host it here on LessWrong (in a personal shortform, for example), or is the site English-only?

I have had a similar mindset in games for a long time. In my experience, the exploration you describe has felt like the fastest way to improve in games (and in maths: when a teacher says that some method will not work for a problem, without telling me why, I am likely to try it and see for myself how the edge cases invalidate the method). Apart from giving a broader intuition about the subject, I believe it enriches the "toolkit".

There is a 1v4 computer game, Dead by Daylight. The thing that attracts me is the ability to make your own build: the set of powers your character enters the match with. There are more than 10^12 combinations, and that creates an optimisation problem. The community has agreed on around 12 best builds, and a new player can just take one of them into the game without understanding why they are considered best. But I decided to explore and play with random builds. This opened up a vast space of rare power synergies the community never agrees to discuss; unconventional builds and playstyles get thrown into the bin.

If I had friends who play computer games, I would really like to run an experiment. I would ask them to start learning the game, completing the same number of matches per weekend. One group would be given the classical introduction: materials and videos that explain the current meta ("best" builds and methods of winning, as the community states them), and they would be limited to the professionals' recommendations. The other group would be allowed to play only with random builds and would not be shown the classical "how to win" materials; instead they would watch videos where gamers explore the space of possible builds, try to win by unconventional methods, experiment with weird winning criteria, play with handicaps, or do other things the community majority considers "inefficient".

My intuition says that the second group would improve at the game much faster and, after 50 matches, be at a higher level (even though they might have lost more matches during training). They would find the powers that fit them better. They would know more game mechanics that they could apply in edge cases. They might also play on a psychological level, surprising opponents with some weird strategy.

One of the reasons I did not start that little experiment is the absence of a visible Elo rating in the game. The only way to test which group is better is currently face-to-face matches between them, but that might not be a correct evaluation of skill. It is the rock-paper-scissors problem: when one group is taught to only throw rock and the other is taught to adapt, the outcome is trivial. True testing would require many matches against random opponents to reach statistically significant conclusions. I have not yet found people with the dedication for that. So I am left with a strong belief that I have not found a way to test.
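To put a rough number on "many matches": a standard two-proportion power calculation (the winrates below are made up) gives the order of magnitude of games each group would need before the comparison means anything.

```python
from statistics import NormalDist

def matches_needed(p1, p2, alpha=0.05, power=0.8):
    """Sample size per group for a two-proportion z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Made-up winrates: if the "exploration" group wins 55% of matches vs 50% for
# the "meta" group, each group needs on the order of 1500+ matches.
print(round(matches_needed(0.50, 0.55)))   # ~1562 matches per group
```

So something like 1500+ matches per group just to detect a 5-percentage-point winrate gap, far beyond the 50 training matches; that is the dedication problem I mean.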