I interpreted the name as meaning "performed free association until the faculty of free association was exhausted". It is, of course, very important that exhausting the faculty does not guarantee that you have exhausted the possibility space.
Alas, unlike in cryptography, it's rarely possible to come up with "clean attacks" that clearly show that a philosophical idea is wrong or broken.
I think the state of philosophy is much worse than that. On my model, most philosophers don't even know what "clean attacks" are, and will not be impressed if you show them one.
Example: Once in a philosophy class I took in college, we learned about a philosophical argument that there are no abstract ideas. We read an essay where it was claimed that if you try to imagine an abstract idea (say, the concept of a dog), and then pay close attention to what you are imagining, you will find you are actually imagining some particular example of a dog, not an abstraction. The essay went on to say that people can have "general" ideas where that example stands for a group of related objects rather than just for a single dog that exactly matches it, but that true "abstract" ideas don't exist.[1]
After we learned about this, I approached the professor and said: This doesn't work for the idea of abstract ideas. If you apply the same explanation, it would say: "Aha, you think you're thinking of abstract ideas in the abstract, but you're not! You're actually thinking of some particular example of an abstract idea!" But if I'm thinking of a particular example, then there must be at least one example to think of, right? So that would prove there is at least one member of the class of abstract ideas (whatever "abstract ideas" means to me, inside my own head). Conversely, if I'm not thinking of an example, then the essay's proposed explanation is wrong for the idea of abstract ideas itself. So either way, there must be at least one idea that isn't correctly explained by the essay.
The professor did not care about this argument. He shrugged and brushed it off. He did not express agreement, he did not express a reason for disagreement, he was not interested in discussing it, and he did not encourage me to continue thinking about the class material.
On my model, the STEM fields usually have faith in their own ideas, in a way where they actually believe those ideas are entangled with the Great Web. They expect ideas to have logical implications, and expect the implications of true ideas to be true. They expect to be able to build machines in real life and have those machines actually work. It's something like taking ideas seriously, and something like taking logic seriously, and taking the concept of truth seriously, and seriously believing that we can learn truth if we work hard. I'm not sure if I've named it correctly, but I do think there's a certain mental motion of genuine truth-seeking that is critical to the health of these fields and that is much less common in many other fields.
Also on my model, the field of philosophy has even less of this kind of faith than most fields. Many philosophers think they have it, but actually they mostly have the kind of faith where your subconscious mind chooses to make your conscious mind believe a thing for non-epistemic reasons (like it being high-status, or convenient for you). And thus, much of philosophy (though not quite all of it) is more like culture war than truth-seeking (both among amateurs and among academics).
I think if I had made an analogous argument in any of my STEM classes, the professor would have at least taken it seriously. If they didn't believe the conclusion but also couldn't point out a specific invalid step, that would have bothered them.
I suspect my philosophy professor tagged my argument as being from the genre of math, rather than the genre of philosophy, then concluded he would not lose status for ignoring it.
I think this essay was clumsily pointing to a true and useful insight about how human minds naturally tend to use categories, which is that those categories are, by default, more like fuzzy bubbles around central examples than like formal definitions. I suspect the author then over-focused on visual imagination, checked a couple of examples, and extrapolated irresponsibly to arrive at a conclusion that I hope is obviously false to most people with STEM backgrounds.
An awful lot of people, probably a majority of the population, sure do feel a deep yearning to either inflict or receive pain, to take total control over another or give total control to another, to take or be taken by force, to abandon propriety and just be a total slut, to give or receive humiliation, etc.
This is rather tangential to the main thrust of the post, but a couple of people used a react to request a citation for this claim.
One noteworthy source is Aella's surveys on fetish popularity and tabooness. Here is an older one that gives the % of people reporting interest, and here is a newer one showing the average amount of reported interest on a scale from 0 (none) to 5 (extreme), both with tens of thousands of respondents.
Very approximate numbers that I'm informally reading off the graphs:
Note that a 3/5 average interest could mean either that 60% of people are extremely into it or that nearly everyone is moderately into it (or anything in between). This seems to imply that the survey behind the more recent graph got significantly kinkier answers overall, unless I'm misunderstanding something. (I'm fairly certain that people with zero interest ARE being included in the average, because several other fetishes have average interest below 1, which would be impossible otherwise.)
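To make that ambiguity concrete, here's a toy check (with invented numbers, not Aella's data) showing two very different distributions that produce the same average:

```python
# Two invented response distributions on a 0-5 scale, both with mean 3.0.
polarized = [5] * 60 + [0] * 40   # 60% report extreme interest, 40% report none
moderate  = [3] * 100             # everyone reports moderate interest

print(sum(polarized) / len(polarized))  # 3.0
print(sum(moderate) / len(moderate))    # 3.0
```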
If we believe this data, it seems pretty safe to guess that a majority of people are into at least one of these things (unless there is near-total overlap between them). The claim that a majority "feel a deep yearning" is not strongly supported but seems plausible.
(I was previously aware that BDSM interest was pretty common, for an extremely silly reason: I saw some people arguing about whether or not Eliezer Yudkowsky was secretly the author of The Erogamer; one of them cited the presence of BDSM in the story as evidence in favor, and I wanted to know the base rate to determine how to weigh that evidence.
I made an off-the-cuff guess of "between 1% and 10%" and then did a Google search with only mild hope that this statistic would be available. I wasn't able to re-find those pages today, but as I recall, my first search result was a page describing a survey of ~1k people claiming a ~75% rate of interest in BDSM, and my second was a page describing a survey of ~10k people claiming ~40% had participated in some form of BDSM and an additional ~40% were interested in trying it. I was also surprised to read (on the second page) that submission was more popular than dominance, masochism was more popular than sadism, and masochism remained more popular than sadism even when looking only at males. Also, bisexuality was reportedly something like 5x more common within the BDSM-interested group than outside it.)
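For what it's worth, the weighing I had in mind is just a likelihood ratio: the higher the base rate of BDSM interest, the less the story's content distinguishes one candidate author from another. A toy calculation, with invented numbers rather than real estimates of anything:

```python
# Toy likelihood-ratio calculation with invented numbers -- not real estimates.
# E = "the story prominently features BDSM".
p_e_if_author = 0.9                 # assumed P(E | the suspected author wrote it)

for base_rate in (0.05, 0.40):      # assumed P(E | someone else wrote it)
    print(base_rate, p_e_if_author / base_rate)
# 0.05 -> ratio 18.0: strong evidence, if BDSM interest were rare
# 0.40 -> ratio 2.25: much weaker evidence, at the base rates the surveys suggest
```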
If you're a moral realist, you can just say "Goodness" instead of "Human Values".
I notice I am confused. If "Goodness is an objective quality that doesn't depend on your feelings/mental state", then why would the things humans actually value necessarily be the same as Goodness?
What would you want such a disclaimer or hint to look like?
(I am concerned that if a post says something like "this post is aimed at low-level people who don't yet have a coherent foundational understanding of goodness and values" then the set of people who actually continue reading will not be very well correlated with the set of people we'd like to have continue reading.)
A smart human-like mind looking at all these pictures would (I claim) assemble them all into one big map of the world, like the original, either physically or mentally.
On my model, humans are pretty inconsistent about doing this.
I think humans tend to build up many separate domains of knowledge that they rarely compare, and can even hold contradictory heuristics, selectively remembering whichever one agrees with their current conclusion.
For example, I once had a conversation about a video game where someone said you should build X "as soon as possible", and then later in the conversation they posted their full build priority order and X was nearly at the bottom.
In another game, I once noticed that I had a presumption that +X food and +X industry are probably roughly equally good, and also a presumption that +Y% food and +Y% industry are probably roughly equally good, but that these presumptions were contradictory at typical food and industry levels (because +10% industry might end up being about 5 industry, but +10% food might end up being more like 0.5 food). I played for dozens of hours before realizing this.
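In rough numbers (invented for illustration, not the game's actual values), the clash looks like this:

```python
# Invented numbers for illustration -- not the game's actual values.
industry = 50.0   # typical industry output at that point in the game
food     = 5.0    # typical food surplus at the same point

# Heuristic A: +X food ~ +X industry (flat bonuses are comparable).
# Heuristic B: +Y% food ~ +Y% industry (percentage bonuses are comparable).
pct_industry = 0.10 * industry   # +10% industry -> +5.0 flat industry
pct_food     = 0.10 * food       # +10% food     -> +0.5 flat food

# Heuristic B says these two bonuses are roughly equal; heuristic A says the
# industry bonus is worth ~10x the food bonus. They can't both be right.
print(pct_industry, pct_food)    # 5.0 0.5
```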
I don't think Eliezer's actual real-life predictions are narrow in anything like the way Klurl's coincidentally-correct examples were narrow.
Also, Klurl acknowledges several times that Trapaucius' arguments do have non-zero weight, just nothing close to the weight they'd need to overcome the baseline improbability of such a narrow target.
Thank you for being more explicit.
If you write a story where a person prays and then wins the lottery as part of a demonstration of the efficacy of prayer, that is fictional evidence even though prayer and winning lotteries are both real things.
In your example, it seems to me that the cheat is specifically that the story presents an outcome that would (legitimately!) be evidence for its intended conclusion IF that outcome were representative of reality, when in fact most real-life outcomes would support the conclusion much less than that. (I.e., there are many more people who pray and then fail to win the lottery than people who pray and then do win.)
If you read a story where someone tried and failed to build a wooden table, then attended a woodworking class, then tried again to build a table and succeeded, I think you would probably consider that a fair story. Real life includes some people who attend woodworking classes and then still can't build a table when they're done, but the story's outcome is reasonably representative, and therefore it's fair.
Notice that, in judging one of these fair and the other unfair, I am relying on a world-model that says that one (class of) outcome is common in reality and the other is rare in reality. Hypothetically, someone could disagree about the fairness of these stories based only on having a different world-model, while using the same rules about what sorts of stories are fair. (Maybe they think most woodworking classes are crap and hardly anyone gains useful skills from them.)
But I do not think a rare outcome is automatically unfair. If a story wants to demonstrate that wishing on a star doesn't work by showing someone who needs a royal flush wishing on a star and then drawing a full house (thereby losing), the full house is an unlikely outcome, but since it's unlikely in a way that doesn't support the story's aesop, it's not being used as a cheat. (In fact, notice that every exact set of 5 cards they might have drawn was unlikely.)
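A quick check of that parenthetical, using standard-library combinatorics:

```python
from math import comb

hands = comb(52, 5)   # number of distinct 5-card hands
print(hands)          # 2598960
print(1 / hands)      # ~3.8e-07: every exact hand is roughly this unlikely
```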
If your concern is that Klurl and Trapaucius encountered a planet that was especially bad for them in a way that makes their situation seem far more dangerous than was statistically justified based on the setup, then I think Eliezer probably disagrees with you about the probability distribution that was statistically justified based on the setup.
If, instead, your concern is that the correspondence between Klurl's hypothetical examples and what they found when reaching the planet was improbably high, then I agree that is very coincidental, but I do not think that coincidence is being used as support for the story's intended lessons. The story is not trying to convince you that Klurl can narrowly predict exactly what they'll find, and in fact Klurl denies this several times.
The coincidence could perhaps cause some readers to conclude a high degree of predictability anyway, despite lack of intent. I'd consider that a bad outcome, and my model of Eliezer also considers that a bad outcome. I'm not sure there was a good way to mitigate that risk without some downside of equal or greater severity, though. I think there's pedagogical value in pointing out a counter-example that is familiar to the reader at the time the argument is being made, and I don't think any simple change to the story would allow this to happen without it being an unlikely coincidence.
I notice I am confused about nearly everything you just said, so I imagine we must be talking past each other.
"Possible" is a subtle word that means different things in different contexts. For example, if I say "it is possible that Angelica attended the concert last Saturday," that (probably) means possible relative to my own knowledge, and is not intended to be a claim about whether or not you possess knowledge that would rule it out.
If someone says "I can(not) imagine it, therefore it's (not) possible", I think that is valid IF they mean "possible relative to my understanding", i.e. "I can(not) think of an obstacle that I don't see any way to overcome".
(Note that "I cannot think of a way of doing it that I believe would work" is a weaker claim, and should not be regarded as proof that the thing is impossible even just relative to your own knowledge.)
If that is what they mean, then I think the way to move forward is for the person who imagines it impossible to point out an obstacle that seems insurmountable to them, and then the person who imagines it possible to explain how they imagine solving it, and repeat.
If someone is trying to claim that their (in)ability to imagine something means that the laws of the universe (dis)allow it, then I think the person who imagines it impossible had better be able to point out a specific conflict between the proposal and known law, and the person who imagines it possible had better be able to draw a blueprint describing the thing's composition and write down the equations governing its function. Otherwise I call bullshit. (Yes, I'm aware I am calling bullshit on a number of philosophers, here.)