If you have feedback for me, you can fill out the form at .

Or you can email me, at [the second letter of the alphabet]@[my username].net

Other facts about how I experience this:

* It's often opposed to internal forces like "social pressure to believe the thing", or "bucket errors I don't feel ready to stop making yet"

* Noticing it doesn't usually result in immediate enlightenment / immediately knowing the answer, but it does result in some kind of mini-catharsis, which is great because it helps me actually want to notice it more.

* It's not always the case that an opposing loud voice was wrong, but I think it is always the case that the loud voice wasn't really justified in its loudness.

A thing I sort-of hoped to see in the "a few caveats" section:

* People's boundaries do not emanate purely from their platonic selves, irrespective of the culture they're in and the boundaries set by that culture. Related to the point about grooming/testing-the-waters, if the cultural boundary is set at a given place, people's personal boundaries will often expand or retract somewhat, to be nearer to the cultural boundary.

Perhaps controversially, I think this is a bad selection scheme even if you replace "password" with any other string.

any password generation scheme where this is relevant is a bad idea

I disagree; as the post mentions, sometimes considerations such as memorability come into play. One example might be choosing random English sentences as passwords. You might do that by choosing a random parse tree of a certain size. But some English sentences have ambiguous parses, i.e. multiple parse trees generate the same sentence, so those sentences come up more often than the rest. You *could* try to sample in a way that avoids this problem, but doing that carefully gets pretty tricky. If you instead find the "most ambiguous sentence" in your set, you can get a lower bound on the safety of your scheme.
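To make the "most ambiguous sentence" trick concrete, here's a toy sketch. The grammar and sentences are made up for illustration; the point is just that when trees are sampled uniformly but several trees yield the same sentence, that sentence's probability is the number of its parses over the total, and the most ambiguous sentence determines the min-entropy, a lower bound on the scheme's safety.

```python
from collections import Counter
import math

# Hypothetical toy generator: each entry is (parse tree, sentence).
# Two different trees can yield the same sentence (ambiguity).
trees = [
    ("S -> NP VP",  "the dog runs"),
    ("S -> NP VP'", "the dog runs"),   # a second parse of the same sentence
    ("S -> NP VP",  "a cat sleeps"),
    ("S -> NP VP",  "the bird sings"),
]

# Sampling a tree uniformly induces a distribution over sentences
# proportional to the number of trees generating each sentence.
counts = Counter(sentence for _, sentence in trees)
probs = {s: c / len(trees) for s, c in counts.items()}

# The most ambiguous sentence is the most probable one, so it sets
# the min-entropy: a lower bound on the scheme's Shannon entropy.
p_max = max(probs.values())
min_entropy = -math.log2(p_max)
shannon = -sum(p * math.log2(p) for p in probs.values())

print(f"min-entropy: {min_entropy:.3f} bits")      # 1.0 bit here
print(f"Shannon entropy: {shannon:.3f} bits")      # 1.5 bits here
```

In this toy case "the dog runs" has two parses out of four trees, so its probability is 1/2 and the min-entropy is 1 bit, below the Shannon entropy of 1.5 bits, which is exactly the conservative bound the comment describes.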

Um, huh? There are 2^1000 1000-character passwords, not 2^4700. Where is the 4700 coming from?

(Added after realizing the above was super wrong.) Whoops, that's what I get for looking at comments first thing in the morning. log2(26^1000) ≈ 4700. Still, the following bit stands:
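For anyone wanting to check the arithmetic behind the correction:

```python
import math

# 1000-character passwords over a 26-letter alphabet: 26^1000
# possibilities, so log2(26^1000) = 1000 * log2(26) bits, not 1000 bits.
bits = 1000 * math.log2(26)
print(f"{bits:.1f} bits")  # ≈ 4700.4
```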

I'd also like to register that, in my opinion, if it turns out that your comment is wrong and not my original statement, it's really bad manners to have said it so confidently.

(I'm now not sure if you made an error or if I did, though)

Update: I think you're actually totally right. The entropy gives a lower bound for the average, not the average itself. I'll update the post shortly.

To clarify a point in my sibling comment, the concept of "password strength" doesn't cleanly apply to an individual password. It's too contingent on factors that aren't within the password itself. Say I had some way of scoring passwords on their strength, and that this scoring method tells me that "correct horse battery staple" is a great password. But then some guy puts that password in a webcomic read by millions of people - now my password is going to be a lot worse, even though the content of the password didn't change.

Password selection schemes aren't susceptible to this kind of problem, and you can consistently compare the strength of one with the strength of another, using methods like the ones I'm talking about in the OP.

I don't think that's how people normally do it, partly because I think it makes more sense to look for good password *schemes* rather than good individual passwords, and measuring a password's optimal encoding requires already knowing the distribution of passwords. The optimal-encoding story doesn't help you choose a good scheme on its own; you need to add some way of aggregating the code word lengths on top of it. In the example from the OP, you could use the scheme's average code word length, which has you evaluating Shannon entropy again, or its minimum code word length, which brings you back to min-entropy.
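A small sketch of the aggregation point above, using a made-up distribution over passwords: averaging the optimal code word lengths recovers the Shannon entropy, and taking the minimum (the most likely password's code word) recovers the min-entropy.

```python
import math

# Hypothetical scheme: a non-uniform distribution over four passwords.
probs = [0.5, 0.25, 0.125, 0.125]

# Optimal (Shannon) code word length for each password is -log2(p).
lengths = [-math.log2(p) for p in probs]

# Probability-weighted average code word length = Shannon entropy.
avg_len = sum(p * l for p, l in zip(probs, lengths))

# Minimum code word length = -log2(max probability) = min-entropy.
min_len = min(lengths)

print(avg_len)  # 1.75 bits (Shannon entropy)
print(min_len)  # 1.0 bits  (min-entropy)
```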

Yep! I originally had a whole section about this, but cut it because it doesn't actually give you an ordering over schemes unless you also have a distribution over adversary strength, which seems like a big question. If one scheme's min-entropy is higher than another's max-entropy, you know that it's better for any beliefs about adversary strength.
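The min-entropy-beats-max-entropy test can be sketched directly. The two schemes here are invented for illustration; max-entropy is taken in the Hartley sense, the log of the number of possible passwords.

```python
import math

def min_entropy(probs):
    # Worst case: the adversary guesses the most likely password first.
    return -math.log2(max(probs))

def max_entropy(probs):
    # Hartley/max-entropy: log2 of the number of possible passwords.
    return math.log2(len(probs))

# Hypothetical schemes:
scheme_a = [1 / 64] * 64         # uniform over 64 passwords
scheme_b = [0.5, 0.25, 0.25]     # skewed over 3 passwords

# Scheme A's min-entropy (6 bits) exceeds scheme B's max-entropy
# (log2(3) ≈ 1.58 bits), so A dominates B regardless of what you
# believe about adversary strength.
assert min_entropy(scheme_a) > max_entropy(scheme_b)
```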

Hm. On doing exactly as you suggest, I feel confused; it looks to me like the 25-44 cohort has really substantially more deaths than in recent years: [screenshot: weekly deaths among ages 25-44, captured 2022-01-16] I don't know what your threshold for "significance" is, but 103 / 104 weeks spent above the preceding 208 weeks definitely meets my bar.

Am I missing something here?

A thing that feels especially good about this way of thinking about things is that it feels like the kind of problem with straightforward engineering / cryptography-style solutions.
