
dbohdan's Shortform

by dbohdan
13th Jun 2025

This is a special post for quick takes by dbohdan. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

dbohdan · 18d

Does it make sense that LW voting arrows are arranged the way they are? This is how they look right now:

username 12h ▾ 1 ▴ ✘ 5 ✔

My intuition protests, and I think I know why. Upvote and downvote arrows are usually stacked vertically, with the upvote arrow on top. When you translate a vertical layout into left-to-right text, what was above goes on the left and what was below goes on the right. That implies the following horizontal arrangement:

username 12h ▴ 1 ▾ ✔ 5 ✘
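
Put as a toy transformation (my own illustration; it has nothing to do with the actual frontend code): reading a vertical stack top-to-bottom and writing it out left-to-right preserves the order, so the conventional top element, the upvote, lands on the left.

    # Flatten a vertical stack (listed top to bottom) into a row (left to right).
    # Reading order is preserved, so the top item ends up leftmost.
    vertical_stack = ["▴ upvote", "▾ downvote"]  # upvote conventionally on top
    horizontal_row = " ".join(vertical_stack)
    print(horizontal_row)  # ▴ upvote ▾ downvote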

habryka · 18d

Hmm, I think I might be sold that that order would have been better. Unfortunately at this point the switching costs are reasonably high, so probably not worth thinking that much more about?

The reason for this orientation is that the voting buttons used to be oriented horizontally on comments: downvotes going left and upvotes going right. We did this because we wanted to keep comments more dense and hadn't figured out a way to keep the big arrows associated with strong-votes from extending too far into the content area.

In that context, you IMO clearly want the vote buttons facing away from the number, and at least my intuition says that an arrow facing right feels more like an upvote, and an arrow facing left feels more like a downvote. 

When we figured out how to orient the buttons vertically, we preserved the old order of downvote left, upvote right. Hence this order. I think I agree with you that, thinking from first principles, the order you suggest is better.

Ben Pace · 18d

Lol, I thought you did the counterintuitive thing on purpose in order to emphasize downvotes/disagree-votes, because you think people are too averse to using them.

mattmacdermott · 18d

Usually lower numbers go on the left and bigger numbers go on the right (1, 2, 3, …), so it seems reasonable to have it this way.

Alex_Altair · 18d

The arrows used to be left- and right-pointing triangles, and I was incapable of remembering which one was upvoting. I'd just click one, and change it if it turned red.

Measure · 15d

Interesting. I think left/right arrows or triangles would be the most preferred/intuitive of the options for me.

dbohdan · 3mo

Why don’t rationalists win more?

The following list is based on a presentation I gave at a Slate Star Codex meetup in 2018. It is mirrored from a page on my site, where I occasionally add new "see also" links.

Possible factors

  • Thinkers vs. doers: selection effects [1] and a mutually-reinforcing tendency to talk instead of doing [2]
  • Theoretical models spread without selection [2]
  • Inability and unwillingness to cooperate [2]
  • People who are more interested in instrumental rationality leave the community [2]
  • Focusing on the future leads to a lack of immediate plans [2]
  • Pessimism due to a focus on problems [1]
  • Success mostly depends on specific skills, not general rationality [1]
  • Online communities are fundamentally incapable of increasing instrumental rationality ("a chair about jogging") [3]

Sources

  1. "Why Don't Rationalists Win?", Adam Zerner (2015)
  2. "The Craft & The Community—A Post-Mortem & Resurrection", bendini (2017)
  3. "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality", Patri Friedman (2010)

See also

  • "What Is Rationalist Berkeley's Community Culture?", Zvi Mowshowitz (2017)
  • "Slack Club", The Last Rationalist (2019)
  • "Where are All the Successful Rationalists?", Applied Divinity Studies (2020)
  • "Rationality !== Winning", Raemon (2023)
habryka · 3mo

I continue to feel extremely confused by these posts. What the hell are people thinking when they say rationalists are supposed to win more? 

The average income of long-term rationalists seems likely to be 5-10x their most reasonable demographic counterparts, largely driven by a bunch of outlier successes in entrepreneurship (Anthropic alone is around $100B in equity heavily concentrated in rationalists), as well as early investments in crypto. 

Rationalists are probably happier than their demographic counterparts. The core ideas that rationalists identified as important, mostly around the crucial importance of AI, are now obvious to most of the rest of the world, and the single most important issue in history (the trajectory of the development of AGI) is shaped by rationalist priorities vastly vastly above population baseline. 

What the hell do people want? I don't get it. Like, it's overall not clear to me that the whole AI safety and rationality thing is working out, because maybe all the things we are doing are hastening the end of the world and not helping that much. But by naive measures of success, power, and influence, the rationalists are winning beyond what I think anyone was reasonably expecting, and far far above what any similar demographic group appears to be doing.

Rationalists are winning. Maybe it's not enough because existential risk is still real. But man, I sure absolutely am not living a life where I feel like my social circle and extended community is failing to have an effect on the world, or ending up with a disappointing amount of wealth, influence, and power. I am worried we are shaping the trajectory of humanity towards worse things, but man, surely we don't lack markers of conventional success.

dbohdan · 3mo

I continue to feel extremely confused by these posts. What the hell are people thinking when they say rationalists are supposed to win more?

Scott's comment linked in another comment here sums up the expectations at the time. I am not sure if a plain list like this gives a different impression, but note that my sentiment for the talk wasn't that rationalists should win more. Rather, I wanted to say that their existing level of success was probably what you should expect.

The average income of long-term rationalists seems likely to be 5-10x their most reasonable demographic counterparts, largely driven by a bunch of outlier successes in entrepreneurship (Anthropic alone is around $100B in equity heavily concentrated in rationalists), as well as early investments in crypto.

I find myself questioning this in a few ways.

Who do you consider the most reasonable demographic counterparts? Part of what prompted me to give the talk in 2018 was that, where I looked, rationalist and rationalist-adjacent software engineers weren't noticeably more successful than software engineers in general. Professional groups seem highly relevant to this comparison.

If you look at the income statistics in SSC surveys (graph and discussion), you see American-tech levels of income (most respondents are American and in tech), but not 5×–10×. It does depend on how you define "long-term rationalists".

Why evaluate the success of rationalists as a group by an average that includes extreme outliers? This approach can take you to some silly places. For example, if Gwern is right about Musk, all but one American citizen with bipolar disorder could have $0 to their name, and they'd still be worth $44k on average[1]. Can you use this fact to say that bipolar American citizens are doing well as a whole? No, you can't.
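
To make that concrete, here is the footnote's toy arithmetic in a few lines of Python (the figures are the rough estimates from footnote [1]):

    # One extreme outlier dominates the mean while the median stays at zero.
    population = 9_600_000        # ~2.8% of ~342M Americans (NIH estimate)
    outlier_net_worth = 423e9     # one Musk-sized fortune; everyone else holds $0

    mean = outlier_net_worth / population
    median = 0.0                  # more than half the group holds $0

    print(f"mean:   ${mean:,.1f}")      # mean:   $44,062.5
    print(f"median: ${median:,.1f}")    # median: $0.0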

The mistaken expectations that built up for the individual success of rationalists weren't built on the VC model of rare big successes. "We'll make you really high-variance, and some of you will succeed wildly" wasn't how people thought about LW. (Again, I think they were wrong to think this at all. I am just explaining their apparent model.)

Rationalists are probably happier than their demographic counterparts.

It's a tough call. The median life satisfaction score in the 2020 SSC survey (picked as the top search result) is 8 on a 1–10 scale; the median "mood scale" score is 7. But then a third of those who answered the relevant questions say they either have a diagnosis of, or think they may have, depression and anxiety. The most common anxiety scores are 3, 2, then 7. A quarter have seriously considered suicide. My holistic impression is that a lot of rationalists online suffer from depression and anxiety, which are anti-happiness.

The core ideas that rationalists identified as important, mostly around the crucial importance of AI, are now obvious to most of the rest of the world, and the single most important issue in history (the trajectory of the development of AGI) is shaped by rationalist priorities vastly vastly above population baseline.

I agree, some rationalist memes about AI have spread far and wide. Rationalist language like "motte and bailey" has entered the mainstream. It wasn't the case in 2018[2], and I would want discussions about rationalist success today to acknowledge it. This is along the lines of long-term, collective (as opposed to individual) impact that Scott talks about in the comment.

Of course, Eliezer disagrees that the AI part constitutes a success and seems to think that the memes have been co-opted, e.g., AI safety for "brand safety".

What the hell do people want?

I think they want superpowers, and some are (were) surprised rationality didn't give them superpowers. By contrast, you think rationalists are individually quite successful for their demographics, and it's fine. I think rationalists are about as successful as their demographics, and it's fine.


  1. According to Forbes Australia, Elon Musk's net worth is $423 billion. The NIH estimates that around 2.8% of the US population of approximately 342 million is bipolar, giving approximately 9.6 million people. $423,000 million ÷ 9.6 million people ≈ $44,062.

  2. Although rationalist terminology had had a success(?) with "virtue signaling".

habryka · 3mo

Why evaluate the success of rationalists as a group by an average that includes extreme outliers?

Because scope sensitivity is one of the central principles of the Rationalist and EA ethos. Almost everyone I know has pursued high-variance strategies, because their altruistic/global goals are subject to much less steeply diminishing returns.

Imagine trying to evaluate the success of Y-Combinator on the median valuation of a Y-Combinator startup. A completely useless exercise. Similarly, evaluating the income of a bunch of rationalists, who pursue if anything even higher-variance plans, by the median outcome is just as useless.

Said Achmiz · 3mo

Imagine trying to evaluate the success of Y-Combinator on the median valuation of a Y-Combinator startup. A completely useless exercise.

Surely this is dependent on whether you’re evaluating the success of Y-Combinator from the standpoint of Y-Combinator itself, or from the standpoint of “the public” / “humanity” / etc., or from the standpoint of a prospective entrant into YC’s program? It seems to me that you get very different answers in those three cases!

habryka · 3mo

Sure, though I don't think I understand the relevance? In this case, people were pursuing careers with high expected upside and relative risk-neutrality, and indeed in aggregate they succeeded enormously well at that on conventional metrics (I generally think people did so at substantial moral cost, with a lot of money being made by building doomsday machines, but we can set that part aside for now).

It's also not the case that this resulted in a lot of poverty: even people for whom the high-variance strategies didn't succeed usually ended up with high-paying software developer jobs. Overall, the distribution of strategies seems to me to have been well-chosen, with a median a few tens of thousands of dollars lower than that of people who just chose stable software engineering careers, and an average many millions higher, which makes sense given that people are trying to solve world-scale problems.[1]

  1. Again, I don't really endorse many of the strategies that led to this conventional success, but I feel like that is a different conversation.
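
As a toy model of the income-distribution claim above (all numbers invented for illustration, not survey data): a high-variance strategy mix can easily have a somewhat lower median than the stable-career baseline while its mean is millions higher.

    import statistics

    # 1,000 people: most fall back to ordinary software salaries,
    # a few hit outlier successes. All numbers are made up.
    stable = [200_000] * 1_000
    high_variance = [150_000] * 950 + [1_000_000] * 45 + [1_000_000_000] * 5

    for name, incomes in (("stable", stable), ("high-variance", high_variance)):
        print(f"{name:>13}: median {statistics.median(incomes):>9,.0f}, "
              f"mean {statistics.mean(incomes):>13,.0f}")
    #        stable: median   200,000, mean       200,000
    # high-variance: median   150,000, mean     5,187,500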

Said Achmiz · 3mo

Well, the relevance is just that from the standpoint of an individual prospective X, the expectation of the value of becoming an X is… not irrelevant, certainly, but also not the only concern or even the main concern; rather, one would like to know the distribution of possible values of becoming an X (and the median outcome is an important summary statistic of that distribution). This is true for most X.

So if I am a startup founder and considering entry into YC’s accelerator program, I will definitely want to judge this option on the basis of the median valuation of a YC startup, not the mean or the maximum or anything of that sort.

Similarly, if I am considering whether “being a rationalist” (or “joining the rationalist community” or “following the rationalists’ prescriptions for life” etc.), I will certainly judge this option on the basis of the median outcome. (Of course I will condition on my own demographic characteristics, and any other personal factors that I think may apply, but… not too much; extreme Inside View is not helpful here.)

habryka · 3mo

Hopefully not the median! That seems kind of insane to me. I agree you will have some preference distribution over outcomes, but clearly "optimizing for the median" is a terrible decision-making process.

Said Achmiz · 3mo

clearly “optimizing for the median” is a terrible decision-making process

Clearly, but that’s also not what I suggested, either prescriptively or descriptively. “An important axis of evaluation” is not the same thing as “the optimization target”.

My point is simple. You said that evaluating based on the median outcome is a “completely useless exercise”. And I am saying: no, it’s not only not useless, but in fact it’s more useful than evaluating based on the mean/expectation (and much more useful than evaluating only based on the mean/expectation), if you are the individual agent who is considering whether to do a thing.

(Optimizing for the mean is, of course, an even more terrible decision-making process. You presumably know this very well, on account of your familiarity with the FTX fiasco.)

EDIT: A rate limit on my comments?? What the hell is this?! (And it’s not listed on the moderation log page, either!)

habryka · 3mo

I will definitely want to judge this option on the basis of the median valuation of a YC startup, not the mean or the maximum or anything of that sort.

This sure sounds to me like you said you would use it at the very least as the primary evaluation metric. I think my reading here is reasonable, but fine if you meant something else. I agree the median seems very reasonable as one thing to think about among other things.

EDIT: A rate limit on my comments?? What the hell is this?! (And it’s not listed on the moderation log page, either!)

Yep, we have downvote-based rate-limits. I think they are reasonable, though not perfect (my guess is that in this case the limit is not ideal, and I expect your comments to get upvoted more and then the rate limit to disappear).

I would like them to be listed on the moderation log page, but haven't gotten around to it. You (or anyone else) would be welcome to make a PR for that, and we will probably also get around to it at some point.

Elizabeth · 3mo

Part of what prompted me to give the talk in 2018 was that, where I looked, rationalist and rationalist-adjacent software engineers weren't noticeably more successful than software engineers in general.

How are you selecting your control group? If there's a certain bar of success to be in the same room with you, then of course the groups don't look different. My impression is that rationalists disproportionately work at tier 1 or 2 companies. And when they don't, it's more likely to be a deliberate choice.

dbohdan · 3mo

I was comparing software engineers I knew who were and weren't engaged with rationalist writing and activities. I don't think they were strongly selected for income level or career success. The ones I met through college were filtered by the fact that they had entered that college.

My impression is that rationalists disproportionately work at tier 1 or 2 companies. And when they don't, it's more likely to be a deliberate choice.

It's possible I underestimate how successful the average rationalist programmer is. There may also be regional variation. For example, in the US and especially around American startup hubs, the advantage may be more pronounced than it was locally for me.

Ben Pace · 3mo

While I think the rationalist folk are outperforming most other groups and individuals, I will sign on to Scott's proposal to drop the slogan. "Rationalists should win" was developed in the context of a decision theory argument with philosophers who in my opinion had quite insane beliefs and thought that it was rational to choose to lose because of vague aesthetic reasons, and was not intended to connote fully general life advice. Of course I regularly lose games that I play. (Where by "games" I also refer to real-life situations well-modeled by game theory.)

habryka · 3mo

Sure, but I feel like the actual conversation in all of these posts is about whether "the rationalist philosophy works as a tool for winning", and at least measured by conventional success metrics, the answer is "yes, overwhelmingly so, as far as I can tell". I agree there is an annoying word-game here that people play with a specific phrase that was intended to convey something else, but the basic question seems like one worth asking for every community one is part of, or philosophy one adopts.

Garrett Baker · 3mo

My guess is the people asking such questions really mean "why don't I win more, despite being a rationalist", and their criticisms make much more sense as facts about them, or mistakes they've made, which they blame for holding them back from winning.

[anonymous] · 3mo

"Do as well as Einstein?" Jeffreyssai said, incredulously.  "Just as well as Einstein?  Albert Einstein was a great scientist of his era, but that was his era, not this one!  Einstein did not comprehend the Bayesian methods; he lived before the cognitive biases were discovered; he had no scientific grasp of his own thought processes.  Einstein spoke nonsense of an impersonal God—which tells you how well he understood the rhythm of reason, to discard it outside his own field! He was too caught up in the drama of rejecting his era's quantum mechanics to actually fix it.  And while I grant that Einstein reasoned cleanly in the matter of General Relativity—barring that matter of the cosmological constant—he took ten years to do it.  Too slow!"

"Too slow?" repeated Taji incredulously.

"Too slow!  If Einstein were in this classroom now, rather than Earth of the negative first century, I would rap his knuckles!  You will not try to do as well as Einstein!  You will aspire to do BETTER than Einstein or you may as well not bother!"

(Yudkowsky, 2008, Class Project)


What the hell do people want? I don't get it.

Probably something along the lines of what LW was meant to aspire to above.

dirk · 1mo

See, when you put it like that, I think the reason rationalists don't win as much as was expected is quite obvious: claims about the power of rationality were significant overpromises from the start.

[anonymous] · 3mo

I appreciate how many sources you've cited. Also worth mentioning is Extreme Rationality: It's Not That Great, by Scott all the way back in 2009. It feels a bit dated, given the references to akrasia (one of Scott's old obsessions of sorts, before LW recognized it was not a useful way of framing the problems). However, it serves as an explicit prediction of sorts by one of the pillars of this community, who basically did not expect instrumental rationality to result in rationalists "winning more" in the conventional sense. I believe time has mostly proven him right.

I also deeply appreciate Scott's comment here in response to a 2018 post by Sailor Vulcan. Relevant parts:

"Rationalists should win" was originally a suggestion for how to think about decision theory; if one agent predictably ends up with more utility than another, its choice is more "rational".

But this got caught up in excitement around "instrumental rationality" - the idea that the "epistemic rationality" skills of figuring out what was true, were only the handmaiden to a much more exciting skill of succeeding in the world. The community redirected itself to figuring out how to succeed in the world, ie became a self-help group.

I understand the logic. If you are good at knowing what is true, then you can be good at knowing what is true about the best thing to do in a certain situation, which means you can be more successful than other people. I can't deny this makes sense. I can just point out that it doesn't resemble reality. [...]

[...] People in the community are pushing a thousand different kinds of woo now, in exactly the way "Schools Proliferating Without Evidence" condemned. This is not the fault of their individual irrationality. My guess is that pushing woo is an almost inevitable consequence of taking self-help seriously.

I think my complaint is: once you become a self-help community, you start developing the sorts of epistemic norms that help you be a self-help community, and you start attracting the sort of people who are attracted to self-help communities. And then, if ten years later, someone says "Hey, are we sure we shouldn't go back to being pure truth-seekers?", it's going to be a very different community that discusses the answer to that question.

Jacob Falkovich's classic post on "Is Rationalist Self-Improvement Real?" is also a must-read here, alongside Scott's excellent response comment.

dbohdan · 3mo

Thanks a lot! It's a good comment by Scott on Sailor Vulcan's post. I have added it and your other links to the page's "see also" on my site.

I like this paragraph in particular. It captures the tension between the pursuit of epistemic and instrumental rationality:

I think my complaint is: once you become a self-help community, you start developing the sorts of epistemic norms that help you be a self-help community, and you start attracting the sort of people who are attracted to self-help communities. And then, if ten years later, someone says “Hey, are we sure we shouldn’t go back to being pure truth-seekers?”, it’s going to be a very different community that discusses the answer to that question.

I think we have an example of the first part because it has happened with the postrationalists. As a group, postrationalists are influenced by LW but embrace weaker epistemic norms for what they consider practical reasons. A major theme in "a postrationalist syllabus" is superficially irrational beliefs and behaviors that turn out to be effective, which (generalizing) postrationalists try to harness. This exacerbates the problem of schools proliferating without evidence, as reflected in this joke.

Viliam · 3mo

I just imagined a possible April Fools article that I am too lazy to actually write, but the idea is that it would announce a new (fictional) feature of the Less Wrong website -- you can write posts and comments using two different colors: everything written in black is supposed to be epistemically rational, and everything written in blue is supposed to be instrumentally rational. The new rule is that you should upvote black texts if they are true, and downvote them if they are false, but you should upvote blue texts if they are useful to believe, and downvote them if they are harmful to believe. So it is okay to write something like "Jesus loves you and has a personal plan for you" as long as you write it in blue font. By using both colors, we can achieve epistemic and instrumental rationality at the same time. (There is a new option in settings for the post-rationalists that sets blue as their default font color.)

Viliam · 3mo

Thank you, this is great!

I would add (or maybe it is already mentioned in some of the linked articles) that luck and long-term traits (some of them immutable) seem to play a larger role in winning than most people here imagine. Also, success seems to require not doing one thing right but avoiding many possible mistakes (some avoided by skill, others by luck), so even dramatically improving on one of those things may have little overall effect.

The opposite of this is what some people call "one weird trick" mentality -- the belief that there is one thing so important that if you get it right, it will dramatically improve your entire life, and that all you need to get it right is to read one more important idea (that the rationalist community could provide).

As an example of getting many things right, imagine that your goal is to become rich. One possible way (let's call it "the path of the employee") is to do all of the following:

  • become great at some rare and useful skill (such as programming)
  • learn to negotiate to get a high salary
  • don't mindlessly increase your spending proportionally to your income
  • put the extra money in passively managed index funds

Notice how failing at any one of these means not getting rich, even if you get the remaining three right. Without skills, there is nothing to build on (from this perspective, even bullshitting is a skill). Without negotiation, your skills will only make your boss rich, not you. If you don't save money, the greater salary can afford you better clothes or a nicer home, but as soon as you lose the job, you lose everything; this might actually increase your stress and make you do more overtime, potentially reducing the quality of your life. Finally, if you invest your savings in some scam, you will lose them all and have to start from zero again, only older. So even if you had a magic wand that fixed one of these things instantly, there would still be a long way from there to actually getting rich.
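
A toy way to see how conjunctive this is (the per-step probabilities are invented for illustration): even decent odds at every step multiply out to a modest overall chance, and a magic wand that fixes one step only roughly doubles it.

    # Getting rich on "the path of the employee" requires all four steps.
    # Per-step success probabilities are made up for illustration.
    steps = {
        "rare skill":     0.5,
        "negotiation":    0.6,
        "saving":         0.7,
        "sane investing": 0.8,
    }

    p_all = 1.0
    for p in steps.values():
        p_all *= p
    print(f"chance of getting rich: {p_all:.0%}")  # 17%

    # Magic wand: "rare skill" guaranteed; the other three steps remain.
    p_wand = p_all / steps["rare skill"]
    print(f"with skill guaranteed:  {p_wand:.0%}")  # 34%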

At any step, you can be sabotaged in many ways. You may learn a technology that goes out of fashion. It may turn out that even if you learn programming, you don't have what it takes to become great at it. You might be caught in a loop where poverty requires you to work two low-paying jobs, so you can never find the time to learn programming. Maybe you are so unattractive or autistic that no one wants to hire you and pay you a high salary, regardless of your skills. You may have a health problem that prevents you from working full-time. Even if you want to save your money, your partner may waste it, and the domestic arguments may leave you too exhausted. Perhaps you have poor relatives who expect you to help them financially, and refusing them would be too costly for you socially. Maybe passively managed index funds do not exist in your country, or are highly taxed, or are even illegal.

Many things that need to be done right, and many possible accidents that can ruin it all. And even if you succeed at all of this, this is just money. What about health? Mental health? Relationships? Meaning in life? What is the point of getting rich if in the middle of the way you die of something preventable, or get depressed and kill yourself? Winning kinda requires having it all, otherwise the grass will feel greener on the other side.

And as a benchmark for your progress, you will probably look at people who started at a better position, and proceeded faster than you thanks to the Matthew effect. I guess this is another important detail: even if rationality does not work for you, that doesn't automatically imply that something else would have worked better. Maybe the cards were simply stacked against you.

dbohdan · 2mo

i learned something about agency when, on my second date with my now-girlfriend, i mentioned feeling cold and she about-faced into the nearest hotel, said she left a scarf in a room last week, and handed me the nicest one out of the hotel’s lost & found drawer

— @_brentbaum, tweet (2025-05-15)

you can just do things?

— @meansinfinity

not to burst your bubble but isn't this kinda stealing?

— @QiaochuYuan

What do people mean when they say "agency" and "you can just do things"? I get a sense it's two things, and the terms "agency" and "you can just do things" conflate them. The first is "you can DIY a solution to your problem; you don't need permission and professional expertise unless you actually do", and the second is "you can defect against cooperators, lol".

More than psychological agency, the first seems to correspond to disagreeableness. The second I expect to correlate with the dark triad. You can call it the antisocial version of "agency" and "you can just do things".

Viliam · 2mo

I think this is one of the big painful points in our culture. There seems to be a positive correlation between agency and crime (and generally things that are in the same direction as crime, but with smaller magnitude, so we don't call them crime), and people kinda notice that, which kinda makes the lack of agency a virtue signal.

The reason is that as a criminal, you have to be agenty. No one is going to steal money for you; that is, not in a way that lands the money in your pockets. (Technically, there are also some non-agenty people involved in crime; I don't know the English idiom for them. I mean the kind of stupid or desperate person that you just tell "move this object from place A to place B" or "sign this legal document", and they do it for a few bucks without asking questions, and when shit hits the fan, they end up in jail instead of you.)

And this is quite unfortunate, because it seems to me that many people notice the connection, and start treating agency with suspicion. And not without good reason! For example, if a random person approaches you out of the blue, most likely it is some kind of scammer.

As a consequence, agenty people have to overcome not just their natural inertia, but also this mistrust.

This probably depends a lot on specific culture and subculture. In ex-socialist countries, people are probably more suspicious of agency, because during socialism agency was borderline illegal (you are supposed to do what the system tells you to do, not introduce chaos). If you hang out with entrepreneurs or wannabe entrepreneurs, agency is probably valued highly (but I would also suspect scams to be more frequent).

metachirality · 2mo

Here's a riddle: A woman falls in love with a man at her mother's funeral, but forgets to get contact info from him and can't get it from any of her acquaintances. How could she find him again? The answer is to kill her father in hopes that the man would come to the funeral.

It reminds me of security mindset (https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html), in which thinking like an attacker exposes leaky abstractions and unfounded assumptions, something that is also characteristic of being agentic and "just doing things."

dbohdan · 12d

I was looking for a term for my diet that indicated adherence and came up with "intermittent intermittent fasting". ("Time-restricted time-restricted eating" doesn't have the same ring.)

dbohdan · 2mo

GBDE, or Geom-Based Disk Encryption, has specific features for high-security environments where protecting the user is just as important as concealing the data. In addition to a cryptographic key provided by the user, GBDE uses keys stored in particular sectors on the hard drive. If either key is unavailable, the partition can’t be decrypted. Why is this important? If a secure data center (say, in an embassy) comes under attack, the operator might have a moment or two to destroy the keys on the hard drive and render the data unrecoverable. If the bad guys have a gun to my head and tell me to “enter the passphrase or else,” I want the disk system to say, The passphrase is correct, but the keys have been destroyed. I don’t want a generic error saying, Cannot decrypt disk. In the first situation, I still have value as a blubbering hostage; in the latter, either I’m dead or the attackers get unpleasantly creative.

Absolute FreeBSD, 3rd Edition, Michael W. Lucas (2018), chapter 23
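
The design property Lucas describes, a "keys destroyed" failure mode distinct from "wrong passphrase", can be sketched abstractly. This is only a toy illustration of the idea, not GBDE's actual on-disk format, key schedule, or API (and a real system would not store a bare passphrase hash):

    import hashlib
    import secrets

    DESTROYED = b"\x00" * 32

    def make_disk(passphrase: str) -> dict:
        # Decryption needs both the user's passphrase and key material
        # stored in reserved sectors on the disk.
        return {
            "key_sector": secrets.token_bytes(32),
            "pass_hash": hashlib.sha256(passphrase.encode()).digest(),
        }

    def unlock(disk: dict, passphrase: str) -> str:
        if hashlib.sha256(passphrase.encode()).digest() != disk["pass_hash"]:
            return "cannot decrypt disk"  # generic error
        if disk["key_sector"] == DESTROYED:
            # The message Lucas wants the hostage to be able to point to.
            return "the passphrase is correct, but the keys have been destroyed"
        return "disk unlocked"

    disk = make_disk("hunter2")
    disk["key_sector"] = DESTROYED  # operator zeroes the key sectors under attack
    print(unlock(disk, "hunter2"))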
