Significant planecrash spoilers up through Book 5, crisis of faith.

Written at the suggestion of John Wentworth.

Epistemic status: I have no personal experience with project management; here I'm just presenting the ideas on the topic that I gathered from reading planecrash.

Introduction

The overarching dath ilani principle of project management, as far as I can discern, is that management means programming organizations. This is a foreseeably difficult domain to program in -- people are significantly less well-behaved objects than computers are. The challenge of project management is engineering organizations that actually optimize for expected value (EV) despite that.

In dath ilan, if you're really good at project management, you embody one of the central archetypes of social success: the mad entrepreneur.[1] Mad entrepreneurs are logistical geniuses who are immune to status distortions and flinches, who confidently ask for enormous sums of venture capital, and who then scale up to enormous projects with all that money. At the end of the day, their projects actually ship!

Below are scattered dath ilani insights into the art of project management.

Have One Single Responsible Person for Every Fathomable and Unfathomable Project State

Keltham will now, striding back and forth and rather widely gesturing, hold forth upon the central principle of all dath ilani project management, the ability to identify who is responsible for something.  If there is not one person responsible for something, it means nobody is responsible for it.  This is the proverb of dath ilani management.  Are three people responsible for something?  Maybe all three think somebody else was supposed to actually do it.

In companies large enough that they need regulations, every regulation has an owner.  There is one person who is responsible for that regulation and who supposedly thinks it is a good idea and who could nope the regulation if it stopped making sense.  If there's somebody who says, 'Well, I couldn't do the obviously correct thing there, the regulation said otherwise', then, if that's actually true, you can identify the one single person who owned that regulation and they are responsible for the output.

Sane people writing rules like those, for whose effects they can be held accountable, write the ability for the person being regulated to throw an exception which gets caught by an exception handler if a regulation's output seems to obviously not make sane sense over a particular event.  Any time somebody has to literally break the rules to do a saner thing, that represents an absolute failure of organizational design.  There should be explicit exceptions built in and procedures for them.

Exceptions, being explicit, get logged.  They get reviewed.  If all your bureaucrats are repeatedly marking that a particular rule seems to be producing nonsensical decisions, it gets noticed.  The one single identifiable person who has ownership for that rule gets notified, because they have eyes on that, and then they have the ability to optimize over it, like by modifying that rule.  If they can't modify the rule, they don't have ownership of it and somebody else is the real owner and this person is one of their subordinates whose job it is to serve as the other person's eyes on the rule.

One simple way to achieve this property in an otherwise Earth-typical organization is to (1) have every employee be responsible for their domain and (2) have an additional manager who's responsible for everything else that might surprise you coming down the pipe.
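
If you squint, this pattern is just a total function from project states to owners, with a catch-all default. A minimal Python sketch (all names here are hypothetical, purely for illustration):

```python
# (1) Every employee owns their own domain...
owners = {
    "payroll": "alice",
    "deploys": "bob",
    "vendor_contracts": "carol",
}

# (2) ...and one manager owns every state nobody else claimed,
# including the unfathomable ones.
DEFAULT_OWNER = "dana_the_surprise_manager"

def responsible_person(project_state: str) -> str:
    """Return exactly one owner for any state, foreseen or not."""
    return owners.get(project_state, DEFAULT_OWNER)

assert responsible_person("payroll") == "alice"
assert responsible_person("office_is_on_fire") == DEFAULT_OWNER
```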

Bureaucratic Corrigibility and Actually Optimizing for EV

Organizations that are too large for people to comfortably run informally have to rely on formal bylaws instead. Commonly, these large organizations ossify into unwieldy bureaucracies, in which people get ahead by Goodharting on the organization's regulations, and it's tacitly understood that gaming the system is what everyone who isn't clueless actually does, day in and day out.

On the other hand, these unwieldy bureaucracies have some resistance to litigation, because they insist that everyone's behavior always conform to what's explicitly spelled out in the employee handbook, and they require all autonomous costly behavior to be written up in paperwork and made bureaucratically legible.

Lean start-ups that "move fast and break things" aren't corrigible to a handbook of procedures in this way. But this frees up their employees to act autonomously in the best interests of the company, even when those actions have poor optics or aren't legible to higher-ups. If you have a bunch of smart, value-aligned employees, it's often wiser to let them loose to do their thing, with a standing presumption against micromanagement. Just incentivize good outcomes and disincentivize bad ones; don't centrally plan behavioral protocols ahead of time.

Exception Handling

And when you do regulate agents in your organization, those regulations should all include an out that employees can exercise when their better judgement weighs against following even the spirit of the regulation. It's amazing how little this guideline is heeded in the world! If you wanted to centrally plan an entire large organization, you'd basically need to foresee, ahead of time, every class of eventuality your on-the-ground employees might encounter. But what about Cromwell's rule -- what if you're wrong about some future eventualities? There are options between anarchy and Communist-Party-style central planning; there's no need to choose solely between those two extremes.
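
As a sketch of what such a built-in out might look like, taking the text's exception-handling metaphor literally (the rule, the threshold, and all the names are made up for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rule-exceptions")

class RegulationOverride(Exception):
    """Thrown by the regulated person when the rule's output is clearly insane."""

class Regulation:
    def __init__(self, name: str, owner: str, rule):
        self.name, self.owner, self.rule = name, owner, rule
        self.override_count = 0  # reviewed by the owner, who can modify the rule

    def decide(self, event: dict) -> str:
        try:
            return self.rule(event)
        except RegulationOverride as reason:
            # Exceptions, being explicit, get logged and reviewed -- routed
            # to the one person with the power to change the rule.
            self.override_count += 1
            log.info("override of %s (owner: %s): %s", self.name, self.owner, reason)
            return "escalated to owner"

def purchase_cap(event: dict) -> str:
    """Toy rule: deny purchases over $100 unless the buyer throws an exception."""
    if event["cost"] > 100:
        if event.get("override_reason"):
            raise RegulationOverride(event["override_reason"])
        return "denied"
    return "approved"

reg = Regulation("purchase_cap", owner="alice", rule=purchase_cap)
print(reg.decide({"cost": 500, "override_reason": "the building is on fire"}))
```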

Infosec

Cheliax doesn't think about informational security the same way dath ilan does.  They don't have an explicit concept of information theory and probabilistic entanglement and improbable observations narrowing down probable worlds.  If a top-secret Civilization project requests two hundred mice, and most other projects don't do that, then the mouse order is also obviously top secret, period, your job isn't to figure out what an adversary could deduce from a piece of unusual information but to deny your adversaries as much information as possible.  Even if you're at +3sd they may perhaps be at +5sd, and you won't see all the connections that they'll see.

Dath ilani children's fiction is replete with cautionary tales of fools who assumed that some fact could not possibly be deduced from the scanty, unreliable information that some slightly less foolish person possessed.  Adults, of course, read about more sophisticated and plausible errors than that.

Not that every dath ilani has the deep information-theoretic security mindset either, to be clear.  Any real information-theoretic-security expert of dath ilan - as opposed to some random punk kid on an airplane - would've told Keltham, during the Nidal attack on the villa, that as soon as his life was no longer in immediate danger, he needed to get the shit out of those Obviously Strange Clothes before he went into the villa and anyone project-uncleared got a close or extended look at him.  No, not because an ideal agent could use a mere glance at the zipper to deduce precise manufacturing technology not currently known to Golarion.  Because the clothes are incredibly abnormal and therefore a highly improbable rare signal and therefore represent a potentially massive update for any adversary who is smarter than you and making unknown deductions; seriously what the shit is Keltham thinking.

If a top-secret Chelish project asks for a budget estimate on two hundred mice, the project manager will think about whether they believe anything top-secret seems obviously deducible from the mouse request; and if there's an obvious way to deduce something genuinely ultra-top-secret, they'll mark the mouse order as being also genuinely ultra-top-secret.  Otherwise, it will soon be widely rumored within the Inner Ring - this being something that would make dath ilani informational security experts spit out their drinks - that a top-secret Chelish project ordered two hundred mice, no, nobody's allowed to ask for what.  When Abrogail Thrune issues an order, it's put forth under Crown authority so everybody knows how important it is and what happens to them if they fuck up; rather than being issued anonymously with a quantitative priority that isn't any higher than it has to be to get that job done, rounded up to make the exact quantity less revealing.
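
One concrete version of "rounded up to make the exact quantity less revealing" (my extrapolation, not a procedure from the text) is to coarsen every externally visible number so that it can only take a few distinct values:

```python
# Coarsen visible request sizes to powers of two, so an observer learns
# only which bucket an order falls in, not the exact number.

import math

def coarsened_quantity(n: int) -> int:
    """Round a request up to the next power of two."""
    return 1 << math.ceil(math.log2(n)) if n > 1 else 1

# A request for exactly 200 mice is a rare, revealing signal; a request
# for 256 looks like every other order in the 129-256 bucket.
print(coarsened_quantity(200))  # 256
```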

Imagine your counterfactual self, who exists in another world mostly like yours, but who knows some dangerous secret that you don't. When people ask him about his dangerous knowledge, he can (if he isn't lying) either stay mum or Glomarize. For those two responses not to leak information about what he knows to his interrogators, it needs to be the case that you-in-this-world, where you don't know the dangerous secret, also stay mum or Glomarize. Externally observable behaviors must not be correlated with internal hidden contents.

When you build a great big secret project, that secret project needs to look just like a mundane project across all of its informational interfaces. Worlds running mundane projects can acausally coordinate with worlds running secret projects by standardizing their publicly visible interfaces now. You then gain the option of surreptitiously transitioning (or not transitioning) to secretly running a sensitive project later on.
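
A toy illustration of the invariant (hypothetical code, but the point is exact): the publicly visible response must be computable from the query alone, so that it is byte-identical in the world with the secret and the world without.

```python
SECRET = None  # in the counterfactual world, this is some dangerous fact

def respond(query: str) -> str:
    # Deliberately ignores SECRET: the observable output carries zero
    # bits of mutual information about the hidden contents.
    return "I can neither confirm nor deny."

# Every project, sensitive or not, exposes the same interface.
print(respond("Did your project order two hundred mice?"))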

"Reality Doesn't Care What You Can Afford"

"Taking weeks or months to finish updating would lack dignity... that word doesn't have any Taldane translation but maybe 'pride', 'dignity', the part of your self-image where you think you're not completely unskilled at Law-aspiring thought and you want to live up to the expectations you have of yourself.  I'll be aiming for tomorrow.  Maybe day after tomorrow since I also have to orient to Golarion as it appears on this layer of reality."  Part of Keltham is tired, now, and would just as soon speedrun whatever part of the game this is.

"If you don't wake up the day after tomorrow all better are you going to have a fit about that?"

"That, too, would lack dignity.  If I'm still not functional the day after tomorrow I will accept that situation, assess that situation, and figure out what to do with that situation."

"Well, I'm not going to try to talk you into taking longer than you need, but I don't think your help's going to be that much less valuable to us in a month compared to the day after tomorrow."

"I would not assume that to be the case.  Cheliax is making an assembly line - outside-item-assisted way of rapidly producing - intelligence headbands, currently at the +4 level, because that is how they turn spellsilver into having even more and better wizards.  If they can master enough Law to get started on the invention of science and technology in general, ways of understanding and manipulating the world, then, no, you may not really have a month."

"I do not think, at this point, that you move quietly for fear of provoking a countermove.  I think you call together every Lawful or Good country in the world, have them send all of their brightest people here or to a facility located in neutral ground - possibly inside the Ostenso nonintervention zone, if the god who originally set that up can force Cheliax to agree to that.  Intelligence 19 teenagers wearing +6 intelligence headbands, brilliant accomplished researchers who are not past their useful working lifespans."

"Cheliax didn't allocate +6 intelligence headbands, I think, because that level of resource commitment would've tipped me off that I had the political pull to demand - scries on other countries, Greater Teleports - as I eventually did.  Though, to be clear, that was mostly me being stupid.  What I should've done shortly after the supposed godwar was demand that Cheliax fill a bag of holding with the unfiltered contents of a Chelish library.  I mean, I did not know, fundamentally, that I was facing a Conspiracy on a level where it would be defeated by a test like that, but - it would have ruled out some Conspiracies and that is what I should have -"

"Anyways.  I do not need to be fully functional to do politics, 'politics'.  That does not require my full intellect the same way as teaching epistemology, Law-inspired skill of figuring out what's true.  If you're not the one making decisions like these, I should talk to whoever is, and get things rolling on the criticalpath, today.  Uh, criticalpath, the path through the graph, connected lines, with the greatest minimum time to complete, such that the time to complete the criticalpath is the time to complete the whole project."

"Until we've learned how to make spellsilver cheaply, we cannot afford to give anyone a +6 intelligence headband. We certainly can ask countries to send talented researchers here to learn from Ione, which is what I just explained we have done, though none of them have native intelligence 19, obviously."

"This is not really a situation where you get to scrape up whatever resources you can 'afford' and hope you win with those.  'Reality doesn't care what you afford.'"  (It rhymes and scans perfectly in Baseline, in the way of Central Cheating Poetry.)

I think this is one of the more important lessons out of planecrash for EA projects, and one the EA community already embodies pretty well. Ordinary people often use cached heuristics about not spending money unnecessarily -- e.g., anchoring on a paradigmatic example in some reference class, and then refusing any negatively surprising expenditures. A heuristic like this has plenty of problems in its own right. When time is scarce, though, it's extra important to be willing, on reflection, to do socially unorthodox things in order to save time and produce more research.

I've found it useful to explicitly name a price you'd be willing to pay for various things. Generate some (asspull) dollar value to put on your time, and then use this estimate to make rough, order-of-magnitude calls about whether some time-saving tool is worth buying. Similarly, organizations ought to have at least an asspull list of numbers (which can be refined later, once it exists at all) expressing how much they'd pay in dollars to accomplish their various goals. Say, for example, that a 1% chance of saving the entirety of the posthuman future is worth a trillion dollars to your organization. You'd want everyone in your organization who can spend org money to know this: anyone can then independently estimate the effect of a proposed course of action on doom, and multiply out very roughly how much the organization stands to gain or lose by paying for it. This beats going off of intuition alone, because (1) you can always still fall back on your gut, and (2) 10 minutes of optimization is infinitely better than no minutes of optimization.
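
For concreteness, here's that multiplication spelled out, with every number an asspull placeholder exactly as the paragraph prescribes:

```python
# "A 1% chance of saving the future is worth $1T" implies a full unit of
# P(win) is worth $100T to the org.
DOLLARS_PER_UNIT_PWIN = 1e12 / 0.01

def worth_buying(cost_dollars: float, delta_p_win: float) -> bool:
    """Crude EV check: does the estimated doom reduction pay for the price tag?"""
    return delta_p_win * DOLLARS_PER_UNIT_PWIN > cost_dollars

# A $5M intervention you guess shifts P(win) by one-in-ten-million:
print(worth_buying(5e6, 1e-7))  # True -- worth ~$10M by the org's lights
```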

Epistemological Distortions from Status Games

One of the big problems with academia is that the esteem of your peers is the key to career success. In fields with clear datapoints from reality -- with proofs that either work or don't, or experiments that come back with a bunch of measurements attached -- this isn't so bad. In those fields, reality can be the ultimate arbiter of what counts as high-status theorizing, and can keep status incentives from diverging too far from accurate results. In fields without clear datapoints from reality, it's much harder to train researchers, i.e., good generative models. In those fields, what you get back is some broad distribution over the possible quality of your work. Worse, that distribution gets skewed by the status games people play with each other: if some generated results seem unacceptably Green-flavored, the Blue Tribe being at all more predisposed to carefully examine those results will systematically bias the whole academic system in the Blue direction, predictably away from the truth. Where reality isn't clearly weighing in, social status incentives predominate.

Widespread Internal Betting as (1) a Remedy for Status Distortions, and (2) a Way to Actually Aggregate Everyone's Models

"Neither Osirian nor Taldane really have a word that means Law-aspiring thought... the native concept of 'science' doesn't include key aspects like prediction markets."

One way to tackle this problem is to get people's skin in the game: have a widespread internal culture of betting on outcomes. Betting is virtuous! Betting on plagues, war deaths, rockets blowing up, whatever, is virtuous and praiseworthy, if you're trying to cultivate a culture of epistemic rationalists. It's far better than sniping from the sidelines, never keeping track of when you get it wrong and never being correlated with which possible world you're actually in. The commentariat has no skin in the game, no incentive to live in its reality. The commentariat does have an incentive to appear sophisticated, erudite, and hypermoral, thereby accruing status in all possible worlds, regardless of which specific world it's in fact in.

The ambitious form of this guideline is to run internal prediction markets, open to as many bettors as possible. But the 80/20 approach is just to ever bet with each other at all, and to keep some track of who's bet on what. By pushing myself to bet on my beliefs with people around Lightcone, I've personally noticed which beliefs I'm not readily willing to stake money on. It's easy to skirt by with quite low-res models when all you ever do is comment from the peanut gallery; this feels analogous to the gap between being able to follow or verify someone else's views when you hear them, and being able to generate those views yourself, unprompted.
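
The no-market 80/20 version can be as crude as a shared list plus a scoring rule. A sketch (names and bets invented):

```python
bets = [
    # (bettor, claim, stated_probability, resolved_outcome)
    ("alice", "rocket launch slips past Q3", 0.80, True),
    ("bob",   "rocket launch slips past Q3", 0.30, True),
]

def brier(p: float, outcome: bool) -> float:
    """Brier score: 0 is a perfect forecast, 1 is maximally wrong."""
    return (p - float(outcome)) ** 2

for bettor, claim, p, outcome in bets:
    print(f"{bettor}: {claim!r} @ {p:.0%} -> Brier {brier(p, outcome):.2f}")
```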

Fast-Prototype

All right, you primitive screwheads, listen up!  This is now day three of trying to build a spectroscope.  Back in dath ilan, any startup that failed to build a prototype of anything in three days would be shut down by its funders as clearly hopeless.

Finally, projects should abhor perfectionist engineering. It's important, for 80/20 reasons, to ever build some things at all: you learn most of what you'd learn from building the thing perfectly by building it once, poorly.

A funny extension of this guideline applies to math: it's much better to have people quickly put together dubious intuitive arguments based on analogies, and then quickly verify or falsify their conclusions, than to insist on careful proofs for every claim. In the latter world, you only ever fully use some tiny sliver of your accumulated models. If you rigorously prove all of your claims before presenting them, you're optimizing for your ratio of true-to-false claims instead of your number of insights.

Several of the new students do in fact know calculus, and that seems like the obvious tool to use on this problem?

Ah, yes.  Golarion's notion of 'calculus'.  Keltham has actually looked into it, now.

It looked like one of several boring, safe versions of 'dubious-infinitary-mathematics' where you do all the reasoning steps more slowly and they only give you correct answers.

Dath ilani children eventually get prompted into inventing versions like that, after they've had a few years of fun taking leaps onto shaky ground and learning which steps land well.

Those few years of fun are important!  They teach you an intuitive sense of which quick short reasoning steps people can get away with.  It prevents you from learning bad habits about reasoning slowly and carefully all the time, even when that's less enjoyable, or starting to think that rigor is necessary to get the correct answer in mathematics.

"Rigor is necessary to know you got the correct answers.  Nonrigorous reasoning still often gets you correct answers, you just don't know that they're correct.  The map is not the territory."

"Often though not literally always, the obvious methodology in mathematics is to first 'rapid-prototype' an answer using nonrigorous reasoning, and then, once you have a strong guess about where you're going, what you're trying to prove, you prove it more rigorously."

Is there some precise sense of "approximately" that could be used to prove -

That sounds like a question SLOW people would ask!  Boring question!  Let's move on!
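
To make the contrast concrete, here's a standard freshman example (mine, not the text's): the fast-prototyped derivative of $x^2$, next to the slow, safe version that Golarion's calculus would insist on. The leap onto shaky ground treats $dx$ as a nonzero infinitesimal and discards $(dx)^2$:

$$\frac{(x+dx)^2 - x^2}{dx} = \frac{2x\,dx + (dx)^2}{dx} = 2x + dx \approx 2x.$$

The rigorous version licenses every step with the limit laws:

$$\frac{d}{dx}x^2 = \lim_{h \to 0}\frac{(x+h)^2 - x^2}{h} = \lim_{h \to 0}(2x + h) = 2x.$$

Both land on $2x$; only the second tells you that you've landed. That's Keltham's point: rigor is necessary to know you got the correct answer, not to get it.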

  1. ^

    Other dath ilani archetypes of success include the science maniac and the reckless investor.

    Being a mad entrepreneur is especially harrowing for some, because you must ask others to entrust you with lots of their money, and then risk returning to your investors empty-handed. Dath ilani see disappointing a cooperative trade partner as uncomfortably close to betraying one, and betrayal of a cooperative trade partner is a black sin in their culture.
