Create a Full Alternative Stack

Moral Mazes
Frontpage

Last time I proposed nine strategies for fighting mazes. All of them were either systemic solutions requiring coordinated and/or political action, or cultural shifts that happen one person at a time.

Now for the tenth and final proposal for how to fight mazes: a strategy that one dedicated person with sufficient resources could implement on their own.

If you are in a position where you have the resources to implement this, please make an effort to take this proposal seriously. And please contact me if you are potentially interested and wish to discuss it further.

Solution 10: Create a Full Alternative Stack

In some ways this is the most ambitious solution here. It may seem utopian. 

In other ways, it is the least ambitious, and most practical. It could be implemented by a single sufficiently wealthy and committed individual or organization. All other known solutions can be implemented locally, and would help locally, but need general adoption to succeed in general.

The full alternative stack offers a contract. 

Disengage entirely with mazes and traditional distortionary incentives, competitions and signals of all kinds, and discard all zero-sum activity, in favor of doing the thing. Whatever the thing in question may be. Make no compromises to make oneself legible or attractive to outside sources of funding. Tolerate no maze behaviors of any kind. Hire and fire with this deeply in mind.

In exchange, if you keep your end of the bargain, the stack will fully fund you and your operations, at fair prices that do not take advantage of the hold-up opportunity presented by your giving up of other opportunities. Evaluation will be fully on the object-level merits, and the generative processes involved. 

This is a form of tenure for the people. If they continue to act with integrity and work to accomplish positive-sum things relevant to the stack’s interests, and spend responsibly, they and their family will have strong financial security. 

Think of this as similar to tenure at a university, or to the system of universal employment for partisan hacks. If you are promising, the stack gives you the opportunity to prove yourself. Once you have proven yourself, we take care of you, even if you don’t bear as much fruit as we would like, provided you continue to play by the rules of the stack and honor its values. Unlike many tenured professors nowadays, we would not then force you to seek grants, outside investments, or other outside funding for your work. On the contrary, you would be agreeing not to seek outside funding, so as to protect your incentives from corruption.

This is also a form of secured financing for corporations and other organizations. While they need funding to reach maturity, they will be evaluated on whether they are succeeding at doing the thing. Traditional signals, and anticipation of future traditional signals, will be not only disregarded but punished – it’s fine to look good naturally, but if you are doing things in order to look good or successful to outsiders rather than be good or successful, then that breaks the compact. 

We call this a full alternative stack because the ideal version is literally a full alternative stack. It recreates civilization. Those involved would not need or depend on outside goods or services. There would be a local area fully owned by and given over to the project.

That is the full version. The full version is ambitious and difficult, but likely far less ambitious and difficult, and far less expensive, than it appears. We would soon find out how much of current activity is rent extraction or otherwise unproductive, and how much is necessary to keep things running. 

A lesser version, built around a particular cause or goal, or to give this freedom to select individuals and corporations, would still be extremely valuable. 

The MacArthur grant is a template for what this looks like on a personal level, with a shift in focus from creativity to integrity, and a bump in compensation – $625,000 is a lot of money, but that money is designed to be seed money for an activity rather than financial security. Those getting a MacArthur grant still face the specter of future financial needs. One needs an order of magnitude more than that over a lifetime to be secure while not compromising one’s interactions with society. 

For startup corporations, this can be similar to the standard method of funding a biotechnology company to pursue a new drug of unknown efficacy. Milestones are set. If they are met, funding is unlocked at pre-negotiated levels, locked in for both sides in advance. There is no reason to worry about signaling in other ways unless the company is about to fail. We would add the condition of not then selling out to a maze (in the biotech example, a big pharma company, or taking the company public) when successful, instead keeping the operation privately owned by its founders to prevent it from being eaten and transformed or killed. Public markets exert strong pressure towards maze behaviors, so such companies would need to commit to staying away from them.

I believe there is a strong opportunity for a venture capital fund that promises committed, full funding to projects in this way in fields outside biotechnology. Projects that are freed from having to gain strong negotiating positions regarding raising capital could be much better at pursuing actual success and production. To succeed, such a fund would need to honor its commitments carefully and be credible at every stage. This includes its commitments not to respond positively to things that would on the outside be viewed as good news, if they are not in fact relevantly good news. Its word would be its bond. It would also need to be highly skilled at choosing superior evaluation techniques. There are many terrible things about current systems of venture funding, but naive replacement models threaten to be easily gameable or otherwise create new and perhaps much worse versions of the same problems. 

Most people nowadays are forced, both within an enterprise and overall in their lives, to structure and censor everything they do in light of their potential future need to look legible, comfortable and successful or valuable to mazes. The prospect of having this option cut off fills them with terror, whether or not this should be the case. Even when they do not fear it, those around them who rely on them fear it, which has a similar effect.

I am unusually immune to these pressures. I have skills that can earn money on demand, without getting a formal job or the approval of a maze, if I need that. I also have robust savings, and a family and community that would save me if I fell upon hard times. Even so, this terror is something I have often struggled with.

Freeing a select group to do things without regard to such concerns, and people knowing they have the option to join this group, would be a major cultural change. Ideally this would then become the ‘city on a hill’ that shows what is possible, and gets emulated elsewhere. Regulatory and other legal issues would still have to be navigated, which would often be most of the difficulty of any worthwhile operation. This is why typical versions of this type of proposal go to places like seasteading. Mazes will instinctively attempt to crush whatever is being built.

If one is sitting on a large pile of money and wishes to do good, or simply wishes to increase production, deploying that money effectively has proven a very hard problem. This is to be expected, under even the best/worst conditions, as such problems are anti-inductive. Any easy answers get utilized until they stop being easy answers. Once others find out your criteria for spending or granting money, some of them will Goodhart and/or commit fraud to extract those funds. 

The closer you stick to specified metrics, and use criteria you can explain and justify that look consistent, the more you are optimizing over time for those who Goodhart and commit fraud, granting them the appearance of attempting to help in the approved ways, rather than optimizing for actually helping. This is certainly a danger to the full stack operation as well, and the best reason to keep the operation relatively small. 

The less you stick to such methods, the more illegible you become, the more blameworthy you appear, and the more likely you are effectively using ‘things that make you feel good’ as your metric. Which, in turn, is even easier to Goodhart or commit fraud on. 

Sticking to ‘do the right thing,’ as this solution suggests, and rewarding those who do right things is a rather crazy ask without rich contextual knowledge. The larger you scale, the more universal you attempt to get, the crazier it gets. Goodharting or committing fraud on ‘right thingness’ is as much a threat as Goodharting or committing fraud on anything else, if you’re not staying a step ahead. That very freedom from mazes, Goodharting and fraud is the precious thing you’re trying to get in the first place. 

The project has to cash itself out purely on its own terms. It has to care more about doing things its own way than getting things done or looking effective, where that own way is a ruthless focus on what will actually work. Everyone’s instinct, even that of the best possible additions, will be to abandon this at every step. Everyone will face constant pressure to do so. 

But without sufficient scale to complete the stack, how do you break free from, and securely break the right people away from the need to worry about, mazes and other outside forces?

Threading that needle is going to be very difficult, even if the other impossible problems are solved. I do not think any one person or formal group can be the head of the entire stack without it getting too large. One must form a distinct subset, and hope others form the required other parts, and until that happens purchase what one needs from the outside using capital, and trust those in the project to continue interacting economically in some ways outside of the stack. 

Ideally one does not need to literally go to Mars to be allowed to complete the project. However, if one does need to literally go to Mars, then there is a fair argument that literally going to Mars is a reasonable price to pay to be allowed to complete the project.

The next post asks what we should do when we have a project that would benefit from a large organization.



My interpretation of the previous several posts is: alignment of organizations is hard, and if you're even a little bit misaligned, the mazeys will exploit that misalignment to the hilt. Allow any divergence between measures of performance and actual performance, and a whole bureaucracy will soon arise, living off of that divergence and expanding it whenever possible.

My interpretation of this post is: let's solve it by making a fund which pays people to always be aligned! The only hard part is figuring out how to verify that they are, in fact, aligned.

... Which was the whole problem to begin with.

The underlying problem is that alignment is hard. If we had a better way to align organizations, then organizations which use that method would already be outperforming everyone else. The technique would already be used. Invent that technology (possibly a social technology), and it will spread. The mazes will fight it, and the mazes will die. But absent some sort of alignment technology, there is not much else which will help.

This is a problem which fundamentally cannot be fixed by throwing money at it. Setting up a fund to pay people for being aligned will result in people trying to look aligned. Without some way of making one's measure of alignment match actual alignment, this will not do any good at all.

I was in no way trying to disguise that the problem of people faking alignment with the stack in order to extract resources is the biggest problem with the project, if someone were to actually try to implement it. If I get feedback that this wasn't clear enough I will edit to make it more clear. And certainly one does not simply throw money at the problem.

So that far, fair enough.

However, this also incorporates a number of assumptions, and a general view of how things function, that I do not share.

First, the idea that alignment is a singular problem, or that it either does or does not have a solution. That seems very wrong to me. Alignment has varying known solutions depending on the situation, which prices you are willing to pay, and how much you care, and varies based on what alignment you are trying to verify. You can also attempt to structure the deal such that people who are non-aligned (e.g. with the maze nature, or even not very into being opposed to it) do not want what you are offering.

I don't think there are cheap solutions. And yes, eventually you will fail and have to start over, but I do think this is tractable for long enough to make a big difference.

Second, the idea that if there was a solution then it would be implemented because it outcompetes others just doesn't match my model on multiple levels. I don't think it would be worth paying the kind of prices the stack would be willing to pay, in order to align a generic corporation. It's not even clear that this level of anti-maze would be an advantage in that spot, given the general reaction to such a thing on many levels and the need for deep interaction with mazes. And it's often the case that there are big wins, and people just don't know about them, or they know about them but for some reason don't take them. I've stopped finding such things weird.

You can also do it backwards-only if you're too scared of this - award it to people who you already are confident in now, and don't extend it later to avoid corruption. It would be a good start on many goals.

In any case, yes, I have thought a lot about the practical problems, most of which such people already face much worse in other forms, and have many many thoughts about them, and the problem is hard. But not 'give up this doesn't actually help' kinds of hard.

Not going to go deeper than that here. If I decide to expand on the problem I'll do it with more posts (which are not currently planned).

One needs an order of magnitude more than that over a lifetime to be secure while not compromising one’s interactions with society. 

I'm confused by this. Isn't compromising one's interactions with society the point?

That is, suppose most costs are rent-seeking. If you want your startup to do something in the Bay, you have to pay Bay Area landlords about half of your investment capital (through rent and increased salaries). But why does your startup have to do something in the Bay? Because marketing, both to customers and future investors, and employees who want to be able to jump between companies, and various other things.

If you instead want to deliberately avoid marketing / mazes, why not do it in rural Pennsylvania? The rents are low, there.

Like, it seems to me the thing you're suggesting is something like an Amish community, but with something more healthy than God at the center. And that suggests you should do something more like what the Amish do, and less like staying in NYC, where if you don't have ~$10M in the bank or in expected future compensation you're not going to be financially secure while purchasing all the markers of being in the professional class. Like, why pay more for a house in a 'good' school district when you're just going to unschool your kids anyway?

Freeing a select group to do things without regard to such concerns, and people knowing they have the option to join this group, would be a major cultural change. Ideally this would then become the ‘city on a hill’ that shows what is possible, and gets emulated elsewhere.

I think the history of the Thiel Fellowship should be interesting, in this regard. My sense is that it tried to do this, couldn't find the people for it, and then pivoted to just be another way to perform well in the broader maze.

The point is to compromise one's interactions with society in the sense that you want to change what they are. But in this frame, the idea is that your interactions were previously being compromised by the worry that some day you may need to extract money from society / mazes, and this seeks to prevent that.

Consider the Thiel fellowship. Yes, it helps people get their start, but their orders are to go out into the world and start a normal business and raise money the normal way. It's better than letting those people go to college, so yay fellowship, but it's totally not this thing. It was a way to let kids who knew that college was a trap skip college. Or at least, that's my understanding.

Thiel literally proposed funding me in the full stack way at a meeting - not personally for life, but for a proposed company, which was going to be biotech-related so it was much closer to normal procedure. He got the logic. But when he came back to his social situation he couldn't follow through. Biotech has to work this way for companies because of hold-up problems and dependencies, you agree on the later rounds in advance with criteria for unlocking them. It's not the full full stack, but it's the core idea that you need to be secure from concerns that would bury the real operation if you had to worry about them.

Creating a new entire community in a new location makes perfect sense, and is one good way to consider implementation.

Hmm. This seems to ignore the underlying principal-agent problem that is a partial cause of mazes: most (perhaps all, perhaps including you, certainly including me) people aren't willing to truly dedicate their entire life to a thing. There just aren't enough people who are willing/able/whatever to ignore all interpersonal competition for some of the slack (whether that be money or time or other non-shared-goal-directed value).

What would such a full-stack organization do?

They couldn't try to optimize for any thing.

So... don't optimize? Not even for longevity. But also don't signal anti-optimization, or whatever the opposite of optimizing is. Don't reward signalling, and don't do any of the opposites of rewarding signalling.


I try to imagine such an entity ex nihilo, and I keep getting a novel experience that I'm going to call 'paradigm error'. But when I try to apply those qualities to various organizations, there is simply a mismatch, and different organizations mismatch different things.


Don't try to optimize anything, including meta to this description. Instead, try to do a good job of that thing. Including doing a good job at this meta-thing. My intuition says that doing a good job at avoiding Goodhart while doing a good job of doing a good job things is often going to mean using fuzzy metrics. I can't describe what I actually mean by 'fuzzy metrics', but using fuzzy metrics to evaluate a thing is adjacent to, and not, having someone observe the thing and then rate how good they think it was on a scale (that method is a badly done hard metric). It might look like a narrative evaluation of an expert observer, but I think a core feature of what I'm calling a 'fuzzy metric' is there is no way to generate a fuzzy metric by following a written or unwritten formal procedure.

When checking to see if you're doing a good job, look at some things that can be measured objectively, and combine those measurements with a thing that it is impossible for me to tell you how to get. Maybe "Don't not go with your gut." (double negative intended) might be a good job of explaining the non-measurement part of evaluation.


Such a full-stack organization can of course not optimize for maintaining an ideal culture, because that would be sacrificing literally all value. But they can try to do a good job of maintaining a good culture, identifying people who make the culture worse and humanely moving them to locations where they stop influencing the organizational culture.


In large organizations, hierarchy is impossible to avoid. I think a good tool to reduce that is to say that each level of hierarchy should have a unique object-level thing that they do, beyond bookkeeping or managerial tasks for the other levels. In a corporate context, anyone who successfully replaces their own job with a few spreadsheet formulae should not by default be punished for/with redundancy.

Milestones are set. If they are met, funding is unlocked at pre-negotiated levels, locked in for both sides in advance.

It seems to me that the pressures to Goodhart are higher when you agree to be funded only by a single entity with specific pre-negotiated milestones than if you are in a state where you are going to seek capital from a bunch of people with at least slightly different evaluation criteria.

Goodhart applies to any use of an alignment indicator, not just funding.

Yes, but I don't see how that changes the fact that this setup creates stronger incentives to Goodhart the milestones, given that they decide whether the company dies or continues to function.

I read this and thought of organized religion. Unable to figure out why though.