When Money Is Abundant, Knowledge Is The Real Wealth

by johnswentworth · 4 min read · 3rd Nov 2020 · 46 comments

First Puzzle Piece

By and large, the President of the United States can order people to do things, and they will do those things. POTUS is often considered the most powerful person in the world. And yet, the president cannot order a virus to stop replicating. The president cannot order GDP to increase. The president cannot order world peace.

Are there orders the president could give which would result in world peace, or increasing GDP, or the end of a virus? Probably, yes. Any of these could likely even be done with relatively little opportunity cost. Yet no president in history has known which orders will efficiently achieve these objectives. There are probably some people in the world who know which orders would efficiently increase GDP, but the president cannot distinguish them from the millions of people who claim to know (and may even believe it themselves) but are wrong.

Last I heard, Jeff Bezos was the official richest man in the world. He can buy basically anything money can buy. But he can’t buy a cure for cancer. Is there some way he could spend a billion dollars to cure cancer in five years? Probably, yes. But Jeff Bezos does not know how to do that. Even if someone somewhere in the world does know how to turn a billion dollars into a cancer cure in five years, Jeff Bezos cannot distinguish that person from the thousands of other people who claim to know (and may even believe it themselves) but are wrong.

When non-experts cannot distinguish true expertise from noise, money cannot buy expertise. Knowledge cannot be outsourced; we must understand things ourselves.

Second Puzzle Piece

The Haber process combines one molecule of nitrogen with three molecules of hydrogen to produce two molecules of ammonia - useful for fertilizer, explosives, etc. If I feed a few grams of hydrogen and several tons of nitrogen into the Haber process, I’ll get out a few grams of ammonia. No matter how much more nitrogen I pile in - a thousand tons, a million tons, whatever - I will not get more than a few grams of ammonia. If the reaction is limited by the amount of hydrogen, then throwing more nitrogen at it will not make much difference.

In the language of constraints and slackness: ammonia production is constrained by hydrogen, and by nitrogen. When nitrogen is abundant, the nitrogen constraint is slack; adding more nitrogen won’t make much difference. Conversely, since hydrogen is scarce, the hydrogen constraint is taut; adding more hydrogen will make a difference. Hydrogen is the bottleneck.
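The limiting-reagent arithmetic can be sketched in a few lines (a toy calculation, using approximate molar masses and ideal stoichiometry):

```python
# Haber process: N2 + 3 H2 -> 2 NH3. The taut constraint (bottleneck)
# is whichever reagent caps production at the lower level.
M_N2, M_H2, M_NH3 = 28.0, 2.0, 17.0  # approximate molar masses, g/mol

def ammonia_yield(grams_n2, grams_h2):
    mol_n2 = grams_n2 / M_N2
    mol_h2 = grams_h2 / M_H2
    # Each input independently caps output; the minimum is the bottleneck.
    mol_nh3 = min(mol_n2 * 2, (mol_h2 / 3) * 2)
    return mol_nh3 * M_NH3

# A few grams of hydrogen with a ton of nitrogen: hydrogen is taut.
print(ammonia_yield(1_000_000, 4))      # ~22.7 g of ammonia
# Piling in a thousand times more nitrogen changes nothing:
print(ammonia_yield(1_000_000_000, 4))  # still ~22.7 g
```

Adding more of the slack input moves the `min` not at all; only more hydrogen would.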

Likewise in economic production: if a medieval book-maker requires 12 sheep skins and 30 days’ work from a transcriptionist to produce a book, and the book-maker has thousands of transcriptionist-hours available but only 12 sheep, then he can only make one book. Throwing more transcriptionists at the book-maker will not increase the number of books produced; sheep are the bottleneck.

When some inputs become more or less abundant, bottlenecks change. If our book-maker suddenly acquires tens of thousands of sheep skins, then transcriptionists may become the bottleneck to book-production. In general, when one resource becomes abundant, other resources become bottlenecks.

Putting The Pieces Together

If I don’t know how to efficiently turn power into a GDP increase, or money into a cure for cancer, then throwing more power/money at the problem will not make much difference.

King Louis XV of France was one of the richest and most powerful people in the world. He died of smallpox in 1774, the same year that a dairy farmer successfully immunized his wife and children with cowpox. All that money and power could not buy the knowledge of a dairy farmer - the knowledge that cowpox could safely immunize against smallpox. There were thousands of humoral experts, faith healers, eastern spiritualists, and so forth who would claim to offer some protection against smallpox, and King Louis XV could not distinguish the real solution.

As one resource becomes abundant, other resources become bottlenecks. When wealth and power become abundant, anything wealth and power cannot buy become bottlenecks - including knowledge and expertise.

After a certain point, wealth and power cease to be the taut constraints on one’s action space. They just don’t matter that much. Sure, giant yachts are great for social status, and our lizard-brains love politics. The modern economy is happy to provide outlets for disposing of large amounts of wealth and power. But personally, I don’t care that much about giant yachts. I want a cure for aging. I want weekend trips to the moon. I want flying cars and an indestructible body and tiny genetically-engineered dragons. Money and power can’t efficiently buy that; the bottleneck is knowledge.

Based on my own experience and the experience of others I know, I think knowledge starts to become taut rather quickly - I’d say at an annual income level in the low hundred thousands. With that much income, if I knew exactly the experiments or studies to perform to discover a cure for cancer, I could probably make them happen. (Getting regulatory approval is another matter, but I think that would largely handle itself if people knew the solution - there’s a large profit incentive, after all.) Beyond that level, more money mostly just means more ability to spray and pray for solutions - which is not a promising strategy in our high-dimensional world.

So, two years ago I quit my monetarily-lucrative job as a data scientist and have mostly focused on acquiring knowledge since then. I can worry about money if and when I know what to do with it.

A mindset I recommend trying on from time to time, especially for people with $100k+ income: think of money as an abundant resource. Everything money can buy is “cheap”, because money is "cheap". Then the things which are “expensive” are the things which money alone cannot buy - including knowledge and understanding of the world. Life lesson from Disney!Rumplestiltskin: there are things which money cannot buy, therefore it is important to acquire such things and use them for barter and investment. In particular, it’s worth looking for opportunities to acquire knowledge and expertise which can be leveraged for more knowledge and expertise.

Investments In Knowledge

Past a certain point, money and power are no longer the limiting factors for me to get what I want. Knowledge becomes the bottleneck instead. At that point, money and power are no longer particularly relevant measures of my capabilities. Pursuing more “wealth” in the usual sense of the word is no longer a very useful instrumental goal. At that point, the type of “wealth” I really need to pursue is knowledge.

If I want to build long-term knowledge-wealth, then the analogy between money-wealth and knowledge-wealth suggests an interesting question: what does a knowledge “investment” look like? What is a capital asset of knowledge, an investment which pays dividends in more knowledge?

Enter gears-level models.

Mapping out the internal workings of a system takes a lot of up-front work. It’s much easier to try random molecules and see if they cure cancer, than to map out all the internal signals and cells and interactions which cause cancer. But the latter is a capital investment: once we’ve nailed down one gear in the model, one signal or one mutation or one cell-state, that informs all of our future tests and model-building. If we find that Y mediates the effect of X on Z, then our future studies of the Y-Z interaction can safely ignore X. On the other hand, if we test a random molecule and find that it doesn’t cure cancer, then that tells us little-to-nothing; that knowledge does not yield dividends.
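The "Y mediates the effect of X on Z" point can be seen in a toy simulation (an invented model, not anything from cancer biology): once Y is known, conditioning on it screens off X.

```python
import random

random.seed(0)

# Toy causal chain X -> Y -> Z: X influences Y, and Z depends only on Y.
def sample():
    x = random.random() < 0.5
    y = random.random() < (0.9 if x else 0.2)  # X influences Y
    z = random.random() < (0.8 if y else 0.1)  # Z depends only on Y
    return x, y, z

draws = [sample() for _ in range(200_000)]

def p_z_given(y_val, x_val=None):
    """Empirical P(Z | Y = y_val), optionally also conditioning on X."""
    zs = [z for x, y, z in draws
          if y == y_val and (x_val is None or x == x_val)]
    return sum(zs) / len(zs)

# Once we condition on the gear Y, knowing X shifts the estimate of Z
# only by sampling noise - so future Y-Z studies can ignore X:
print(p_z_given(True, x_val=True), p_z_given(True, x_val=False))
```

Both printed values sit near 0.8: having nailed down the gear Y, the X measurement becomes irrelevant to the Y-Z question.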

Of course, gears-level models aren’t the only form of capital investment in knowledge. Most tools of applied math and the sciences consist of general models which we can learn once and then apply in many different contexts. They are general-purpose gears which we can recognize in many systems.

Once I understand the internal details of how e.g. capacitors work, I can apply that knowledge to understand not only electronic circuits, but also charged biological membranes. When I understand the math of microeconomics, I can apply it to optimization problems in AI. When I understand shocks and rarefactions in nonlinear PDEs, I can see them in action at the beach or in traffic. And the “core” topics - calculus, linear algebra, differential equations, big-O analysis, Bayesian probability, optimization, dynamical systems, etc - can be applied all over. General-purpose models are a capital investment in knowledge.

I hope that someday my own research will be on that list. That’s the kind of wealth I’m investing in now.

Comments

I'm somewhat in agreement with this general idea, but I think that most people who try to "build knowledge" ignore a central element of why money is good: it's a hard-to-fake signal.

I agree with something like:

So, two years ago I quit my monetarily-lucrative job as a data scientist and have mostly focused on acquiring knowledge since then. I can worry about money if and when I know what to do with it.

A mindset I recommend trying on from time to time, especially for people with $100k+ income: think of money as an abundant resource. Everything money can buy is “cheap”, because money is "cheap".

To the extent that I basically did the same (not quit my job, but got a less well-paying, less time-consuming job doing a thing that's close to what I'd be doing in my spare time had I had no job).

But this is a problem if your new aim isn't to make "even more money", i.e. say a few million dollars.

***

The problem with money is when it scales linearly: you make 50k this year, 55k the next, 100k ten years later. The difference between 50k and 100k is indeed very little.

But the difference between 100k and 100m isn't; 100m would allow me to pursue projects that are far more interesting than what I'm doing right now.

***

Knowledge is hard to anchor. The guy who built TempleOS was acquiring something like knowledge in the process of being an unmedicated schizophrenic. Certainly, his programming skills improved as his mental state went downhill. Certainly he was more knowledgeable in specific areas than most (who the **** builds an x86 OS from scratch, kernel and all? There are maybe 5 or 6 of them total; I'd bet there are fewer than 10,000 people alive who could do that given a few years!)... and those areas were not "lesbian dance PhD" style knowledge, they were technical applied engineering areas, the kind people get paid millions to work in.

Yet for some reason, poor Terry Davis was becoming insane, not smart, as he went through life.

Similarly, people doing various "blind Inuit pottery-making success gender gap PhD" style learning think they are acquiring knowledge, but many of the people here would agree they aren't. Or at least that they are acquiring knowledge of little consequence, which will not help them live more happily or effect positive change, or really any change, upon the world.

At most, you can see the knowledge you've acquired "fail" once it hits an extreme - once you're poor, sad, and alone in spite of a lifetime of knowledge acquisition.

Money, on the other hand, is very objective: everyone wants it. Most need it. Everyone, to some extent, won't give up theirs or print more of it very easily. It's also instant. Given 10 minutes, I can tell you within +/-1% how much liquidity I have access to at this very moment. That number will then be honored by millions of businesses and thousands of banks across the world, who will give me services, goods, or precious metals, stakes in businesses, and government bonds in exchange for it. I can't get any such validation with knowledge.

So is it not a good "test of your knowledge" to try and acquire some of it?

Even if doing a 1-to-1 knowledge-money mapping is harmful, doing a, say, 0.2-to-1 knowledge-money mapping isn't. Instead it serves as a guideline. Are you acquiring relevant knowledge about the world? Or maybe you're just becoming a numerology quack, or a religious preacher, or a self-help guru, or a bullshitter, or whatever.

Which is not to say the knowledge-money test is flawless - it isn't - it's just the best one we have thus far. Maybe one could suggest other tests exchanging knowledge for things that one can't buy (e.g. affection), but most of those things are much harder to quantify, and trying to "game" them would feel dirty and immoral. Trying to "game" money is the name of the game; everyone does it, that's part of its role.

This is a great comment, and I kind of really want to see it get written up as a top-level post. I've made this argument myself a few times, and would love to see it written up in a way that's easy to reference and link to.

I will ping you with the more cohesive, in-depth, syntactically and grammatically correct version when it's done - either this Monday or the next. It's been in draft form ever since I wrote this comment...

Though the main point I'm making here is basically just Taleb's Skin in the Game idea. He doesn't talk about the above specifically, but the idea flows naturally after reading him (granted, I read the book ~3 years ago, maybe I'm misremembering it).

I generally agree with this comment, and I think the vast majority of people underestimate the importance of this factor. Personally, I consider "staying grounded" one of the primary challenges of what I'm currently doing, and I do not think it's healthy to stay out of the markets for extended periods of time.

precious mentals

I like this coinage.

What have you been learning? How has it been working out for you?

Plurality of my effort has been studying agency-adjacent problems. How to detect embedded Bayesian models (turns out to be numerically unstable), markets/committees requiring unanimity as a more general model of inexploitable preferences than utility functions, abstraction, how to express world models, and lately ontology translation.

Other things I've spent time on:

  • Financial market models. Some progress there, but mostly I found that my statistical tools just aren't yet up to the task of (reliably) dealing with full-scale market data.
  • Statistical and optimization algorithms. Fair bit of progress there, and many of the insights feed my gears-level modelling posts (though obviously with most of the original math stripped out).
  • More general economic questions. For instance, a couple months ago I was thinking about when and to what extent working as a group outperforms working independently, and I ended up reading a book on theory of the firm. That led to some interesting thoughts about identifiability-in-hindsight as a constraint on incentive/contract design, which will probably be a post eventually.
  • Making the ideas/intuitions underlying gears-level models more explicit.
  • Reading the literature on aging.
  • Several more minor threads which didn't lead anywhere interesting.

thousands of other people who claim to know (and may even believe it themselves) but are wrong

Seems to me the greatest risk of this strategy is becoming one of them.

The risk can be mitigated by studying the textbooks of settled science first, and only trying to push the boundary of human knowledge later. But then, time becomes another bottleneck.

How exactly do people end up knowing little? 

I'd venture it tends to start with poor mental models, followed by addressing a huge universe of learnable information with those models. That amplifies confirmation bias and leads to consistently learning the wrong lessons. So there's real value in optimizing your mental models before you even try to learn the settled knowledge - but of course, the knowledge itself is the basis of most people's models.

Perhaps there's a happy medium in building out a set of models before you start work on any new field, and looking to those you respect in those fields for pointers on what the essential models are.

If you were the President or as rich as Jeff Bezos, you could use your power or money to just throw a lot more darts at the dartboard. There are plenty of research labs using old equipment, promising projects that don't get funding, post-docs who move into industry because they're discouraged about landing that tenure-track position, schools that can't attract competent STEM teachers partly because there's just so little money in it.

And of course, you can build institutions like OpenPhil to help reduce uncertainty about how to spend that money.

Using money or power to fix those problems is do-able. You don't have to know everything. You can be a dart, or, if you're lucky and hard-working, you can be a dart-thrower.

If you were the President or as rich as Jeff Bezos, you could use your power or money to just throw a lot more darts at the dartboard. 

From the OP:

Beyond that level, more money mostly just means more ability to spray and pray for solutions - which is not a promising strategy in our high-dimensional world.

When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.

I would love to see John, or anyone with an interest in the subject, do an explainer on all the ways science organizes and coordinates to solve problems.

In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.

When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.

Tho if you take analyses like Braden's seriously, quite possibly these filtering efforts have negative value, in that they are more likely to favor projects supported by insiders and senior people, who have historically been bad at predicting where the next good things will come from. "Science advances one funeral at a time," in a way that seems detectable from analyzing the literature.

This isn't to say that planning is worthless, and that no one can see the future. It's to say that you can't buy the ability to buy the right things; you have to develop that sort of judgment on your own, and all the hard evidence comes too late to be useful.

I'm starting to read Braden. The thing is, if Braden's analysis is true, then either:

  1. We can filter for the right people, we're just doing it wrong. We need to empower a few senior scientists who no longer have a dog in the fight to select who they think should be endowed with money for unconstrained research. Money can buy knowledge if you do it right.
  2. We truly can't filter for the right ideas. Either rich people need to do research, researchers need to get rich, or we need to just randomly dump money on researchers and hope that a few of them turn out to be the next Einstein.

I think there's a fairly rigorous, step-by-step, logical way to ground this whole argument we're having, but I think it's suffering from a lack of precision somehow...

There seems to be a lack of knowledge in the people who fund science about how to structure the funding in an effective way.

There are some experts who think that they have an alternative proposal that leads to a much better return on investment. Those experts have some arguments for their position, but it's not straightforward to know which expert is right, and that judgement can't be bought.

I suspect being good at finding better scientists is very close to having a complete theory of scientific advancement and being able to automate the research itself.

The extreme form of that idea is: if we could evaluate the quality of scientists, then we could fully computerize research. Since we cannot fully computerize research, we therefore have no ability to evaluate the quality of scientists.

The most valuable thing to do would be to observe what's going on right now, and the possibilities we haven't tried (or have abandoned). Insofar as we have credence in the "we know nothing" hypothesis, we should blindly dump money on random scientists. Our credence should never be zero, so this implies that some nonzero amount of random money-dumping is optimal.

I think this is true if you're looking for near-perfect scientists but if you're assessing current science to decide who to invest in there are lots of things you can do to get better at performing such assessments (e.g. here).

>In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.

How about a Metaculus-style prediction market for scientific advances, given an investment in person or project X (where people put stake in the success of a person or project)? Is this susceptible to bad incentives?

I think the greater concern is that it's hard to measure. And yes, you could imagine that owning shares against, say, the efficacy of a vaccine being above a certain level could be read as an incentive to sabotage the effort to develop it.

There are plenty of research labs using old equipment

The people in those research labs probably believe that newer equipment is likely to yield the knowledge that we are seeking. Our labs now have much better equipment and many more people than before the Great Stagnation started.

Expensive equipment has the problem that it forces the researchers to focus on questions that can actually be answered with the expensive equipment and those questions might not be the best to focus on. 

What does the NIH have to show for Bush doubling their budget?

Would philanthropy be better off it people just threw darts, or if they stuck to tried and true ways of giving? Is not even taking a gamble on a possible great outcome for the overall good a form of genuine altruism?

Well, if you’re a subscriber to mainstream EA, the idea is that neither traditionalism nor dart-throwing is best. We need a rigorous cost-benefit analysis.

If one believes that, yet also that less cost-benefit analysis is needed (or tractable) in science, that needs an explanation.

Again, I think that this post is getting at something important, but the definitions here aren’t precise enough to make it easy to apply to real issues. Like, can a billionaire use his money to buy a cost/benefit analysis of an investment of interest? Definitely.

But how can he evaluate it? Does he have to do it himself? Does he focus on creating an incentive structure for the people producing it? If so, what about Goodhart’s Law - how will he evaluate the incentive structure?

It’s “who will watch the watchmen” all the way down, but that’s a pretty defeatist perspective. My guess is that institutions do best when they adopt a variety of metrics and evaluative methods to make decisions, possibly including some randomization just to keep things spicy.

Look at all the good Bill Gates does - which I think is effective altruism - and he gets vilified. It's a weird thing. I remember watching a Patriot Act episode: https://www.youtube.com/watch?v=mS9CFBlLOcg

Welcome to LW, by the way :)

You’re doing something (a good thing) that we call Babble. Freely coming up with ideas that all circle around a central question, without worrying too much about whether they’re silly, important, obvious, or any of the other reasons we hold stuff back.

I’d suggest going further. Feel free to use this comment thread (or make a shortform) to throw out ideas about “why philanthropy might benefit from more (or less) cost/benefit analysis”.

We often suggest trying to come up with 50 ideas all in one go. Have at it!

I imagine most good deeds or true altruism take place on non-measurable scales. It's the thought that counts, right? A smile goes a long way - how can you measure a smile, or positive energy? Whether you throw a dart or follow a non-dart method, maybe the positive energy put out means something, especially now.

Great summary! A nit:

our lizard-brains love politics

It's more likely our monkey (or ape) brains that love politics, e.g. https://www.bbc.co.uk/news/uk-politics-41612352

On the note of monkey-business - what about investments in collective knowledge and collaboration? If you've not come across this, you might like it https://80000hours.org/articles/coordination/

EDIT to add some colour to my endorsement of the 80000hours link: I've personally found it beneficial in a few ways. One such is that although the value of coordination is 'obvious', I nevertheless have recognised in myself some of the traits of 'single-player thinking' described.

By and large, the President of the United States can order people to do things, and they will do those things.

I think this exaggerates the power of the president. Obama did try to order Gitmo closed. Trump did try to withdraw troops from Afghanistan and failed.

On the specific topic of COVID-19, Trump spoke about having a vaccine quite early, likely because he believed he could just approve it and get it distributed even if the evidence for the vaccine was only a little better than what Benjamin Jesty had.

It turns out that the president doesn't have the power to just approve a vaccine without it having gone through "enough" testing. 

Curated.

If I understood it correctly, the central point of this post is that very often, knowing what to do is a much larger problem than having the ability to do things, i.e., money and power. I often like to say that planning is an information problem for this reason. This post is an excellent articulation of this point, probably the best I've seen.

It's an important point. Ultimately, it is precisely this point that unifies epistemic and practical rationality, the skills of figuring out what's true and the skills of achieving success. When you recognize that success is hard because you don't know what to do, you appreciate why understanding what is actually true is darned important, and why figuring out how to discover truth is among the best ways to accomplish goals whose solutions aren't known.

This can all get applied downstream in Value of Information calculations and in knowledge-centric approaches to planning. It's good stuff. Thanks!

This is an idea I've been struggling to wrap my head around for a while now. I usually think of it as "capital vs knowledge" instead of "money vs knowledge". But since knowledge is a form of capital, your phrasing is more accurate.

I also agree the low hundred thousands is about where this happens for someone who lives in the United States and works a full-time job. I wonder how this number changes if you have passive income instead.

The world would undoubtedly be better if more Data Scientists became monks. 

in the space of aging (or models in bioscience research in general), you should contact Alexey Guzey and Jose Ricon and Michael Nielsen and Adam Marblestone and Laura Deming. You'd particularly click with some of these people, and many of them recognize the low number of independent thinkers in the area.

I think you have a kind of thinking that almost everyone else in aging I know seems to lack (if I showed your writing to most aging researchers, they'd most likely glaze over it), so writing a good way to, say, put a physical-principles framework to aging could result in a lot of people wanting to fund you (a la Pascal's wager - there are LOTS of people who are willing to throw money into the field even if it doesn't have a huge chance of producing results - and a good physical framework can make others want you to make the most out of your time, especially as many richer/older people lack the neuroplasticity to change how aging research is done). Many, many papers have already been written in the field (many by people making guesses as to what matters most) - a lot of them being very messy and not very first-principles (even JP de Magalhães's work, while important, is kind of "messy" guessing at the factors that matter).

Are you time-limited? Do you have all the money needed to maximize your impact on the world? (Note: for making the most out of your limited time, I generally recommend being like Mati Roy and trying to create a simulation of yourself that future you/others can search, which generally requires a lot of HD/streaming - though even that is not that expensive.)

It seems that you can understand a broad range of extremely technical fields that few other people do (esp. optimization theory and category theory), and that you get a lot out of what you read (other people may not get as much out of the time invested in reading a technical textbook as you do) - thus you may be more suited for theoretical/scalable work than for work that's less generalizable/scalable. (One issue with bioscience research is that most people in it spend a lot of time on busywork that may be automated later, so most biologists aren't as broad or generalizable as you are; you can put together broad frameworks that improve the efficiency/rigor of future people who read you, so you should optimize for things that are highly generalizable.)

[you also put them all in a clear/explainable fashion that makes me WANT to return back to reading your posts, which is not something I can say for most textbooks].

There are tradeoffs between spending more time on ONE area vs spending time on ANOTHER area of academic knowledge - though there are areas where good thinking in one area can transfer to another (eg optimization theory => whole cell modeling/systems biology in biology/aging). Building general purpose models (if described well) could be an area you might have unique comparative advantage over others in, where you could guide someone else's thinking on the details even if you did not have the time to look at the individual implementations of your model on the system at hand. 

If you become someone whom everyone else in the area wants to follow (eg Laura Deming), you can ask questions and get pretty much every expert swarming over you, wanting to answer them.

You seem good at theory (which is low-cost), but how much would you want to ideally budget for sample lab space and experiments? [the more details you put in your framework - along with how you will measure the deliverables, the easier it would be to get some sort of starter funding for your ideas]. Doing some small cheap study (and putting all the output in an open online format that transcends academic publishing) can help net you attention and funding for more studies (it certainly seems that with every nascent field, it takes a certain something to get noticed, but once you do get noticed, things can get much easier over time, particularly if you're the independent kind of person). Wrt biology, I do get the impression that you don't interact much with other biologists, which might make the communication problems more difficult for now [like, if I sent your aging posts as is to most biologists I know, I don't think they would be particularly responsive or excited].

BTW - regarding wealth - fightaging has a great definition at https://www.fightaging.org/archives/2008/02/what-is-wealth/

Wealth is a measure of your ability to do what you would like to do, when you would like to do it - a measure of your breadth of immediately available choice. Therefore your wealth is determined by the resources you presently own, as everything requires resources.

Generally speaking, due to aging [and the loss of potential that comes with it], most people's wealth decreases with age (it's said that the wealthiest people are really those who have just been born) - however, your ability to imagine what you can do with wealth (within an affordance space - what you can imagine doing over the next year if given all the resources you can handle) can increase over time.

Mental models are only wealth insofar as they actively improve people's decision-making on the margin relative to an alternative model. They are necessary for innovation, but there are now so many mental models that taking time to understand one reduces the time one has to understand another. I do believe that compressible mental models (or network models) that explain a principle elegantly can offload the time investment it takes to act on a model (eg superforecasters use elegant models that others believe and can act on - so knowing when to use the expertise of superforecasters can help decision-making).

Not many people can create an elegant mental model, and fewer can create one that is useful on top of all the models already developed (useful in the sense that it becomes more worthwhile for others to read your model than all the confusing renditions used by others). Obviously there is vast space for improvement on this front (as you can see if you read Quantum Country), since most people forget the vast majority of what they read in textbooks or in conversations with others. Presentism is an ongoing issue, as more papers/online content are published than there are total eyeballs to read them (+ all the material published in the past).

The best kind of wealth you can create, in this sense, is a model/framework/tool that everyone uses. Think of how wealth was created with the invention of a new programming language, for example, or with Stack Exchange/Hacker News, or a game engine, or the wealth that could be created by automating tedious steps in biology, or the kind that makes it far easier for other people to make or write almost anything. The more people cite you, the more wealth and influence (of a certain kind) you get. This generalizes better than putting your entire life into studying a single protein or model organism, especially if you find a model/technique that is easily adoptable and makes it easy to do/automate high-throughput "-omics" across all organisms and interventions at once (letting others speed up and generalize biology research where it used to be super-slow). Bonus points if you make it machine-readable and put it in a database that can be queried, so that it is useful even if no one reads it at first [as data is generated faster than the total mental bandwidth of all the humans who could read it].

[BTW, attention also correlates with wealth, and money/attention/wealth is competitive in a way that knowledge is not. Wisdom may be knowing which knowledge to read in which order - how to use knowledge to maximize the wealth you can create with it.]

[Shaping people's frameworks by causing them to constantly refer to your list of causes, btw, is another way to create influence/wealth - but this may get in the way of maximizing social wealth over a lifetime if your frameworks end up preventing people from modeling or envisioning how they can discover new anomalies in the data that do not fit within those frameworks. This is also why we need a better concrete framework with physical observables for measuring aging rate, of which our ability to characterize epigenetic aging is only a local improvement.]

In the area of aging there is already too much "knowledge" (though not all of it particularly insightful) - but does the sum of all published aging papers constitute knowledge? Laura Deming mentions on her Twitter that she thinks about what not to read, rather than what to read, and recommends students study math/CS/physics rather than biochemistry. There may be a way to compress all this knowledge into a more organized physical-principles format that better helps other people map what counts as knowledge and what doesn't - but at this moment the sum of all aging research is still a disorganized mess, and the details of much of what we know now may be superseded by new high-throughput work that publishes data/metadata rather than prose (along with a publicly accessible annotation service that better guides people as to which aging papers represent true progress and which will simply obsolesce quickly). Guiding people to physical insight into the cell is more important for true understanding of aging, even though we can still get things done through rudimentary insight-free guesses like more work on rapamycin and calorie restriction.

The best investments in knowledge are mental models that can be applied across domains (some of which were mentioned in the post) and unchanging/permanent/durable knowledge like that in the STEM fields. The former provides leverage (from the cross-disciplinary latticework of mental models), while the latter lets compounding work as your knowledge builds on itself over the years.
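The compounding claim can be made concrete with a toy model. This is only an illustrative sketch - the growth rate, base, and horizon are arbitrary assumptions, not numbers from the comment:

```python
def linear_growth(base, increment, years):
    # Isolated facts: each year adds a fixed amount on top of the base.
    return base + increment * years

def compound_growth(base, rate, years):
    # Transferable, durable models: each year's learning builds on the last.
    return base * (1 + rate) ** years

# With an (assumed) 10% rate over 20 years, compounding knowledge
# far overtakes a linear accumulation of the same annual size.
print(linear_growth(100, 10, 20))      # 300
print(compound_growth(100, 0.10, 20))  # ~672.7
```

The point of the toy model is just that durable knowledge behaves like reinvested capital, while knowledge that quickly obsolesces behaves like the linear case.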

I agree. But knowledge was abundant for him too. What wasn't abundant was critical thinking. And that was the problem from the start.

Is it really true that money can't buy knowledge?

We can ask the most knowledgeable person we know to name the most knowledgeable person they know, and repeat until we find the best expert. Or alternatively, ask a bunch of people to name a few, and keep walking this graph for a while.

This won't let us buy knowledge that doesn't exist, but seems good enough for learning from experts, given enough money and modern communication technology that Louis XV didn't have.
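The referral walk described above can be sketched as a simple graph traversal. The names and referral map here are hypothetical, and the stopping rule (halt when the chain revisits someone) is one assumed way to handle cycles:

```python
def find_expert(referrals, start):
    """Follow 'most knowledgeable person you know' referrals until the
    chain revisits someone (a fixed point or a cycle), and return the
    last new person reached."""
    seen = {start}
    current = start
    while True:
        nxt = referrals.get(current)
        if nxt is None or nxt in seen:
            return current
        seen.add(nxt)
        current = nxt

# Hypothetical referral map: each person names their top expert.
referrals = {"me": "alice", "alice": "bob", "bob": "carol", "carol": "bob"}
print(find_expert(referrals, "me"))  # carol
```

The parallel variant ("ask a bunch of people to name a few") would instead tally referrals across many chains and pick the most-named person - which is also where the status-vs-knowledge objection bites hardest.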

This is an excellent algorithm for finding people with high status. Unfortunately, the correlation between status and knowledge is unreliable at best.

Caveat: ask each person to name someone they personally worked with.

Hard to get right, but not sure whether it's harder than knowledge investment.

Wouldn't have helped Louis XV. We might need infrastructure in place that would incentivize people to make themselves easy to find.

The idea of knowledge over money is romantic and one that, I agree, is at times the right long-run bet. I understand you're arguing for diminishing returns here, but the issue I've found is that some tiers of wealth *on average* give access to collectives of knowledge that meaningfully improve your ability to scale with lower effort (and thus, at equal effort, faster).

So, two years ago I quit my monetarily-lucrative job as a data scientist and have mostly focused on acquiring knowledge since then. I can worry about money if and when I know what to do with it.

Also, this knowledge only matters if you do something useful with it - which I'm convinced you will, for instance. Many other people are not able to create useful knowledge and thus may be better suited for earning-to-give.

Access to scientific discoveries is not given to those who invent them, but rather to those who buy them on the market from whoever commercializes the technology (probably not the inventor). If there were a known cure for cancer, the rich would have better access to it than those who know a lot about curing cancer. Personal knowledge does nothing for you in terms of getting access to those things in a capitalist society.

Things that money can't buy are by definition outside of the capitalist system. If you are investing in knowledge in order to get access to things, you sorta have to assume that you will hold your knowledge back from the public/market, which is basically unethical. 

There is obviously an issue with free-riding here where the public will gain access to "knowledge capital" that was built by others, and so people aren't incentivized to develop knowledge, and if they do they want to keep it private or at the very least charge a lot of money for it. 

The solution probably isn't to have more people who are independently wealthy quit their jobs and work in science for peanuts. It's an irrational decision by them because they are unlikely to personally develop any major breakthrough, whereas by making more money they could buy access to any breakthrough made by the community. It also displaces those that aren't rich from pursuing science as a career, and the rationale given here has perverse incentives.

Better to invest in scientists generally. Even if "spray and pray" is a bad idea, surely it's better than "invest only in one person who is also the person I'm most irrationally biased towards (myself)". 

Also, gaining "general-purpose knowledge capital" doesn't really help the situation. Progress is disproportionately driven by a few people that are highly knowledgeable and motivated in specific areas, along with being creative and/or lucky. Learning the basics of everything makes you a Renaissance man but isn't going to help with putting a man on the moon. Treating a cell like a capacitor is neat but probably misses a lot of details once you get into it. Digging deeply into a problem is necessary, but inherently risky as a lot of knowledge, unlike actual capital, turns out to be quite useless for practical considerations and you can't exchange it so easily. 

Nonsense among friends is not the problem here, clearly. It's nonsense let loose among hundreds of millions of people simultaneously. That's been a problem for every government since the beginning of government. And it's one the Chinese largely solved 2500 years ago and, thanks to which, have thrived ever since.

As John Stuart Mill[1] observed, “The Chinese are remarkable in the excellence of their apparatus for implanting, as far as possible, the best wisdom they have in every mind in the community”  and, Mill might have added, "Slowing the unconfined spread of nonsense”. That's the job of their Chief Censor, who is usually the country's leading public intellectual (as he is now). Imagine Noam Chomsky as media referee and you get the flavor of Chinese censorship. Young people, especially university students, find it constricting. Their parents say they understand the necessity of it and think it's well-managed. Their grandparents think that the government has gone to the dogs, permitting pornography, and ....

Official information has always been treasured in China, because those who heeded it prospered while those who did not languished - as they still do. Jack Ma will heed whatever advice he gets today because doing so has always been the smart way to bet. He's talking to geniuses, guys who are far, far smarter than he is, who are responsible for their country's next 25 years.

Senior officials practiced–and still practice–xuānchuán–propaganda, transforming the people through honorable behavior and instruction–and lectured on the Emperor’s Sacred Maxims while exemplifying them in daily life (as they still do):

Highly esteem filial piety and brotherly submission to give due weight to social relations.

Behave generously toward your family to promote harmony and peace.

Cultivate peace within the neighborhood to prevent quarrels and lawsuits.

Respect farming and the cultivation of mulberry trees to ensure sufficient clothing and food.

Be moderate and economical in order to avoid wasting away your livelihood.

Give weight to schools and academies in order to honor the scholar.

Wipe out strange beliefs to elevate the correct doctrine.

Elucidate the laws in order to warn the ignorant and obstinate.

Show propriety and tactful courtesy to elevate customs and manners.

Work diligently in your chosen callings to quiet your ambitions.

Instruct sons and younger brothers to keep them from doing wrong.

Hold back false accusations to safeguard the good and honest.

Warn against sheltering deserters lest you share their punishment.

Promptly and fully pay your taxes lest you need be pressed to pay them.

Join together in hundreds and tithings to end theft and robbery.

Free yourself from enmity and anger to show respect for your body and life.

__________________________________

[1] On Liberty. John Stuart Mill. 1859.

China values sharing certain types of knowledge, but it doesn't value sharing information about mistakes. Sharing information about mistakes is vital to building scientific knowledge and will be vital to achieving further goals such as curing cancer and fighting aging.

Instead of valuing scientific progress and the freedom it needs to talk about what doesn't work, the Chinese government pushes traditional Chinese medicine and got the WHO to adopt classifications about blocked Chi flow.

Even though China has plenty of smart people who are willing to work long hours, it won't be able to build the kind of scientific productivity that brought the West its wealth as long as it's not more open about talking about what goes wrong.

The Chinese are strong on propaganda; Americans are weak on teaching evolution in schools... how could we achieve a middle way, where established knowledge is neither "sacred" nor "just your opinion, man"?

I suppose in the West at least you can have a bubble that promotes good knowledge, and the theory is that in competition between bubbles, the good thoughts will prevail in the long term. Except... academia does not really work this way, at least when it comes to funding, does it? So maybe we are getting the worst of both systems here, where it's neither competition of ideas (as the tradition would suggest in the West), nor governance by highly educated experts (as the tradition would suggest in China), but rather decisions of bureaucrat-managers optimizing to do the safe thing and cover their asses. (I am probably exaggerating here a lot, dunno.)

There are parts of academia that work and parts that don't. At the moment, many parts of Western academia struggle with the replication crisis, and that struggle is about accepting that errors are made. The progress is not as fast as I would like, but in China this kind of looking at what went wrong is much harder.

Losing face is a big deal in China and it prevents analysis of what goes wrong.

What is the process for choosing the state censor, and the long term planning geniuses? How do these processes systematically select for very intelligent people with reliably correct beliefs?

This is why money printing by itself doesn't generate more wealth. It's human ingenuity, which comes from education/knowledge, that actually moves the needle and improves our living standards over time.