Perhaps the 1994 Crime Bill really did cause the drop in violent crime, or perhaps the causality goes the other way: the spike in violent crime motivated politicians to pass the act in the first place. (Note that the act was passed slightly after the violent crime rate peaked!)
In an ideal political system, wouldn't we hope both of these happen?
If your job is creating shoes from start to finish, it’s fairly straightforward to measure productivity: it’s merely the dollar value of the shoes you make each year based on market prices.
Isn't this completely wrong? The price will be something like the marginal value of a shoe, but most shoes are worth much more than the marginal shoe, due to diminishing marginal utility. The marginal shoe probably just replaces a kinda-broken-but-still-usable shoe or an outgrown shoe, and even that need could be met in other ways (going to a different shoemaker, getting hand-me-downs, or whatever), leading to very low marginal value despite high absolute value. This quantification thus ignores almost all the consumer surplus.
Furthermore, the value of a shoe overestimates the productivity of making it, because making it deprives society of the resources that went into it. This is probably faithfully enough reflected in the prices, but my model is that society usually works with extremely thin profit margins, so the bulk of the price will be due to the cost of production.
Or, to phrase it another way: this quantification assumes consumer surplus equals production cost (kind of like the labor theory of value), which I think is only true in a very narrow range of circumstances.
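To make this concrete, here's a minimal sketch with made-up numbers (the demand curve, price, and cost are all my own assumptions, not from the post): measuring productivity as price times quantity captures revenue, while the consumer surplus sitting above the price is invisible to it, and thin margins make profit a tiny slice of the price.

```python
# Toy numbers, all hypothetical: a linear demand curve for shoes, P(q) = a - b*q.
a, b = 200.0, 1.5   # demand intercept and slope ($/pair)
cost = 48.0         # production cost per pair
price = 50.0        # market price, barely above cost (thin margins)

q = (a - price) / b                # pairs sold at this price
revenue = price * q                # what the "dollar value of shoes" measure sees
profit = (price - cost) * q        # producer's slice
surplus = 0.5 * (a - price) * q    # area under the demand curve above the price

print(f"pairs sold:        {q:.0f}")          # 100
print(f"measured 'output': ${revenue:,.0f}")  # $5,000
print(f"producer profit:   ${profit:,.0f}")   # $200
print(f"consumer surplus:  ${surplus:,.0f}")  # $7,500, invisible to the measure
```

With these illustrative numbers the surplus exceeds the entire measured output and profit is a rounding error, which is the thin-margins point above.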
The meaning of my comment was: "your examples are very weak at proving the absence of cross-domain generalization".
I can buy that there's a sort of "trajectory of history" that makes use of all domains at once, I just think this is the opposite of what rationalist-empiricists are likely to focus on.
And if we are talking about me, right now I'm doing statistics, physics, and signal processing, which seem awfully generalizable.
This is precisely the position that I am referring to when I say "the assumption was that the world is mostly homogeneous". Like physics is generalizable if you think the nature of the world is matter. And you can use energy from the sun to decompose anything into matter, allowing you to command universal assent that everything is matter. But does that mean matter is everything? Does your physics knowledge tell you how to run a company? If not, why say it is "awfully generalizable"?
Thesis: there's a condition/trauma that arises from having spent a lot of time in an environment where there are excess resources for no reason, which can lead to several outcomes:
By contrast, if resources are contingent on a particular reason, everything takes shape according to said reason, and so one cannot make a general characterization of the outcomes.
Some results I got:
- P(doom) = 2%
- P(Blanchardianism) = 15%
- P(mesmerism) = 5%
- P(overregulation raises housing prices) = 35%
- P(Trump wins 2024) = 55%
- P(Dyson sphere) = 2%
- P(Eliezer Yudkowsky on AI) = 35%
It sounds cool, though intuitively temperature also seems like one of the easiest attributes to measure, because literally everything is kind of a thermometer in the sense that everything equilibrates in temperature. My prior mental image of inventing temperature is iteratively finding things that more and more consistently/cleanly reflect this universal equilibration tendency (see the toy sketch below).
Is this in accordance with how the book describes it, or would I be surprised when reading it? Like of course I'd expect some thermodynamic principles and distinctions to be developed along the way, but it seems conceptually very different from e.g. measuring neural networks where stuff is much more qualitatively distinct.
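For what it's worth, here's a toy relaxation model of that equilibration intuition (my own sketch, not anything from the book): bodies with arbitrary heat capacities in mutual contact all drift toward one capacity-weighted common temperature, which is the tendency that lets any object serve as a crude thermometer.

```python
import numpy as np

rng = np.random.default_rng(0)
temps = rng.uniform(0.0, 100.0, size=5)  # arbitrary starting temperatures
caps = rng.uniform(1.0, 10.0, size=5)    # arbitrary heat capacities
k, dt = 0.5, 0.01                        # coupling strength, time step

for _ in range(5000):
    # Relax every body toward the capacity-weighted mean temperature.
    # This conserves total heat (sum of caps * temps), but it's a cartoon,
    # not a faithful pairwise heat-exchange simulation.
    mean_T = np.average(temps, weights=caps)
    temps += dt * k * (mean_T - temps)

print(temps)  # all five converge to the same equilibrium temperature
```

Whatever you drop into the mix ends up at the shared temperature, so reading any one body reads them all.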
But I don't think you are doing space colonization. I'd guess you are doing reading/writing on social media, programming, grocery shopping, cooking, etc. And I think recursive self-improvement is supposed to work with no experience in space colonization.
And DON’T EVEN GET ME STARTED on people who think Wikipedia is an “Artificial Intelligence,”
With the invention of LLMs, this aged poorly. It turns out that most of the research that goes into developing artificial intelligence consists of cataloguing the world and writing it up on the internet.
Farming, law enforcement, war, legislation, chip fabbing, space colonization, cargo trucking, ...
When talking about top performance in highly specific domains, one should indeed use lots of domain-specific tricks. But in the grand scheme of things, the rule of "coherence + contact with the world" is extremely helpful; among other things, it allows one to derive all the specific tricks for all the different domains.
This assumes you have contact with all the different domains rather than just some of them, which you don't.
P(The scientific consensus on time switches from agreeing with Albert Einstein to agreeing with Henri Bergson) = 15%, P(The scientific consensus switches away from mechanistic thinking to agreeing with Henri Bergson on Elan Vital) = 5%.