# All of Ege Erdil's Comments + Replies

Ege Erdil's Shortform

It's not the logarithm of the BTC price that is a martingale, it's the BTC price itself, under a risk-neutral measure if that makes you more comfortable (since BTC derivatives would be priced by the same risk-neutral measure pricing BTC itself).

Ege Erdil's Shortform

Recently I saw that Hypermind is offering a prediction market on which threshold BTC will hit first: $40k or $60k? You can find the market on this list.

I find this funny because for this kind of question it's going to be a very good approximation to assume BTC is a martingale, and then the optional stopping theorem gives the answer to the question immediately: if BTC is currently priced at $40k < X < $60k, then the probability of hitting $40k first is ($60k - X)/$20k. Since BTC itself is going to be priced much more efficiently than this small volum... (read more)

4BackToBaseball18d ln(41.85/40) / ln(60/40) = 11.2%

What is a probabilistic physical theory?

You need "if the number on this device looks to me like the one predicted by theory, then the theory is right" just like you need "if I run a billion experiments and the frequency looks to me like the probability predicted by the theory, then the theory is right".

You can say that you're trying to solve a "downward modeling problem" when you try to link any kind of theory you have to the real world. The point of the question is that in some cases the solution to this problem is more clear to us than in others, and in the probabilistic case we seem to be using some... (read more)

1Signer1mo Hence it's a comment and not an answer^^. I don't get your examples: for a theory that predicts a phase transition to have information content in the desired sense, you would also need to specify a model map. What's the actual difference with the deterministic case? That the "solution is more clear"? I mean, it's probably just because of what happened to be implemented in brain hardware or something, and I didn't have the sense that that was what the question was about. Or is it about non-realist probabilistic theories not specifying what outcomes are impossible in the realist sense? Then I don't understand what's confusing about treating the probabilistic part normatively - that's just what being non-realist about probability means.

Why did Europe conquer the world?

> Thank you for the link.
> I'm curious what the table would look like if we examined the top 10 or 20 cities instead of just those tied for the top position.

I think this is quite a tall order for ancient times, but a source I've found useful is this video by Ollie Bye on YouTube. It's possible to move his estimates around by factors of 2 or so at various points, but I think they are correct when it comes to the order of magnitude of historical city populations.

> Who does "they" refer to in this sentence? It could mean two very different things.

Edited the... (read more)

Why did Europe conquer the world?

> I'd be happy to be corrected if I'm wrong. Do you have more precise numbers?

There's obviously quite a bit of uncertainty when it comes to ancient city populations, but Wikipedia has a nice aggregation of three different sources which list the largest city in the world at various times in history. Estimates of city populations can vary by a factor of 2 or more across different sources, but the overall picture is that sometimes the largest city in the world was Chinese and sometimes it was not. My reference point for technological regression after the fa... (read more)

3lsusr1mo Thank you for the link. I'm curious what the table would look like if we examined the top 10 or 20 cities instead of just those tied for the top position. Who does "they" refer to in this sentence? It could mean two very different things.

Why did Europe conquer the world?

This post seems to be riddled with inaccuracies and misleading statements. I'll just name a few here, since documenting all of them would take more time than I'm willing to spare.

> For most of history, China was the center of civilization. It had the biggest cities, the most complex government, the highest quality manufacturing, the most industrial capacity, the most advanced technology, the best historical records and the largest armies. It dominated East Asia at the center of an elaborate tribute system for a thousand years.

This is simply false.
China... (read more)

8lsusr1mo I'd be happy to be corrected if I'm wrong. Do you have more precise numbers? Roman concrete fell out of use after the fall of the Western Roman Empire. It is my impression that not many aqueducts were built either. My reference point for technological regression after the fall of the Western Roman Empire comes from science rather than technology. My understanding of the Renaissance (from reading Destiny Disrupted [https://www.lesswrong.com/posts/DRXW6CrHwkH4rfuGi/book-review-destiny-disrupted]) is that much of European philosophy (including science) only survived because it was preserved by the Arabic-speaking world. I agree. This is why Europeans choosing the terms of engagement was so important. They won when the Mughal and Qing empires were at their weakest.

What is a probabilistic physical theory?

> It's true that both of these outcomes have a small chance of not-happening. But with enough samples, the outcome can be treated for all intents and purposes as a certainty.

I agree with this in practice, but the question is philosophical in nature and this move doesn't really help you get past the "firewall" between probabilistic and non-probabilistic claims at all. If you don't already have a prior reason to care about probabilities, results like the law of large numbers or the central limit theorem can't convince you to care about it because they are a... (read more)

What is a probabilistic physical theory?

So I agree with most of what you say here, and as a Metaculus user I have some sympathy for trying to make proper scoring rules the epistemological basis of "probability-speak".
There are some problems with it, like different proper scoring rules give different incentives to people when it comes to distributing finite resources across many questions to acquire info about them, but broadly I think the norm of scoring models (or even individual forecasters) by their Brier score or log score and trying to maximize your own score is a good norm. There are proba... (read more)

5davidad14d I think it is not circular, though I can imagine why it seems so. Let me try to elaborate the order of operations as I see it.

1. Syntax: Accept that a probability-sentence like "P(there will be a sea-battle tomorrow) ≥ 0.4" is at least syntactically parseable, i.e. not gibberish, even if it is semantically disqualified from being true (like "the present King of France is a human").
   * This can be formalized as adding a new term-former P : ClassicalSentence → ProbabilityTerm, other term-formers such as + : ProbabilityTerm × ProbabilityTerm → ProbabilityTerm, constants C : Q → ProbabilityTerm, and finally a predicate ≥ 0 : ProbabilityTerm → ProbabilitySentence.
2. Logic: Accept that probability-sentences can be the premises and/or conclusions of valid deductions, such as P(A) ≥ 0.4, P(B∧A) ≥ 0.5·P(A) ⊢ P(B) ≥ 0.2.
   * Axiomatizing the valid deductions in a sound and complete way is not as easy as it may seem, because of the interaction with various expressive features one might want (native conditional probabilities, higher-order probabilities, polynomial inequalities) and model-theoretic and complexity-theoretic issues (pathological models, undecidable satisfiability).
Some contenders:

* LPWF [https://www.academia.edu/download/30763898/Sven_Hartmann_Foundations_of_Information_and_Kn.pdf#page=253], which has polynomial inequalities but not higher-order probabilities
* LCP [http://www.doiserbia.nb.rs/img/doi/0350-1302/2007/0350-13020796141O.pdf], which has higher-order conditional probabilities but not inequalities
* LPP2 [https://link.springer.com/chapter/10.1007%2F978-3-319-47012-2_3], which has neither, but has decidable satisfiability.

Anyway, the basic axioms about probability that we need for such logics are:

* P(α) ≥ 0
* P(⊤) = 1
* P(⊥) = 0
* P(α) + P(β) = P(α∨β) + P(α∧β)

What is a probabilistic physical theory?

> Negations of finitely observable predicates are typically not finitely observable. [0,0.5) is finitely observable as a subset of [0,1], because if the true value is in [0,0.5) then there necessarily exists a finite precision with which we can know that. But its negation, [0.5,1], is not finitely observable, because if the true value is exactly 0.5, no finite-precision measurement can establish with certainty that the value is in [0.5,1], even though it is.

Ah, I didn't realize that's what you mean by "finitely observable" - something like "if the propos... (read more)

5davidad1mo It's nice if the opens of X can be internalized as the continuous functions X→TV for some space of truth values TV with a distinguished point ⊤ such that x∈O ⇔ O(x)=⊤. For this, it is necessary (and sufficient) for the open sets of TV to be generated by {⊤}. I could instead ask for a distinguished point ⊥ such that x∉O ⇔ O(x)=⊥, and for this it is necessary and sufficient for the open sets of TV to be generated by TV∖{⊥}. Put them together, and you get that TV must be the Sierpiński space [https://en.wikipedia.org/wiki/Sierpi%C5%84ski_space]: a "true" result (⊤∈TV) is finitely observable ({⊤} is open), but a "false" result is not ({⊥} is not open). Yes, constructively we do not know a proposition until we find a proof.
If we find a proof, it is definitely true. If we do not find a proof, maybe it is false, or maybe we have not searched hard enough—we don't know.

Also related is that the Sierpiński space is the smallest model of intuitionistic propositional logic (with its topological semantics) that rejects LEM, and any classical tautology rejected by Sierpiński space is intuitionistically equivalent to LEM. There's a sense in which the difference between classical logic and intuitionistic logic is precisely the assumption that all open sets of possibility-space are clopen (which, if we further assume T0, leads to an ontology where possibility-space is necessarily discrete). (Of course it's not literally a theorem of classical logic that all open sets are clopen; this is a metatheoretic claim about semantic models, not about objects internal to either logic.) See A Semantic Hierarchy for Intuitionistic Logic [https://escholarship.org/content/qt2vp2x4rx/qt2vp2x4rx_noSplash_2bc40e4f9d71c7442df59051c9139bde.pdf?t=poxut0#page15].

What is a probabilistic physical theory?

> What I'm sneaking in is that both the σ-algebra structure and the topological structure on a scientifically meaningful space ought to be generated by the (finitely) observable predicates. In my experience, this prescription doesn't contradict with standard examples, and situations to which it's "difficult to generalize" feel confused and/or pathological until this is sorted out.

It's not clear to me how finitely observable predicates would generate a topology. For a sigma algebra it's straightforward to do the generation because they are closed under com... (read more)

4davidad1mo (I agree with your last paragraph—this thread is interesting but unfortunately beside the point since probabilistic theories are obviously trying to "say more" than just their merely nondeterministic shadows.)

Negations of finitely observable predicates are typically not finitely observable.
[0,0.5) is finitely observable as a subset of [0,1], because if the true value is in [0,0.5) then there necessarily exists a finite precision with which we can know that. But its negation, [0.5,1], is not finitely observable, because if the true value is exactly 0.5, no finite-precision measurement can establish with certainty that the value is in [0.5,1], even though it is.

The general case of why observables form a topology is more interesting. Finite intersections of finite observables are finitely observable because I can check each one in series and still need only finite observation in total. Countable unions of finite observables are finitely observable because I can check them in parallel, and if any are true then its check will succeed after only finite observation in total. Uncountable unions are thornier, but arguably unnecessary (they're redundant with countable unions if the space is hereditarily Lindelöf [https://en.wikipedia.org/wiki/Lindel%C3%B6f_space], for which being Polish is sufficient, or more generally second-countable), and can be accommodated by allowing the observer to hypercompute. This is very much beside the point, but if you are still interested anyway, check out Escardó's monograph on the topic [https://www.cs.bham.ac.uk/~mhe/papers/entcs87.pdf#page15].

What is a probabilistic physical theory?

I think you can justify probability assessments in some situations using Dutch book style arguments combined with the situation itself having some kind of symmetry which the measure must be invariant under, but this kind of argument doesn't generalize to any kind of messy real world situation in which you have to make a forecast on something, and it still doesn't give some "physical interpretation" to the probabilities beyond "if you make bets then your odds have to form a probability measure, and they better respect the symmetries of the physical theory y...
(read more)

1tivelen1mo Perhaps such probabilities are based on intuition, and happen to be roughly accurate because the intuition has formed as a causal result of factors influencing the event? In order to be explicitly justified, one would need an explicit justification of intuition, or at least intuition within the field of knowledge in question. I would say that such intuitions in many fields are too error-prone to justify any kind of accurate probability assessment. My personal answer then would be to discard probability assessments that cannot be justified, unless you have sufficient trust in your intuition about the statement in question. What is your thinking on this prong of the dilemma (retracting your assessment of reasonableness on these probability assessments for which you have no justification)?

What is a probabilistic physical theory?

> Believing in the probabilistic theory of quantum mechanics means we expect to see the same distribution of photon hits in real life.

No it doesn't! That's the whole point of my question. "Believing the probabilistic theory of quantum mechanics" means you expect to see the same distribution of photon hits with a very high probability (say ), but if you have not justified what the connection of probabilities to real world outcomes is to begin with, that doesn't help us. Probabilistic claims just form a closed graph of reference in which they only refer... (read more)

1DaemonicSigil1mo Okay, thanks for clarifying the question. If I gave you the following answer, would you say that it counts as a connection to real-world outcomes? The real world outcome is that I run a double slit experiment with a billion photons, and plot the hit locations in a histogram. The heights of the bars of the graph closely match the probability distribution I previously calculated. What about 1-time events, each corresponding to a totally unique physical situation? Simple.
For each 1-time event, I bet a small amount of money on the result, at odds at least as good as the odds my theory gives for that result. The real world outcome is that after betting on many such events, I've ended up making a profit. It's true that both of these outcomes have a small chance of not-happening. But with enough samples, the outcome can be treated for all intents and purposes as a certainty. I explained above why the "continuous distribution" objection to this doesn't hold.

What is a probabilistic physical theory?

> You spend a few paragraphs puzzling about how a probabilistic theory could be falsified. As you say, observing an event in a null set or a meagre set does not do the trick. But observing an event which is disjoint from the support of the theory's measure does falsify it. Support is a very deep concept; see this category-theoretic treatise that builds up to it.

You can add that as an additional axiom to some theory, sure. It's not clear to me why that is the correct notion to have, especially since you're adding some extra information about the topology o... (read more)

5davidad1mo Okay, I now think both of my guesses about what's really being asked were misses. Maybe I will try again with a new answer; meanwhile, I'll respond to your points here. You're right that I'm sneaking something in when invoking support because it depends on the sample space having a topological structure, which cannot typically be extracted from just a measurable structure. What I'm sneaking in is that both the σ-algebra structure and the topological structure on a scientifically meaningful space ought to be generated by the (finitely) observable predicates. In my experience, this prescription doesn't contradict with standard examples, and situations to which it's "difficult to generalize" feel confused and/or pathological until this is sorted out.
So, in a sense I'm saying, you're right that a probability space (X,Σ,P) by itself doesn't connect to reality—because it lacks the information about which events in Σ are opens.

As to why I privilege null sets over meagre sets: null sets are those to which the probability measure assigns zero value, while meagre sets are independent of the probability measure—the question of which sets are meagre is determined entirely by the topology. If the space is Polish (or more generally, any Baire space), then meagre sets are never inhabited open sets, so they can never conceivably be observations, therefore they can't be used to falsify a theory.

But, given that I endorse sneaking in a topology, I feel obligated to examine meagre sets from the same point of view, i.e. treating the topology as a statement about which predicates are finitely observable, and see what role meagre sets then have in philosophy of science. Meagre sets are not the simplest concept; the best way I've found to do this is via the characterization of meagre sets with the Banach–Mazur game [https://en.wikipedia.org/wiki/Banach%E2%80%93Mazur_game]:

* Suppose Alice is trying to claim a predicate X is true about the world, and Bob is trying to claim it isn

What is a probabilistic physical theory?

Here I'm using "Bayesian" as an adjective which refers to a particular interpretation of the probability calculus, namely one where agents have credences about an event and they are supposed to set those credences equal to the "physical probabilities" coming from the theory and then make decisions according to that. It's not the mere acceptance of Bayes' rule that makes someone a Bayesian - Bayes' rule is a theorem so no matter how you interpret the probability calculus you're going to believe in it. With this sense of "Bayesian", the epistemic content adde...
(read more)

1JBlack1mo The use of the word "Bayesian" here means that you treat credences according to the same mathematical rules as probabilities, including the use of Bayes' rule. That's all.

What is a probabilistic physical theory?

The question is about the apparently epiphenomenal status of the probability measure and how to reconcile that with the probability measure actually adding information content to the theory. This answer is obviously "true", but it doesn't actually address my question.

What is a probabilistic physical theory?

This is not true. You can have a model of thermodynamics that is statistical in nature and so has this property, but thermodynamics itself doesn't tell you what entropy is, and the second law is formulated deterministically.

What is a probabilistic physical theory?

> As I see it, probability is essentially just a measure of our ignorance, or the ignorance of any model that's used to make predictions. An event with a probability of 0.5 implies that in half of all situations where I have information indistinguishable from the information I have now, this event will occur; in the other half of all such indistinguishable situations, it won't happen.

Here I think you're mixing two different approaches. One is the Bayesian approach: it comes down to saying probabilistic theories are normative. The question is how to reconc... (read more)

What is a probabilistic physical theory?

I don't know what you mean here. One of my goals is to get a better answer to this question than what I'm currently able to give, so by definition getting such an answer would "help me achieve my goals". If you mean something less trivial than that, well, it also doesn't help me to achieve my goals to know if the Riemann hypothesis is true or false, but RH is nevertheless one of the most interesting questions I know of and definitely worth wondering about. I can't know how an answer I don't know about would impact my beliefs or behavior, but my guess is tha...
(read more)

1tivelen1mo My approach was not helpful at all, which I can clearly see now. I'll take another stab at your question. You think it is reasonable to assign probabilities, but you also cannot explain how you do so or justify it. You are looking for such an explanation or justification, so that your assessment of reasonableness is backed by actual reason. Are you unable to justify any probability assessments at all? Or is there some specific subset that you're having trouble with? Or have I failed to understand your question properly?

Retail Investor Advantages

To elaborate on the information acquisition cost point: small pieces of information won't be worth tying up a big amount of capital for. If you have a company worth $1 billion and you have very good insider info that a project of theirs that the market implicitly values at $10 million is going to flop, if the only way you can express that opinion is to short the stock of the whole company, that's likely not even worth it. Even with 10% margin you'd be at best making a 10% return on capital over the time horizon that the market figures out the project is bad... (read more)

Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists

Ah, I see. I missed that part of the post for some reason.

In this setup the update you're doing is fine, but I think measuring the evidence for the hypothesis in terms of "bits" can still mislead people here. You've tuned your example so that the likelihood ratio is equal to two and there are only two possible outcomes, while in general there's no reason for those two values to be equal.
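To make the bits framing concrete, here's a minimal sketch of the update (the helper function is mine, not from the post): a likelihood ratio of L carries log2(L) bits of evidence, and the posterior follows from multiplying the prior odds by L.

```python
import math

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

lr = 2.0
print(math.log2(lr))       # 1.0 bit of evidence
print(posterior(0.5, lr))  # 0.666...: posterior 2/3 from an even prior
```

The coincidence in the post is that the reporting algorithm has likelihood ratio exactly 2, so each reported flip is worth exactly 1 bit; a ratio of, say, 3 would be log2(3) ≈ 1.58 bits even though there are still only two hypotheses.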

Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists

This is a rather pedantic remark that doesn't have much relevance to the primary content of the post (EDIT: it's also based on a misunderstanding of what the post is actually doing - I missed that an explicit prior is specified which invalidates the concern raised here), but

If such a coin is flipped ten times by someone who doesn't make literally false statements, who then reports that the 4th, 6th, and 9th flips came up Heads, then the update to our beliefs about the coin depends on what algorithm the not-lying[1] reporter used to decide to report those

5Zack_M_Davis2mo Thanks for this analysis! However— I'm not. The post specifies "a coin that is either biased to land Heads 2/3rds of the time, or Tails 2/3rds of the time"—that is (and maybe I should have been more explicit), I'm saying our prior belief about the coin's bias is just the discrete distribution {"1/3 Heads, 2/3 Tails": 0.5, "2/3 Heads, 1/3 Tails": 0.5}. I agree that a beta prior would be more "realistic" in the sense of applying to a wider range of scenarios (your uncertainty about a parameter is usually continuous, rather than "it's either this, or it's that, with equal probability"), but I wanted to make the math easy on myself and my readers.
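Under that discrete prior the whole update depends only on the excess of Heads over Tails, since each Heads multiplies the odds by ((2/3)/(1/3)) = 2 and each Tails halves them. A quick sketch (function name mine):

```python
def p_heads_biased(h: int, n: int) -> float:
    """P(coin is the 2/3-Heads one | h heads in n flips), with prior 0.5/0.5
    on {1/3 Heads, 2/3 Heads}. Each Heads doubles the odds, each Tails
    halves them, so only the excess h - (n - h) matters."""
    odds = 2.0 ** (h - (n - h))
    return odds / (1 + odds)

print(p_heads_biased(3, 3))   # 8/9: three net Heads = 3 bits of evidence
print(p_heads_biased(5, 10))  # 0.5: balanced flips leave the prior unchanged
```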
Laplace's rule of succession

Yeah, Neyman's proof of Laplace's version of the rule of succession is nice. The reason I think this kind of approach can't give the full strength of the conjugate prior approach is that I think there's a kind of "irreducible complexity" to computing the integral ∫_0^1 x^a (1−x)^b dx for non-integer values of a and b. The only easy proof I know goes through the connection to the gamma function. If you stick only to integer values there are easier ways of doing the computation, and the linearity of expectation argument given by Neyman is one way to do it.

One concrete example of the ru... (read more)
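For the integer case the rule itself is easy to check numerically without any gamma-function machinery. A sketch with a crude midpoint-rule integral (helper names mine):

```python
def beta_integral(a: float, b: float, steps: int = 100_000) -> float:
    """Midpoint-rule approximation of the integral of x^a * (1-x)^b over [0, 1]."""
    h = 1.0 / steps
    return h * sum(((i + 0.5) * h) ** a * (1 - (i + 0.5) * h) ** b
                   for i in range(steps))

def rule_of_succession(s: int, n: int) -> float:
    """P(next trial succeeds | s successes in n trials), uniform prior on the rate."""
    return beta_integral(s + 1, n - s) / beta_integral(s, n - s)

print(rule_of_succession(9, 10))  # ≈ (9 + 1) / (10 + 2) ≈ 0.8333
```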

1. What matters is that it's something you can invest in. Choosing the S&P 500 is not really that important in particular. There doesn't have to be a single company whose stock is perfectly correlated with the S&P 500 (though nowadays we have ETFs which more or less serve this purpose) - you can simply create your own value-weighted stock index and rebalance it on a daily or weekly basis to adjust for the changing weights over time, and nothing will change about the main arguments. This is actually what the authors

Over 20 years that's possible (and I think it's in fact true), but the paper I cite in the post gives some data which makes it unlikely that the whole past record is outperformance. It's hard to square 150 years of over 6% mean annual equity premium with 20% annual standard deviation with the idea that the true stock return is actually the same as the return on T-bills. The "true" premium might be lower than 6% but not by too much, and we're still left with more or less the same puzzle even if we assume that.
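A back-of-envelope version of that argument, treating annual premia as i.i.d. with the numbers from the paragraph above:

```python
import math

mean_premium = 0.06  # observed mean annual equity premium
annual_sd = 0.20     # annual standard deviation of returns
years = 150

se = annual_sd / math.sqrt(years)          # standard error of the 150-year mean
z = mean_premium / se                      # sigmas away from a zero true premium
p_value = math.erfc(z / math.sqrt(2)) / 2  # one-sided Gaussian tail probability
print(se, z, p_value)  # ≈ 0.0163, ≈ 3.67, ≈ 1e-4
```

So a zero true premium would make the observed record a roughly 3.7-sigma fluke, which is why the puzzle survives even if the true premium is somewhat below 6%.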

Average probabilities, not log odds

That's alright, it's partly on me for not being clear enough in my original comment.

I think information aggregation from different experts is in general a nontrivial and context-dependent problem. If you're trying to actually add up different forecasts to obtain some composite result it's probably better to average probabilities; but aside from my toy model in the original comment, "field data" from Metaculus also backs up the idea that on single binary questions the median forecast or the log-odds average consistently beats the probability average.

I agree with Simo... (read more)
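For concreteness, here's how the pooling methods differ on a made-up set of forecasts; averaging log odds is the same as taking the geometric mean of the odds.

```python
import math
from statistics import median

def logit(p):   return math.log(p / (1 - p))
def sigmoid(x): return 1 / (1 + math.exp(-x))

def mean_prob(ps):     return sum(ps) / len(ps)
def mean_log_odds(ps): return sigmoid(sum(map(logit, ps)) / len(ps))

forecasts = [0.05, 0.10, 0.90]  # two skeptics, one believer (made-up numbers)
print(mean_prob(forecasts))      # 0.35
print(median(forecasts))         # 0.1
print(mean_log_odds(forecasts))  # ≈ 0.27
```

The probability average is dragged far toward the outlier, while the median and the log-odds average stay closer to the majority view; which behavior is desirable is exactly what the disagreement is about.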

Average probabilities, not log odds

I don't know what you're talking about here. You don't need any nonlinear functions to recover the probability. The probability implied by M(T) is just M(T), and the probability you should forecast having seen M(X) is therefore

E[M(T) | M(X)] = M(X)

since M is a martingale.

I think you don't really understand what my example is doing. M is not a Brownian motion and its increments are not Gaussian; it's a nonlinear transform of a drift-diffusion process by a sigmoid which takes valu... (read more)

5AlexMennen2mo Oh, you're right, sorry; I'd misinterpreted you as saying that M represented the log odds. What you actually did was far more sensible than that.

Thanks for the comment - I'm glad people don't take what I said at face value, since it's often not correct...

What I actually maximized is (something like, though not quite) the expected value of the logarithm of the return, i.e. what you'd do if you used the Kelly criterion. This is the correct way to maximize long-run expected returns, but it's not the same thing as maximizing expected returns over any given time horizon.

My computation of E[(Δ log S)²] is correct, but the problem comes in elsewhere. Obviously if your goal is to just maximize ex... (read more)
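The distinction can be seen in a small simulation (parameters made up): betting a fixed fraction of your bankroll on a repeated even-odds bet that wins with probability p = 0.6, the Kelly fraction is f* = 2p - 1 = 0.2, while maximizing expected final wealth would push you to bet everything every round.

```python
import random

random.seed(0)

def final_wealth(f: float, p: float = 0.6, rounds: int = 10) -> float:
    """Wealth after repeatedly betting a fraction f of bankroll at even odds."""
    w = 1.0
    for _ in range(rounds):
        w *= 1 + f if random.random() < p else 1 - f
    return w

results = {}
for f in (0.2, 1.0):  # Kelly fraction vs. all-in
    sims = [final_wealth(f) for _ in range(10_000)]
    results[f] = (sum(sims) / len(sims), sorted(sims)[len(sims) // 2])
    print(f, results[f])  # (mean wealth, median wealth)

# All-in has the higher *mean* wealth, driven by the rare run of 10 straight
# wins, but its median outcome is total ruin; the Kelly fraction maximizes
# typical (log) growth instead.
```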

Average probabilities, not log odds

The experts in my model are designed to be perfectly calibrated. What do you mean by "they are overconfident"?

2AlexMennen2mo The probability of the event is the expected value of the probability implied by M(T). The experts report M(X) for a random variable X sampled uniformly in [0,T]. M(T) differs from M(X) by a Gaussian of mean 0, and hence, knowing M(X), the expected value of M(T) is just M(X). But we want the expected value of the probability implied by M(T), which is different from the probability implied by the expected value of M(T), because expected value does not commute with nonlinear functions. So an expert reporting the probability implied by M(X) is not well-calibrated, even though an expert reporting M(X) is giving an unbiased estimate of M(T).
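The nonlinearity point is easy to verify numerically. A sketch with made-up numbers: take a current log-odds value with Gaussian uncertainty remaining, and compare the probability implied by the expected log odds with the expected implied probability.

```python
import math, random

random.seed(1)

def sigmoid(x): return 1 / (1 + math.exp(-x))

log_odds = 1.0  # current log odds (hypothetical)
sd = 2.0        # remaining Gaussian uncertainty in the log odds

samples = [sigmoid(random.gauss(log_odds, sd)) for _ in range(100_000)]
print(sigmoid(log_odds))            # ≈ 0.73, probability implied by E[log odds]
print(sum(samples) / len(samples))  # ≈ 0.65, E[implied probability], pulled toward 1/2
```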
Average probabilities, not log odds

I did a Monte Carlo simulation for this on my own whose Python script you can find on Pastebin.

Consider the following model: there is a bounded martingale M taking values in [0,1] and with initial value 1/2. The exact process I considered was a Brownian motion-like model for the log odds combined with some bias coming from Ito's lemma to make the sigmoid-transformed process into a martingale. This process goes on until some time T and then the event is resolved according to the probability implied by M(T). You have n "experts"... (read more)
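Since the Pastebin script isn't reproduced here, the following is my own sketch of that process, not the original code: Euler–Maruyama on the log odds L, with the Ito drift correction that makes M = sigmoid(L) a martingale (for dL = mu dt + s dW, requiring sigma'(L) mu + (s^2/2) sigma''(L) = 0 gives mu = -(s^2/2) (1 - 2M)).

```python
import math, random

random.seed(2)

def sigmoid(x): return 1 / (1 + math.exp(-x))

def simulate_m(T: float = 1.0, steps: int = 1000, vol: float = 3.0) -> list:
    """One path of M = sigmoid(L): drift-diffusion on the log odds L, with the
    Ito correction drift -(vol**2 / 2) * (1 - 2 * M) making M a martingale."""
    dt = T / steps
    L = 0.0  # start at even odds, M(0) = 1/2
    path = [sigmoid(L)]
    for _ in range(steps):
        m = sigmoid(L)
        L += -0.5 * vol**2 * (1 - 2 * m) * dt + vol * math.sqrt(dt) * random.gauss(0, 1)
        path.append(sigmoid(L))
    return path

# Sanity check of the martingale property: E[M(T)] stays near M(0) = 1/2.
terminal = [simulate_m()[-1] for _ in range(2000)]
print(sum(terminal) / len(terminal))  # ≈ 0.5 up to Monte Carlo error
```

Experts can then be modeled as reporting M at times drawn uniformly from [0, T], with the event resolved as a Bernoulli draw with probability M(T).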

2AlexMennen2mo Nope! If n=1, then you do know which expert has the most information, and you don't do best by copying his forecast, because the experts in your model are overconfident. See my reply to ADifferentAnonymous [https://www.lesswrong.com/posts/b2jH8GqNhoE5vguni/average-probabilities-not-log-odds?commentId=evLsxypzBa4kNHmEt]. But well-done constructing a model in which average log odds outperforms average probabilities for compelling reasons.

NOTE: Don't believe everything I said in this comment! I elaborate on some of the problems with it in the responses, but I'm leaving this original comment up because I think it's instructive even though it's not correct.

There is a theoretical account for why portfolios leveraged beyond a certain point would have poor returns even if prices follow a random process with (almost surely) continuous sample paths: leverage decay. If you could continuously rebalance a leveraged portfolio this would not be an issue, but if you can't do that then leverage exhibits ... (read more)
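The size of the effect is easy to see in simulation. A sketch with made-up parameters: for a k-leveraged portfolio rebalanced every step on an underlying with drift mu and volatility sigma, the long-run log growth rate is approximately k*mu - k^2*sigma^2/2, maximized at k* = mu/sigma^2 rather than at maximal leverage.

```python
import math, random

random.seed(3)
MU, SIGMA = 0.05, 0.20  # made-up drift and volatility of the underlying

def log_growth(k: float, years: int = 2000, steps_per_year: int = 252) -> float:
    """Average yearly log growth of a k-leveraged, per-step-rebalanced portfolio."""
    dt = 1 / steps_per_year
    log_w = 0.0
    for _ in range(years * steps_per_year):
        r = MU * dt + SIGMA * math.sqrt(dt) * random.gauss(0, 1)  # underlying step return
        log_w += math.log(1 + k * r)  # rebalancing resets exposure to k each step
    return log_w / years

# Theory: growth ≈ k*MU - k**2 * SIGMA**2 / 2, maximized at k* = MU / SIGMA**2 = 1.25.
results = {k: log_growth(k) for k in (0.5, 1.25, 3.0)}
for k, g in results.items():
    print(k, round(g, 4), round(k * MU - k**2 * SIGMA**2 / 2, 4))
```

Note this is about log growth; as the reply below argues, *expected* (arithmetic) return is still monotone in k, which is exactly the tension being discussed.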

4paulfchristiano2mo I didn't follow the math (calculus with stochastic processes is pretty confusing) but something seems obviously wrong here. I think probably your calculation of E[(Δ log S)²] is wrong? Maybe I'm confused, but in addition to common sense and having done the calculation in other ways, the following argument seems pretty solid:

* Regardless of k, if you consider a short enough period of time, then with overwhelming probability at all times your total assets will be between 0.999 and 1.001.
* So no matter how I choose to rebalance, at all times my total exposure will be between 0.999k and 1.001k.
* And if my exposure is between 0.999k and 1.001k, then my expected returns over any time period T are between 0.999kTμ and 1.001kTμ. (Where μ is the expected return of the underlying; maybe that's different from your μ but it's definitely just some number.)
* So regardless of how I rebalance, doubling k approximately doubles my expected returns.
* So clearly for short enough time periods your equation for the optimum can't be right.
* But actually maximizing EV over a long time period is equivalent to maximizing it over each short time period (since final wealth is just linear in your wealth at the end of the initial short period), so the optimum over arbitrary time periods is also to max leverage.
What Do GDP Growth Curves Really Mean?

I think there's some kind of miscommunication going on here, because I think what you're saying is trivially wrong while you seem convinced that it's correct despite knowing about my point of view.

No it doesn't. It weighs them by price (i.e. marginal utility = production opportunity cost) at the quantities consumed. That is not a good proxy for how important they actually were to consumers.

Yes it is - on the margin. You can't hope for it to be globally good because of the argument I gave, but locally of course you can, that's what marginal utility means! T... (read more)

Petrov Day Retrospective: 2021

Strong upvote for the comment. I think the situation is even worse than what you say: the fact is that had Petrov simply reported the inaccurate information in his possession up the chain of command as he was being pressured to do by his own subordinates, nobody would have heard of his name and nobody would have blamed him for doing his job. He could have even informed his superiors of his personal opinion that the information he was passing to them was inaccurate and left them to make the final decision about what to do. Not only would he have not been bl... (read more)

What Do GDP Growth Curves Really Mean?

The reason I bring up the weighting of GDP growth is that there are some "revolutions" which are irrelevant and some "revolutions" which are relevant from whatever perspective you're judging "craziness". In particular, it's absurd to think that the year 2058 will be crazy because suddenly people will be able to drink wine manufactured in the year 2058 at a low cost.

Consider this claim from your post:

When we see slow, mostly-steady real GDP growth curves, that mostly tells us about the slow and steady increase in production of things which haven’t been revo

2johnswentworth3moNo it doesn't. It weighs them by price (i.e. marginal utility = production opportunity cost) at the quantities consumed. That is not a good proxy for how important they actually were to consumers.

I'm mostly operationalizing "revolution" as a big drop in production cost. I think the wine example is conflating two different "prices": the consumer's marginal utility, and the opportunity cost to produce the wine. The latter is at least extremely large, and plausibly infinite, but the former is not. If we actually somehow obtained a pallet of 2058 wine today, it would be quite a novelty, but it would sell at auction for a decidedly non-infinite price. (And if people realized how quickly its value would depreciate, it might even sell for a relatively low price, assuming there were enough supply to satisfy a few rich novelty-buyers.)

The two prices are not currently equal because production has hit its lower bound (i.e. zero). More generally, there are lots of things which would be expensive to produce today, will likely be cheap to produce in the future, but don't create all that much value. We just don't produce any of them.

To think properly about how crazy the future would be, we need to think about the consumer's perspective, not the production cost. A technological revolution does typically involve a big drop in production cost. Note, however, that this does not necessarily mean a big drop in marginal utility.

Now, I do think there's still a core point of your argument which survives: The thing it tells us is that the huge revolution in electronics produced goods whose marginal utility is low at current consumption levels/production levels. When I say "real GDP growth curves mostly tell us about the slow and steady increase in production of things which haven’t been revolutionized", I mean something orthogonal to that.
I mean that the real GDP growth curve looks almost-the-same in a world without a big electronics revolution as it does in a world with a big ele
What Do GDP Growth Curves Really Mean?

In addition, I'm confused about how you can agree with both my comment and your post at the same time. You explicitly say, for example, that

Also, "GDP (as it's actually calculated) measures production growth in the least-revolutionized goods" still seems like basically the right intuitive model over long times and large changes, and the "takeaways" in the post still seem correct.

but this is not what GDP does. In the toy model I gave, real GDP growth perfectly captures increases in utility; and in other models where it fails to do so the problem is not that... (read more)

2johnswentworth3moThe main takeaways in the post generally do not assume we're thinking of GDP as a proxy for utility/consumer value. In particular, I strongly agree with:

It remains basically true that goods whose price does not drop end up much more heavily weighted in GDP.

Whether or not this weighting is "correct" (for purposes of using GDP as a proxy for consumer value) isn't especially relevant to how true the claim is, though it may be relevant to how interesting one finds the claim, depending on one's intended purpose. To the extent that we should stop using GDP as a proxy for consumer value, the question of "should a proxy for consumer value more heavily weight goods whose price does not drop?" just isn't that relevant. The interesting question is not what a proxy for consumer value should do, but rather what GDP does do, and what that tells us.
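The claim that goods whose price does not drop end up dominating measured real GDP growth can be seen in a two-good toy model (all numbers below are made up purely for illustration): one good has a stable price and 2%/yr quantity growth; the other is "revolutionized", with its price halving and its quantity growing 50% every year, starting from a 5% share of spending. Chained (Fisher) real GDP growth then comes out close to the boring good's 2%:

```python
import math

def annualized_real_growth(years=30):
    """Chained Fisher quantity index for a two-good economy.
    Good A: stable price, 2%/yr quantity growth (95% of initial spending).
    Good B: price halves each year, quantity grows 50%/yr (5% of spending).
    All numbers are illustrative assumptions."""
    pA, qA = 1.0, 0.95
    pB, qB = 1.0, 0.05
    index = 1.0
    for _ in range(years):
        qA2, qB2 = qA * 1.02, qB * 1.50
        pA2, pB2 = pA * 1.00, pB * 0.50
        # Laspeyres (old prices) and Paasche (new prices) quantity relatives
        laspeyres = (pA * qA2 + pB * qB2) / (pA * qA + pB * qB)
        paasche = (pA2 * qA2 + pB2 * qB2) / (pA2 * qA + pB2 * qB)
        index *= math.sqrt(laspeyres * paasche)  # Fisher: geometric mean
        pA, qA, pB, qB = pA2, qA2, pB2, qB2
    return index ** (1.0 / years) - 1.0
```

Even though good B's quantity grows by a factor of about 1.5^30 ≈ 190,000 over the 30 years, its collapsing price shrinks its spending share toward zero, so the annualized chained growth rate stays in the low single digits: the index ends up dominated by the good that wasn't revolutionized.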
What Do GDP Growth Curves Really Mean?

I think in this case omitting the discussion about equivalence under monotonic transformations leads people in the direction of macroeconomic alchemy - they try to squeeze information about welfare from relative prices and quantities even though that's actually impossible.

The correct way to think about this is probably to use von Neumann's approach to expected utility: pick three times in history, say $t_1 < t_2 < t_3$; assume that $u(t_1) < u(t_2) < u(t_3)$, where $u(t)$ is the utility of living around time $t$, and ask people fo... (read more)
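The truncated elicitation presumably asks for an indifference probability: with three times $t_1 < t_2 < t_3$, the probability $p$ at which someone is indifferent between living at $t_2$ for sure and a lottery giving $t_3$ with probability $p$ (else $t_1$) pins down $u(t_2)$ on the scale fixed by $u(t_1)$ and $u(t_3)$. A minimal sketch (the function name and the default normalization are my own):

```python
def elicit_u2(p, u1=0.0, u3=1.0):
    """Utility of the middle time t2, given indifference between
    t2-for-sure and a lottery: t3 with probability p, t1 otherwise.
    Expected-utility consistency forces u2 = (1 - p)*u1 + p*u3."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a probability")
    return (1.0 - p) * u1 + p * u3
```

Under the normalization $u(t_1)=0$, $u(t_3)=1$, the elicited indifference probability simply *is* the middle utility (e.g. `elicit_u2(0.8)` gives 0.8), which is the kind of cardinal information that relative prices and quantities alone cannot supply.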

What Do GDP Growth Curves Really Mean?

There is a standard reason why real GDP growth is defined the way it is: it works locally in time, and that's really the best you can ask for from this kind of measure. If you have an agent with utility function $U(x_1, \ldots, x_n)$ defined over $n$ goods with no explicit time dependence, you can express the derivative of utility with respect to time as

$$ \frac{dU}{dt} = \sum_{i=1}^n \frac{\partial U}{\partial x_i} \frac{dx_i}{dt}. $$

If you divide both sides by the marginal utility of some good taken as the numeraire, say the first one, then you get

$$ \frac{1}{\partial U / \partial x_1} \frac{dU}{dt} = \sum_{i=1}^n p_i \frac{dx_i}{dt}, $$

where $p_i = \dfrac{\partial U / \partial x_i}{\partial U / \partial x_1}$ is the pri... (read more)
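The local identity Ege describes (price-weighted quantity growth equals utility growth measured in units of the numeraire good) can be verified numerically for any particular utility function. A sketch using a Cobb-Douglas utility and a made-up consumption path, both purely illustrative:

```python
def check_divisia(a=0.3, eps=1e-6, t=1.0):
    """Check that (dU/dt) / (dU/dx1) equals sum_i p_i * dx_i/dt,
    where p_i is the price of good i with good 1 as numeraire.
    The Cobb-Douglas utility and the consumption path are illustrative."""
    U = lambda x1, x2: x1 ** a * x2 ** (1 - a)
    x1 = lambda s: 1.0 + 0.5 * s   # good 1 consumption over time
    x2 = lambda s: 2.0 + 0.2 * s   # good 2 consumption over time

    # marginal utilities at time t (forward differences)
    base = U(x1(t), x2(t))
    u1 = (U(x1(t) + eps, x2(t)) - base) / eps
    u2 = (U(x1(t), x2(t) + eps) - base) / eps

    # relative prices with good 1 as numeraire: p_i = u_i / u_1
    p1, p2 = 1.0, u2 / u1

    # left side: utility growth in units of good 1
    dUdt = (U(x1(t + eps), x2(t + eps)) - base) / eps
    lhs = dUdt / u1

    # right side: price-weighted quantity growth (the real-GDP-growth form)
    rhs = p1 * 0.5 + p2 * 0.2
    return lhs, rhs
```

Both sides agree up to finite-difference error, which is the sense in which real GDP growth "works locally in time": at the consumed bundle, prices are exactly the right weights.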

6johnswentworth3moI was hoping somebody would write a comment like this. I didn't want to put a technical primer in the post (since it's aimed at a nontechnical audience), but I'm glad it's here, and I basically agree with the content.