Comments

I view it as highly unlikely (<10%) that Putin would accept "Vietnam" without first going nuclear, because it would almost certainly result in him being overthrown and jailed or killed.

Much of the analysis hinges on this, so I think it needs to be thought through more deeply. I would argue that the odds of Putin "being overthrown and jailed or killed" are higher if he gives the order to use nukes than if he accepts "Vietnam".

The NATO response to nukes would be catastrophic. Any remaining support from China/India would disappear. Further, the war is becoming less popular within Russia. Escalating to nukes, and the possibility of all-out war, weakens Putin's position both internally and externally.

My guess is that withdrawal would also be met with a certain degree of "relief" from a significant portion of the Russian population.

There is a long history of Goliaths accepting and surviving embarrassing defeats. The level of control Putin exerts internally makes it more likely he would survive and spin "Vietnam" into something not too embarrassing for him personally, instead pinning the blame on an incompetent and corrupt military. Much of the Russian news media is already taking this approach.

"end of the world" images make me wonder if Dall-E thinks the world is flat

Interesting. Though I think extremes represent fewer degrees of freedom: certain traits/characteristics dominate, so heuristics can better model behaviour. The "typical" person has all the different traits pushing and pulling, leaving fewer variables you can ignore - i.e. the typical person might be more representative of hard mode.

I think identifying the blind spots of the typical AI engineer/architect is an interesting and potentially important goal. Though I'm not sure I follow the reasoning behind identifying the opposite as the path to "modeling the desires of the typical person".

I think investigating this would be of interest to people working in AI alignment whose ultimate goal is improving the condition of humanity in general. Understanding the needs and wants of the subset of humans most unlike themselves would likely help in modeling the desires of the typical person.

Isn't that better and more easily accomplished by identifying the median person, i.e. asking in what way the typical AI engineer differs from the general population, and adjusting for that?

Alternatively, one could look for what is complementary to autism rather than the opposite of autism, assuming those are not necessarily the same: people who may be attracted to, and good at, roles/professions like people management, team sports, therapy, etc.

So trying to see what effect immigration has on inflation is fundamentally misguided - if immigration increases supply, which one might think would reduce prices, it's entirely possible that the government will react by creating more money, undoing this effect, since they can now do so without inflation going up.

This remains my primary question. I definitely don't think immigration is the only thing that creates inflation. But if we think it's possible that immigration can affect prices, then understanding if and how it could create "inflationary pressure" or "inflationary relief" would be quite useful - even if the government undoes it with other policies.
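
To make "pressure" concrete, here is the textbook equation of exchange (nothing from the original post, just the standard identity):

$$MV = PQ \quad\Rightarrow\quad P = \frac{MV}{Q}$$

Holding the money supply $M$ and velocity $V$ fixed, an immigration-driven increase in real output $Q$ pushes the price level $P$ down; the government can then raise $M$ without $P$ rising, which is exactly the "undoing" described above.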

So, did the steadily declining immigration rate in the early 20th century contribute to the inflation America saw in the 70s, in addition to the increase in money supply and other policies? Was that stark dip, and then rise over the latter half of the century, purely independent coincidence, or related somehow to inflation? And if so, what role did it play? Similarly, has the recent downward trend in immigration contributed this time?

All of these seem like fairly reasonable and important questions to ask, even if we find the answer to be inconclusive or in the negative. I guess finding it mostly missing from the conversation, even as we talk about supply chains, willingness to work, etc., seemed a bit odd to me.

Finally, I was a little confused by:

Deep in the footnotes of the academic papers claiming this, you may see an acknowledgement that the spiral can continue only if the central bank "validates" the price increases by creating more money.

I thought the typical response, even according to Keynesians, is to increase interest rates, thereby reducing the money supply, rather than creating more money. The mechanism could be people buying more Treasuries, removing money from circulation, or people consuming less because borrowing rates are high - especially for housing, cars, etc.

While some people ask for price or wage controls, it seems like it's a fairly fringe view, even amongst those considered "left-leaning economists". Am I misunderstanding something here?

Heuristics explain some of the failure to predict emergent behaviours. Much of engineering relies on "perfect is the enemy of good" thinking. But extremely tiny errors and costs, especially the non-fungible types, compound and interfere at scale. One lesson may be that as our capacity to model and build complex systems improves, we should simultaneously reduce the number of heuristics employed.

Material physical systems do use thresholds, but they don't completely ignore tiny values (e.g. neurotransmitter molecules don't just disappear at low potential levels).
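
As a toy illustration of that compounding (all numbers invented), compare summing raw values against a heuristic that zeroes out anything below a "negligible" threshold:

```python
import random

random.seed(0)

THRESHOLD = 1e-3  # values below this are treated as "negligible"
N = 1_000_000     # the scale at which the system operates

values = [random.uniform(0, 2e-3) for _ in range(N)]

exact = sum(values)                                   # keep every tiny contribution
heuristic = sum(v for v in values if v >= THRESHOLD)  # threshold heuristic

print(f"exact total:    {exact:.1f}")
print(f"with threshold: {heuristic:.1f}")
print(f"lost at scale:  {100 * (exact - heuristic) / exact:.0f}%")
```

Each discarded value is individually negligible, yet together they come to roughly a quarter of the total.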

What is being lost is related to your intuition in the earlier comment:

if the market is 49.9 / 50.1 in millions of dollars, then you can be fairly confident that 50% is the "right" price.

Without knowing how many people of the "I've studied this subject, and still don't think a reasonable prediction is possible" variety didn't participate in the market, it's very hard to place any trust in it being the "right" price.

This is similar to the "pundit" problem, where you only hear from the most opinionated people. If 60 nutritionists are on TV and writing papers saying eating fats is bad, you may draw the "wrong" conclusion from that, because, unknown to you, 40 nutritionists believe "we just don't know yet" - and those 40 are given no incentive to say so.
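
A toy version of that selection effect (invented numbers, matching the example above):

```python
# 60 nutritionists say "fats are bad", 40 say "we just don't know yet",
# but only the confident opinion gets airtime.
experts = ["fats are bad"] * 60 + ["we just don't know yet"] * 40
on_tv = [view for view in experts if view != "we just don't know yet"]

true_share = experts.count("fats are bad") / len(experts)
observed_share = on_tv.count("fats are bad") / len(on_tv)

print(f"true expert consensus: {true_share:.0%}")      # 60%
print(f"consensus you observe: {observed_share:.0%}")  # 100%
```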

Take the Russia-Kiev question on Metaculus, which had a large number of participants. It hovered at 8% for a long time. If prediction markets are to be useful beyond pure speculation, that market didn't tell me how many knowledgeable people thought a confident prediction was simply not possible.

The ontological-skepticism signal is missing - people saying there is no "right" price that exists, we just don't know - so be skeptical of what this market says.

As for KBC - most markets allow you to change/sell your bet before the event happens, especially for longer-term events, so my guess is that this is already happening. In fact, the uncertainty index would separate out much of the "What do other people think?" element into its own question.

For locked-in markets like ACX, where the suggestion is to leave your prediction blank if you don't know, imagine every question being paired with "What percentage of people will leave this prediction blank?"

All these indicators are definitely useful for a market observer, and betting on them would make for an interesting derivatives market - especially on higher-volume questions. The issue I was referring to is that all these indicators are still based only on traders who felt certain enough to bet on the market.

Say 100 people who have researched East Asian geopolitics saw the question "Will China invade Taiwan this year?". 20 did not feel confident enough to place a bet. Of the remaining 80, 20 bet small amounts because of their lack of certainty.

The market, and most of the indicators you mentioned, would be dominated by the 60 who placed large bets. A LOT of information about uncertainty would be lost - and this would have been fairly useful information about the event.

The goal would be to capture the uncertainty signal of the 40 who did not place bets or placed only small bets. One way to do that would be to make "uncertainty" itself a bettable property of the question - for example, letting people bet on what percentage of participants choose "uncertain" vs. a prediction, as sketched below.
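
A minimal sketch of that mechanism (all names and numbers hypothetical, reusing the 100-researcher example above): the question carries an explicit "uncertain" option alongside yes/no, and the uncertainty index is simply the share of participants who took it.

```python
from dataclasses import dataclass

@dataclass
class Question:
    """A market question with "uncertainty" as a bettable property (hypothetical design)."""
    text: str
    yes_stake: float = 0.0        # money staked on "yes"
    no_stake: float = 0.0         # money staked on "no"
    directional_bettors: int = 0  # participants who picked a side
    uncertain_bettors: int = 0    # participants who bet on "uncertainty" itself

    def price(self) -> float:
        """Implied probability from staked money alone - what markets show today."""
        total = self.yes_stake + self.no_stake
        return self.yes_stake / total if total else 0.5

    def uncertainty_index(self) -> float:
        """Share of participants who declined to pick a side - the signal currently lost."""
        n = self.directional_bettors + self.uncertain_bettors
        return self.uncertain_bettors / n if n else 0.0

# 60 large bets + 20 small bets on a side, 20 who bet on "uncertain".
q = Question(
    text="Will China invade Taiwan this year?",
    yes_stake=62_000,  # toy split, dominated by the 60 large bettors
    no_stake=38_000,
    directional_bettors=80,
    uncertain_bettors=20,
)

print(f"price:             {q.price():.0%}")              # 62%
print(f"uncertainty index: {q.uncertainty_index():.0%}")  # 20%
```

The index is orthogonal to the price: two questions can both trade at 62% while one has an uncertainty index of 2% and the other 20%, and only the second should make you skeptical of the 62%.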

First, I want to dispute the statement that a 50% is uninformative. It can be very informative depending on value of the outcomes.

Yes, absolutely. 50% can be incredibly useful. Unfortunately, it also represents the "I don't know" calibration option in most prediction markets. A market at 50% for "Will we discover a civilization-ending asteroid in the next 50 years?" would be cause for much concern.

Is the market really saying that discovering this asteroid is essentially a coin flip with 1:1 odds? More likely it just represents the entire market saying "I don't know". It's these types of 50% that are considered useless, but I think they still convey information - especially if saying "I don't know" is an informed opinion.

The Bayesian approach to the problem (which is in fact the very problem that Bayes originally discussed!) would require you to provide a distribution of your "expected" (I want to avoid the terms "prior" or "subjective" explicitly here) probabilities

I think there might be an ontological misunderstanding here? I fully agree that one's expectations are often best represented by a non-normal distribution of outcomes. But this presumes that such a distribution "exists". If it does, then one way to capture it would be to place multiple bets at different levels, like one does with options on a stock. Metaculus already captures this distribution for the market as a whole - but only for those who were confident and certain enough to place bets.

My suggestion is to also capture signal from those with studied uncertainty who don't feel comfortable placing bets on ANY distribution. It's not that their distribution is flat - it's that, for them, a meaningful distribution does not exist. Their belief is "I doubt that a meaningful prediction is even possible".
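
In code terms (hypothetical types, just to make the distinction concrete), the difference is between a belief that carries a distribution - flat or otherwise - and one that explicitly carries none:

```python
from typing import Optional

# outcome level -> probability mass, like option-style bets at several strikes
Distribution = dict[float, float]

def describe(dist: Optional[Distribution]) -> str:
    if dist is None:
        return "no meaningful distribution exists - exclude me from the price"
    return f"bets at levels {sorted(dist)}"

print(describe({0.2: 0.5, 0.5: 0.3, 0.8: 0.2}))                  # a shaped distribution
print(describe({0.25: 0.25, 0.5: 0.25, 0.75: 0.25, 1.0: 0.25}))  # a flat one
print(describe(None))  # the signal current markets cannot record
```

A flat distribution and None are different objects; today's markets can only record the first two.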

I think there is a certain "cost amnesia" that sets in after a "good" decision, even for fairly large costs. So the "indistinguishability blindness" is often a cognitive response to maintain the image of a good decision, rather than something determined by hard numbers.

Regardless, this is likely entering speculation territory. It's something I'd noticed in my own life as well as in policy decisions, i.e. a negative reaction to talking about fairly large costs because net benefits were still positive.
