Blog at thelimelike.wordpress.org
Do you have any strong evidence to back this up? Opinions on education are a dime a dozen; studies are what I'd like to see for once.
This is fair and an important caveat. Pure arbitrage disappears quickly in the market. At the same time, though, the absence of pure arbitrage profits is necessary but not sufficient for full efficiency. An efficient market means no (or very few) positive-expected-utility opportunities are still left around. The EMH still implies the cost of shorting should have fallen until it became economically feasible to correct the mispricing.
This post seems both interesting and like a way to get a very unrepresentative sample.
I find it difficult to imagine how returns could be literally negative – it's not as though people are anti-correlated with the truth – unless you're taking into account some effect like "Annoying people by making too many coronavirus posts."
(Also: The opposite of the virtue of lightness usually goes by the name of conservatism bias in the cognitive science literature.)
From my own experience reading the literature, I propose the following: The Bimodal Market Hypothesis.
The BMH states that the market is either a terrifying beast of infinite power that has priced in the number of grandchildren you will have into McDonald's stocks, or is so stupid it kinda makes you want to cry. Evidence for this includes:
1. The Dot-Com bubble, the fact that Bitcoin has a price exceeding $0 despite there currently existing cryptocurrencies that are strictly better in every way, and that time the market proved to be incapable of doing addition. Maybe it's when tech is involved?
2. But also, the Peso problem, where an apparent decades-long anomaly in the markets turned out to exist because the market was accurately estimating the probability that the Mexican government would be unable to maintain its peg.
Situation 1 occurs when idiots outnumber financial experts, and overwhelm the ability of "Smart money" to accurately price a stock in the face of a hype wave. Everyone reasonable avoids these stocks like the plague; shorting them exposes you to the possibility of bankruptcy in the case that the madness fails to subside in time. Situation 2 happens when financial experts poring over spreadsheets determine the value of a stock by rereading every statement until it's become their favorite work of literature before making a decision.
Policy proposal: Make it illegal to trade stocks if you haven't read A Random Walk Down Wall Street, don't know what a PE ratio is, if you do know what a candlestick graph is, or if you think that the reason why stocks go up is something like "Everyone thinks they will."
"Always go with your first intuition on multiple choice" reflects advice that's specifically good for students who are anxious because they're taking a test. The student will generally select the correct answer first (or at least the one that's most likely to be correct). If they're somewhat uncertain about it, they'll then start to feel anxious; this anxiety builds over time, producing a more and more pessimistic assessment of how likely they are to be correct, which in turn produces even more anxiety. This continues until the student either becomes sufficiently pessimistic to think the original answer was not the best, or changes it simply to relieve the stress. This happens even though no new information has been received, implying the change is unlikely to be correlated with correctness and more likely reflects a failure of human psychology.
In short, a test is an especially bad test case (pun fully intended) for this because the amount of bias being introduced increases over time with anxiety, rather than decreasing.
Which is why, again, I'm suggesting Yudkowsky's writings describing compatibilism. There is a sense in which objective morality exists, and a sense in which it doesn't; there is similarly a sense in which the world is deterministic and a sense in which we have free will, and the appearance of conflict has to do with our intuitions being too vague and needing to be sharpened and defined better.
Depends on what you mean by "Illusion" and "Ethics." I'd actually agree that the question of "Does an objective code of ethics exist" is confused like the free will one, and that there's a sense in which it does and a sense in which it doesn't.
The sense in which it does is twofold. First, codes of ethics can be objectively wrong; for example, any set of ethics which does not attempt to maximize an expected utility function must be inconsistent (See Yudkowsky's post on the von Neumann-Morgenstern axioms). So there's a sense in which moral systems can be straight-up bad. Another criterion that can rule out a moral system is strict Pareto inefficiency: If you have two moral systems, and every single agent agrees that they would be worse off under one of them than the other, then you really should chuck out that worse system.
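The Pareto criterion above is concrete enough to sketch in code. This is a minimal illustration under my own modeling assumption (not anything from the original post): each moral system is summarized as the utility it assigns to each agent, and a system is ruled out if some rival leaves every single agent strictly better off. The function name and example numbers are hypothetical.

```python
def strictly_pareto_dominates(a, b):
    """Return True if every agent is strictly better off under system `a`
    than under system `b` (utilities given agent-by-agent, same order)."""
    if len(a) != len(b):
        raise ValueError("systems must cover the same agents")
    return all(ua > ub for ua, ub in zip(a, b))


# Three agents' utilities under two candidate moral systems.
system_a = [3, 5, 2]
system_b = [1, 4, 0]

# Every agent prefers A, so B is strictly Pareto-inefficient:
print(strictly_pareto_dominates(system_a, system_b))  # True -> chuck out B
print(strictly_pareto_dominates(system_b, system_a))  # False
```

Note that this criterion only ever rules systems out; as the next paragraph argues, it leaves a whole frontier of undominated systems standing rather than selecting a single moral law.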
However, out of these systems, you're not going to find only a single moral law printed on the fabric of the universe, regardless of how hard you try. Try starting with the Euthyphro dilemma, and just replace "God" with "The Universe." Even if in some far-off corner of Alpha Centauri there did actually end up being a "Moral thermometer" that measures how moral the universe is, and it went up whenever I kicked a little kid in the face, I'd tell that thermometer to **** off. The idea of a "Natural law of the universe" is a pretty bad one, given that even if it existed, there would be no reason to follow it if it clearly conflicted with the general human idea of morality.
A brief note: I'm not 100% sold on the many-worlds hypothesis -- Bohmian interpretations strike me as similarly plausible, but I'm not going to discuss this right now because I doubt I'm educated enough to do so at a high level that doesn't just retread old arguments. With that out of the way, let's assume many-worlds is correct.
Given the existence of many-worlds, interpreting making a decision as "Choosing your own Everett branch" is not correct for one simple reason: in any case where your decision depends on something going on at the quantum level, you will simultaneously make every single decision you possibly could have made. There's a sense in which you're accidentally importing the classical intuition of "One world" into many-worlds -- in this case, the mistake is believing that there is only one you, who can make only one decision. The reality is that all possible worlds already exist: everything that has happened or will happen is fully captured by the mathematics of quantum mechanics, and you can't change any of it.
Now, the question becomes the same as for any determinist universe: whether or not determinism, and the fact that all decisions you will ever make are fully predictable by mathematics, actually makes ethics pointless. In this case, I suggest looking back at Yudkowsky's post on dissolving the question of free will, and then posting your answer here when you think you've got it. It's a good exercise, since it took me a while to figure it out myself. I look forward to seeing your answer.