Quick takes by Nate Showell.

I've come to believe (~65%) that Twitter is anti-informative: that it makes its users' predictive calibration worse on average. On Manifold, I frequently adopt a strategy of betting against Twitter hype (e.g., on the LK-99 market), and this strategy has been profitable for me.

Is Twitter literally worse than flipping a coin, or just worse than... someone following a non-Twitter crowd?

I was comparing it to base-rate forecasting. Twitter leads people to over-update on evidence that isn't actually very strong, making their predictions worse by moving their probabilities too far from the base rates.
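To make that concrete, here's a minimal sketch (with made-up numbers, not data from any real market) of how over-updating away from a base rate hurts a Brier score, the standard squared-error measure of forecast accuracy:

```python
# Illustrative only: compare Brier scores for a forecaster who sticks to the
# base rate vs. one who over-updates on weak evidence. All numbers here are
# hypothetical.

def brier(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

base_rate = 0.05   # long-run frequency of the sensational event
hype_bump = 0.40   # how far the hype moves the over-updater

# Suppose the event really does occur at its base rate: 1 time in 20 trials.
outcomes = [1] + [0] * 19

calibrated = sum(brier(base_rate, o) for o in outcomes) / len(outcomes)
hyped = sum(brier(base_rate + hype_bump, o) for o in outcomes) / len(outcomes)

print(f"base-rate forecaster: {calibrated:.3f}")  # about 0.05 (lower is better)
print(f"over-updater:         {hyped:.3f}")       # about 0.21
```

The over-updater only comes out ahead if the evidence is actually strong enough to justify the jump.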

For hype topics, this is almost certainly true. For less-trendy topics, probably less so. I suspect this isn't specific to Twitter, but applies to all large-scale publishing and communication mechanisms. People are mostly amplifiers rather than evaluators.

I find myself betting "no" on Manifold a lot more than I bet "yes," and it's tended to be a profitable strategy. It's common for questions on Manifold to have the form "Will [sensational event] happen by [date]?" Prices in these markets have a systematic tendency to be too high. I'm not sure how much of this bias is due to Manifold users overestimating the probabilities of sensational, low-probability events, and how much of it is an artifact of markets being initialized at 50%.
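As a rough illustration (hypothetical numbers; this simplification ignores Manifold's actual AMM pricing, fees, and loans), the expected profit of a NO bet is just the gap between the market price and the true probability:

```python
# Simplified binary-market payout: a NO share costs (1 - market_prob) and
# pays out 1 mana if the event doesn't happen. Numbers are hypothetical.

def expected_profit_no(market_prob: float, true_prob: float) -> float:
    """Expected profit per NO share; positive when the market price is too high."""
    cost = 1 - market_prob            # price of one NO share
    expected_payout = 1 - true_prob   # probability the NO share pays out 1
    return expected_payout - cost     # simplifies to market_prob - true_prob

# A sensational question priced at 35% when the base rate suggests ~10%:
print(expected_profit_no(market_prob=0.35, true_prob=0.10))  # ~0.25 per share
```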

Is trade ever fully causal? Ordinary trade can be modeled as acausal trade with the "no communication" condition relaxed. Even in a scenario as seemingly causal as using a vending machine, trade only occurs if the buyer believes that the vending machine will actually dispense its goods and not just take the buyer's money. Similarly, the vending machine owner's decision to set up the machine was informed by predictions about whether or not people would buy from it. The only kind of trade that seems like it might be fully causal is a self-executing contract that's tied to an external trigger, and for which both parties have seen the source code and verified that the other party has enough resources to make the agreed-upon trade. Would a contract like that still have some acausal element anyway?
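As a minimal sketch of that last case (all names and structure here are hypothetical, not any real contract system), the contract might look like this, with both parties having read the code and escrowed their resources before deployment:

```python
# Hypothetical sketch of the self-executing contract described above: both
# parties are assumed to have inspected this source code and verified each
# other's escrowed resources in advance; execution depends only on an
# external trigger.

from dataclasses import dataclass

@dataclass
class Escrow:
    funds: float  # buyer's locked payment
    goods: int    # seller's locked units of the good

def settle(escrow: Escrow, price: float, quantity: int,
           trigger_fired: bool) -> str:
    """Swap funds for goods if and only if the external trigger has fired."""
    if not trigger_fired:
        return "waiting: trigger has not fired"
    if escrow.funds < price or escrow.goods < quantity:
        return "aborted: escrow underfunded"  # the resource check both parties verified
    escrow.funds -= price      # released to the seller
    escrow.goods -= quantity   # released to the buyer
    return f"executed: {quantity} units exchanged for {price}"

print(settle(Escrow(funds=10.0, goods=3), price=10.0, quantity=3,
             trigger_fired=True))
```

Even here, each party's decision to escrow anything in the first place rested on predictions about the other, which is where the residual acausal element seems to live.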

Physical causality is naturally occurring acausal dependence (between physically interacting things), similarly to how a physical calculator captures something about abstract arithmetic. So the word "acausal" is unfortunate: the more general thing shouldn't be defined by exclusion of the less general special case of immense practical importance. Acausal dependence is something like logical or computational causality, and acausal trade is trade that happens in situations within the fabric of acausal dependencies; it's how an agent existing within an acausal ontology might think about regular trade. But since a clearer formulation remains elusive, fixing the terminology seems premature.

An edgy writing style is an epistemic red flag. A writing style designed to provoke a strong, usually negative, emotional response from the reader can be used to disguise the thinness of the substance behind the author's arguments. Instead of carefully considering and evaluating the author's arguments, the reader gets distracted by the disruption to their emotional state and reacts to the text in a way that more closely resembles a trauma response, with all the negative effects on their reasoning capabilities that such a response entails. Some examples of authors who do this: Friedrich Nietzsche, Grant Morrison, and The Last Psychiatrist.

Allow me to quote from Lem’s novel “Golem XIV”, which is about a superhuman AI named Golem:

Being devoid of the affective centers fundamentally characteristic of man, and therefore having no proper emotional life, Golem is incapable of displaying feelings spontaneously. It can, to be sure, imitate any emotional states it chooses—not for the sake of histrionics but, as it says itself, because simulations of feelings facilitate the formation of utterances that are understood with maximum accuracy. Golem uses this device, putting it on an "anthropocentric level," as it were, to make the best contact with us.

May not this method also be employed by human writers?

One thing to do here is to rewrite their arguments in your own (ideally more neutral) language, and see whether they still seem as strong.

It's a natural tendency toward taunting: the author, frustrated by a lack of engagement, tries to provoke the reader into attacking their ideas. The more sure you are of yourself, the more provocative you tend to be, especially if you're eager to put your ideas to the test.

A thing which often follows edginess/confidence, and the two may even be causes of each other, is mania. Even hypomanic moods have a strong effect on one's behaviour. I believe this is what happened to Kanye West. If you read Nietzsche's Zarathustra, you might find that it seems to contain a lot of mood swings, and as far as I know it was written in just 10 days (and periods of high productivity are indeed characteristic of mania).

I think it makes for great reading, and while such people have a higher risk of being wrong, I also think they have more interesting ideas. But I will admit that I'm a little biased on this topic, as I've made myself a little edgy (confidence has a positive effect on mood).

What do other people here think of quantum Bayesianism as an interpretation of quantum mechanics? I've only just started reading about it, but it seems promising to me. It lets you treat probabilities in quantum mechanics and probabilities in Bayesian statistics as having the same ontological status: both are properties of beliefs, whereas in some other interpretations of quantum mechanics, probabilities are properties of an external system. This match allows quantum mechanics and Bayesian statistics to be unified into one overarching approach, without requiring you to postulate additional entities like unobserved Everett branches.

My probability that quantum Bayesianism is onto something is 0.05. It went down a lot when I read Sean Carroll's book Something Deeply Hidden. 0.05 is about as extreme as my probabilities get for the parts of quantum physics that are not settled science, since I'm not an expert.

Could you summarize what Carroll says that made you update so strongly against it?

My memory is not that good. I do recall that it is in the chapter "Other ways: alternatives to many-worlds".

Simulacrum level 4 is more honest than level 3. Someone who speaks at level 4 explicitly asks himself "what statement will win me social approval?" Someone who speaks at level 3 asks herself the same question, but hides from herself the fact that she asked it.

Simulacra levels aren't a particularly good model for some interactions/topics, because the levels blend together in idiosyncratic ways. The model leaves unspecified whether the level a speaker uses is chosen intentionally, and whether the speaker is hiding anything from themselves versus cynically understanding the levels and using whichever one they think is most effective at the time.

I don't think so. Simulacrum 4 has trouble making coherent reference to physical causal trajectories. Simulacra 3 and 1 are in fact compatible in some circumstances; not so with 4.