Connor_Flexman

Comments

Six economics misconceptions of mine which I've resolved over the last few years

For the externalities in (4), though, it's important to remember that not internalizing the externality creates a lot of moral hazard, because Coase's theorem rarely applies in practice. For example, the steel mill could often have been built at a different location for slightly more cost (say, $10k), which they will not do if they know the efficient move will be to not tax them. If the pollution costs the resorts, say, $50k, that's a $40k inefficiency. And the theorem would rejoin with "well, the owners of the resorts will pay the mill >$10k to initially build on the new spot, which makes things efficient again", but then of course you open up the opportunity for all sorts of inefficient blackmail if you don't fulfill the perfect-information requirement of Coase.
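To make the arithmetic concrete, here's a minimal sketch; the $50k damage figure is an illustrative assumption (only the $10k relocation cost is pinned down above), chosen so the numbers work out to the $40k net loss:

```python
# Toy numbers for the steel-mill example. Only the $10k relocation cost is
# given above; the $50k pollution damage is an assumed figure chosen so the
# arithmetic yields the $40k inefficiency.
relocation_cost = 10_000   # extra cost for the mill to build at the other site
pollution_damage = 50_000  # assumed harm to the resorts at the original site

# If the mill expects never to be taxed, it builds on the cheap site and
# society eats the full pollution damage.
loss_without_internalizing = pollution_damage

# If the externality is internalized (tax = damage), the mill relocates,
# since paying $10k beats paying $50k.
loss_with_internalizing = relocation_cost

inefficiency = loss_without_internalizing - loss_with_internalizing
print(f"Deadweight loss from not internalizing: ${inefficiency:,}")  # $40,000
```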

Obviously none of this contradicts the nuance you were adding, but I just wanted to spell this out lest we see anyone waver in their moral resolution to internalize most externalities.

Connor_Flexman's Shortform

I've understood episteme, techne, and metis for a while, and the vital importance of each, but I've been missing this understanding of gnosis. I now think I've been bouncing off the implication that's bundled into the idea of gnosis: that knowledge of spiritual mysteries is universal, or won't be overturned later, or is "correct". But I think that's a wrong way to look at things.

For example, consider "life philosophies". People put a ton of energy into thinking about existentialism and what to do about the fact that we're all going to die. The important thing people get from it isn't some sort of episteme, nor techne, nor metis. They process it, learn to cope, learn how their values interact with the world, and the big insights here feel spiritual.

Likewise, with love. People develop philosophies around love that are clearly not built on the other 3 kinds of knowledge: they often contain things like “my heart yearns for that kind of thing”. The statement “my heart yearns for that kind of thing” is episteme, the decisionless following of the heart is techne, the fact that you should follow your heart is metis, but finding that your heart yearns for the thing is gnosis. It was a spiritual mystery what your heart yearned for, and you figured it out, and to find one of these feels just as spiritual as they say.

I can sort of see how meditation can give rise to these, cutting yourself off from synthetic logical direction and just allowing natural internal annealing to propagate all sorts of updates about your deep values and how to cope with the nature of reality. I can sort of see why people go to "find themselves spiritually" by traveling, letting new values come out and the standard constraints get loosened, with the resulting depth growing into spiritual knowledge. I can sort of see why drugs, dancing, and sexuality were often used in pagan religious ceremonies meant to cause a revealing of the spirit and an estuary where deep values intermingled.

But all these spiritual insights are about how your mind wants to work, not about episteme-like "correct" universal knowledge. They're not universal, even if they look similar from mind to mind. They definitely get overturned later, at least in the limited sense that general relativity overturned Newton. And "correctness" doesn't really apply to them, because they're about making the map more like the map wants to be, not about map vs. reality.

Zoom Technologies, Inc. vs. the Efficient Markets Hypothesis

I think the big Ethereum move in May 2017 was a great example of this. Evidently many people thought earlier that the price should be higher, but "the market can stay irrational longer than you can stay solvent." So you had a buildup of steam before it finally shot up.

However, I'm not sure to what extent these are actual epistemic issues as you hypothesize, or just an artifact of small fractions of smart money.

Stop saying wrong things

(grammatical edits for clarity)

Man, this post is special to me, because it is one of the most powerful rationality tools I know of and was the backbone of my own high-growth intro to rationality. So while I want to gesture at a patch for one problem I expect you will run into, I'm not that sure it's even worth paying attention to right now. Just running with the current skill seems fantastic. But, here's my concern for the future.

When you say "stop saying wrong things", most of your examples use logic to define "wrongness". Which makes sense. If you intend to learn Chinese and you don't check how long it will take you, but in fact you will never finish, logically you made a mistake. But if you're helming Etsy and don't run A/B tests to provide a feedback loop for whether you were correct about customer desire, you also logically made a mistake, despite the fact that I think skipping them is a perfectly reasonable decision. I expect a fair number of these false positives to come up given the way you describe your current filter.

Before we get to what is going wrong, first we should examine a bit of what's going right. You're taking a situation where you have an urge to do your standard behavior, and noticing that you can pattern-match it to a case where people often do wrong things as defined by biases research, logic, and the people whom you read. These sources have a very good track record in many domains. When you then switch your behavior, you will often be right, as well as having explored some new behavior.

However, there are certain domains or classes of thing where logic does not do especially well, and likewise in which many people you read will probably give good-sounding advice which turns out to be wrong. I think startups are probably one of them. Zvi's post about Leaders of Men makes this point in a way I like, using the example of baseball managers. There are definitely lots of "dumb" things managers do that are easy for a logician on the sidelines to point out. These cost them some games. But the mistakes are driven by policies that are actually very powerful, more beneficial than the costs they impose on those games. I think the A/B testing example fits this. Yes, it helps, but it's not as helpful as running with other more important action policies, perhaps exploring design space or letting your designers' intuitions run or just focusing resources on management or who-knows-what (they probably do know, though). A/B testing is optimizing, and you don't want to commit the sin of premature optimization. Five years in sounds reasonably less premature to me.

So, to try to put a point on what goes wrong: logic has its weak points, as any straw-postmodernist will tell you, though they're obviously wrong if they say logic isn't patchable. But just because their pendulum has swung too far doesn't mean there aren't some classic mistakes that come from overapplying straw logic. Premature optimization is a really good example. "Ignoring complexity" is another; garbage-in garbage-out modeling is a third. A huge number of "biases", I think, are actually the correct thing to do or think a significant fraction of the time. Social skills, sports, dancing, music, politics, system design, etc. are all sandboxes of complex domains in which logic doesn't work very well in practical usage, and we indeed see tons of mistakes in them by both straw-rationalists and real ones.

Maybe I sound like I'm preaching to the choir here. But there's a subtle-ish point I would still like to get across: if you override a behavior with logic, the original behavior was basically always in place for a very good reason. Your behaviors are built on each other. Immediately stopping a behavior will hurt the behaviors on top of it or supported by it. For the general example, halting "saying wrong things" may cause you to stop putting models out there to be destroyed by reality, which could hamper the feedback and growth process. There are a plethora of more specific ones (e.g., halting "using little white lies" is great to explore, but can often cause jarring yet hard-to-identify shifts in how comfortable people feel around you).

I think the solution here is something like "while you're exploring what logic says to do instead, also explore heavily all the reasons the first action was being done, because your neural net is complex as shit and who knows what processes you may accidentally deprecate" (and sometimes you'll find fascinating new subgoals and important dynamics you didn't know existed!). Running with the logical action is great because it's growthy and you can smash the model into reality, but *don't cling to the naively logical model if reality is hinting it's more complex than that*. That's the sinkhole to really avoid. Any other mistakes are fixable.

Excited to see where this takes you.

Is this viable physics?

Love this description. All of the results I've skimmed look an awful lot like showing that a thing which can correspond to space and time (your "thing that is Turing-complete") lets you rederive the known properties of our space and time.

That being said, I still think exploring various translations and embeddings of mathematical, physical, and computational paradigms into each other is a very valuable exercise and may shed light on very important properties of abstract systems in the future. Also, cool compressed explanation of how some concepts in physics fit together, even if somewhat shallowly.

Treatments correlated with harm

Yeah, I didn't mean to imply that causal modeling wasn't the obvious solution—you're right about the existence of the leukemia threshold. But I guess in my experience of these mistakes, I often see people taking the action "try to do superior statistical techniques" and that not working for them (including rationalists and not just terrible science reporting sites), whereas I think "identify the places where your model is terrible and call that out" is a better first step for knowing how to build the superior models.

In the ventilator case, for example, I'm not trying to advocate blindly following common sense, but I do think it's important to incorporate common sense heavily. If people said, "There's no evidence for ventilators working, maybe hemoglobin is being denatured", I certainly wouldn't advocate for more common sense. But instead I tend to see "the statistics show ventilators aren't working, maybe we shouldn't use them", which seems to imply that common sense isn't being given a say at all. It seems to me that always having common sense as one of your causal models is both an easy sell and a vital piece of the machinery that keeps your statistical techniques from going off the rails at any of their many opportunities.

Ways you can get sick without human contact

The claim is that these biota are safely contained in your lower digestive tract, but if you re-ingest them, those same strains can infect you via your mouth and stomach.

Ways you can get sick without human contact

Definitely possible, though of course it takes a few probability hits for specificity (in this case, the odds of getting it on the last day each time are about 1/7, and the probability of no concurrent disease is around 1/4!, i.e. 1/24, so something like 1e-4.5 as likely as a "typical" spread throughout the house).
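For concreteness, here's a rough reconstruction of that estimate as a quick check; the four sequential transmission links are my assumption about the household scenario, while the 1/7 per-link odds and the 1/4! ordering factor come from the comment above:

```python
import math

# Rough reconstruction of the back-of-envelope estimate. The number of
# transmission links (4) is an assumption; the 1/7 per-link odds and the
# 1/4! ordering factor come from the comment.
links = 4
p_last_day = (1 / 7) ** links       # each infection lands on the last possible day
p_ordering = 1 / math.factorial(4)  # strictly sequential, no concurrent cases

p_total = p_last_day * p_ordering
print(f"{p_total:.2e} (10^{math.log10(p_total):.1f})")
# ~1.74e-05, i.e. about 10^-4.8, consistent with the rough "1e-4.5" above
```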

Ways you can get sick without human contact

Ok, I changed it to be clear it's about this post. I agree you should be worried about spread within the house.

Ways you can get sick without human contact

Yeah, it was tongue-in-cheek, but it does happen! I.e., the perpetual influenza A case mentioned.
