Most of the time, when people say "Schelling point", they mean this. Maybe it would be better to call it a Schelling fence, but even that post claims that it is a Schelling point. I suspect that you can reframe it to make it a true Schelling point, such as the participants coordinating to approximate the real game by a smaller, tractable game, but I'm not sure.
I wanted to take measure theory in college, but my advisor talked me out of it, saying that it is an old, ossified field where writers play games of streamlining their proofs. They seek too much generality and defer applications to later courses. That complaint could apply more generally (introductory graduate classes are bad because they have captive audiences), but it seems to me much worse in analysis than in other fields of mathematics. What is the point of measure theory? Archimedes gave a rigorous delta-epsilon proof that if there is a coherent notion of measure, then the area of a circle is πr². But how do you know that you won't run into inconsistencies?
Applications are related to constructibility. If you know what your goal is, you can see whether you can skip the axiom of choice. Indeed, as I phrased it above, the goal is to show that measure is defined on some sigma algebra, not just the maximal one. Constructibility also bears on the next question: why do we want measurable functions? What is a function? If a function is something you can apply at a point, then from a constructive viewpoint it must be continuous. But you can constructively describe things like infinite Fourier series. You can't evaluate them at points; you can only do other things, like compute an average over a small interval. You want a theorem that the Hilbert space of square-integrable functions on the circle is isometric to the Hilbert space of square-summable sequences, L²(S¹) ≅ ℓ². Usually you define L² as measurable functions up to the equivalence relation of equality away from a measure zero set. But you could instead define it as the metric completion of the infinitely differentiable functions under the appropriate norm. This is a much better definition for many reasons, including constructibility, but it requires you to open up your definition of function.
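For concreteness, here is the standard isometry in question, sketched via Fourier coefficients (Parseval's theorem; the normalization below is my choice, not something from the original comment):

```latex
% Fourier coefficients give a unitary map U : L^2(S^1) -> l^2(Z).
\[
  (Uf)(n) \;=\; \hat f(n) \;=\; \frac{1}{2\pi}\int_{S^1} f(\theta)\, e^{-in\theta}\, d\theta ,
\]
% Parseval: U is an isometry for the normalized norm on the circle.
\[
  \|f\|_{L^2}^2 \;=\; \frac{1}{2\pi}\int_{S^1} |f(\theta)|^2\, d\theta
  \;=\; \sum_{n\in\mathbb{Z}} |\hat f(n)|^2 \;=\; \|Uf\|_{\ell^2}^2 .
\]
```

With the completion definition, the theorem is almost immediate: the coefficients are computable on the smooth functions, the map is an isometry on that dense subspace, and it extends to the completion.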
Here are two alternate books. Measure and Probability by Adams and Guillemin is a book about measure theory that tries to justify it by the context of things like 0-1 laws of probability. I'm not sure it succeeds in the justification, but it gives something more serious to think about if you want to drop the axiom of choice or the law of the excluded middle. Also, see this MO question.
The second book is more advanced, outside of the scope of this post. After measure theory, one has functional analysis, the study of infinite dimensional topological vector spaces of functions. I once heard it described as "degenerate topology." For this, I recommend Essential Results of Functional Analysis by Robert Zimmer. It gives a bunch of applications to differential equations with a geometric flavor. It minimizes the amount of theory to get to the applications, in particular, by only using Hilbert spaces, not general Banach spaces.
Could you say more about taste? How fast can you evaluate the taste of a book? If it's fast, could you check whether there was a trajectory over the 12 editions?
showcasing the fearsome technicality of the topic in excruciatingly detailed estimates (proofs involving chains of inequalities, typically ending on "< ε").
That sounds bad. The finished proof is a chain of inequalities, but merely presenting that chain is worse than deriving it.
“I wish we had the education system they have in Doorways in the Sand,” I said... “Did you know, there’s a new Heinlein? The Number of the Beast. And he’s borrowed the idea of that education system, where you study all those different things and sign up and graduate when you have enough credits in everything, and you can keep taking courses forever if you want, but he doesn’t acknowledge Zelazny anywhere.”
Wim laughed. “That’s what they really do in America,” he said.
— Jo Walton, Among Others
I found this Terry Tao blog post helpful. In particular, this paragraph:
It is difficult to prove that no conspiracy between the primes exist. However, it is not entirely impossible, because we have been able to exploit two important phenomena. The first is that there is often a “all or nothing dichotomy” (somewhat resembling the zero-one laws in probability) regarding conspiracies: in the asymptotic limit, the primes can either conspire totally (or more precisely, anti-conspire totally) with a multiplicative function, or fail to conspire at all, but there is no middle ground. (In the language of Dirichlet series, this is reflected in the fact that zeroes of a meromorphic function can have order 1, or order 0 (i.e. are not zeroes after all), but cannot have an intermediate order between 0 and 1.) As a corollary of this fact, the prime numbers cannot conspire with two distinct multiplicative functions at once (by having a partial correlation with one and another partial correlation with another); thus one can use the existence of one conspiracy to exclude all the others. In other words, there is at most one conspiracy that can significantly distort the distribution of the primes. Unfortunately, this argument is ineffective, because it doesn’t give any control at all on what that conspiracy is, or even if it exists in the first place!
But I'm not sure how much this is just restating the problem.
Yes, if we accept your ifs, we conclude that the new business is net negative. This really happens, and some new businesses really are net negative (although I think this is negligible compared to negative externalities). But why think your assumptions are the normal case? Why think that the fixed cost of the business is larger than the time savings of the closer customers? Why expect no price competition and no price sensitivity?
There is a standard analysis of competition. If you reject it, it would be good to address it rather than ignore it. The standard analysis is that competition reduces prices. The first-order effect of reducing prices is a transfer from producer surplus to consumer surplus, taken as morally neutral. But the lower price induces more sales, creating additional surplus. The expectation is that the first-order neutral effect swamps the second-order positive effect, which in turn swamps the fixed costs.
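As a back-of-the-envelope illustration of that ordering, here is a toy linear-demand calculation (the model and the numbers are mine, purely illustrative, not from the comment):

```python
# Toy model: linear demand P = a - b*Q, constant marginal cost c.
a, b, c = 100.0, 1.0, 20.0

def consumer_surplus(q):
    """Triangle under the demand curve and above the market price a - b*q."""
    return 0.5 * b * q * q

# Monopoly: maximize (a - b*Q - c) * Q  =>  Q_m = (a - c) / (2*b).
q_m = (a - c) / (2 * b)            # 40 units sold
p_m = a - b * q_m                  # at price 60
producer_rent = (p_m - c) * q_m    # 1600: the producer surplus (the rent)

# Competition pushes the price down to marginal cost.
q_c = (a - c) / b                  # 80 units sold at price 20

transfer = producer_rent           # first-order effect: rent handed to consumers
gain = consumer_surplus(q_c) - consumer_surplus(q_m) - producer_rent  # second-order effect

print(transfer, gain)              # 1600.0 800.0: the transfer exceeds the genuine new surplus
```

In this toy case the transfer is only twice the new surplus; how thoroughly it "swamps" it depends on the shape of demand.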
The producer surplus is a rent. It induces rent-seeking. The second company to enter the market is mainly driven by rent-seeking, but by lowering the price it probably produces much more aggregate surplus than it captures. The more competitive the market, the lower the rents and the less new entrants are driven by rent-seeking. Late entrants are driven by the belief that they are more efficient.
The producer surplus is a rent. It induces rent-seeking. One form of that rent-seeking is new entrants, but another form is parasites within the organization, which seem much worse to me. Competition applies discipline that discourages these parasites. If the producers are innovative, you might think that they will make better use of the surplus than the consumers would. If you do not expect parasites, maybe it would be better for innovators to capture more wealth. Maybe that was true a century ago, but it seems to me very far from true today. So I think the dispersal of wealth, by transferring producer surplus to consumer surplus, is morally good because it discourages parasites within large firms.
Putting lamps in ducts is not very different from putting filters in ducts, but with the downside that I'm a lot more worried about fraudulent lamps than about fraudulent filters. I guess it's easy to retrofit a lamp into a duct, whereas a filter slows the air; but you probably already have a system designed around a filter.
The point of lamps is to use them in an open room where they cover the whole volume continuously.
This is standard today, but how recent is it? It looks like the industrial age to me.
How much of institutions is about solving akrasia and how much is about breaking the ability to act autonomously?
We get the word akrasia from Plato, but was he really talking about the same thing?
There is always the question of whether to study things bottom-up or top-down. These are bottom-up studies of what to do if you have a single infected patient. If you had an individual infected with a novel cold, that would be important, but we are generally interested in epidemics. In particular, why do colds go epidemic in the winter? We know there must be some environmental change. Maybe it's a small change, since it only takes a small change in reproduction number to cause an epidemic. Then these controlled experiments might identify the main method of transmission. But maybe the change from summer to winter is a big change that swamps the effects we can measure in these bottom-up experiments.
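As a toy illustration of how little the reproduction number has to move (my numbers, a bare branching-process sketch, not from the comment):

```python
# Expected new infections per generation in the simplest branching model:
# each case causes R new cases, so the count grows or shrinks geometrically.
def infections_after(generations, r, seed=10):
    return seed * r ** generations

for r in (0.95, 1.05):
    print(r, round(infections_after(30, r), 1))
# 0.95 -> about 2.1 cases per generation after 30 generations (fades out)
# 1.05 -> about 43.2 cases per generation (takes off)
```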
Let Nitter scrape for you. 1 2 3