Computer scientist, applied mathematician. Based in the eastern part of England.
Fan of control theory in general and Perceptual Control Theory in particular. Everyone should know about these, whatever attitude to them they eventually reach. These, plus consciousness of abstraction, dissolve a great many confusions.
I wrote the Insanity Wolf Sanity Test. There it is; work out for yourself what it means.
Change ringer since 2022. It teaches learning and grasping abstract patterns, memory, thinking with your body, thinking on your feet, fixing problems and moving on, always looking to the future and letting both the errors and successes of the past go.
I first found an LLM useful (other than for answering the question "let's see how well the dog can walk on its hind legs") in September 2025. As yet they do not form a regular part of anything I do.
It does, but very wastefully. Almost all of the avoidance manoeuvres you make will be unnecessary, and some will even cause a collision, but you will not know which ones. Further modelling (which I think would be belabouring the point) would allow a plot of how a decision rule for manoeuvring reduces the probability of collisions.
Let c_m be the frequency of collisions given the tracking precision and some rule for manoeuvres. Let c_0 be the frequency of collisions without manoeuvres. Define effectiveness to be 1 − c_m/c_0.
I would expect effectiveness to approach 1 for perfect tracking (and a sensible decision rule) and decline towards 0 as the precision gets worse.
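For anyone who does want to belabour the point, here is a minimal Monte Carlo sketch (Python; every number in it is an illustrative assumption, not real orbital data). It assumes the true miss distance is uniform over some range, the tracked estimate is the truth plus Gaussian noise of standard deviation s, a manoeuvre is triggered whenever the estimated miss falls inside a fixed keep-out distance, and a manoeuvre always averts the collision:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions only.
COLLISION_RADIUS = 10.0   # metres: a true miss distance below this counts as a collision
KEEP_OUT = 100.0          # metres: manoeuvre whenever the estimated miss is inside this
MISS_SCALE = 1000.0       # metres: true miss distances drawn uniformly from [0, MISS_SCALE]
N = 1_000_000             # simulated conjunctions

def effectiveness(s):
    """Estimate 1 - c_m/c_0 for tracking standard deviation s, assuming a
    manoeuvre (triggered by the estimate alone) always averts the collision."""
    true_miss = rng.uniform(0.0, MISS_SCALE, N)
    est_miss = true_miss + rng.normal(0.0, s, N)      # tracked estimate = truth + noise
    would_collide = true_miss < COLLISION_RADIUS
    manoeuvred = np.abs(est_miss) < KEEP_OUT
    c0 = would_collide.mean()                         # collision frequency with no manoeuvres
    cm = (would_collide & ~manoeuvred).mean()         # collisions the rule fails to catch
    unnecessary = (manoeuvred & ~would_collide).sum() / max(manoeuvred.sum(), 1)
    return 1.0 - cm / c0, unnecessary

for s in (1.0, 10.0, 100.0, 1000.0):
    eff, waste = effectiveness(s)
    print(f"s = {s:6.0f} m   effectiveness ≈ {eff:.2f}   unnecessary manoeuvres ≈ {waste:.0%}")
```

On assumptions like these, effectiveness stays near 1 until s becomes comparable to the keep-out distance and then falls away, while at every precision almost all of the manoeuvres turn out to have been unnecessary, which is the waste referred to above.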
I imagine the conversation in the control room where they're tracking the satellites and deciding whether to have one of them make a burn:
"What's the problem? 99.9% chance they're safe!"
"We're looking at 70%." [Gestures at all the equipment receiving data and plotting projected paths.] "Where did you pull 99.9% from?"
"Well, how often does a given pair of satellites collide? Pretty much never, right? Outside view, man, outside view!"
"You're fired. Get out of the room and leave this to the people who have a clue."
Some further remarks.
The above analysis is what immediately occurred to me on reading the OP. Yet the supposed paradox, dating from 2017 (the first version of the paper cited in the OP), seems to have genuinely perplexed the aerospace community.
A control process cannot control a variable to better than the accuracy with which it can measure it. It is pointless to try to avoid two satellites coming within 10 metres of each other, if your tracking process cannot measure their positions better than to 100 metres (the green trace in my figure). If your tracking process cannot be improved, then you must content yourself with avoiding approaches within around 100 metres, and you will be on the equivalent of the yellow line in that figure. The great majority of the evasive actions that your system will employ will be unnecessary to avert actual collisions, which only actually happen for much closer approaches. That is just the price of having poor data.
I went looking on Google Scholar for the origins and descendants of this "false confidence" concept, and it's part of a whole non-Bayesian paradigm of belief as something not to be quantified by probability. This is a subject that has received little attention on LessWrong, I guess because Eliezer thinks it's a wrong turning, like e.g. religion, and wrote it off long ago as not worth taking further notice of. The most substantial allusion to it here that I've found is in footnote 1 to this posting.
Are there any members of the "belief function community" here, or "imprecise probabilists", who believe that "precise probability theory is not the only mode of uncertainty quantification"? Not scare quotes, but taken from Ryan Martin, "Which statistical hypotheses are afflicted with false confidence?". How would they respond to my suggestion that "false confidence" is not a problem and that belief-as-probability is enough to deal with satellite collision avoidance?
With respect to the satellite problem, there is nothing problematic in the fact that when one knows nothing about the two satellites, one assigns a low probability to their collision. In agreement with this, there have been some accidental satellite collisions, but they are rare per pair of satellites.
Neither is there anything paradoxical about the fact that if you draw a small enough circle around a bullet hole, you would, before observing the hole, have assigned a probability approaching 1 to the false statement that the bullet would land outside that circle. This example is the gist of the proof of the "False Confidence Theorem".
The two satellites will either collide within the time frame of interest or they will not. Replacing "collide" with "approach within some distance D", we can make a judgement about the accuracy of our tracking procedure by asking: supposing the true distance of closest approach were some value X, what probability p would we assign to the proposition that |X| < D?
I will simplify all the details of the tracking procedure into the assumption that our estimate of closest approach is normally distributed about X with a standard deviation s (and that X is a signed quantity). Then we can plot p(D,X,s)[1] as a function of X, for various values of s.
This is what I get.
As the Litany of Gendlin says: If there will be a close approach, I want to believe there will be a close approach. If there will not be a close approach, I want to believe there will not be a close approach. The image shows that the lower the tracking uncertainty, the more closely this ideal is approached. Nothing is gained by discarding good information for bad.
Which was obvious already.
If you look at the curves where the distance of closest approach is 15 (the dashed line), you can see the pattern of both high and low accuracy giving low probabilities of the satellites being too close, with medium accuracy giving a higher probability. This is an obvious curiosity of no relevance to the problem of detecting possible collisions. You cannot improve a situation by refusing to look at it, only degrade your ability to deal with it.
[1] p(D, X, s) = normcdf(D, X, s) − normcdf(−D, X, s), where normcdf(A, B, C) is the CDF of the normal distribution with mean B and standard deviation C, evaluated at A.
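For anyone who wants to reproduce the figure, here is a short sketch of that formula in Python (the value of D, the range of X, and the values of s are my illustrative choices, not necessarily those used in the plot above):

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

def p(D, X, s):
    """Probability assigned to |closest approach| < D when the true signed
    closest approach is X and the estimate is normal about X with std. dev. s."""
    return norm.cdf(D, loc=X, scale=s) - norm.cdf(-D, loc=X, scale=s)

D = 10.0                                  # illustrative threshold distance
X = np.linspace(-100.0, 100.0, 501)       # true distance of closest approach
for s in (1.0, 3.0, 10.0, 30.0, 100.0):   # tracking standard deviations
    plt.plot(X, p(D, X, s), label=f"s = {s:g}")
plt.axvline(15.0, linestyle="--", color="grey")   # the X = 15 case discussed above
plt.xlabel("true distance of closest approach X")
plt.ylabel("p(D, X, s)")
plt.legend()
plt.show()
```

With choices like these, the pattern described above at X = 15 appears: the smallest and largest s both give a low p, while intermediate s gives a higher one.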
But as with dogs, there is nothing I can do to "fix" other people into being intellectually serious, or anything else I think they should be. I take people as I find them and leave them the same way.
Well, at least they're not dogs! Cats wandering around do prettify a neighbourhood, as long as they're not doing their business in my garden, and I extend them the same grudging tolerance as they do to us. I don't stroke them, despite Jordan Peterson's advice.
it turns out that when i am around people i find intellectually unserious, i deny them personhood and i act in an incredibly shitty way.
Dogs are intellectually unserious, yet many people love them, and "talk" to them on their level.
(But me, I don't like dogs and keep away from them.)
I'm mainly interested not in who best writes like lsusr, but in knowing which of the entries were intended as serious postings in their own right. I hope that is true of all of them, but the goal of imitation might squeeze out other considerations.
How about 22 micrograms, the Planck mass? Epistemic status: idle speculation. There's this absolute mass that falls out of the fundamental constants; it must mean something.
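For the record, the arithmetic, using scipy's CODATA constants (just a check that 22 micrograms is about right):

```python
from scipy.constants import hbar, c, G   # CODATA values of ħ, c and G

m_planck = (hbar * c / G) ** 0.5         # Planck mass, in kilograms
print(f"Planck mass ≈ {m_planck * 1e9:.1f} micrograms")   # prints ≈ 21.8
```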