Thanks for the feedback. I am new to writing in this style and may have erred too far towards deleting sentences while editing. But, as they say, if you never cut too much, you're always too verbose. In particular, I appreciate the point that, when talking about how I am updating, I should make clear where I am updating from.
For instance, regarding human-level intelligence, I was describing my update relative to "me a year/month ago". I re-listened to the Sam Harris/Yudkowsky podcast yesterday, and they detour for a solid 10 minutes on how "human-level" intelligence is a straw target. I think their arguments were persuasive, and that I would have endorsed them a year ago, but that they don't really apply to GPT. I had pretty much concluded that the difference between a 150 IQ AI and a 350 IQ AI would be a matter of scale. GPT as a simulator/platform seems to me like an existence proof for a not-artificially-handicapped human-level AI attractor state. Since I had previously thought the entire idea was a distraction, this is an update towards human-level AI.
The impact on AI timelines mostly follows from diversion of investment. I will think about whether I have anything additional to add on that front.
Right, okay. I am trying to learn your ontology here, but the concepts are at a significant inferential distance from my current ones. I don't understand what the 95% means. I don't understand why the d100 has a 99% chance to be fixed after one roll, while a d10 only has 90%. By the second roll I think I can start to stomach the logic, though, so maybe we can set that aside.
In my terms, when you say that a Bayesian wouldn't bet $1bil:$1 that the sun will rise tomorrow, that doesn't seem correct to me. It's true that I wouldn't actually make that nightly bet: the risk-free rate is around 3% per annum, so it would be a pretty terrible allocation of risk, and it would amount to an assassination market on the rotation of the Earth, which I don't like incentivizing as a matter of course. But doesn't the math of likelihood ratios work just as well to bury bad theories under a mountain of evidence?
I think declining to assign a 1e-40 chance to an event is an epistemological choice separate from Bayesianism. The math seems quite capable of leading to that conclusion, and of recovering from that state quickly enough.
I think maybe the crux is: "There is no way for a Bayesian to be wrong. Everything is just an update. But a Frequentist who said the die was fair can be proven wrong to arbitrary precision." Yet if the Bayesian announces their prior, you can know precisely how much of your arbitrarily strong evidence they will require to believe the die is loaded.
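To make that last point concrete, here is a minimal sketch with numbers I am assuming for illustration (a 1% prior on "loaded", and a loaded-die model in which the suspect face comes up half the time): given an announced prior, you can compute exactly how many suspicious rolls the Bayesian needs before they believe the die is loaded.

```python
# Sketch with assumed numbers (not anyone's actual position): a Bayesian who
# announces P(loaded) and a model of a loaded die lets you compute exactly
# how much evidence moves them past any belief threshold.

def posterior_loaded(prior, n_rolls, p_fair=1/20, p_loaded=1/2):
    """Posterior that a d20 is loaded, after n_rolls all showing the suspect
    face. Assumes a loaded die shows that face with probability p_loaded."""
    like_loaded = p_loaded ** n_rolls   # P(data | loaded)
    like_fair = p_fair ** n_rolls       # P(data | fair)
    num = prior * like_loaded
    return num / (num + (1 - prior) * like_fair)

# How many consecutive 8s until a 1% prior crosses a 99% posterior?
n = 0
while posterior_loaded(0.01, n) < 0.99:
    n += 1
print(n)  # → 4
```

Each roll of the suspect face multiplies the odds by a likelihood ratio of (1/2)/(1/20) = 10, so the "mountain of evidence" accumulates exponentially fast.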
Again, I hope this is taken in the spirit I mean it, which is: "you are the only self-proclaimed Frequentist on this board I know of, so you are a very valuable source of epistemic variation that I should learn how to model".
I am not sure I understand, probably because I am too preprogrammed by Bayesianism.
You roll a d20 and it comes up with a number (let's say 8). The Frequentist now believes there is a 95% chance the die is loaded to produce 8s? But they won't bet 20:1 on the result; instead they will do something else with that 95% number? Maybe use it to publish a journal article, I guess.
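For what it's worth, here is the significance-test reading I have been assuming (my own reconstruction, not necessarily what is meant): under the null hypothesis "the die is fair", a pre-specified face has probability 1/sides, so one roll landing on it rejects fairness at the (1 - 1/sides) confidence level. That would also account for the d10/d100 numbers earlier.

```python
# Hypothetical reconstruction of the test being described: under the null
# "the die is fair", a pre-registered face comes up with probability 1/sides,
# so observing it in one roll gives p = 1/sides, rejecting fairness at the
# (1 - 1/sides) confidence level. The 95% is a long-run property of the test
# procedure, not a betting price on this particular die.

def fair_die_p_value(sides):
    """P(a pre-specified face comes up in one roll | fair die)."""
    return 1 / sides

for sides in (10, 20, 100):
    p = fair_die_p_value(sides)
    print(f"d{sides}: p = {p}, confidence = {1 - p:.0%}")
# d10 → 90%, d20 → 95%, d100 → 99%
```

On this reading the 95% is an error rate of the procedure across hypothetical repetitions, which is why it doesn't translate directly into 20:1 betting odds on one die.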
I would like to note that the naive version of this is bad. First, it falls prey to new grads (who generally have nothing) declaring bankruptcy immediately after graduation. Then, lenders are forced to ask for collateral, which gets rid of a GREAT quality of our current system: you can go to college even if your parents weren't frugal, no matter their income. I think this criticism probably still lands with a 5-year time horizon, maybe less so with a 10-year one.
I like the idea that lenders would take an interest in which major you were pursuing, since that seems like something that could use an actuarial table. I think we would benefit from more directly incentivizing STEM (and other profitable) degrees, which income-driven repayment (IDR) doesn't seem to do. What if IDR left lenders holding the bag?
This was the UX I was going to mention: watching GSL (SC:BW) VoDs. It is tricky there, especially since individual games can vary so heavily.
This article was great! Please define WIC much earlier; that was how I felt while reading, and it was the first feedback I got after sharing it. Thanks for writing this!
My understanding is that the math textbooks were banned in Florida for their use of the "Common Core" framework. I was a math educator, and in my experience resistance to Common Core comes primarily from parents who hate math, are confused about why they can't do their child's math, and somehow take this as a failure of the curriculum.
I really appreciate this post. In Chinese, the spoken pronouns for "he" and "she" are the same (they are distinguished only in writing). It is common for Chinese ESL students to mix up the words "she" and "he" when speaking. I have been trying to understand this, and to relate it to my (embarrassingly recent) realization that probabilistic forecasts (which I now use ubiquitously) constitute a different "epistemology" than the one I used to have. This post is a very concrete exploration of the subject. Thank you!
I think finding the correct link required a good heart. In the hope that Zvi will see this, I am commenting to further boost visibility.
I think top-level posts generate much more than 10x the value of the entire comments section combined, based on my impression that the majority of lurkers don't get deep into the comments. I wonder if giving top-level posts an x^1.5 karma exponent would get closer to the ideal... That would also disincentivize post series...
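A toy sketch of that exponent idea, with made-up karma numbers: under x^1.5 weighting, one 100-karma post outscores the same material split into two 50-karma posts.

```python
# Hypothetical x^1.5 weighting for top-level post karma (illustrative only).
def weighted_score(karma, exponent=1.5):
    return karma ** exponent

one_big_post = weighted_score(100)     # 100^1.5 = 1000.0
split_series = 2 * weighted_score(50)  # 2 * 50^1.5 ≈ 707.1
print(one_big_post, split_series)
```

Under linear weighting the two options tie at 100; the superlinear exponent makes splitting a post into a series strictly worse, which is the disincentive noted above.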