All of Robert Kennedy's Comments + Replies

I wish this post talked about object-level trade-offs. It did that somewhat with the reference to the importance of "have a decision theory that makes it easier to be traded with". However, the opening was extremely strong and was not supported:

I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world. Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dolla

... (read more)

Thanks for the feedback. I am new to writing in this style and may have erred too much towards deleting sentences while editing. But, as they say, if you never cut too much you're always too verbose. In particular, I appreciate the point that, when talking about how I am updating, I should make clear where I am updating from.

For instance, regarding human level intelligence, I was also describing relative to "me a year/month ago". I relistened to the Sam Harris/Yudkowsky podcast yesterday, and they detour for a solid 10 minutes about how "human level" intelligence is a st... (read more)

I understand your reasoning much better now, thanks! "GPT as a simulator/platform seems to me like an existence proof for a not-artificially-handicapped human level AI attractor state" is a great way to put it and a very important observation.

I think the attractor state is more nuanced than "human-level". GPT is incentivized to learn to model "everyone everywhere all at once", if you will -- a superhuman task -- and while the default runtime behavior is human-level simulacra, I expect it to be possible to elicit superhuman performance by conditioning the model in certain ways or with a relatively small amount of fine-tuning/RL. Also, being simulated confers many advantages for intelligence (instances can be copied/forked, are much more programmable than humans, potentially run much faster, etc). So I generally think of the attractor state as being superhuman in some important dimensions, enough to be a serious foom concern.

Broadly, though, I agree with the framing -- even if it's somewhat superhuman, it's extremely close to human-level and human-shaped intelligence compared to what's possible in all of mindspace, and there is an additional unsolved technical challenge to escalate from human-level/slightly superhuman to significantly beyond that. You're totally right that it removes the arbitrariness of "human-level" as a target/regime. I'd love to see an entire post about this point, if you're so inclined. Otherwise I might get around to writing something about it in a few months, lol.

Right, okay. I am trying to learn your ontology here, but the concepts are not close to my current inferential distance. I don't understand what the 95% means. I don't understand why the d100 has 99% chance to be fixed after one roll, while a d10 only has 90%. By the second roll I think I can start to stomach the logic here though, so maybe we can set that aside.

In my terms, when you say that a Bayesian wouldn't bet $1bil:$1 that the sun will rise tomorrow, that doesn't seem correct to me. It's true that I wouldn't actually make that nightly bet, because t... (read more)

I am not sure I understand, probably because I am too preprogrammed by Bayesianism.

You roll a d20, it comes up with a number (let's say 8). The Frequentist now believes there is a 95% chance the die is loaded to produce 8s? But they won't bet 20:1 on the result, and instead they will do something else with that 95% number? Maybe use it to publish a journal article, I guess.

With two hypotheses -- die is fair / die is 100% loaded -- a single roll doesn't discriminate at all. The key insight is that you have to combine Bayesian and Frequentist theories. The prior is heavily weighted towards "the die is fair", such that even 3 or 4 of the same number in a row doesn't push the actionable probability all the way to "more likely weighted", but as independent observations continue, the weight of evidence accumulates.
Bayesianism defines probability in terms of belief. Frequentism defines probability as a statement about the world's true probability. Saying "[t]he Frequentist now believes" is therefore asking for a Frequentist's Bayesian probability.
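A minimal sketch of the evidence accumulation described in this exchange, assuming for simplicity a pre-specified loaded face and an illustrative prior (both numbers are mine, not from the comments):

```python
# Posterior that a d20 is loaded to always show one pre-specified face,
# after seeing that face k times in a row. The prior on "loaded" is an
# illustrative stand-in for "heavily weighted towards fair".

def posterior_loaded(k, prior_loaded=1e-6, sides=20):
    # Likelihood of k identical rolls of the pre-specified face:
    #   fair die:   (1 / sides) ** k
    #   loaded die: 1
    p_fair = (1 - prior_loaded) * (1 / sides) ** k
    p_loaded = prior_loaded
    return p_loaded / (p_loaded + p_fair)

for k in range(8):
    print(k, posterior_loaded(k))
```

With a sufficiently skeptical prior, even three or four repeats leave "loaded" below 50%, but each further repeat multiplies the odds by 20, so the evidence eventually dominates.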
2 · Ege Erdil · 3mo
I strong upvoted this because something about this comment makes it hilarious to me (in a good way).

I would like to note that the naive version of this is bad. First, the naive version falls prey to new grads (who generally have nothing) declaring bankruptcy immediately after graduation. Then, lenders are forced to ask for collateral, which gets rid of a GREAT quality our current system has - you can go to college even if your parents weren't frugal, no matter their income. I think this criticism probably still lands with a 5-year time horizon, maybe less so with a 10-year one.

I like the concept that lenders would take an interest in which major you were getting... (read more)

3 · Dennis Towne · 3mo
Yes, the naive version of this is bad; but the point of a change like this isn't that the immediate downstream effects are bad. The point is that the system as a whole is a giant adaptive object, and a critical part of the control loop is open. Closing the control loop has far, far more impact than just the naive version. Consider cause and effect down the timeline:

* Students are allowed to default, and start defaulting.
* Loan companies change behavior, both to work with existing loan holders (so they don't default) and to be more selective about who they give loans to.
* Loans become more likely for careers / degrees which have the ability to make money (STEM and friends), less likely for other degrees.
* Number of students, and amount of money coming in to universities, drops.
* Universities actually experience price pressure. They start cost cutting and dropping less useful things, and start shifting resources to degree programs with the most students.
* Cost of a university degree slowly drops over time due to reduced demand and reduced funding.
* Over time, there are broader societal shifts to deemphasize the idea that "everyone needs a degree". Trade and other schools gain more prominence.
* Universities start experiencing increased competitive pressure with trade schools.

... and other effects. Also, this is iterative - all of these components take time to respond and adjust to the new equilibrium, after which they will need to re-adapt. Yes, it's not a perfect solution, and yes, there's definitely the concern that poor / disadvantaged students will have more trouble getting loans. But compensating somewhat for this would be the price drop, additional emphasis on trade schools, and deemphasis on needing a degree for any and all jobs. Another expected objection might be, "with all these possible changes, how do we know this will be better?" To that I would answer: because we know the system is at least partially broken because

This was the UX I was going to mention - watching GSL (SC:BW) VoDs. There it is tricky, especially since individual games can vary so heavily.

This article was great! Please define WIC much earlier - that was my reaction while reading it, and the first feedback I got after sharing it. Thanks for writing this!

My understanding is that the math textbooks were banned in Florida for their use of the "Common Core" framework. I was a math educator, and my experience is that resistance to Common Core comes primarily from parents who hate math, are confused about why they can't do their child's math, and somehow take this as a failure mode.

I really appreciate this post. In Chinese, the spoken pronouns for "he" and "she" are the same (they are distinguished in writing). It is common for Chinese ESL students to mix the words "she" and "he" when speaking. I have been trying to understand this, and relate it to my (embarrassingly recent) understanding that probabilistic forecasts (which I now use ubiquitously) are a different "epistemology" than I used to have. This post is a very concrete exploration of the subject. Thank you!

I think finding the correct link required a good heart. In the hope Zvi will see you, I am commenting to further boost visibility.

I think top level posts generate much more than 10x the value of the entire comments section combined, based on my impression that the majority of lurkers don't get deep into the comments. I wonder if giving top level posts an x^1.5 karma exponent would get closer to the ideal... That would also disincentivize post series...

No, since if I had rolled low I wouldn't want to, like, give them significantly more notice than necessary as I job hunted. I offered to do something like hash a seed to use on an RNG; they didn't think that was necessary.

Yeah, these kinds of real-life complications. Cool idea about the hash.

There is a "going going" in this chapter as well.

Fixed. Thanks.

Actually, for any given P which works, P'(x)=P(x)/10 is also a valid algorithm.

If I am following, it seems like an agent which says "bet 'higher' if positive and 'lower' otherwise" does well.
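A randomized cousin of this idea - draw a fresh threshold each round instead of always using zero - beats 50% against any fixed pair of distinct reals, since the threshold sometimes lands strictly between them. A quick simulation sketch (the Gaussian threshold and the particular pair are my own illustrative choices):

```python
import random

def guess_higher(revealed, rng):
    # Draw a fresh threshold from a distribution with support on all reals;
    # claim the revealed number is the larger one iff it beats the threshold.
    return revealed > rng.gauss(0.0, 1.0)

def win_rate(a, b, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # One of ROB's two numbers is revealed at random.
        revealed, hidden = (a, b) if rng.random() < 0.5 else (b, a)
        said_higher = guess_higher(revealed, rng)
        wins += said_higher == (revealed > hidden)
    return wins / trials

print(win_rate(0.3, 0.7))  # strictly above 0.5 in expectation
```

Against the pair (0.3, 0.7) the win rate is roughly 0.5 plus half the probability that the threshold lands between them; the farther apart ROB's numbers are, the bigger the edge.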


I do not believe that "any monotonically increasing bounded function over the reals is continuous". For instance, choose some monotonically increasing function bounded to (0, 0.4) for x < -1, another bounded to (0.45, 0.55) for -1 < x < 1, and a third bounded to (0.6, 1) for x > 1.

I did not check the rest of the argument, sorry.
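For what it's worth, a concrete instance of the construction in the parent comment (the particular pieces are my own, chosen to sit inside the stated bands):

```python
import math

def f(x):
    # Monotonically increasing, bounded in (0, 1), discontinuous at -1 and 1.
    if x < -1:
        return 0.2 + (0.4 / math.pi) * math.atan(x + 1)   # values in (0, 0.2)
    elif x < 1:
        return 0.5 + 0.05 * x                             # values in [0.45, 0.55)
    else:
        return 0.8 + (0.4 / math.pi) * math.atan(x - 1)   # values in [0.8, 1)

# The jump at x = -1: left limit 0.2, but f(-1) = 0.45.
print(f(-1 - 1e-9), f(-1))
```

Each piece is strictly increasing, the whole function stays inside (0, 1), and the upward jumps at x = -1 and x = 1 make it discontinuous, so monotone plus bounded does not imply continuous.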


 Could you explain why you are almost certain?

P.S. Thanks for making this post, it's been an interesting problem to think about.

Could you explain why it's clearly impossible to produce an algorithm that gives better than 50% chance of success on the first round? I think I follow the rest of your argument.

ROB selects A and B. First suppose A < B, and suppose A is revealed. Further suppose that some deterministic algorithm R exists which takes in A and produces the probability that A is smaller. In round one the only input to the algorithm can be A alone. Since we have supposed that R is "better than 50%", R must yield P(A smaller) > 0.5, and hence P(A bigger) < 0.5. Now suppose the opposite case, that A > B. Again the only input to our algorithm can be A for the first round. This time we must receive as output P(A smaller) < 0.5, and thus P(A bigger) > 0.5. But in both cases our only input was A, so R produces two different results on the same input - it must not be deterministic. This is a contradiction, hence there is no such deterministic algorithm R. It is possible that there is a nondeterministic algorithm R', however I'm almost certain that no such algorithm can outperform a deterministic one in a case like this.

Good questions! It's a forum with posts between two users "Iarwain" (Yudkowsky) and "lintamande", who is co-authoring this piece. There are no extraneous posts, although there are (very rarely) OOC posts, for instance announcing the Discord or linking to a thread for a side story.

In each post, either user will post as either a character (i.e. Keltham, Carissa, and others - each user writes multiple characters) or without a character (for third-person exposition). I usually use the avatars when possible to quickly identify who is speaking.

You don't need to pay attention to history or post time, until you catch up to the current spot and start playing the F5 game (they are writing at a very quick pace).

By "better than 50% accuracy" I am trying to convey "Provide an algorithm such that if you ran a casino where the players acted as ROB, the casino can price the game at even money and come out on top, given the law of large numbers". 

(Perhaps?) More precisely, I mean that for any given instantiation of ROB's strategy, and for any given target reward R and payoff probability P < 1, there exists a number N such that if you ran N trials betting even money with ROB, you would have probability at least P of ending with at least R payoff (assuming you start with 1 dollar or whatever).
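Restating that guarantee in symbols, as I read it (notation is mine): writing $W_N$ for the bankroll after $N$ even-money rounds with $W_0 = 1$,

```latex
\forall\, \text{ROB strategies},\ \forall R \ge 1,\ \forall P < 1,\ \exists N:\quad
\Pr\!\left[ W_N \ge R \right] \ge P
```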

You can assume ROB will know your algorithm when choosing his distribution of choices. 

Computability is not important. I only meant to cut off possibilities like "ROB hands TABI 1 and 'the number which is 0 if the goldbach conjecture is true and 2 otherwise'"

You can restrict yourself to arbitrary integers, and perhaps I should have.

I don't see how 2 is true. 

If you always answer that your number is lower, you definitely have exactly 50% accuracy, right? So ROB isn't constraining you to less than 50% accuracy.

Even without that section, the modeling of your evaluation as a mapping from input number to a binary "you will say that this number is higher than the other" result is pretty binding. If ROB knows (or can infer) your mapping/algorithm, it can just pick numbers for which you're wrong, every time. Which turns this into a "whoever knows the other's algorithm better, wins" situation.
Yeah, I was confused. I was thinking you had to state a probability of having the larger number (rather than a binary guess) and try to get better than chance according to some scoring rule.
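A toy illustration of the "ROB knows your mapping" exploit, in the variant where ROB also chooses which of the two numbers gets revealed (the threshold-at-zero mapping and the numbers are hypothetical):

```python
def tabi_says_higher(revealed):
    # One concrete deterministic mapping: claim "the revealed number is
    # the larger one" whenever it is positive.
    return revealed > 0

def rob_exploit():
    # ROB knows the mapping and, in this variant, picks which number to
    # reveal: two positive numbers, with the smaller one revealed, so the
    # "higher" claim is wrong.
    a, b = 1.0, 2.0
    revealed, hidden = a, b          # reveal the smaller of the pair
    guess_correct = tabi_says_higher(revealed) == (revealed > hidden)
    return guess_correct

print(rob_exploit())
```

(With a random reveal, a monotone threshold like this one can't actually be forced below 50%; the every-round exploit needs ROB to control the reveal, hence the variant above.)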