Programmer, rationalist, chess player, father, altruist.
I have no inside information. My guess is #5 with a side of 1, 6, and "the letter wasn't legally binding anyway so who cares."
I think that the lesson here is that if your company says "Work here for the principles in this charter. We also pay a shitload of money" then you are going to get a lot of employees who like getting paid a shitload of money regardless of the charter, because those are much more common in the population than people who believe the principles in the charter and don't care about money.
Interesting. I agree, I didn't even notice that Bb3 would be attacking a4, I was just thinking of it as a way to control the d-file. I hadn't really thought about how good that position would be if white just did "not much."
I also hadn't really thought about exactly how much better black was after the final position in the Qxb5 line (with Bxd5 exd5), it was just clear to me black was better and the position was personally appealing to me (it looks kind of one-sided, where white has no particular counterplay and black can sit around maneuvering all day to try to pick up a pawn). Very difficult for me to guess whether it should be objectively winning or not.
Fun exercise, thanks for making it!
I'm 2100 USCF. I looked at the first position for a few minutes and read the AI reasoning. My assessment:
In the end, I would play Qxb5 and feel confident Black is doing well. I can't refute Qc5, though; I think it's probably sort of OK too. But if only one of them is a good move, then I think it's Qxb5.
I don't feel this pressure. I just decline to answer when I don't have a very substantial opinion. I do notice myself sort of judging the people who voted on things where clearly the facts are barely in, though, which is maybe an unfortunate dynamic, since others may reasonably interpret it as "feel free to take your best guess."
I think "theory B" (DAE + EA) is likely true, but he also seems to have been independently quite incompetent. The anecdotes about his mismanagement at Alameda and FTX (e.g. a total lack of accounting, repeated expensive security breaches, taking objectively dumb risks, not sleeping, alienating the whole Alameda team by being so untrustworthy) weren't clever utilitarian coinflip gambits that he got unlucky on, or selfish defections that he was trying to get away with. They were just dumb mistakes.
My guess is that a number of those mistakes largely came from a kind of overapplication of startup culture (move fast, break things, grow at all costs, minimize bureaucracy, ask forgiveness rather than permission) way past the point where it made sense. Until the end he was acting like he was running a ten-person company that had to 100x or die, even though he was actually running a medium-sized popular company with a perfectly workable business model. (Maybe he justified this to himself by thinking of it like he had to win even bigger to save the world with his money, or something, I don't know.)
Since he was very inexperienced and terrible at taking advice, I don't think there's anything shocking about him being really bad at being in charge of a company moving a lot of money, regardless of how smart he was.
I work at Manifold. I don't know if this is true, but I can easily generate some arguments against:
Personally for these reasons I am more eager to see features developed in the LW codebase than the Manifold codebase.
I tried Adderall and Ritalin each just for one day and it was totally clear to me based on that that I wasn't interested in taking them on a regular basis.
FWIW, I went from ~40 hrs/week of full-time programming to ~15 hrs/week of part-time programming after having a kid, and it's not obvious to me that I get less total work done. Certainly not half as much. But I would never have said I worked hard, so I could have predicted as much.
Never mind bettors -- part of my project for improving the world is that I want people like Casey to look at a prediction market and think, "Oh, a prediction market. I take this probability seriously, because if it were obviously wrong, someone could come in and make money by fixing it, and then it would be right." If he doesn't understand that line of argument, then indeed, why would Casey ever take the probability any more seriously than a Twitter poll?
I feel like right now he might have the vibe of that argument, even if he doesn't actually understand it? But I think you have to really comprehend the argument before you will take the prediction market more seriously than your own uninformed feeling about the topic, or your colleague's opinion, or one research paper you skimmed.
I work at Manifold. I think it's notable that these two experienced tech journalists have had lots of repeated exposure to the idea of prediction markets, but it sounds like they only sort of figured out the basic concept?
In the discussion on insider trading, nobody mentions the extremely obvious point, which is that the prediction market is trying to incentivize the people with private information (maybe "insider" information, or maybe just something they haven't said out loud) to publicize what they know. If Casey actually cares about whether Linda Yaccarino will be the CEO of X next year, he should be excited by the idea that some guy at Twitter will come and insider trade in his market. But they never said anything like this -- they just said that maybe the market was supposed to very generally aggregate the wisdom of crowds.
It also sounds like they don't really understand why it would aggregate the wisdom of crowds better than, for example, a poll. Casey was like "well, when people have a Twitter poll, then partisans stuff the ballot box", implying that a similar result would be likely to happen with a prediction market on who will be the next Speaker, ignoring the obvious point that it costs a bunch of money to "stuff the ballot box" on a prediction market that isn't a self-fulfilling prophecy.
Perhaps relatedly, it sounded like Kevin and/or Casey had absolutely no clue how a prediction market actually works, numerically. At the end when they were making the market, Casey wasn't like "OK, bet it to 25%, since I think that's the chance." Instead Kevin was like "OK, I'll bet 100 mana," and then they were like "Huh, how about that, now it says 10%. Oops, I bet 100 more and now it says 8%." It seems like they are totally missing the core concept: the prediction market is specifically designed to incentivize you to move the price to the probability you believe, which is like the first thing I ever learned about prediction markets in my life?
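To make the "bet 100 mana, watch the price move" mechanics concrete, here's a toy automated market maker using the logarithmic market scoring rule (LMSR). This is a standard textbook design, not necessarily Manifold's actual mechanism (Manifold uses its own CPMM variant), and the liquidity parameter and starting state are illustrative assumptions -- but the qualitative behavior is the same: each bet pushes the price, and your expected profit is maximized by betting until the price equals your own credence.

```python
import math

class LMSRMarket:
    """Toy LMSR market maker for a binary YES/NO question.

    Cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b)).
    Buying shares costs the change in C; the implied probability
    is the derivative of C with respect to q_yes.
    """

    def __init__(self, b=100.0):
        self.b = b          # liquidity: larger b = harder to move the price
        self.q_yes = 0.0    # outstanding YES shares
        self.q_no = 0.0     # outstanding NO shares

    def _cost(self, q_yes, q_no):
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def prob(self):
        """Current implied probability of YES."""
        ey = math.exp(self.q_yes / self.b)
        en = math.exp(self.q_no / self.b)
        return ey / (ey + en)

    def buy(self, outcome, spend):
        """Spend `spend` mana on 'YES' or 'NO'; returns shares bought.

        Solves C(new_q) - C(old_q) = spend in closed form.
        """
        c0 = self._cost(self.q_yes, self.q_no)
        if outcome == "YES":
            other = math.exp(self.q_no / self.b)
            new_q = self.b * math.log(math.exp((c0 + spend) / self.b) - other)
            shares, self.q_yes = new_q - self.q_yes, new_q
        else:
            other = math.exp(self.q_yes / self.b)
            new_q = self.b * math.log(math.exp((c0 + spend) / self.b) - other)
            shares, self.q_no = new_q - self.q_no, new_q
        return shares

# A rational bettor keeps buying only while the price is on the wrong
# side of their credence, so they stop exactly when price == belief.
m = LMSRMarket(b=100)
print(f"start: {m.prob():.1%}")                    # 50.0%
m.buy("NO", 100)
print(f"after betting 100 on NO: {m.prob():.1%}")  # 18.4%
m.buy("NO", 100)
print(f"after betting 100 more:  {m.prob():.1%}")  # 6.8%
```

Note the diminishing movement: the second 100 mana moves the price less than the first, which is also why "stuffing the ballot box" gets expensive fast -- pushing the price far from the crowd's estimate costs real money that informed traders can then collect.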
In the end, their feelings about prediction markets seemed totally vague and vibes-based. On the one hand, the wisdom of crowds has good vibes. On the other hand, insider trading and crypto/transactionalization of everyday things have bad vibes. On the gripping hand, gambling with play money is cute and harmless. Therefore, prediction markets are a land of contrasts.
My takeaway is that prediction markets are harder to understand than I think and I am not sure what to do about that.