Can you explain more about the job helping the bank "win zero sum games against people who didn't necessarily deserve to lose"? That doesn't match my model of how investment banks work.
Collins, not Gladwell lmao
Hey Zvi. Love and appreciate your writing. I've been an avid reader since the covid posts. I know it's difficult since your posts are so long, but this one and others could use a proofread for typos. I regret that I didn't write down any particular instances, but there were a number of them in this post.
That sort of thing doesn't usually bother me, but your writing is precise and high-entropy to the point where a single mistaken word can make the thought much harder to digest. For example, in the sentence:
If this changes the rule from ‘you can build a house but you owe us $23k’ to ‘you can build a house and pay us $230’ then that is good on the margin.
I wasn't sure where the $230 number came from. Was it supposed to be $230k, a figure you used later in that section? Or did it follow from something you wrote earlier in that section that I just didn't understand? I skipped to the next section without trying to resolve my confusion, since I knew that your posts often contain typos and scrutinizing the $230 might prove futile. If I had confidence that your writing lacked typos, I would have spent more time with it.
It's very possible that Murati's talk at Dartmouth was my source's source, i.e., the embedded video around 13:30. She doesn't say GPT-5 specifically, but she does sort of imply it by mentioning the jump from GPT-3 to GPT-4, then saying "And then in the next couple of years we're looking at PhD-level intelligence for specific tasks...Yeah, a year and a half let's say."
I have moderately strong evidence that OpenAI has pushed back GPT-5 to late 2025 (not naming source for confidentiality reasons). Conditional on this being true:
Strong upvoted. This post (especially as it relates to ask/guess culture) puts into words what I've previously referred to vaguely as "spiritual differences". I'm hopeful that I can train myself to recognize mismatched stances and pivot, instead of concluding that the other person and I have incompatible personalities.
The speed with which GPT-4 was hooked up to the internet via plugins has basically convinced me that boxing isn't a realistic strategy. The economic incentive to unbox an AI is massive. Combine that with the fact that an ASI would do everything it could to appear safe enough to be granted internet access, and I just don't see a world in which everyone cooperates to keep it boxed.
Are you able to steelman the argument in favor of AI being an existential risk to humanity?
It is truly terrible.