I operate by Crocker's rules.
Not OP, but it could be that o1 underperformed their expectations.
Thank you, that's useful evidence!
Thank you for collecting those links :-)
I've listened to two or three of the interviews (and ~three other talks from a long time ago), and I still have no clue what the central claims are, what the reasoning supporting them is &c. (I understand it most for Zvi Mowshowitz and Sarah Constantin, less for Jessica Taylor, and least for Benjamin Hoffman & Vassar). I also don't know of anyone who became convinced of or even understood any of Michael Vassar's views/stances through his writing/podcasts alone—it appears to almost always happen through in-person interaction.
The obvious advice is of course "whatever thing you want to learn, let an LLM help you learn it". Throw that post in the context window, zoom in on terms, ask it to provide examples in the way the author intended it, let it generate exercises, let it rewrite it for your reading level.
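For concreteness, a minimal sketch of that workflow, assuming the OpenAI Python client (>=1.0) and a hypothetical local file post.md; any chat-capable LLM interface works the same way:

```python
# Minimal sketch: put the post in the context window and ask for term
# explanations, examples, a simpler rewrite, and exercises.
# Assumes the `openai` package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

with open("post.md") as f:  # hypothetical path to the post you're studying
    post_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model
    messages=[
        {"role": "system", "content": "You are a patient tutor."},
        {
            "role": "user",
            "content": (
                "Here is a post I'm trying to understand:\n\n"
                + post_text
                + "\n\nExplain the key terms, give concrete examples of the "
                "central claims as the author intended them, rewrite the post "
                "at a simpler reading level, and end with three exercises."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```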
If you're already doing that and it's not helping, maybe… more dakka? And you're going to have to expand on what your goals are and what you want to learn/make.
I remember that Lightcone was interested in working on human intelligence amplification and/or pausing AI (I can't find the LW comment, I'm afraid). Is that still part of the plan?
Huh, thanks, nevermind.
We had originally just wanted space cycles donated
I think this is a mistake, and it should be "spare cycles" instead.
I think I'd've wanted to know about tigramite when learning about causal inference; it's a library for doing causal inference on time-series data.
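For a sense of what using it looks like, here's a minimal sketch, assuming tigramite 5.x (the ParCorr import path differs in older releases) and toy random data in place of a real time series:

```python
# Minimal tigramite sketch: run PCMCI with partial correlation tests
# on a toy multivariate time series.
import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests.parcorr import ParCorr

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 3))  # 1000 timesteps, 3 variables (toy data)
dataframe = pp.DataFrame(data, var_names=["X", "Y", "Z"])

pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci.run_pcmci(tau_max=2, pc_alpha=0.05)

# p_matrix[i, j, tau] holds the p-value for "variable i at lag tau causes variable j"
print(results["p_matrix"].shape)
```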
Has @MadHatter replied or transferred the money yet?