Dumping out a lot of thoughts on LW in hopes that something sticks. Eternally upskilling.
I write the ML Safety Newsletter.
DMs open, especially for promising opportunities in AI Safety and potential collaborators.
Models sometimes have trouble with PDFs: they seem to handle markdown and similar formats much better, so it may be worth converting. I've had tasks fail completely with PDF input and work as expected with markdown input of the same content.
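For example, a minimal conversion sketch in Python, assuming the `pymupdf4llm` package (one of several PDF-to-markdown converters; any equivalent tool should work, and the file paths here are placeholders):

```python
# Convert a PDF to markdown before handing it to a model.
# Assumes `pip install pymupdf4llm`.
import pymupdf4llm

md_text = pymupdf4llm.to_markdown("paper.pdf")  # returns the PDF content as a markdown string

with open("paper.md", "w", encoding="utf-8") as f:
    f.write(md_text)
```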
That's interesting; I don't know why it would take that much longer for you to get what appear to be the same benefits I get within a single day of detoxing. I haven't tried 4 weeks; maybe it's a qualitatively different experience.
I generally agree on the point about reading consumption.
What is the DANNet joke?
I’ve reflected on whether that perception is largely subjective preference or a honed sense of how others come to understand things, but I can’t know (time will tell).
Both professionally and in my personal capacity, I do a lot of communication about AI safety to audiences with varying levels of background knowledge, and I got the same sense: this is not how you bridge the gap between what you're actually trying to communicate and the general population/DC folk/intellectual side of genpop/etc. I basically agree with Yudkowsky on all of his claims. My primary problem with his writing has always been that it only works on those who are already at least a bit rationalist and have the capacity to become more so. I assumed from the preliminary reviews that the contributions from Nate and the editing team had fixed this, that they had finally turned Yudkowsky's writing into good general-audience writing, and I was surprised and disappointed to find out this was not the case. The praise for the book from outsiders still gives me some hope that I'm wrong on this front, but it doesn't meaningfully affect my assessment of the book's quality.
Fixed, thank you!
(I might write a post on this at some point.)
There's a meditation technique that I have used to improve my typing speed, but that seems pretty generalizable: I open up a typing test[1] and try to predict my mistakes before they happen. This could look like my finger slipping, running into faulty muscle memory for a certain word, or just having a cache miss and stumbling for a second. Then I use this awareness to avoid those mistakes, ideally stopping them before they happen even once.
I've learned to type from scratch several times, going from hunt and peck to touch typing with QWERTY, to touch typing Colemak, to learning to use the Kinesis Advantage 2, to learning the CharaChorder 2 and its custom layout, which is now my daily driver. I only started doing this meditation about halfway through learning Colemak, and it noticeably boosted my accuracy in a relatively lasting way. However, it's pretty taxing to meditate while also trying to type as fast as you can, especially on the CharaChorder, because it has an entirely new type of cognitive load that I'm still learning to handle.
I would probably generalize this if I were trying to get really good at another DEX-reliant skill, but for now I'm not. It feels related to the part of me that more generally notices when I'm Predictably Wrong, but in practice it felt like a meaningfully different thing to train.
How did you determine the cost and speed of it, given that there is no unified model that we have access to, just some router between models? Unless I'm just misunderstanding something about what GPT-5 even is.
It's a balance between getting the utility out of using smarter and smarter assistants and not being duped by them. This is really hard, and it's definitely not a bet that everyone should make.
My understanding is that bookstores put a decent amount of weight on it. A book only makes the list if it sells well, so stocking many copies seems like a good bet for one's store.