"Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness. Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery."
-- Charles Dickens, from David Copperfield
I think it's fine as a post, but I'd very much prefer not to have explicitly political (and especially explicitly culture-war) posts here.
That suggestion (allowing certain contacts to have a message saying "This person is asleep, but if you're sure, I can wake them up") has that property of truly great ideas - that it had never occurred to me, but I'm now legitimately angry that it doesn't already exist.
Seconded. Sleep is the sine qua non of my productivity, and the first thing that suffers in heat waves.
I could imagine a language model tool like Write With Transformer outperforming a random word generator for this. Have you tried it? They even have one trained on NLP arXiv papers!
I used TiddlyWiki a lot for my PhD work and it was excellent, but haven't touched it since for some reason. Maybe I'll pick it back up.
One thing I remember doing was modifying the JS so that it linkifies any mention of a Tiddler, rather than only doing it when you put the name in the 'make this a link' syntax. I found it handy to have it so that any time I mentioned an author, framework, algorithm, dataset etc it would effortlessly be a link to my other notes about that topic. I'll see if I can find the files.
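The actual change was a tweak to TiddlyWiki's own JavaScript, which I don't have to hand, but the idea is simple enough to sketch. Here's a rough Python illustration (function name and details are mine, not TiddlyWiki's): given the set of existing tiddler titles, wrap any bare mention of a title in `[[...]]` link syntax, longest titles first, leaving already-linked mentions alone.

```python
import re

def linkify(text, tiddler_titles):
    """Wrap bare mentions of known tiddler titles in [[...]] link syntax.

    Longest titles are handled first, so "Gradient Descent" wins over
    a shorter title like "Gradient". Mentions already inside [[...]]
    are skipped via the lookbehind/lookahead guards.
    """
    for title in sorted(tiddler_titles, key=len, reverse=True):
        # Whole-word match, but not when the mention is already bracketed.
        pattern = r"(?<!\[\[)\b" + re.escape(title) + r"\b(?!\]\])"
        text = re.sub(pattern, "[[" + title + "]]", text)
    return text

# Example: mentions of known topics become links automatically.
linkify("I compared BERT with GPT-2 embeddings.", {"BERT", "GPT-2"})
# -> "I compared [[BERT]] with [[GPT-2]] embeddings."
```

A nice property of the guards is that the function is idempotent: running it over already-linkified text changes nothing, so it's safe to apply on every save.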
I thought about this a lot when considering my work. I'm very far from the best YouTuber, and very far from the most knowledgeable person on AI Safety, but nobody else is trying to combine those things, so I'm probably the best AI Safety YouTuber.
The interaction with comparative advantage is interesting though. I can think of several people off the top of my head who are strictly better than me at both AI Safety and public speaking/communication, who I'm confident could, if they wanted to, do my job better than I can. But they don't want to, because they're busy doing other (probably more important) things. It's not the case that a person on the Pareto frontier eats up everything in their chunk of skill space - in practice people can only do a few things at a time. So even if you aren't on the frontier, you're ok as long as the ratio of problem density to 'elbow room' is good enough. You can be the best person in the world to tackle a particular problem, not because nobody else could do it better, but because everyone better is busy right now.
Oh yes. I think for me some of this has come from the growth of the AI Safety field and the shift in the Overton window around this in the time since I started thinking about it. In 2011 I had this feeling of "We are barrelling towards an apocalypse and nobody is paying it any attention". I think a lot of my fire came from the fact that drastic things clearly needed to be done and almost nobody was doing anything, so, shit, I guess it's on me. And now the situation has changed a fair bit, and my personal situation has changed a lot, in that I'm now surrounded by people who also care about this and are working on it, or at least recognise it as an important issue. Sys2 sees pretty clearly that what we've got is nowhere near enough and the problem is very far from solved, but Sys1 sees all these smart and competent people working hard on it, and feels like "Well the whole tribe is oriented to this threat pretty well, so if it can be met, we'll meet it". So what keeps me going is the social stuff, in the sense of "We're all working on this thing in some way, and nobody else seems to be set up to do the specific job I'm doing, so I can be useful to the group".
Makes sense. It seems to flow from the fact that the source code is in some sense allowed to use concepts like 'Me' or 'I', which refer to the agent itself. So both agents have source code which says "Maximise the resources that I have control over", but in Agent 1 this translates to the utility function "Maximise the resources that Agent 1 has control over", and in Agent 2 this translates to the different utility function "Maximise the resources that Agent 2 has control over".
So this source code thing that we're tempted to call a 'utility function' isn't actually valid as a mapping from world states to real numbers until the agent is specified, because these 'Me'/'I' terms are undefined.
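A minimal sketch of the distinction, with made-up agent names: the shared "source code" is only a template containing the indexical 'me', and it doesn't become an actual utility function (a map from world states to numbers) until 'me' is bound to a particular agent.

```python
# The shared "source code": an indexical goal referring to 'me'.
# On its own it is not a utility function, just a template that
# yields one once 'me' is bound to a concrete agent.
def make_utility(me):
    """Bind the indexical 'me', producing a real world-state -> number map."""
    def utility(world):
        # world: dict mapping agent name -> resources that agent controls
        return world[me]
    return utility

# Identical source code, instantiated by two different agents:
u1 = make_utility("Agent 1")
u2 = make_utility("Agent 2")

world = {"Agent 1": 10, "Agent 2": 3}
u1(world)  # 10 - "maximise the resources Agent 1 controls"
u2(world)  # 3  - "maximise the resources Agent 2 controls"
```

The point is that `u1` and `u2` disagree about which world states are good, even though both agents run byte-identical code: the disagreement comes entirely from the binding step, not from the code itself.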
A human with barely enough calories to survive is going to be a significantly weaker chess opponent.