I previously worked for MIRI and what's now the Center on Long-Term Risk; I'm now making a living as an emotion coach and Substack writer.
Most of my content becomes free eventually, but if you get a paid subscription to my Substack, you'll see my posts a week early and make it possible for me to write more.
I expected to strong-upvote this because "appealing to conservatives in the kind of language and values that they appreciate and working to join them on issues of AI safety" feels like a very laudable goal. However, much of this talk seemed to be less about "trying to find common ground and things we can all agree on with conservatives" and more about "demonizing anyone associated with building AI, including much of the AI safety community itself".
I'm confused how you can simultaneously suggest that this talk is about finding allies and building a coalition with conservatives, while also explicitly naming "rationalists" in your list of groups that are trying to destroy religion, among other things. I would expect the net result of this talk to be that anyone sympathetic to it discounts the opinions of many of the people who've put the most work into understanding e.g. technical AI safety or AI governance.
I think that didn't tag/notify him, but @geoffreymiller does, in case he wants to participate in the discussion.
Agree. My first thought was something like "belief correlation" or "belief interconnectedness", as in "I forgot that beliefs are correlated". It's also a vague reference to "everything is correlated".
It seems safe to assume that most YC companies were not using it much before the launch of ChatGPT (if only because the technology wasn’t available)
Strictly speaking, the technology was available (I got a startup that I consulted for to adopt GPT-3 roughly a year before ChatGPT happened). That said, it wasn't very widely known, so your take still seems like a reasonable approximation.
Wikipedia articles have traditionally been emphasized in LLM training. OpenAI never disclosed the datasets used to train GPT-4 or GPT-5, but the dataset used to train GPT-3 repeated Wikipedia about 3.4 times over the course of training.
The Pile likewise repeats English Wikipedia three times, a higher multiplier than any of its other components.
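To give a rough sense of what a multiplier like that means, here's a back-of-the-envelope sketch using the (rounded) dataset sizes and mixture weights reported in the GPT-3 paper; the exact figures aren't the point, just the relationship between dataset size, sampling weight, and number of repetitions:

```python
# A rough illustration (my own, not from the paper verbatim) of why small,
# curated sources end up repeated: epochs = total training tokens * mixture
# weight / dataset size. Figures are the rounded ones from the GPT-3 paper
# (Brown et al. 2020, Table 2.2), so the arithmetic lands near, not exactly
# at, the reported 3.4 repetitions for Wikipedia.

datasets = {
    # name: (dataset size in billions of tokens, weight in training mix)
    "Common Crawl (filtered)": (410, 0.60),
    "WebText2": (19, 0.22),
    "Books1": (12, 0.08),
    "Books2": (55, 0.08),
    "Wikipedia": (3, 0.03),
}

TOTAL_TRAINING_TOKENS_B = 300  # GPT-3 was trained on roughly 300B tokens

for name, (size_b, weight) in datasets.items():
    tokens_drawn_b = TOTAL_TRAINING_TOKENS_B * weight
    epochs = tokens_drawn_b / size_b  # how many times the source gets repeated
    print(f"{name:25s} ~{epochs:.1f} repetitions")

# Wikipedia comes out at around 3 repetitions with these rounded numbers
# (the paper reports 3.4), while the much larger Common Crawl portion is
# seen less than once.
```

In other words, the oversampling isn't about Wikipedia being huge; it's small but weighted heavily relative to its size, so the model sees it several times while most of Common Crawl is seen only partially.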
This sounds like a potentially sensible strategy to me. But you say yourself in the intro that you don't want AI to become a partisan issue, and then... go on to speak about it in highly partisan terms, as a thing pushed by Those Evil Liberals?
If you are talking about AI and listing various ways that AI could be used against conservatives, what's to prevent various liberals from going "hey, good idea, let's build AI to shaft conservatives in exactly the way Geoffrey is describing"? Or from just generally going "aha, conservatives seem to think of AI as a particularly liberal issue; I hadn't thought much about my stance on AI before, but if conservatives hate it as a leftist globalist issue then I should probably support it, as a leftist globalist myself"?