Kaj_Sotala

I formerly worked for MIRI and for what's now the Center on Long-Term Risk; I now make a living as an emotion coach and Substack writer.

Most of my content eventually becomes free, but if you get a paid subscription to my Substack, you'll see posts a week early and make it possible for me to write more.

Sequences

Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind
Concept Safety
Multiagent Models of Mind
Keith Stanovich: What Intelligence Tests Miss

Posts

Kaj's shortform feed (6 karma · 7y · 104 comments)
Defensiveness does not equal guilt (58 karma · 15d · 16 comments)
Four types of approaches for your emotional problems (42 karma · 1mo · 5 comments)
How anticipatory cover-ups go wrong (266 karma · 2d · 23 comments)
Creative writing with LLMs, part 2: Co-writing techniques (1 karma · 1mo · 0 comments)
Creative writing with LLMs, part 1: Prompting for fiction (36 karma · 2mo · 10 comments)
LLM-induced craziness and base rates (70 karma · 2mo · 2 comments)
You can get LLMs to say almost anything you want (80 karma · 2mo · 10 comments)
Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI (172 karma · 5mo · 52 comments)
Things I have been using LLMs for (50 karma · 8mo · 6 comments)
Don’t ignore bad vibes you get from people (158 karma · 8mo · 50 comments)

Comments

My talk on AI risks at the National Conservatism conference last week
Kaj_Sotala · 2d

> We need to morally stigmatize anyone associated with building AGI/ASI

This sounds like a potentially sensible strategy to me. But you say yourself in the intro that you don't want AI to become a partisan issue, and then... go on to speak about it in highly partisan terms, as a thing pushed by Those Evil Liberals?

If you talk about AI by listing various ways it could be used against conservatives, what's to prevent various liberals from going "hey, good idea, let's build AI to shaft conservatives in exactly the way Geoffrey is describing"? Or from just generally going "aha, conservatives seem to think of AI as a particularly liberal issue; I hadn't thought about my stance on AI before, but if conservatives hate it as a leftist globalist issue, then I should probably support it, as a leftist globalist myself"?

My talk on AI risks at the National Conservatism conference last week
Kaj_Sotala · 2d

I expected to strong-upvote this, because "appealing to conservatives in the kind of language and values that they appreciate and working to join them on issues of AI safety" feels like a very laudable goal. However, much of this talk seemed to be not so much "trying to find common ground and things we can all agree on with the conservatives" as "demonizing anyone associated with building AI, including much of the AI safety community itself".

I'm confused how you can simultaneously suggest that this talk is about finding allies and building a coalition together with the conservatives, while also explicitly naming "rationalists" in your list of groups that are trying to destroy religion, among other things. I would expect the net result of this talk to be making anyone sympathetic to it discount the opinions of many of the people who've put the most work into understanding e.g. technical AI safety or AI governance.

MAGA speakers at NatCon were mostly against AI
Kaj_Sotala · 2d

As far as I understand, Archive Today circumvents paywalls without permission.

MAGA speakers at NatCon were mostly against AI
Kaj_Sotala · 2d

Note that the Internet Archive and Archive Today are different services.

The Rise of Parasitic AI
Kaj_Sotala · 2d

The spiritual bliss attractor was apparently already present in Opus 3 [1, 2, 3]; Anthropic just didn't happen to notice it earlier.

MAGA speakers at NatCon were mostly against AI
Kaj_Sotala · 5d

I think that didn't tag/notify him, but @geoffreymiller does, in case he wants to participate in the discussion.

Kaj's shortform feed
Kaj_Sotala · 7d

Looking at the karma for some of my recent posts, "publish more by just writing about anything that happens to interest you and don't try to predict whether it'll interest the readers" sure is a high-variance strategy (as you might have expected).

Simulating the *rest* of the political disagreement
Kaj_Sotala · 10d

Agree. My first thought was something like "belief correlation" or "belief interconnectedness", as in "I forgot that beliefs are correlated". Also a vague reference to "everything is correlated".

Generative AI is not causing YCombinator companies to grow more quickly than usual (yet)
Kaj_Sotala · 10d

> It seems safe to assume that most YC companies were not using it much before the launch of ChatGPT (if only because the technology wasn’t available)

Strictly speaking, the technology was available (I got a startup I consulted for to adopt GPT-3 roughly a year before ChatGPT happened). That said, it wasn't very widely known, so your take still seems like a reasonable approximation.

Mech Interp Wiki Page and Why You Should Edit Wikipedia
Kaj_Sotala · 11d

Wikipedia articles have traditionally been emphasized in LLM training. OpenAI has never disclosed the dataset used to train GPT-4 or GPT-5, but the dataset used for training GPT-3 involved 3.4 repetitions of Wikipedia.

The Pile also has English Wikipedia repeated three times, which is a higher multiplier than any other subcomponent.

Wikitag Contributions

Internal Family Systems · 3 years ago · (+68/-20)
Internal Family Systems · 4 years ago · (+306)
Internal Double Crux · 4 years ago · (+92)
Arguments As Soldiers · 5 years ago · (+473/-85)
AI Advantages · 5 years ago
Willpower · 5 years ago · (+6/-9)
Aumann's Agreement Theorem · 5 years ago · (+26/-501)