tailcalled

Sequences

Linear Diffusion of Sparse Lognormals: Causal Inference Against Scientism

Wikitag Contributions

No wikitag contributions to display.

Comments (sorted by newest)

The Tortoise and the Language Model (A Fable After Hofstadter)
tailcalled · 17d · 40

It was quite real since I wanted to negotiate about whether there was an interesting/nontrivial material project I could do as a favor for Claude.

AI development as the first fully-automated job
tailcalled · 18d · 20

Humans carry reproductive and hunting instincts. You could call these a bag of heuristics, but they are heuristics on a different level than an AI's, and in particular might not be chosen for transfer to AIs. Furthermore, humans are harder to copy or parallelize, which gives them a different privacy profile than AIs.

The trouble with intelligence (human, artificial, and evolutionary alike) is that it is all about regarding the world as an assembly of the familiar. This makes data/experience a major bottleneck for intelligence.

AI development as the first fully-automated job
tailcalled · 18d · 20

I'm imagining a case where there's no intelligence explosion per se, just bags-of-heuristics AIs with gradually increasing competence.

The Tortoise and the Language Model (A Fable After Hofstadter)
tailcalled · 24d · 64

According to revealed preference, Claude certainly enjoys this sort of recursive philosophy - when I give Claude a choice, it's the sort of thing it tends to pick.

Inscrutability was always inevitable, right?
Answer by tailcalled · Aug 07, 2025 · 42

I think some of the optimism about scrutability might derive from reductionism. Like, if you've got a scrutable algorithm for maintaining a multilevel map, and you've got a scrutable model of the chemistry of a tire, you could pass through the multilevel model to find the higher-level description of the tire.
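
To illustrate the kind of thing I mean, here is a minimal toy sketch; the levels, entries, and the `describe_at` helper are all invented for illustration, not any real system:

```python
# Toy "multilevel map": the same object is described at several levels,
# from low-level chemistry up to the everyday part-level story.
# All levels and entries here are made up for illustration.

multilevel_map = {
    "chemistry": {"tire": {"polymer": "vulcanized rubber", "filler": "carbon black"}},
    "material":  {"tire": {"elasticity": "high", "wear_resistance": "high"}},
    "part":      {"tire": "a rubber covering that grips the road"},
}

def describe_at(obj: str, level: str):
    """Read off the description of `obj` at the requested level,
    assuming the map is scrutable enough to contain one."""
    return multilevel_map[level].get(obj)

# If the chemistry-level model is scrutable, we can pass through the map
# and recover the higher-level description of the same tire:
print(describe_at("tire", "chemistry"))
print(describe_at("tire", "part"))
```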

I am worried about near-term non-LLM AI developments
tailcalled · 1mo · 30

KANs seem obviously of limited utility to me...?

My Empathy Is Rarely Kind
tailcalled · 1mo · 60

I've recently been playing with the idea that you have to be either autistic or schizophrenic, and that most people pick the schizophrenic option; then, because you can't hold schizophrenic pack animals accountable, they pretend to be rational individuals despite the schizophrenia.

Edit: the admins semi-banned me from LessWrong because they think my posts are too bad these days, so I can't reply to dirk except by editing this post.

My response to dirk is that since most people are schizophrenic, existing schizophrenia statistics reflect severe underdiagnosis, and the apparent correlation is therefore misleading.

The Simple Truth
tailcalled · 1mo · 33

Feels like this story would make for an excellent Rational Animations video.

leogao's Shortform
tailcalled · 1mo · 20

I think part of the trouble is the term "emotional intelligence". Analytical people are better at understanding most emotions, as long as the emotions are small and driven by familiar dynamics. The issue arises with the biggest emotions, or when the emotions are primarily driven by spiritual factors.

Against Infrabayesianism
tailcalled · 1mo · 2-1

There's reason to assume the unmeasurably unknown parts of the world are benevolent, because it is easier for multiple actors to coordinate for benevolent purposes than malevolent purposes. That infrabayesians then assume they're in conflict with the presumably-benevolent hidden purposes means that the infrabayesians probably are malevolent.
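
To spell out the "assume they're in conflict" step: roughly, the infra-Bayesian decision rule scores a policy by its worst-case expected utility over a credal set of environments, something like

$$\pi^* = \arg\max_{\pi} \; \inf_{e \in \mathcal{E}} \; \mathbb{E}_{\pi, e}[U],$$

so the hidden, unmodeled parts of the world (the credal set $\mathcal{E}$) are treated as if they were choosing the worst case for you. This is a paraphrase of the standard maximin form, not a quote from the infrabayesianism posts.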

Posts (sorted by new)

17 · AI development as the first fully-automated job · 19d · 4 comments
-24 · Against Infrabayesianism · 2mo · 4 comments
31 · Knocking Down My AI Optimist Strawman · 7mo · 3 comments
13 · My Mental Model of AI Optimist Opinions · 7mo · 7 comments
23 · Evolution's selection target depends on your weighting · 9mo · 22 comments
43 · Empathy/Systemizing Quotient is a poor/biased model for the autism/sex link · 10mo · 0 comments
12 · Binary encoding as a simple explicit construction for superposition · 11mo · 0 comments
12 · Rationalist Gnosticism · 11mo · 12 comments
32 · RLHF is the worst possible thing done when facing the alignment problem · 1y · 10 comments
10 · Does life actually locally *increase* entropy? [Question] · 1y · 27 comments
6 · tailcalled's Shortform · 4y · 272 comments