Cole Wyeth

I am a PhD student in computer science at the University of Waterloo, supervised by Professor Ming Li and advised by Professor Marcus Hutter.

My current research concerns applications of algorithmic probability to sequential decision theory (universal artificial intelligence). Recently I have been trying to start a dialogue between the computational cognitive science and UAI communities. Sometimes I build robots, professionally or otherwise. Another hobby (and the source of a personal favorite among my posts here) is the Sherlockian abduction master list, a crowdsourced project that seeks to make "Sherlock Holmes"-style inference feasible by compiling observational cues. Give it a read and see if you can contribute!

See my personal website colewyeth.com for an overview of my interests and work.

I do ~two types of writing: academic publications and (LessWrong) posts. With the former, I try to be careful enough that I can stand by ~all (strong/central) claims in 10 years, usually by presenting theorems with rigorous proofs and keeping any speculation conservative and clearly marked as intuition. With the latter, I try to learn enough by writing that I have changed my mind by the time I'm finished - and though I usually include an "epistemic status" to indicate my (final) degree of confidence before posting, the ensuing discussion often changes my mind again. As of mid-2025, I think the chances of AGI in the next few years are high enough (though still <50%) that it's best to focus on disseminating safety-relevant research as rapidly as possible, so I'm focusing less on long-term goals like academic success and the associated incentives. That means most of my work will appear online in unpolished form long before it is published.

Sequences

I recklessly speculate about timelines
Meta-theory of rationality
AIXI Agent foundations
Deliberative Algorithms as Scaffolding

Comments (sorted by newest)

Cole Wyeth's Shortform · 25d

Semantics; it’s obviously not equivalent to physical violence. 

AI 2027: What Superintelligence Looks Like · 7mo

I expect this to start not happening right away.

So at least we’ll see who’s right soon.

I "invented" semimeasure theory and all I got was imprecise probability theory · 2d

But it doesn't come from indistinguishability; it comes from programs halting or looping. I don't know what you're trying to say.

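To spell the point out (a standard gloss, in the usual Li-Vitányi notation): the Solomonoff prior $M$ is only a semimeasure precisely because programs may halt or diverge. Writing $U$ for a universal monotone machine,

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|},$$

where the sum runs over minimal programs whose output begins with $x$. A program that has printed $x$ may halt, or loop forever without printing another bit, so in general

$$M(x0) + M(x1) < M(x),$$

whereas a proper measure would satisfy this with equality.
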
Cole Wyeth's Shortform · 3d

I'm giving a talk at the AIXI research meeting today; it will summarize my work on embedded versions of AIXI and point to some future directions: https://uaiasi.com/2025/10/26/cole-wyeth-on-embedded-agency

Jacob Pfau's Shortform · 4d

Every time I see a story about an LLM proving an important open conjecture, I think "it's going to turn out that the LLM did not prove an important open conjecture," and so far I have always been somewhat vindicated, for one or more of the following reasons:

1: The LLM actually just wrote code to enumerate some cases / improve some bound (!)

2: The (expert) human spent enough time iterating with the LLM that it is not clear the LLM was driving the proof.

3: The result was actually not novel (sometimes the human already knew how to do it and just wanted to test the LLM out on filling in details), or the result is immediately improved or proven independently by humans, which seems suspicious. 

4: No one seems to care about the result.

In this case 2 and 3 apply. 

Are We Their Chimps? · 4d

It's because we care about other things a lot more than chimps, and would happily trade off chimp well-being, chimp population size, chimp optionality and self-determination, etc. in favor of those other things. By itself, that should be enough to tell you that under your analogy, superintelligence taking over is not a great outcome for us.

In fact, the situations are not closely analogous. We will build ASI, whereas we evolved from ancestors we share with chimps; those relationships are not similar. Also, there is little reason to expect ASI psychology to reflect human psychology.

Are We Their Chimps? · 4d

The point is that most people don’t care much about chimp rights, and this is still true of highly intelligent people.

Are We Their Chimps? · 4d

I don’t think that was because she was particularly intelligent. It’s not like our top mathematicians consistently become environmentalists or conservationists.

Are We Their Chimps? · 4d

You may have noticed that chimps don’t have a lot of rights.

Noah Birnbaum's Shortform · 7d

I think that if someone is very well-known, their making a particular statement can be informative in itself, which is probably part of why it gets upvoted.

Posts (sorted by new)

Nontrivial pillars of IABIED · 13d
Alignment as uploading with more steps · 2mo
Sleeping Experts in the (reflective) Solomonoff Prior · 2mo
New Paper on Reflective Oracles & Grain of Truth Problem · 2mo
Launching new AIXI research community website + reading group(s) · 3mo
Pitfalls of Building UDT Agents · 3mo
Explaining your life with self-reflective AIXI (an interlude) · 3mo
Unbounded Embedded Agency: AEDT w.r.t. rOSI · 3mo
A simple explanation of incomplete models · 4mo
Paradigms for computation · 4mo
Wikitag Contributions

AIXI · 10 months ago (+11/-174)
Anvil Problem · a year ago (+119)