I am a PhD student in computer science at the University of Waterloo, supervised by Professor Ming Li and advised by Professor Marcus Hutter.
My current research applies algorithmic probability to sequential decision theory (universal artificial intelligence). Recently I have been trying to start a dialogue between the computational cognitive science and UAI communities. Sometimes I build robots, professionally or otherwise. Another hobby (and a personal favorite of my posts here) is the Sherlockian abduction master list, a crowdsourced project that aims to make "Sherlock Holmes" style inference feasible by compiling observational cues. Give it a read and see if you can contribute!
See my personal website colewyeth.com for an overview of my interests and work.
I do ~two types of writing: academic publications and (lesswrong) posts. With the former I try to be careful enough that I can stand by ~all (strong/central) claims in 10 years, usually by presenting a combination of theorems with rigorous proofs and more conservative intuitive speculation. With the latter, I try to learn enough by writing that I have changed my mind by the time I'm finished - and though I usually include an "epistemic status" to suggest my (final) degree of confidence before posting, the ensuing discussion often changes my mind again. As of mid-2025, I think that the chances of AGI in the next few years are high enough (though still <50%) that it's best to focus on disseminating safety-relevant research as rapidly as possible, so I'm focusing less on long-term goals like academic success and the associated incentives. That means most of my work will appear online in an unpolished form long before it is published.
I expect this to start not happening right away.
So at least we’ll see who’s right soon.
Well, guess I was wrong.
That’s very good!
Yeah - I don’t really like that the word “prosaic” has no connection to technical aspects of the currently prosaic models.
I don’t want to start referring to “the models previously known as prosaic” when new techniques become prosaic.
This is reasonable, but it includes "transformer," which seems a bit too narrow.
The problem with “neuro-symbolic AI” is that Gary Marcus types use it to refer to something distinct from the current paradigm, even though it is, ironically, a pretty good description of the current paradigm.
They seem to be drawing an important distinction against “pure / monolithic” models, but the name is long and too general.
Not pronounceable.
Interesting!
I think that if you had to choose between starting on time and giving feedback to all rejected applicants, it would have been better to do the former and drop the latter. Or give the feedback a month after the program - it is clearly not part of the program’s critical value-add.
Semantics; it’s obviously not equivalent to physical violence.