I am a PhD student in computer science at the University of Waterloo, supervised by Professor Ming Li and advised by Professor Marcus Hutter.
My current research is related to applications of algorithmic probability to sequential decision theory (universal artificial intelligence). Recently I have been trying to start a dialogue between the computational cognitive science and UAI communities. Sometimes I build robots, professionally or otherwise. Another hobby (and a personal favorite among my posts here) is the Sherlockian abduction master list, a crowdsourced project seeking to make "Sherlock Holmes"-style inference feasible by compiling observational cues. Give it a read and see if you can contribute!
See my personal website colewyeth.com for an overview of my interests and work.
I do ~two types of writing: academic publications and (LessWrong) posts. With the former I try to be careful enough that I can stand by ~all (strong/central) claims in 10 years, usually by presenting a combination of theorems with rigorous proofs and only more conservative intuitive speculation. With the latter, I try to learn enough by writing that I have changed my mind by the time I'm finished - and though I usually include an "epistemic status" to suggest my (final) degree of confidence before posting, the ensuing discussion often changes my mind again. As of mid-2025, I think that the chances of AGI in the next few years are high enough (though still <50%) that it’s best to focus on disseminating safety-relevant research as rapidly as possible, so I’m focusing less on long-term goals like academic success and the associated incentives. That means most of my work will appear online in an unpolished form long before it is published.
I expect this to start not happening right away.
So at least we’ll see who’s right soon.
A talk on embedded AIXI from Alexander Meulemans and Rajai Nasser is in 2 hours: https://uaiasi.com/2025/12/14/alexander-meulemans-rajai-nasser-on-embedded-aixi/
This would be way above-trend??
By "Grain of Ignorance" I mean that the semimeasure loss is nonzero at every string, that is the conditionals of M are never a proper measure. Since this gap is not computable, it cannot be (easily) removed, though to be fair the conditional distribution is only limit computable anyway (same as the normalized M). However, it is not clear that there is any natural/forced choice of normalization, so I usually think of the set of possible normalizations as a credal set (and I mean ignorance in that sense). I will soon put an updated version of my "Value under Ignorance" paper (about this) on arXiv.
Vovk's trick refers to predicting like the mixture - a "specialist expert" can opt out of offering a prediction by matching the Bayesian mixture's prediction, so that its weight is not updated (assuming it has access to the Bayesian mixture). I think the usual citation is "Prediction with Expert Evaluators' Advice" (referring to section 6), which is joint work with Chernov. I believe this was an influence on logical induction.
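Here is a toy numerical check of the trick (my own sketch, not code from the paper; the weights and predictions are made up, and the fixed-point step just handles the specialist's prediction appearing inside the mixture it copies):

```python
import numpy as np

# Toy check of Vovk's specialist-expert trick under Bayesian mixture updating
# (illustrative sketch, not code from the cited paper). A specialist opts out
# of a round by predicting whatever the mixture predicts; since its prediction
# then equals the mixture, the multiplicative update leaves its weight fixed.

w = np.array([0.5, 0.3, 0.2])   # prior weights: experts 0, 1, and the specialist (index 2)
p = np.array([0.9, 0.4, 0.0])   # probability each expert assigns to the observed outcome; p[2] set below

# The specialist copies the mixture, which includes its own prediction, so solve
# the fixed point M = w0*p0 + w1*p1 + w2*M  =>  M = (w0*p0 + w1*p1) / (1 - w2).
M = (w[0] * p[0] + w[1] * p[1]) / (1 - w[2])
p[2] = M

posterior = w * p / (w @ p)            # standard Bayesian weight update
print(posterior)                       # specialist's weight stays at 0.2
print(np.isclose(posterior[2], w[2]))  # True: opting out costs the specialist nothing
```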
I mean, I think so. In those papers it's often not clear how "elicited" that key step was. The advantage of this example is that OpenAI very clearly claims the researchers made no contribution whatsoever, and the result still seems to settle a problem someone cares about! The only caveat is that it comes from OpenAI, which has a very strong incentive to drive the hype cycle around its own models (but, on the other hand, also has access to some of the best models that are not publicly available yet, which lends credibility).
OpenAI claims GPT-5.2 solved an open COLT problem with no assistance: https://openai.com/index/gpt-5-2-for-science-and-math/
This might be the first thing that meets my bar for autonomously having an original insight??
Chaitin was quite young when he (co-)invented AIT.
The evidence just seems to keep pointing towards this not being a bubble.
Semantics; it’s obviously not equivalent to physical violence.