I am a PhD student in computer science at the University of Waterloo, supervised by Professor Ming Li and advised by Professor Marcus Hutter.
My current research is related to applications of algorithmic probability to sequential decision theory (universal artificial intelligence). Recently I have been trying to start a dialogue between the computational cognitive science and UAI communities. Sometimes I build robots, professionally or otherwise. Another hobby (and a personal favorite of my posts here) is the Sherlockian abduction master list, which is a crowdsourced project seeking to make "Sherlock Holmes" style inference feasible by compiling observational cues. Give it a read and see if you can contribute!
See my personal website colewyeth.com for an overview of my interests and work.
I do ~two types of writing, academic publications and (lesswrong) posts. With the former I try to be careful enough that I can stand by ~all (strong/central) claims in 10 years, usually by presenting a combination of theorems with rigorous proofs and only more conservative intuitive speculation. With the latter, I try to learn enough by writing that I have changed my mind by the time I'm finished - and though I usually include an "epistemic status" to suggest my (final) degree of confidence before posting, the ensuing discussion often changes my mind again. As of mid-2025, I think that the chances of AGI in the next few years are high enough (though still <50%) that it's best to focus on disseminating safety-relevant research as rapidly as possible, so I'm focusing less on long-term goals like academic success and the associated incentives. That means most of my work will appear online in an unpolished form long before it is published.
I expect this to start not happening right away.
So at least we’ll see who’s right soon.
Pursue mentorship from highly agentic people.
Indeed in algorithmic information theory, the lower semicomputable semimeasures are an example of "subprobabilities." Much has been written about updating in this context.
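(For anyone unfamiliar with the terminology, here is the standard textbook formulation of a semimeasure as a "subprobability" - my gloss, not part of the original exchange. A semimeasure $\nu$ over finite binary strings satisfies

$$
\nu(\epsilon) \le 1, \qquad \nu(x) \ge \nu(x0) + \nu(x1) \quad \text{for all } x \in \{0,1\}^*,
$$

so probability mass is allowed to "leak": the mass on one-step continuations of $x$ can be strictly less than $\nu(x)$. "Lower semicomputable" means $\nu(x)$ is the limit of a computable, nondecreasing sequence of rationals. Updating on an observed prefix $x$ uses the usual conditional

$$
\nu(y \mid x) = \frac{\nu(xy)}{\nu(x)},
$$

which again need not sum to 1 over continuations $y$; the missing mass can be read as the probability that the underlying process halts or fails to produce more output.)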
Your comment is unhelpful. I am pretty sure I do know what the post says, having recently read it.
The post focuses on independent institutions but the same principle applies to technocratic institutions. Otherwise I am not sure what you are getting at.
At 12 pm EST today (Jan 26th), Marcus Hutter will be answering questions on his latest book (An Introduction to Universal Artificial Intelligence): https://uaiasi.com/2026/01/24/qa-with-marcus-hutter-at-the-final-meeting-of-the-iuai-reading-group/
Formally, this is the final meeting for the reading group, but feel free to drop in if you have read (most of) the book independently.
Though he seems to have overestimated the difficulty of the Turing test relative to, e.g., robotics. It's not clear he's even directionally correct about robotics? Unless AGIs solve it for us :)
There's a difference between becoming a partisan colony and noticing that democratic institutions are being actively undercut. Becoming clear on this is actually a (fairly easy) test of rationality, which we should pass as a community.* Yes, the world is complicated and there have been some serious defections on both sides. Yes, some of the MAGA complaints are valid. ALSO, Trump is degrading democracy and his administration is very rapidly sliding towards authoritarianism (in their actions, which match years of clear rhetoric). It is possible to separate this from underlying trends toward increasing executive power on the scale of decades and recognize it as the active crisis that it is.
I expect that this reply will be downvoted, given the tone of existing replies, but I don't think turning LessWrong into a partisan colony is a good thing. For one, it means that half of America will inherently view everything LW wants as the enemy's agenda. It also incentivizes a lack of skepticism when a same-party source makes a claim that merits skepticism. Reddit went this route after 2016, and a lot of previously interesting and vibrant communities are now composed almost exclusively of, for lack of a better phrase, political slop. Compare r/videos before and after the ban on political content was lifted.
I find this equivocation annoying. Is your comment about messaging or truth-seeking?
*Some rationalists may acknowledge that Trump has authoritarian tendencies and approve - though I think his blatant disrespect for the truth contradicts most constructions of rationalist virtue. But we should at least be aware of what is happening.
There was also the video from the Homeland Security secretary blaming Democrats for the government shutdown, in blatant violation of the Hatch Act.
Not the highest-impact of his many crimes, but it really drives home how casually Trump is willing to undercut any apolitical technocracy for political ends.
Semantics; it’s obviously not equivalent to physical violence.