Top posts
Daniel C
Master's student in applied mathematics, funded by the Center on Long-Term Risk to investigate the cheating problem in safe Pareto improvements. Former Dovetail fellow with @Alex_Altair.
Short summary of Condensation
Condensation is a theory of concepts by Sam Eisenstat. The paper can be read here. Abram wrote a review on LessWrong, and a follow-up. The paper is very much worth reading, and can be skimmed just for the motivation if you're time-constrained. What is...
A core subproblem in ontology identification is to understand why and how humans and agents break down their world models into distinct, structured concepts like tables, chairs, and strawberries. This is important because we want AIs to optimize the real-world things we care about, but the things we care about...
Context: I (Daniel C) have been working with Aram Ebtekar on various directions in his work on algorithmic thermodynamics and the causal arrow of time. This post explores some implications of algorithmic thermodynamics for the concept of optimization. All mistakes are my (Daniel's) own. A typical picture of optimization is when...
Epistemic status: This first collaboration between Daniel Chiang (who is interested in the algorithmic information theory of incrementally constructed representations) and myself (Cole Wyeth) contains some fairly simple but elegant results that help illustrate differences between ordinary and reflective Oracle Solomonoff Induction (rOSI). Update 02/15/26: The connection between rOSI and...
This dialogue is part of the agent foundations fellowship with Alex Altair, funded by the LTFF. Thank you to Dalcy, Alex Altair, and Alfred Harwood for feedback and comments. Context: I (Daniel) am working on a project about ontology identification. I've found conversations to be a good way to discover inferential...
Suppose that an embedded agent models its environment using an approximate simplicity prior: would it acquire a physicalist agent ontology or an algorithmic/logical agent ontology? One argument for the logical agent ontology is that it allows the agent to compress different parts of its observations that are subjunctively dependent: If...