As promised, I reviewed and wrote up my thoughts on the research paper that Meta released yesterday:
Full review: Paper Review: TRImodal Brain Encoder for whole-brain fMRI response prediction (TRIBE)
I recommend checking out my review! I discuss some takeaways and there are interesting visuals from the paper and related papers.
In quick-take form, the TL;DR is:
From The Rundown today: "Meta’s FAIR team just introduced TRIBE, a 1B parameter neural network that predicts how human brains respond to movies by analyzing video, audio, and text — achieving first place in the Algonauts 2025 brain modeling competition."
This ties in extremely well with my post published a few days ago: Third-order cognition as a model of superintelligence (ironically: Meta® metacognition).
I'll read the Meta AI paper and write up a (shorter) post on key takeaways.
Just published "Meta® Meta Cognition: Intelligence Progression as a Three-Tier Hybrid Mind"
TL;DR: We know that humans and some animals have two tiers of cognition — an integrative metacognition layer, and a lower non-metarepresentational cognition layer. With artificial superintelligence, we should define a third layer and model the resulting system as a three-tier hybrid mind. I define the concepts precisely and talk about their implications for alignment.
I also talk about chimp-human composites, which is fun.
Really interested in feedback and discussion with the community!
I've been reading a new translation of the Zhuangzi and found its framing of "knowledge" interesting, counter to my expectations (especially as a Rationalist), and actionable in how it relates to Virtue (agency).
I wrote up a short post about it: Small Steps vs. Big Steps
In the Zhuangzi, knowledge is presented pejoratively in contrast to Virtue. Confucius presents simplified, modest action as a more aligned way of being. I highlight why this is interesting and discuss how we might apply it.
I’m delineating two core political positions I see arising in AI alignment discussions. You could pattern-match this simply to technologists vs. Luddites.
Unionists believe that we should partner, dovetail, entangle, and blend our objectives with AI.
Separatists believe that we should partition, face-off, isolate, and protect our objectives from AI.
Read the full post: https://www.lesswrong.com/posts/46A32JqxT37dof9BC/unionists-vs-separatists
My optimistic AI alignment hypothesis: "Because, or while, AI superintelligence (ASI) emerges as a result of intelligence progression, having an extremely comprehensive corpus of knowledge (data), with sufficient parametrisation and compute to build comprehensive associative systems across that data, will drive the ASI to integrate and enact prosocial and harm-mitigating behaviour… more specifically this will happen primarily as a result of identity coupling and homeostatic unity with humans."
This sounds like saying that AI will just align itself, but the nuance here is that we control the inputs — we control the data, parametrisation [I'm using this word loosely - this could also mean different architectures, controllers, training methods etc.], and compute.
If that's an interesting idea to you, I have a 7,000-word, 18-page manifesto illustrating why it might be true and how we can test it:
A take on simulation theory: our entire universe would actually be a fantastic product for some higher-dimensional being to purchase just for entertainment.
For example: imagine if they could freely look around our world — see what people are thinking and doing, how nature is evolving.
It would be the funniest, most beautiful, saddest, craziest piece of entertainment ever!
Disclaimer: I'm not positioning this as an original idea — I know people have discussed simulation theory with "The Truman Show" framing before. Just offering the take in my own words.