Matt Goldenberg


Non-Coercive Motivation
Changing your Mind With Memory Reconsolidation



Here's Peter Thiel making fun of the rationalist doomer mindset in relation to AI, explicitly calling out both Eliezer and Bostrom as "saying nothing":

I think what we're seeing here is that LLMs can act as glue to put together these modules in surprising ways, and make them more general. You see that here and with Saycan. And I do think that Chapman's point becomes less tenable with them in the picture.

I didn't say it should update your beliefs (edit: I did literally say this lol but it's not what I meant!) I said it should update the beliefs of people who have a specific prevailing attitude.

I don't think that Cicero is a general agent made by gluing together superhuman narrow agents! It's not clear that any of its components are superhuman in a meaningful sense.

I don't either! I think it should update your beliefs that that's possible though.

Yes, I got down to the Nash Bargaining part, which is a bit harder, and got confused again, but this helped as a very simple mathematical intuition for why to Kelly bet, if not how to calculate it in most real-world betting situations.

I think people who are freaking out about Cicero more so than about foundation model scaling/prompting progress are wrong; this is not much of an update on AI capabilities.


I think there's a standard argument that goes "You can't just copy-paste a bunch of systems that are superhuman in their respective domains and get a more general agent out." (e.g. here's David Chapman saying something like this:

If you have that belief, I imagine this paper should update you towards thinking AI is more capable than you expected. It is indeed possible to duct-tape a bunch of different machine learning models together and get out something impressive. If you didn't believe this, it should update you on the idea that AGI could come from several small new techniques duct-taped together to handle each other's weaknesses.

You, however, do not know if it is a fair coin, and are offering me a fair bet. I only have 100 dollars to my name, and I can bet as much as I want (up to 100 dollars) in either direction at even odds.

If I bet 100 dollars on heads, heads-me gets 200 dollars, and tails-me gets nothing. If I bet 100 dollars on tails, tails-me gets 200 dollars, and heads-me gets nothing. If I bet nothing, both versions of me get 100 dollars.

However, every dollar in the hands of heads-me is worth 1.5 times as much as a dollar in the hands of tails-me, since heads-me exists 1.5 times as much. (I am ignoring here any diminishing returns in my value of money.)

Thus, to maximize value I should bet 100 dollars on heads. However, maybe it is better to think of tails-me as the rightful owner of 40 percent of my resources. When I bet 100 dollars on heads, I am seizing money from tails-me for the greater good, since heads-me has the (proportionally greater) existence necessary to better take advantage of it.

Alternatively, I could say that since 60 percent of me is heads-me, heads-me should only control 60 dollars, which can be bet on heads. Tails-me should control 40 dollars, which can be bet on tails. These two bets partially cancel each other out, and the net result is that I bet 20 dollars on heads.

If you are especially fast at maximizing expected logarithms, you might see where this is going.
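To see where it's going numerically, here is a small sketch (not from the original post) that maximizes expected log-wealth over the two selves, weighting each by its 0.6/0.4 existence-proportion from above; the bankroll and grid search are illustrative assumptions.

```python
import math

BANKROLL = 100.0
P_HEADS = 0.6  # heads-me's share of existence
P_TAILS = 0.4  # tails-me's share of existence

def expected_log_wealth(bet_on_heads: float) -> float:
    """Expected log of final wealth, weighting each self by how much it exists.

    A positive bet is on heads; a negative bet is on tails.
    """
    heads_wealth = BANKROLL + bet_on_heads  # what heads-me ends up with
    tails_wealth = BANKROLL - bet_on_heads  # what tails-me ends up with
    return P_HEADS * math.log(heads_wealth) + P_TAILS * math.log(tails_wealth)

# Grid-search bets from -99.9 (nearly all on tails) to +99.9 (nearly all on heads).
bets = [b / 10 for b in range(-999, 1000)]
best_bet = max(bets, key=expected_log_wealth)

print(best_bet)  # 20.0 — the Kelly fraction (p - q) = 0.2 of the bankroll
```

Setting the derivative of the expected log to zero gives the same answer in closed form: bet the fraction p − q = 0.6 − 0.4 = 0.2 of the bankroll, i.e. 20 dollars on heads.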


Wow I have been looking for an intuitive explanation of Kelly Betting for years, and this is the first one that really hit from an intuitive mathematical perspective.

I think there is a name for the core of the approaches that work: "parts work."

The ICF framework seems to add some things on top of the basic parts work idea that make it similar to IFS. For instance, the process of unblending at the beginning is basically the same as what IFS calls "getting into self".  In contrast, there are many effective parts work frameworks that do the work from a blended state, such as voice dialogue.  It imports the assumption from IFS that there is some "neutral self" that can be reached by continually unblending, and that this self can moderate between parts.

In addition, IFS and ICF both seem to emphasize "conversation" as a primary modality, whereas other parts work frameworks (e.g. Somatic Experiencing) emphasize other modalities when working with parts, such as the somatic, metaphorical, or primal. Again, there's an assumption here about what parts are and how they should be worked with, around the primacy of particular ways of thinking and relating, which is heavily (if unconsciously) influenced by the prevalence of IFS and its way of working.

It seems like while ICF is trying to describe a general framework, it is quite influenced by the assumptions of IFS/IDC and imports some of their quirks, even while getting rid of others.
