Sequences

Non-Coercive Motivation
Changing your Mind With Memory Reconsolidation


Comments

i don't think the constraint is that energy is too expensive? i think we just literally don't have enough of it concentrated in one place

but i have no idea actually

Zuck and Musk point to energy as a quickly approaching deep learning bottleneck over and above compute.

This seems to me like it could substantially slow takeoff and effectively create a wall for a long time.

Best arguments against this?

Answer by Matt Goldenberg, Apr 23, 2024

Paul Ekman's software is decent. When I used it (before it was a SaaS, just a CD), it basically flashed an expression for a moment and then went back to a neutral picture. After some training it did help me identify micro-expressions in people.

People talk about unconditional love and conditional love. Maybe I’m out of the loop regarding the great loves going on around me, but my guess is that love is extremely rarely unconditional. Or at least if it is, then it is either very broadly applied or somewhat confused or strange: if you love me unconditionally, presumably you love everything else as well, since it is only conditions that separate me from the worms.

 

Yes, this is my experience of cultivating unconditional love: it loves everything, without a target. It doesn't feel confused or strange; it just feels like I am love, and my experience, e.g. cultivating it in coaching, is that people like being in the presence of such love.

It's also very helpful for people to experience conditional love! In particular of the type "I've looked at you, truly seen you, and loved you for that."

IME both of these loves feel pure and powerful from both sides, and neither of them is related to being attached, to being pulled towards or pushed away from people.

 

It feels like maybe we're using the word "love" very differently?

Both causal.app and getguesstimate.com have pretty good Monte Carlo UIs.
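For anyone unfamiliar with those tools: the kind of model their UIs wrap is just repeated sampling of uncertain inputs through a formula. A minimal sketch in Python — the distributions and numbers below are made up purely for illustration:

```python
# Monte Carlo estimate of a toy revenue model with uncertain inputs,
# the sort of calculation Guesstimate/Causal put a spreadsheet-like UI over.
import random

N = 100_000
samples = []
for _ in range(N):
    visitors = random.lognormvariate(9.2, 0.4)   # uncertain monthly traffic
    conversion = random.betavariate(2, 50)       # uncertain conversion rate
    price = random.uniform(20, 40)               # uncertain price point
    samples.append(visitors * conversion * price)

samples.sort()
print("median revenue:", round(samples[N // 2]))
print("90% interval:", round(samples[int(0.05 * N)]), "-", round(samples[int(0.95 * N)]))
```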

IME there is a real effect where nicotine acts as a gateway drug to tobacco or vaping

in general this whole post seems to make the mistake of saying 'a common second-order effect of this thing is doing it in a way that will get you addicted, so don't do that', which is such an obvious failure mode that calling it a Chesterton's fence is generous

The question is: how far can we get with in-context learning? If we filled Gemini's 10 million tokens with Sudoku rules and examples, showing where it went wrong each time, would it generalize? I'm not sure, but I think it's possible.

It seems likely to me that you could create a prompt that would have a transformer do this.
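For concreteness, here is a minimal sketch of the feedback loop I have in mind, in Python. `call_model` is a hypothetical stand-in for whatever long-context model API you use (e.g. a Gemini client), not a real library call, and the verifier and prompt format are just assumptions for illustration:

```python
# Sketch of iterative in-context learning on Sudoku: keep the rules, the puzzle,
# and every failed attempt plus its specific errors in one long context.

def call_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

RULES = (
    "Sudoku rules: each row, column, and 3x3 box contains the digits 1-9 exactly once.\n"
    "Reply with the completed 9x9 grid, one row per line, digits separated by spaces."
)

def check_solution(grid_text: str) -> list[str]:
    """Return human-readable rule violations (empty list means the grid is valid).
    For brevity this doesn't check that the puzzle's given digits were preserved."""
    rows = [line.split() for line in grid_text.strip().splitlines()]
    if len(rows) != 9 or any(len(r) != 9 for r in rows):
        return ["Grid is not 9x9."]
    digits = [str(d) for d in range(1, 10)]
    errors = []
    for i in range(9):
        col = [rows[r][i] for r in range(9)]
        box = [rows[3 * (i // 3) + dr][3 * (i % 3) + dc]
               for dr in range(3) for dc in range(3)]
        for name, unit in (("row", rows[i]), ("column", col), ("box", box)):
            if sorted(unit) != digits:
                errors.append(f"{name} {i + 1} does not contain 1-9 exactly once")
    return errors

def solve_with_feedback(puzzle: str, max_rounds: int = 5) -> str:
    """Accumulate attempts and corrections in-context, re-prompting each round."""
    transcript = f"{RULES}\n\nPuzzle:\n{puzzle}\n"
    attempt = ""
    for _ in range(max_rounds):
        attempt = call_model(transcript + "\nSolve the puzzle.")
        errors = check_solution(attempt)
        if not errors:
            return attempt
        # Show the model exactly where it went wrong, then ask again.
        transcript += f"\nYour attempt:\n{attempt}\nErrors:\n" + "\n".join(errors) + "\n"
    return attempt
```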

i like coase's work on transaction costs as an explanation here

coase is an unusually clear thinker and writer, and i recommend reading through some of his papers

i just don't see the Buddha making any reference to nervous systems or mammals when he talks about suffering (not even some sort of Pali equivalent that points to the materialist understanding at the time)
