How far along are you on the Path?

Some say the Path has always existed and there have always been those who have walked it. The Path has many names and there are many notions of what the Path is.

To some the path is called philosophy; to others it is the art of rationality. To my people, it was called science and the pursuit of truth. Physics, psychology, neuroscience, machine learning, optimal control theory: all of these are aspects of the path. It has no true name.

The only way to know with certainty whether you walk the path is to talk to those who don't walk it.

One man, Scott Alexander, describes the path as:

"Some people are further down the path than I am, and report there are actual places to get to that sound very exciting. And other people are around the same place I am, and still other people are lagging behind me. But when I look back at where we were five years ago, it’s so far back that none of us can even see it anymore, so far back that it’s not until I trawl the archives that realize how many things there used to be that we didn’t know. "

--------------------------------------------------------------------------------------------

How do you tell where you are on the path?

One day, a man comes up to a woman and says:

"I am further down the path than you are."

In response, the woman says:

"No, I am further down the path than you are."

Instantly, both recognize the solution, as each has progressed at least some way along the path. The man speaks:

"Only one of us can be more right than the other. Let us make a prediction about all things in this world for all time and whoever is more correct about the state of the world for all time is closer to the end of the path."

Realizing that determining who is correct with infinite precision would itself take infinite time, the woman instead points to a random child.

"Whoever can guess correctly how this child behaves for the next 1 minute has walked further along the path and we will default to their position as truth when we are in conflict and have no time to discuss the details."

The man proceeds to describe in detail the child's neurobiology and all the deterministic forces that would lead the child to breathe, think, and move in the manner he predicts. He then goes on to describe all the biases in the environment and the role they would play in how the child would act.

The woman looks at the man and says: "You are wrong. You are doubly wrong, and you do not understand the nature of the path at all."

She writes something down on a piece of paper, gives it to the man, and tells him to open it in 10 seconds.

He looks at it for a couple of seconds and then, realizing it is time, he opens it.

"The child will cry and you are an idiot. I create the future."

In that moment, he hears a cry and realizes that she is much farther along the path than he is. In the seconds he was focused on the paper, she had gone to the child, picked it up, and pinched it with what looked like significant force.

The woman looks at him and asks:

"Do you understand?"

He thinks for a long while and replies:

"Either I have the ability to affect the entropic state of the universe or I don't. If I can, then I can create any future constrained by energy and possible state transitions. The truest prediction is one that I am already causally bound to.
"In the case where I can't affect the entropic state of the universe, I should still act and believe as if I can, because that is the most effective way, within the closed system, to affect the likelihood of a prediction."

She looks at him with a smile and a surprising glint of curiosity in her eyes. Silently, she thinks to herself:

Close. I was once where you are. There is a flaw in that logic. The flaw is axiomatic and has to do with the essence of reality. Pursue this question: does reality exist if you have no sensors with which to perceive it?

The answer lies in this: what is the difference between a prediction, the currently perceived state, and the reason to transition to the next state?

But instead she says:

"The ability to manipulate the system is far more important than the ability to predict its outcomes. At some point your ability to manipulate the system becomes equivalent to the best prediction system in the world."

My Path and Others'

The path isn't linear, and it isn't constrained to a single dimension. It is at the very least four-dimensional and has no boundaries or edges as far as I know.

Some Condensed Examples of Path Progression

  • Scott'09 -> Scott'20
    • Metropolitan Man -> Slate Star Codex -> Secret
  • Eliezer'09 -> Eliezer'20
    • HPMOR -> Less Wrong -> AI to Zombies
  • Aires'09 -> Aires'20
    • MM/HPMOR Reader -> Engineer -> Kinect -> Hololens -> MASc in AI -> Uber Michelangelo Founder -> Uber AI -> Meta-learning/Neuroevolution -> Emotional Tensor Theory

I forked from LessWrong in 2009, when I originally worked in SF, and haven't returned except for a brief stint in 2011/12 when I was back in the Bay Area.

In my pursuit of rationality, AI and AGI, I sought to analyze the human emotion system from a neurobiology and machine learning perspective.

What is represented by the feelings we feel? That is, what does the embedded neurotransmitter representation of emotions actually correspond to in terms of hardware and information theory?

---------------------------------------------------------------------------------------------------

Does the LessWrong Path have a blind spot related to emotions?

A quick search for "emotions" in the LessWrong archives shows fewer than 5 results.

  • Why is there such a large gap in the exploration of emotions on LessWrong? Is it because they are colloquially anathema to rationality?
  • Is it an inherited bias from the ideology of the LessWrong creators, or simply ignorance?
  • Perhaps emotions aren't relevant in any way and are already encompassed by rationality?

-----------------------------------------------------------------------------------------------

Call for Aid: LessWrong 2.0 is enormous, as is the path I have walked. I'm sure that while there is overlap, there are likely very strong points of contention in both how AI systems work and how human systems work. Help me find them.

I'd love to talk to two or three LessWrong experts for two 1.5-hour sessions in July/August. If you'd like to help me, please comment directly and we can set up a calendar invite over email.

4 Answers

Why is there such a large gap in the exploration of emotions on LessWrong? Is it because they are colloquially anathema to rationality?

I don't think that's accurate. In fact, Eliezer says as much in Why Truth?. He explicitly calls out the view that rationality and emotion are opposed, using the example of the character of Mr. Spock in Star Trek to illustrate his point. In his view, Mr. Spock is irrational, just like Captain Kirk, because denying the reality of emotions is just as foolish as giving in wholeheartedly to them. If your emotions rest on true beliefs, then they are rational. If they rest on false beliefs they are irrational. The fact that they are instinctive emotions rather than reasoned logic is irrelevant to their (ir)rationality.

I think LessWrong has actually done a fairly good job of avoiding this mistake. If you look at the posts on circling [1], [2], for example, you'll see that they're all about emotions and the management of emotions. The same applies to Comfort Zone Expansion, ugh fields, meditation and Looking, and kenshō. It's just that few of them actually mention the word "emotion" in their titles, which might lead one to the false assumption that they are not about emotions.

Also, see the Emotions tag. So even if you just directly search for the term, you will find much more than just 5 results.

[anonymous], 4y

There's also Alicorn's sequence on luminosity, which explicitly deals with emotions despite (apparently) not being tagged as such: https://www.lesswrong.com/s/ynMFrq9K5iNMfSZNg

meedstrom, 6mo
Also Nate's Replacing Guilt sequence. I'm still reading it, but I predict it'll be the single most important sequence to me.

Interesting. I've seen this argument in other areas, and I believe it is a step in the right direction. However, there is still a gap: how is belief actually encoded and updated?

I do like Eliezer's formulation of rationality. The nuance is that emotions are actually the output of a learning system that is, according to Karl Friston's free-energy principle, optimal in its ability to deviate from high-entropy states.
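To pin down what "gradient descent on free energy" cashes out to in the simplest case, here is a minimal Python sketch. It is my own illustration, not Friston's model or anything from this thread; every function name and parameter is an invented assumption. An agent with a Gaussian generative model updates its belief by descending variational free energy, which here reduces to precision-weighted squared prediction error.

```python
import numpy as np

def free_energy(obs, mu, prior_mu, obs_prec=1.0, prior_prec=0.1):
    """Free energy up to a constant: sensory and prior prediction
    errors, each weighted by its precision (inverse variance)."""
    return 0.5 * (obs_prec * (obs - mu) ** 2
                  + prior_prec * (mu - prior_mu) ** 2)

def update_belief(obs, mu, prior_mu, lr=0.1, obs_prec=1.0, prior_prec=0.1):
    # Gradient of free energy w.r.t. the belief mu; descending it
    # is what "minimizing surprise" means in this toy setting.
    dF_dmu = -obs_prec * (obs - mu) + prior_prec * (mu - prior_mu)
    return mu - lr * dF_dmu

rng = np.random.default_rng(0)
mu, prior_mu = 0.0, 0.0
for obs in rng.normal(2.0, 0.5, size=500):  # hidden cause the agent tracks
    mu = update_belief(obs, mu, prior_mu)

# The belief settles near the precision-weighted compromise between the
# prior (0.0) and the data mean (2.0): 2.0 * 1.0 / (1.0 + 0.1) ~= 1.82.
print(f"learned belief: {mu:.2f}")
```

Nothing here claims emotions are this simple; the sketch only makes the free-energy vocabulary concrete.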

Does the LessWrong Path have a blind spot related to emotions?

In 2013, Julia Galef wrote about how she updated at CFAR towards emotions being more important than initially assumed. When it comes to dealing with emotions, there's Gendlin's Focusing and Circling, and a discourse on meditation.

The discourse is, however, more focused on applied knowledge than on neurobiology-based knowledge.

To be honest, I was hoping to see some discussion of the true nature of the underlying embedding of emotions: what they mean within a computational framework. More importantly, recent papers, such as DeepMind's work on dopamine as a temporal-difference error signal, all suggest that dopamine and emotions may actually be the rational manifestation of some sort of RL algorithm based on Karl Friston's free-energy principle.

The nature of the algorithm is now the nature of the learner, and what rule determines the nature of the learner? Likely some comp... (read more)
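For readers who want the dopamine-as-TD-error claim above made concrete, here is a minimal tabular TD(0) sketch in Python; it is my own illustration, and the environment and parameters are invented for the example. The prediction error `delta` computed at each step is the quantity that phasic dopamine is hypothesized to report in the reward-prediction-error literature.

```python
import numpy as np

# A five-state chain: the agent walks left to right and receives a
# reward of 1.0 on entering the final state.
n_states, gamma, lr = 5, 0.9, 0.1
V = np.zeros(n_states)  # learned value estimate for each state

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Temporal-difference error: the "dopamine-like" teaching signal.
        delta = reward + gamma * V[s_next] - V[s]
        V[s] += lr * delta
        s = s_next

# Values rise toward the rewarded terminal state: ~[0.73, 0.81, 0.9, 1.0, 0.0]
print(np.round(V, 2))
```

As learning converges, delta shrinks toward zero for predicted rewards and spikes for unpredicted ones, which is the signature that made the dopamine analogy compelling in the first place.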

Kaj_Sotala, 4y
There's Lisa Feldman Barrett's theory of constructed emotion, which applies a predictive-processing lens to emotion and takes a similar perspective to what you said. She has a popular book about it, but it felt to me like it was pretty wordy while also skimming over the more technical details. You could read this summary of the book and combine it with the paper that presents the theory to a more academic audience. Separately, there's the model in Unlocking the Emotional Brain, which goes into much less algorithmic detail but draws upon some neuroscience, fits together with a predictive view of emotion, and seems practically useful.
ChristianKl, 4y
There are cases where the word rationality gets used in such a way, but it's not how the word gets used in this community.

I think you make a mistake when you try to reduce emotions to spikes in neurotransmitters. Interacting with emotions via Gendlin's Focusing suggests that emotions reflect subagents that are more complex than neurotransmitters. Emotions also seem to come with motor-cortex activity, as they can be felt to be located in body parts. Given plausible reports that they can be felt in amputated body parts as well, a main chunk of the process will be in the motor cortex instead of in the actual part of the body where the emotion is felt.

The fact that an emotional label can produce a fit in Gendlin's Focusing suggests that "anger" is more than just a coarse label.

I'm myself neither deeply into machine learning nor into neuroscience, and I don't know of someone who cares about both towards whom I could point you. That said, if you have ideas, writing them up on LessWrong is likely welcome and might get people to give you valuable feedback.
AiresJL, 4y
Thanks. I'll just point out that the coarse label is the human intuition and the mistake. There is no such label. The instance of anger is a complex encoding of information relating not to "subagents" but to something more fundamental: your "action set." The coarse resolution of anger is a linguistic one; biologically, anger does not exist in any form you or I are familiar with.
ChristianKl, 4y
It seems to me hard to explain why an emotion such as anger might release itself when the corresponding emotion subagent gets heard in Gendlin's Focusing, if anger is not related to subagents.

It sounds to me like you are calling something anger that is not the kind of thing most people mean when they say anger. If you borrow a word like anger to talk about something biological, and the biological thing does not match what people mean by the term, it suggests that you should rather use a new word for the biological thing you want to talk about.

Not that far. I'm still quite wrong.

The Cartesian boundary is, while intellectually seen through, not experientially seen through most of the time.

2 comments
  • Eliezer'09 -> Eliezer'20
    • HPMOR -> Less Wrong -> AI to Zombies

It seems you have the order wrong. AI to Zombies is a compilation of the sequences, which started to get written over at Overcoming Bias before the founding of LessWrong. HPMOR was written when Eliezer had mostly stopped engaging with LessWrong.

Why is there high certainty that talking to people who are not walking the path tells you that you are walking the path?

Why, when two people meet, is there a need to establish a hierarchy of epistemic authority? Knowing who is further along won't actually make you progress on the path.

While it is a common strategy to use knowledge to improve manipulation, there might be strategies where you get control beyond your knowledge. Plants can photosynthesize without knowing about quantum theory, despite using quantum effects. If you have control beyond knowledge, you are very likely to have unintended side effects. Sure, in the limit of "all side effects" the two can seem to converge. But what "all side effects" includes will depend on how you model the world. Thus there can be side effects you are unable to model. Thus knowledge can lag control. Thus the two are not guaranteed to converge.