Michael Edward Johnson

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

“By their fruits you shall know them.”

A frame I trust in these discussions is trying to elucidate the end goal. What does knowledge about consciousness look like under Eliezer’s model? Under Jemist’s? Under QRI’s?

Let’s say you want the answer to this question enough you go into cryosleep with the instruction “wake me up when they solve consciousness.” Now it’s 500, or 5000, or 5 million years in the future and they’ve done it. You wake up. You go to the local bookstore analogue, pull out the Qualia 101 textbook and sit down to read. What do you find in the pages? Do you find essays on how we realized consciousness was merely a linguistic confusion, or equations for how it all works?

As I understand Eliezer’s position, consciousness is both (1) a linguistic confusion (a leaky reification) and (2) the seat of all value. There seems to be a tension here that would be good to resolve, since otherwise the goal of consciousness research is unclear. I notice I’m putting words in people’s mouths and would be glad if the principals could offer their own takes on “what future knowledge about qualia looks like.”

My own view is that if we opened that hypothetical textbook we would find crisp equations of consciousness, with deep parallels to the equations of physics; in fact the equations may be the same, just projected differently.

My view on the brand of physicalism I believe in, dual aspect monism, and how it constrains knowledge about qualia: https://opentheory.net/2019/06/taking-monism-seriously/

My arguments against analytic functionalism (which I believe Eliezer’s views fall into): https://opentheory.net/2017/07/why-i-think-the-foundational-research-institute-should-rethink-its-approach/

Common knowledge about Leverage Research 1.0

Goal factoring is another technique that comes to mind, but people who worked at CFAR or Leverage would know the ins and outs of the list better than I do.

Common knowledge about Leverage Research 1.0

Speaking personally, based on various friendships with people within Leverage, attending a Leverage-hosted neuroscience reading group for a few months, and attending a Paradigm Academy weekend workshop:

I think Leverage 1.0 was a genuine good-faith attempt at solving various difficult coordination problems. I can’t say whether they succeeded or failed; Leverage didn’t obviously hit it out of the park, but I feel they were at least wrong in interesting, generative ways that were uncorrelated with the standard and more ‘boring’ ways most institutions are wrong. Lots of stories I heard sounded weird to me, but most interesting organizations are weird and have fairly strict IP protocols, so I mostly withhold judgment.

The stories my friends shared did show a large focus on methodological experimentation, which has benefits and drawbacks. Echoing some of the points, I do think that when experiments are done on people and they fail, there can be a real human cost. I suspect some people did have substantially negative experiences from this. There’s probably also a very large set of experiments where the result was something like, “I don’t know if it was good, or if it was bad, but something feels different.”

There’s quite a lot about Leverage that I don’t know and can’t speak to, for example the internal social dynamics.

One item that my Leverage friends were proud to share is that Leverage organized the [edit: precursor to the] first EA Global conference. I was overall favorably impressed by the content in the weekend workshop I did, and I had the sense that to some degree Leverage 1.0 gets a bad rap simply because they didn’t figure out how to hang onto credit for the good things they did do for the community (organizing EAG, inventing and spreading various rationality techniques, making key introductions). That said, I didn’t like the lack of public output.

I’ve been glad to see Leverage 2.0 pivot to progress studies, as it seems to align more closely with Leverage 1.0’s core strength of methodological experimentation, while avoiding the pitfalls of radical self-experimentation.

Would the world have been better if Leverage 1.0 hadn’t existed? My personal answer is a strong no. I’m glad it existed and was unapologetically weird and ambitious in the way it was, and I give its leadership serious points for trying to build something new.

A Primer on the Symmetry Theory of Valence

Hi Steven,

This is a great comment and I hope I can do it justice (I took an overnight bus and am somewhat sleep-deprived).

First I’d say that no one, ourselves included, has a full theory of consciousness. I.e., we’re not yet at the point where we can look at a brain and derive an exact mathematical representation of what it’s feeling. I would suggest thinking of STV as a piece of this future full theory of consciousness, one which I’ve tried to optimize for compatibility by remaining agnostic about certain details.

One such detail is the state space: if we knew the mathematical space consciousness ‘lives in’, we could zero in on symmetry metrics optimized for this space. Tononi’s IIT, for instance, suggests it’s a vector space, but I think it would be a mistake to assume IIT is right about this. Graphs assume less structure than vector spaces, so it’s a little safer to speak about symmetry metrics in graphs.
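
To make “symmetry metrics in graphs” slightly more concrete, here is a minimal toy sketch in Python (purely my illustration; spectral degeneracy is just one crude, assumed proxy, not a metric STV commits to). A highly symmetric ring graph has many repeated adjacency eigenvalues, while a generic random graph mostly does not:

```python
import numpy as np

def spectral_degeneracy(adjacency: np.ndarray, tol: float = 1e-6) -> float:
    """Crude symmetry proxy: fraction of (near-)repeated adjacency eigenvalues.
    Highly symmetric graphs (rings, complete graphs) have degenerate spectra;
    generic random graphs mostly do not."""
    eigvals = np.sort(np.linalg.eigvalsh(adjacency))
    repeated = np.sum(np.diff(eigvals) < tol)
    return repeated / (len(eigvals) - 1)

def ring_graph(n: int) -> np.ndarray:
    """Adjacency matrix of an n-node ring (a very symmetric graph)."""
    a = np.zeros((n, n))
    for i in range(n):
        a[i, (i + 1) % n] = a[(i + 1) % n, i] = 1.0
    return a

rng = np.random.default_rng(0)
n = 20
upper = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
random_graph = upper + upper.T            # symmetrized random (Erdos-Renyi) graph

print("ring symmetry proxy:  ", spectral_degeneracy(ring_graph(n)))
print("random symmetry proxy:", spectral_degeneracy(random_graph))
```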

Another ‘move’ motivated by compatibility is STV’s focus on the mathematical representation of phenomenology, rather than on patterns in the brain. STV is not a neuro theory, but a metaphysical one. I.e., assuming that in the future we can construct a full formalism for consciousness, and thus represent a given experience mathematically, the symmetry in this representation will hold an identity relationship with pleasure.

Appreciate the remarks about Smolensky! I think what you said is reasonable and I’ll have to think about how that fits with e.g. CSHW. His emphasis is of course language and neural representation, which are very different domains.

>(Also, not to gripe, but if you don't yet have a precise definition of "symmetry", then I might suggest that you not describe STV as a "crisp formalism". I normally think "formalism" ≈ "formal" ≈ "the things you're talking about have precise unambiguous definitions". Just my opinion.)

I definitely understand this. On the other hand, STV should basically have zero degrees of freedom once we do have a full formal theory of consciousness. I.e., once we know the state space, have example mathematical representations of phenomenology, have defined the parallels between qualia space and physics, etc., it should be obvious what symmetry metric to use. (My intuition is that we’ll import it directly from physics.) In this sense it is a crisp formalism. However, I take your objection: more precisely, it’s a dependent formalism, dependent upon something that doesn’t yet exist.

>(FWIW, I think that "pleasure", like "suffering" etc., is a learned concept with contextual and social associations, and therefore won't necessarily exactly correspond to a natural category of processes in the brain.)

I think one of the most interesting questions in the universe is whether you’re right, or whether I’m right! :) Definitely hope to figure out good ways of ‘making beliefs pay rent’ here. In general I find the question of “what are the universe’s natural kinds?” to be fascinating.

A Primer on the Symmetry Theory of Valence

Hi Steven, amazing comment, thank you. I’ll try to address your points in order.

0. I get your Mario example, and totally agree within that context; however, this conclusion may or may not transfer to brains, depending on e.g. how they implement utility functions. If the brain is a ‘harmonic computer’ then it may be doing e.g. gradient descent in such a way that the state of its utility function can be inferred from its large-scale structure.

1. On this question I’ll gracefully punt to lsusr’s comment :) I endorse both his comment and framing. I’d also offer that dissonance is in an important sense ‘directional’: if you have a symmetrical network and something breaks its symmetry, the new network pattern is not symmetrical, and this break in symmetry allows you to infer where the ‘damage’ is. An analogy: a spider’s web starts out highly symmetrical, but its vibrations become asymmetrical when a fly bumbles along and gets stuck. The spider can infer where the fly is on the web based on the particular ‘flavor’ of the new vibrations. (A toy numerical sketch of this appears below, after these replies.)

2. Complex question. First I’d say that STV as technically stated is a metaphysical claim, not a claim about brain dynamics. But I don’t want to hide behind this; I think your question deserves an answer. This perhaps touches on lsusr’s comment, but I’d add that if the brain does tend to follow a symmetry gradient (following e.g. Smolensky’s work on computational harmony), it likely does so in a fractal way. It will have tiny regions which follow a local symmetry gradient, it will have bigger regions which span many circuits where a larger symmetry gradient will form, and it will have brain-wide dynamics which follow a global symmetry gradient. How exactly these different scales of gradients interact is a very non-trivial thing, but I think it gives at least a hint as to how information might travel from large scales to small, and from small to large.

3. I think my answer to (2) also addresses this.

4. I think, essentially, that we can both be correct here. STV is intended to be an implementational account of valence; as we abstract away details of implementation, other frames may become relatively more useful. However, I do think that e.g. talk of “pleasure centers” involves a potential infinite regress: what ‘makes’ something a pleasure center? A strength of STV is that it fundamentally defines an identity relationship.
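
Since the spiderweb analogy in (1) is the most concrete of these points, here is the toy numerical sketch I referenced there (again, purely my own illustration, using an assumed ring-of-coupled-nodes ‘web’): perturbing one node of a symmetric network pushes one vibration frequency out of the ring’s band, and that out-of-band mode is spatially localized at the perturbation, which is what lets an observer infer where the symmetry broke.

```python
import numpy as np

def ring_laplacian(n: int) -> np.ndarray:
    """Graph Laplacian of a ring of identically coupled nodes: a stand-in
    for a perfectly symmetric 'web'."""
    lap = 2.0 * np.eye(n)
    for i in range(n):
        lap[i, (i + 1) % n] = lap[(i + 1) % n, i] = -1.0
    return lap

n = 24
fly_at = 7                                # where the 'fly' lands

web = ring_laplacian(n)                   # all nodes equivalent by symmetry
perturbed = web.copy()
perturbed[fly_at, fly_at] += 5.0          # local change breaks the symmetry at one node

vals_before = np.linalg.eigvalsh(web)
vals_after, modes_after = np.linalg.eigh(perturbed)

# The broken symmetry is audible globally: the top vibration frequency is
# pushed out of the unperturbed ring's band, changing the 'flavor' of the hum.
print("top frequency before/after:", vals_before[-1], vals_after[-1])

# That out-of-band 'defect' mode is localized at the perturbation site,
# which is what lets the spider infer *where* the fly is.
defect_mode = modes_after[:, -1]
print("inferred fly position:", int(np.argmax(np.abs(defect_mode))), "(actual:", fly_at, ")")
```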

I hope that helps! Definitely would recommend lsusr’s comments, and just want to thank you again for your careful comment.

A Primer on the Symmetry Theory of Valence

Neural Annealing is probably the most current actionable output of this line of research. The actionable point is that the brain sometimes enters high-energy states which are characterized by extreme malleability; basically old patterns ‘melt’ and new ones reform, and the majority of emotional updating happens during these states. Music, meditation, and psychedelics are fairly reliable artificial triggers for entering these states. When in such a malleable state, I suggest the following:

>Off the top of my head, I’d suggest that one of the worst things you could do after entering a high-energy brain state would be to fill your environment with distractions (e.g., watching TV, inane smalltalk, or other ‘low-quality patterns’). Likewise, it seems crucial to avoid socially toxic or otherwise highly stressful conditions. Most likely, going to sleep as soon as possible without breaking flow would be a good strategy to get the most out of a high-energy state; the more slowly you can ‘cool off’ the better, and there’s some evidence annealing can continue during sleep. Avoiding strong negative emotions during such states seems important, as does managing your associations (psychedelics are another way to reach these high-energy states, and people have noticed there’s an ‘imprinting’ process where the things you think about and feel while high can leave durable imprints on how you feel after the trip). It seems plausible that taking certain nootropics could help strengthen (or weaken) the magnitude of this annealing process.

(from The Neuroscience of Meditation)
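
For readers unfamiliar with the annealing metaphor this borrows from, here is a minimal simulated-annealing toy in Python (a generic textbook sketch with assumed parameters, not QRI code and not a neural model): at high ‘temperature’ the state escapes the shallow basin it started in, and as the temperature drops it settles into a deeper basin and stays there.

```python
import numpy as np

def anneal(energy, state, steps=20_000, t_start=2.0, t_end=0.01, rng=None):
    """Textbook simulated annealing: at high temperature the state hops freely
    between basins ('old patterns melt'); as the temperature drops it settles
    into a deep basin ('new patterns re-form and lock in')."""
    rng = rng or np.random.default_rng(0)
    for step in range(steps):
        temperature = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
        candidate = state + rng.normal(scale=0.5)
        delta = energy(candidate) - energy(state)
        if delta < 0 or rng.random() < np.exp(-delta / temperature):  # Metropolis rule
            state = candidate
    return state

# A double-well 'pattern landscape': shallow basin near x = -1, deeper basin near x = +1.
landscape = lambda x: (x**2 - 1) ** 2 - 0.3 * x

start = -1.0                               # begin stuck in the shallow basin
final = anneal(landscape, start)
print(f"start   : x = {start:+.2f}, energy = {landscape(start):+.2f}")
print(f"annealed: x = {final:+.2f}, energy = {landscape(final):+.2f}")
```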

Nonspecific discomfort

Here’s @lsusr describing the rationale for using harmonics in computation — my research is focused on the brain, but I believe he has a series of LW posts describing how he’s using this frame for implementing an AI system: https://www.lesswrong.com/posts/zcYJBTGYtcftxefz9/neural-annealing-toward-a-neural-theory-of-everything?commentId=oaSQapNfBueNnt5pS&fbclid=IwAR0dpMyxz8rEnunCbLLYUh1l2CrjxRhNsQT1h_qdSgmOLDiVx5-G-auThTc

Symmetry is a central (if not the central) Schelling point if one is in fact using harmonics for computation. I.e., I believe that if one actually built a robot around the computational principles the brain uses, one that gathered apples and avoided tigers, it would tacitly follow a symmetry gradient.
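
As a toy illustration of what ‘following a symmetry gradient’ can mean for a system computing with harmonics, here is a minimal Kuramoto-oscillator sketch in Python (my own illustration with assumed parameters, not lsusr’s implementation): a population of coupled oscillators starts with disordered phases and drifts toward a synchronized, more symmetric configuration, and the degree of order can be read off from a single coherence number.

```python
import numpy as np

rng = np.random.default_rng(0)
n, coupling, dt, steps = 100, 1.5, 0.05, 400

phases = rng.uniform(0, 2 * np.pi, n)     # start disordered (low symmetry)
freqs = rng.normal(0.0, 0.3, n)           # slightly heterogeneous natural frequencies

def coherence(theta):
    """Kuramoto order parameter: ~0 = incoherent, ~1 = fully synchronized."""
    return np.abs(np.mean(np.exp(1j * theta)))

print("coherence at start:", round(coherence(phases), 2))
for _ in range(steps):
    # Mean-field Kuramoto update: each oscillator is pulled toward the
    # population's average phase, i.e. d(theta_i)/dt = w_i + K*r*sin(psi - theta_i).
    mean_field = np.mean(np.exp(1j * phases))
    phases += dt * (freqs + coupling * np.abs(mean_field)
                    * np.sin(np.angle(mean_field) - phases))
print("coherence at end:  ", round(coherence(phases), 2))
```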

Nonspecific discomfort

I think this is very real. It’s also important to note that non-specific joy exists and can be reliably triggered by certain chemicals.

My inference from this is that preferences are a useful but leaky reification, and if we want to get to ‘ground truth’ about comfort and discomfort, we need a frame that emerges cleanly from the brain’s implementation level.

This is the founding insight behind QRI — see here for a brief summary https://opentheory.net/2021/07/a-primer-on-the-symmetry-theory-of-valence/

Qualia Research Institute: History & 2021 Strategy

Hi Charlie,

To compress a lot of thoughts into a small remark: I think both possibilities (that consciousness is like electromagnetism in that it has some deep structure to be formalized, and that consciousness is like élan vital in that it lacks any such deep structure) are live. What's most interesting to me is doing the work that will give us evidence about which of these worlds we live in. There are a lot of threads mentioned in my first comment that I think can generate value/clarity here; in general I'd recommend brainstorming "what would I expect to see if I lived in a world where consciousness does, vs does not, have a crisp substructure?"
