Michael Edward Johnson

Michael Edward Johnson's Comments

Neural Annealing: Toward a Neural Theory of Everything (crosspost)

Right-- one question that Milan Griffes asked me was, "how can you tell if you should trust your aesthetic?"

Presumably integration practices should make it more trustworthy, but it would be nice to have a good heuristic for when it's trustworthy (and when pushing with meditation/psychedelics might be safe) versus when it's untrustworthy (and when they would be a bad idea).

Neural Annealing: Toward a Neural Theory of Everything (crosspost)

CSHWs offer pretty compelling Schelling points for the brain to self-organize around, and there's a *lot* of information in a dynamic power distribution among harmonic modes. We might distinguish CSHW-the-framework from CSHW-the-fMRI-method.

Neural Annealing: Toward a Neural Theory of Everything (crosspost)

Agreed! The current version of CSHW depends on fairly high-field fMRI, which is still somewhat new. There may be ways to adapt the concept to EEG, although that will take pretty advanced modeling and a lot of validation.

The real answer, though, might be that it's only now that we're starting to clearly see the limits of the functional-localization paradigm in neuroscience, and the need for something like CSHW. I'm reminded of this paper: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268

Selen evidently got a huge amount of pushback from 'old guard' skeptics on her framework, and almost didn't survive it professionally. So I might point to political/factional factors.

[Link] Is the Orthogonality Thesis Defensible? (Qualia Computing)

Hi Donald- author of opentheory.net here. Really appreciate the thoughtful comment. A few quick notes:

  • I definitely (and very strongly) do not "predict that agents that believe in open individualism will always cooperate in prisoner's dilemmas." As I said in the OP, "an open individualist who assumes computationalism is true (team bits) will have a hard time coordinating with an open individualist who assumes physicalism is true (team atoms) — they’re essentially running incompatible versions of OI and will compete for resources." I would say OI implies certain Schelling points, but I don't think an agent that believes in OI must always cooperate, largely because of the ambiguity in what a 'belief' may be; there's a lot of wiggle-room here. Best to look at the implementation.
  • I think the overall purpose of discussing these definitions of personal identity is, first, to dissolve confusion (and perhaps to see how tangled up the 'Closed Individualism' cluster is); and second, to try to decipher the Schelling points for each theory of identity. We only get predictions indirectly, from this latter exercise; mostly this is a definitional one.