This post parses to meaning pretty fully for me, but I'm somewhat familiar with Mark's writings.

In case it helps anyone else, here are the key points as I read them:

  1. Hypothesis: the mind is highly neuroplastic in the long term, capable of arbitrarily "large" error corrections; there are also momentary moves it can do to encode more than one piece of information in the same bit of network. Inference: while arbitrary neuroplasticity and error correction may be possible in the limit, locally this looks like making a series of highly constrained changes, like working a sliding puzzle (see the puzzle sketch after this list). We probably have some particular neural mechanism handling these updates.
     1b) These updates look like working through the layers of encoding one at a time, making each more faithful/useful: "deconvolution" was a helpful word for me here (see the second sketch after the list). (I'm not quite capturing the meaning of this point, but it's also hard to do so without drawing diagrams.)

  2. Hypothesis: the mind doesn't have noise in it, only mis-encoded signal, i.e. error. (Together with 1, this makes it possible to "error-correct": locally at first, and eventually globally with enough repeated local work.)

  3. Hypothesis: there aren't "separate representations" for memories, ontologies, models, etc.: it's the same kind of network (perception => action) all the way down, just with lots and lots of layers (which contain information abstractions) in the middle. Inference: you can use the same kinds of "moves" as in 1b) to error-correct the whole thing, eventually; you don't need different "moves" for different kinds of "stuff".
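
To make the sliding-puzzle constraint in point 1 concrete, here's a toy sketch (my own illustration with a made-up scrambled board, not anything from Mark's writing): an 8-puzzle, where any solvable global rearrangement is reachable, but only through a series of legal local moves.

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank square

def legal_moves(state):
    """Yield every state reachable by one constrained local move."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]  # slide one adjacent tile into the blank
            yield tuple(s)

def moves_to_solve(start):
    """Breadth-first search for the shortest series of local moves."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == GOAL:
            return depth
        for nxt in legal_moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None  # the unsolvable half of permutation space

# A heavily scrambled board: "error" everywhere, yet fully correctable
# through a long series of constrained local moves.
print(moves_to_solve((8, 6, 7, 2, 5, 4, 3, 0, 1)))
```

The point of the analogy: the end state can be arbitrarily far from the start state, but every intermediate step is tightly constrained, which (as I read it) is why the local work feels so incremental.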
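
And for the "deconvolution" word in 1b), a minimal sketch of what peeling off one layer of encoding can look like, assuming (my assumption, not Mark's formalism) that a layer acts like a convolution with some kernel:

```python
import numpy as np
from scipy.signal import deconvolve

kernel = np.array([1.0, 0.5, 0.25])       # hypothetical encoding a layer applies
message = np.array([0.0, 1.0, 0.0, 2.0])  # the underlying signal
encoded = np.convolve(message, kernel)    # what the layer actually "stores"

# Undo exactly one layer of encoding to recover a more faithful signal.
recovered, remainder = deconvolve(encoded, kernel)
print(np.allclose(recovered, message))    # True: one layer peeled off cleanly
```

Repeating that layer by layer is, as I read 1b), the shape of the update: each pass makes the representation at one depth more faithful without rewriting everything at once.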