jollybard

Comments

Additive Operations on Cartesian Frames

Fantastic work!

How do we express the way that the world might be carved up into different agent-environment frames while still remaining "the same world"? The dual functor certainly works, but how about other ways to carve up the world? Suppose I notice a subagent of the environment; can I switch perspective to it?

Also, I am guessing that an "embedded" Cartesian frame might be one where W = A × E, i.e. where the world is just the agent along with the environment. Or something. Then, since we can iterate the choice function, it could represent time steps. Though we might in fact need sequences of agents and environments. Anyway, I can't wait to see what you came up with.

Gödel Incompleteness: For Dummies

There are two theorems. You're correct that the first theorem (that there is a true but unprovable sentence) is generally proved by constructing a sort of liar's paradox, and then the second (that the system cannot prove its own consistency) is proved by repeating the proof of the first internally.
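
In symbols, the standard statements (added here for reference, not part of the original comment): the first theorem produces a self-referential sentence $G$ such that

$$F \vdash G \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner),$$

and the second states that if $F$ is consistent, then $F \nvdash \mathrm{Con}(F)$.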

However, I chose to take the reverse route for a more epistemological flavour.

Gödel Incompleteness: For Dummies

But we can totally prove it to be consistent from the outside (for instance, ZFC proves the consistency of Peano arithmetic). Its sanity isn't necessarily suspect, only its own claim of sanity.

If someone tells you something, you don't take it at face value; you first verify that the thought process used to generate it was reliable.

Gödel Incompleteness: For Dummies

You are correct. Maybe I should have made that clearer.

My interpretation of the impossibility is that the formal system is self-aware enough to recognize that no one would believe it anyway (it can make a model of itself, and recognizes that it wouldn't even believe it if it claimed to be consistent).

Let There be Sound: A Fristonian Meditation on Creativity

It's essentially my jumping off point, though I'm more interested in the human-specific parts than he is.

The Parable of Predict-O-Matic

The relevance that I'm seeing is that of self-fulfilling prophecies.

My understanding of FEP/predictive processing is that you're looking at brains/agency as a sort of thermodynamic machine that reaches equilibrium when its predictions match its perceptions. The idea is that two routes are available for minimizing prediction error: you can update your beliefs to fit the world, or you can change the world to fit your beliefs. That means that there might not be much difference at all between belief, decision and action. If you want to do something, you just, by some act of will, believe really hard that it should happen, and let thermodynamics run its course.
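
A minimal toy sketch of that two-route picture (my own illustration, not Friston's formalism; the quadratic "free energy" and the step sizes are invented):

```python
# Toy "free energy": squared prediction error between a belief and the world.
def prediction_error(belief, world):
    return (belief - world) ** 2

belief, world = 0.0, 5.0
lr = 0.1
for _ in range(200):
    grad_belief = 2 * (belief - world)  # dE/d(belief)
    grad_world = 2 * (world - belief)   # dE/d(world)
    belief -= lr * grad_belief          # route 1: perception, update the belief
    world -= 0.5 * lr * grad_world      # route 2: action, nudge the world

print(prediction_error(belief, world))  # ~0: belief and world have converged
```

Both updates descend the same error, so the system equilibrates wherever belief and world meet; weighting the two step sizes differently shifts how much of the work is done by "changing your mind" versus "changing the world".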

More simply put, changing your mind changes the state of the world by changing your brain, so it really is a kind of action. In the case of Predict-O-Matic, its predictions literally influence the world, since people follow its prophecies, and yet it still has to make accurate predictions; so in order to have accurate beliefs it actually has to choose one of many possible prediction-outcome fixed points.
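
A toy illustration of that fixed-point picture (my own sketch; the linear outcome function is invented, nothing from the story):

```python
# A prediction that influences the outcome it predicts: accuracy means
# finding p with outcome(p) == p, i.e. a self-fulfilling fixed point.
def outcome(prediction):
    # invented response of the world to a published prediction
    return 0.5 * prediction + 2.0

p = 0.0
for _ in range(50):
    p = outcome(p)  # iterate prediction -> outcome -> new prediction

print(p)  # converges to the fixed point p = 4.0, where 0.5 * 4 + 2 == 4
```

With a different outcome function there can be several such fixed points, and "choosing what to predict" amounts to choosing among them.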

Now, FEP says that, for living systems, all choices are like this. The only choice we have is which fixed point to believe in.

I find the basic ideas of FEP pretty compelling, especially because there are lots of similar theories in other fields (e.g. good regulators in cybernetics, internal models in control systems, and, in my opinion, Löb's theorem as a degenerate case). I haven't looked into the formalism yet. I would definitely not be surprised to see errors in the math, given that it's very applied-math flavoured and yet very theoretical.

The Parable of Predict-O-Matic

Excellent post; it echoes much of my current thinking.

I just wanted to point out that this is very reminiscent of Karl Friston's free energy principle.

The reward-based agent’s goal was to kill a monster inside the game, but the free-energy-driven agent only had to minimize surprise. [...] After a while it became clear that, even in the toy environment of the game, the reward-maximizing agent was “demonstrably less robust”; the free energy agent had learned its environment better.

Why We Age, Part 2: Non-adaptive theories

Mammals and birds tend to grow, reach maturity, and stop growing. Conversely, many reptile and fish species keep growing throughout their lives. As you get bigger, you can not only defend yourself better (reducing your extrinsic mortality), but also lay more eggs.

So, clearly, we must have the same for humans. If we became progressively larger, women could carry twins and n-tuplets more easily. Plus, our brains would get larger, too, which could allow for a gradual increase in intelligence during our whole lifetimes.

Ha ha, just kidding: presumably intelligence is proportional to the ratio of brain size to body size, which would remain constant, or might even decrease...

Why Rationalists Shouldn't be Interested in Topos Theory

I'm not sure that probabilities should be understood as truth values. I cannot prove it, but my gut feeling is telling me that they are two different things altogether.

My feeling is that the arguments I give above are pretty decent reasons to think that they're not truth values! As I wrote: "The thesis of this post is that probabilities aren't (intuitionistic) truth values."
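
One concrete way to see the mismatch (my own illustration, not from the original post): truth values compose truth-functionally, while probabilities do not. Take $P(A) = P(B) = 1/2$; then

$$P(A \wedge B) = \begin{cases} 1/2 & \text{if } B = A, \\ 0 & \text{if } B = \neg A, \end{cases}$$

so $P(A \wedge B)$ is not determined by $P(A)$ and $P(B)$ alone, whereas in a Heyting algebra the value of $A \wedge B$ is always the meet of the values of $A$ and $B$.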
