In the real world, there are no "objects" in the way we are used to feeling them. When working with computers, we map "objects" to bytes by compiling programs, and we use the word "object" for different things: variables, functions, and so on. If you look at your chair from the point of view of physics, there is no "chair object"; it is something far more complicated. But the "objectivization" of that complex "something" lets you ignore all this philosophical stuff when you want to sit on it. So I decided to find out what represents an "object" in the world of neurons.

I had been playing with texts, words, letters, and optical illusions for months, driving my friends and colleagues mad. I created quizzes. I asked them to finish phrases or to read words without some lttrs. I gave them examples like "theory epigenetics memory" versus "Epigenetics memory theory," or "relativity special" versus "Special relativity."

I found that they described a well-known pattern as "one object," but if I broke the arrangement and decoupled it, they described the same material as a "set of objects."

Decoupling and a broken arrangement. That sounds a lot like Hebb's rule!

To represent the model of each letter you are reading on your screen, we need thousands of neurons. Maybe more; I haven't tested it yet. But there are about 80 billion of them in the brain. Even at, say, ten thousand neurons per letter, that leaves room for millions of such patterns, so with that amount of free memory, thousands won't make a big deal.

And if two neurons can wire together, there should be strongly coupled subnetworks of thousands of them. And if we activate a large enough part of such a subnetwork, the whole thing will light up.

If strongly coupled subnetworks exist, the connections between them should be weaker than the connections within them. If we lift Hebb's rule to the subnetwork level, it explains why people count "Special relativity" as one object but "relativity special" as two: one is a pattern that fires without any noticeable delay, and the other is not.
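Here is a toy sketch of that "activate enough parts and the whole lights up" behavior. It uses a Hopfield-style network, which is only one possible formalization of Hebb's rule, and all the numbers (network size, cue fraction, iteration count) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                                  # neurons in the toy subnetwork
pattern = rng.choice([-1, 1], size=N)    # one stored "object" (+1 = firing)

# Hebb's rule: neurons that fire together get a positive weight between them.
W = np.outer(pattern, pattern) / N
np.fill_diagonal(W, 0)

# Cue the network with only ~30% of the pattern; the rest is noise.
cue = rng.choice([-1, 1], size=N)
keep = rng.random(N) < 0.3
cue[keep] = pattern[keep]

# Let each neuron repeatedly follow its weighted input.
state = cue.copy()
for _ in range(10):
    state = np.sign(W @ state)

overlap = np.mean(state == pattern)
print(f"recovered {overlap:.0%} of the stored pattern from a ~30% cue")
```

With a single stored pattern this recovery is almost guaranteed; the point is only that a fragment of an "object" can reactivate the whole of it.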

It also explains why we see optical illusions: they provide just enough information to activate some patterns in our brain. And yep, that's why cartoon characters look normal; they trigger only a specific subset of patterns. The interesting thing is that if you try to add more details to them and break the template, it ruins all the magic. If you met a real person with hair behind their eyes, I bet it would be scary rather than cute.

I was proud of myself. I had made observations. I had created a theory. I had explained to myself what an "object" is.

An object, or the "object feeling," is the result of the activation of a strongly coupled subnetwork in the brain.

For objects, the rules stay nearly the same as for single neurons: Hebb's rule, E-LTP, and L-LTP all still apply.

We have defined objects, and now we have the right to use them. "Object" is no longer a magic word. And this gives us Power: we can forget about neural subnetworks. If we know how objects behave, there is no need to remember how they work inside.

We can use them as our building blocks, at least as long as the approximation holds. If it stops holding, we can decompose them down to the implementation level and work there.

And if you have a feeling that I am missing something, good catch. It took me much longer to notice it.

What's the problem:

Let's take two objects that act for us like one object. If "fire together, wire together" is all there is, how do we explain that this new object can have a completely different meaning for us? It should be something like a summation of the parts' "meanings," but instead we get something entirely different. What's more, we can rearrange the objects and, based on the same set, get two objects with different meanings: E-M-I-R and R-I-M-E, and other anagrams, for example.
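The gap is easy to demonstrate. If "firing together" just meant summing the parts' activation patterns, every anagram would produce exactly the same combined pattern, so the sum alone cannot carry a new meaning. A minimal check (the letter patterns here are random placeholders, not real neural data):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up activation pattern (1 = firing) for each letter's subnetwork.
letters = {c: rng.choice([0, 1], size=100) for c in "EMIR"}

def summed_activation(word):
    # Naive combination rule: just add up the parts' patterns.
    return sum(letters[c] for c in word)

print(np.array_equal(summed_activation("EMIR"),
                     summed_activation("RIME")))  # True: order is lost
```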

There is a blank space in my model here. I am sure that to create and retrieve a "new object" from a set of others, we need a proper arrangement of activation. I am sure that the new object behaves independently of its parts. But I don't know how to explain this in the language of neuron activations.

I have an idea that this is because the connection between subnetworks is not a single axon but thousands of neurons. Activating two connected subnetworks activates different "neuronal paths" between them, and those paths are the new objects.
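One way to make that speculation concrete: if the first subnetwork's input has partly decayed by the time the second one fires, then which intermediate neurons cross their threshold depends on the firing order. A toy sketch, with every constant invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

N_path = 200                         # intermediate neurons between A and B
w_from_A = rng.random(N_path)        # synapse strength from subnetwork A
w_from_B = rng.random(N_path)        # synapse strength from subnetwork B
decay = 0.5                          # earlier input has partly faded away

def fired_path(order):
    # The subnetwork that fires first contributes a decayed input.
    first, second = (w_from_A, w_from_B) if order == "AB" else (w_from_B, w_from_A)
    potential = decay * first + second
    return potential > 1.0           # neurons crossing threshold = the "path"

path_ab = fired_path("AB")
path_ba = fired_path("BA")
print(np.array_equal(path_ab, path_ba))   # False: different neuron sets fire
print(path_ab.sum(), path_ba.sum())       # similar sizes, different members
```

Whether real cortex does anything like this, I don't know; it only shows that order-sensitive "paths" are not hard to get in principle.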

Good news, everyone: we are building a simple model, so let's skip this part. We will take as a rule that strongly connected objects can organize into one object and behave like one. That's our simplification.

If you have an idea of how this works at the neural network level, it would be great if you shared it.

That's it about objects.

It's Summarizing Time:

We defined an object as a strongly coupled subnetwork. We agreed to use objects as building blocks.

The list of things we know about them (a toy sketch of rules 1-3 follows the list):

  1. Objects wire together nearly like single neurons.
  2. For memorizing objects, we use E-LTP, which applies the previous rule for a short period.
  3. By using L-LTP, we can remember them far longer.
  4. Objects can be combined from other objects if those couple together strongly enough. When that happens, the new object can act as an independent one.
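Here is the promised toy sketch of rules 1-3 at the object level, with an invented decay rate and an invented consolidation threshold standing in for the real E-LTP and L-LTP dynamics:

```python
import dataclasses

@dataclasses.dataclass
class Coupling:
    """Connection strength between two 'objects' (subnetworks)."""
    strength: float = 0.0
    consolidated: bool = False          # has L-LTP kicked in?

    def co_activate(self):
        # Hebb at the object level: firing together strengthens the link.
        self.strength += 1.0
        if self.strength >= 3.0:        # made-up threshold for L-LTP
            self.consolidated = True

    def tick(self):
        # E-LTP fades unless L-LTP has made the coupling durable.
        if not self.consolidated:
            self.strength *= 0.5        # made-up decay rate

link = Coupling()
link.co_activate()                      # seen once: remembered briefly
for _ in range(5):
    link.tick()
print(f"after one exposure: {link.strength:.3f}")   # nearly gone

for _ in range(4):
    link.co_activate()                  # repetition triggers L-LTP
for _ in range(5):
    link.tick()
print(f"after repetition: {link.strength:.3f} (consolidated={link.consolidated})")
```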

But there are a lot of things we don't know:

How does L-LTP decide to start working?

Why don't we memorize things continuously, and how can we focus on one task?

What is a "task" in terms of our model?

Let's get to the processing part.
