Comments

paul · 5y · 20

Seems like the planning process or algorithm is recursive, but the plans themselves are merely hierarchical.

Speaking of recursion in human cognition, I've always wondered if it is implemented in the human brain by what computer scientists (programming language compiler writers, to be specific) call "unrolling," as opposed to true recursion. Many modern compilers, when they detect that a recursive algorithm or simple loop will only nest, say, five times, will generate machine code that unrolls the recursion into a simple linear series of five steps. The brain really can't handle very many levels of recursion, so this may be why: it implements what abstractly requires recursion as a linear sequence, turning the recursion level into a simple sequence index. Nature never (or hardly ever) implements true recursion, as it always stops after a few levels.
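To make the unrolling idea concrete, here's a minimal Python sketch (with a hypothetical `refine` step standing in for one level of real work) of a depth-limited recursive routine and the linear form a compiler could unroll it into:

```python
def refine(plan):
    return plan + "'"   # stand-in for one level of real work

# Depth-limited recursion: abstractly recursive, but known to
# bottom out after three levels.
def refine_recursive(plan, depth=3):
    if depth == 0:
        return plan
    return refine_recursive(refine(plan), depth - 1)

# The unrolled equivalent a compiler could emit: the recursion
# level has become nothing more than a position in a sequence.
def refine_unrolled(plan):
    plan = refine(plan)  # level 1
    plan = refine(plan)  # level 2
    plan = refine(plan)  # level 3
    return plan

assert refine_recursive("p") == refine_unrolled("p")  # both "p'''"
```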

paul · 5y · 40

I am doing AI work (not neural nets) and I'm also a programming language aficionado. I've invented and implemented several special-purpose languages. The role programming languages might play in AI is something I have thought about.

That all said, the place to start is the AI model. It only makes sense to invent a programming language as an aid to humans in expressing designs under a chosen model. In short, you don't start with the language but with the designs you would like to express. The purpose of a language is solely to make designs easier for humans to read and write.
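As a toy illustration of that ordering (model first, notation second), here is a hedged Python sketch; the "model" is just threshold rules, and the "language" is a one-line notation whose only job is readability. The grammar and names are invented for the example:

```python
import re

# Model first: a design is a list of (threshold, action-name) rules.
def run(rules, reading):
    return [action for threshold, action in rules if reading > threshold]

# Language second: a thin notation for the same designs.
# Hypothetical grammar: "when <number> do <word>"
def parse(line):
    m = re.fullmatch(r"when (\d+) do (\w+)", line.strip())
    return (int(m.group(1)), m.group(2))

rules = [parse("when 100 do alarm"), parse("when 80 do warn")]
print(run(rules, 95))  # ['warn']
```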

Answer by paul · Feb 26, 2019 · 30

Any system that takes a huge amount of input data and reduces it to some sort of representation will have input cases it doesn't handle well. The reduction throws away data of a certain, supposedly unimportant, variety, and input cases are bound to exist where the data thrown away by the reduction algorithm are, in fact, important. Visual illusions are such cases for the human visual system. Those who work on autonomous vehicles have to deal with such cases, and humans who understand how such recognition systems work can purposefully construct them in order to "hack" the system. It's a jungle out there.
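A toy Python sketch of the point: a "recognizer" that reduces its input to a single number will treat very different inputs as identical whenever they agree on that number, and anyone who knows the reduction can construct such an input deliberately. The recognizer and threshold here are invented for illustration:

```python
import numpy as np

# Toy recognizer: reduce the whole input to one number (the mean
# intensity) and classify on that alone; all other structure in
# the input is thrown away by the reduction.
def recognize(image):
    return "bright" if image.mean() > 0.5 else "dark"

genuine = np.full((8, 8), 0.9)  # uniformly bright patch

# Constructed case: mostly black, with just enough saturated
# pixels to push the discarded-structure proxy over the threshold.
hacked = np.zeros((8, 8))
hacked.flat[:33] = 1.0          # 33/64 pixels at 1.0 -> mean ~0.52

print(recognize(genuine))  # "bright"
print(recognize(hacked))   # also "bright": it matches the genuine
                           # patch only on the one number kept
```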

paul · 5y · 10

I look at this from a functional point of view. If I were designing an AGI, what role would emotions play in its design? In other words, my concern is to design in emotions, not wait for them to emerge from my AGI. This implies that my AGI needs emotions in order to function more competently. I am NOT designing in emotions in order to better simulate a human, though that might be a design goal for some AGI projects.

So what are emotions, and why would an AGI need them? In humans and other animals, emotions are a global mechanism for changing the creature's behavior for some high-priority task. Fear, for example, readies a human for a fight-or-flight response, sacrificing some things (energy usage) for others (speed of response, focused attention). Such mechanisms may be needed in an AGI I'm designing. A battlefield AGI or robot, for example, might need an analogous fear emotion to respond to a perceived threat (or to do so when instructed by a controlling human).

Obviously, the change brought about by "fear" in my AGI will be different from the six qualities you describe here. For example, my battlefield robot would temporarily suspend any ongoing maintenance activities, which is analogous to a change in its attention. It might rev up its engines in preparation for a fight-or-flight response. Depending on the nature of the threat, it might change the configuration of its sensors; for example, it may turn on a high-resolution radar that is normally off to save energy. Generally, as in humans, emotion is a widespread reallocation of the AGI's resources for a particular perceived purpose.
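In code, that kind of global reallocation might look like the following Python sketch; the class, fields, and values are all hypothetical, chosen only to mirror the fear example above:

```python
# "Fear" as a global reallocation of resources, not a feeling.
class BattlefieldRobot:
    def __init__(self):
        self.maintenance_active = True
        self.engine_power = 0.2      # idle fraction of maximum
        self.hires_radar_on = False  # normally off to save energy

    def enter_fear_mode(self, threat):
        # Suspend low-priority work (a shift of attention).
        self.maintenance_active = False
        # Ready the actuators for fight or flight.
        self.engine_power = 1.0
        # Reconfigure sensors for the perceived threat type.
        if threat == "airborne":
            self.hires_radar_on = True

    def calm_down(self):
        # Reverse the reallocation once the threat passes.
        self.maintenance_active = True
        self.engine_power = 0.2
        self.hires_radar_on = False
```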

Answer by paul · Feb 16, 2019 · 10

Agoric Computing seems like a new name for a very common mechanism employed by many programs in the software industry for decades. It is quite common to want to balance the use of resources such as time, memory, and disk space. Accurately estimating these costs ahead of their use may consume substantial resources by itself, so instead a much simpler formula is associated with each type of resource usage and stands as a proxy for the actual cost. Some kind of control program uses these cost functions to decide how best to allocate tasks and use actual resources. The algorithms that compute costs and manipulate the market can be as simple or as complex as the designer desires.
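A minimal Python sketch of that long-standing pattern, with invented names and a deliberately crude cost formula: each task carries a cheap proxy formula, and a controller allocates a budget from the proxies rather than from expensive measurements:

```python
def estimated_cost(task):
    # A crude proxy (here, linear in input size) standing in for
    # the actual cost, which would be expensive to measure.
    return 10 + 2 * task["input_size"]

def schedule(tasks, budget):
    chosen, spent = [], 0
    # Cheapest-first is one simple "market" rule; a real controller
    # could use bidding, priorities, or anything else.
    for task in sorted(tasks, key=estimated_cost):
        cost = estimated_cost(task)
        if spent + cost <= budget:
            chosen.append(task["name"])
            spent += cost
    return chosen, spent

tasks = [{"name": "index", "input_size": 50},
         {"name": "compress", "input_size": 5},
         {"name": "backup", "input_size": 200}]
print(schedule(tasks, budget=150))  # (['compress', 'index'], 130)
```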

This control program can be thought of as an operating system, but the same scheme might also be applied among tasks within a single process. That could result in markets within markets.

I doubt many software engineers would think of these things in terms of the market analogy. For one thing, they would gain little by constraining their thinking to a market-based system. I suspect many software engineers might be fascinated to think of such things in terms of markets, but only for curiosity's sake. I don't see how this point of view really solves any problems for which they don't already have a solution.