by [anonymous]
6 min read · 17th Jun 2014 · 3 comments


Introduction

(post has been edited on 17.6 18:50 GMT; there is an added part)

I'd like to write my thoughts about structuring the psyche for an AI. However, I am not an expert on computers, programming or AIs, nor even psychology or the human mind in general. In fact I have very little reason to think it should be me writing about this topic. I think that's ok. It's a thread to share thoughts and interact. Part of the problem is socially managing this message. It requires a certain degree of boldness to present your thoughts. I think it also requires a certain degree of uncertainty to improve those thoughts.

The content of this thread will be on a very general and vague level. I would be more specific if I could, but it's hard enough as it is. Perhaps this thread will stimulate thoughts in those who are more adept with the matter. Most of how I think about this originates from my personal theory of mind, which probably is not anything special, and therefore it resembles an attempt at emulating the human brain. According to some posts written on LessWrong, that would perhaps be a socially recognized signal of bad thinking.

None of this probably makes sense. And it's probably too general to have any real meaning. But I hope this post is useful to stimulate thought, if nothing else. I'm a little worried about potential criticism, since I kind of feel like I'm trying to take too large a bite of a cake that's really not meant for me. But that's ok.

Let's move on.

 

Concept of time

I think understanding the concept of time essentially requires arranging things into a series. Objects in the series have an order in the sense that some objects come before and others come later. From this very simple sequential structure, a sense of the passage of time can be constructed by adding the notion that the present is some particular object in the series.
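To make this concrete, here is a minimal Python sketch of what I mean: a series of objects plus a pointer marking the present. All the names are just my own illustration, nothing standard.

```python
# A timeline is just an ordered series of events; "now" is an index into it.
# Everything before the index is the past, everything after is the future.
class Timeline:
    def __init__(self, events):
        self.events = list(events)  # the ordered series of objects
        self.now = 0                # index of the present moment

    def past(self):
        return self.events[:self.now]

    def future(self):
        return self.events[self.now + 1:]

    def advance(self):
        # the passage of time: the present pointer moves along the series
        if self.now < len(self.events) - 1:
            self.now += 1

t = Timeline(["wake", "eat", "work", "sleep"])
t.advance()
print(t.past(), t.events[t.now], t.future())  # ['wake'] eat ['work', 'sleep']
```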

 

Logical sequences as part of sequential reasoning

If you can model sequences with an idea of what is now, later and before, it allows formulating some sort of logic around the idea of moving towards some state in that sequence.

 

Decision making by altering the states of sequences

Understanding decisions, then, requires the idea of influencing an outcome that lies somewhere further along a sequence.

 

Goals and utility functions

Goals are intents of changing something that comes later in the sequence. What counts as a wanted outcome is then based on some kind of utility function. What the function contains is another matter.
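A tiny sketch of the idea, with a made-up placeholder utility (what the function should actually contain is the open question):

```python
# A goal is an intent to change something later in the sequence; whether an
# outcome is "wanted" is judged by some utility function. The function here
# is purely a placeholder for illustration.
def utility(state):
    # hypothetical utility: prefer states where hunger is low
    return -state["hunger"]

def is_wanted(current, candidate):
    # a candidate later state is wanted if it scores higher than now
    return utility(candidate) > utility(current)

current = {"hunger": 7}
candidate = {"hunger": 2}
print(is_wanted(current, candidate))  # True: the goal state scores higher
```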

 

Differentiating state-based reasoning from sequential reasoning

State-based reasoning to me means having an environment with flags or stimuli, things that are loaded into the sequence. For example, a series of pictures would be a sequence, while the content of the pictures would be the state-based environment. You can think of this as a plane in a spatial sense, or as temporary noise.
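Roughly, in code, the distinction I have in mind might look like this (the picture contents are invented for illustration):

```python
# The sequence is the ordering of frames; the state-based environment is what
# each frame contains (flags / stimuli). A film strip vs. a single picture.
frames = [                            # the sequence: order matters here
    {"light": "red",   "cars": 3},    # each dict is a state: order-free flags
    {"light": "green", "cars": 3},
    {"light": "green", "cars": 1},
]
# sequential question: what came just before the first "green" frame?
i = next(i for i, f in enumerate(frames) if f["light"] == "green")
print(frames[i - 1])                  # state-based content of that moment
```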

 

State-based reasoning over time

The idea is to formulate states not as single moments but as short series of moments, in which multiple vague states are interpolated to form a vague series over time. So you don't have, for example, fractions of syllables; instead those fractions are mashed together into syllables, words or entire sentences and so forth. This would also include a reward system's feedback.
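A crude sketch of that mashing-together, assuming a simple boundary marker tells us where one chunk ends:

```python
# States taken not as single moments but as short runs of moments: fragments
# arriving over time are merged into larger chunks (fragments -> words),
# so that reasoning operates on the chunk rather than on the raw frames.
fragments = ["he", "l", "lo", " ", "wo", "r", "ld"]

def chunk(stream, boundary=" "):
    word, words = "", []
    for frag in stream:
        if frag == boundary:      # a boundary closes the current chunk
            words.append(word)
            word = ""
        else:
            word += frag          # interpolate fragments into one state
    if word:
        words.append(word)
    return words

print(chunk(fragments))  # ['hello', 'world']
```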

 

Predicting change in the background

Sequential logic allows thinking forward, but it should also be possible to consider the progression of things as a background function.

 

Qualitatively different content of environment

State-based reasoning should contain all kinds of different things, and more precisely they should be qualitatively different. The different realms of quality can be analogous to the human brain. For example, we can have a spatial stimulus, an emotional stimulus, and so forth.
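In code this might just mean tagging stimuli with their kind, something like this toy sketch:

```python
# Stimuli of qualitatively different kinds, kept apart by an explicit tag,
# loosely analogous to separate processing streams in the brain.
from dataclasses import dataclass

@dataclass
class Stimulus:
    kind: str        # "spatial", "emotional", "auditory", ...
    payload: object  # whatever content that kind of stimulus carries

state = [
    Stimulus("spatial", (3, 4)),      # a position
    Stimulus("emotional", "unease"),  # a feeling
]
spatial = [s for s in state if s.kind == "spatial"]
print(spatial)
```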

 

Goal based reward system, simulation and testing for input

This is another way of saying utility function again. But to be more specific, you can have a search function that tests what the utility function returns, which is sort of like imagining something. For example, as a human being you can think of eating something, and it gives you some pleasure and an imagined stimulus that is somewhat comparable to actually eating. If the outcome seems positive and there is input supporting it, you can proceed towards executing the plan of actually getting to that simulated state.
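So as a sketch: simulate each candidate action with some forward model, score the imagined outcome with the utility function, and only then act. The actions and their effects below are entirely made up.

```python
# "Imagining": simulate each candidate action, score the simulated outcome
# with the utility function, and only then commit to acting.
def simulate(state, action):
    # a crude forward model: eating reduces hunger, walking increases it
    effects = {"eat": -5, "walk": +1, "rest": 0}
    return {"hunger": state["hunger"] + effects[action]}

def utility(state):
    return -state["hunger"]

def choose(state, actions):
    # test what the utility function returns for each imagined outcome
    return max(actions, key=lambda a: utility(simulate(state, a)))

print(choose({"hunger": 6}, ["eat", "walk", "rest"]))  # 'eat'
```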

 

Thoughts with multiple qualitatively different components

Thought should be something that exists over several qualitatively different types of information. For example, you could have a thought that comprises sequential reasoning, environmental or state-based reasoning, spatial reasoning and goal-based reasoning.

 

Agent logic and goal orientation

Agents could be formulated as things that have qualities or functions. Goals are part of sequential logic.

 

Analyzing for functions and qualities

A method of observing objects and linking qualities and functions to them based on different types of information.

 

Managing goals

Some kind of logic for managing different kinds of goals is necessary, so that it can be used to assess the functions and goals of agents. This also means that it will have to manage functions across different information types.

 

Master routines and subroutines - the ability to call subroutines from those qualitatively different types of information handlers

For example, you can think of a picture in which there is a spatial quality: a box trying to achieve something. For that to be possible, the spatial logic needs to be able to use the agent logic to formulate the box as an agent. It also needs to use the sequential and goal-oriented logic to be able to formulate the expression that the box has a goal.

It is also necessary to maintain a structure where there is a primary sequential and state-based logic that tries to do something, and the content of this master logic should be able to use any of the information handlers, so that they can call each other practically indefinitely.
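One crude way to picture that structure in code: a registry of handlers, one per information type, each free to call the others through the registry. Everything here is illustrative.

```python
# A registry of handlers for qualitatively different information types; any
# handler can call any other through the registry, so they can recurse into
# each other practically indefinitely.
handlers = {}

def handler(name):
    def register(fn):
        handlers[name] = fn
        return fn
    return register

@handler("spatial")
def spatial(obj):
    # the spatial logic notices the box is moving and asks the agent logic
    # to interpret it as an agent
    return handlers["agent"](obj) if obj.get("moving") else obj

@handler("agent")
def agent(obj):
    # the agent logic asks the goal logic to attribute a goal to the box
    obj["is_agent"] = True
    return handlers["goal"](obj)

@handler("goal")
def goal(obj):
    obj["goal"] = "reach the other side"
    return obj

print(handlers["spatial"]({"shape": "box", "moving": True}))
```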

 

Tying an agent's qualities to functions, sequential and state-based logic

An object can have several flags for functions, which actually cause something based on sequential logic; but the changes in the sequences are really changes in the state-based logic, and the qualities themselves are the state-based logic of the object.

 

 

 

Added later:

Sequences and logic

I think of sequential logic as the very standard kind of logic: thinking in terms of decision trees and so forth. I believe this is also lateralized to certain brain regions in humans. I think the amygdalae show this difference, but I have not verified that. It would cover rhythm, sequence and the syntactic processing of language.

States and logic

States, on the other hand, I think are about induction over large-scale input. I think some kind of parallel could be drawn to sensing music emotionally versus the pitches of notes, and perhaps also to visuality, like seeing colors as radiant and having a more accurate perception in that sense.

I think an example where these entwine could be if, in visual images, sequential logic is applied to objects to produce spatial dimensions.

 

Neural networks and Bayes

I think this kind of entwining of sequences and states, which should be similar to the thought of humans, could be applied to artificial intelligences also. Maybe not.

More concretely, it could mean, for example, having some sort of neural networks and using Bayesian probabilities to activate nodes based on the number of associations. This could be used to produce the priming effect seen in humans.
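A toy version of what I have in mind, where co-occurrence counts act as crude probability-like weights and "seeing" one concept raises the activation of its associates:

```python
# A toy spreading-activation network: activating one concept raises the
# activation of its associates in proportion to how often they have
# co-occurred -- a crude stand-in for Bayesian weighting by counts,
# and one way the priming effect might fall out.
from collections import defaultdict

assoc = defaultdict(int)  # co-occurrence counts between concept pairs

def associate(a, b):
    assoc[(a, b)] += 1
    assoc[(b, a)] += 1

for _ in range(5):
    associate("doctor", "nurse")
associate("doctor", "bread")

activation = defaultdict(float)

def prime(concept):
    total = sum(c for (a, _), c in assoc.items() if a == concept)
    for (a, b), count in assoc.items():
        if a == concept:
            activation[b] += count / total  # probability-like weight

prime("doctor")
print(activation["nurse"] > activation["bread"])  # True: 'nurse' is primed
```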

So the difference between the two is that associations and state-based logic can be acausal and mostly probabilistic, which is closer to emotion and intuition, whereas sequential logic, or deductive reasoning, is causal and more closely related to the already mentioned standard decision trees and logic that distinctly separates objects.

 

Using free association based on object qualities, flags or associations

States can be described as including objects which have qualities attached or associated to them. The nature of these qualities can be used to produce intuition or association in that acausal manner.
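For example, association could flow through overlapping quality tags rather than through any causal link, something like this (the objects and tags are invented):

```python
# Free association through shared qualities: two objects become associated
# simply because their quality tags overlap, not because of any causal link.
objects = {
    "apple": {"red", "round", "food"},
    "ball":  {"red", "round"},
    "bread": {"food"},
}

def associates(name):
    tags = objects[name]
    # score every other object by how many quality tags it shares
    return sorted(((len(tags & q), o) for o, q in objects.items() if o != name),
                  reverse=True)

print(associates("apple"))  # [(2, 'ball'), (1, 'bread')]
```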


Gauging importance based on qualitative definitions for the flagged properties or qualities and guiding selective search for associations

To draw a parallel with the human mind, you can have objects tagged with, for example, "food". Then, when food is important, between neurons there can be links modulated by some communication pattern of neurotransmitters (which I don't know too much about), and this can be used to exert control over intuition or association based on states. In other words, states can control the activation of axons, or something like that.
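Continuing the sketch above, the current importance of each quality could gate which associations win out (the numbers stand in for whatever the neurotransmitter-level mechanism really does):

```python
# Gating the same association search by how important each quality currently
# is -- a very loose stand-in for neuromodulation controlling which
# activations get through. When "food" matters, 'bread' beats 'ball'.
objects = {
    "apple": {"red", "round", "food"},
    "ball":  {"red", "round"},
    "bread": {"food"},
}
importance = {"food": 3.0, "red": 0.5, "round": 0.5}  # current drives

def associates(name):
    tags = objects[name]
    score = lambda q: sum(importance.get(t, 0) for t in tags & q)
    return sorted(((score(q), o) for o, q in objects.items() if o != name),
                  reverse=True)

print(associates("apple"))  # [(3.0, 'bread'), (1.0, 'ball')]
```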

* end of added part

 

 

I could continue this weird list somewhat further, but I'm not sure if it would be useful, since it would be more of these kinds of short notes, which might all be equally useless. So instead of continuing...

What do you think about this so far? Does this make any sense?

Comments
gjm (10y):

(It's clear that you have misgivings along the following lines yourself.)

I don't think I understand the purpose of this. By your own assessment

I am not an expert on computers, programming or AIs, nor even psychology or the human mind in general

so what is the value for you (or for others) in putting together a proposal for how to organize the psyche of an AI? I mean, wouldn't you almost certainly get better results by first becoming something of an expert in at least one (I'd have thought at least two) of those areas, and then putting together your proposal?

I can think of one possibly-good reason. Perhaps there is some insight you think you have that you don't expect others -- even (especially?) experts in those fields -- to have had, and you think there's value in getting that insight out there for others to think about, even before you take the time to develop expertise of your own. But surely you can't expect that many of the fifteen separate points in your proposal embody such insights -- that would be a really remarkable level of overconfidence.

Therefore:

I suggest that you attempt to distil from your proposal a small kernel of ideas that you're prepared to defend as both original and valuable despite your admitted lack of expertise, and actually defend them as such.

Because otherwise what you're doing is saying "I don't really know much about this field, which is known to be very difficult. Here is my proposal for how to solve its central very difficult problem." and it's hard to see why a proposal presented in those terms is worth paying attention to.

(For the avoidance of doubt: I am not saying that you need some sort of AI credentials before having useful ideas. For all I know, you may indeed have some really valuable insights. Evidence and argument outrank credentials. But if you offer neither credentials nor evidence and arguments, what are we to do?)

[anonymous] (10y):

Essentially it's about arguments from authority, which have a valid basis in the authority itself. However, I don't think that I should need extremely special insights to be able to write about a subject. I'm not actually expecting any of my insights to be meaningful; just considering the probabilities, expecting that would be fairly bad calibration. But then how would I write about a subject for which I have no basis of authority?

For example, let's consider the same issue from the perspective of democracy, since it's easier. The average person probably should not consider themselves a proper representative, but does that mean such people should not discuss politics? Consider the consequences of a common social norm where unqualified people cannot talk about matters of interest outside their expertise: it lowers the outcomes of the democracy, as a social process would not be involved. I suppose there's an important difference, though, which is that average people vote in democracies, whereas sciences are not based on public votes.

Isn't it (or wouldn't it be) a shame if so-called average people, which I basically am too, were not able to participate in conversations where they have no expertise?

But this is all around the subject, since your arguments are not really directed at the substance of what I wrote, but rather at how I chose to present it. That seems logical, as I did put quite a bit of effort into landing somewhere between "at least trying to be humble" and writing things as I see them. In any case it's not like you are to blame for that, as the substance itself was too general and vague to provide for an actual argument. That is due to my personality: I am a theoretical person and think in terms of abstractions, though that's not to say I'm good at it. This is still relevant to your reply, as I can't actually take on any of these points from the perspective of debating the substance of the thread.

In any case I think you've made pretty good points, so thanks for replying.

[This comment is no longer endorsed by its author]

Read everything by Ben Goertzel on OpenCog and related architectures. I am not saying that he is right about everything, but he has written more cogently along these lines than most.