I think a semantic check is in order. Intuition can be defined as an immediate cognition of a thought that is not inferred from a previous cognition of the same thought. This definition allows for prior learning to impact intuition. Trained mathematicians will make intuitive inferences based on their training; these can be called breakthroughs when they are correct. It would be highly improbable for an untrained person to have the same intuition or accurate intuitive thoughts about advanced math.

Intuition can also be defined as untaught, non-inferential, pure knowledge. This would seem to invalidate the example above, since the mathematician had a cognition that relied on inferences from prior teachings. Agreeing on which definition this thread is using will help clarify the comments.

More specifically, epistemology is a formal field of philosophy. Epistemologists study the interaction of knowledge with truth and belief: basically, what we know and how we know it. They work to identify the source and scope of knowledge. An example of an epistemological statement goes something like this: I know that I know how to program because professors who teach programming (authoritative figures) told me so by giving me passing grades in their classes.

Quite right about attachment. It may take quite a few exceptions before it is no longer an exception, particularly if the original concept is regularly reinforced by peers or other sources. I would expect exceptions to get a bit more weight because they are novel, but not so much as to offset higher levels of reinforcement.

While the Freudian description is accurate relative to sources, I struggle to order them. I believe it is an accumulated weighting that makes one thought dominate another. We are indeed born with a great deal of innate behavioral weighting. As we learn, we strengthen some paths and create new paths for new concepts. The original behaviors (fight or flight, etc.) remain.

Based on this known process, I conjecture that experiences have an effect on the weighting of concepts. This weighting sub-utility is a determining factor in how much impact a concept has on our actions. When we discover fire burns our skin, we don't need to repeat the experience very often to weigh fire heavily as something we don't want touching our skin.

If we constantly hear, "blonde people are dumb," each repetition increases the weight of this concept. Upon encountering an intelligent blonde named Sandy, the weighting of the concept is decreased and we create a new pattern for "Sandy is intelligent" that attaches to "Sandy is a person" and "Sandy is blonde." If we encounter Sandy frequently, or observe many intelligent blonde people, the weighting of the "blonde people are dumb" concept is continually reduced.
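
To make the weighting idea concrete, here is a minimal sketch of reinforcement and counterexample updates. The class name, step sizes, and concept strings are all illustrative assumptions of mine, not a claim about how brains implement this.

```python
# Hypothetical sketch: repeated exposure strengthens a concept's weight,
# while counterexamples reduce it. Names and constants are illustrative only.

class ConceptStore:
    def __init__(self, reinforce_step=0.1, counter_step=0.15):
        self.weights = {}  # concept string -> current weight
        self.reinforce_step = reinforce_step
        self.counter_step = counter_step

    def reinforce(self, concept):
        """Each repetition of a concept increases its weight."""
        self.weights[concept] = self.weights.get(concept, 0.0) + self.reinforce_step

    def counterexample(self, concept):
        """Each conflicting observation reduces the weight, floored at zero."""
        w = self.weights.get(concept, 0.0) - self.counter_step
        self.weights[concept] = max(w, 0.0)

store = ConceptStore()
for _ in range(10):
    store.reinforce("blonde people are dumb")       # repeated hearsay
for _ in range(5):
    store.counterexample("blonde people are dumb")  # encountering Sandy
    store.reinforce("Sandy is intelligent")
print(store.weights)
```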

Coincidentally, I believe this is the motivation, even if a subconscious one, behind religious leaders urging their followers to attend services regularly. The service maintains or increases weighting on the set of religious concepts, as well as related concepts such as peer pressure, offsetting any weighting loss between services. The depth of conviction to a religion can potentially be correlated with the frequency of religious events. But I digress.

Eventually, the impact of the concept "blonde people are dumb" on decisions becomes insignificant. During this time, each encounter strengthens the Sandy pattern or creates new patterns for blondes. At some level of weighting for the "intelligent" and "blonde" concepts associated with people, our brain economizes by creating a "blonde people are intelligent" concept. Variations of this basic model are generally how beliefs are created and the weights of beliefs are adjusted.
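
That consolidation step could be sketched the same way; the threshold and names below are mine, purely for illustration.

```python
# Illustrative only: once two associated concepts are both weighted heavily
# enough, economize by creating (or strengthening) a merged concept.
CONSOLIDATION_THRESHOLD = 1.0

def maybe_consolidate(weights, concept_a, concept_b, merged, boost=0.1):
    """Create or strengthen a combined concept when both parts are strong."""
    if (weights.get(concept_a, 0.0) >= CONSOLIDATION_THRESHOLD
            and weights.get(concept_b, 0.0) >= CONSOLIDATION_THRESHOLD):
        weights[merged] = weights.get(merged, 0.0) + boost

weights = {"intelligent person": 1.2, "blonde person": 1.4}
maybe_consolidate(weights, "intelligent person", "blonde person",
                  "blonde people are intelligent")
print(weights)
```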

As with fire, we are extremely averse to incongruity. We have a fundamental drive to integrate our experiences into a cohesive continuum. Something akin to adrenaline is released when we encounter incongruity, driving us to find a way to resolve the conflicting concepts. If we can't find a factual explanation, we rationalize one in order to return to balanced thoughts.

When we choose one thing over others, we begin by considering the most heavily weighted concepts invoked by the given situation. We work down the weightings until we reach a point where a single concept outweighs all other competing concepts by an acceptable amount.

In some situations, we don't have to make many comparisons due to the invocation of very heavily weighted concepts, such as when a car is speeding towards us while we're standing in the roadway. In other situations, we make numerous comparisons that yield no clear dominant concept and can only make a decision after expanding our choice of concepts.
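
A minimal sketch of that selection rule, assuming a simple dominance margin (all names and numbers here are illustrative):

```python
# Illustrative sketch: scan the concepts invoked by the situation in
# descending weight order; act as soon as one outweighs its nearest
# competitor by a margin, otherwise signal that more concepts are needed.
def choose(invoked_weights, margin=0.5):
    """invoked_weights: dict mapping concept -> weight for this situation."""
    if not invoked_weights:
        return None
    ranked = sorted(invoked_weights.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]  # clear dominant concept
    return None              # no clear winner: expand the set of concepts

print(choose({"avoid speeding car": 9.0, "finish crossing": 2.0}))  # dominant
print(choose({"save money": 1.1, "buy the gadget": 1.0}))           # None
```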

This model is consistent with human behavior. It helps explain why people do what they do. It is important to realize that this model imposes no division of concepts into classes. It uses a fluid ordering system. It has transient terminal goals based on perceived situational considerations. Most importantly, it bounds the recursion requirements. As the situation changes, the set of applicable concepts changes, resetting the core algorithm.

Kaj,

Thank you. I had noticed that as well. It seems the LW group is focused on a much longer time horizon.

In every human endeavor, humans shape their reality, either physically or mentally. They go to schools where their type of people go and live in neighborhoods where they feel comfortable, based on a variety of commonalities. When their circumstances change, for better or worse, they readjust their environment to fit the new circumstances.

The human condition is inherently vulnerable to wireheading. Even a brief review of history yields many examples of people who attain power and money and subsequently change their values to suit their own desires. The more influential and wealthy they become, enabling them to exist unfettered, the more they change their value system.

There are also people who simply isolate themselves and become increasingly absorbed in their own value system. Some amount of money is needed to do this, but not a great amount. The human brain is also very good at compartmentalizing value sets such that they can operate by two (or more) radically different value systems.

The challenge in AI is to create an intelligence that is not like ours and not prone to human weaknesses. We should not attempt to replicate human thinking; we need to build something better. Our direction should be to create an intelligence that includes the desirable components and leaves out the undesirable aspects.

Well, I'm a sailor and raising the waterline is a bad thing. You're underwater when the waterline gets too high.

Thanks for the feedback. I agree on the titling; I started with the title on the desired papers list, so wanted some connection with that. I wasn't sure if there was some distinction I was missing, so proceeded with this approach.

I know it is controversial to say superintelligence will appear quickly. Here again, I wanted some tie to the title. Predicting AI is a very complex problem; to theorize about anything beyond that would distract from the core of the paper.

While even more controversial, my belief is that the first AGI will be a superintelligence in its own right. An AGI will not have one pair of eyes, but as many as it needs. It will not have just one set of ears; it will immediately be able to listen to many things at once. The most significant aspect is that an AGI will immediately be able to hold thousands of concepts in the equivalent of our short-term memory, as opposed to the typical 7 or so for humans. This alone will enable it to comprehend immensely complex problems.

Clearly, we don't know how AGI will be implemented or if this type of limit can be imposed on the architecture. I believe an AGI will draw its primary power from data access and logic (i.e., the concurrent concept slots). Bounding an AGI to an approximation of human reasoning is an important step.

This is a major aspect of friendly AI because one of the likely ways to ensure a safe AI is to find a means to purposely limit the number of concurrent concept slots to 7. Refining an AGI of this power into something friendly to humans could be possible before the limit is removed, by us or it.
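
One way to picture such a limit, purely as a sketch and not a claim about any actual AGI architecture, is a fixed-size working memory that evicts the oldest concept when a new one is loaded:

```python
from collections import deque

# Purely illustrative: a working memory capped at 7 concurrent concept
# slots; loading an eighth concept evicts the oldest one.
WORKING_MEMORY_SLOTS = 7
working_memory = deque(maxlen=WORKING_MEMORY_SLOTS)

for concept in ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8"]:
    working_memory.append(concept)  # "c1" is evicted when "c8" arrives

print(list(working_memory))         # ['c2', 'c3', 'c4', 'c5', 'c6', 'c7', 'c8']
```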

I just wanted to express some thoughts here. I do not intend to cover this in the paper, as it is a topic for several focused papers to explore.

I struggle to conceive of wanting to want, or decision making in general, as a tiered model. There are a great many factors that modify the ordering and intensity of utility functions. When human neurons fire, they trigger multiple concurrent paths leading to a set of utility functions. Not all of these utilities are logic-related.

I posit that our ability to process and reason is due to this patterning ability, and that any model approximating human intelligence will need to be more complex than a simple linear layer model. The balance of numerous interacting utilities combines to inform decision making. A multiobjective optimization model, such as PIBEA, is required.
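
I won't reproduce PIBEA here, but as a generic illustration of the multiobjective framing, a Pareto-dominance filter over candidate decisions might look like this (the option names and scores are made up):

```python
# Generic multiobjective sketch (not PIBEA itself): keep only candidates
# that no other candidate Pareto-dominates; higher scores are better.
def dominates(a, b):
    """a dominates b if a is at least as good everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: dict mapping option -> tuple of objective scores."""
    return {name: scores for name, scores in candidates.items()
            if not any(dominates(other, scores)
                       for other_name, other in candidates.items()
                       if other_name != name)}

options = {"A": (0.9, 0.2), "B": (0.5, 0.8), "C": (0.4, 0.1)}
print(pareto_front(options))  # C is dominated by both A and B; A and B remain
```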

I'm new to LW, so I can't open threads just yet. I'm hoping to find some discussions around evolutionary models and solution sets relative to rational decision processing.

Granted. My point is the function needs to comprehend these factors to come to a more informed decision. Simply comparing two values is inadequate. Some shading and weighting of the values is required, however subjective that may be. Devising a method to assess the amount of subjectivity would be an interesting discussion. Considering the composition of the value is the enlightening bit.

I also posit that the overall algorithm should incorporate a suite of algorithms, with some trigger function to select among them. One of our skills is changing modes to suit a given situation. How sub-utilities impact the value(s) served up to the overall utility will vary with situational inputs.

The overall utility function needs to work with a collection of values and project each value combination forward in time, and/or back through history, to determine the best selection. The complexity of the process demands more sophisticated means. Holding the discussion at the current level feels to me like discussing multiplication when faced with a calculus problem.
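
As a hedged sketch of what that forward projection could look like, here is a simple discounted sum over estimated future payoffs; the trajectory model and discount factor are my own assumptions, not a proposal for the final mechanism:

```python
# Illustrative sketch: score each candidate value combination by projecting
# its estimated payoff over future steps with a discount factor, then pick
# the combination with the highest projected score.
def projected_score(trajectory, discount=0.9):
    """trajectory: estimated payoff at each future time step."""
    return sum(payoff * discount**t for t, payoff in enumerate(trajectory))

candidates = {
    "combination_1": [1.0, 0.8, 0.6],  # strong now, fading later
    "combination_2": [0.2, 0.9, 1.5],  # weak now, stronger later
}
best = max(candidates, key=lambda name: projected_score(candidates[name]))
print(best, {n: round(projected_score(tr), 3) for n, tr in candidates.items()})
```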
