Monkeymind

Monkeymind's Comments

The scourge of perverse-mindedness

"What do you mean?"

I may have wrongly concluded (because of your name) that you hold the same view as the other plasma cosmologists (the Electric Universe folks) I have been talking with over the last couple of weeks. Their view is that reality exists at a single level, but 'observable reality' (the multi-level model) is the interface between the brain and reality. Consequently, all their discussions are about the interface (phenomena).

If so, then understanding the difference between an object and a concept might help one come up with ways to make reductionism cool for 'normal' folks. Math is an abstract, dynamic language that may be good for describing (predicting) phenomena like rainbows (concepts), but raindrops are static objects and are better understood by illustration.

While the math concepts make the rainbow all the more beautiful and wonderful for you, this may not be the case for normal folks. I, for one, have a better "attitude" about so-called knowledge when it makes sense. When I understand the objects involved, the phenomenon is naturally more fascinating.

But as you suggested, I may be totally misunderstanding the Scourge of Perverse-mindedness.

BTW: The negative thumbs are not mine, but most likely your peers trying to tell you not to talk to me. If you doubt this, check my history. Take care!

Configurations and Amplitude

So what's up with that? I put a lot of work into writing those posts.

Is this the sort of thing done with approval of the site owner?

They were well-thought-out and reasoned posts. The majority were very civil and violated no posted rules. In fact, there aren't any posted rules that I am aware of. The fact that my posts are annoying to some folks is no reason to delete them. No one has to read anything.

I just don't understand the reasoning there, or here:

"A specific suggestion I have is to establish a community norm of downvoting those participating in hopeless conversations, even if their contributions are high-quality."

Holden's Objection 1: Friendliness is dangerous

If the evolutionary process results in either convergence, divergence, or extinction, and most often results in extinction, what reason do I have to think that this 23rd emerging complex homo will not go the way of extinction also? Are we throwing all our hope toward superintelligence as our salvation?

Holden's Objection 1: Friendliness is dangerous

Humans have a values hierarchy. Trouble is, most do not even know what theirs is. IOW, for me honesty is one of the most important values to have. Also, the sanctity (and protection) of life is very high on the list. I would lie in a second to save my son's life. Some choices like that are no-brainers; however, few people know all the values that they live by, let alone their hierarchy. Often humans only discover what these values are as they find themselves in various situations.

Just wondering... has anyone compiled a list of these values, morals, and ethics, and applied them to various real-life situations to study the possible 'choices' an AI has and the potential outcomes under differing hierarchies?
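To make the question concrete, here is a purely illustrative toy sketch (not from any existing project): if a value hierarchy is treated as a strict priority ordering, an agent's choice in a situation can be modeled as a lexicographic comparison, so two agents with the same values but different orderings pick differently. The scenario, scores, and function name are all invented for illustration.

```python
# Toy model: rank the options available in a situation under a strict
# (lexicographic) value hierarchy, so a higher-priority value always
# outweighs any lower-priority one.

def rank_options(options, hierarchy):
    """options: {option_name: {value_name: score}}.
    hierarchy: value names, most important first.
    Returns option names sorted best-first."""
    def key(name):
        scores = options[name]
        # Tuples compare element by element, so the first
        # (most important) value dominates the comparison.
        return tuple(scores.get(v, 0) for v in hierarchy)
    return sorted(options, key=key, reverse=True)

# A "lie to save a life" situation, scored per value (made-up numbers).
options = {
    "tell the truth": {"honesty": 1, "protect_life": 0},
    "lie":            {"honesty": 0, "protect_life": 1},
}

# Same values, different hierarchies, different choices:
print(rank_options(options, ["protect_life", "honesty"])[0])  # lie
print(rank_options(options, ["honesty", "protect_life"])[0])  # tell the truth
```

Of course, real human values are rarely this cleanly ordered; the point of the sketch is only that the *hierarchy*, not the value list, determines the outcome.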

ADDED: Sometimes humans know the right thing but choose to do something else. Isn't that because of emotion? If so, what part does emotion play in superintelligence?

Configurations and Amplitude

Not a yes-or-no question, D. (Like "Have you stopped beating your wife?") We don't test a hypothesis. It is an assumption, or set of assumptions, that we accept or not based upon its rationality.

Thoughts on the Singularity Institute (SI)

I have long complained about SI's narrow and obsessive focus on the "utility function" aspect of AI -- simply put, SI assumes that future superintelligent systems will be driven by certain classes of mechanism that are still only theoretical, and which are very likely to be superseded by other kinds of mechanism that have very different properties. Even worse, the "utility function" mechanism favored by SI is quite likely to be so unstable that it will never allow an AI to achieve any kind of human-level intelligence, never mind the kind of superintelligence that would be threatening.

I often observe very intelligent folks acting irrationally. I suspect superintelligent AIs might act superirrationally. Perhaps the focus should be on creating rational AIs first. Any superintelligent being would have to be first and foremost superrational, or we are in for a world of trouble. Actually, in my experience, rationality trumps intelligence every time.

Thoughts on the Singularity Institute (SI)

If you are concerned about intellectual property rights, by all means have a confidentiality agreement signed before revealing any proprietary information. Any reasonable person would not have a problem signing such an agreement.

Expect some skepticism until a working prototype is available.

Good luck with your project!

GAZP vs. GLUT

@TheOtherDave:

Anotherblackhat said:

How can you be 100% confident that a look up table has zero consciousness when you don't even know for sure what consciousness is?

In response, Monkeymind said:

Why not just define consciousness in a rational, unambiguous, non-contradictory way and then use it consistently throughout?

Not being 100% confident about what consciousness is seemed to be a concern to anotherblackhat. Defining consciousness would have removed that concern.

No need to "read between the lines," as it was a straightforward question. I really didn't understand why the definition of consciousness wasn't laid out in advance of the thought experiment.

Defining terms allows one to communicate more effectively with others which is really important in any conversation but essential in presenting a hypothesis.

I was informed by Dlthomas that conceptspace is different from thingspace, so I think I get the gist of it now.

However, my point was, and is, that the theorist's definitions are crucial to the hypothesis, and hypotheses don't care at all about goals, preferences, and values. Hypotheses simply illustrate the actors, define the terms in the script, and set the stage for the first act. Then we can move on to the theory and hopefully form a conclusion.

No need to apologize, it is easy to misunderstand me, as I am not very articulate to begin with, and as usual, I don't understand what I know about it!

ADDED: And I still need to learn how to narrow the inferential gap!

GAZP vs. GLUT

Thanks, TheOtherDave!

The point of defining one's terms is to avoid confusion in the first place. It doesn't matter what anyone else thinks consciousness means. Only the meaning as defined in the theorist's hypothesis is important at this stage of the scientific method.

"there's a good chance that I've lost sight of my goal"

That's something I don't understand (with instrumental rationality: "The art of choosing actions that steer the future toward outcomes ranked higher in your preferences").

This is fine when a person is making personal choices about how to act, but when it comes to knowledge (and especially the scientific method)... it seems like ultimately one would be interested in increasing one's understanding regardless of an individual's goals, preferences, or values.

Oh well, at least we aren't using Weber's affectual rationality, involving feelings, here.