DilGreen

Comments

Words as Mental Paintbrush Handles

It's been a few years, but the answer is now - yes. Here's a link to a New Scientist article from earlier this year; I'm afraid it's behind a paywall: https://www.newscientist.com/article/2083706-my-minds-eye-is-blind-so-whats-going-on-in-my-brain/ The article documents recent experiments on, and thinking about, people who are poor at forming mental pictures (as opposed to manipulating concepts) or incapable of it altogether - about 2 to 3% of people report this. Key quote:

To find out how MX’s brain worked, Zeman put him into an MRI scanner and showed him pictures of people he was likely to recognise, including former UK prime minister Tony Blair. The visual areas towards the back of his brain lit up in distinctive patterns as expected. However, when MX was asked to picture Blair’s face in his mind’s eye, those areas were silent. In other words, the visual circuits worked when they had a signal from the outside world, but MX couldn’t switch them on at will (Neuropsychologia, vol 48, p 145).

Test yourself here: http://socrates.berkeley.edu/~kihlstrm/MarksVVIQ.htm

Excluding the Supernatural

No sane, rational, and sufficiently-educated person puts forward arguments incompatible with science.

The problem with this statement is that it puts 99.999% of everyone 'beyond the pale'. It disallows meaningful conversations about things which have huge functional impacts on all humans, but about which science has little of use or coherence to say. It cripples conversation about things which our current science deems impossible, without allowing for the certainty that key aspects of what is currently accepted science will be superseded in the future.

In other words, it is an example of a reasonable-sounding statement that is almost perfectly useless. You have argued yourself into a box.

I would suggest that no sane, rational, and sufficiently-educated person ascribes zero probability to irrational-seeming propositions.

Superexponential Conceptspace, and Simple Words

Infants do not possess many inborn categories, if they have any at all. They perceive the world as directly as their senses permit. But they do not remain this way for long.

This seems to be objectively untrue. Many ingenious experiments with very young children forcefully suggest a wide range of inborn categories, including faces. There is even evidence that male and female infants pay attention to different categories long before they can talk.

Further, there is strong evidence that children have inborn expectations about relationships between sensory inputs. The physics of the eye ensures that images focussed on the retina are upside-down, and experiment suggests that, for a few days, this is how the world is perceived. But babies learn to invert the image so that it tallies with reality. This happens automatically, and within days - presumably through some hard-wired expectation of the interrelation between senses, e.g. proprioception and sight.

Planning Fallacy

As an architect and sometime builder, and as an excellent procrastinator, I heartily concur with this comment.

The range of biases and psychological and 'structural' factors at work is wide. Here are a few:

  • 'tactical optimism': David Bohm's term for the way in which humans overcome the (so far) inescapable assessment that 'in the long run, we're all dead'. Specifically, within the building industry - rife with non-optimal ingrained conditions - you wouldn't come to work if you weren't an optimist. Builders who cease to have an optimistic outlook go and find other things to do.

  • maintaining flexibility has benefits: non-trivial projects have hidden detail. It often happens that spending longer working around the project - at the expense of straight-ahead progress - leads to higher quality at the end, as delayed completion allows a more elegant and efficient response to inherent but unforeseen problems.

  • self-application of pressure: as someone who tends to procrastinate, I know that I sometimes use ambitious deadlines in an attempt to manage myself - especially if I can advertise that deadline - as in the study.

  • deadline/sanction fatigue: if the loss incurred for missing a deadline is small, or is purely psychological, then the 'weight' of time pressure is diminished with each failure.

I'm going to stop now, before I lose the will to live.

Magical Categories

So many of the comments here seem designed to illustrate how extremely difficult it is, even for intelligent humans interested in rationality and trying hard to participate usefully in a conversation about hard-edged situations of perceived non-trivial import, to avoid fairly simplistic anthropomorphisms of one kind or another.

Saying of a supposed super-intelligent AI - one that works by somehow paralleling the 'might as well be magic' bits of intelligence, for which we currently have at best a crude assembly of speculative guesses - any version of "of course, it would do X" seems, well, foolish.

Magical Categories

Whether the AI finds the abstraction of human happiness pertinent, and whether it considers increasing it to be worth sacrificing other possible benefits for, are unpredictable - unless we have succeeded in achieving EY's goal of pre-destining the AI to be Friendly.

Magical Categories

Surely the discussion is not about whether an AI will be able to form sophisticated abstractions - if that is of interest to it, then presumably it will be able to.

But the concern discussed here is how to determine beforehand that those abstractions will be formed in a context characterised here as Friendly AI. The concern is to pre-ordain that context before the AI achieves superintelligence.

Thus the limitations of communicating desirable concepts apply.

Magical Categories

A utility function measured in dollars seems, fairly unambiguously, to lead to decisions that are non-optimal for humans, unless the AI has a sophisticated understanding of what dollars are.

Dollars mean something to humans because they are tokens in a vast, partly consensual and partly reified game. Economics, which is our approach to developing dollar-maximising strategies, is non-trivial.

Training an AI to understand dollars as something more than data points would be as non-trivial as training an AI to faultlessly assess human happiness.
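To make the point concrete, here is a minimal sketch (my own illustration - the action names, dollar amounts, and welfare numbers are all invented): an agent whose utility function counts only dollars will happily pick actions that are bad for humans, because nothing in the function represents them.

```python
# Toy sketch: an agent whose utility is "dollars held" and nothing else.
# The actions and numbers below are invented for illustration.

def naive_utility(state):
    """Utility measured purely in dollars; human welfare does not appear."""
    return state["dollars"]

# Each hypothetical action changes the agent's dollars and human welfare.
actions = {
    "sell_useful_product": {"dollars": 100, "human_welfare": +10},
    "corner_water_supply": {"dollars": 500, "human_welfare": -50},
}

def choose(state, actions):
    # Pick the action maximising dollar-denominated utility.
    # Human welfare is simply invisible to this agent.
    return max(
        actions,
        key=lambda a: naive_utility(
            {"dollars": state["dollars"] + actions[a]["dollars"]}
        ),
    )

print(choose({"dollars": 0}, actions))  # -> corner_water_supply
```

The point of the sketch is only that the failure is structural: no amount of cleverness in the maximiser fixes a utility function that never mentions what we actually care about.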

Surprised by Brains

Surely this is not an example of search-space compression, but an example of local islands of fitness within the space? Evolution does not 'make observations', or proceed on the basis of abstractions.

An even number of legs 'works best' precisely for the creatures who have evolved in the curtailed (as opposed to compressed) practical search space of a local maximum. This is not proof that an even number of legs works best, period.

Once bilateral symmetry has evolved, the journey from bilateralism to any other viable body plan is simply too difficult to traverse. Nature DOES search the fringes of the space of centipedes with an odd number of legs - all the time.

http://www.wired.com/magazine/2010/04/pl_arts_mutantbugs/

That space just turns out to be inhospitable, time and time again. One day, under different conditions, it might not.

BTW, I am not claiming, either, that it is untrue that an even number of legs works best - simply that the evolution of creatures with even numbers of legs, and any experimental study showing that even numbers of legs are optimal, are two different things. Mutually reinforcing, but distinct.
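The 'local islands of fitness' idea can be sketched in a few lines (my own toy example - the landscape shape and peak positions are invented): a greedy search that only takes single steps settles on whichever peak is nearby, which says nothing about what is best over the whole space.

```python
# Toy sketch: greedy local search on an invented fitness landscape with two
# peaks separated by a valley - a local peak at x=2 (height 3) and a taller,
# steeper global peak at x=8 (height 10).

def fitness(x):
    return max(3 - abs(x - 2), 10 - 4 * abs(x - 8), 0)

def hill_climb(x, steps=100):
    for _ in range(steps):
        # Only neighbouring points are reachable - no jumps across the valley.
        best = max((x - 1, x, x + 1), key=fitness)
        if fitness(best) <= fitness(x):
            break  # no uphill neighbour: stuck on whatever peak is local
        x = best
    return x

print(hill_climb(0))  # settles on the local peak at x = 2
print(hill_climb(6))  # starting past the valley, reaches the global peak at x = 8
```

Which peak the search reports as 'best' is an accident of the starting point, not a fact about the landscape - the analogue of even-leggedness being a fact about the neighbourhood of bilateral body plans.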

Surprised by Brains

This comment crystallised for me the weirdness of this whole debate (I'm not picking sides, or even imagining that I have the capacity to do so intelligently).

In the spirit of the originating post, imagine two worms discussing the likely characteristics of intelligent life, some time before it appears (I'm using worms as early creatures with brains, allowing for the possibility that intelligence is a continuum - that worms are as far from humans as humans are from some imagined AI that has foomed for a day or two):

Worm1: I tell you it's really important to consider the possibility that these "intelligent beings" might want all the dead leaf matter for themselves, and wriggle much faster than us, with better sensory equipment.....

Worm2: But why can't you see that, as super intelligent beings, they will understand the cycle of life, from dead leaves, to humus, to plants and back again. It is hard to imagine that they won't understand that disrupting this flow will be sub-optimal....

I cannot imagine how, should effective AI come into existence, these debates will not seem as quaint as the 'how many angels can dance on the head of a pin' debates that we fondly ridicule.

The problem is that the same people who were debating such ridiculous notions were also laying the foundation stones of Western philosophical thinking, preserving and transmitting classical texts, and developing methodologies that would eventually underpin the scientific method - and they didn't distinguish between these activities!
