More generally, it seems we should avoid doing anything while distracted.
It makes sense that distraction would mess up our learning, since it makes attributing cause and consequence confusing.
But it may also mess up replaying our learned skills: distraction is a big cause of accidents.
Advertisement.
AKA parasitic manipulation so normalized that it invades every medium and pollutes our minds: it hogs our attention, numbs our moral sense of honesty, and prevents a factual information system from forming.
Trivial inconveniences are alive and kicking in digital piracy, where one always has to jump through hoops such as obscure services, software, settings, or procedures.
I suspect this is to fend off the least motivated users: numerous enough to attract attention, and the most likely to expose the den in the wrong place.
I suspect it is a form of subtle "ancestral tribe police".
Throwing trivial inconveniences at offenders is a good way to hint they are out of line, avoiding:
Anyway, if the first goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?
Even if there are competing AIs, if they are good enough they would probably agree on what is worth trying next, so there would be little or no conflict.
They would focus on transmitting what they want to be, not what they currently are.
...come to think of it, once genetic engineering has advanced enough, why would humans not do the same?
In fact the winged males look far more like females than they look like wingless males.
All the "3rd sexes" I can think of are like this: males in female form, for a direct reproductive advantage.
Not a big departure from 2 sexes.
Eusocial insects might be more interesting.
N-rays deserve an honorable mention.
Blondlot was very scientific (in appearance), and was followed by some scientists (of the same nationality).
Other good candidates today would be: nanotech, the space elevator, anything that sounds too futuristic.
Yes, it's going to happen someday; no, it won't be like we imagine.
EY uses Bayes to frame reality ever closer, not just to answer abstract homework on paper and call it a day.
If you solve a given problem without spotting that it is ill-formed, your answer is correct but not practical.
I would guess that thinking "frequency" implies it happens, while "probability" might trigger the "But there is still a chance, right?" rationalization.
Others:
We know it's bad, yet we keep sweeping valuable knowledge under the rug just because it's embarrassing. Confirmation bias, anyone?
One consequence is that researchers are somewhat expected to know what they will find before they even begin, as a weak insurance on productivity. This discourages venturing into the unknown.
This is a design-stance explanation...
I worded it poorly, but evolution does produce such apparent results.
The Hard Problem of Consciousness
Is way out of my league; I did not pretend to solve it: "It's a far cry from a proper explanation".
But pondering it led to another find : "Feeling conscious" looks like an incentive to better model oneself, by thinking oneself special, as having something to preserve... which looks a lot like the soul.
A simple, plausible explanation that dissolves a mystery works for me! (until something better is offered)...
It would be stupid and dangerous to deliberately build a "naughty AI" that tests, by actions, its social boundaries, and has to be spanked. Just have the AI ask!
Pitfall: we tend to tell embellished, disguised, misguided, or sometimes plain wrong versions of reality.
An AI would have to see through that to make sense of it.
From the inside we can't judge the relative speed or power, but we can judge the efficiency.
And it's abysmal: the jumps from quarks to particles to atoms to molecules to cells to animals to stars to galaxies each throw orders of magnitude around like it's nothing.
What could this possibly tell us?
Otherwise there could be an abstract mathematical object structurally identical to this world, but with no experiences in it, because it doesn't exist. And papers that philosophers wrote about subjectivity wouldn't prove they were conscious, because the papers would also 'not exist'.
didn't you just solve the mystery of the First Cause?
My take:
A universe is not just math; it also needs processing to run.
Existence is not in the software or the processor, but in the processing.
So long as that universe is not run/simulated, its philosophers do not exist, and what they would write is unknown.
Okay. Q: Why do I think I am conscious?
A: Because I feel conscious.
Q: Why?
A: Like all feelings, it was selected by evolution to signal an important situation and trigger appropriate behavior.
Q: What situation? What behavior?
A: Modeling oneself. Paying extra attention.
Q: And how?
A: I expect a kluge befitting the blind idiot god, like detecting when proprioception matches and/or drives agent modeling, probably with feedback loops. This would reduce perception of the environment, inhibit attention-zapping, etc., leading to how consciousness feels.
It's a far...
It can do what the mind it is made from can. No more, no less.
How about: the logic of a system applies only within that system?
Variants of this are common in all sorts of logical proofs, and it stands to reason that elements outside a system do not follow the rules of that system.
A construct assuming something out-of-universe acting in-universe just can't be consistent.
I assume that I have an error per each inference step
This.
The further a chain of reasoning reaches, the more likely it is to be wrong.
Any step could be insufficiently accurate, fail to account for unknown effects in unusual situations, or rely on things we have no means of knowing.
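A toy model of this, assuming (hypothetically) that each step fails independently with the same probability:

```python
# Toy model: if each inference step is independently correct with
# probability (1 - p_error), an n-step chain of reasoning is
# error-free only with probability (1 - p_error) ** n_steps.
def chain_reliability(p_error: float, n_steps: int) -> float:
    """Probability that an n-step argument contains no error at all."""
    return (1.0 - p_error) ** n_steps

# Even a modest 5% per-step error rate erodes long chains quickly:
for n in (1, 5, 10, 20):
    print(f"{n:2d} steps -> {chain_reliability(0.05, n):.3f}")
```

A 5% per-step error rate leaves a 20-step argument with only about a one-in-three chance of being entirely sound, which is why far-reaching reasoning deserves extra suspicion.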
Typical signs that it is drifting too far from reality:
Errors or imagination produce these easily; reality does not.
One is central to one's map, not to reality.
If so, then why ...
How on earth can humans overcome this problem?
Why, eugenics of course! The only way to change our nature.
First, selective breeding. Then genetic engineering.
Yes, there is a risk of botching it. No, we don't have a better solution.
Reminiscent of Coding Horror's "Separating Programming Sheep from Non-Programming Goats".
Ask programming students what a trivial code snippet in an unknown language does.
Those who apply a consistent mental model, right or wrong, can learn programming.
Those who answer inconsistently will fail no matter what.
I suspect they treat it as a discussion, where repeating a question means a new answer is wanted.