Bruno Mailly


For such people (most people), reality is social, rather than something you understand and control.

Reminiscent of Coding Horror's "Separating Programming Sheep from Non-Programming Goats":

Ask programming students what a trivial code snippet of an unknown language does.

  • Some form a consistent model.
    Right or wrong, these students can learn programming.
  • Others imagine a different behavior every time they encounter the same instruction.
    These will fail no matter what.
    I suspect they treat the code as a conversation, where repeating a question means a new answer is wanted.
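The snippets in that test were along these lines (a hypothetical reconstruction in Python, not the study's exact wording, which used a Java-like notation):

```python
# Predict the final values of a and b.
a = 10
b = 20
a = b

# A consistent model (assignment copies the right-hand value into
# the left-hand variable) gives a == 20 and b == 20.
print(a, b)  # → 20 20
```

Whether the student's model matches the real semantics matters less than whether they apply the same model every time they see `a = b`.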

More generally, it seems we should avoid doing anything while distracted.

It makes sense that distraction would mess up our learning, since it makes attributing cause and consequence confusing.

But it may also mess up replaying our learned skills, since distraction is a big cause of accidents.

Advertisement.

AKA parasitic manipulation, so normalized that it invades every medium and pollutes our minds: hogging our attention, numbing our moral sense of honesty, and preventing a factual information system from forming.

Trivial inconveniences are alive and kicking in digital piracy, where one always has to jump through hoops such as using obscure services, software, settings, or procedures.

I suspect it is to fend off the least motivated users: numerous enough to attract attention, and the most likely to expose the den in the wrong place.

I suspect it is a subtle form of "ancestral tribe policing".

Throwing trivial inconveniences at offenders is a good way to hint that they are out of line, while avoiding:

  • Direct confrontation, with its risk of fuss and escalation.
  • Posing as an authority, with its risk of commitment or consequences.
  • Fumbling the tribe's policy, since that kind of enforcement requires repetition and consensus.
  • Misunderstandings, since even a dim offender will eventually just give up, with no need to understand.

Anyway, if the first goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?

Even if there are competing AIs, if they are good enough they would probably agree on what is worth trying next, so there would be little or no conflict.

They would focus on transmitting what they want to be, not what they currently are.

...come to think of it, once genetic engineering has advanced enough, why wouldn't humans do the same?

In fact, the winged males look far more like females than like wingless males.

All the "third sexes" I can think of are like this: males in female form, for a direct reproductive advantage.

Not a big departure from two sexes.

Eusocial insects might be more interesting.

N Rays deserve an honorable mention.

Blondlot was very scientific (in appearance) and was followed by some scientists (of the same nationality).

Other good candidates today would be: nanotech, the space elevator, anything that sounds too futuristic.

Yes, it's going to happen some day; no, it won't be like we imagine.

EY uses Bayes to frame reality ever more closely, not just to answer abstract homework on paper and call it a day.

If you solve a given problem without spotting that it is ill-formed, your answer is correct but not practical.

I would guess that thinking "frequency" implies it happens, while "probability" might trigger the "But there is still a chance, right?" rationalization.
