Fanilo RABENJAMINA


To me it also appears that apparent agents become apparent machines if the observer has strong predictive power.

As an adult I remember my mental states as a teenager during an aha moment on simple Euclidean geometry proofs: the recall efforts, the checks, the looking, the comparisons, the size of the A4 sheet the figures had to fit on with a margin for the written proof, the colors of my pens and the choice of coloring some figures or using a pencil instead, etc.

Well, strong predictive power includes a stronger understanding of causes and outcomes; I mean that the birth, life and death of the event are well connected to well-defined internal and external values.

So, if a powerful predictor reads the mind of a teenager doing the same Euclidean geometry I did, I think the predictor would know, before and better than the teenager, the outcomes of the efforts, checks, looking, comparisons, agency, results and aha.

The problem is the internal and external values: are they set before the teenager's birth? Or are they real-time values, as in weather forecasting or the observation of a quantum state?

Maybe the internal and external values, or finite sets of possible values nearest to the event, are known sufficiently far in advance to call the teenager an apparent machine for a predictor running faster than real time.

But if the complete causal chain defines what the teenager is, then the predictor is just wit-nicknaming in real time to satisfy its hurried agenda.

So free will is an appearance from others' perspective. If the CIA mechanically gets answers from humans 1 and 2, and asks itself whether the techniques that work for n also work for n+1, then, since all sapiens are the same, we are all machines in this perspective. But if the question is "would you have worked for the CIA if you had been born in America?", then everything is undecidable, because the machines switch sides with the predictors. It is like the famous Gödel prequel "The Cretans, always liars" by Epimenides of Crete (Cretica).


"statistical unbiased" is important for a data project but is neglected by everyday's intuition because you will never meet the full dataset, the thousands of persons or the red/white solution balls.

Intuition, or "System 1" in the article, is the most important for viablility and for survival. It really feels like System 1 has all the working memory it wants and system 2 has the burden of proof.

The inevitable bias is that the understanding process of System 2 seems to always end in System 1: System 2 has to cast knowledge into System 1's sensibility, either improving intuition or failing to scale it up.

So how can we cast probabilities into System 1's decision making?

Euclid knew how: axioms, definitions and forever-true simple theorems. Mathematical quantifiers can help too; it is very easy to cast their semantics into everyday intuition.

The inevitable bias is our perception of our own intuition. Some people don't introspect; it even seems a sin to them, unnatural. It is not obvious to them that their intuition would benefit from literally "biting the apple".

I look at the sky; it is not really empty. It is blue because the atmosphere scatters away the other wavelengths. The biosphere is warm, breathable, smooth, perfumed, but may be the only nest in the whole universe. Who wants to see the sky like that, or with even more discernment?

There is a bootstrap problem when System 2 knows that System 1 should change its paradigm, because the content has to be cast for System 1, which holds the working memory.

For example, you can browse Wikipedia for Bayes' theorem; it requires reading, interpretation, and weighing the equations to sort the information into an order your intuition approves of: factual and/or sensible, meaningful.

All these steps require working memory, so you're stuck there with your own IQ, or with a very long list of intermediate steps.

After a few steps, one may select the interpretation that, with Bayesian equations, you can quantify causal hypotheses and update the weight of each cause after each event, until the set of causes becomes stable.
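That interpretation can be sketched in a few lines. This is a minimal illustration, not anyone's actual method: the causes ("rain", "sprinkler") and all the probabilities are invented for the example.

```python
# Minimal sketch of Bayesian weight updating over a set of candidate causes.
# All causes and probabilities below are invented for illustration.

def bayes_update(priors, likelihoods):
    """Return posterior weights after observing one event.

    priors:       {cause: P(cause)}
    likelihoods:  {cause: P(event | cause)}
    """
    unnormalized = {c: priors[c] * likelihoods[c] for c in priors}
    total = sum(unnormalized.values())
    return {c: w / total for c, w in unnormalized.items()}

# Two hypothetical causes for a wet lawn, starting from equal weights.
weights = {"rain": 0.5, "sprinkler": 0.5}
event_likelihood = {"rain": 0.9, "sprinkler": 0.3}  # P(lawn is wet | cause)

# Repeating the update over a stream of "wet lawn" observations makes
# the weights converge: the set of causes becomes stable.
for _ in range(5):
    weights = bayes_update(weights, event_likelihood)

print(weights)  # "rain" now carries almost all the weight
```

After five identical observations the likelihood ratio (0.9/0.3)^5 = 243 has pushed nearly all the weight onto "rain", which is the stabilization described above.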

A machine could run tons of experiments within an hour and store the stable causality chain in an unreadable format.

The inevitable bias is that you don't see any causality chain worth calculating every day, incessantly, consciously. None. Some System 1s just want to understand all the concepts involved in all correct causality chains, to see the surrounding reality with a telescope, in line with their idea of a viable Homo sapiens.

The storytelling of all the concepts involved in all correct causality chains seems worthwhile too, according to the idea that every Homo sapiens is part of progress through pedagogy.

If you want to cast probabilities into your intuitive decision making, you need decision-making archetypes that first convince you that calculating is worth the pain, because it clearly improves the archetypes' real-life efficiency.

An arbitrary example: Hacking a dating site https://www.youtube.com/watch?v=d6wG_sAdP0U

It's an arbitrary example that came to my mind. The real idea is to convince shy Steve, the librarian, that talking to the nice girl he meets every day is not as risky as he thinks.

Steve is shy: first, he knows he could embarrass the girl and he cannot predict that. Second, he thinks that talking to her only as a friend is a betrayal of his feelings, and just another risk.

So Steve has absolutely no clue about the correct causality chain for seducing the girl he likes, and he absolutely cannot start with a random weighting of the possible causes on the first try.

Steve's System 1 needs an archetype of girls' affection (I don't really know), updatable in a few retries and enlightening about his own feelings of love.

The target event is a radical, active "can you date me" with fewer risks.

The first intuition may be that a girl's affective attention is dynamic, not static. And it is not dynamic in all directions.

The second intuition may be that if he can contemplate the girl's best behavior, then he should become aware that he himself has a contemplable aspect.

Then Steve should bite the apple for his true love. Having a seduction agenda seems impure. But love is thinking very frequently about a person: the more you care, the more you remember the context.

If Steve can intuitively see the library as the scene where his love interest deploys her dynamic affective attention, then he has room to catch some recurring declaration windows.

Aware that his love is diverse, he may see different windows: one for taking care, one for curiosity, one for physical beauty, one for romance, one for enthusiasm, one for the radical dating declaration.

How can calculations really improve this archetype of love decision making? I'm not sure; at the archetype stage, quantitative and numerical are not the same thing.

When you search for declaration windows for the different aspects of your love, you compare how quiet different areas are, extrapolate moods, rate responses to a radical dating declaration, and understand how you match each other.

So once you are in a rational fall into love, the more the causes are identified, the less a numerically unbiased reasoning would harm your feelings of love.

But the numerical extreme is necessary only for someone who can meet the entire dataset. Maybe there will be a mobile application to reduce the divorce rate.

"25%" of mankind are shy.

"75%" of librarians are shy.

"1%" of salesmen are shy.

The most valued finding (an environmental milestone) is the shy salesman. The average-valued finding is the shy librarian or, as a corollary, the bookworm. We already know shy persons in our surroundings. We are searching for objects that map the territory. The bias is about reading the map: not seeing its heterogeneity or its multiple authors.
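The map-reading bias above is the classic base-rate problem: the quoted shyness rates alone cannot tell you whether a shy stranger is more likely a librarian or a salesman; you also need how common each occupation is. A minimal sketch, where only the shyness rates come from the figures above and the occupation base rates are invented for illustration:

```python
# Base-rate check on the shyness figures. The occupation base rates
# (librarians vs. salesmen per 1000 people) are invented assumptions;
# only the shyness rates come from the quoted percentages.

p_shy_given_librarian = 0.75
p_shy_given_salesman = 0.01

p_librarian = 1 / 1000   # assumed base rate
p_salesman = 20 / 1000   # assumed base rate: 20x more salesmen

# Unnormalized posteriors P(occupation | shy) via Bayes' rule.
score_librarian = p_librarian * p_shy_given_librarian  # 0.00075
score_salesman = p_salesman * p_shy_given_salesman     # 0.00020

# Despite salesmen being 20x more common in this sketch, the 75:1
# likelihood ratio still favors the librarian for a shy stranger.
# Flip the base rates enough (more than 75:1) and the verdict flips.
print(score_librarian > score_salesman)  # True
```

The point is that the answer lives in the product of likelihood and base rate, not in either number alone, which is exactly the heterogeneity of the map that intuition tends not to see.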