Cristian-Curaba
Cristian-Curaba has not written any posts yet.

Great work! I have a technical question.
My current understanding is as follows:
1. If we have even one observable variable with agreement on its observation, and for which the latent variables satisfy the exact naturality conditions, then we can build the transferability function exactly.
2. In the approximate case, if we have multiple observable variables that meet these same conditions, we can pick the specific variable (or set of variables; in the proofs you used a pair) that minimizes the errors. We would not need to use all of them.
Is this correct?
Additionally, I was wondering whether you have tried to implement the algorithm implicit in the proof for constructing the isomorphism. It seems that some effort could usefully go into an algorithm that minimizes, or at least reduces, these errors. It could one day be helpful for interpreting and aligning different ontological frameworks, e.g. mapping an alien Bayesian network onto a human one.
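For concreteness, here is a minimal sketch of the kind of construction I have in mind, assuming discrete latents and access to joint samples of $(\Lambda_A, \Lambda_B)$ pairs; the names and the simple co-occurrence heuristic below are illustrative, not taken from the post.

```python
import numpy as np
from collections import Counter, defaultdict

def estimate_translation(samples):
    """Estimate a map f: Lambda_B -> Lambda_A from joint samples.

    samples: iterable of (lambda_a, lambda_b) pairs (hashable values).
    Returns (f, residual_entropy_bits): f maps each observed lambda_b to
    its most frequent co-occurring lambda_a; the residual conditional
    entropy H(Lambda_A | Lambda_B) measures how far we are from exact
    determination.
    """
    joint = Counter(samples)                      # counts of (a, b) pairs
    total = sum(joint.values())
    by_b = defaultdict(Counter)
    for (a, b), c in joint.items():
        by_b[b][a] += c

    f = {b: counts.most_common(1)[0][0] for b, counts in by_b.items()}

    # H(Lambda_A | Lambda_B) = sum_b p(b) * H(Lambda_A | Lambda_B = b)
    h = 0.0
    for b, counts in by_b.items():
        n_b = sum(counts.values())
        p_b = n_b / total
        p_a_given_b = np.array(list(counts.values()), dtype=float) / n_b
        h -= p_b * np.sum(p_a_given_b * np.log2(p_a_given_b))
    return f, h

# Toy usage: Lambda_A is a relabelling of Lambda_B plus a little noise.
rng = np.random.default_rng(0)
lam_b = rng.integers(0, 3, size=10_000)
relabel = {0: "x", 1: "y", 2: "z"}
lam_a = [relabel[b] if rng.random() > 0.02 else "noise" for b in lam_b]
f, h = estimate_translation(zip(lam_a, lam_b))
print(f, h)  # near-deterministic map, small residual entropy
```

Picking the argmax makes the map deterministic by construction; the residual conditional entropy $H(\Lambda_A \mid \Lambda_B)$ is then a crude proxy for the approximation error such an algorithm would try to minimize.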
TL;DR
If the narrow (but intelligent) AI can weakly interact with the environment, adapt from feedback, and exfiltrate secretly, even once, then it can learn how to translate from its own language to the environment's by building samples of paired messages. The crucial aspect is training the agent on an unknown/unconventional alphabet: the outputs are meaningless from the environment's point of view, which limits how much of the agent's language the environment can learn.
The goal of the following comment is twofold:
Let's assume that the narrow (but intelligent) AI can adapt from feedback and exfiltrate secretly.
- If the output tokens are bit-vectors compatible with the environment (for example, they follow the Unicode standard), then the "Sumerian + reporter" method...
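As a toy illustration of the "building samples of pairs" step (everything below is hypothetical and deliberately simplified: the agent's language is assumed to be a fixed symbol-for-symbol substitution of the environment's alphabet):

```python
from collections import Counter, defaultdict

def learn_substitution(pairs):
    """Recover a symbol-to-symbol map from aligned (agent_msg, env_msg) pairs.

    Toy setting: the agent's language is a fixed symbol-for-symbol
    substitution of the environment's alphabet, so position-wise
    co-occurrence counts are enough to recover the mapping.
    """
    cooc = defaultdict(Counter)
    for agent_msg, env_msg in pairs:
        for a_sym, e_sym in zip(agent_msg, env_msg):
            cooc[a_sym][e_sym] += 1
    # Map each agent symbol to its most frequent environmental counterpart.
    return {a: counts.most_common(1)[0][0] for a, counts in cooc.items()}

# Toy usage with a made-up 3-symbol agent alphabet.
table = str.maketrans("abc", "\u2460\u2461\u2462")
env_msgs = ["abcab", "bca", "cab"]
pairs = [(m.translate(table), m) for m in env_msgs]
print(learn_substitution(pairs))  # {'①': 'a', '②': 'b', '③': 'c'}
```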
Scanning the graphical proof, I don't see any issue with the following generalization of the Mediator Determines Redund Theorem:
Let $X_1,\dots,X_n,\Lambda$ and $\Lambda'$ be random variables, and let $X_1,\dots,X_m$ be any non-empty subset of $X_1,\dots,X_n$ satisfying the following conditions:
- $\Lambda$ Mediation: $X_1,\dots,X_m$ are independent given $\Lambda$.
- $\Lambda'$ Redundancy: $\forall j\in\{1,\dots,m\}:\ \Lambda'\leftarrow X_j\rightarrow\Lambda'$ (reading the repeated-variable diagram as usual: $\Lambda'$ is independent of itself given $X_j$, i.e. $\Lambda'$ is a deterministic function of $X_j$).
Then $\Lambda'\leftarrow\Lambda\rightarrow\Lambda'$, i.e. $\Lambda'$ is a deterministic function of $\Lambda$.
In the above, I've weakened the $\Lambda'$ Redundancy hypothesis: redundancy over any non-empty subset of the random variables should be enough to conclude the thesis.
Does the above generalization hold (and if not, why not)?
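To probe this numerically, here is a small sketch (the joint distribution, variable names, and the checker are all illustrative) that takes an explicit discrete joint over $(X_1, X_2, \Lambda, \Lambda')$ and measures how far each hypothesis and the conclusion are from holding, which makes it easy to hunt for counterexamples to the weakened version (e.g. the $m=1$ case):

```python
import numpy as np
from collections import defaultdict

def cond_entropy(joint, names, targets, given):
    """H(targets | given) in bits, for a joint distribution given as a dict
    mapping value-tuples (ordered as in `names`) to probabilities."""
    t = [names.index(v) for v in targets]
    g = [names.index(v) for v in given]
    p_g, p_gt = defaultdict(float), defaultdict(float)
    for a, p in joint.items():
        gk = tuple(a[i] for i in g)
        tk = tuple(a[i] for i in t)
        p_g[gk] += p
        p_gt[(gk, tk)] += p
    return -sum(p * np.log2(p / p_g[gk]) for (gk, _), p in p_gt.items() if p > 0)

# Toy joint over (X1, X2, L, Lp): here X1 = X2 = L and Lp = X1, so both
# hypotheses hold by construction; swap in other joints to hunt for
# counterexamples to the weakened version.
names = ["X1", "X2", "L", "Lp"]
joint = {(l, l, l, l): 0.5 for l in (0, 1)}

xs = ["X1", "X2"]
mediation_violation = sum(
    cond_entropy(joint, names, [x], ["L"]) for x in xs
) - cond_entropy(joint, names, xs, ["L"])     # I(X1;X2|L): zero iff Mediation
redundancy = [cond_entropy(joint, names, ["Lp"], [x]) for x in xs]  # zero iff Lp = f_j(X_j)
conclusion = cond_entropy(joint, names, ["Lp"], ["L"])              # zero iff Lp determined by L
print(mediation_violation, redundancy, conclusion)
```

Each quantity is a conditional entropy or conditional mutual information in bits, so in the approximate case they could double as rough error measures.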
If the above stands, then a single observational random variable (with agreement) is enough to satisfy the Redundancy condition (Mediation is trivially true for a single variable), and therefore $\Lambda_A$ is determined by $\Lambda_B$. Moreover, in the general approximation case, if we have various sets of random variables that meet the naturality conditions, we...