I was thinking about Machine Learning in the context of the “multiverse probability mass” concept (often used by Eliezer), which has us reasoning about counterfactual universes and some measure of how many of our alternate selves experience one outcome rather than another. This led me to the thought that, in that narrative frame, Machine Learning can be seen as a form of “mind summoning”.
We can imagine training a neural network by randomly perturbing its logical circuits and making the perturbed circuits that give the desired answer more likely; indeed, gradient descent can be recovered, in expectation, from exactly this kind of zeroth-order random sampling[1]. There is an obvious analogy here to “exploring locally” in the mind-multiverse until we manifest a mind that fits the outputs we desire.
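To make that concrete, here is a minimal sketch of the zeroth-order scheme from [1] on a toy least-squares problem (the objective, dimensions, and hyperparameters are my own illustrative choices, not from the paper): nudge the parameters in a random direction, measure whether the output improves, and update accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective (a stand-in for "a network's logical circuits"):
# fit weights w to noisy linear data, f(w) = mean squared error.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=100)

def f(w):
    return np.mean((X @ w - y) ** 2)

# Zeroth-order training in the style of [1]: sample a random Gaussian
# direction u, estimate the directional derivative with a finite
# difference, and step along -u scaled by that estimate. No gradients
# of f are ever computed, only two evaluations per step.
w = np.zeros(5)
mu, lr = 1e-4, 0.01  # smoothing radius and step size (illustrative values)
for _ in range(5000):
    u = rng.normal(size=w.shape)
    g_est = (f(w + mu * u) - f(w)) / mu * u  # gradient estimate from two evaluations
    w -= lr * g_est

print(f"final loss: {f(w):.6f}")
print(f"||w - w_true||: {np.linalg.norm(w - w_true):.4f}")
```

In expectation the update direction equals the gradient of a Gaussian-smoothed version of f, which is the sense in which “keep the random perturbations that help” and gradient descent coincide.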
Narratively, I wonder if there’s something there. Maybe it partly explains why training large ML models feels so much like summoning alien minds from alternate dimensions?
[1] Nesterov, Y. and Spokoiny, V. “Random Gradient-Free Minimization of Convex Functions.” Foundations of Computational Mathematics, 2017.