Aleksi Liimatainen

Waking up to reality. No, not that one. We're still dreaming.

Comments

doesn't correspond to anything real

There's a trivial sense in which this is false: any experience or utterance, no matter how insensible, is as much the result of a real cognitive process as the more sensible ones are.

There's another, less trivial sense that I feel is correct and often underappreciated: obfuscation of correspondence does not eliminate it. The frequency with which phenomena sharing features arise or persist is evidence of shared causal provenance, whether through universal principles, shared history, or some combination of the two.

After puzzling over the commonalities found in mystical and religious claims, I've come to see them as having some basis in subtle but detectable real patterns. The unintelligibility comes from the fact that neither mystics nor their listeners have a workable theory to explain the pattern. The mystic confabulates and the listener's response depends on whether they're able to match the output to patterns they perceive. No match, no sense.

The world is full of scale-free regularities that pop up across topics, much as 2+2=4 does. Ever since I learned how common and useful this is, I've been in the habit of tracking cross-domain generalizations. That bit you read about biology, or psychology, or economics, to name just a few, is likely to apply to the others in some fashion.

ETA: I think I'm also tracking the meta of which domains seem to cross-generalize well. Translation is not always obvious, but it's a learnable skill.

Did you write this reply using a different method? It has a different feel than the original post.

Partway through reading your post, I noticed that reading it felt similar to reading GPT-3-generated text. That quality seems shared by the replies using the technique. This isn't blinded, so I can't rule out confirmation bias.

ETA: If the effect is real, it may have something to do with word choice or other statistical features of the text. It takes a paragraph or two to build, and shorter texts feel harder to judge.
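To make "statistical features of the text" concrete, here is a toy sketch of the kind of surface statistics one might compare. The choice of features (type-token ratio and mean word length) is an illustrative assumption, not a validated detector of generated text; consistent with the point above, such statistics are noisy on short samples.

```python
import re

def text_features(text: str) -> dict:
    """Crude lexical statistics; illustrative stand-ins only."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return {"type_token_ratio": 0.0, "mean_word_length": 0.0}
    return {
        # Vocabulary diversity: unique words divided by total words.
        "type_token_ratio": len(set(words)) / len(words),
        # Average word length in characters.
        "mean_word_length": sum(len(w) for w in words) / len(words),
    }

sample_a = "The quick brown fox jumps over the lazy dog near the quiet river."
sample_b = "It is important to note that it is important to consider the considerations."
print(text_features(sample_a))  # higher type-token ratio: little repetition
print(text_features(sample_b))  # lower type-token ratio: repetitive phrasing
```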

If AI alignment were downstream of civilization alignment, how could we tell? How would the world look different if it were/were not?

If AI alignment is downstream of civilization alignment, how would we pivot? I'd expect at least some generalizability between AI and non-AI alignment work, and it would certainly be easier to learn from experience.

Yeah, there were important changes. I'm suggesting that most of their long-term impact came from enabling the bootstrapping process. Consider the (admittedly disputed) time lag between anatomical and behavioral modernity and the further accelerations that have happened since.

ETA: If you could raise an ape as a child, that variety of ape would've taken off.

Upgrading a primate didn't make it strongly superintelligent relative to other primates. The upgrades made us capable of recursively improving our social networking; that was what made the difference.

If you raised a child as an ape, you'd get an ape. That we seem so different now is due to the network effects looping back and upgrading our software.

Are you ontologically real or distinct from the sum of your parts? Do you "care" about things only because your constituents do?

I'm suggesting precisely that the group-network levels may be useful in the same sense that the human level or the multicellular-organism level can be useful. Granted, there's more transfer and overlap when the scale difference is small, but that in itself doesn't necessarily mean that the more customary frame is equally or more useful for any given purpose.

I appreciate the caring-about-money point; it got me thinking about how concepts and motivations/drives translate across levels. I don't think there's a clean joint to carve between sophisticated agents and networks-of-said-agents.

Side note: I don't know of a widely shared paradigm of thought or language that would be well-suited for thinking or talking about tall towers of self-similar scale-free layers that have as much causal spillover between levels as living systems like to have.

The network's results are no different from the summed behaviors of its components (in the same sense that the brain's results are no different from the summed behaviors of its neurons). I was surprised to realize just how simple and general the principle is.

ETA: On closer reading, I may have answered somewhat past your question. Yes, changes in connectivity between nearby nodes affect the operation of those nodes, and therefore the whole. This is equally true in both cases, as the abstract network dynamic is the same.

You seem to be focused on the individual level? I was talking about learning on the level of interpersonal relationships and up. As I explain here, I believe any network of agents does Hebbian learning on the network level by default. Sorry about the confusion.
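To gesture at what Hebbian learning on the network level could look like, here's a minimal sketch. The agents, the toy activity model, the learning rate, and the decay are all illustrative assumptions; the point is only that ties strengthened by co-activity come to encode structure that no individual agent represents.

```python
import random

random.seed(0)
N, ETA = 6, 0.05  # number of agents and learning rate (both assumed)

# Two latent "communities": agents 0-2 tend to respond to event A, agents 3-5 to event B.
community = [0, 0, 0, 1, 1, 1]

# Tie strength between each pair of agents, initially zero.
weights = {(i, j): 0.0 for i in range(N) for j in range(i + 1, N)}

for _ in range(2000):
    event = random.choice([0, 1])
    # An agent is usually active when its community's event occurs (toy activity model).
    active = [random.random() < (0.9 if community[i] == event else 0.1) for i in range(N)]
    for (i, j) in weights:
        if active[i] and active[j]:
            weights[(i, j)] += ETA * (1.0 - weights[(i, j)])  # co-activity strengthens the tie
        else:
            weights[(i, j)] *= 0.99  # unused ties slowly decay

# Within-community ties end up much stronger than across-community ties,
# purely from local co-activity: the network has "learned" the communities.
within = [w for (i, j), w in weights.items() if community[i] == community[j]]
across = [w for (i, j), w in weights.items() if community[i] != community[j]]
print(f"mean within-community tie: {sum(within) / len(within):.2f}")
print(f"mean across-community tie: {sum(across) / len(across):.2f}")
```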

Looking at the large scale, my impression is that the observable dysfunctions correspond pretty well to the pressures (or lack thereof) that organizations face, which fits the group-level-network-learning view. It seems likely that the individual failings, at least in the positions where they matter most, are downstream of that. Call it the institution alignment problem if you will.

I don't think we have a handle on how to effectively influence existing networks. Forming informal networks of reasonably aligned individuals around relatively object-level purposes seems like a good idea by default.

Edit: On reflection, in many situations insulation from financial pressures may be a good thing, all else being equal. That still leaves the question of how to keep networks in proper contact with reality. As our power increases, it becomes ever easier to insulate ourselves and spiral into self-referential loops.

If civilization really is powered by network learning on the organizational level, then we've been doing it exactly wrong. Top-down funding that was supposed to free institutions and companies to pursue their core competencies has the effect of removing reality-based external pressures from the organization's network structure. It certainly seems as if our institutions have become more detached from reality over time.

Have organizations been insulated from contact with reality in other ways?
