A combination of ideas from books I have been reading recently ("The Science of Can and Can't", "What We Owe the Future") has me wondering about potential recipes for sustainable moral exploration, and about ways of addressing cultural lock-in effects (moral/ethical stasis or stagnation).
If I imagine the option of locking in the culture of almost any population from any time period in my species' history, it looks to me like that option would lead to unnecessary suffering and great moral harm. The same could be said, at least in principle, of choosing to prevent cultural lock-in from ever occurring: the process by which cultures change does not guarantee that those changes are improvements or that they lie on a sustainable trajectory.
The challenge, as I see it, is to find the best answers available to us given our current knowledge, without ignoring the possibility that greater knowledge might improve or negate those answers in the future. A rough sketch of a solution of this type might be to implement cultural error correction and knowledge-base growth in a form that is minimally restrictive (respecting uncertainty and variational freedoms), effective (ethical, easily replicated across minds/societies/systems), and resilient (resisting entropy and decline).
I feel like I can see structural similarities between such an endeavor and the abstract design of a novel organism. Abstracting away the unnecessary physical components, and drawing on the functional similarities of information replicators and the universal constraints that govern them, I wonder whether, even in principle, there is a recipe, framework, or blueprint that could be designed to function in that way if implemented faithfully. Is this frame reasonable, is it reaching for an impossible balance, is it framed incorrectly, or is there something else I am missing?
General Purpose/Universal(?) definition of Magic.
Magic (for an observer O at time t) is any event or capability that appears to O to involve purposive causal power for which O lacks a coherent, lawful, and sufficiently detailed explanation.
Magic_O(t) = { e ∈ Events | Power(e) ∧ ¬GoodExplanation_O,t(e) ∧ Directed(e) }
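The observer-relative definition above can be sketched as executable code. This is only an illustration of the three conjuncts (Power, Directed, and the negated GoodExplanation); every name here is an assumption of mine, and the time parameter t is folded into which observers currently hold a good explanation of the event.

```python
from dataclasses import dataclass

# Hypothetical model of the definition Magic_O(t); all field and
# function names are illustrative, not part of the original formula.

@dataclass(frozen=True)
class Event:
    name: str
    exhibits_power: bool       # Power(e): appears causally efficacious
    directed: bool             # Directed(e): appears purposive / goal-directed
    explained_for: frozenset   # observers who hold a good explanation of e at t

def is_magic(e: Event, observer: str) -> bool:
    """Magic_O(t): Power(e) AND NOT GoodExplanation_O,t(e) AND Directed(e)."""
    return e.exhibits_power and e.directed and observer not in e.explained_for

# A lit LED looks magical to someone with no model of electricity,
# but not to an electrician who holds a good explanation of it.
led = Event("LED lights up", exhibits_power=True, directed=True,
            explained_for=frozenset({"electrician"}))

assert is_magic(led, "novice")
assert not is_magic(led, "electrician")
```

Note that the predicate is relative to the observer, not to the event: the same event can be inside Magic_O(t) for one observer and outside it for another, which matches the intent of the prose definition.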