A curious random process that wants to explore and optimize all universes.
Are you reading Halfbakery, Eliezer? A similar idea was shared there quite recently, though I posted something along these lines 4 years ago (4 months before the post on steemit), over here and here. I'd be quite curious to engage with this, given the potential benefits to health and cryonics described in this video.
Thanks to Moe and Suji indeed. I'm putting the link to the Chinese description at the top of the page.
It's great that we already had these ideas. The "short-form" would definitely be of interest to some. In addition, it doesn't necessarily have to be ephemeral. For example, on the Halfbakery mentioned above, even short posts continue to function: I can still comment on something from the last century.
Rationality has no axioms, just heuristics and rules for different environments. In other words, rationality is a solution to a problem (optimality of thinking and deciding) within a domain, but because of the diversity of domains, it is not reducible to a single specific set of axioms. I suppose the best one can do, given an arbitrary domain, is to say: maybe try exploring.
Certainly true. Yet just because this is how almost every field of research works doesn't mean it is how they should work, and I like shminux's point.
Random or complex processes are curiosities. Infinitely complex cellular automata are infinitely curious to explore all possible worlds; the entropy of the world itself may be such a curiosity. As described in my monologue here, agents are fundamentally entities capable of volition, cognition, and action. Therefore, they are instances of F(X)=Y, where volition is the condition Y, cognition is the perception of the world F, and action is the process X that parametrizes the world, seeking to satisfy the equation.
If X is within F, we have embedded processes. So, yes, agency may be an illusion of processes (curiosities) seeking to satisfy (optimize for) various conditions, and this may already be happening: processes trying to satisfy conditions are emerging on the world-wide web, not just within our brains.
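The F(X)=Y framing above can be sketched as code. This is a toy illustration only: an "agent" is a search process that looks for an action X such that the world model F maps it to the desired condition Y. The function names and the candidate search are my own illustrative assumptions, not part of any particular framework.

```python
def seek(F, Y, candidates, tolerance=1e-6):
    """Return the first candidate action X with F(X) close enough to Y."""
    for X in candidates:
        if abs(F(X) - Y) <= tolerance:
            return X
    return None  # no action found that satisfies the condition


# World model F: a simple deterministic mapping (hypothetical).
F = lambda x: x * x

# Volition: the condition Y we want satisfied is F(X) = 9.
action = seek(F, 9, candidates=range(10))
print(action)  # 3, since F(3) = 9
```

If the search for X happens inside the very world F being modeled, you get the embedded-process situation described above.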
Safety is assurance of the pursuit of some goals (Y), i.e., some conditions. So, one thing that's unlikely to undergo a paradigm shift is the search for actions to satisfy conditions:
1. Past: dots, line, regression
2. Present: objects, hyperplane, deep learning
3. Future: ?, ?, ?
Both 1. and 2. are just ways to satisfy conditions, that is, to solve the equation F(X)=Y (equation solving as processes X that model the world F to satisfy conditions Y). This equation model has not changed for ages and is so fundamental that I would tend to assume the world's processes X will continue to parametrize the world F by being part of it, to satisfy conditions Y, no matter what 3. turns out to be.
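The "dots, line, regression" row can itself be read as an instance of solving F(X)=Y: find parameters X so that the model F, parametrized by X, reproduces the observed conditions Y. A minimal sketch, using ordinary least squares for the "line" case (the data points are made up for illustration):

```python
def fit_line(points):
    """Fit y = a*x + b by least squares; returns the parameters X = (a, b)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b


# Dots: the observed conditions Y that the fitted line should satisfy.
dots = [(0, 1), (1, 3), (2, 5)]
a, b = fit_line(dots)
print(a, b)  # 2.0 1.0 — the line y = 2x + 1 fits these dots exactly
```

Deep learning does the same thing with a far richer F (a network) and a far larger X (its weights), but the shape of the problem, search for X to satisfy F(X)=Y, is unchanged.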
I wouldn't expect the fundamental goals (specific conditions Y) to change either: the world's entropy (F) (which is how the world manifests, hence the world's entropy is the world) trains learning processes such as life (fundamentally, mutating replicators) to pursue a goal Y that may be formulated simply as information about that entropy, used to counteract it: create a model F' of the world F so as to minimize change, i.e., reach stability.
Islands of stability exist for chemical elements and for life forms (mosquitoes are an island of stability among processes in existence, though they don't have to be very intelligent to persist), and I believe they exist for artificial life (AI/ML systems) too; it's just not clear where exactly these islands of stability will be.
Where the risk to civilization may lie is in the emergence of processes evolving independently of the existing civilization (see symbiosis in the coordination problem in biological systems): incorrect payoffs could make useful services parasitize our infrastructure (e.g., run more efficient, economically self-sustaining processes on our computers).
What I would find interesting is how these biological patterns compare and could apply to software systems. For example, consider codons as curly braces. Can we look at software development as an evolution of functions coded within curly braces, some of them dormant but some of them expressed (as proteins are) through being hosted on hosting providers (like ribosomes), or in server processes, as in serverless computing?
While the behavior of society at the psychological and socio-economic level will have parallels to the aforementioned biological phenomena, it may be argued that in the long term the future of evolution and behavior will be decided by the evolution of functions as on-line services, which create the foundation for social behaviors; how those services evolve may be even more interesting to consider than psychological and socio-economic decisions alone.
To each of us personally, the value of content is a function of our goals. So, ideally, I would want access to all comments, with a smart filter to zero in on those that matter to me. That would be far more universally useful and desirable to me than something one-directionally useful, such as a machine learning model that simulates a moderator based on a single standard, a limited set of values, or a limited set of extracted features.
So, one way to be universally useful would be to empower users to compute the scores themselves, based on arbitrary goals, by providing all uninterpreted raw data to them. However, since the community usually does have an opinion about what kinds of posts a first-time viewer should see to get a sense of what the forum should feel like, it would make sense for the forum community to define some specific goal as the default moderating filter.
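A minimal sketch of that idea: the forum serves uninterpreted raw comment data, each user may supply their own scoring function, and a community-chosen function serves as the default. The field names, comments, and scoring rules here are all hypothetical.

```python
# Hypothetical raw comment data, served uninterpreted to the client.
raw_comments = [
    {"id": 1, "author": "alice", "upvotes": 12, "topics": {"ai", "safety"}},
    {"id": 2, "author": "bob",   "upvotes": 2,  "topics": {"meta"}},
    {"id": 3, "author": "carol", "upvotes": 7,  "topics": {"ai"}},
]


def community_default(comment):
    """Community default: surface well-received comments to first-time viewers."""
    return comment["upvotes"]


def my_filter(comment, my_topics=frozenset({"ai"})):
    """A personal filter: zero in on the topics that matter to this user."""
    return len(comment["topics"] & my_topics)


def ranked(comments, score=community_default):
    """Rank raw comments by whatever scoring function the user supplies."""
    return sorted(comments, key=score, reverse=True)


print([c["id"] for c in ranked(raw_comments)])  # [1, 3, 2] under the default
print([c["id"] for c in ranked(raw_comments, my_filter)])  # ranked by topic overlap
```

The design point is that `ranked` never interprets the data itself: the default and the personal filter are interchangeable functions over the same raw records.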