This article didn’t provide evidence that it’s possible; it just outlined the intended approach.
I agree that the analogy with plants or forests is not trivial.
However, even though civilization is short-lived, human communities have been evolving for much longer.
Also, large-scale systems like the market seem very robust (it has survived wars, as well as environmental and technological perturbations).
So the idea here is conditional:
If we did find a root cause for Moloch dynamics and catastrophic failures, then addressing it MIGHT allow society to solve problems by itself.
Think of it as a wild bet we should at least put some chips on. I’ll write more on this soon.
Sorry for the late reply.
I agree it doesn’t have much practical value. I thought it was “philosophically” interesting, because it seems like an important benchmark in terms of our relationship with machines.
The hypothesis is that our intuitive / System 1 approximation to treating machines as thinking beings depends on a benchmark we have already passed.
I wrote it down because I want to refer to it in future pieces (like the one on AI and loneliness).
I guess that could be true.
Just another thought that might add to that: I believe our values are highly dependent on our communities. Sort of a bandwagon effect.
So from the outside, I don’t like that possible future. Maybe when we’re there I might prefer it. However, it might be a local optimum for everyone involved (like using social networks currently).
I think many people, if given the choice, would choose for social networks not to exist.
I agree!
I forgot to mention that all this happened in Argentina, where this is not yet the case. Also, he was only just learning how to use social media.
But yeah, there are more pressures than just the social one. I think they have to do with network effects as well. I just thought loneliness was one more important topic.
I suppose there is also a Moloch dynamic: if every other kid believes in Santa, the one who doesn’t might be left out. In worse cases, other kids might react aggressively to their comments. Something similar happens to atheist adults in strongly Christian communities.
The “sense of wonder” argument may be a rationalization for “I don’t want my kid to be the only one who doesn’t share THIS socially important sense of wonder with the rest”. The kid who obsesses over space shuttles instead of Santa might be socially worse off.
If this is the case, arguing against the explicit arguments for Santaism won’t be very effective, because the attitude isn’t rationally founded, but rather a post-hoc justification of an individual incentive within the Moloch dynamic. I suspect the main issue here is the lock-in effect.
I’m not sure what the solution would be. It might need coordination (a group of parents whose kids are friends agreeing not to tell the “noble lie” and to find alternatives instead). Maybe, if it works better (or, at least, gets easier) once a small critical mass is achieved, more parents will adopt this approach.
I sort of agree, but I would reformulate it as naturally staying brutish, with people spending extra effort to make it harder to “humanize”.
While I agree this distinction is important, and would make some people reflect upon their actions, I think that heuristics and social network dynamics strongly dominate what actually happens.
In the end, the “dehumanization” often doesn’t require extra effort; rather, it’s the result of automatic rationalizations / cognitive dissonance.