With the kind of scaling required to approach utopia, by what mechanism do we screen out the bad bits? Our legacy includes total war and human sacrifice; these things too would scale.
I'd say mostly that you have competent people working on the right cause. You do also need to look at exactly what you are doing, but the reason competent people on the cause matters more is that finding the right thing to do is the hardest part. If it were obvious, it would already be done; and mediocre execution on the right thing beats superlative execution on the wrong thing, it seems to me.
As an intuition pump, consider that successful startups usually pivot at some point and this is why investors prefer evaluating the team to the idea. A little more consideration of the same point reveals that the people are where the investment goes and how the capital is built in both the literal and gearsy senses.
There are a variety of experiences which are relatively common in religion but quite rare elsewhere. The best articulators of spiritual experience I know of are the Buddhist meditation traditions, which describe a whole range of different mental states. Others specialize in reaching a few, like the ecstasy pursued by whirling dervishes and Pentecostals passing around a snake.
This has practical consequences: there is a notion in political anthropology called the theatre state, wherein the government's function is not to manage resources or provide security but to stage dramatic ritual experiences, naturally through the vehicle of religion. It is an increasingly popular viewpoint when considering issues like large-scale human sacrifice in Mesoamerican polities like the Aztec and Maya.
But if you are actively searching for empathy for faith, I think we are better served by not thinking about people or groups. Instead, consider math: I propose that when we use equations or methods of analysis that are dissimilar from how people think (which is most of them) and then act on those answers, what we are doing is similar to the radical surrender you point to. The difference is one of accuracy and precision, so it is less of a leap, but it goes in the same direction.
Imagine the Apollo missions: a few people wrapped in a few layers of foil, drifting through an environment utterly inimical to human life. The course is computed; they perform the translunar injection burn, then shut down the engine to conserve fuel. Hineni, hineni, O Newton.
The two I immediately thought to check were 1, 2. I fully expected these posts to be at least controversial, with a substantial chance of a negative score, but they outperformed my expectations. Now that the karma has been normalized, the vote:score ratio is much more consistent with the "some people liked it and some people didn't" expectation I had. I note the breakdown is also consistent with a medium amount of small upvotes and very little dislike, but given the content I was betting on at least a few strong downvotes.
Reviewing some older posts' current scores, I am in that weird place where I have to undo some updating about community preferences. It is mildly gratifying to have been pretty accurate in the first place, though.
This feels like one of those social-reality-level problems. It seems to me that as long as the concept has a single socially real meaning, we get all the same value out of it with respect to communication. I am unsure about thinking: on the one hand, individually it is better to have a concept that cleaves physical reality at the joints; on the other, a group that shares a socially real meaning seems more likely to discover the limits of the concept.
I suppose the question there boils down to whether transmission is more important than generation, and if so by how much?
I too have voted!
And then I walked away to come back later for points adjustments. I don't expect to make many more, but see no reason not to keep the option.
The single-agent MDP setting resolves my confusion; now it is just a curiosity with respect to directions future work might go. The "action varies with discount rate" result is essentially what interests me, so refocusing on the single-agent case: what do you think of the discount rate being discontinuous?
To be clear, there isn't an obvious motivation for this, so my guess for the answer is something like "don't know and didn't check, because it cannot change the underlying intuition."
I have a question about this conclusion:
When 0 < γ < 1, you're strictly more likely to navigate to parts of the future which give you strictly more options (in a graph-theoretic sense). Plus, these parts of the future give you strictly more power.
What about the case where agents have different time horizons? My question is inspired by one of the details of an alternative theory of markets, the Fractal Market Hypothesis. The relevant detail is the investment horizon: how long an investor holds the asset. To oversimplify, the theory argues that markets function normally when many investors have different investment horizons; when uncertainty increases, investors shorten their horizons, and when everyone's horizons get very short we get a panic.
I thought this might be represented by a step function in the discount rate, but reviewing the paper it looks like γ is continuous. It also occurs to me that this should be similar, in terms of computation, to setting γ=1 and running it over fewer turns, but that doesn't seem like it would work as well for modelling different discount rates on the same MDP.
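To make the equivalence I'm gesturing at concrete, here is a toy sketch (my own, not from the paper): a "step-function" discount that weights rewards fully inside an investment horizon and not at all past it coincides, by construction, with γ=1 run over fewer turns. All function names here are my own invention.

```python
# Toy comparison of three discounting schemes on a fixed reward stream.
# "Step-function" discount = weight 1 up to a horizon H, weight 0 after,
# which is the same computation as gamma = 1 truncated at H turns.

def value_geometric(rewards, gamma):
    """Standard discounted return with a constant gamma."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

def value_step(rewards, horizon):
    """Step-function discount: full weight inside the horizon, zero after."""
    return sum(r for t, r in enumerate(rewards) if t < horizon)

def value_truncated(rewards, horizon):
    """gamma = 1, but only run for `horizon` turns."""
    return value_geometric(rewards[:horizon], gamma=1.0)

rewards = [1.0, 2.0, 0.5, 3.0, 1.0]
H = 3

# Identical by construction: both ignore everything past the horizon.
assert value_step(rewards, H) == value_truncated(rewards, H)
print(value_step(rewards, H))         # 3.5
print(value_geometric(rewards, 0.9))  # smooth discounting down-weights later rewards
```

The catch, as noted above, is that this trick only handles one horizon per run; comparing agents with different horizons on the same MDP means re-running with different truncations rather than varying a single continuous γ.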
Does anyone know of a good tool for animating 3D shapes? I have a notion of trying to visualize changes in capability by doing something like the following:
In this way there would be kind of a twisted tree, and I could deploy our intuitions about trees as a way to drive the impact. I thought there would be something like a D3 for these kinds of manipulations, but I haven't found it yet.
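For what it's worth, generating the geometry is the easy half even without a D3-analogue. Here is a minimal sketch (entirely my own invention, not any existing tool's API) that recursively builds the line segments of a "twisted tree"; the resulting segments could then be handed to matplotlib's 3D axes, Plotly, or three.js for rendering and animation.

```python
import math

def grow(origin, direction, length, depth, twist=0.6, segments=None):
    """Recursively generate (start, end) 3D line segments for a twisted binary tree."""
    if segments is None:
        segments = []
    if depth == 0:
        return segments
    x, y, z = origin
    dx, dy, dz = direction
    end = (x + dx * length, y + dy * length, z + dz * length)
    segments.append((origin, end))
    for sign in (+1, -1):
        # Tilt the child branch about the x-axis by +/- twist radians...
        c, s = math.cos(sign * twist), math.sin(sign * twist)
        tilted = (dx, c * dy - s * dz, s * dy + c * dz)
        # ...then yaw about the z-axis so branches leave a single plane.
        cz, sz = math.cos(twist), math.sin(twist)
        child = (cz * tilted[0] - sz * tilted[1],
                 sz * tilted[0] + cz * tilted[1],
                 tilted[2])
        grow(end, child, length * 0.7, depth - 1, twist, segments)
    return segments

tree = grow(origin=(0.0, 0.0, 0.0), direction=(0.0, 0.0, 1.0), length=1.0, depth=5)
print(len(tree))  # a binary tree of depth 5 has 2**5 - 1 = 31 segments
```

Varying `twist` or the branch-length decay per frame would give the capability-change animation, assuming the renderer supports redrawing segment lists.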