FeepingCreature


Yeah, I never got the impression that they had a robust solution to fog of war, or any sort of theory of mind, which you absolutely need for Starcraft.

Shouldn't the king just make markets for "crop success if planted assuming three weeks" and "crop success if planted assuming ten years" and pick whichever is higher? Actually, shouldn't the king define some metric for kingdom well-being (death rate, for instance) and make betting markets for this metric under his possible roughly-primitive actions?
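The king's procedure here is essentially a futarchy-style decision rule: condition a market on each candidate action, read off the market-implied value of the chosen metric, and take the action with the best value. A minimal sketch, where the action names and prices are invented for illustration:

```python
# Futarchy-style action selection: pick the action whose conditional
# market predicts the best value of the kingdom's well-being metric.
# The prices below are hypothetical stand-ins for real market quotes.

def pick_action(conditional_prices):
    """conditional_prices maps action -> market-implied metric value
    (e.g. implied crop-success probability if that action is taken)."""
    return max(conditional_prices, key=conditional_prices.get)

markets = {
    "plant assuming three weeks": 0.62,  # hypothetical implied probability
    "plant assuming ten years": 0.41,
}

best = pick_action(markets)
```

The point being that the aggregation error in the fable disappears once the markets are conditioned on the action actually under consideration, rather than averaged over all of them.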

This fable just seems to suggest that you can draw wrong inferences from betting markets by naively aggregating. But this was never in doubt, and does not disprove that you can draw valuable inferences, even in the particular example problem.

This is just human decision theory modules doing human decision theory things. It's a way of saying "defend me or reject me; at any rate, declare your view." You say something at the extreme end of what you consider defensible in order to act as a Schelling point for defense: "even this is accepted in a member." In the face of comments that seem to validate Ziz's view, if not her methods, this comment calls for an explicit rejection not of Ziz's views, but of Ziz's mode of approach, by explicitly saying "I am what you hate, I am here, come at me."

A community that can accept "nazis" (in the vegan sense) cannot also accept "resistance fighters" (in the vegan sense). Either the "nazi" deserves to exist or he doesn't. But to test this dichotomy, somebody has to out themselves as a "nazi."

Right, but if you're an alien civilization trying to be evil, you probably spread forever; if you're trying to be nice, you also spread forever, but if you find a potentially life-bearing planet, you simulate it out (obviating the need for ancestor sims later). Or some such strategy. The point is there shouldn't ever be a border facing nothing.

Sure; though what I imagine is more "Human ASI destroys all human value and spreads until it hits defended borders of alien ASI that has also destroyed all alien value..."

(Though I don't think this is the case. The sun is still there, so I doubt alien ASI exists. The universe isn't that young.)

I believe this is a misunderstanding: ASI will wipe out all human value in the universe.

Maybe it'd be helpful not to list obstacles, but to list how much time you expect each of them to add before the finish line. For instance, I think there are research hurdles to AGI, but only about three years' worth.

Disclaimer: I know Said Achmiz from another LW social context.

In my experience, the safe bet is that minds are more diverse than almost anyone expects.

A statement advanced in a discussion like "well, but nobody could seriously miss that X" is near-universally false.

(This is especially ironic because of the "You don't exist" post you just wrote.)

Not wanting to disagree or downplay, I just want to offer a different way to think about it.

When somebody says I don't exist - and this definitely happens - it all depends, to me, on what they're trying to do with it. If they're saying "you don't exist, so I don't need to worry about harming you, because the category of people who would be harmed is empty," then yeah, I feel hurt and offended and have the urge to speak up, probably loudly. But if they're just saying it while trying to analyze reality - like, "I don't think people like that exist, because my model doesn't allow for them" - the first feeling I get is delight. I get to surprise you! You get to learn a new thing! Your model is gonna break and flex and fit new things into it!

Maybe I'm overly optimistic about people.

I'll cheat and give you the ontological answer upfront: you're confusing the alternate worlds simulated in your decision algorithm with physically real worlds. And the practical answer: free will is a tool for predicting whether a person is amenable to persuasion.

Smith has a brain tumor such that he couldn’t have done otherwise

Smith either didn't simulate alternate worlds, didn't evaluate them correctly, or the evaluation didn't impact his decision-making; there is no process flow through outcome simulation that led to his action. Instead of "I want X dead -> murder" it went "Tumor -> murder". Smith is unfree, even though both causal chains are equally physically determined.
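The distinction can be put as a toy decision loop: a "free" agent simulates alternate actions and picks by evaluating outcomes, while the tumor case bypasses that loop entirely. A minimal sketch, with all world models and valuations invented for illustration:

```python
# Toy model of "free" choice as outcome simulation: the agent considers
# each available action, simulates its outcome, and picks the one it
# values most. The "alternate worlds" exist only inside the agent.

def free_choice(actions, simulate, evaluate):
    # "Could have done otherwise": the other actions really were
    # simulated and scored; they lost on evaluation, not on absence.
    return max(actions, key=lambda a: evaluate(simulate(a)))

# Hypothetical example: outcomes and values are stand-ins.
outcomes = {"murder": "victim dead", "walk away": "victim alive"}
values = {"victim dead": -100, "victim alive": 10}

chosen = free_choice(
    actions=list(outcomes),
    simulate=outcomes.get,
    evaluate=values.get,
)
# The tumor case has no such loop: the action is emitted without any
# simulate/evaluate step, which is what makes Smith unfree on this model.
```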

Second, would a compatibilist think that a computer programmed with a chess-playing algorithm has free will or is responsible for its decisions?

Does the algorithm morally evaluate the outcomes of its moves? No. Hence it is not morally responsible. The algorithm does evaluate the outcomes of its moves for chess quality; hence it is responsible for its victory.

Is my dog in any sense “responsible” for peeing on the carpet?

Dogs can be trained to associate bad actions with guilt. There is a flow that leads from action prediction to moral judgment prediction; the dog is morally responsible. Animals that cannot do this are not.

Fourth, does it ever make sense to feel regret/​remorse/​guilt on a compatibilist view?

Sure. First off, note that our ontological restatement upfront completely removed the contradiction between free will and determinism, so the standard counterfactual arguments are back on the table. But also, I think the better approach is to think of these feelings as adaptations and social tools. "Does it make sense" = "is it coherent" + "is it useful". It is coherent in the "counterfactuals exist in the predictions of the agent" model; it is useful in the "push game theory players into cooperate/cooperate" sense.
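The "push players into cooperate/cooperate" claim can be made concrete with a toy prisoner's dilemma: model guilt as an internal cost attached to defection, and with enough of it the dominant move flips from defect to cooperate. All payoff numbers here are hypothetical:

```python
# Toy prisoner's dilemma: guilt modeled as an internal penalty on
# defection. With enough guilt, cooperation becomes the best response
# regardless of the other player's move. Payoffs are hypothetical.

PAYOFFS = {  # (my_move, their_move) -> my material payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_move(their_move, guilt=0):
    def payoff(me):
        cost = guilt if me == "D" else 0
        return PAYOFFS[(me, their_move)] - cost
    return max(("C", "D"), key=payoff)

# Without guilt, defection dominates; with guilt above the temptation
# gap (here, more than 2 points), cooperation dominates instead.
```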
