cousin_it

Hah. Even if the king had a computer that could simulate how the golem would react to the suicide order, that still wouldn't help him, if the golem followed updateless decision theory.

I don't fully understand Vanessa's approach yet.

About caring about other TDT agents: it feels to me like the kind of thing that should follow from the right decision theory. Here's one idea. Imagine you're a TDT agent that has just been started / woken up. You haven't yet observed anything about the world, and haven't yet observed your utility function either - it's written in a sealed envelope in front of you. Well, you have a choice: take a peek at your utility function and at the world, or use this moment of ignorance to precommit to cooperate with everyone else who's in the same situation. Which includes all other TDT agents who ever woke up or will ever wake up and are smart enough to recognize the choice.

It seems likely that such wide cooperation will increase total utility, and so increase expected utility for each agent (ignoring anthropics for the moment). So it makes sense to make the precommitment, and only then open your eyes and start observing the world and your utility function and so on. So for your proposed problem, where a TDT agent has the opportunity to kill another TDT agent in their sleep to steal five dollars from them (destroying more utility for the other than gaining for themselves), the precommitment would stop them from doing it. Does this make sense?
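
To spell out the arithmetic behind "increase expected utility for each agent", here's a minimal sketch. The numbers (five dollars gained, a hundred units of utility destroyed, a 50/50 chance of ending up in either role) are just illustrative assumptions, not part of the original problem:

```python
# A toy expected-utility calculation for the "kill them in their sleep for five
# dollars" case above. Behind the veil, you don't yet know whether you'd end up
# as the would-be killer or the victim, so treat the two roles as equally likely.

gain_to_killer = 5     # what the killer steals
loss_to_victim = 100   # what the victim loses (more utility destroyed than gained)

# Policy 1: no precommitment, so the killing happens whenever it pays off locally.
ev_defect = 0.5 * gain_to_killer + 0.5 * (-loss_to_victim)

# Policy 2: precommit to cooperate, so no killing and no transfer happens.
ev_precommit = 0.0

print(f"expected utility without the precommitment: {ev_defect:+.1f}")
print(f"expected utility with the precommitment:    {ev_precommit:+.1f}")
```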

I think you can make it more symmetrical by imagining two groups that can both coordinate within themselves (like TDT), but each group cares only about its own welfare and not the other group's. And then the larger group will choose to cooperate and the smaller one will choose to defect. Both groups are doing as well as they can for themselves; the game just favors those whose values extend to a smaller group.

About 2TDT-1CDT. If two groups are mixed into a PD tournament, and each group can decide on a strategy beforehand that maximizes that group's average score, and one group is much smaller than the other, then that smaller group will get a higher average score. So you could say that members of the larger group are "handicapped" by caring about the larger group, not by having a particular decision theory. And it doesn't show reflective inconsistency either: for an individual member of a larger group, switching to selfishness would make the larger group worse off, which is bad according to their current values, so they wouldn't switch.
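
Here's a toy computation that makes the "smaller group gets a higher average score" claim concrete. It's only a sketch under illustrative assumptions (standard PD payoffs of 5/3/1/0, group sizes of 9 and 1, and each group committing to a single move for all of its members), not a faithful model of 2TDT-1CDT:

```python
# Round-robin PD between two groups, where each group picks one move for all its
# members in advance. Payoffs: T=5, R=3, P=1, S=0 (illustrative, standard values).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def group_averages(n_big, n_small, move_big, move_small):
    """Average per-game score for a member of the big group and of the small group."""
    games = n_big + n_small - 1  # each agent plays every other agent once
    big = ((n_big - 1) * PAYOFF[(move_big, move_big)][0]
           + n_small * PAYOFF[(move_big, move_small)][0]) / games
    small = ((n_small - 1) * PAYOFF[(move_small, move_small)][0]
             + n_big * PAYOFF[(move_small, move_big)][0]) / games
    return big, small

for move_big in "CD":
    for move_small in "CD":
        big, small = group_averages(9, 1, move_big, move_small)
        print(f"big plays {move_big}, small plays {move_small}: "
              f"big avg = {big:.2f}, small avg = {small:.2f}")
```

With these numbers the best responses land on the big group cooperating (2.67 per game, better than the 1.00 it would get by defecting) and the small group defecting (5.00 per game, better than the 3.00 from cooperating), so each group is doing as well as it can for itself and the smaller group still comes out ahead.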

Edit: You could maybe say that TDT agents cooperate not because they care about one another (a), but because they're smart enough to use the right decision theory that lets them cooperate (b). And then the puzzle remains, because agents using the "smart" decision theory get worse results than agents using the "stupid" one. But I'm having a hard time formalizing the difference between (a) and (b).

Administered by the state, of course. An open-air prison where you can choose where to live, when to go to bed and wake up, what to eat, who to work with, and so on would feel a lot less constraining to the spirit than the prisons we have now.

I think that's the key factor for me. It's a bit hard to define: a punishment should punish, but not constrain the spirit. For example, a physical ball and chain (though it looks old-fashioned and barbaric) seems like an okay punishment to me, because it's very clear that it only limits the body. The spirit stays free - you can still talk to people, look at clouds and so on. Or in the case of informational crimes, a virtual ball and chain that limits the bandwidth of your online interactions, or something like that.

Just my opinions.

  1. How an anarchist society can work without police. To me the example of Makhno's movement shows that it can work if most people are armed and willing to keep order, without delegating that task to anyone. (In this case they were armed because they were coming out of a world war.) Once people start saying "eh, I'm peaceful, I'll delegate the task of keeping order to someone else", you eventually end up with police.

  2. Are police inherently bad? I think not; it depends mostly on what kind of laws they're enforcing and how fairly. Traffic laws, alright. Drug laws, worse. Laws against political dissent, oh no. So it makes more sense to focus on improving the laws and courts.

  3. Prisons. I think prisons should be abolished, because keeping someone locked up is prolonged psychological torture. The best alternative is probably exile to designated "penal" territories (but without forced labor), either overseas or within the country itself.

Maybe one example is the idea of a Dutch book. It comes originally from real-world situations (sports betting and so on), and then we apply it to rationality in the abstract.
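
To make that concrete, here's a minimal sketch of a Dutch book with made-up numbers (the 0.6/0.6 credences are just an illustrative example of incoherent odds): if your probabilities for an exhaustive, mutually exclusive set of outcomes sum to more than 1, and you'll buy a $1 ticket on each outcome at a price equal to your credence, you lose money no matter what happens.

```python
# Illustrative Dutch book: credences for "rain" and "no rain" sum to 1.2, so buying
# a $1 ticket on each outcome at those prices guarantees a loss in every world.
credences = {"rain": 0.6, "no rain": 0.6}

total_paid = sum(credences.values())  # 1.2 paid for the two tickets

for outcome in credences:
    payout = 1.0  # exactly one ticket pays off, whichever outcome occurs
    print(f"if '{outcome}' happens: net = {payout - total_paid:+.2f}")
# Both lines print -0.20: a sure loss, which is what marks the credences as irrational.
```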

Or another example, much older, is how Socrates used analogy. It was one of his favorite tools, I think. When talking about some confusing thing, he'd draw an analogy with something closer to experience. For example, "Is the nature of virtue different for men and for women?" - "Well, the nature of strength isn't that much different between men and women, and likewise the nature of health, so maybe virtue works the same way." Obviously this way of reasoning can easily go wrong, but I think it's also pretty indicative of how people do philosophy.

I'm not saying it's not risky. The question is more: what's the difference between doing philosophy and other intellectual tasks?

Here's one way to look at it that just occurred to me. In domains with feedback, like science or just doing real-world stuff in general, we learn some heuristics. Then we try to apply these heuristics to the stuff of our mind, and sometimes it works, but more often it fails. And then doing good philosophy means having a good set of heuristics from outside of philosophy, and good instincts about when to apply them or not. And some luck, in that some heuristics will happen to generalize to the stuff of our mind, but others won't.

If this picture is true, then running far ahead with philosophy is just inherently risky. The further you step away from heuristics that have been tested in reality, and from their area of applicability, the bigger your error will be.

Does this make sense?

I'm pretty much with you on this. But it's hard to find a workable attack on the problem.

One question, though: do you think philosophical reasoning is very different from other intellectual tasks? If we keep stumbling into LLM-type things which are competent at a surprisingly wide range of tasks, do you expect that they'll be worse at philosophy than at other tasks?

This could even be inverted. I've seen many people claim they were more romantically successful when they were poor, jobless, ill, psychologically unstable, on drugs and so on. I've experienced something like that myself as well. My best explanation is that such things make you come across as more real and exciting in some way. Because most people at most times are boring as hell.

That suggests the possibility of getting into some kind of hardship on purpose, to gain more "reality". But I'm not sure you can deliberately push yourself into as much genuine panic and desperation as it takes. You'd stop yourself earlier.
