Cause "according to the criterion of others' welfare" doesn't require "at ones own expense".
Expanding on this from my comment:
Wouldn't that be an example of agents faring worse with more information / more "rationality"? Which should hint at a mistake in our conception of rationality instead of thinking it's better to have less information / be less rational?
Eliezer wrote this in Why Our Kind Can't Cooperate:
Doing worse with more knowledge means you are doing something very wrong. You should always be able to at least implement the same strategy you would use if you are ignorant, and preferably do better. You definitely should not do worse. If you find yourself regretting your "rationality" then you should reconsider what is rational.
It's interesting to note that in that case, he was specifically talking about coordination. And this post claims common knowledge can make rational agents specifically less able to cooperate. The given example is this game:
Imagine that Alice and Bob are each asked to name dollar amounts between $0 and $100. Both Alice and Bob will get the lowest amount named, but whoever names that lowest number will additionally get a bonus of $10. No bonus is awarded in the case of a tie.
According to traditional game theory, the only rational equilibrium is for everyone to answer $0. This is because traditional game theory assumes common knowledge of the equilibrium; if any higher answer were given, there would be an incentive to undercut it.
You didn't say the name of the game, so I can't go read about it, but thinking about it myself, it seems like one policy rational agents could follow that would fare better than naming zero is picking a number at random. If my intuition is correct, that would let each of them win the bonus about half the time, and the amount named would be pretty high (slightly less than 50?). An even better policy would be to randomly pick between the maximum and the maximum minus one ($100 or $99), which I expect would outperform even humans. With this policy, common knowledge/belief would definitely help.
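Here's a quick simulation to sanity-check those guesses (my own toy model, not from the post): it compares always naming $0, naming a uniformly random amount, and the 99-or-100 mixed strategy.

```python
import random

def play(a, b, bonus=10):
    """One round: both players get the lowest amount named; whoever named
    it alone additionally gets the bonus (no bonus on a tie)."""
    low = min(a, b)
    return low + (bonus if a < b else 0), low + (bonus if b < a else 0)

def average_payoff(strategy, rounds=100_000):
    """Average per-player payoff when both players use the same strategy."""
    total = 0.0
    for _ in range(rounds):
        pa, pb = play(strategy(), strategy())
        total += (pa + pb) / 2
    return total / rounds

uniform = lambda: random.randint(0, 100)      # name anything from $0 to $100
near_max = lambda: random.choice([99, 100])   # name $99 or $100 at random

print("always $0: ", average_payoff(lambda: 0))   # exactly 0
print("uniform:   ", average_payoff(uniform))     # ~38 (expected minimum ~33, plus the bonus about half the time)
print("99 or 100: ", average_payoff(near_max))    # ~101.75
```

So if this toy model is right, uniform guessing is a bit less lucrative than I guessed above (the expected minimum of two uniform picks is about $33), but the 99-or-100 policy really does do very well.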
The InfiniCheck message app problem is a bit more complicated. Thinking about it, it seems like the problem is that it always creates an absence of evidence (which is evidence of absence) that equals the evidence, i.e., the way the system is built, it always provides an equal amount of evidence and counter-evidence, so the agent is always perfectly uncertain and demands/desires additional information. (If so, then a finite number of checks should make the problem terminate on the final check - correct?)
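To make the "terminates on the final check" conjecture concrete, here's a toy model (entirely my own construction, not from the post): each agent waits for the next-order confirmation before acting, and the question is whether that regress bottoms out.

```python
def will_act(level, max_checks):
    """Toy model: before acting, an agent waits for the next-order
    confirmation that the previous confirmation went through."""
    if level >= max_checks:
        return True   # the final check: there is nothing further to wait for
    return will_act(level + 1, max_checks)

print(will_act(0, max_checks=3))        # True - the regress bottoms out on the final check
# will_act(0, max_checks=float('inf'))  # RecursionError - the InfiniCheck regress never bottoms out
```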
The question is whether it can be said that the demand/desire for additional information, rather than the additional information itself, creates the problem, or whether these can't actually be distinguished, because that would just be calling the absence of evidence a "demand" for information rather than just information (which it is).
Also, this actually seems like a case where humans would be affected in a similar way. Even with the one-checkmark system, people reason based on not seeing the checkmark, so apart from the blur @Dacyn mentioned, I expect people to suffer from this too.
I also want to point out that commitment solves this, whether or not you adopt something like an updateless decision theory. Cause once you've simply said "I'm going to go here", any number of checkmarks becomes irrelevant. (Ah, re-reading that section for the third time, that's actually exactly what you say, but I'm keeping this paragraph because if it wasn't clear to me, it might not have been clear to others either.)
Formatting error: The end of the first paragraph is at the start of the second paragraph.
that it, it shouldn’t apply
That is*
I thought "extensional definition" referred to what "ostensive definition" refers to (which is how Eliezer is using it here), so I guess I already learned something new!
The two methods can be combined: when you read something you agree with, try to come up with a counterargument. If you can't refute the counterargument, post it; if you can, post both the counterargument and its refutation.
It may be good to think of Standpoint Epistemology as an erisology, i.e. a theory of disagreement. If you observe a disagreement, Standpoint Epistemology provides one possible answer for what that disagreement means and how to handle it.
Then why call it an epistemology? Call it Standpoint Erisology. But...
According to Standpoint Epistemology, people get their opinions and beliefs about the world through their experiences (also called their standpoint). However, a single experience will only reveal part of the world, and so in order to get a more comprehensive perspective, one must combine multiple experiences. In this way the ontology of Standpoint Epistemology heavily resembles rationalist-empiricist epistemologies such as Bayesian Epistemology, which also assert that people get their opinions by accumulating experiences that contain partial information.
This is already clearly epistemological, so calling it just a theory of disagreement seems out of place. This also sets up a motte and bailey: once the less standard claims, like "white people need to shut up and listen", get criticized, it will be possible to claim it's only saying that "people get their opinions and beliefs about the world through their experiences", which "heavily resembles rationalist-empiricist epistemologies such as Bayesian Epistemology" - which raises the question of why it needs a special name at all.
One important difference is that whereas rationalists often focus on individual epistemology, such as overcoming biased heuristics or learning to build evidence into theories, Standpoint Epistemology instead focuses on what one can learn from other people’s experiences. (...) As such, Standpoint Epistemology emphasizes that if someone tells you about something that you haven’t had experience with, you should take this as a learning opportunity
So it's a pointer to a useful source of information?? A technique for gathering it? That would be a much simpler and clearer argument to make than trying to cast this as an epistemology and comparing it with Bayesian epistemology.
it can be mathematically proven from the assumptions of Bayesian Epistemology, in a theorem known as Aumann’s Agreement Theorem
Aumann’s Theorem requires two honest Bayesians with the same prior. You definitely can't just assume that about everybody (and how much it applies to rationalists is also debatable). But really, that's irrelevant: you don't need Aumann to make this point - every observation is Bayesian evidence, including what people say[1] - which makes this point trivial, and again raises the question of why it needs to be a special epistemology.
10 participants
When you interview so few people you can easily get a biased sample out of pure randomness. This is the same ol' point of anecdotes vs data. The former lets you go deep, but risks missing the full picture and getting bogged down in noise (like dishonesty and outliers), and the latter lets you go wide and cut through noise, but risks a shallow understanding.
Really, they're best when used together: anecdotes help you learn what to study widely, and data lets you situate anecdotes inside a larger context. So instead of hearing the "experience of a black person" (which is pretty abstract), you can hear the experience of someone in the X percentile of income, Y percentile of educational attainment, living in a city with a Z percentile rate of crime, etc. By using both you can get a truly full picture: e.g., see how many black people (compared absolutely and relative to white people) have how much income, and then see in depth, through interviews, what it's like to be a poor black person, a median-income black person, and a rich black person. Same with racism: you can use data to approximate how many people experience racism, and then go deeper to see what it's like to experience heavy racism, light racism, no racism at all, or even 'reverse' racism (where people treat you better because of your race).
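If it helps to make that concrete, here's a minimal sketch of the "data decides who to interview" idea (the field names and numbers are hypothetical): stratify by the quantitative variable, then sample a few people from each stratum for in-depth interviews.

```python
import random
from collections import defaultdict

def pick_interviewees(survey_rows, stratum_of, per_stratum=3):
    """Use the wide (quantitative) data to define strata, then sample a few
    people from each stratum for in-depth interviews."""
    strata = defaultdict(list)
    for row in survey_rows:
        strata[stratum_of(row)].append(row)
    return {s: random.sample(rows, min(per_stratum, len(rows)))
            for s, rows in strata.items()}

# Hypothetical data: each respondent's income percentile from the survey.
respondents = [{"id": i, "income_pct": random.randint(0, 99)} for i in range(1000)]
by_quartile = pick_interviewees(respondents, lambda r: r["income_pct"] // 25)
print({q: [r["id"] for r in sample] for q, sample in by_quartile.items()})
```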
And I'd like to stress that this is all still trivial inside a Bayesian framework (or even just a "commonsense" framework).
So let's think about how we should respond to hearing one response in the survey. As Bayesians, before taking the anecdote at face value, we should consider what information it actually constitutes. We should consider the sample (black people who use the survey website), and ask how much we should trust the sample (or people in general, if there's no reason to expect a relevant difference) to be honest, have good judgement, have good memory, and be equally likely to report both good and bad experiences. And then we should consider how surprising it is to hear a specific anecdote from someone in the sample. My point isn't to answer any of these questions, but to point out that Bayes doesn't allow you to just take the responses at face value, as standpoint epistemology may instruct you to do. If it does, it's false, and if it doesn't, then it's trivial.
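To illustrate what "not at face value" cashes out to in Bayesian terms (the numbers here are made up purely for illustration): the weight of a report depends on how likely such a report is in worlds where the underlying claim is true versus false.

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule: P(claim | report), given how likely such a report is
    in worlds where the claim is true vs. false."""
    num = prior * p_report_if_true
    return num / (num + (1 - prior) * p_report_if_false)

prior = 0.5
# "Face value": reports only ever happen if the thing really happened.
print(posterior(prior, p_report_if_true=0.9, p_report_if_false=0.0))  # 1.0
# Allowing for sample selection, memory, and reporting asymmetries:
# the report is still evidence, just weaker than face value.
print(posterior(prior, p_report_if_true=0.9, p_report_if_false=0.3))  # 0.75
```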
I asked black people to describe their experiences, but I haven’t allocated time to ask police about their experiences.
This is treating it like a conflict and going to hear the other side before even checking how different the experience of black people is from that of white people (unless you base that off your own experience, which I guess standpoint epistemology would approve of). Not that I disapprove of learning about police officers' experiences, but first make sure you have the experience of the first group situated correctly in the larger context.
To conclude, your post gives an extremely trivial account of "standpoint epistemology" (except the mention of "white people need to shut up and listen"), which makes me disagree with the framing more than the content (I downvoted for the framing, but were it missing I would probably upvote for the content). "Standpoint Epistemology" is either trivial, as it's explained here, in which case the framing is bad and I suggest dropping it, or it's non-trivial, in which case it should be judged on the merits of its non-trivial claims, which, from what I've read and heard, I believe are entirely mistaken (I might explore this further in a followup comment).
And as this is LessWrong, perhaps a good final question is: would you want to code standpoint epistemology into an AGI? Either there's something not contained in a Bayes-like epistemology that you would want it to do, in which case, what is it? Or it's fully contained inside such an epistemology, in which case it's too trivial for the framing.
It can also be evidence against what they're saying, if you believe they're more likely to say that in worlds where it's false.
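In odds form (made-up numbers, just to show the direction of the update): multiply your prior odds by the likelihood ratio of "they said it"; if that ratio is below 1, hearing it should lower your credence.

```python
def update_odds(prior_odds, p_say_if_true, p_say_if_false):
    """Odds-form Bayes: multiply the prior odds by the likelihood ratio of
    'they said it'. A ratio below 1 means the statement is evidence against."""
    return prior_odds * (p_say_if_true / p_say_if_false)

# If you think they'd be more likely to say it when it's false,
# hearing them say it should lower your credence.
print(update_odds(1.0, p_say_if_true=0.4, p_say_if_false=0.6))  # ~0.67 < 1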
Great post! I already saw Common Knowledge as probabilistic, and any description of something real as common knowledge as an implicit approximation of the theoretical version with certainty, but having this post spell it out and give various examples of why it has to be thought of probabilistically is great. "p-common knowledge" seems like the right direction to look for a replacement, but it needs a better name. Perhaps 'Common Belief'.
However, humans will typically fare much better in this game. One reason why this might be is that we lack common knowledge of the equilibrium. We can only guess what the other player might say, and name a slightly smaller number ourselves. This can easily result in both players getting significantly more than $0.
Wouldn't that be an example of agents faring worse with more information / more "rationality"? Which should hint at a mistake in our conception of rationality instead of thinking it's better to have less information / be less rational?
I think coordination failures from lack of common belief (or from the difficulty of establishing it) happen more often than this post suggests. And I think this post correctly shows that they often happen from over-reliance on conditionals instead of commitment. For example:
"Hey, are you going to the bar today?"
"I will if you go."
"Yeah I'll also go if you go."
"So, are you going?"
"I mean, if you're going. Are you going?"
....
I've had conversations like this. Of course, when it's one-on-one in real time it's easy to eventually terminate the chain. But in group conversations, or when texting and there's latency between replies, this use of conditionals can easily prevent common belief from being established (at all, or in time).
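To caricature the structure (a toy sketch of my own, not from the post): two conditional policies have nothing to ground them, while a single unconditional commitment grounds the whole chain.

```python
def alice_goes():
    # "I will if you go."
    return bob_goes()

def bob_goes():
    # "Yeah I'll also go if you go."
    return alice_goes()

# alice_goes()  # RecursionError: two conditionals with nothing to ground them

def alice_commits():
    # "I'm going, period."
    return True

def bob_goes_if_alice_does():
    # Bob's conditional now has something to bottom out on.
    return alice_commits()

print(bob_goes_if_alice_does())  # True - one unconditional commitment grounds the chain
```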
I agree this is a good and important concept. 'Scope matching' is fine, but I do think it can be improved upon. Perhaps 'scope awareness' is slightly better?