
Yes, that is true as well.

My point was that our cultural instinct is to give, but in practice this giving is done inefficiently [charities are wasteful, people give not to optimize utility but to charities they happen to like, and a flat percentage is probably worse than a progressive tax], so it would probably be better for society if we didn't expect charity from people - this seemingly beneficial cultural obligation can be argued to be harmful.

I like this approach.

It makes sense, and it mostly dodges the problem that other "simple" formulae for charity have - namely that most simple systems tend to be essentially voluntary regressive taxation.

This is why the 10% rule has always bugged me - it is a culturally accepted voluntary regressive tax, and as such it exacerbates social inequality.

[Also, one of my friends likes to joke that our culture holds that you give 10% of your income to charity, but capital gains are exempt...]
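The "regressive" claim above can be made concrete with a toy calculation. The numbers below are purely hypothetical (the subsistence figure and incomes are my own illustrative assumptions, not anything from the comments): a flat 10% of gross income is a much larger share of *disposable* income (income above fixed living costs) for a poor donor than for a rich one.

```python
# Toy sketch (hypothetical numbers): measure a flat-rate donation against
# disposable income, i.e. income above a fixed subsistence cost. The flat
# rate takes a larger bite from lower earners - the sense in which a flat
# percentage behaves like a regressive tax.

SUBSISTENCE = 15_000  # assumed fixed cost of living; purely illustrative
RATE = 0.10           # the culturally expected flat donation rate

def donation_burden(income):
    """Fraction of disposable income consumed by a flat-rate donation."""
    donation = RATE * income
    disposable = income - SUBSISTENCE
    return donation / disposable

for income in (20_000, 50_000, 200_000):
    print(f"income {income:>7}: {donation_burden(income):.1%} of disposable income")
```

Under these assumptions the burden falls from 40% of disposable income at $20k to about 11% at $200k, even though everyone gives the same nominal fraction.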

I'm always on the lookout for things that seem innocuous or even beneficial but are actually ways of enforcing the social structure and preventing upward mobility, like our strange insistence on prescriptive rules of language and on the necessity of "sounding intelligent".

Languages are evolved social constructs, and "correct grammar" is determined by native speakers. However, we impose additional rules that stray from the natural form of the language, and develop a notion that certain ways of speaking/writing are proper, and that other ways are ignorant. Learning to speak in a way that sounds intelligent requires an additional investment of time and effort, and those who cannot afford it (can't afford to spend as much time reading, or come from areas with worse schools) will grow up speaking a completely intelligible version of the language, but one that is generally recognized as a marker of ignorance, and thus limits their possibilities for advancement.

Ok, I really got off topic there, but my point was that our cultural construct that people should give a fixed percentage of their income to charity might very well not be a force for good, but rather a force opposing good.

It is a regressive taxation system, but one that is culturally supported. Further, because so many people feel that everyone is already voluntarily giving to charity (especially through religious organizations), actual taxation strikes them as an unnecessary imposition.

If we didn't have a culturally accepted obligation for charity, we wouldn't give as much money to inefficient charities and religious institutions, and might be more willing to consent to a higher progressive tax.

I'm not prepared to make that bet.

I don't suspect the bias would vanish, but rather be diminished.

asking people who they voted for < asking who they predict will win < asking who they would bet money on to win, where '<' indicates increasing predictive accuracy.

This is exactly what I was saying.

I didn't mean to imply I thought it was, though I see how that wasn't clear.

I didn't intend that last bracketed part to be an example, but rather a related phenomenon - it is interesting to me how asking a random sample of people who they voted for is a worse predictor than asking a random sample of people who they would predict got the most votes, and that this accuracy further improves when people are asked to stake money on their predictions.

I simply was pointing out that certain biases might be significantly more visible when there is no real incentive to be right.

For instance, one supplemental explanation for the False Consensus Effect (its being one effect doesn't mean it has only one cause) that I have heard is that in most cases it is a "free" way of obtaining comfort.

If presented with an opportunity to believe that other people are like you, with no penalty for being wrong, one could expect people to err on the side of predicting behavior consistent with their own.

I obviously haven't done this experiment, but I suspect that if the subjects asked to wear the sign were offered a cash incentive based on their accuracy of prediction for others, both groups would make a more accurate prediction.

[See also - political predictions are more accurate when the masses are asked to make monetary bets on the winner of the election, rather than simply indicate who they would vote for]

It sounds like you might be looking for something like The Onion Router (Tor).

For X to be able to model the decisions of Y with 100% accuracy, wouldn't X require a more sophisticated model?

If so, why would supposedly symmetrical models retain this symmetry?

I actually acknowledge that deeper in the thread [in my response to PECOS-9], noting that this is the publicly understood complement, despite being wrong: society teaches that the primary colors are Red, Yellow, and Blue rather than Magenta, Yellow, and Cyan.

Fair enough.

I must admit, this makes my theory less likely. I still don't see your reading as the unambiguously correct interpretation, but I will freely cede that it looks plausible that it is an interrupt, not an elaboration. This may, in part, stem from the fact that I am a big proponent of using "-" in my writing, and my usage is somewhat nonstandard.

Even if that is right, I don't think it rules out my guess about Quirrell's plan, but again, I'm significantly less confident now.
