Doesn't that mean we should expect that Bayesians often disagree and they have no way to resolve it except consulting reality (i.e., an experiment)?
Short answer: Yes.
Longer answer: Two Bayesians who start out with the same prior probabilities, and see the same evidence, should update their posterior probabilities in the same way, and so their mental models should stay consistent with each other. Two Bayesians who start out with different prior probabilities, but see the same evidence, should update their posterior probabilities in ways that are predictable...
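A minimal sketch of that convergence, using a toy coin-bias model with made-up priors and data (the Beta-pseudo-count framing is my illustration, not something from the comment):

```python
# Two Bayesians with different Beta priors on a coin's bias.
# prior = (alpha, beta) pseudo-counts; seeing the same evidence
# moves both posteriors toward each other in a predictable way.
def update(prior, heads, tails):
    a, b = prior
    return (a + heads, b + tails)

def mean(ab):
    a, b = ab
    return a / (a + b)

optimist = (8, 2)   # prior mean 0.8
skeptic  = (2, 8)   # prior mean 0.2

data = (60, 40)     # same evidence for both: 60 heads, 40 tails

post_o = update(optimist, *data)   # (68, 42), mean ~0.618
post_s = update(skeptic, *data)    # (62, 48), mean ~0.564

gap_before = abs(mean(optimist) - mean(skeptic))   # 0.6
gap_after = abs(mean(post_o) - mean(post_s))       # ~0.055
assert gap_after < gap_before
```

The priors still matter after the update, but shared evidence shrinks the disagreement; with enough data the gap becomes negligible.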
There isn't a set breakpoint that separates weak from strong priors; it's a continuum from "it seems extremely unlikely that everything I know about the world is false, but it's not technically impossible" to "it seems extremely likely that I'm typing on a keyboard right now, but there's a tiny possibility that something else is going on, like a hallucination or me being a brain in a vat or some other possibility I haven't thought of".
Bayesianism says that if you have strong priors about a particular matter, you should be surprised with corresponding stren...
[Unsure] The probability of GR being true is independent of whether the Bayesian knows about it or not[1]
Keep in mind that these "probabilities" are subjective assessments of probability based on an individual's prior knowledge, not facts about reality. Two Bayesians with different prior experience may disagree about how probable something is (/seems to them), but reality will not disagree or debate with itself about the truth of the matter, or assign probability to different possibilities (mumble mumble I don't really understand quantum mechanics and am p...
If you have weak/uncertain priors, the thing to do is run low-cost experiments that differentiate between your different hypotheses of what's going on.
A really cheap experiment in relation to the question "is this a scam?" is to Google to see if others have received similar letters and what their outcomes were. If it's a scam, you're likely to surface evidence of this; if it's not a scam, you're likely to surface both people saying it went well for them and debunking sites that explain what's going on, where the letters came from, etc. If you get no informa...
I'm not an expert Bayesian, and it's not part of my identity, so I don't feel the need to defend it by going "here's why I wouldn't get scammed". But I know how I would answer from a "modify your expectations in light of new evidence" lens, which I understand to be the core of Bayesianism put into plain English.
The key thing is, what are your priors?
If you were a very naive Bayesian reasoner, say a five-year-old of average intelligence, and your experience was extremely sheltered, skewed towards a very kind world where everyone was always nice to you and yo...
Some pushback seems warranted, so I upvoted and agree-voted. On the other hand, if you're giving to a registered charity anyway, you can get tax credits, which means you can give more for the same net cost to you.
In Canada, you can get a 50% tax credit on donations to registered charities (meaning you basically get half back and can choose to donate twice as much for the same after-tax cost, up to 75% of your annual income), and RC Forward lets you donate to GiveWell-recommended charities (they forward on to various EA charities that aren't registered in C...
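The arithmetic behind "give more for the same net cost" can be sketched like this (the flat 50% rate comes from the comment above; actual Canadian credit rates vary by province, income, and donation amount):

```python
# Toy donation-tax-credit arithmetic, assuming a flat 50% credit rate.
def after_tax_cost(donation, credit_rate=0.50):
    """Net cost to the donor once the credit is refunded."""
    return donation * (1 - credit_rate)

def max_donation_for_net_cost(net_cost, credit_rate=0.50):
    """Largest donation whose after-tax cost equals net_cost."""
    return net_cost / (1 - credit_rate)

assert after_tax_cost(1000) == 500.0            # $1000 donated costs $500 net
assert max_donation_for_net_cost(500) == 1000.0  # $500 of net cost funds $1000
```

So at a 50% rate, the same out-of-pocket amount funds exactly double the donation, which is the comment's point.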
You seem to be describing a situation where there is a temporary absence of sufficient funds for a UBI (the "big gap in the middle") after which there's plenty of money to fund the UBI, potentially at a higher level than people's original income.
The generic solution for a temporary lack of necessary funds with lots of funds being available in the future is getting a loan to be paid off when the money comes in. This consumption-smoothing would be good from the perspective of the AI companies as well, as "everyone is out of work and has no money to spend", i...
Ok, now you've gone on to "modern culture is worse than earlier culture". I don't feel like I have a good handle on the culture to which you refer, so I can't really comment, in the sense of going "you think modern culture is like X, but I think modern culture is like Y, let's discuss". You seem quite sure of your opinions, but I don't know what your evidence is.
I will disagree with this, though:
The reason we feel less shame is because we removed weight from sexual relationships.
When I spoke about people feeling less shame around having experienced sexual ...
Indeed, but that does not make such regulations objectively good to me. And try asking older people who did things which are now considered bad or unsafe whether they regret their actions, or whether they're happy that one cannot have their experiences and memories anymore. The overwhelming majority of people I've spoken with prefer the past (random example: many have fond memories of playing multiplayer games back when harsh insults were a core part of the experience, and find themselves repulsed by modern over-regulation).
I think you'd get different answers from o...
...I find it hard to disagree with anything you wrote, and yet, technological advancements do not really seem to improve life for people. We've gotten quite efficient at making food, but it doesn't seem to be getting cheaper over time. I don't think car prices or phone bills are decreasing meaningfully either. Bus and train fares have been steadily increasing over time. The internet makes it possible to communicate with people far away, which I value a lot, but it does not seem to have improved socialization in general.
The pattern of "benefits erase themsel
...Those who make technology worth 4000$ are not going to sell it at 2000$, making everyone better off. The gap between the value we find in things and the price we pay for them will be exploited by companies until it almost disappears. They will sell it at 3950$, or sell it at 3500$ and fill it with advertisements (whatever makes it just barely worth it for the buyer). For as long as it's a good deal, there's more value to extract! And once 95% of the value has been extracted, the new technology only benefits everyone 5% of what it could have. (This patt
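The surplus-extraction claim above can be stated as a two-line calculation (the 4000$/3950$ figures are the comment's own illustrative numbers, not data):

```python
# Consumer surplus under near-complete price extraction (toy numbers).
value_to_buyer = 4000   # what the technology is worth to you
price = 3950            # what the seller ends up charging

surplus = value_to_buyer - price        # 50: the sliver the buyer keeps
extracted_share = price / value_to_buyer  # fraction of the value captured

assert surplus == 50
assert extracted_share > 0.98
```

On this model, the buyer's benefit from the new technology is the surplus, not the full value, which is why near-complete extraction leaves almost no felt improvement.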
Generalizing, this looks like the gambler's ruin (even positive EV bets can be bad bets, if the losses would be unrecoverable - "quadruple or nothing at 50%, but you're betting all you have" predictably ends with you having nothing if you keep playing long enough). Except not with units of money, but units of motivation, or feeling like a good person.
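A quick simulation of the "quadruple or nothing" bet makes the ruin dynamic concrete (the specific round counts and trial counts here are arbitrary choices for illustration):

```python
import random

# "Quadruple or nothing at 50%, betting everything you have":
# each bet has positive EV (0.5 * 4x = 2x the stake), but a single
# loss is unrecoverable, so repeated play ends in ruin almost surely.
random.seed(0)

def play(rounds, bankroll=1.0):
    for _ in range(rounds):
        if random.random() < 0.5:
            bankroll *= 4
        else:
            return 0.0  # lost everything; no way to keep playing
    return bankroll

trials = 10_000
ruined = sum(play(rounds=20) == 0.0 for _ in range(trials))
# Probability of surviving 20 rounds is 0.5**20 (about one in a million),
# so essentially every trial ends with nothing despite the positive EV.
assert ruined / trials > 0.99
```

The EV stays positive only because an astronomically unlikely survivor holds an astronomically large bankroll; every realistic trajectory goes to zero, which is the gambler's-ruin point.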
If the bet is "amazingly good impact or ruinously bad impact", you probably shouldn't take that bet unless you're pretty certain it's not going to turn out ruinously bad. And more generally, you shouldn't take...
Retracted, because after conversation on Ben's blog, it's not a matter of thinking marginal costs will go down with scale - rather, he treats "funding gap" more strictly than I would, something like "the amount of money charities can absorb and deploy at approximately present marginal costs per life saved". And if you define the funding gap as "the amount of money that can be deployed at approximately present marginal costs, or better", then "assume marginal cost does not go down at all" is indeed the most generous assumption.
Thanks. Side note (I posted another comment about this just now, because it just clicked for me this morning): I think Ben Hoffman thinks (or thought, when he wrote his blog posts) that when you treat more malaria cases or do other philanthropy, marginal cost goes down. He says:
If we assume that all of this is treatable at current cost per life saved numbers - the most generous possible assumption for the claim that there's a funding gap
When in fact it's the least generous, under the assumption that marginal cost goes up. If you think marginal costs w...
I think I've found a crux that makes things make sense today, that didn't make sense to me yesterday as I was reading the first linked blog post. When trying to think about the existence or nonexistence of a funding gap, Ben says:
If we assume that all of this is treatable at current cost per life saved numbers - the most generous possible assumption for the claim that there's a funding gap
And my brain skipped a beat, and went "no, the opposite, that's the least generous possible assumption. As we treat more, the next treatment becomes more expen...
An analogy:
We are in the "you can save a drowning child for an affordable price" world. In this world (or a hypothetical one for the purpose of this analogy), 1,000 infants are being dumped in a large lake per day. Some of them are right by the shore, easy to get to like the drowning child thought experiment postulates, some are out in deeper water. I'm a strong swimmer, and could save any of those infants, but I can't save all of them by myself, and if I try to save as many as I can today, I will exhaust and potentially injure myself, meaning I can save f...
It seems like 3 things are simultaneously true:
1) It's not possible to eradicate malaria for $5,000/life saved (that's the marginal cost, approximately, ballpark estimate with lots of wiggle room in the number). This generalizes to all other currently known interventions to make people's lives better at low cost - it's relatively cheap now, but one should expect that saving the last life that would otherwise have died from malaria, or helping the last person who can be helped with some other intervention that is currently near the best marginal cost, will ...
If most failures of rationality are adaptively self-serving motivated reasoning
I would say that most failures of rationality were adaptive in the ancestral environments, but I wouldn't say they all count as "motivated reasoning".
Simple example: Seeing a snake in the grass, and responding as if there is a snake in the grass, in the presence of ambiguous stimuli that have only a 10% chance of being a snake, could well result in more surviving offspring than a more nuanced, likely slower, and closer-to-correct estimation of the probability there is a snake. B...
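The asymmetry in the snake example can be shown with a toy expected-cost comparison (the payoff numbers here are made up for illustration; only the 10% figure is from the comment):

```python
# Why "always treat it as a snake" can beat the calibrated response:
# fleeing is cheap, being bitten is very costly, so the jumpy policy
# wins even though it's wrong 90% of the time.
p_snake = 0.10
cost_fleeing = 1      # small cost of startling and moving away
cost_bitten = 1000    # large fitness cost of ignoring a real snake

ev_always_flee = cost_fleeing          # pay the small cost every time: 1
ev_never_flee = p_snake * cost_bitten  # 0.10 * 1000 = 100

assert ev_always_flee < ev_never_flee
```

Under these (assumed) costs, the hair-trigger response is the adaptive one without involving any motivated reasoning, which is the distinction the comment is drawing.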
The key intuition about the future might be simply that humans being around is an incredibly weird state of affairs. We shouldn't expect it to continue by default.
I mean, yes this seems right. In which case, taking it as a premise that this weird state doesn't last long, it follows that there's no point trying to plan for a future where human-like things continue to exist. BUT: from where we stand right now, we do actually have some control over whether everybody dies and nothing human-like continues into the future. The simplest plan to avoid extinc...
Something else that is relevant to real-life Bayesians occurs to me. "Strictly adhering to Bayesian epistemology" is doing some work here. And in real life, if my reasoning or math leads me off a cliff/to some absurd conclusion, I have to put some weight on the possibility I've made an error somewhere, which I ha...