
Correction, my apologies. Apparently 30 respondents were snowballed from 17 others. We'll look into how this affects the results.

There were six of 168 respondents who came via people some of the authors already knew or via snowballing. I don't think this would have much effect on the overall picture of the results.

It was an anonymous poll, so this would not have an effect.

This is a misunderstanding, as I discuss in my comment below. The respondents were selected from a list according to the criteria set out. This doesn't mean the authors emailed their friends.

The personalised outreach mentioned just means that the respondents were initially sent a stock email and, when they didn't respond, a more personalised message. It doesn't mean that the surveyors emailed their friends. The survey was based on mass outreach using a list from professional societies.

Snowballing contacts does introduce a risk of bias, but that is mitigated by the disciplinary and geographic spread in the target sample. Respondents in non-developed countries gave a higher chance of zoonosis, so the prospect that the survey was biased because it was sent to, e.g., Kristian Andersen, who then recommended people he knew favoured his opinion, seems low.

It is true that the survey showed low familiarity with the relevant literature. First, this is an interesting finding in itself. Second, in many expert polls the field experts may not have read much of the literature on some specific question. This is likely true of the IGM poll of economists, for example, which is nevertheless useful.

Competing claims have been made about what virologists in general actually think about this topic. We now have some information on this.

Hello, 

  1. It seems I misunderstood, sorry.
  2. My point in raising the philosophy literature was that you seemed to be professing anger at the idea that subjective experience is all that matters morally - it drives you 'bonkers' and is 'nuts'. I was saying that people with a lot more expertise than you in philosophy think it is plausible and you haven't presented any arguments, so I don't think it should drive you bonkers. I think the default position would be to update a bit towards that view and to not think it is bonkers. 
    1. Similarly, if I wrote a piece saying that a particular reasonably popular view on quantitative trading was bonkers, I might reasonably get called out by someone (like you) who has more expertise than me in it. I also don't think me saying "this is why I never have online discussions with people who have expertise in quantitative trading" should reassure you. Your response confirms my sense that much of the piece was not meant to persuade but more to use your own reputation to decrease the status of various opinions among your readers without offering any arguments.
    2. In the passage I quote, you also make a couple of inconsistent statements. You say "Yes, suffering is bad. It is the way we indicate to ourselves that things are bad. It sucks. Preventing it is a good idea". Then you say "Also you get people worried about wild animal or electron suffering". If you think suffering is bad, then why would you not think wild animals can suffer? Or if you do think that they can suffer, then aren't you committed to the view that preventing wild animal suffering is good? Same for digital suffering. I think mistakes like this should lead to some humility about labelling reasonably popular philosophical views as nuts without argument. 
    3. I also don't understand why you think the view that subjective wellbeing is all that matters implies you get FTX. FTX seems to have stemmed from naive consequentialism, which is distinct from the view that subjective experience is all that matters. Indeed, FTX was ex ante and ex post very, very bad from the point of view of a worldview which says that subjective experience is all that matters (hedonistic total utilitarianism). This dynamic has happened repeatedly in various different places since the fall of FTX. 'Here is some idiosyncratic feature f of FTX, FTX is bad, therefore f is bad' is not a good argument but keeps coming up; cf. arguments that FTX wasn't focused on progress, wasn't democratic, they believed in the total view, they think preventing existential risks is a good idea, etc. Again, I also don't see an argument for why you think this; you just assert it.
  3. Can you say more about how they could have left in a more cooperative fashion? My default take would be that, as long as you give notice, from the point of view of common sense morality there is nothing wrong with leaving a company. In the case of Jane Street, since the social benefits of the company are small, I think most people would judge that a dozen people leaving at once just doesn't matter at all. It might be different if it were a refugee charity or something. Is there some detail about how they left that I am missing?
    1. Why do you think they weren't honest?
    2. This passage: "It was the effective altruists who were the greedy ones, who were convinced they could make more money outside the firm, and that they had a moral obligation to do so. You know, for the common good" strongly reads as saying the EAs who left Jane Street for Alameda did so out of greed. If you didn't intend this, I would suggest editing the main text to clarify that when you said "it was the effective altruists who were the greedy ones" you meant they were actually doing it for the common good. Many of the people who read this forum will know many of those who left to join Alameda, so if you are unintentionally calling them greedy, dishonest and disloyal, that could be quite bad. Unless you intend to do that, in which case fair enough.

Thanks for taking the time to do this. I'm not really a fan of the way you approach writing up your thoughts here. The post seems high on snark, rhetoric and bare assertion, and low on clarity, reasoning transparency, and quality of reasoning. The piece feels like you are leaning on your reputation to make something like a political speech, which will get you credit among certain groups, rather than a reasoned argument designed to persuade anyone who doesn't already like you. For example, you say:

But at least the crazy kids are trying. At all. They get to be wrong, where most others are not even wrong.

Also, future children in another galaxy? Try our own children, here and now. People get fooled into thinking that ‘long term’ means some distant future. And yes, in some important senses, most of the potential value of humanity lies in its distant future.

But the dangers we aim to prevent, the benefits we hope to accrue? They are not some distant dream of a million years from now. They are for people alive today. You, yes you, and your loved ones and friends and if you have them children, are at risk of dying from AI or from a pandemic. Nor are these risks so improbable that one needs to cite future generations for them to be worthy causes.

I fight the possibility of AI killing everyone, not (only or even primarily) because of a long, long time from now in a galaxy far, far away. I fight so I and everyone else will have grandchildren, and so that those grandchildren will live. Here and now.

As I understand it, this is meant to be a critique of longtermism. But the claims you have made here just assert that longtermism is not true, without argument, which is what pretty much every journalist does now that journalists have turned against EA. EA philosophers are field leaders in population ethics and have published papers on it in leading journals; you can't just dismiss the view by saying things which look inconsistent on their face, such as: "Try our own children, here and now. People get fooled into thinking that ‘long term’ means some distant future. And yes, in some important senses, most of the potential value of humanity lies in its distant future. But the dangers we aim to prevent, the benefits we hope to accrue? They are not some distant dream of a million years from now." In what sense is the potential value of humanity in the future if the benefits are not in the future?

Similarly, on whether personhood intrinsically matters, you say:

"This attitude drives me bonkers. Yes, suffering is bad. It is the way we indicate to ourselves that things are bad. It sucks. Preventing it is a good idea. But when you think that suffering is the thing that matters, you confuse the map for the territory, the measure for the man, the math with reality. Combine that with all the other EA beliefs, set this as a maximalist goal, and you get… well, among other things, you get FTX. Also you get people worried about wild animal or electron suffering and who need hacks put in to not actively want to wipe out humanity.

If you do not love life, and you do not love people, or anything or anyone within the world, and instead wholly rely on a proxy metric? If you do not have Something to Protect? Oh no."

Again, you are just asserting here, without argument and with a lot of rhetoric, that personhood matters independently of subjective experience. I don't see why you think this would convince anyone. A lot of EAs I know have actually read the philosophical literature on personal identity, and your claims seem highly non-serious by comparison.

On Alameda, you say:

"It was the flood of effective altruists out of the firm that was worrisome. It was the effective altruists who were the greedy ones, who were convinced they could make more money outside the firm, and that they had a moral obligation to do so. You know, for the common good. They proved themselves neither honest nor loyal. Neither was ‘part of their utility function.’

I agree that setting up Alameda was a very bad idea for lots of reasons. However, you claim here that the people who joined Alameda, aside from Sam, weren't actually doing it for the common good. To my knowledge, this is false - they did honestly believe they were doing it for the common good and were going to give the money away. Do you have evidence that they didn't actually donate the money they made?

When you say they proved that they were not loyal, are you saying they should have been loyal to SBF, or that they should have been loyal to Jane Street? Both claims seem false. Even if they should have stayed at Jane Street, loyalty is not a good reason to do so, and they shouldn't have been loyal to SBF because he was a psychopath. 

These general points aside, I agree that management of bad actors and emphasis on rule following are extremely important and should receive much more emphasis than they do. 

As mentioned in my other comment, unless the people you are visiting are hiding at home all the time, you are not going to have much effect on the chance they get covid over any six-month period; you might just bring it forward a bit in time. But if they are living a relatively normal life, e.g. going to shops (as I think they should), then it's not going to make much difference, since covid has been let rip in the US.

Re (1), I think you could do that with lateral flow testing rather than taking a vaccine that may be net harmful to your health. The false negative rate of an LFT is much lower than the protection against transmissible infection you would get at any point after having the vaccine.

I meant to say under 40. Given that the ratio of severe adverse events for people in their 20s and 30s is >18.5:1, I would also expect it to be bad to get the vaccine at age 30-40, since the health risks of covid in that age group are extremely low.

I don't think that makes much difference, because I don't think it has much effect on the total number of infections - you would really just be changing the time at which someone gets the virus, given that we're not trying to contain it anymore.

One way round the concern about visiting the retirement home would be to do a lateral flow test before you go in. If you're seeing extremely vulnerable people a lot, then it might be worth getting the vaccine. But the IFR is now lower than that of the flu for all ages, and I think it should be treated accordingly.
