"I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future"
Fwiw, I think this is probably true for very few if any of the EAs I've worked with, though that's a biased sample.
I wonder if the thing giving you this vibe might be that they actually think something like "I'm not that confident that my work is net positive for the LTF but my best guess is that it's net positive in expectation. If what I'm doing is not positive, there's no cheap way for me to figure it out, so I am confident (thoug... (read more)
Fyi - this series of posts caused me to get a blood test for nutritional deficiencies, learn that I have insufficient vitamin D and folic acid, and take supplements on a bunch of days that I otherwise would not have (though less often than I should given knowledge of a deficiency). Thanks!
Whoops - thanks!
I haven't kept up with it so can't really vouch for it but Rohin's alignment newsletter should also be on your radar. https://rohinshah.com/alignment-newsletter/
Thanks for this! I found this much more approachable than other writing on this topic, which I've generally had trouble engaging with because it's felt like it's (implicitly or explicitly) claiming that: 1) this mindset is right for ~everyone; and 2) there are ~no tradeoffs (at least in the medium-term) for (almost?) anyone.
Had a few questions:
Your goals and strategies might change, even if your values remain the same.
Have your values in fact remained the same?
For example, as I walked down the self-love path I felt my external obligations start to drop away
Someone's paraphrase of the article: "I actually think they're worse than before, but being mean is bad so I retract that part"
Weyl's response: "I didn’t call it an apology for this reason."
First of all, I think the books are beautiful. This seems like a great project to me and I'm really glad you all put it together.
I didn't think of this on my own but now that Ozzie raised it, I do think it's misleading not to mention (or at least suggest) that this is selecting the best posts from a particular year in a salient way on the cover. This isn't really because anybody cares whether it's from 2018 or 2019. It's because I think most reasonable readers looking at a curated collection of LessWrong posts titled "Epistemology," "Agency," or "Alignm... (read more)
"As far as I can tell, it does not net profits against losses before calculating these fees."
I can confirm this is the case based on the time I lost money on an arbitrage because I assumed the fees were on net profits.
On the documents:
Unfortunately I read them nearly a year ago so my memory's hazy. But (3) goes over most of the main arguments we talked about in the podcast step by step, though it's just slides so you may have similar complaints about the lack of close analysis of the original texts.
(1) is a pretty detailed write up of Ben's thoughts on discontinuities, sudden emergence, and explosive aftermath. To the extent that you were concerned about those bits in particular, I'd guess you'll find what you're looking for there.
Thanks! Agree that it would've been useful to push on that point some more.
I know Ben was writing up some additional parts of his argument at some point but I don't know whether finishing that up is still something he's working on.
The Podcast/Interview format is less well suited for critical text analysis, compared to a formal article or a LessWrong post, for 3 reasons:
Lack of precision. It is a difficult skill to place each qualifier carefully and deliberately when speaking, and at several points I was uncertain if I was parsing Ben's sentences correctly.
Lack of references. The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments that are being criticized.
Expansiveness. There are a lot of arguments presented, and man
Do you still think there's a >80% chance that this was a lab release?
[I'm not an expert.]
My understanding is that SARS-CoV-1 is generally treated as a BSL-3 pathogen or a BSL-2 pathogen (for routine diagnostics and other relatively safe work) and not BSL-4. At the time of the outbreak, SARS-CoV-2 would have been a random animal coronavirus that hadn't yet infected humans, so I'd be surprised if it had more stringent requirements.
Your OP currently states: "a lab studying that class of viruses, of which there is currently only one." If I'm right that you're not currently confident this is th... (read more)
I used to play Innovation online here - dunno if it still works. https://innovation.isotropic.org/
Also looks like you can play here: https://en.boardgamearena.com/gamepanel?game=innovation
Thanks for confirming!
How ill do they have to be? If a contact is feeling under the weather in a nonspecific way and has a cough, is that enough for them to get tested?
Do you feel like you have any insight into whether there's underreporting of mild/minimally symptomatic/asymptomatic cases?
I was able to buy hand sanitizer after going through security at JFK on Sunday but I wouldn't count on that. Fwiw, Purell bottles small enough to take through security seem pretty common.
Seems possible but I don't really understand where China's claims about asymptomatic cases are coming from so I've been hesitant about putting too much weight on them. Copying some thoughts on this over from a FB comment I wrote (apologies that some of it doesn't make total sense w/o context).
tl;dr I'm pretty unsure whether China actually has so few minimally symptomatic/asymptomatic cases.
Those 320,000 people were at fever clinics, so I think none of them should be asymptomatic.
The report does say "Asymptomatic infection h
Some more suggestive evidence that Singapore might not be testing asymptomatic/minimally symptomatic people:
The COVID-19 swab test kit deployed at [travel] checkpoints allows us to test beyond persons who are referred to hospitals, and extend testing to lower-risk symptomatic travellers as an added precautionary measure. This additional testing capability deployed upfront at checkpoints further increases our likelihood of detecting imported cases at the point of entry. As with any test, a negative result does not completely rule out the possibility of infe
Yes, as part of a team on standby briefed on contact tracing protocol in Singapore, I can confirm we only call and inform potential contacts. They are not tested unless ill.
Connecting these dots, along with the fact that Singapore has been doing extremely aggressive contact tracing and has been successful enough to almost stop the spread, I think Singapore can't have many uncounted mild or asymptomatic cases, and their severely ill rate is still 10% to 20%.
Do you have a citation for the claim that Singapore can't have many mild or asymptomatic cases? The article you cite says:
Close contacts are identified and those individuals without symptoms are quarantined for 14 days from last exposure. As of February 19, a total
[I'm not a lawyer and it's been a long time since law school. Also apologies for length]
Sorry - I was unclear. All I meant was that civil cases don't require *criminal intent.* You're right that they'll both usually have some intent component, which will vary by the claim and the jurisdiction (which makes it hard to give a simple answer).
tl;dr: It's complicated. Often reckless disregard for the truth or deliberate ignorance is enough to make a fraud case. Sometimes a "negligent misrepresentation" is enough for a ci... (read more)
Thanks! Forgot about that post.
I'm not sure I understand what you mean by "something to protect." Can you give an example?
[Answered by habryka]
[Possibly digging a bit too far into the specifics so no worries if you'd rather bow out.]
Do you think these confusions are fairly evenly dispersed throughout the community (besides what you already mentioned: "People semi-frequently have them at the beginning and then get over them.")?
Two casual observations: (A) the confusions seem less common among people working full-time at EA/Rationalist/x-risk/longtermist organisations than in other people who "take singularity scenarios seriously." (B) I'm very uncertain but they ... (read more)
My closest current stab is that we’re the “Center for Bridging between Common Sense and Singularity Scenarios.”
[I realise there might not be precise answers to a lot of these but would still be interested in a quick take on any of them if anybody has one.]
Within CFAR, how much consensus is there on this vision? How stable/likely to change do you think it is? How long has this been the vision for (alternatively, how long have you been playing with this vision for)? Is it possible to describe what the most recent previous vision was?
This seemed really useful. I suspect you're planning to write up something like this at some point down the line but wanted to suggest posting this somewhere more prominent in the meantime (otoh, idea inoculation, etc.)
The need to coordinate in this way holds just as much for consequentialists or anyone else.
I have a strong heuristic that I should slow down and throw a major warning flag if I am doing (or recommending that someone else do) something I believe would be unethical if done by someone not aiming to contribute to a super high impact project. I (weakly) believe more people should use this heuristic.
Thanks for writing this up. Added a few things to my reading list and generally just found it inspiring.
Things like PJ Eby's excellent ebook.
FYI - this link goes to an empty shopping cart. Which of his books did you mean to refer to?
The best links I could find quickly were:
I think I also damaged something psychologically, which took 6 months to repair.
I've been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided I'd be interested.
I expect, though, that this is too sensitive/personal so please feel free to ignore.
Note that criminal intent is *not* required for a civil fraud suit which could be brought simultaneously with or after a criminal proceeding.
"For example, we spent a bunch of time circling for a while"
Does this imply that CFAR now spends substantially less time circling? If so and there's anything interesting to say about why, I'd be curious.
CFAR does spend substantially less time circling now than it did a couple years ago, yeah. I think this is partly because Pete (who spent time learning about circling when he was younger, and hence found it especially easy to notice the lack of circling-type skill among rationalists, much as I spent time learning about philosophy when I was younger and hence found it especially easy to notice the lack of philosophy-type skill among AIRCS participants) left, and partly I think because many staff felt like their marginal skill returns from circling practice were decreasing, so they started focusing more on other things.
This doesn't look to me like an argument that there is so much funging between EA Funds and GiveWell recommended charities that it's odd to spend attention distinguishing between them? For people with some common sets of values (e.g. long-termist, placing lots of weight on the well-being of animals) it doesn't seem like there's a decision-relevant amount of funging between GiveWell recommendations and the EA Fund they would choose. Do we disagree about that?
I guess I interpreted Rob's statement that "the EA Funds are usually... (read more)
Here's a potentially more specific way to get at what I mean.
Let's say that somebody has long-termist values and believes that the orgs supported by the Long Term Future EA Fund in expectation have a much better impact on the long-term future than GW recommended charities. In particular, let's say she believes that (absent funging) giving $1 to GW recommended charities would be as valuable as giving $100 to the EA Long Term Future Fund.
You're saying that she should reduce her estimate because Open Phil may change its strategy or the... (read more)
I see you as arguing that GW/Open Phil might change its strategic outlook in the future and that their disclosures aren't high precision so we can't rule out that (at some point in the future or even today) giving to GW recommended charities could lead Open Phil to give more to orgs like those in the EA Funds.
That doesn't strike me as sufficient to argue that GW recommended charities funge so heavily against EA funds that it's "odd to spend attention distinguishing them, vs spending effort distinguishing substantially different strategies."
What's the reason to think EA Funds (other than the global health and development one) currently funge heavily with GiveWell recommended charities? My guess would have been that increased donations to GiveWell's recommended charities would not cause many other donors (including Open Phil or Good Ventures) to give instead to orgs like those supported by the Long-Term Future, EA Community, or Animal Welfare EA Funds.
In particular, to me this seems in tension with Open Phil's last public writing on its current thinking about how ... (read more)