All of Howie Lempel's Comments + Replies

"I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future"

Fwiw, I think this is probably true for very few if any of the EAs I've worked with, though that's a biased sample.

I wonder if the thing giving you this vibe might be that they actually think something like "I'm not that confident that my work is net positive for the LTF but my best guess is that it's net positive in expectation. If what I'm doing is not positive, there's no cheap way for me to figure it out, so I am confident (thoug... (read more)

Fyi - this series of posts caused me to get a blood test for nutritional deficiencies, learn that I have insufficient vitamin D and folic acid, and take supplements on a bunch of days that I otherwise would not have (though less often than I should given knowledge of a deficiency). Thanks!

I haven't kept up with it so can't really vouch for it but Rohin's alignment newsletter should also be on your radar. https://rohinshah.com/alignment-newsletter/

[This comment is no longer endorsed by its author]
2 · Alan E Dunne · 7mo
This seems to have stopped in July 2022.

Thanks for this! I found this much more approachable than other writing on this topic, which I've generally had trouble engaging with because it's felt like it's (implicitly or explicitly) claiming that: 1) this mindset is right for ~everyone; and 2) there are ~no tradeoffs (at least in the medium-term) for (almost?) anyone.

Had a few questions:

Your goals and strategies might change, even if your values remain the same.

Have your values in fact remained the same?

For example, as I walked down the self-love path I felt my external obligations start to drop awa

... (read more)
8 · Charlie Rogers-Smith · 2y
Hey Howie, thanks for writing this! I'm really happy that you found it more approachable, and I think your questions are awesome! One thing I wanna say off the bat is that this is all quite new to me. I don't have much data yet, and I'm not stable--things seem to be improving at an increasing rate right now.

Yeah, my guess is that that's not true, but I'm uncertain. I think you need some risk tolerance, and people in a great place might (rightly) not have that. Idk. When I look back, it seems like a lot of what drove me was desperation. OTOH, I think if the stuff succeeds then it can be really awesome, and maybe there's an impact argument there based on heavy-tail impact stuff. That argument has definitely influenced my actions.

This one I'm confident is not true haha, depending on how you define medium-term. This post talks about this a bit.

It seems like I used to "care" about only one thing: saving the world. I use quotes because at some point I really did care, and then quite quickly lost touch with that and was left with obligations, and because I shut down all the parts of me that didn't care about saving the world. I viewed myself almost entirely instrumentally. The biggest shift in values is that I care about myself a lot more than I ever have. But it feels more like a framework shift than a values shift, because I was an obligation machine, and now everything is internal. I'm now much more operating from the position of 'what does Charlie want to do?' and then doing that. And very often the thing I really want to do is try to save the world. That said, if I want to do other things, I'll do those instead. Another way of putting this is that saving the world used to be at the centre of my universe, and now my wants are at the centre of my universe. Idk, maybe this doesn't answer your question.

Yeah, they deffo exist. They're substantially less strong. It seems like there's a part of my brain that still turns 'oh this would be so cool to do' into an oblig

Someone's paraphrase of the article: "I actually think they're worse than before, but being mean is bad so I retract that part"

 

Weyl's response: "I didn’t call it an apology for this reason."

https://twitter.com/glenweyl/status/1446337463442575361

First of all, I think the books are beautiful. This seems like a great project to me and I'm really glad you all put it together.

I didn't think of this on my own but now that Ozzie raised it, I do think it's misleading not to mention (or at least suggest) that this is selecting the best posts from a particular year in a salient way on the cover.[1] This isn't really because anybody cares whether it's from 2018 or 2019. It's because I think most reasonable readers looking at a curated collection of LessWrong posts titled "Epistemology," "Agency," or "Alignm... (read more)

8 · Ben Pace · 3y
Consider this quote from Robin Hanson: The post was in Feb 2017, which meant he had to wait another 10 months for it to come out. Overall that means the book came out at least 2 years and 3 months after he began writing it, and 1 year and 6 months after it was finalized and finished. I don't know if Oxford University Press is always this slow. But I don't think that if someone read the book then heard about it, they'd feel especially upset to find out it didn't represent Robin's ideas on date-of-publishing but in fact was 1.5 years out-of-date. The essays in the book were the best new essays on LW at the time we decided to make it into a book, which is 2 years ago, so we're a little slower than OUP (in large part because we have a self-imposed 1-year break in the middle), and I think next time I'll just do the whole thing quicker (now that we've learned how to use all the software, how to interface with the printers, how to interface with Amazon, how to interface with editors, etc).
7 · habryka · 3y
I tried actually pretty hard to fit it on the front cover somewhere, but it was actually quite hard design wise (the way I phrased the design challenge is that if you have 5 books, each book can only kind of be 1/5th as complex as a normal book cover). It does say it in the first sentence on the back, and I think we tried to mention 2018 almost everywhere in the first sentence we promote the book, and also “new essays from LessWrong” a lot, so that people don’t get confused about it having really old content.  My current guess is that the right place to emphasize the 2018 year is in all the marketing materials and communication, as well as the book cover, and not super prominently on the front cover itself.

"As far as I can tell, it does not net profits against losses before calculating these fees."
 

I can confirm this is the case based on the time I lost money on an arbitrage because I assumed the fees were on net profits.

On the documents:

Unfortunately I read them nearly a year ago so my memory's hazy. But (3) goes over most of the main arguments we talked about in the podcast step by step, though it's just slides so you may have similar complaints about the lack of close analysis of the original texts.

(1) is a pretty detailed write up of Ben's thoughts on discontinuities, sudden emergence, and explosive aftermath. To the extent that you were concerned about those bits in particular, I'd guess you'll find what you're looking for there.

Thanks! Agree that it would've been useful to push on that point some more.

I know Ben was writing up some additional parts of his argument at some point but I don't know whether finishing that up is still something he's working on.

The podcast/interview format is less well suited for critical text analysis, compared to a formal article or a LessWrong post, for three reasons:

1. Lack of precision. It is a difficult skill to place each qualifier carefully and deliberately when speaking, and at several points I was uncertain if I was parsing Ben's sentences correctly.

2. Lack of references. The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments that are being criticized.

3. Expansiveness. There are a lot of arguments presented, and man

... (read more)
1 · Søren Elverlin · 3y
Hi Howie, Thank you for reminding me of these four documents. I had seen them, but I dismissed them early in the process. That might have been a mistake, and I'll read them carefully now.

I think you did a great job at the interview. I describe one place where you could have pushed back more here: https://youtu.be/_kNvExbheNA?t=1376 You asked: "...Assume that among the things that these narrow AIs are really good at doing, one of them is programming AI...", and Ben Garfinkel gave a broad answer about "doing science".

Do you still think there's a >80% chance that this was a lab release?

[I'm not an expert.]

My understanding is that SARS-CoV-1 is generally treated as a BSL-3 pathogen or a BSL-2 pathogen (for routine diagnostics and other relatively safe work) and not BSL-4. At the time of the outbreak, SARS-CoV-2 would have been a random animal coronavirus that hadn't yet infected humans, so I'd be surprised if it had more stringent requirements.

Your OP currently states: "a lab studying that class of viruses, of which there is currently only one." If I'm right that you're not currently confident this is th... (read more)

I used to play Innovation online here - dunno if it still works. https://innovation.isotropic.org/

Also looks like you can play here: https://en.boardgamearena.com/gamepanel?game=innovation

Thanks for confirming!

How ill do they have to be? If a contact is feeling under the weather in a nonspecific way and has a cough, is that enough for them to get tested?

Do you feel like you have any insight into whether there's underreporting of mild/minimally symptomatic/asymptomatic cases?

I was able to buy hand sanitizer after going through security at JFK on Sunday but I wouldn't count on that. Fwiw, Purell bottles small enough to take through security seem pretty common.

Seems possible but I don't really understand where China's claims about asymptomatic cases are coming from so I've been hesitant about putting too much weight on them. Copying some thoughts on this over from a FB comment I wrote (apologies that some of it doesn't make total sense w/o context).

tl;dr I'm pretty unsure whether China actually has so few minimally symptomatic/asymptomatic cases.
---
Those 320,000 people were at fever clinics, so I think none of them should be asymptomatic.
The report does say "Asymptomatic infection h
... (read more)

Some more suggestive evidence that Singapore might not be testing asymptomatic/minimally symptomatic people:

The COVID-19 swab test kit deployed at [travel] checkpoints allows us to test beyond persons who are referred to hospitals, and extend testing to lower-risk symptomatic travellers as an added precautionary measure. This additional testing capability deployed upfront at checkpoints further increases our likelihood of detecting imported cases at the point of entry. As with any test, a negative result does not completely rule out the possibility of infe
... (read more)

Yes, as part of a team on standby briefed on contact tracing protocol in Singapore, I can confirm: we only call and inform potential contacts. They are not tested unless ill.

Connecting these dots, along with the fact that Singapore has been doing extremely aggressive contact tracing and has been successful enough to almost stop the spread, I think Singapore can't have many uncounted mild or asymptomatic cases, and their severely ill rate is still 10% to 20%.

Do you have a citation for the claim that Singapore can't have many mild or asymptomatic cases? The article you cite says:

Close contacts are identified and those individuals without symptoms are quarantined for 14 days from last exposure. As of February 19, a total
... (read more)
6 · Wei Dai · 4y
I think you're right, I was just mistaken in assuming that Singapore tested everyone rather than only people with symptoms. However WHO has reported that 75% of asymptomatic cases detected in China develop symptoms later, so asymptomatic cases seemingly won't reduce the global fatality rate much.
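Wei Dai's point can be made concrete with a back-of-the-envelope calculation. This is only a sketch with hypothetical inputs: the death and case counts are made up, and the 25% undetected-case assumption is invented for illustration; only the 75% figure comes from the WHO claim cited above.

```python
# Back-of-the-envelope CFR adjustment (all inputs hypothetical except
# the 75% figure, which is the WHO claim cited above).
deaths = 100
symptomatic_cases = 5000
naive_cfr = deaths / symptomatic_cases  # 2.00%

# Suppose universal testing would find 25% extra cases that are
# asymptomatic at detection. If 75% of those develop symptoms later,
# they end up in the symptomatic count anyway; only the remaining 25%
# dilute the denominator.
extra_asymptomatic = 0.25 * symptomatic_cases  # 1250 extra detections
never_symptomatic = 0.25 * extra_asymptomatic  # 312.5 stay asymptomatic

adjusted_cfr = deaths / (symptomatic_cases + never_symptomatic)
print(f"{naive_cfr:.2%} -> {adjusted_cfr:.2%}")  # 2.00% -> 1.88%
```

Even with a fairly generous assumption about undetected infections, the fatality rate barely moves, which is the comment's point.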
9 · Howie Lempel · 4y
Some more suggestive evidence that Singapore might not be testing asymptomatic/minimally symptomatic people: https://www.moh.gov.sg/news-highlights/details/additional-precautionary-measures-in-response-to-escalating-global-situation

If they were already testing lots of asymptomatic cases, it would be odd to say testing *symptomatic travelers* is allowing them to test beyond people referred to hospitals.

I wonder if people are assuming that intense contact tracing means that contacts will be tested by default even if asymptomatic. I'm not an expert but my understanding is that this isn't necessarily the default (and particularly not in a situation where they presumably don't have an infinite supply of kits or healthcare workers to do the diagnostics). Depends on how close the contact was, the specific disease, etc, but I think default is to call the contact every day to check if they've developed symptoms. Would be great if an actual doctor/epidemiologist chimed in.

Singapore's description of their contact tracing is vague but consistent with my understanding: https://www.moh.gov.sg/news-highlights/details/two-new-cases-of-covid-19-infection-confirmed

If they were administering tests to asymptomatic contacts, I think it's likely they'd have said so here.

[I'm not a lawyer and it's been a long time since law school. Also apologies for length]

Sorry - I was unclear. All I meant was that civil cases don't require *criminal intent.* You're right that they'll both usually have some intent component, which will vary by the claim and the jurisdiction (which makes it hard to give a simple answer).

---

tl;dr: It's complicated. Often reckless disregard for the truth or deliberate ignorance is enough to make a fraud case. Sometimes a "negligent misrepresentation" is enough for a ci... (read more)

3 · habryka · 4y
Thank you, this was a good clarification and really helpful!

I'm not sure I understand what you mean by "something to protect." Can you give an example?

[Answered by habryka]

[This comment is no longer endorsed by its author]
4 · habryka · 4y
Presumably it's a reference to: https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect

[Possibly digging a bit too far into the specifics so no worries if you'd rather bow out.]

Do you think these confusions[1] are fairly evenly dispersed throughout the community (besides what you already mentioned: "People semi-frequently have them at the beginning and then get over them.")?

Two casual observations: (A) the confusions seem less common among people working full-time at EA/Rationalist/x-risk/longtermist organisation than in other people who "take singularity scenarios seriously."[2] (B) I'm very uncertain but they ... (read more)

My closest current stab is that we’re the “Center for Bridging between Common Sense and Singularity Scenarios.”

[I realise there might not be precise answers to a lot of these but would still be interested in a quick take on any of them if anybody has one.]

Within CFAR, how much consensus is there on this vision? How stable/likely to change do you think it is? How long has this been the vision for (alternatively, how long have you been playing with this vision for)? Is it possible to describe what the most recent previous vision was?

This seemed really useful. I suspect you're planning to write up something like this at some point down the line but wanted to suggest posting this somewhere more prominent in the meantime (otoh, idea inoculation, etc.)

The need to coordinate in this way holds just as much for consequentialists or anyone else.

I have a strong heuristic that I should slow down and throw a major warning flag if I am doing (or recommending that someone else do) something I believe would be unethical if done by someone not aiming to contribute to a super high impact project. I (weakly) believe more people should use this heuristic.

Thanks for writing this up. Added a few things to my reading list and generally just found it inspiring.

Things like PJ Eby's excellent ebook.

FYI - this link goes to an empty shopping cart. Which of his books did you mean to refer to?

The best links I could find quickly were:

9 · Eli Tyre · 4y
A Minute to Unlimit You
I think I also damaged something psychologically, which took 6 months to repair.

I've been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided I'd be interested.

I expect, though, that this is too sensitive/personal so please feel free to ignore.

7 · Eli Tyre · 4y
It's not sensitive so much as context-heavy, and I don't think I can easily go into it in brief. I do think it would be good if we had a way to propagate different people's experiences of things like Circling better.

Note that criminal intent is *not* required for a civil fraud suit which could be brought simultaneously with or after a criminal proceeding.

2 · habryka · 4y
Can you say more about this? I've been searching for a while about the differences between civil and criminal fraud, and my best guess (though I am really not sure) is that both have an intentional component. Here for example is an article on intent in the Texas Civil Law code:  https://www.dwlawtx.com/issue-intent-civil-litigation/

"For example, we spent a bunch of time circling for a while"

Does this imply that CFAR now spends substantially less time circling? If so and there's anything interesting to say about why, I'd be curious.

CFAR does spend substantially less time circling now than it did a couple years ago, yeah. I think this is partly because Pete (who spent time learning about circling when he was younger, and hence found it especially easy to notice the lack of circling-type skill among rationalists, much as I spent time learning about philosophy when I was younger and hence found it especially easy to notice the lack of philosophy-type skill among AIRCS participants) left, and partly I think because many staff felt like their marginal skill returns from circling practice were decreasing, so they started focusing more on other things.

This doesn't look to me like an argument that there is so much funging between EA Funds and GiveWell recommended charities that it's odd to spend attention distinguishing between them? For people with some common sets of values (e.g. long-termist, placing lots of weight on the well-being of animals) it doesn't seem like there's a decision-relevant amount of funging between GiveWell recommendations and the EA Fund they would choose. Do we disagree about that?

I guess I interpreted Rob's statement that "the EA Funds are usually... (read more)

Here's a potentially more specific way to get at what I mean.

Let's say that somebody has long-termist values and believes that the orgs supported by the Long Term Future EA Fund in expectation have a much better impact on the long-term future than GW recommended charities. In particular, let's say she believes that (absent funging) giving $1 to GW recommended charities would be as valuable as giving $100 to the EA Long Term Future Fund.

You're saying that she should reduce her estimate because Open Phil may change its strategy or the... (read more)

9 · Benquo · 6y
Update: Nick's recent comment on the EA Forum sure suggests there is a high level of funging, though maybe not 100%, and that giving a very large amount of money to EA Funds to some extent may cause him to redirect his attention from allocating Open Phil money to allocating EA Funds money. (This seems basically reasonable on Nick's part.) So it's not obvious that an extra dollar of giving to EA Funds corresponds to anything like an extra dollar of spending within that focus area. Overall I expect *lots* of things like that, not just in the areas where people have asked questions publicly.
7 · Benquo · 6y
I'd say that if you're competent to make a judgement like that, you're already a sufficiently high-information donor that abstractions like "EA Funds" are kind of irrelevant. For instance, by that point you probably know who Nick Beckstead is, and have an opinion about whether he seems like he knows more than you about what to do, and to what extent the intermediation of the "EA Funds" mechanism and need for public accountability might increase or reduce the benefits of his information advantage. If you use the "EA Funds" abstraction, then you're treating giving money to the EA Long Term Future Fund managed by Nick Beckstead as the same sort of action as giving money to the EA Global Development fund managed by Elie Hassenfeld (which has largely given to GiveWell top charities). This seems obviously ridiculous to me if you have fine-grained enough opinions to have an opinion about which org's priorities make more sense, and insofar as it doesn't to you I'd like to hear why.

I see you as arguing that GW/Open Phil might change its strategic outlook in the future and that their disclosures aren't high precision so we can't rule out that (at some point in the future or even today) giving to GW recommended charities could lead Open Phil to give more to orgs like those in the EA Funds.

That doesn't strike me as sufficient to argue that GW recommended charities funge so heavily against EA funds that it's "odd to spend attention distinguishing them, vs spending effort distinguishing substantially different strategies."

6 · Howie Lempel · 6y
Here's a potentially more specific way to get at what I mean.

Let's say that somebody has long-termist values and believes that the orgs supported by the Long Term Future EA Fund in expectation have a much better impact on the long-term future than GW recommended charities. In particular, let's say she believes that (absent funging) giving $1 to GW recommended charities would be as valuable as giving $100 to the EA Long Term Future Fund.

You're saying that she should reduce her estimate because Open Phil may change its strategy or the blog post may be an imprecise guide to Open Phil's strategy so there's some probability that giving $1 to GW recommended charities could cause Open Phil to reallocate some money from GW recommended charities toward the orgs funded by the Long Term Future Fund.

In expectation, how much money do you think is reallocated from GW recommended charities toward orgs like those funded by the Long Term Future Fund for every $1 given to GW recommended charities? In other words, by what percent should this person adjust down their estimate of the difference in effectiveness? Personally, I'd guess it's lower than 15% and I'd be quite surprised to hear you say you think it's as high as 33%. This would still leave a difference that easily clears the bar for "large enough to pay attention to."

Fwiw, to the extent that donors to GW are getting funged, I think it's much more likely that they are funging with other developing world interventions (e.g. one recommended org hits diminishing returns and so funding already targeted toward developing world interventions goes to a different developing world health org instead).

I'm guessing that you have other objections to EA Funds (some of which I think are expressed in the posts you linked although I haven't had a chance to reread them). Is it possible that funging with GW top charities isn't really your true objection?
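The funging adjustment in this hypothetical can be sketched as a quick calculation. These are the illustrative numbers from the comment (the 100x multiplier and the 15%/33% funging fractions), not anyone's real estimates.

```python
# Hypothetical funging adjustment, using the illustrative numbers from
# the comment above (not anyone's real estimates).
LTF_MULTIPLIER = 100.0  # $1 to the LTF Fund assumed worth $100 to GW charities

def adjusted_ratio(funge_fraction: float) -> float:
    """Effectiveness ratio of the LTF Fund over GW charities when
    `funge_fraction` of each GW dollar is effectively reallocated
    toward LTF-like orgs."""
    # Each GW dollar keeps (1 - f) of its direct value and converts
    # f of it into LTF-equivalent value.
    gw_value = (1 - funge_fraction) + funge_fraction * LTF_MULTIPLIER
    return LTF_MULTIPLIER / gw_value

print(adjusted_ratio(0.0))             # 100.0 (no funging)
print(round(adjusted_ratio(0.15), 1))  # ~6.3x at 15% funging
print(round(adjusted_ratio(0.33), 1))  # ~3.0x at 33% funging
```

Even the pessimistic 33% funging assumption leaves roughly a 3x difference, which is why the comment says the gap would still clear the bar for "large enough to pay attention to."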

What's the reason to think EA Funds (other than the global health and development one) currently funges heavily with GiveWell recommended charities? My guess would have been that that increased donations to GiveWell's recommended charities would not cause many other donors (including Open Phil or Good Ventures) to give instead to orgs like those supported by the Long-Term Future, EA Community, or Animal Welfare EA Funds.

In particular, to me this seems in tension with Open Phil's last public writing on its current thinking about how ... (read more)

7 · Benquo · 6y
I don't think that statements like this are reliable guides to future behavior. GiveWell / Open Phil changes its strategic outlook from time to time in a way that's upstream of particular commitments. Even if Open Phil's claims about its strategic outlook are accurate representations of its current point of view, this point of view seems to change based on considerations not explicitly included. This is basically what I'd expect from a strategic actor following cluster thinking heuristics. In any case, despite Holden's apparent good-faith efforts to disclose the considerations motivating his actions, such disclosures don't really seem like they're high-precision guides to Open Phil's future actions, and in many cases they aren't the best explanation of present actions. The actual causal factors behind allocation decisions by GiveWell and OpenPhil continue to be opaque to outsiders, including me, even though I too used to work there.