Aryeh Englander

I work on applied mathematics and AI at the Johns Hopkins University Applied Physics Laboratory (APL). I am also currently pursuing a PhD in Information Systems at the University of Maryland, Baltimore County (UMBC). My PhD research focuses on decision and risk analysis under extreme uncertainty, with particular attention to potential existential risks from very advanced AI.

Comments

I agree that the first can be framed as a meta-crux, but the way you framed it actually reads more like an object-level forecasting question, or perhaps a strong prior on the forecasted effects of technological progress. If, on the other hand, you framed it more as conflict theory vs. mistake theory, then I'd say that's more on the meta level.

For the second, I agree that applies to some people, but I'm skeptical of how prevalent the cosmopolitan view is, which is why I didn't include it in the post.

One final thing is that I typically didn't emphasize loss of control / superintelligence / recursive self-improvement. I didn't hide it, but I included it in a longer list of threat models.

I'd be very interested to see that longer threat model list!

[Cross-commenting from the EA Forum.]

[Disclaimers: My wife Deena works with Kat as a business coach. I briefly met Kat and Emerson while visiting Puerto Rico and had positive interactions with them. My personality is such that I have a very strong inclination to try to see the good in others, which I am aware can bias my views.]

A few random thoughts related to this post:

1. I appreciate the concerns over the potential for personal retaliation, and the other factors mentioned by @Habryka and others for why it might be good not to delay this kind of post. I think those concerns and factors are serious and should definitely not be ignored. That said, I want to point out that posting this kind of thing without waiting for a response can cause a different type of harm in the other direction: reputational damage. As others have pointed out, many people seem to update more strongly on negative reports that come first and less on subsequent follow-up rebuttals. If the accusations turn out to be demonstrably false in critically important ways, then even if that comes to light later, the reputational damage to Kat, Emerson, and Drew may be irrevocable.

Reputation is important almost everywhere, but in my anecdotal experience it seems to matter even more in EA than in many other spheres. Many people in EA seem to have a very strong in-group bias towards favoring other "EAs," and it has long seemed to me that (for example) getting a grant from an EA organization often depends even more on having strong EA personal connections than grants elsewhere do. (This is not to say that personal connections aren't important for securing other types of grants or deals, and it's definitely not to say that getting an EA grant is only or even mostly about having strong EA connections. But from my own personal experience and from talking to quite a few others both in and out of EA, this is definitely how it feels to me. Note that I have received multiple EA grants in the past, and I have helped other people apply for and receive substantial EA grants.) I really don't like this dynamic and I've low-key complained about it for a long time; it feels unprofessional and raises all sorts of in-group bias flags. I think a lot of EA orgs feel like they've gotten somewhat better about this over time, but I think it is still a factor.

Additionally, it sometimes feels to me that EA Forum dynamics lead to very strong upvotes for posts and comments that are critical of people or organizations, especially ones that are more "centrally connected" in EA, while posts and comments in the other direction get ignored or even downvoted. I am not sure why the dynamic feels like this, and maybe I'm wrong about it being a real thing at all. Regardless, I strongly suspect that any subsequent rebuttal by Nonlinear would receive significantly fewer views and upvotes, even if the rebuttal were actually very strong.

Because of all this, I think the potential for reputational harm to Kat, Emerson, and Drew may be even greater than it would be in the business world or some other community. Even if they somehow provide unambiguous evidence that refutes almost everything in this post, I would not be terribly surprised if their ability to get EA funding going forward or to collaborate with EA orgs were permanently ended. In other words, I wouldn't be terribly surprised if this post spelled the end of their "EA careers" even if the central claims all turned out to be false. My best guess is that this is not the most likely scenario, and that if they provide sufficiently good evidence then they'll most likely be "restored" in the EA community for the most part, but I think there's a significant chance (say 1%-10%) that this is basically the end of their EA careers regardless of the actual truth of the matter.

Does any of this outweigh the factors mentioned by @Habryka? I don't know. But I just wanted to point out a possible factor in the other direction that we may want to consider, particularly if we want to set norms for how to deal with other such situations going forward.

2. I don't have any experience with libel law or anything of the sort, but my impression is that suing for libel over this kind of piece is very much within the range of normal responses in the business world, even if in the EA world it is basically unheard of. So if your frame of reference is the world outside of EA, then suing seems at least like a reasonable response, while if your frame of reference is the EA community, then maybe it doesn't. I'll let others weigh in on whether my impressions are correct, but I didn't notice others bring this up so I figured I'd mention it.

3. My general perspective on these kinds of things is that... well, people are complicated. We humans often seem to have a tendency to want our heroes to be perfect and our villains to be horrible. If we like someone, we want to think they could never do anything really bad, and unless presented with extremely strong evidence to the contrary we'll look for excuses for their behavior so that it matches our picture of them as "good people." And if we decide that they did do something bad, then we label them as "bad people" and retroactively reject everything about them. And if that's hard to do, we suffer from cognitive dissonance. (Cf. the halo effect.)

But the reality, at least in my opinion, is that things are more complicated. It's not just that there are shades of grey, it's that people can simultaneously be really good people in some ways and really bad people in other ways. Unfortunately, it's not at all a contradiction for someone to be a genuinely kind, caring, supportive, and absolutely wonderful person towards most of the people in their life, while simultaneously being a sexual predator or committing terrible crimes.

I'm not saying that any of the people mentioned in this post necessarily did anything wrong at all. My point here is mostly to note something that may be obvious to almost all of us, but which feels relevant and probably bears repeating in any case. Personally, I suspect that everybody involved was acting in what they perceived to be good faith and is or was genuinely trying to do the right thing, but that they're looking at the situation through lenses shaped by very different perspectives and experiences, and so are coming to very different conclusions. (But see my disclaimer at the beginning of this comment about my personality bias coloring my own perspective.)

The more I think about this post, the more I think it captures my frustrations with much of the public discourse on AI x-risk, not just this one debate event.

You should make this a top-level post so it gets visibility. I think it's important for people to know the caveats attached to your results and the limits of their implications for real-world dynamics.

When you say that you'd give different probability estimates on different days, do you think you can represent that as sampling, on different days, from a probability distribution over your "true" latent credence? If so, do you think it would be useful to try to estimate what that distribution looks like, and then report its mean or perhaps a 90% CI? For example, if your estimate typically ranges between 33% and 66% depending on the day, with a mean of say 50%, then instead of reporting what you think today (the equivalent of taking a single random sample from the distribution), you could report 50% because that's your mean, and/or report that your estimate typically ranges from 33% to 66%.
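As a rough illustration of what that reporting could look like, here is a minimal Python sketch. It treats a set of made-up daily estimates as samples from the latent credence distribution and summarizes them with a mean and an empirical 90% interval; the numbers are purely hypothetical.

```python
import numpy as np

# Hypothetical daily probability estimates, e.g. elicited once per day over two weeks.
# These values are made up purely for illustration.
daily_estimates = np.array([0.35, 0.42, 0.55, 0.61, 0.48, 0.52, 0.44, 0.66, 0.38, 0.50])

# Summarize the day-to-day variation instead of reporting a single day's sample.
mean_estimate = daily_estimates.mean()
low, high = np.percentile(daily_estimates, [5, 95])  # empirical 90% interval

print(f"Mean estimate: {mean_estimate:.2f}")
print(f"Typical range (90% interval): {low:.2f} to {high:.2f}")
```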

From a Facebook discussion with Scott Aaronson yesterday:

Yann: I think neither Yoshua nor Geoff believes that AI is going to kill us all with any significant probability.

Scott: Well, Yoshua signed the pause letter, and wrote an accompanying statement about what he sees as the risk to civilization (I agree that there are many civilizational risks short of extinction). In his words: “No one, not even the leading AI experts, including those who developed these giant AI models, can be absolutely certain that such powerful tools now or in the future cannot be used in ways that would be catastrophic to society.”

Geoff said in a widely-shared recent video that it’s “not inconceivable” that AI will wipe out humanity, and didn’t offer any reassurances about it being vanishingly unlikely.

https://yoshuabengio.org/2023/04/05/slowing-down-development-of-ai-systems-passing-the-turing-test/

https://twitter.com/JMannhart/status/1641764742137016320

Yann: Scott Aaronson, he is worried about catastrophic disruptions of the political, economic, and environmental systems. I don't want to speak for him, but I doubt he worries about a Yuddite-style uncontrollable "hard takeoff".

Sometimes it's better in the long run to take a good chunk of time off to do things for fun and write or work less. Sometimes less is more. But this is very much a YMMV thing.
