Aryeh Englander

I work on applied mathematics and AI at the Johns Hopkins University Applied Physics Laboratory. I also do AI safety related work for the Johns Hopkins Institute for Assured Autonomy. I am currently doing a CS PhD focused on AI safety at the University of Maryland, Baltimore County.

Sometimes it's better in the long run to take a good chunk of time off to do things for fun and write or work less. Sometimes less is more. But this is very much a YMMV thing.

This is actually another related area of my research: To the extent that we cannot get people to sit down and agree on double cruxes, can we still assign some reasonable likelihoods and/or uncertainty estimates for those likelihoods? After all, we do ultimately need to make decisions here! Or if it turns out that we literally cannot use any numbers here, how do we best make decisions anyway?

I have now posted a "Half-baked AI safety ideas thread" (LW version, EA Forum version) - let me know if that's more or less what you had in mind.

Just putting in my vote for doing both broader and deeper explorations of these topics!

My impression - which I kind of hope is wrong - has been that it is much easier to get an EA grant the more of an "EA insider" you are or the more EA insider connections you have. The only EA connection that my professor has is me. On the other hand, I understand the reluctance to some degree in the case of AI safety, because funders are concerned that researchers will take the money and go do capabilities research instead.

Honestly, I suspect this is going to be the single largest benefit of paying Scott to work on the problem. Similarly, when I suggested in an earlier comment that we should pay other academics in a similar manner, in my mind the largest benefit of doing so is that it will help normalize this kind of research in the wider academic community. The more respected researchers there are working on the problem, the more other researchers start thinking about it as well, resulting (hopefully) in a snowball effect. Also, researchers often bring along their grad students!

Hopefully. I have a feeling it won't be so easy, but we'll see.

Yes! I actually just discussed this with one of my advisors (an expert on machine learning), and he told me that if he could get funding to do it, he would definitely be interested in dedicating a good chunk of his time to researching AGI safety. (For any funders who might read this and might be interested in providing that funding, please reach out to me by email. I'm going to try to reach out to some potential funders next week.)

I think that there are a lot of researchers who are sympathetic to AI risk concerns, but they either lack the funding to work on it or they don't know how they might apply their area of expertise to do so. The former can definitely be fixed if there's an interest from funding organizations. The latter can be fixed in many cases by reaching out and talking to the researcher.

It also depends on your target audience. (Which is basically what you said, just in slightly different words.) If you want to get Serious Researchers to listen to you and they aren't already within the sub-sub-culture that is the rationality community and its immediate neighbors, then in many (most?) cases ranting and freaking out is probably going to be actively counterproductive to your cause. Same if you're trying to build a reputation as a Serious Researcher, with a chance that decision makers who listen to Serious Researchers might listen to you. On the other hand, if your target audience is people who already trust you or who are already in your immediate sub-sub-tribe, and you don't mind risking being labeled a crackpot by the wider world, then I can see why visibly freaking out could be helpful.

[Also, it goes without saying that not everybody agrees with Eliezer's probability-of-doom estimates. Depending on your relative probabilities it might make perfect sense to work in a random startup, have a 401k, not visibly freak out, etc.]

I'm pretty sure that's the whole purpose of having province governors and sub-kingdoms, and various systems in place to ensure loyalty. Every empire in history did this, to my knowledge. The threat of an imperial army showing up on your doorstep if you fail to comply has historically been sufficient to ensure loyalty, at least while the empire is strong.
