Carl Shulman

Edited by joaolkf, ignoranceprior, et al. last updated 11th Sep 2023
Revision 1.9.0, last edited by Wei Dai

Carl Shulman is a Research Fellow at the Machine Intelligence Research Institute who has authored and co-authored several papers on AI risk, including:

  • "How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects", an analysis of the implications of observation selection effects for the evolutionary argument for human-level AI.
  • "Whole Brain Emulation and the Evolution of Superorganisms", which argues that there are pressures favoring the emergence of increased coordination between emulated brains, in the form of superorganisms.
  • "Implications of a Software-Limited Singularity", which argues that human-level AI is highly likely to arrive before 2060.

Previously, he worked at Clarium Capital Management, a global macro hedge fund, and at the law firm Reed Smith LLP. He attended New York University School of Law and holds a BA in philosophy from Harvard University.

See Also

  • Timeline of Carl Shulman publications: a more up-to-date and comprehensive list of his publications

  • Carl Shulman's profile at 80,000 Hours