SPACE IS LIMITED -- PLEASE MAKE SURE TO RSVP HERE TO GUARANTEE A SPOT AND GET YOUR NAME ON THE GUEST LIST.

An AICamp event sponsored by Boston Astral Codex Ten and the Mind First Foundation.

Join us for a special night where we ask some of the most important questions facing us as we consider the future of AI and humanity.

How can we best guide the transition to superintelligence? Should we try to pause all AI or AGI research to keep extinction risk low? Should we let AIs become our "mind children" and our descendants, to populate the galaxy in our stead? Or should we try to merge with AI?

Agenda for the evening:

5:30 - 6:15 pm Socializing with pizza and light refreshments
6:15 - 7:15 pm Talks from our speakers
7:15 - 7:30 pm Short Break 
7:30 - 8:15 pm Panel discussion with audience Q&A
8:15 - 9:00 pm Socializing and wind-down


Tentative list of speakers and panelists:

--------- Preston Estep, Ph.D. -------- 
Talk title: "Will humanized AI be humanity’s savior or successor ... or both?"

Dr. Estep is Chief Scientist and co-founder of the Mind First Foundation (https://mindfirst.foundation/) and the RaDVaC project, which received an Astral Codex Ten grant. He studied neuroscience at Cornell University and holds a Ph.D. in Genetics from Harvard University, where he worked in the lab of George Church. He has been a founder of or advisor to the Harvard Personal Genome Project and many biotech startups. He will be talking about the desirability and possibility of human-AI merger.

-------- Brian M. Delaney -------------
Talk title: "Childhood's end and the 'AI alignment problem' problem"

Brian M. Delaney is Chief Philosophy Officer at the Mind First Foundation and Clinical Trials Liaison at RaDVaC. He has founded and run several nonprofit and not-for-profit research organizations focused on health and longevity, with a particular emphasis on mental and neurological health. He did AI research long ago with Eugene Charniak at Brown University and recently has been thinking and writing extensively about the co-evolution of AI and humanity (upcoming unpublished work).

--------- Daniel Faggella ---------------
Talk title: "Posthumanism and AGI: Exploring the Intelligence-trajectory Political Matrix"

Dan is CEO of Emerj Artificial Intelligence Research, a leading AI research and advisory firm. He is host of the "AI in Business" podcast and frequently speaks to large audiences about AI. He will be speaking about "decels" vs. "accels" and his intelligence-trajectory political matrix.

We are looking for a fourth speaker, preferably one who can offer a more AI safety-focused perspective. If you have ideas or would like to speak, please reach out to Dan Elton on Facebook, LinkedIn, or Twitter.

For more AI events in Boston, check out the Boston in-person events listing: https://docs.google.com/document/d/1qH32tJa3q4wvCkIdmANR-MOfJU4hE3wz9c9ckJBhcZ4/edit?usp=sharing

5 comments:

> AI futurists ... We are looking for a fourth speaker

You should have an actual AI explain why it doesn't want to merge with humans. 

Hah... actually not a bad idea... too late now. BTW, the recording will be available eventually if you're interested.

If anyone at Microsoft New England is interested in technical AI alignment research, please ask them to ping me or Kyle O'Brien on Teams.

Hi, organizer here. I just saw your message, right after the event. There were a couple of people from Microsoft there, but I'm not sure whether they were interested in alignment research. The audience was mostly general, coming primarily through the website AIcamp.ai; we also had some people from the local ACX meetup and transhumanist meetup. PS: I sent you an invitation to connect on LinkedIn; let's stay in touch (I'm https://www.linkedin.com/in/danielelton/).