On The New York Times's Ezra Klein Show podcast, Klein interviews OpenAI CEO Sam Altman on the future of AI (archived with transcript). (Note the nod to Slate Star Codex in the reading recommendations at the end.)
My favorite ("favorite") part of the transcript was this:
EZRA KLEIN: Do you believe in 30 years we're going to have self-intelligent systems going off and colonizing the universe?
SAM ALTMAN: Look, timelines are really hard. I believe that will happen someday. I think it doesn't really matter if it's 10 or 30 or 100 years. The fact that this is going to happen, that we're going to help engineer, or merge with, or something, our own descendants that are going to be capable of things that we literally cannot imagine. That somehow seems way more important than the tax rate or most other things.
"Help engineer, or merge with, or something." Indeed! I just wish there were some way to get Altman to more seriously consider that ... the specific details of the "help engineer [...] or something" actually matter? Details that Altman himself may be unusually well-positioned to affect?