My general long-term focus is on topics such as x-risk and how to make a good future for humanity, life, conscious beings, and technology. I work in tech: Augmented Reality, interactive 3D graphics, and web dev. I'm currently an independent consultant, but I had a go at an AR startup that is still too early to scale beyond tech demos. I worry mostly about AI x-risk and the insane, suicidal AI arms race.
I was not aware of this conference, but when I listened to the description it sounded like a super high value gathering. I saw no link to YouTube, Vimeo, or any other such site on the conference page or the Golden Gate Institute page, so I started looking for recordings of previous conferences on the big old interweb (Google / YouTube) but did not find anything anywhere. Does anyone know where to find recordings of previous conferences?
Why aren't more AI-pilled people talking about the risks of short timelines to "drop-in onsite worker replacements" (humanoid robots with fine and coarse motor skills and physical situational awareness equivalent to a human blue-collar worker)? Consider Chinese Unitree humanoids performing acrobatics far beyond the average human,
and Chinese Wuji Tech's "Wuji Hand" demonstrating fine hand motor skills and dexterity equivalent to an average human's.
Western humanoid robotics companies are seemingly not far behind either (Boston Dynamics, Tesla, 1X, Figure AI).
Paired with exponential progress in building situational awareness via several major "Physical AI" frontier efforts (NVIDIA, Meta, Google, Disney), it seems plausible that humanoid robots with superhuman motor skills, dexterity, and real-world on-site situational awareness could become a reality within the next couple of years, enabling "drop-in onsite worker replacements" with massive transformative potential and associated risks.
My proposed list of "Minimum Required Core Lethalities" contains just two items.
1) Paradigm-level tech and science AI capability: systems with the ability to autonomously make new paradigm-level technological inventions or scientific discoveries.
2) "Industrial Singularity": AI systems, including robotics, able to autonomously build, run, operate, and develop an entire technological infrastructure, with complete value chains from mining and energy extraction to high-tech industrial manufacturing, with no human in the loop.
(I'm not claiming that lesser capabilities can't cause AI x-risk, but if those two "lethalities" exist before we have solved alignment, I see it as unlikely that we avoid extinction. I also realize there is a plausibly short path from item 1 to such an AI system soon being able to "solve robotics"... so item 1 is first on the list for a reason.)
I also claim that the AI safety research field seems not to care sufficiently about the second item on the list, even though it is a key part of the doom scenarios in both the Yudkowsky/Soares book "If Anyone Builds It, Everyone Dies" and the AI 2027 scenario (Kokotajlo et al.), and is discussed in Aschenbrenner's "Situational Awareness".
We need more people in the AI safety community to start focusing on and investing in physical AI safety research efforts, even though it is more cumbersome.