Most of my response concerns reducing our systems' exposure to vulnerabilities. As a believer in the power of strong cryptography no matter the intelligence involved, I am going to explain the value of removing, or spinning down, the NSA/CIA programs of backdoors, zero-day exploits, and intentional cryptographic weaknesses that have been introduced into our hardware and software infrastructure.
Do you still hold this belief?
Language is one defining aspect of intelligence in general and of human intelligence in particular. That an AGI wouldn't utilize the capabilities of LLMs doesn't seem credible. The cross-modal use cases for visual perception improvements (self-supervised labeling, pixel-level segmentation, scene interpretation, causal inference) can be seen in recent ICLR/CVPR papers. The creation of github.com/google/BIG-bench should lend some credence to the idea that many leading institutions see a path forward with LLMs.
If current trends hold, then large language models will be a pillar in creating AGI, which seems uncontroversial. One can then argue that, setting aside the circus around sentience, we should focus on the new current capabilities: reasoning, and an intricate knowledge of how humans think and communicate. From that perspective there is now a much stronger argument for manipulation. Outwitting humans prior to having human-level intelligence would be a major coup. PaLM's ability to explain a joke and reason its way to understand... (read more)
It is bizarre watching people like him and Gary Marcus embarrass themselves. It's like watching Boomers saying they'll never see another shooter as good as Larry Bird, all while Steph Curry is blasting through every NBA record of note.
As autonomous agents become more widespread, the risk becomes more obvious. Dyson and Tesla will be pumping out androids in 8 years. Starlink will provide the bandwidth to control any internet-connected IoT device. Zero-day backdoors are discovered every month, and that doesn't even include the ones intentionally added by nation states. I personally thought the wake-up call would happen with the deployment of Ghost Drones (not fully autonomous, but they easily could be). Max Tegmark's #banslaughterbots campaign is going nowhere as dr... (read more)
Is anyone at MIRI or Anthropic building diagnostic tools for monitoring neural networks? Something that could distinguish bit-flip errors from errors of logic, and eventually surface evidence of deception.
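For the bit-flip half of that question, one minimal sketch (my own construction, not anything from MIRI or Anthropic) is to checksum a model's parameters at a known-good checkpoint: a later digest mismatch implicates memory/hardware corruption rather than a logic error, since logic errors don't change stored weights. The `weights_digest` helper and the float packing scheme here are illustrative assumptions, not an existing tool:

```python
import hashlib
import struct

def weights_digest(weights):
    """Hash a flat list of float32 parameters; any single bit flip
    in any parameter changes the digest."""
    packed = b"".join(struct.pack("<f", w) for w in weights)
    return hashlib.sha256(packed).hexdigest()

# Record the digest at a verified-good checkpoint...
weights = [0.25, -1.5, 3.0]
baseline = weights_digest(weights)

# ...later, recompute and compare. A match rules out stored-weight
# corruption; the error, if any, is in the logic/data path instead.
assert weights_digest(weights) == baseline

# Simulate a single flipped bit in the first parameter's float encoding.
bits = struct.unpack("<I", struct.pack("<f", weights[0]))[0] ^ 1
corrupted = [struct.unpack("<f", struct.pack("<I", bits))[0]] + weights[1:]
assert weights_digest(corrupted) != baseline  # corruption detected
```

Detecting errors of logic (let alone deception) is much harder; this only gives a clean way to rule out the hardware-fault case first.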
It's unfortunate we couldn't have a Sword of Damocles dead man's switch in case of AGI-led demise. A world-ending asteroid positioned to strike in the event of "all humans falling over dead at the same time." At least that would spare the Milky Way's and Andromeda's possible future civilizations. A radio beacon warning against building intelligent systems would be beneficial as well. "Don't be this stupid," written in the glowing embers of our solar system.
An AGI could acquire a few tons of radioactive cobalt and disperse micro-granules into the stratosphere in general and over populated areas in particular. YouTube videos describe various forms of this "dirty bomb" concept. That could plausibly kill most of humanity over the course of a few months. I doubt an AGI would ever go for this particular scheme, though, since bit flips are more likely to occur in the presence of radiation.
I agree with this line of thought regarding iterative development of proto-AGI via careful bootstrapping. Humans will be inadequate for monitoring its progress in acquiring skills. Hopefully, we'll have a slew of diagnostic narrow neural networks whose sole purpose is to tease out relevant details of the proto-superhuman intellect. What I can't wrap my head around is whether super- (or sub-) human-level intelligence requires consciousness. If consciousness is required, then is the world worse or better for it? Is an agent with the... (read more)
This recent paper discovered a prompt that can unlock a form of reasoning process in large language models (LLMs): https://paperswithcode.com/paper/large-language-models-are-zero-shot-reasoners. These models are used for natural language processing and typically use a transformer architecture (https://paperswithcode.com/paper/the-annotated-transformer).
"Let's think step by step" style prompts seems to unlock the ability for text generation that can give logically consistent answers to questions that have been out of the reach of most A.I. neural ne... (read more)
It's too bad we couldn't just have the proverbial box be an isolated simulation and have your brain interface into it. The A.I. keeps winning in the Matrix, and afterwards we just reset it until we see improvements in alignment.
I think it's possible that diet improved during this time as well. Additional vitamins and minerals at critical developmental stages have been shown to make a difference on the order of a standard deviation: https://www.adelaide.edu.au/news/news55161.html. Another aspect, with regard to the genetic diseases, is that those sphingolipid issues involve electrical signaling. Imagine a copper wire without a protective sheath: not only does the signal degrade, it's also slower. https://www.nature.com/scitable/topicpage/myelin-a-specialized-membrane... (read more)