One of the earliest speculations about machine intelligence was that, because it would be made of much simpler components than biological intelligence, such as source code instead of cellular tissue, the machine would have a much easier time modifying itself. In principle, it would also have a much easier time improving itself, and therefore improving its ability to improve itself, potentially leading to exponential growth in cognitive performance—or an 'intelligence explosion,' as envisioned in 1965 by the mathematician Irving John Good.
Recently, this long-envisioned objective, called recursive self-improvement (RSI), has begun to be publicly pursued by scientists and openly discussed by senior leadership at AI corporations. Perhaps the most visible sign of this trend is that a group of academic and corporate researchers will host, in April, the first formal workshop explicitly focused on the subject, at the International Conference on Learning Representations (ICLR), a premier conference for AI research. In their workshop proposal, the organizers state they expect over 500 attendees.
Prior to these recent discussions, however, RSI was often—but not always—seen as posing serious risks for AI systems that executed it. These concerns typically focused less on RSI itself and more on its consequences, such as the intelligence explosion it might (hypothetically) generate. Were such an explosion not carefully controlled, or perhaps even if it were, various researchers argued that it might fail to preserve the values or ethics of the system, even while bringing about exponential improvements to its problem-solving capabilities—thereby making the system unpredictable or dangerous.
Recent developments have therefore raised questions about whether the topic is being treated with a sufficient safety focus. David Scott Krueger of the University of Montreal and Mila, the Quebec Artificial Intelligence Institute, is critical of the research. "I think it's completely wild and crazy that this is happening, it's unconscionable," said Krueger to Foom in an interview. "It's being treated as if researchers are just trying to solve some random, arcane math problem ... it shows you how unserious the field is about the social impact of what it's doing."
However, questions about the safety profile of RSI are complicated by several aspects of current developments. First, although RSI was historically assessed to be problematic, those assessments have largely not been updated to reflect changes in the modern AI development process.
Second, while some researchers like Krueger strongly object to the current research, others see RSI safety as optional or less important. The organizers of the upcoming ICLR RSI workshop, when contacted via email, acknowledged safety considerations while defending the fact that safety received little mention in their workshop proposal or on its website.
"I agree that we could make the “safety” emphasis clearer, because when [AI] is becoming stronger, no one wants it [to go] out of control," said Mingchen Zhuge of King Abdullah University of Science and Technology (KAUST) to Foom via email; Zhuge was listed as the primary workshop contact. "But at the moment, we see RSI as being at an early stage [and we are] keen to encourage a broad range of methodologies aimed at skill improvement. At the same time [research focused on RSI safety] would be very welcome and strongly self-motivated."
Third, while many different approaches have been put forward as putative methods for achieving RSI, or at least self-improvement, these approaches often present very different technical characteristics, which complicates analysis of their safety issues (or non-issues).
Regardless, the picture presented by historical work, public statements, and more recent research suggests that questions about RSI safety need to be revisited.
Continue reading at foommagazine.org ...