Why is regulation ungood? I want to understand why other LWers think regulation is unwanted. Algorithms can only be evaluated for safety if they are fully disclosed. I know there are many arguments against regulation:

  • Nobody wants to disclose algorithms and test data.
  • Nobody wants projects being delayed.
  • Nobody wants to pay extra costs for external, independent safety certification.
  • Developers do not want to "waste" their time with unproductive side issues.
  • Nobody wants to lose against a non-regulated competitor.
  • Safety concepts are complicated to understand and complex to implement.
  • Safety consumes performance and adds cost.

BUT: We are ALL facing an existential risk! Once algorithms manage to influence political decision making, we will no longer even have the chance to lay down such regulations in law. We have to prepare the regulatory field now! We should begin with a public debate, as Nick Bostrom, Stephen Hawking, Elon Musk and many others have already done. Today only a few ppm of the population know about these issues, and even top researchers are unaware of them. At the very least, a lecture on AI safety issues should become compulsory for IT, engineering, mathematics and physics students all over the world.

In biotechnology, Europe and especially Germany imposed strict regulations. The result was that even German companies joined or created subsidiary research companies in the US or UK, where regulations are minimal. This is no prototype solution for the Control Problem.

Local separation might work for GMOs - for AGI it definitely will not. AGI will be a game changer: whoever is second has lost. If the US and EU imposed AI regulations and China and Israel did not, where would the game winner come from? We have to face the full complexity of our world, dominated by multinational companies and their agendas. We should prepare a way to make regulation effective and acceptable for 192 countries and millions of companies. The only binding force among us all is the existential risk. There are viable methods to make regulation work: silicon chip manufacturing luckily requires fabs that cost billions of dollars. That is a centralised point where regulation could be made effective. We could push for hardware tripwires and enforce the use of certified AI safeguard tools that must interact with this special hardware. We can do it much as the content industry did when it pushed hardware manufacturers to implement DRM hardware and software.

The trouble is: nobody to this point has a clear idea what a globally acceptable regulation could look like, how it could work technically, how it could be made effective, or how it could be monitored.

Laying out a framework for how global regulation could be designed is, to me, one core element of AI safety engineering. The challenge is to find a level of abstraction high enough to cover all conceivable developments. From this framework, a body of AI safety engineers should derive detailed regulations that can be applied by AI developers, testers and AI safety institutions.

The TÜV ("Technischer Überwachungs-Verein") was founded in Germany after several steam engine boilers exploded with severe casualties. Against the background of newspaper coverage of these accidents and public pressure, the boiler manufacturers accepted the enforcement of technical steam boiler regulations and time- and money-consuming test procedures.

We cannot try out two or three Singularities and then change our mind on regulation.

As there are so many reasons why nobody in the development process wants regulation, the only way is to enforce it through a political process. To start this we need professionals with AI experience.

Meta: Whenever I ask for regulation I get downvoted. Therefore I have disconnected this point from my previous one. Please downvote only with an accompanying comment.

Why is regulation ungood?

Because all regulation does is redistribute power between fallible humans.

We should prepare a way to make regulation effective and acceptable for 192 countries and millions of companies.

Who is that "we"?

We can do it much as the content industry did when it pushed hardware manufacturers to implement DRM hardware and software.

LOL. So, do you think I have problems finding torrents of movies to watch?


To contribute to AI safety, consider doing AI research

by Vika · 1 min read · 16th Jan 2016 · 39 comments

Among those concerned about risks from advanced AI, I've encountered people who would be interested in a career in AI research, but are worried that doing so would speed up AI capability relative to safety. I think it is a mistake for AI safety proponents to avoid going into the field for this reason (better reasons include being well-positioned to do AI safety work, e.g. at MIRI or FHI). This mistake contributed to me choosing statistics rather than computer science for my PhD, which I have some regrets about, though luckily there is enough overlap between the two fields that I can work on machine learning anyway. I think the value of having more AI experts who are worried about AI safety is far higher than the downside of adding a few drops to the ocean of people trying to advance AI. Here are several reasons for this:

  1. Concerned researchers can inform and influence their colleagues, especially if they are outspoken about their views.
  2. Studying and working on AI brings understanding of the current challenges and breakthroughs in the field, which can usefully inform AI safety work (e.g. wireheading in reinforcement learning agents).
  3. Opportunities to work on AI safety are beginning to spring up within academia and industry, e.g. through FLI grants. In the next few years, it will be possible to do an AI-safety-focused PhD or postdoc in computer science, which would kill two birds with one stone.

To elaborate on #1, one of the prevailing arguments against taking long-term AI safety seriously is that not enough experts in the AI field are worried. Several prominent researchers have commented on the potential risks (Stuart Russell, Bart Selman, Murray Shanahan, Shane Legg, and others), and more are concerned but keep quiet for reputational reasons. An accomplished, strategically outspoken and/or well-connected expert can make a big difference in the attitude distribution in the AI field and the level of familiarity with the actual concerns (which are not about malevolence, sentience, or marching robot armies). Having more informed skeptics who have maybe even read Superintelligence, and fewer uninformed skeptics who think AI safety proponents are afraid of Terminators, would produce much needed direct and productive discussion on these issues. As the proportion of informed and concerned researchers in the field approaches critical mass, the reputational consequences for speaking up will decrease.

A year after FLI's Puerto Rico conference, the subject of long-term AI safety is no longer taboo among AI researchers, but remains rather controversial. Addressing AI risk over the long term will require safety work to be a significant part of the field, and close collaboration between those working on safety and capability of advanced AI. Stuart Russell makes the apt analogy that "just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, issues of control and safety will become central to AI as the field matures". If more people who are already concerned about AI safety join the field, we can make this happen faster, and help wisdom win the race with capability.

(Cross-posted from my blog. Thanks to Janos Kramar for his help with editing this post.)
