
Context

This is a short post to illustrate and start testing an idea I have been playing around with over the last couple of days, after listening to a recent 80,000 Hours podcast episode with Mustafa Suleyman. In the episode, both Suleyman and host Rob Wiblin expressed concerns about Open Source AI development as a potential risk to our societies, while implying that Closed Source development would be the only reasonable alternative. They did not delve deeper into the topic to examine their assumptions about what counts as a reasonable alternative in this context, or to look for possibilities beyond the "standard" open source/closed source dichotomy. With this post, I want to encourage our community to join me in reflecting on our own discourse and assumptions around responsible AI development, so that we do not fall into the trap of naively reifying existing categories, and in developing new visions and models better able to address the challenges we will be facing. As a first step, I explore the notion of Regulated Source as a model for responsible AI development.

Open Source vs. Closed Source AI Development

Currently, there are mainly two competing modes of AI development, namely Open Source and Closed Source (see Table 1 for a comparison):

  • Open Source “is source code that is made freely available for possible modification and redistribution. Products include permission to use the source code, design documents, or content of the product. The open-source model is a decentralized software development model that encourages open collaboration. A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public” (Wikipedia).
  • Closed Source “is software that, according to the free and open-source software community, grants its creator, publisher, or other rightsholder or rightsholder partner a legal monopoly by modern copyright and intellectual property law to exclude the recipient from freely sharing the software or modifying it, and—in some cases, as is the case with some patent-encumbered and EULA-bound software—from making use of the software on their own, thereby restricting their freedoms.” (Wikipedia)  

Table 1. Comparison of Open Source vs. Closed Source, inspired by ChatGPT 3.5.

| Criteria | Open Source | Closed Source |
| --- | --- | --- |
| Accountability | Community-driven accountability and transparency | Accountability lies with owning organization |
| Accessibility of Source Code | Publicly available, transparent | Proprietary, restricted access |
| Customization | Highly customizable and adaptable | Limited customization options |
| Data Privacy | No inherent privacy features; handled separately | May offer built-in privacy features, limited control |
| Innovation | Enables innovation | Limits potential for innovation |
| Licensing | Various open-source licenses | Controlled by the owning organization's terms |
| Monetization | Monetization through support, consulting, premium features | Monetization through licensing, subscriptions, fees |
| Quality Assurance | Quality control depends on community | Centralized control for quality assurance and updates |
| Trust | Transparent, trust-building for users | Potential concerns about hidden biases or vulnerabilities |
| Support | Reliant on community or own expertise for support | Reliant on owning organization for support |

 

Both of these modes of software development have been argued to have positive and negative implications for AI development. For example, Open Source has often been suggested as a democratizing force in AI development, acting as a powerful driver of innovation by making AI capabilities accessible to a broader segment of the population. This has been argued to be potentially beneficial for our societies, preventing or at least counteracting the centralization of control in the hands of a few, which poses the threat of dystopian outcomes (e.g., autocratic societies run by a surveillance state or a few mega-corporations). At the same time, some worry that the democratization of AI capabilities may increase the risk of catastrophic outcomes, because not everyone can be trusted to use them responsibly. On this view, centralization is a desirable feature because it makes the situation as a whole easier to control: fewer parties need to be coordinated. A prominent analogy used to support this view is the effort to limit the proliferation of nuclear weapons, with strong AI capabilities seen as comparable in their destructive potential.

Against this background, an impartial observer may argue that both Open Source and Closed Source development models point to potential failure modes for our societies: 

  • Open Source development models can increase the risk of catastrophic outcomes when irresponsible actors gain access to powerful AI capabilities, creating opportunities for deliberate misuse or catastrophic accidents.
  • Closed Source development models can increase the risk of dystopian outcomes when control of powerful AI capabilities is centralized in the hands of a few, creating opportunities for them to take autocratic control over our societies.

This leads to a dilemma that Tristan Harris and Daniel Schmachtenberger have illustrated with the metaphor of a bowling alley, where the two gutters to the left and right of the lane represent the two failure modes of catastrophic and dystopian outcomes.[1] In this metaphor, the only path that can lead us to existential security is a middle path that acknowledges but avoids both failure modes (see Fig. 1). Similarly, given the risk-increasing nature of both the Open Source and Closed Source AI development approaches, an interesting question is whether it is possible to find a middle-ground approach that avoids their respective failure modes.

Fig. 1. The path to existential safety requires avoiding catastrophic and dystopian outcomes.

Regulated Source as a Model for Responsible AI Development

In this section, I begin to sketch out a vision for responsible AI development that aims to avoid the failure modes associated with Open Source and Closed Source development by taking the best of both and leaving behind the worst. I call this vision the "Regulated Source AI Development Model" to highlight that it aims to establish a regulated space as a middle ground between the more extreme Open Source and Closed Source models (cf. Table 2).

As visualized in Fig. 2 and summarized in Table 2, the core idea of the Regulated Source model is to establish a trustworthy and publicly accountable regulating body that defines transparent standards, regulating not only AI use cases but also the behavior of organizations that want to implement those use cases. In particular, such standards could mandate the sharing of code and other knowledge assets relating to the implementation of AI use cases, leveling the playing field between regulated organizations and reducing the chance of AI capability races by lowering the expected benefit of unilateral action. Importantly, this sharing of code and knowledge assets would be limited to organizations (or other actors) that have demonstrated they can meet the transparent standards set by the regulating body, thus balancing the risks associated with the proliferation of potentially dangerous capabilities on the one hand (the failure mode of Open Source) and the centralization of power on the other (the failure mode of Closed Source).

Fig. 2. A Sketch of the Regulated Source AI Development Model.
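To make the access-gating logic concrete, here is a minimal toy sketch in Python. Everything in it (the names RegulatingBody, Organization, Standard, request_asset, and the example standard and asset) is a hypothetical illustration of the idea sketched above, not a reference to any existing system; in practice, certification would rest on independent audits rather than a boolean flag.

```python
# Toy sketch of the Regulated Source gating idea (hypothetical names throughout).
# Access to shared development assets is conditional on demonstrated compliance
# with the transparent standards published by the regulating body.
from dataclasses import dataclass, field


@dataclass
class Standard:
    """A transparent standard published by the regulating body."""
    name: str
    description: str


@dataclass
class Organization:
    """An actor that may seek access to shared AI development assets."""
    name: str
    certified: set[str] = field(default_factory=set)  # names of standards passed


@dataclass
class RegulatingBody:
    """Publishes standards, certifies organizations, and gates access to assets."""
    standards: dict[str, Standard] = field(default_factory=dict)
    shared_assets: dict[str, str] = field(default_factory=dict)

    def publish_standard(self, standard: Standard) -> None:
        self.standards[standard.name] = standard

    def certify(self, org: Organization, standard_name: str, audit_passed: bool) -> None:
        # Stand-in for an independent audit against a published standard.
        if audit_passed and standard_name in self.standards:
            org.certified.add(standard_name)

    def request_asset(self, org: Organization, asset_name: str) -> str | None:
        # Grant access only if the organization meets *all* published standards:
        # broader than Closed Source (any compliant actor qualifies), narrower
        # than Open Source (non-compliant actors are excluded).
        if set(self.standards) <= org.certified:
            return self.shared_assets.get(asset_name)
        return None


# Usage: a lab gains access only after demonstrating compliance.
body = RegulatingBody()
body.publish_standard(Standard("eval-safety", "Mandatory pre-deployment evaluations"))
body.shared_assets["training-code"] = "repository handle for shared training code"

lab = Organization("ExampleLab")
assert body.request_asset(lab, "training-code") is None   # blocked before certification
body.certify(lab, "eval-safety", audit_passed=True)
assert body.request_asset(lab, "training-code") is not None  # granted after certification
```

The design choice doing the work here is the access predicate: eligibility is defined by transparent, auditable criteria rather than by ownership (Closed Source) or by nothing at all (Open Source).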

A real-life example that already comes close to the envisioned Regulated Source model is the International Atomic Energy Agency (IAEA). The IAEA was founded in 1957 as an intergovernmental organization to monitor the global proliferation of nuclear resources and technology, and it serves as a forum for scientific and technical cooperation on the peaceful use of nuclear technology and nuclear power worldwide. To this end, it runs several programs to encourage the safe and responsible development and use of nuclear technology for peaceful purposes and offers technical assistance to countries worldwide, particularly in the developing world. It also provides international safeguards against the misuse of nuclear technology and has the authority to monitor nuclear programs and inspect nuclear facilities. As such, there are many similarities between the IAEA and the envisioned Regulated Source model, the main differences being the domain of regulation and the weaker linkage to copyright law. As far as I am aware, the IAEA does not have the regulatory power to distribute access to privately developed nuclear technology, whereas the Regulated Source model would aim to compel responsible parties to share access to AI development products in an effort to counteract race dynamics and the centralization of power.

Table 2. Characteristics of a Regulated Source AI Development Model.

| Criteria | Regulated Source |
| --- | --- |
| Accountability | Accountability and transparency regulated by governmental, inter-governmental, or recognized professional bodies (cf. the International Atomic Energy Agency (IAEA)) |
| Accessibility of Source Code | Restricted access to an audience that is clearly and transparently defined by regulating bodies; all who fulfill the required criteria are eligible for access |
| Customization | Highly customizable and adaptable within limits set by regulating bodies |
| Data Privacy | Minimum standards for privacy defined by regulating bodies |
| Innovation | Enables innovation within limits set by regulating bodies |
| Licensing | Technology- or application-specific licensing defined by regulating bodies |
| Monetization | Mandate to optimize for public benefit; options include support, consulting, and premium features, but also licensing, subscriptions, and fees |
| Quality Assurance | Minimum standards for quality control defined by regulating bodies |
| Trust | Transparent for regulating bodies, trust-building for users |
| Support | Reliant on regulated organizations for support |

Concluding Remarks

I wrote this post to encourage discussion about the merits of the Regulated Source AI Development Model. While many people may have had similar ideas or intuitions before, I have yet to see significant engagement with such ideas in the ongoing discourse on AI governance (at least as far as I am aware). Much of that discourse has touched on the pros and cons of Open Source and Closed Source models for AI development, but if we look closely, we should realize that focusing only on this dichotomy has put us between a rock and a hard place. Neither model is sufficient to address the challenges we face. We must avoid not only catastrophe but also dystopia. New approaches are needed if we're going to make it safely to a place that's still worth living in.

The Regulated Source AI Development Model is the most promising approach I have come up with so far, but more work is certainly needed to flesh out its implications in terms of opportunities, challenges, and drawbacks. For example, despite its simplicity, Regulated Source seems suspiciously absent from the discussion of licensing frontier AI models. Are there reasons inherent in the idea that explain this? Or is it simply such a niche idea that people do not yet recognize it as relevant to the discussion? Should we do more to promote it, or are there significant drawbacks that would make it a bad idea? Many questions remain, so let's discuss them!

P.S.: I am considering writing up the ideas expressed in this post for an academic journal; reach out if you would like to contribute to such an effort.

  1. ^ Listen to Tristan Harris and Daniel Schmachtenberger on the Joe Rogan Experience Podcast.

