On reflection these were bad thresholds; I should have used maybe 20 years and a risk level of 5%, and likely should have better defined 'transformational'. The correlation is certainly clear here, and the upper right quadrant is clearly the least popular, but I do not think the 4% here is the lizardman constant.

Wait, what? Correlation between what and what? 20% of your respondents chose the upper right quadrant (transformational/safe). You meant the lower left quadrant, right?

I appreciate that you are putting thought into this. Overall I think that "making the world more robust to the technologies we have" is a good direction.

In practice, how does this play out?

Depending on the exact requirements, I think this would most likely amount to an effective ban on future open-sourcing of generalist AI models like Llama2, even when they are far behind the frontier. Three reasons come to mind:

  1. The set of possible avenues for "novel harms" is enormous, especially if the evaluation involves "the ability to finetune [...], external tooling which can be built on top [...], and API calls to other [SOTA models]". I do not see any way to clearly establish "no novel harms" with such a boundless scope. Heck, I don't even expect proprietary, closed-source models to be found safe in this way.
  2. There are many, many actors in the open-source space, working on many, many AI models (even just fine-tunes of LLaMA/Llama2). That is kind of the point of open-sourcing! It seems unlikely that outside evaluators would be able to evaluate all of these, or that all these actors would do high-quality evaluations themselves. In that case, this requirement turns into a ban on open-sourcing for all but the largest & best-resourced actors (like Meta).
  3. There aren't incentives for others to robustify existing systems or to certify "OK, you're allowed to open-source now", in the way there are for responsible disclosure. By default, I expect those steps to just not happen, & for that to chill open-sourcing.

If we are assessing the impact of open-sourcing LLMs, it seems like the most relevant counterfactual is the "no open-source LLM" one, right?

Noted! I think there is substantial consensus within the AIS community on a central claim: that the open-sourcing of certain future frontier AI systems might unacceptably increase biorisks. But I think there is not much consensus on a lot of other important claims, such as which (future or even current) AI systems are acceptable to open-source and which ones would unacceptably increase biorisks if open-sourced.

(explaining my disagree reaction)

The open-source community seems to consistently assume that the concerns are about current AI systems, and that current systems are enough to lead to significant biorisk. Nobody serious is claiming this.

I see a lot of rhetorical equivocation between risks from existing non-frontier AI systems, and risks from future frontier or even non-frontier AI systems. Just this week, an author of the new "Will releasing the weights of future large language models grant widespread access to pandemic agents?" paper was asserting that everyone on Earth has been harmed by the release of Llama2 (via increased biorisks, it seems). It is very unclear to me which future systems the AIS community would actually permit to be open-sourced, and I think that uncertainty is a substantial part of the worry from open-weight advocates.

I agree that they are related. In the context of this discussion, the critical difference between SGD and evolution is somewhat captured by your Assumption 1:

Fixed 'fitness function' or objective function mapping genome to continuous 'fitness score'

Evolution does not directly select/optimize the content of minds. Evolution selects/optimizes genomes based (in part) on how they distally shape what minds learn and what minds do (to the extent that impacts reproduction), with even more indirection caused by selection's heavy dependence on the environment. All of that creates a ton of optimization "slack", such that large-brained human minds with language could steer optimization far faster & more decisively than natural selection could. This is what 1a3orn was pointing to earlier with

evolution does not grow minds, it grows hyperparameters for minds. When you look at the actual process for how we actually start to like ice-cream -- namely, we eat it, and then we get a reward, and that's why we like it -- then the world looks a lot less hostile, and misalignment a lot less likely.

SGD does not have that slack by default: it acts directly on cognitive content (associations, reflexes, decision-weights), without added indirection. If you control the training dataset/environment, you control what is rewarded and what is penalized, and if you are using SGD, then this lets you directly mold the circuits in the model's "brain" as desired. That is one of the main alignment-relevant intuitions that gets lost when blurring the evolution/SGD distinction.
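To make the contrast concrete, here is a toy sketch (my own illustration, not anything from the original discussion): in the first regime the training signal updates the model's weights directly, while in the second an evolution-style outer loop only selects a hyperparameter (a learning rate, standing in for a "genome") and the actual weights come out of an inner learning process it never touches. All names and numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: recover a linear map.
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = X @ true_w

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

# --- SGD-like regime: the training signal acts directly on the weights. ---
w = np.zeros(8)
lr = 0.05
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # gradient of the mean squared error
    w -= lr * grad                           # each step reshapes the "circuits" directly
print("direct gradient descent loss:", mse(w))

# --- Evolution-like regime: selection acts on a hyperparameter ("genome"),
# --- and only indirectly on the weights produced by an inner learning process.
def inner_learning(lr, steps=100):
    w = np.zeros(8)
    for _ in range(steps):
        w -= lr * (2 * X.T @ (X @ w - y) / len(X))
    return mse(w)

population = rng.uniform(0.001, 0.2, size=16)            # candidate learning rates
for _ in range(10):
    losses = np.array([inner_learning(lr) for lr in population])
    parents = population[np.argsort(losses)[:8]]          # keep the best half
    children = np.clip(parents + rng.normal(0, 0.01, size=8), 1e-4, None)  # mutate
    population = np.concatenate([parents, children])      # next "generation"

best_lr = population[np.argmin([inner_learning(lr) for lr in population])]
print("best evolved learning rate:", best_lr)
```

In the first loop, whoever sets the loss and the data is directly sculpting `w`; in the second, selection only ever sees a scalar fitness for each genome, and everything about the resulting weights is mediated by the inner learning process, which is where the "slack" comes from.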

An attempt was made last year, as an outgrowth of some assorted shard theory discussion, but I don't think it got super far:
