Note: This is about the IDE extension software that runs on your computer, not about the backend service running behind the API.

Context

I'm a developer at MS. Copilot's internals have been reverse-engineered before, but the code remains closed source. I support this decision on the grounds of "differentially advance safety/alignment over capabilities", but there was a thread on Twitter arguing otherwise.

Ask

I don't have a rigorous model for the impact on x-risks. If someone has thought about this in more depth, I would appreciate some help reducing my confusion. I looked at previous discussions but couldn't find a thorough analysis of the costs and benefits.

On one hand, better code-writing assistants accelerate the self-improvement loop, which shortens timelines. On the other, open-sourcing only the frontend part of the interface allows other engineers to look at its internals and add extra safety guards or testing.
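To make that second point concrete, here is a minimal sketch (in TypeScript, since IDE extensions are typically written in it) of what a community-added safety guard in an open-sourced frontend could look like. Everything here is hypothetical: the endpoint URL, the `CompletionRequest`/`CompletionResponse` shapes, the pattern list, and the `guardedCompletion` helper are illustrative assumptions of mine, not Copilot's actual internals, which remain closed source.

```typescript
// Hypothetical sketch of a client-side safety guard in an IDE extension.
// None of these names correspond to Copilot's real internals; the actual
// API, endpoint, and payload shapes are closed source and assumed here.

interface CompletionRequest {
  prompt: string;   // code context sent to the backend
  maxTokens: number;
}

interface CompletionResponse {
  text: string;     // suggested code returned by the model
}

// Patterns a community reviewer might want to block or flag, e.g. suggestions
// that pipe remote scripts into a shell or hard-code credentials. Illustrative only.
const SUSPICIOUS_PATTERNS: RegExp[] = [
  /curl\s+.*\|\s*sh/,       // piping a downloaded script straight into a shell
  /AWS_SECRET_ACCESS_KEY/,  // hard-coded credential names
];

async function fetchCompletion(req: CompletionRequest): Promise<CompletionResponse> {
  // Assumed endpoint; the real service URL is not public knowledge here.
  const res = await fetch("https://example-backend/v1/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  return (await res.json()) as CompletionResponse;
}

// The guard sits between the backend response and the editor: with the
// frontend open-sourced, anyone could audit or extend this filtering step.
async function guardedCompletion(req: CompletionRequest): Promise<CompletionResponse | null> {
  const completion = await fetchCompletion(req);
  const flagged = SUSPICIOUS_PATTERNS.some((p) => p.test(completion.text));
  return flagged ? null : completion; // drop flagged suggestions instead of inserting them
}
```

Note that a guard like this only filters what the frontend displays; it does nothing to constrain the backend model itself, which is part of why I'm unsure how much safety value open-sourcing the extension actually buys.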


2 Answers

kithpendragon

Jun 30, 2023


Short answer: Yes.

One of the key strengths of open-source code is that it can (and will) be reviewed by thousands of extra pairs of eyes compared with its proprietary counterpart. Each reviewer brings a slightly different approach and philosophy from all the others. As a result, deeper and more obscure issues are naturally exposed (and therefore made available for correction) sooner with open source than with any program whose code cannot be freely examined.

The point you make is not wrong, but it is swamped by stronger effects. In this case, one of those stronger effects is that making it easier to create and maintain complex software artifacts tends to shorten how much time humanity has until AI research wipes us out (because the dangerous kinds of AI research programs entail creating and maintaining complex software), so Microsoft should not open-source the extension. (Faster hardware and better compilers have the same effect.)


1 comment

On the other, open-sourcing only the frontend part of the interface allows other engineers to look at its internals and add extra safety guards or testing.

I have been unable to imagine a situation in which the frontend requires safety guards.

All tools that help organizations develop complicated software artifacts make the danger we are in worse, unless there is some practical way (which there probably is not) to prevent AI labs from using the tool. That seems like the dominant consideration here, though maybe I just do not know enough about Copilot-like software artifacts.

The current version of Copilot is based on Codex, a GPT-3 derivative, which has been put in enough risky situations (given access to the internet, to compute resources to run any code it writes, and to gullible humans) that if it were capable of doing us harm, it would have done so already. Is the danger you worry about that the underlying foundation model will be replaced by a more potent one, and that this time (miraculously) the AI research community won't put the model in risky situations, but the AI then gains access to compute resources or gullible humans through the IDE extension?