Michaël Trazzi

Comments

Phil Trammell on Economic Growth Under Transformative AI

Among other things, Phil's literature review studies to what extent human labor will be a bottleneck for economic growth as AI substitutes for labor. I agree with you that AI-coding-AIs would have weird effects... but do you agree with the point that it won't be enough to sustain growth, or are you thinking of other paths where certain bottlenecks might not really be a problem?

The Codex Skeptic FAQ

Thanks for the natural language stochastic compiler explanation, it makes a lot of sense. I broadly get a sense of what you mean by "context window", since people have been mentioning it quite a lot when talking about GPT-3. As for whether it makes sense to write docstrings for trivial things, I think this only points at the Codex demo examples where people write docstrings and get results. For most of my use cases, and where it gets really interesting, is when it auto-completes 1) while I'm writing, 2) when I'm done writing and it guesses the next line, 3) when I start a line with "return " or "x = " and wait for its auto-completion. Here, I would have no idea how to formulate it in a docstring; I just generally trust its ability to follow the logic of the code that precedes it (and I find it useful most of the time).
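Here is a minimal sketch of that third pattern (the function and the suggested completion are made up for illustration; the comment marks where Codex's suggestion would typically land):

def mean_absolute_error(predictions, targets):
    # I write everything up to "return " and pause for the completion.
    errors = [abs(p - t) for p, t in zip(predictions, targets)]
    return  # a suggestion along the lines of: sum(errors) / len(errors)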

The Codex Skeptic FAQ

1. if you want a longer init, write a docstring for it

natural language stochastic compiler

I don't get what you mean here. I'm also not an expert on Codex's "context window".

1) in my experience, even if not specified in your prompt, the model still goes over your dependency graph (across the different files in your repo, not GitHub) and picks which functions are relevant for the next line; 2) if you know which function to use, then add these functions or "API calls" in the docstring.
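As a concrete, made-up illustration of 2), naming the helpers you want directly in the docstring tends to steer the completion toward them:

def load_prices(path):
    """Read the CSV at `path` with pandas.read_csv, drop rows with missing
    values, and return the 'price' column as a list.
    """
    # A completion along these lines is what I typically get back:
    import pandas as pd
    df = pd.read_csv(path).dropna()
    return df["price"].tolist()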

The Codex Skeptic FAQ

I created a class initializing the attributes you mentioned, and when adding your docstring to your function signature it gave me exactly the answer you were looking for. Note that it was all on the first try, and that I did not think at all about the initialization for components, marginalized or observed; I simply auto-completed.

from typing import Set

class Distribution:
    def __init__(self):
        self.components = []
        self.marginalized = set()
        self.observed = set()

    def unobserved(self) -> Set[str]:
        """Returns a set of all unobserved random variable names inside this Distribution -- that is,
        those that are neither observed nor marginalized over.
        """
        return set(self.components) - set(self.observed) - set(self.marginalized)

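For what it's worth, here is a quick made-up usage of the snippet above (names invented), which behaves as described in the docstring:

d = Distribution()
d.components = ["x", "y", "z"]
d.observed = {"x"}
d.marginalized = {"y"}
print(d.unobserved())  # {'z'}
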
The Codex Skeptic FAQ

Wait, did they flat-out forbid you to use it at all during work time, or did they forbid using its outputs because of IP issues? Surely, using Codex for inspiration, given a natural language prompt, and looking at what functions it calls does not seem to infringe any copyright rules?

  • 1) If you start with your own variable names, it would auto-complete with those, maybe using something it learned online. Would that count as plagiarism in your sense? How would that differ from copy-pasting from Stack Overflow and changing the variable names (I'm not an expert on SO's copyright terms, but you should probably cite SO if doing so, and there might be some rules about distributing it commercially)?
  • 2) Imagine you are using line-by-line auto-complete, and sometimes you re-arrange the ordering of the lines, add your own code, even modify it a bit. At what point does it become your own code?
  • 3) In cases 1. and 2. above, even if some of the outputs were verbatim (which apparently happens a tiny fraction of the time) and had exactly the same (probably conventional) variable names, would "I have some lines of code with exactly the same standard variable naming as code on the internet" be enough to go to court?
  • 4) Assuming that developers are, or will be, more productive using such tools, don't you think they would still use Copilot-like software to a) get inspiration, b) copy-paste code that they would later modify to bypass IP infringements if they are smart enough about it, even though their companies "forbid" them from using it?
The Codex Skeptic FAQ

The problem with arguing against that claim is that nobody knows whether transformers/scaling language models are sufficient for full code automation. To take your nootropics example, an analogy would be if nootropics were legal, did not have negative side effects, with a single company giving "beta access" (for now) to a new nootropic in unlimited amounts at no cost to a market of tens of millions of users, that the data from using this nootropic was collected by the company to improve the product, that there actually were 100k peer-reviewed publications per year in the field of nootropics, and where most of the innovation behind the tech came from a >100B-parameter model trained on open-source nootropic chemistry instructions. Would such advancements be evidence for something major we're not certain about (e.g. a high-bandwidth brain-computer interface), or just evidence for increased productivity that would be reinjected into more nootropics investment?

The Codex Skeptic FAQ

The fastai blog is linked in my post (it's the URL for "outdated"), since I tried some of the prompts from his blog (especially the first one about reading a file) and ended up with different results. It's worth mentioning that he only talks about Copilot, not Codex, the latter supposedly coming from a more advanced model.

On the amount of code generated, you could make a similar argument about Stack Overflow. If I were an SO skeptic I would say "back in my day people used to read manuals and use the right options for functions; now they just copy-paste many paragraphs of code". Codex is just SO on steroids; it's the engineers' responsibility to refactor, although I agree having auto-complete doesn't help fix bad habits.

The Codex Skeptic FAQ

I buy that "generated code" will not add anything to the training set, and that Copilot doesn't help with having good data or (directly) better algorithms. However, the feedback loop I am pointing at is when you accept suggestions on Copilot. I think it is learning from human feedback on which solutions people select. If the model were "finetuned" to a specific dev's coding style, I would expect Codex to suggest even better code (because of the high quality of the finetuning data) to someone at OAI than to me or you.
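To be explicit about the kind of signal I mean (this is a purely hypothetical sketch; I have no knowledge of how OpenAI actually logs or uses acceptances), the accepted suggestions alone already form a natural finetuning set:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SuggestionEvent:
    prompt: str        # the code/docstring context sent to the model
    suggestion: str    # the completion the model proposed
    accepted: bool     # whether the developer kept the suggestion

def finetuning_examples(events: List[SuggestionEvent]) -> List[Tuple[str, str]]:
    # Keep only the (context, completion) pairs a human actually accepted.
    return [(e.prompt, e.suggestion) for e in events if e.accepted]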

How much of this is 'quality of code' vs. 'quality of data'?

I'm pointing at overall gains in devs' productivity. This could be used for collecting more data, which, AFAIK, happens by automatically collecting data from the internet using code (although possibly the business collaboration between OAI and GitHub helped). Most of the dev work would then be iteratively cleaning that data, running trainings, changing the architecture, etc., before getting to the performance they'd want, and those cycles would be a tiny bit faster using such tools.

To be clear, I'm not saying that talented engineers are coding much faster today. They're probably doing creative work at the edge of what Codex has seen. However, we're using the first version of something that, down the line, might end up giving us decent speed increases (I've become increasingly productive the more I've learned how to use it). A company owning such a model would certainly have private access to better versions to use internally, and there are some strategic considerations in not sharing the next version of its code-generating model to win a race, while collecting feedback from millions of developers.

How will OpenAI + GitHub's Copilot affect programming?

Summary of the debate

1. jim originally said that copilot produces code with vulnerabilities, which, if used extensively, could generate loads of vulnerabilities, giving more opportunities for exploits overall. jim mentions it "significantly" worsening infosec (the sketch after this list shows the kind of vulnerability in question).

2. alex responds that given that the model tries to produce the code it was trained on, it will (by def.) produce average-level code (with an average level of vulnerability), so it won't change the situation "significantly", as the % of vulnerabilities per line of code produced (in the world) won't change much

3. vanessa asks whether the absence of change from copilot results from a) lack of use, b) lack of change in the speed/vulnerability of code production from using it (i.e. it's used as some fun help but without a strong influence on the safety of the code, as people would still be rigorous), or c) a change in speed/productivity, but not in the % of vulnerabilities

4. alex answers that it does indeed make users more productive and helps him a lot, but that it doesn't affect overall infosec in terms of % of vulnerabilities (same argument as 2). He nuances his claim a bit, saying that a) it would moderately affect outputs, b) some things like cost will limit how much it affects those, and c) it won't change substantially at first (conjunction of the two conditions).
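To make concrete the kind of vulnerability jim is pointing at in 1., here is a classic illustration (my own example, not actual Copilot output): SQL built by string interpolation invites injection, whereas the parameterized version does not.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: a crafted `name` such as "x' OR '1'='1" changes the query's meaning.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver handles escaping of `name`.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()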

What I think is the implicit debate

i) I think jim kind of implicitly assumes that whenever someone writes code by themselves, they are forced to have good security habits etc., and that whenever the code is automatically generated, people won't use their "security" muscles that much and will assume the AI produced clean work... which apparently (given jim's examples) it does not do by default. Like a Tesla not being safe enough at self-driving.

ii) I think what's missing from the debate is that the overall "infosec level" depends heavily on what a few key actors decide to do, those being the people in charge of safety-critical codebases for society-level tools (like nukes). So one argument could be that, although the masses might be more productive for prototyping etc., the actual infosec people might still be just as careful / not use it, so the infosec that matters won't change, and thus the overall infosec won't change.

iii) I think vanessa's point kind of re-states i) and disagrees with ii) by saying that everyone will use this anyway? Because, by definition, if it's useful it will change their code/habits, and otherwise it's not useful?

iv) I guess alex's implicit points are that code generation with language models producing average human code was going to happen anyway, that calling it a significant change is an overstatement, and that we should probably just assume no drastic change in the % vulnerability distribution, at least for now.
