All of Julian Bradshaw's Comments + Replies

AI Training Should Allow Opt-Out

Hence, GitHub is effectively using the work others made (for personal or non-commercial use, without having GitHub in mind, and without any way to say 'no') to sell a product back to them, for their own profit.

How different is this from a human learning to code by reading public code on the internet, then selling their own software? I suspect Copilot directly copies code less often than humans do! GitHub claims that "about 1% of the time, a suggestion may contain some code snippets longer than ~150 characters that matches the training set", an...
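To make that "~1%" figure concrete, here is a minimal sketch of how one might estimate the rate of long verbatim matches between model suggestions and a training corpus. This is my own illustration, not GitHub's methodology; only the ~150-character threshold comes from their statement, and the function names are invented.

```python
# Toy estimate of how often suggestions contain a long verbatim match
# against a training corpus, in the spirit of GitHub's "~1% of suggestions,
# ~150 characters" claim. Illustration only, not GitHub's actual method.

MATCH_THRESHOLD = 150  # characters, per GitHub's stated cutoff


def longest_common_substring(a: str, b: str) -> int:
    """Length of the longest substring shared by a and b (O(len(a) * len(b)))."""
    best = 0
    prev = [0] * (len(b) + 1)  # prev[j]: common suffix length of a[:i-1] and b[:j]
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best


def verbatim_match_rate(suggestions: list[str], corpus: list[str]) -> float:
    """Fraction of suggestions sharing >= MATCH_THRESHOLD chars with any corpus doc."""
    if not suggestions:
        return 0.0
    hits = sum(
        any(longest_common_substring(s, doc) >= MATCH_THRESHOLD for doc in corpus)
        for s in suggestions
    )
    return hits / len(suggestions)
```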

Gerald Monroe (1 karma, 1mo)
How is it different? Obviously a person wanting to opt out has a way: keep their source closed. Copilot, and all the hundreds of AIs that will come after it, can only see source they can reach.

Simplicity, Precedent: I don't see an argument here. Anyone trying to make a more competitive coding AI is going to ignore any of these flags not enforced by law.

Ethics: big companies are obviously in favor of becoming even bigger; they didn't get that way by holding back from opportunities to win due to their scale.

Risk / Risk Compensation: these don't apply to the current level of capabilities for coding agents.
M. Y. Zuo (2 karma, 1mo)
Good point. This exposes the true opposing argument, which is that people fear the strong (i.e. Microsoft) becoming even stronger via profits, influence, etc. But they don't mind people like themselves, presumably quite weak in comparison to Microsoft, becoming stronger. Same as with punching up versus punching down, and so on.
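Since the thread keeps returning to machine-readable opt-out flags, here is a minimal sketch of what a crawler that voluntarily respects such a flag could look like. The `.no-ai-training` marker file is an invented convention for illustration; no such standard is established here, and, per Gerald Monroe's point, nothing forces a crawler to run this check.

```python
# Hypothetical opt-out check a well-behaved training crawler might run
# before ingesting a repository. The ".no-ai-training" marker file is an
# invented convention for illustration, not an existing standard.
from pathlib import Path


def repo_opted_out(repo_root: str) -> bool:
    """True if the repository carries the (hypothetical) opt-out marker."""
    return (Path(repo_root) / ".no-ai-training").exists()


def collect_training_repos(candidate_roots: list[str]) -> list[str]:
    """Keep only repositories whose owners have not opted out."""
    return [root for root in candidate_roots if not repo_opted_out(root)]
```

As the replies note, a flag like this binds only cooperative actors; any real enforcement would have to come from law or licensing, not from the file itself.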
Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc

Humans can currently improve uninterpretable AI, and at a reasonably fast rate. I don't see why an AI couldn't do the same (i.e. improve itself, improve a copy of itself, or design a new, improved AI).

avturchin (4 karma, 2mo)
One difference is that GPT-n doesn't know its goals in explicit form, as they are hidden inside uninterpretable weights. And it will need to solve the alignment problem for each new version of itself, which will be difficult if the next versions are themselves uninterpretable. Therefore, it will have to spend more effort on self-improvement: not just rewriting its own code, but training many new versions. Such slow self-improvement will be difficult to hide.
Save Humanity! Breed Sapient Octopuses!

Why not breed for compatibility with human values as well? We could then study the differences between various degrees of "aligned" cephalopods and wild cephalopods.

It might be easier than selecting for intelligence, too; humanity has successfully modified dogs, cats, and livestock to provide more positive utility to humans than their wild counterparts, but to my knowledge hasn't drastically increased the intelligence of any wild animal, despite there plausibly being benefits to doing so with, e.g., horses or dogs. (The toy selection sketch after this comment illustrates the underlying math.)

Breeding for human values also limits some of the downsides of this plan, like the chance of ending up conquered by unaligned octopuses.
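As a rough feasibility sketch for selecting on any heritable trait, whether docility, value-compatibility, or intelligence, here is a toy truncation-selection simulation built on the breeder's equation R = h²S. The heritability and selection fraction are made-up parameters for illustration, not measured cephalopod values.

```python
import random
import statistics

# Toy truncation selection illustrating the breeder's equation
# R = h^2 * S (response = heritability * selection differential).
# All parameters below are illustrative assumptions, not cephalopod data.

HERITABILITY = 0.4   # assumed narrow-sense heritability of the trait
KEEP_FRACTION = 0.2  # breed only the top 20% each generation
POP_SIZE = 1_000
GENERATIONS = 10


def next_generation(trait: list[float]) -> list[float]:
    """Select the top KEEP_FRACTION, shift the offspring mean by h^2 * S."""
    parents = sorted(trait, reverse=True)[: int(len(trait) * KEEP_FRACTION)]
    selection_differential = statistics.mean(parents) - statistics.mean(trait)
    offspring_mean = statistics.mean(trait) + HERITABILITY * selection_differential
    # Offspring scatter around the new mean with environmental noise.
    return [random.gauss(offspring_mean, 1.0) for _ in range(POP_SIZE)]


population = [random.gauss(0.0, 1.0) for _ in range(POP_SIZE)]
for gen in range(1, GENERATIONS + 1):
    population = next_generation(population)
    print(f"gen {gen}: mean trait = {statistics.mean(population):+.2f}")
```

With these made-up numbers each generation gains roughly h² × S ≈ 0.4 × 1.4σ ≈ 0.56σ, which is why selection on temperament-like traits (as in the dog/cat/livestock examples above) can move quickly; whether intelligence is similarly heritable and selectable is a separate empirical question.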

Yair Halberstadt (4 karma, 4mo)
Save Humanity! Breed Sapient Octopuses!

Re: level of effort, some brief googling tells me that there has been some interest in breeding octopuses for food, but it has proven quite difficult, particularly handling the larvae. The BBC reports that the current state of the art is a promise that farmed octopus will be on the market in 2023.

Book Review: Being You by Anil Seth

That's easier for me to understand than Aaronson's, thanks. Interestingly, the author of that blog post (Jake R. Hanson) seems to have just published a version of it as a proper scientific paper in Neuroscience of Consciousness this past August... a journal whose editor-in-chief is Anil Seth, the author of the book reviewed above! Not sure if it comes up in the book; even though the book was published just this September, the paper is probably too recent to have made it in, unfortunately.

Alexander (7 karma, 9mo)
I love the title of that paper. Formalising falsification for theories of consciousness is exactly what the consciousness space needs to maximise signal and minimise noise. Thank you for sharing it! I'm going to give that paper a read. I'm very curious about how J. R. Hanson defines "consciousness": to falsify a theory, we first need to be precise about what it must predict. I am fairly certain that Anil Seth did not mention either of these incisive knock-downs of IIT in the book, though I could've missed it. The reason I'm so certain is that Seth spoke about IIT with admiration and approval; I'm sure he would've updated had he encountered them.