Aiyen

Comments

Taking Clones Seriously

Depends on the price of creating and raising clones.

Taking Clones Seriously

Not putting unreasonable expectations on the children is very important, and you’re right that not telling the caretakers is potentially a good way to achieve this. Excellent point.

Taking Clones Seriously

How could it be wrong to clone him without his consent? He’s dead, and thus cannot suffer. Moreover, the right to your likeness is to prevent people from being harmed by misuse of said likeness; it doesn’t strike me as a deontological prohibition on copying (or as a valid moral principle to the extent that it is deontological), and he can’t be harmed anymore.

Also, how could anyone have a right to their genome that would permit them to veto others having it? If that doesn’t sound absurd to you prima facie, consider identical twins (or if they’re not quite identical enough, preexisting clones). Should one of them have a right to dictate the existence or reproduction of the other? And if not, how can we justify such a genetic copyright in the case of cloning?

Cloning, at least when the clone is properly cared for, is a victimless offense, and thus ought not be offensive at all.

Taking Clones Seriously

This is very true, and a good reason to clone millions of von Neumanns, rather than just a handful.

Taking Clones Seriously

Shouldn’t the ethics of cloning be the same as the ethics of having children normally? If you could have kids with von Neumann’s capabilities, it’s ethical as long as you raise them right and don’t abuse them. Presumably it’s the same with a clone army of von Neumanns: don’t neglect them, beat them or send them to public schools and it should be fine.

What would it look like if it looked like AGI was very near?

Thanks. The assertiveness was deliberate; I wanted to take the perspective of someone in a post-AGI world saying, “Of course it worked out this way!” In our time, we can’t be as certain; the narrator is suffering from a degree of hindsight bias.

There were a couple of fake breakthroughs in there (though maybe I glossed over them more than I ought to have?). Specifically, there was the bootstrapping from a given model to a more accurate one by looking for implications and checking for alternatives (this is actually very close to the self-play that helped build AlphaGo, as stated, but making it work with a full model of the real world would require substantial further work), and the solution of AI alignment via machine learning with multiple agents seeking to model each other’s values more accurately (which I suspect might actually work, but which is purely speculative).

What would it look like if it looked like AGI was very near?

Why would QC be irrelevant? Quantum systems don't perform well on all tasks, but they generally work well for parallel tasks, right? And neural nets are largely parallel. QC isn't yet at the point where it can help, but especially if conventional computing becomes a serious bottleneck, it might become important over the next decade.

What would it look like if it looked like AGI was very near?

Wouldn't that imply that the trajectory of AI is heavily dependent on how long Moore's Law lasts, and how well quantum computers do?  

Is your model that the jump to GPT-3 scale consumed the hardware overhang, and that we cannot expect meaningful progress on the same time scale in the near future?  

What would it look like if it looked like AGI was very near?

Trillions of dollars for +6 OOMs is not something people are likely to be willing to spend by 2023. On the other hand, part of the reason neural net sizes have consistently increased by one to two OOMs per year lately is advances in running them cheaply. Programs like Microsoft’s ZeRO system aim explicitly at creating nets on the hundred-trillion-parameter scale at an acceptable price. Certainly there’s uncertainty around how well it will work, and whether it will be extended to a quadrillion parameters even if it does, but parts of the industry appear to believe it’s practical.
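
As a rough sanity check on where a "trillions of dollars for +6 OOMs" figure comes from, here is a back-of-the-envelope sketch; the base training cost and the assumption that cost scales linearly with compute are illustrative assumptions of mine, not figures from the thread:

```python
# Back-of-the-envelope: what +6 OOMs of training compute might cost
# if cost scaled linearly with compute. Both constants are illustrative
# assumptions, not figures from the original discussion.

BASE_TRAINING_COST_USD = 5e6   # assumed rough cost of a GPT-3-scale training run
OOM_INCREASE = 6               # "+6 OOMs" more compute

scaled_cost = BASE_TRAINING_COST_USD * 10 ** OOM_INCREASE
print(f"Naively scaled cost: ${scaled_cost:,.0f}")  # on the order of $5 trillion
```

Efficiency work like ZeRO effectively lowers the cost per parameter, so the linear-scaling number above should be read as an upper bound under those assumptions rather than a forecast.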

What would it look like if it looked like AGI was very near?

That’s why I had it that general intelligence is possible at the cat level. That said, it doesn’t seem too implausible that there’s a general intelligence threshold around human-level intelligence (not brain size), which raises the possibility that achieving general intelligence becomes substantially easier with human-scale brains (and would explain why evolution achieved it with us, rather than earlier or later).

This scenario is based on the Bitter Lesson model, in which size is far more important than the algorithm once a certain degree of generality in the algorithm is attained. If that is true in general, while evolution would be unlikely to hit on a maximally efficient algorithm, it might get within an order of magnitude of it.
