I think it won't be easy to modify the genome of individuals to achieve predictable outcomes even if you get the machinery you describe to work.
Is this because of factors like the almost-infinite number of interactions between different genes, such that even with a hypothetical magic technology to arbitrarily and perfectly change the DNA in every cell in the body, it wouldn't be possible to predict the outcome of such a change? Or is it because you don't think that any machinery will ever be precise enough to make this work well enough? Or some other issue entirely?
What I meant is changing the genetic code in ~all of the cells in a human body. Or some sort of genetic engineering which has the same effect as that.
Here's one model I have as to how you could genetically engineer a living human:
Many viruses are able to reverse-transcribe their RNA into DNA and integrate that DNA into a cell's genome. Random integration causes a lot of problems for cells, but there are (probably) large regions of the genome where inserting new DNA would be harmless. I don't think it would be difficult to target insertion to those regions, since sequence-specific DNA-binding proteins could be fused to the DNA-insertion proteins.
This sort of technology requires only getting RNA into a cell. There are already several ways to deliver RNA into cells, such as "edited" viruses and lipid nanoparticles, and more might be developed.
I also believe targeting somatic stem cells for modification via cell-type-specific surface proteins is possible. If not, we could instead cause the modified cells to revert to a stem-cell state (by inducing expression of the Yamanaka factors, etc.).
The stem cells will differentiate and eventually replace (almost all) unmodified cells.
The resulting technology would allow arbitrary insertion of genetic code into most somatic cells (neurons might not be directly targetable, but perhaps engineering glia or similar cells could reach them indirectly). Using CRISPR-like technologies rather than reverse transcription, we could also do arbitrary mutation, gene knockout, etc.
I guess this is still somewhat handwavey. Speculating on future technology is always handwavey.
I think cultural evolution will be the greater factor by a large margin. I think the technology for immortality is possible, but that it will either directly involve genetic engineering of living humans or be one or two steps away from it. People who are willing to take an immortality drug are very likely to also be willing to improve themselves in other ways. If the Horde is somehow going to outcompete them due entirely to beneficial mutations, the Imperium could simply steal those mutations.
Thanks! I get your arguments about "knowledge" being restricted to predictive domains, but I think it's (mostly) just a semantic issue. I also don't think the specifics of the word "knowledge" are particularly important to my points, which is what I attempted to clarify at the start; but I've clearly typical-minded and assumed that of course everyone would agree with me about a dog/fish classifier having "knowledge", when it's more of an edge case than I thought! Perhaps a better version of this post would have either tabooed "knowledge" altogether or picked a more obviously-knowledge-having model.
This is a pretty strong indication of immune escape to me, if it persists in other outbreaks. If this was purely from increased infectiousness in naive individuals it would imply an R-value (in non-immune populations) of like 40 or something, which seems much less plausible than immune escape. I don't know what the vaccination/infection rates are in these communities though.
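The back-of-envelope reasoning behind the "R-value of like 40" claim can be sketched as follows. The effective-R and immunity figures below are hypothetical illustrations chosen to reproduce that claim, not measured data from these communities:

```python
# Back-of-envelope: the implied R0 if spread in a highly immune community
# were explained by raw infectiousness alone, with no immune escape.
# Assumes immunity fully blocks transmission, so R_eff = R0 * (1 - immune_fraction).

def implied_r0(observed_r_eff: float, immune_fraction: float) -> float:
    """Solve R_eff = R0 * (1 - immune_fraction) for R0."""
    return observed_r_eff / (1.0 - immune_fraction)

# Hypothetical numbers: an observed effective R of ~4 in a community
# that is ~90% immune (via vaccination or prior infection).
print(round(implied_r0(4.0, 0.9), 1))  # → 40.0
```

An R0 of ~40 would far exceed any known respiratory pathogen, which is why immune escape looks like the more plausible explanation than infectiousness alone.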
The UK has just switched their available rapid Covid tests from a moderately unpleasant one to an almost unbearable one. Lots of places require them for entry. I think the cost/benefit makes sense even with the new kind, but I'm becoming concerned we'll eventually reach the "imagine a society where everyone hits themselves on the head every day with a baseball bat" situation if cases approach zero.
My current belief on this is that the greatest difficulty is going to be finding the "human values" in the AI's model of the world. Any AI smart enough to deceive humans will have a predictive model of humans which almost trivially must contain something that looks like "human values". The biggest problems I see are:
1: "Human values" may not form a tight abstracted cluster in a model of the world at all. This isn't so much a conceptual issue, since in theory we could just draw a more complex boundary around them, but it makes things practically more difficult.
2: It's currently impossible to see what the hell is going on inside most large ML systems. Interpretability work might be able to allow us to find the right subsection of a model.
3: Any pointer we build to the human values in a model also needs to be stable to the model updating. If that knowledge gets moved around as parameters change, the computational tool/mathematical object which points to it needs to be able to keep track of that. This could include sudden shifts, slow drift, or a model breaking up into smaller separate models.
(I haven't defined "knowledge" here; I'm not very confused about what it means to say "knowledge of X is in a particular location in the model", but I don't have space here to write it all up.)
Very good point. Perhaps there just intrinsically is no way of doing something that this community perceives as "burning" money, without upsetting people.
Having now had a lot of different conversations on consciousness I'm coming to a slightly disturbing belief that this might be the case. I have no idea what this implies for any of my downstream-of-consciousness views.
I'm confident your model of Eliezer is more accurate than mine.
Neither the Twitter thread nor his other writings originally gave me the impression that he had a model in that fine-grained detail. I was mentally comparing his writings on consciousness to his writings on free will. Reading the latter made me feel like I strongly understood free will as a concept, and since then I have never been confused; it genuinely reduced free will as a concept in my mind.
His writings on consciousness have not done anything more than raise that model to the same level of possibility as a bunch of other models I'm confused about. That was the primary motivation for this post. But now that you mention it, if he genuinely believes that he has knowledge which might bring him (or others) closer to programming a conscious being, I can see why he wouldn't share it in high detail.