This is interesting and especially relevant to AI risk if we are nearing automation of the research process.
That said, I am more interested in what fraction of all code being deployed is written by AI. That would be more representative of AGI as it relates to mass unemployment or other huge economic shifts, though not necessarily human disempowerment.
It's not that I want to be fully conjoined with another person, so much as I might prefer that to death in the medium term, to get me over the hump to longevity escape velocity. Also, I kind of always imagined it more like a dialysis machine. We don't grow a whole person so much as a big pile of organs sans brain that is genetically you (or just compatible enough to not cause rejection issues), and you get hooked up to that for a while. Maybe you medically induce a coma for a few months out of the year; maybe it will be quick and easy to connect/disconnect and you can be plugged in while sleeping or doing desk work. People always object to my life support clone/flesh pile idea, but again, better than dying imo.
Hijacking this to pick your brain. Do you think head transplants onto repeatedly cloned bodies could work as life extension? Even without genetic improvements to increase longevity, I can imagine switching bodies every 20-50 years becoming mundane with nearly modern surgical techniques, provided we can reconnect the nervous system. Related to this, do you think parabiosis would work without all the body switching? I don't know if this is in your wheelhouse exactly, but you mentioned a replacement body and this has been on my mind for a while.
perhaps both from you
Minor critique, but I'm pretty sure this is inbreeding at worst and a clone at best, which is not really what you seem to be after.
As to the title, I would give a naive "yes" and would broadly be in support of this idea, technical limitations aside, of course. That said, if we actually had this level of control, I feel like we could probably explicitly select the best genes from both parents and not mess around with the randomization.
I have noticed you posting daily and I appreciate this post along with several others. It has encouraged me to try more new things. While I am only slowly doing that, this is on the list now.
I think you're right that deserving welfare should be imagined on a spectrum, and suffering should be one as well. However, people would still place things radically differently on said spectrum, and that confuses me. As I said, any animal that had LLM-level capabilities would be pretty universally agreed to be deserving of some welfare. People remark that LLMs are stochastic parrots, but if an actual parrot could talk as well as an LLM, people would be even more empathetic toward parrots. I would be really uncomfortable euthanizing such a hypothetical parrot, whereas I would not be uncomfortable turning off a datacenter mid token generation. I don't know why this is.
I guess all this boils down to your last point: what uniformly present qualities do I look for? It seems that everything I empathize with has a nervous system that evolved. But that seems so arbitrary, and my intuition is that there is nothing special about evolution, even if gradient descent on our current architectures is not a method of generating SDoW. I also feel like formalizing consensus gut checks post hoc is not the right approach to moral problems in general.
They certainly act weird, but not universally so, and no weirder than you act in your own dreams, perhaps not even weirder than someone drunk. We might characterize those latter states as being unconscious or semi-conscious in some way, but that feels wrong. Yes, I know that dreams happen when you're asleep and hence unconscious, but I think that is a bastardization of the term in this case. Also, my intuition is that if someone in real life acted as weirdly as the weirdest dream character did, that would qualify them as mentally ill but not as a p-zombie.
I am curious if the people you encounter in your dreams count as p-zombies or if they contribute anything to the discussion. This might need to be a whole post or it might be total nonsense. When in the dream, they feel like real people and from my limited reading, lucid dreaming does not universally break this. Are they conscious? If they are not conscious can you prove that? Accepting that dream characters are conscious seems absurd. Coming up with an experiment to show they are not seems impossible. Therefore p-zombies?
Does anyone here feel like they have personally made substantial contributions to AI safety? I don't mean converting others such that they worry (although that is important!); I mean technical progress in alignment matching progress in AI capabilities. Top posts seem to be skewed toward stating the problem as opposed to even partial solutions or incremental progress.
I don't support building even aligned super intelligence. I am in huge support of cybernetic and genetic enhancements to humans, as well as uploaded minds. Based on your definition of super intelligence, I guess some of those may be considered such. It feels wrong to hand off the keys of the universe to something with no human lineage whatsoever, even if it had something recognizable as human ethics and took care of us. It feels very much like being a kid with doting parents, and that is bad in my eyes.