Dom Polsinelli

Comments

Dom Polsinelli's Shortform
Dom Polsinelli · 10d

I don't support building even aligned superintelligence. I am hugely in support of cybernetic and genetic enhancements to humans, as well as uploaded minds. Based on your definition of superintelligence, I guess some of those may be considered such. It feels wrong to hand off the keys of the universe to something with no human lineage whatsoever, even if it had something recognizable as human ethics and took care of us. It feels very much like being a kid with doting parents, and that is bad in my eyes.

Is 90% of code at Anthropic being written by AIs?
Dom Polsinelli · 10d

This is interesting and especially relevant to AI risk if we are nearing automation of the research process. 

That said, I am more interested in what fraction of all code being deployed is written by AI. That would be more representative of AGI as it relates to mass unemployment or other huge economic shifts, but not necessarily human disempowerment.

Can We Simulate Meiosis to Create Digital Gametes — and Are the Results Your Biological Offspring?
Dom Polsinelli · 14d

It's not that I want to be fully conjoined with another person, so much as I might prefer that to death in the medium term, to get me over the hump to longevity escape velocity. Also, I kind of always imagined it more like a dialysis machine: we don't grow a whole person so much as a big pile of organs sans brain that is genetically you (or just compatible enough to not cause rejection issues), and you get hooked up to that for a while. Maybe medically induce a coma for a few months out of the year; maybe it will be quick and easy to connect/disconnect and you can be plugged in while sleeping or doing desk work. People always object to my life support clone/flesh pile idea, but again, better than dying imo.

Can We Simulate Meiosis to Create Digital Gametes — and Are the Results Your Biological Offspring?
Dom Polsinelli · 15d

Hijacking this to pick your brain: do you think head transplants onto repeatedly cloned bodies could work as life extension? Even without genetic improvements to increase longevity, I can imagine switching bodies every 20-50 years becoming mundane with nearly modern surgical techniques, provided we can reconnect the nervous system. Related to this, do you think parabiosis would work without all the body switching? I don't know if this is in your wheelhouse exactly, but you mentioned a replacement body and this has been on my mind for a while.

Can We Simulate Meiosis to Create Digital Gametes — and Are the Results Your Biological Offspring?
Dom Polsinelli · 16d

"perhaps both from you"

Minor critique, but I'm pretty sure this is inbreeding at worst and a clone at best, which is not really what you seem to be after.

As to the title, I would give a naive "yes" and would broadly be in support of this idea, technical limitations aside of course. That said, if we actually had this level of control, I feel like we could probably explicitly select the best genes from both parents and not mess around with the randomization.

Don't Mock Yourself
Dom Polsinelli · 18d

I have noticed you posting daily, and I appreciate this post along with several others. It has encouraged me to try more new things. While I am only slowly doing that, this is on the list now.

How do we know when something is deserving of welfare?
Dom Polsinelli · 19d

I think you're right that deserving of welfare should be imagined as a spectrum, and that suffering should be one as well. However, people would still place things radically differently on said spectrum, and that confuses me. As I said, any animal that had LLM-level capabilities would be pretty universally agreed to deserve some welfare. People remark that LLMs are stochastic parrots, but if an actual parrot could talk as well as an LLM, people would be even more empathetic toward parrots. I would be really uncomfortable euthanizing such a hypothetical parrot, whereas I would not be uncomfortable turning off a datacenter mid token generation. I don't know why this is.

I guess all this boils down to your last point: what uniformly present qualities do I look for? It seems that everything I empathize with has a nervous system that evolved. But that seems so arbitrary, and my intuition is that there is nothing special about evolution, even if gradient descent on our current architectures is not a method of generating SDoW. I also feel like formalizing consensus gut checks post hoc is not the right approach to moral problems in general.

Open Thread Autumn 2025
Dom Polsinelli · 24d

They certainly act weird, but not universally so, and no weirder than you act in your own dreams, perhaps not even weirder than someone drunk. We might characterize those latter states as being unconscious or semi-conscious in some way, but that feels wrong. Yes, I know that dreams happen when you're asleep and hence unconscious, but I think that is a bastardization of the term in this case. Also, my intuition is that if someone in real life acted as weirdly as the weirdest dream character did, that would qualify them as mentally ill but not as a p-zombie.

Open Thread Autumn 2025
Dom Polsinelli · 25d

I am curious whether the people you encounter in your dreams count as p-zombies, or if they contribute anything to the discussion. This might need to be a whole post, or it might be total nonsense. When in the dream, they feel like real people, and from my limited reading, lucid dreaming does not universally break this. Are they conscious? If they are not conscious, can you prove that? Accepting that dream characters are conscious seems absurd. Coming up with an experiment to show they are not seems impossible. Therefore, p-zombies?

Dom Polsinelli's Shortform
Dom Polsinelli · 1mo

Does anyone here feel like they have personally made substantial contributions to AI safety? I don't mean converting others such that they worry (although that is important!); I mean more technical progress in alignment matching progress in AI capability. Top posts seem to be skewed toward stating the problem, as opposed to even partial solutions or incremental progress.

Posts

11 · How do we know when something is deserving of welfare? · 20d · 7
2 · Dom Polsinelli's Shortform · 1mo · 5
4 · Thoughts on mentioning whole brain emulation as I apply to grad school? [Question] · 2mo · 1
8 · Straightforward Steps to Marginally Improve Odds of Whole Brain Emulation · 7mo · 20
2 · How do biological or spiking neural networks learn? [Question] · 9mo · 1
2 · How does AI solve problems? · 2y · 0