All of Ocracoke's Comments + Replies

I think (and I hope) that something like "maximize positive experiences of sentient entities" could actually be a convergent goal of any AI capable of reflecting on these questions. I don't think humans gravitate toward this kind of utility maximization merely because they evolved some degree of pro-sociality. Instead, something like this seems to be the only thing inherently worth striving for, in the absence of any other set of values or goals.

The grabby-aliens-type scenario in the first parable seems like the biggest threat to the idea t...

It's not clear to me that it's necessarily possible to reach a point where a model can achieve rapid self-improvement without expensive training or experimentation. Evolution hasn't figured out a way to substantially reduce the time and resources required for any one human's cognitive development.

I agree that even in the current paradigm there are many paths towards sudden capability gains, like the suboptimal-infrastructure scenario you pointed to. I just don't know if I would consider that FOOM, which in my understanding implies rapid recursive self-impro...

We might be able to use BCIs to enhance our intelligence, but it's not entirely clear to me how that would work. What parts of the brain would they connect to?

What's easier for me to imagine is how BCIs would let an AGI take control of human bodies (and the bodies of other animals). Robotics isn't nearly as close to outperforming human bodies as AI is to outperforming human minds, so controlling human bodies by replacing the brain with a BCI that connects to all the incoming and outgoing nerves might be a great way for an AGI to navigate the physical world.

I think it's not clear at all that the average animal in the wild has a life of net negative utility, nor do I think it's clear that the average present-day human has a life of net positive utility.

If you compare the two, wild animals probably have more gruesome deaths and starve more often, but most of the time they might be happier than the average human, since they live in the environment they evolved for.

especially for the vast majority of animals who give birth to thousands of young of which on average only 2 will ever reach adulthood

Most animals to wh...

I recently articulated similar ideas about motherly love. I don't think it's an example of successful alignment, in the sense of evolution's goals being aligned with the mother's goals. In the example you give, where a child loses their gonads at age 2, it would be an alignment failure (from evolution's perspective) if the mother continued devoting resources to the child. In reality she would continue, because with motherly love, evolution created an imperfect intermediate goal that is generally, but not always, the same as the goal of spreading one's genes.

I totally agree that motherly love is not ...