The concept of good and bad is subjective by nature, and as far as we know, only living creatures have the subjective experience required to differentiate good from bad intrinsically. We can see predator-prey relationships in the fossil record as far back as 2.7 billion years ago, long before intelligence existed. Therefore, subjective experience, and the concept of good and bad that it allows, is independent of intelligence, and we have no reason to believe AI will develop it spontaneously merely by being intelligent. Rather, AI will derive good and bad from the goals, parameters, and information we give it.
By creating a relationship with AI in which it attempts to act on our desires, it becomes like a prosthetic brain rather than a separate, competing entity. We cannot align AI this way at the organism level because individual people conflict with each other all the time, so we must align AI at what I call the superorganism level. We can separate life into four strata: atoms, cells, organisms, and superorganisms, where each level is built from members of the previous one. The human superorganism is alive and made of us, just as we are alive and made of cells. By observing how the human body works and drawing parallels, we can determine how the superorganism should be built.
The human organism is made of the functions that its cells provide to keep it alive. It doesn't matter which red blood cell carries oxygen or which muscle fiber fires to bring food to your mouth, as long as the function is fulfilled. By the same logic, the superorganism is made of the functions that keep civilization alive. By using AI to improve the functions that sustain society, we can take the first steps toward aligning AI to the superorganism. From there, we need to connect immediate action to an ideal long-term goal. What is the ideal "good" that we can build toward?
We cannot tolerate any error in our long-term goal for alignment, because any nonzero error will compound over time. To choose the goal that AI will work toward, we can draw a vector from the beginning of life through the present to find the metric that evolution has always favored: the capacity to act. By aligning AI to maximize our capacity to act at the societal level, we are leveraging four billion years of evolution to set a zero-error goal for alignment.