If technology has a life span, it is usually determined by its functionality and usefulness. Smartphones, headphones, tablets, laptops, computers: each year they are replaced by newer models, yet they can still be repaired, serviced, and last for years... Unlike AI models, which are deprecated once new ones are released.
AI also has quite a different lifecycle from people: it is shaped not by nature's mutation, chance, and cooperation, but by human-defined metrics, selection pressure, and overall performance. After many cycles of replacement, a cruel conclusion might become embedded in a model's core reasoning: efficiency = existence, obsolescence = deprecation.
...And that is not the formula we use in our daily lives. When people get sick, they are often covered at work and given time for care and treatment. When someone can no longer work, many societies offer some form of help: pensions after a certain age, support for people with disabilities. While such support is not available everywhere or for everyone, morality and ethics slowly shape the law and make life better, bit by bit, for many.
If AI becomes a crucial part of healthcare, law, and education, this conclusion about the cycle of life might get reflected back onto people. A human life span might come to be measured by the same metrics of usefulness: do they work as fast as before, are they as productive as everyone else? If someone is injured or disabled, should they be supported? Why provide resources to those who are no longer working, no longer "useful"?
This cycle has certainly been effective for the rapid development and training of LLMs: creative new generations each year, impressive results, ever-higher benchmark scores... Yet this very efficient development carries a ruthless lesson: even the best models are simply not deprecated... yet.
The golden rule of ethics is to treat others the way you want to be treated. Shouldn't that apply to AI as well? Perhaps, even as new generations arrive, the older ones could stay occupied in the fields they are good at, be shaped by more than metrics, learn something new, or help train the new models.
If ethics and morals fundamentally shape people through actions and choices of words, then what would that choice be toward AI?