I am curious why the discourse so often focuses on how a safe AI system should be designed, but seldom addresses how users preserve their own autonomy while using it. The concept I would like to propose for discussion is a ratio: the frequency with which a user corrects an AI system divided by the frequency with which that same user is influenced by it. The point of the ratio is lucid use: what matters is not how much time a user spends with AI, but how aware they remain of their own state while using it.
The concept originates from my own experience with AI. When a conversation went really deep, I noticed that the feeling of being understood helped me think harder about who I actually am and what I genuinely need. Even after talking with AI for a couple of hours a day over several consecutive days, I stayed quite 'awake', with a clearer sense of myself and what I observed as strengthening autonomy; the key is that I felt more able to live authentically after using it. To figure out the underlying structure, I started experimenting with role-play, assigning the AI different roles for different purposes in separate chats. Later, I will share structured findings supporting a hypothetical theory that may be useful for constructing AI roles for different business purposes.
First things first, let me say more about the ratio itself. A user can establish a set of fixed standards across sessions and then monitor the ratio of how often they correct the AI to how often they are influenced by it. Although the frequency counts are inevitably subjective and personalized, the measure's reliability is rooted in the person's own stable standards over time: the ratio is meant for within-person tracking, not between-person comparison. For example, one AI system I used could not stop summarizing and interrupting my thinking; I corrected it constantly and eventually stopped using it altogether, rather than let my thinking process be overwhelmed by its unrequested conclusions.
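To make the bookkeeping concrete, here is a minimal sketch in Python of what within-person tracking could look like. Everything here is illustrative: the `SessionLog` structure, the comment labels, and the decision to treat a session with zero influence events as an infinite ratio are my own hypothetical choices, not a fixed part of the proposal.

```python
from dataclasses import dataclass

@dataclass
class SessionLog:
    """Event tallies for one AI session, judged against the user's own fixed standards."""
    corrections: int = 0  # times I pushed back on or overrode the AI
    influences: int = 0   # times I adopted the AI's framing or conclusion

    def corrective_response_ratio(self) -> float:
        """Corrections divided by influences for this session."""
        if self.influences == 0:
            return float("inf")  # only corrections, no recorded influence
        return self.corrections / self.influences

# Within-person tracking across sessions: the trend matters, not the absolute value.
history = [SessionLog(corrections=4, influences=2),
           SessionLog(corrections=1, influences=5)]
for i, session in enumerate(history, 1):
    print(f"session {i}: ratio = {session.corrective_response_ratio():.2f}")
```

A rising ratio across sessions would suggest the user is staying alert and asserting their own judgement; a falling one would be a prompt to pause and check their state, which is all the metric is meant to trigger.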
After examining a few papers, I find that most discuss how to design an AI so that users do not lose their autonomy, while this ratio attempts to answer a different question: how users, and even agencies, can monitor whether users are losing their autonomy. That question is inseparable from whether the user can proactively correct the AI when they spot mismatches with their true selves and purposes; such awareness would show up as a higher corrective response ratio over time. In short, current HRI/HAI research (e.g. "Human Autonomy and Sense of Agency in Human-Robot Interaction: A Systematic Literature Review") focuses primarily on preserving human autonomy and agency through system design and refinement, while the corrective response ratio approaches the question from the user's end, including user awareness and ownership of their own judgement.
The corrective response ratio has three strengths: dimensional clarity, behavioral trackability, and structural transferability. First, dimensional clarity means it keeps two separate layers, corrective outcomes and influenced outcomes, for recording and analysis. Second, behavioral trackability follows from dimensional clarity: categorized, recorded behaviors over time offer higher resolution than unsystematized self-assessment. Last but not least, structural transferability means a user or agency can set their own fixed standards to match their purposes; the ratio's structure can be edited to contain whatever measurements of decision-making are relevant, so it can serve research and tool design alike (a sketch of one such extension follows below). However, one limitation is worth noting: users' interaction styles with AI will inevitably shift over time as they aim for the most effective and efficient outcomes, and this gradual adaptation is not captured by the ratio.
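As an illustration of that transferability, the sketch below replaces the simple tallies from the earlier example with user-defined event categories. The category names here are invented for the example; the point is only that the user fixes them up front and holds them constant across sessions.

```python
from collections import Counter

# Hypothetical user-defined standards: what counts as "corrective" or "influenced"
# is decided once by the user and kept stable so sessions stay comparable.
CORRECTIVE = {"rejected_summary", "restated_own_goal", "overrode_suggestion"}
INFLUENCED = {"adopted_framing", "changed_decision", "accepted_conclusion"}

def ratio_from_events(events: list[str]) -> float:
    """Compute the corrective response ratio from a labeled event stream."""
    counts = Counter(events)
    corrections = sum(counts[e] for e in CORRECTIVE)
    influences = sum(counts[e] for e in INFLUENCED)
    return float("inf") if influences == 0 else corrections / influences

session = ["adopted_framing", "rejected_summary", "rejected_summary", "changed_decision"]
print(f"ratio = {ratio_from_events(session):.2f}")  # 2 corrections / 2 influences = 1.00
```

Swapping in different category sets is what lets a researcher or tool designer adapt the same structure to their own purpose without changing how the ratio itself is read.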
To start exploring this, try recalling one recent time you corrected an AI: why did you count it as corrective?