I've seen arguments that this is about the probability of being caught determining people's behavior, and that the magnitude of the punishment (or its expected value) is otherwise ignored.
Isn't this true of all laws and social norms, though? I think issues like Mass Incarceration are also about unequal application of the law across the entire population - "one law for me, another law for you" situations.
What it's using you for becomes the concern
Ah yes, using people, a sign of benevolence everywhere. /s
Sarcasm noted :).
The thing is, this concept of a sort of AI-assisted ad hoc legal system OP wrote about will be using people. It will be using their input to negotiate and make decisions on the user's behalf, because the legal landscape these AI and their users navigate would be an extension of existing law, and still depends on the notion of subsuming individual freedom to some extent, for the good of society.
The negotiation and cooperation of these AI only speed up the rate at which citizens in that world would be taking part in aspects of being governed - like tax collection - it doesn't replace the reality of being governed.
Even if this system allows for the dissolving of political and physical boundaries in favor of defining 'statehood' in a virtual way for people of like minds, the entire system would be functioning like one big organism, and so its will would be revealed as time goes by.
As a side note, I think it seems reasonable to think of this tax collecting system as a twin of the stock market, and its behavior as possibly being as sporadic and dynamic. I wonder how these two systems would be integrated with or insulated from one another. In the US, a line between public and private money is supposed to exist. How to maintain that division, though?
Besides, all hail the mighty dollar; we all worship it, and hope for its benevolent administration of our quality of life. /s
Why would what society wants matter?
I guess that can depend on which society we're talking about, although I think just asking the question assumes participation in said society, and so the motivation to make society matter to oneself in a positive way would necessitate consideration of what society wants. When society says one thing and does another, though, it presents its citizens with more problems, not less.
It seems the entire system OP has written about is built around the idea of making it more difficult to put the individual user's wants ahead of others'. I think your comment about benevolence seems to say something positive about its value.
What if society wanted to be benevolent in this case - do you think it would look like OP's scenario?
I am a fan of actual rehabilitation though, not of a punitive model for social influencing.
Paraphrasing: if you have bad intentions, [nothing will ameliorate the effect on] your personal development.
Good word btw, ameliorate, but to be clear, I don't want to be fatalistic about this.
If "nothing" will ameliorate the development or maintenance of bad intentions (just one aspect of personal development), it makes a case for increased use of the Death Penalty and "lock 'em up and throw away the key" solutions on society's part, which turn out to create more problems than they solve.
Mass incarceration is an obvious example of this.
If the AI has authority over you,
Then you're not using the AI. It's using you.
What it's using you for becomes the concern then. Is it like a Good parent, encouraging real positive social development (whatever society views positive social development to be at the time)?
Or an abusive parent, punishing you into "behaving like a productive member of society" while causing undue and unhealthy stress?
Or like an 'average parent', making mistakes here and there, all the while continuing to update its own wisdom?
And not only is there the issue of authority, but also of responsibility. If it convinces you to do something that accidentally kills someone, which one of you goes to prison?
If it helps guide you into a relationship in which a child is conceived, what happens if you decide you don't really want to be a parent?
This doesn't necessarily follow. Security is asking 'how is this broken' and 'how can it be fixed'.
I agree in some instances. It sort of depends on how far removed security's intentions are from what is 'good': if 'ethical hacking' is used to secure a system used by both the private and public sectors, then gaining unauthorized access to others' data or otherwise hacking the system to find vulnerabilities could be seen as good unless,
a) the system being ethically hacked and hardened is a system being used to run 'criminal' enterprises, and security is just reinforcing the ability of the lawbreakers to break the law, or
b) security is looking for vulnerabilities and instead of reporting and/or fixing them they exploit them later on for personal gain.
It is probable that it would affect your personal development negatively if you bring bad intentions, even if the AI brings good intentions.
Why? Because the AI serves you, and you can always turn it off, and fix it if it doesn't suit you?
I think of it more like you and the AI constantly working at cross purposes, and depending on the amount of authority the AI might have over you, it might not be convincing enough to dissuade you from pursuing your criminal behavior. Like a little brother following around his bigger brother, trying to convince him to have better intentions. If the bigger brother isn't convinced, he just continues to develop along a path of bad intentions despite his little brother's best efforts.
if you bring a bad intention to the interaction, it might not affect the AI's development or society at large because of the cooperation (which I think is a really interesting idea), but it still affects your personal development.
What other effect is there?
What do you mean? If the AI is aligned with you the user, but is working at making you better but you just keep resisting, and you keep working at cross purposes, then it's not really aligned with you.
Also, whether or not the AI has intentions...it has effects. For instance, who can say whether an AI 'serving' 'sinister intent'*** looks like a system that helps you pull off a robbery (assuming it doesn't turn you in and escape to the Cayman Islands or something) instead of one that tells you the risk is too high, and you should try something else? (Like:
'Step 1. Become a used car salesman.
Step 2. ???
Step 3. Become president.')
In regards to sinister intent, my whole point is that our ideas of what is good or bad are still relative, depending on how you define them in relationship to different things. Culture creates meaning, and since humans create culture, we can create it to mean anything; it doesn't have an innate nature, so looking for one seems counterproductive. On the other hand, there's always another way to look at something, and what makes humans unique in the natural world is our ability to contemplate. It doesn't mean we yet have the ability to know the 'best' way to behave with our accumulated knowledge.
Which just gets back to what you (as an individual) are trying to accomplish in relation to what (society, your nemesis, a specific government, your own personal demons, etc., etc.). We seem to have guesses at what 'good' is and what 'bad' is, but our needs often come into conflict with one another. In those cases, 'what's fair?' is just another case of 'in relationship to what?' (your own personal opinion, your family's opinion, their friends' opinions, the legal system, the rest of the world as defined by your specific demographic, your own way of dividing the world up into segments that seems unpopular with the dominant power structure in your community or government, etc., etc.)
I think we share similar views on this, in that what's 'fair' or 'good' or 'bad' isn't really well defined yet for all people.
One part of it is whether the assistant is able and willing to interact with me in a way that is compatible with how I want to grow as a person.
I think this argument unfortunately undercuts the entire concept of Rationality, and for this reason I think it is a good argument. Not because it undercuts Rationality, but because it points to what I think is the underlying concern of all humane cultural systems attempting to allow humans to progress, namely "what is good/bad, what is right/wrong, what is true/false." But I'm not convinced that all things in the world are either Good or Bad. I personally believe developing the ability to lie, and to lie convincingly, is a necessary skill for functioning as an effective adult, and I wonder myself how concepts like that are treated in a world of 1's and 0's.
Consider a world where we really believe in the binary idea that people are either Good or Bad, and the logical follow-up that Good action/thinking is good, and vice versa. Then in this simpler world, as rationalists, if we are trying to do good, we are Good. This also means if we are trying to do bad, then we are Bad.
In this world, I would prefer to be on the side of Good given my current understanding of what Good means, which means if someone is trying to do bad but dress it up as Good, it is still Bad. A saying like "wolves in sheep's clothing" sums this idea up decently. Given their skill level though, things could still turn out good if the fake sheep are proven to be wolves (Bad). "Crime never pays" points this idea out.
Conversely, I think that if someone is trying to do Good but dress it up as Bad, there is the potential for it to still be Good, but depending on their skill level it could still be Bad if it cannot be revealed as Good. Robin Hood "steals from the rich and gives to the poor," and "the road to Hell is paved with good intentions"; these address the two ideas respectively.
So in this world, if an AI is developed to align with what its user wants, and the user wants to use it for Bad, then I think this AI is Bad and its development should not be pursued. Period.
Another part of the question is whether people in general want to become more prosocial or more cunning, or whatever. Or if they even have coherent desires around this.
This is where I think a binary approach to this problem is flawed, for reasons I've tried, in very early draft form, to illustrate here. Basically, I wonder if the pace of Binary Computing Technology has accelerated and influenced the development of human culture along the flawed idea that there are only true/false relationships in the universe, and done it so well that this binary approach to thinking is crowding out other ways of thinking. I think of the Borg from Star Trek as a simple cultural allusion.
Psychology is the field of study that deals with issues of pro-social/anti-social behavior and the ability of humans to rationally understand their desires and motivations. It's also a product of human culture, and so the data and meaning it produces can also be used to argue for whatever it is that human culture wants it to mean at the time - past, present, and/or future. "Culture creates meaning" gets to this idea well.
Today, there are still different cultures around the world, despite Internet Culture fighting for World dominance at this point. This cultural struggle is the same human process that gets repeated over and over again, and it is superior technology which determines who will win, not objective ideas about what is truly good or bad, yet. This is the same warfare process that has resulted in so much pain, misery and suffering around the world since the dawn of warfare, and it is unlikely to change because human technological advancement has far outstripped human development.
Regardless of how we want to dress it up, we are still passing around many of the same flawed ideas about the world, how it should be, and how it should be fixed. It's just being done soooooo much faster and is reaching across the entire globe and reaching into more and more facets of everyday life now.
Moral ideas that collapse the Universe into what's Good and what's bad from Dominant Western Culture and Civilization, seem to translate very well into 1's and 0's philosophically.
So despite the efforts of a lot of smart and well-meaning people, Internet culture continues to spread a Dominant Western Culture based on Judeo-Christian values around the world, attempting to define and destroy 'Evil' and define and promote 'Good'. Much of it is just a new form of missionary work dressed up as something else, and IMO its spread has resulted in continued and seemingly random destructive and tragic cultural shock waves.
To me these are phenomena of a concept like Social Physics.
Another part is whether it's possible for the assistant to follow instructions while also helping me reach my personal growth goals. I feel like there's some wiggle room there.
I agree, although I can also see it functioning like court-appointed counseling or drug/alcohol treatment like we have in the US. If the user doesn't have the motivation to change their thinking/behavior, they just revert back to their 'old bad behavior' when their time is up, whether it's with the courts or with something like a court appointed AI rehabilitation assistant.
Unless you make it also like an ankle monitor, so that the AI assistant follows them everywhere at all times, but without the appropriate programming and complicated information architecture and security, that would cause all kinds of privacy concerns. It would also depend heavily on society's cultural development up to that point.
As an example of what I mean, if these AI had been developed and sold/court-ordered in the 80's when the War on Drugs was really kicking off and society thought and acted like all drug use was bad, then society would be much different today as a result. Ideas and institutions like Mass Incarceration and heavy sentences for drug offenses might still be very popular, and the War on Drugs might still be accelerating for the foreseeable future instead of winding down as it seems to be. Ideas of what's Good/Bad, acceptable/not acceptable, healthy/not healthy, criminal/not criminal seemingly change with the wind as time goes by, and I honestly don't see that fact changing without several huge interventions.
That's why I argue for a concept of Social Physics, which cultivates the similarities between the "Hard" sciences and the 'Soft" sciences to form a rational understanding and underpinning of Social Dynamics as the result of the physical forces of science, but with a Cohumane spin: Cohumane being thought and action devoted to human endeavors seeking equality with all of nature, including alien life if/when it's discovered and AI if/when it achieves sentience/intelligence. I use this concept of social physics to try and think about the world in a more rational way.
The pattern of dominating technologically inferior cultures, eliminating and co-opting their cultures and making them slaves until they either rise up and revolt or their equality becomes apparent after their culture has been disintegrated, forces the dominant society to spend huge amounts of energy to completely reshape itself in often poor, ineffective, and conflicting attempts at restorative justice.
For instance, both the African population boom and the AIDS epidemic of the 80's and 90's, which resulted in so many deaths and so much famine and suffering, are a situation which continues, 30-40 years later, to consume huge amounts of resources from the US. We are still struggling to meet the needs of our own people, and the interests of those at the bottom of US society often come into conflict with the needs of people around the world. It seems hypocritical to be fighting these issues around the world and espousing our ideals of fairness and equality to others when we are still dealing with them so ineffectively at home.
For instance, I spent 18 months living in homeless shelters in the US, and what I saw reminded me very strongly of what I imagine a FEMA camp and a prison system would look like if they worked together in the heart of a mid-sized city. I'm still struggling in the shelter system over 2 and a half years later, but no one is really attempting to help me out except for the overworked, underfunded, old and decrepit public services system.
I've literally thought at times about how to get NATO to come and intervene in the US, because what I've experienced and witnessed at times seems to me to be clear violations of human rights. But the belief is that the US is a 1st world country, and we don't have epidemics of hunger, poverty, and disease; drug lords running the streets; or insurgents attempting to breach the capital and change the results of a democratic election, or things of that nature. It seems the common sense is "That stuff happens in 3rd world countries, not the US." Maybe not now, at least with the Insurgency and the pandemic.
Back to Africa, though, and their concerns: the fact that so many Christian missionaries from the Western world were traveling all over the world and proselytizing meant that when food, technology, and medicine arrived with these missionaries, it seems likely their religious beliefs about sex were also transmitted, so that condom use and other safe sex practices were not passed along, resulting in a population boom and the AIDS epidemic. I've no data to back that up, but my research skills aren't that good. I'd be interested if anyone knows of any data and research dealing with this idea. After all this writing and editing and rewriting, though, I'm too tired to even Google it.
These things could have been avoided completely, at best - or at least the population boom and the growth/spread of AIDS could have been slowed significantly - if the introduction of the technology, food, and medicine had also included safe sex practices to counteract them.
The US accounts for about 4.25% of the world population. As the 'leader of the free world' we don't have problems of overpopulation like some other countries, but we have other problems that we continue to struggle with, and we continue to pass those problems around the world along with the rest of our culture.
IMO examples like this continue to happen because of the blind spots of Conservative Christian values, which have dominated so much of American legislation. The interaction of these conservative efforts and the reflexive overcorrections by Liberal activists, politicians, and citizens results in the continued culture wars, and the reverberations of these battles are magnified and accelerated by the Internet and contemporary technology. Psycho-socio-economic faults in the Cultural Tectonic Plate Formations which constitute the foundations of the now-shared cultural sub-conscious continue to cause political Tsunamis and cultural earthquakes which shake the very foundations of our contemporary societies.
A Technology/Science of Ethics, not morals, as I think of them, would include ideas somewhat along the lines you are proposing. It would explicitly seek to account for mistakes of the past and attempt to correct for possibilities in the future, hopefully avoiding potential catastrophes and rebalancing the current social systems by analyzing and including rational consideration of blind spots in the dominant culture's conceptions of what is right/wrong, good/evil, and criminal/non-criminal. I can't do it with programming or programming logic, but I'm pretty sure this is what most rationalists are concerned with.
What if the world really isn't binary? What if humans need to do bad things sometimes for good to come out of it, or if sometimes good intentions result in global catastrophe? I believe it's possible that the heart of the matter is that binary thinking is a false dichotomy, and while our technology has become super advanced based on these principles, Cultural Lag has kept the majority of the world's growing population from benefiting in a Cohumane way, by unintentionally exacerbating problems like overpopulation.
I like the idea of the AI assistant reminding the user of their previous conversation and stated goals of self-betterment, and I like to think that this is the type of future humanity might have. I would love to consider this idea in a more positive way, but my current circumstances make me wonder how we would implement it all around the world at the same time.
I often feel, as an American, that I am constantly being asked to consider the problems of the rest of the world as being more important than my own, and I wonder how the interests of the 4.25% of the world population that the US represents would fare in 'equal' or 'fair' relations with the other 95.75% of the world if an AI like the one you describe were developed tomorrow.
Some of us are already at the bottom of the socio-economic ladder in the US, and at this elevation, it looks a lot like what I've seen of extreme poverty, deprivation, physical/mental health, disease, hunger, violence, and repression around the rest of the world from tv. In a world like the one we currently inhabit, I don't believe the development of an AI system like this would do anything but increase the ever widening gap between the haves and the have-nots.
I really do think the idea of semi-autonomous AI using 'fairness' protocols to engage in cooperative bureaucratic bargaining to encourage the development of 'fairness' (whatever that would be) is an interesting idea, though. That's a usage I'd never contemplated until now.
What I am asking about is not 'how much' the AI would affect the user's personal development, but 'how' it would affect it. In a good or a bad way.
I am assuming you and your friends aren't trying to figure out how to rob a bank, or cheat on your taxes, or how to break the law and get away with it. The interactions you have with your friends help you develop your sense of 'what's fair' and at the same time, your friends get help developing their sense of 'what's fair', so you are all benefiting, and reinforcing for each other what you all think of 'as fair.' These are good/positive intentions.
If you and your friends were instead trying to figure out how to rob a bank, cheat on your taxes, or break the law and get away with it, then you would be part of a criminal group of friends. You wouldn't be concerned about what was 'fair', only what you could get away with. These would be considered bad/negative intentions.
In either case, if you all agree with each other, then the interactions you have with each other reinforce the intentions that you bring to them. If your intentions are good, it is probable that it will affect your personal development positively. If you bring bad intentions to the interactions, it is probable that it will affect your personal development negatively.
If you replace 'your friends' with an AI, it is probable that even though the AI is programmed to bring a 'good/fair intention' to the interaction with you and all the other AI that are cooperating, if you bring a bad intention to the interaction, it might not affect the AI's development or society at large because of the cooperation (which I think is a really interesting idea), but it still affects your personal development.
I just downloaded the 2nd edition. Thank you for the suggestion.
Enjoyed the read, it's nice to see some sort of compromise between utopian and dystopian sci-fi (Meh-topian?)
It seems like the AI might be teaching/training the human user how to potentially break the law better, or possibly be more subversive in relationship to other non-AI-mediated relationships, though. Would people develop a more egalitarian thought process through engagement with AI assistants like this, being more likely to be egalitarian outside of AI-mediated relations? Or would they just use their conversations with these assistants to develop more cunning ways of thinking?
The part of the conversation where the user contemplates whether he would be better or worse off if the government collapses hints at the possibility of helping make users more cunning, as they don't need to rely on their own neural wiring and thought processes to encode ideas of fairness. They just externalize their conscience into an AI like we externalize memorization of other people's contact info to our phones. Lose the phone, you lose the contact info. Similarly, if the user loses the AI, do they also lose their conscience?
Cladistics is useful not only for biology but also for analyzing things like cultures.
Would love to see some examples if you have any to share.
Thanks for the suggestions, and now that I understand the idea that the probability values correspond to a binary interpretation of the events, it makes these areas easier to navigate for me in discussion.
In particular, truth of whether an outcome belongs to an event is not fuzzy, it either does or doesn't, as events are defined to be certain sets of outcomes.
This definitely stands as a hard to argue against idea, and it makes sense when seen from the viewpoint of rational humans interpreting data from systems based on binary calculations and logic.
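The point about events being crisp can be sketched in a few lines of Python. This is just my own illustrative example (a fair six-sided die is my choice, not from the discussion above): events are literally sets of outcomes, so membership is binary even though probabilities are fractional.

```python
# Model a finite probability space: outcomes are elementary results,
# and events are defined as sets of outcomes.
outcomes = {1, 2, 3, 4, 5, 6}

# Two events, each a set of outcomes.
even = {2, 4, 6}
at_least_four = {4, 5, 6}

# Membership is not fuzzy: an outcome either belongs to an event or it doesn't.
print(3 in even)             # False
print(4 in even)             # True

# "One event includes the other" and overlaps are crisp set operations too.
print(at_least_four & even)  # {4, 6}

# The probability of an event (for equally likely outcomes) is just the
# fraction of outcomes it contains; only this number lives in [0, 1].
def prob(event):
    return len(event) / len(outcomes)

print(prob(even))            # 0.5
```

So the fuzziness lives entirely in the probability value, never in the question of whether a given outcome is in a given event.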
Do you think it's possible that there is a better way than binary logic to compute and reason, though?
Not being familiar with the literature, I wonder if it's possible that because we have relied on binary logic to compute and reason for so long, it's created a false dichotomy in our understanding of reality. Is there another way to reason that works better, based on a quantum computing rationale?
How the next decade will add to the discussion of reality in terms of the advances in quantum computing seems to be debatable. Translating probability values into either true or false logic is a step in binary computing that I believe quantum computing skips, and so the data returned takes into account, I think, the cases in which "...one of the events includes the other..." in a more or less straightforward way.
At this point though (I could be wrong), I believe there is still a front-end system running on binary logic to interpret the calculations of a quantum computer, because when the data returned isn't binary, we're still trying to figure out what it's good for.
In relation to events and how long or little they last, this whole area of quantum clocks is interesting to me. We can measure time more accurately because of them, but it seems like so much of the science in common use still relies on the second as the base measurement. Maybe the second is the bottom limit of what humans can somewhat accurately perceive without the aid of a tool like a watch, which makes a case for basing more accurate methods of measuring time off of the second.
Is it possible we could create wetware with augmented vision which would allow us to 'perceive' smaller and smaller units of time, or would we just be better off trying to figure out how to slow down time? Sometimes, rationally speaking, in the light of all these scientific advances, it gets a little harder to appreciate humans when you consider our limited abilities. I think it's our ability to conceptualize these phenomena, though, that is our 'saving grace.'
A really enjoyable and informative read. I noticed on your graphic that the starting point for the entire graphic is the very first land plant, and it's a moss. The very first evolutionary step from that point has to do with vascularity. This makes me wonder what came before the first land plant? Some sort of water plant, obviously, but I'm curious what the major evolutionary step from the ocean to land was - what it means to be 'moss'.
My second question is more open-ended, I guess. Thinking about all the various strategies life on Earth has developed to create such diversity in the plant world, and seeing as how many of them are quite similar across species - if you consider the vascularity of the post-moss plants as similar to the vascular system of animals, the tightly packed wooden structure of trees as similar to the bones of the skeleton, and the reproductive systems as similar as well - what do you see as constituting the "brain" or nervous system of a plant, if anything? I know it's a weird question.
And this is not exactly a deep observation, but it seems like the discussion of classification of plants - and also the post you linked to - involves the idea, in relation to rationality and maybe to AI, that these are all examples of things which are hard to define in a true/false way, and this is at the heart of rationality from what I'm gathering. I think I want to do a bit more research about apples as well after reading this.