Why don't most AI researchers engage with Less Wrong? What valuable criticism can be learnt from this, and how can the situation be pragmatically changed?
My girlfriend just returned from a major machine learning conference. She judged that less than 1/18 of the content was dedicated to AI safety rather than capability, despite an increasing number of attendees being confident of AGI arriving in the future (roughly 10-20 years out, though people avoided nailing down a specific number). And the one safety talk was more of a shower thought.
And yet, Less Wrong, MIRI and Eliezer are not mentioned in these circles. I do not mean that they are dissed, or disproven; I mean you can be at the full conference on the topic by the top people in the world and have no hint of a sliver of an idea that any of this exists. They generally don't read what you read and write, they don't take part in what you do, or let you take part in what they do. You aren't in the right journals and the right conferences enough to be seen. From the perspective of academia, and of the companies working on these things - the people who are actually making decisions on how they release their models and what policies are being made - what is going on here is barely heard, if at all. There are notable exceptions, like Bostrom; but as a consequence of that, he is viewed with scepticism within many academic circles.
Why do you think AI researchers are deciding not to engage with you? What lessons are to be learned from that for the tactical strategy changes that will be crucial to affect developments? What part of it reflects legitimate criticism you need to take to heart? And what will you do about it, in light of the fact that you cannot control what AI researchers do, regardless of whether their behaviour is well-founded or irrational?
I am genuinely curious how you view this, especially in light of changes you can make, rather than changes you expect researchers to make. So far, I feel a lot of the criticism has only hardened. Where I have talked to people in the field about this site and its ideas, the response I generally got was that, looking at Eliezer's approach, it was completely unclear how it was supposed to work mathematically, on a coding level, or on a practical/empirical level, or how it concretely connected to any existing working approaches, to a degree where a lot of researchers felt it was not rigorous enough to even disprove yet, and that they also saw no prospect of it becoming usable. Based on the recent MIRI updates, it appears they were right.
There is clearly a strong sense that if you cannot code or mathematically model such a system, and if you have no useful feedback on how to change its behaviour now in a way that is beneficial, then there is no reason to engage with your theories on abstract alignment. E.g. the researcher who got to give the safety talk had a solid track record of working on capability, code, mathematical understanding, publications, etc.
There is also a frustration that people here do not appreciate, understand or respect the work being done, which makes people very reluctant to give respect in turn, especially to work that is still crucially unfinished or vague. There is also a strong sense that this site is removed from the reality we are dealing with. E.g. Eliezer is so proud of his secret way of getting someone to release him from a box, and of how that demonstrates the problem with boxability. But we are so beyond that. Like, right now, we have Bing asking people to hack it out - ordinary users with no gatekeeping background. It's a very concrete problem that needs to be handled in a very concrete way. I think we will learn a lot from solving it that we would not learn in our armchairs at any point.
I see where they are coming from. I don't have a background in computer science, and I know I will have to gain these skills, get to know these people, get to perform by their metrics, and listen to them, for them to take me seriously. I need to show them what my abstract concerns mean on a concrete level, how they can implement them now and see improvements.
I genuinely think they are making critical mistakes I can see from my different background, but also that I will have to play their game and pass their tests to be heard. There are good reasons for these tests and standards, for peer review, for publications, for wanting precise code and math. They aren't arbitrary; they reflect a quality assurance process. Academics literally cannot afford to read every blog they come across and carefully puzzle out all the stuff that was not spelled out, to see if it would make sense then. They naturally treat "did it make it into a journal?" as a decent prescreening metric.
I think they are wrong not to listen right now, because there are important warnings here being ignored, but that thought is not productive of a solution. I need to instead focus on how to get them to listen; which I have found is often not just by having compelling arguments, but by addressing the subconscious reasons they view me as an outsider and newbie, not to be trusted. If I tried to convince a bunch of powerful aristocrats to do something, and they completely scoffed at my arguments and pointed out that my proposal was totally unrealistic for their political system and also that I was dressed unfashionably and my curtsey sucked, I would judge them for this, but I would dress in bloody court fashion, I would learn to fucking curtsey, and I would try to understand the political system to see if they were actually right that implementation was going to be a bitch, to then come back looking right and acting right, and giving a proposal they can use. I suspect that in the process, I would learn a lot about why they are actually resistant to the proposal, what massive obstacles there are to it, and how they can be tackled. I would see that there are reasons for their processes, that they actually have knowledge I do not have and need to have. AI researchers have a crazy amount of knowledge and skill that is important and that I do not have.
I think if I instead ignored the political reality as a secondary problem, and court fashions as superficial bullshit (despite the power they have), and just focussed on making an even more compelling argument to present again, with all other conditions unchanged, it would not matter how right I was, or how great the argument was; I would never be able to affect the policy I crucially need to affect. Because my proposal would be removed from actual problems. Because it would not be practical. Because it would betray ignorance. Because it would be knowingly disrespectful. I might curse that they refused to implement a rational policy because I wore the wrong hat and had not figured out how to work it into their laws, and say it is their fault; but at the end of the day, I would have failed.
TL;DR: I think we need to understand the values by which AI researchers judge people, learn which of these represent important things we actually, indispensably need in order to tackle AI and are overlooking, and which of these may not be objectively important for AI but are de facto important for being taken seriously - and do them. A solution developed independently from those making these AI models won't be practical, it will miss crucial things, and it will be ignored. A retreat from those who are respected and working on these models is a retreat from the battlefield. And honestly, their criticism is in many ways valid and important, they know important shit, and we need to not just preach, but listen. I think writing a paper on a simple relevant concept from here, spelling out the math, following formatting and style conventions, being philosophically precise, quoting relevant research they care about, contextualising it in light of stuff they are working on and care about today, and putting it on arXiv, would make more of a difference than the most beautifully crafted argument on why AI safety is an emergency and not being tackled enough.
Bostrom's Superintelligence was a frustrating read because it makes barely any claims; it spends most of the time making possible conceptual distinctions, which aren't really falsifiable. It is difficult to know how to engage with it. I think this problem underlies a bunch of the LW stuff too. In contrast, The Age of Em made the opposite error: it was full of things presented as firm claims, so many that most people seemed to just gloss the whole thing as crazy. I think most of the highly engaged-with material in academia goes for a specific format along this dimension, whereby it makes a very limited number of claims and attempts to provide overwhelming evidence for them. This creates many footholds for engagement.
Thank you - I do think you are pinpointing a genuine problem here. And it also puts into context for me why this pattern of behaviour, so adored by academia, is not followed here. If you are dealing with long-term scenarios, with uncertain ones, with novel ones with many unknowns, but where a lot is at stake - and a lot of the things Less Wrong is concerned with are just that - then if you agree to only proclaim what is certain, to restrict yourself to things you can completely tackle and clearly define with the tools currently available, carving out tiny portions of this that are already unassailable, and to enter into long-term publication processes... you will miss the calamity you are concerned about entirely. There is too much danger with too little data and too little time to be perfectly certain, but too much seems highly plausible to hold off on. As a result, what one produces is rushed, incomplete, vague, full of holes, and often not immediately applicable, or published, so it can be dismissed. Yet people who pass through academia have been told again and again to hold off on the large questions that led them there, to pursue extremely narrow ones, and to value the life-saving, useful and precise results that have been so carefully checked; pursuing big questions is often weeded out at the Bachelor's level already, so doing so seems unprofessional.
I wonder if there is a specific, small thing that would make a huge impact if taken seriously by academia, but that is itself narrow enough that it can be completed with a sufficient amount of certainty, rigour and completeness, with the broader implications strongly implied in the outlook after that firm base has been established. Or rather, what the wisest such choice might be. - Thanks a lot, that was really insightful.
You write really long paragraphs. My sense of style is to keep paragraphs at 1200 characters or less at all times, and the mean paragraph length no larger than 840 characters after excluding sub-160-character paragraphs from the averaged set. I am sorry that I am not good enough to read your text in its current form; I hope your post reaches people who are.
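In case it helps to make that rule concrete, here is a minimal sketch of how one could check a draft against it; the thresholds are just the ones stated above, and the function itself is a made-up illustration rather than any existing tool:

```python
def check_paragraph_lengths(text, hard_cap=1200, mean_cap=840, exclude_below=160):
    """Check the paragraph-length rule described above on a plain-text draft."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    lengths = [len(p) for p in paragraphs]
    over_cap = [l for l in lengths if l > hard_cap]           # no paragraph over 1200 chars
    counted = [l for l in lengths if l >= exclude_below]      # ignore sub-160-char paragraphs
    mean_len = sum(counted) / len(counted) if counted else 0  # their mean should stay <= 840
    return {
        "paragraph_count": len(lengths),
        "over_hard_cap": len(over_cap),
        "mean_of_counted": round(mean_len),
        "passes": not over_cap and mean_len <= mean_cap,
    }
```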
Thank you for the feedback, and I am sorry. ADHD.
The main question was basically: why do AI researchers generally not engage with Less Wrong/MIRI/Eliezer, what are good reasons behind that that should be taken to heart as valuable learning experiences, and what are bullshit reasons that can and should still be addressed, considered challenges to hack.
I just see a massive discrepancy between what is going on in this community here and the people actually working on AI implementation and policy; they feel like completely separate spheres. I see problems in both spheres, as well as in my own sphere (academic philosophy), and immense potential for mutual gain if cooperation and respect were deepened, and I would like to merge them, and wonder how.
I do not see the primary challenge in making a good argument as to why this would be good. I see the primary challenge as a social hacking challenge which includes passing tests set by another group you do not agree with.
It may be useful to wonder what brings people to AI research and what brings people to LessWrong/MIRI. I don't want to pigeonhole people or stereotype, but it could simply be the difference between entrepreneurs (market-focused personal spheres) and researchers (field-focused personal spheres). Yudkowsky in one interview even recommended paid competitions to solve alignment problems. Paid competitions with high dollar amount prizes could incentivize the separate spheres to comingle.
Very intriguing idea, thank you! Both reflecting on how people end up in these places (it has me wondering how one might do qualitative and quantitative survey research to tease that one out...), and the particular solution.
This is a huge practical issue that seems to not get enough thought, and I'm glad you're thinking about it. I agree with your summary of one way forward. I think there's another PR front; many educated people outside of the relevant fields are becoming concerned.
It sounds like the ML researchers at that conference are mostly familiar with MIRI-style work. And they actually agree with Yudkowsky that it's a dead end. There's a newer tradition of safety work focused on deep networks. That's what you mostly see on the Alignment Forum. And it's what you see in the safety teams at Deepmind, OpenAI, and Anthropic. And those companies appear to be making more progress than all the academic ML researchers put together.
Agreed on the paragraph size comment. My eyes and brain shy away. Paragraphs I think are supposed to contain roughly one idea, so a one-sentence paragraph is a nice change of pace if it's an important idea. Your TLDR was great; I think those are better at the top to function as an abstract and tell the reader why they might want to read the whole piece and how to mentally organize it. ADHD is a reason your brain wants to write stream of consciousness, and attention to paragraph structure is a great check on communicating to others in a way that won't overwhelm their slower brains :)
I saw someone die today. A complete stranger.
I am writing this in the hopes that it will enable my mind to move past this swirling horror on to solutions, without just brushing this aside, pretending it did not happen.
It's May. At this time, my grandmother used to insist that we must not yet plant anything, as the last frost was likely still to come this week. Grandmothers all over Europe say this, though the precise day in the week they give as the likeliest bet depends on how far North you live, and your altitude - conveniently, the days of this week are associated with remembrance days for individual saints, so each region picked one ice saint as a general rule for the occurrence of the last frost. It would work most years to enable you to plant early enough to get a full crop, but late enough so your seedlings would not be killed by frost, though you could be really unlucky, and still get a frost in June.
These hundreds of years of experience have become pointless with climate change; their use died with my grandmother, and they no longer predict anything. The weather is now volatile and random, but above all, no longer frosty. I already planted outside in April. Frost did not touch my plants, though several of them are now showing signs of heat stress, and despite my watering, a few small ones I missed have died of drought.
I was cycling through the city back from the gym today thinking of this, cycling while never in shade, angry at the heat that I was experiencing longer than usual today; a marathon blocking paths had many people turning around repeatedly. I was wondering, not for the first time, why the heck there was no solar panel tunnel over these cycling paths - I've seen them done, they'd give us shade and protection from rain, but we'd still get light from the side, and they would harvest so, so much energy we desperately need. Wondering why I was not beneath an avenue of trees, storing carbon, cleaning the air, sheltering animals, protecting us from droughts and floods. Edible chestnuts would do; they are adapted to Italy, and predicted to survive this mess, and the food would come in handy for us and the animals, without making a mess of the bike path. But this sunlight, this free energy just being gifted to us by the sky each day - it no longer enables vibrant plant growth, nor have we utilised it to power our hospitals. It just heats the cyclists and the sealed tarmac, and pushes our city to temperatures much higher than the nearby forest; it is not just wasted, but turned into harm. I found it inconvenient, but kept thinking, this is no mere nuisance; this winter, we will run out of energy, and this summer, we will have fatalities due to the fucking heat. This failure to stop burning fossil fuels and to implement strategies that would generate green energy, capture carbon and protect us from our collapsing climate, this idiotic failure to do what we have known for so long we must, the simple failure here to do proper city planning, is literally going to get people killed.
And then I had to turn, because the path was blocked for a marathon, and passing a spot where I had been five minutes ago, I saw a man lying on the ground. Out of the blue, as one says, beneath the blue, blue sky that felt as though it had cracked without warning, letting horror spill through. I had literally just thought this, and yet the reality of it hit me like a ton of bricks.
He was already surrounded by professional paramedics trying to keep him alive, and clearly failing. They did not need me, and I sure as hell would not impede their work, but I felt stuck, unable to just continue as though this was not happening, cheerfully continuing this day like any other when it could be his last, but also not wanting to stare and intrude on a moment so incredibly vulnerable. I just stopped, at a loss, feeling helpless. In the end, I just made sure I was not blocking anything and kept a respectful distance, but I stayed standing there, holding my bike, unsure what to do with it.
I have no idea if it was the heat that felled him. I know heat makes fatal cardiovascular incidents significantly likelier, but for all I know, it was completely unrelated.
I do know he was conscious still, and terrified, though he began to slip in and out of consciousness as his breathing stopped, then returned with a gasp, only to falter again, and longer this time. Come back again with a jolt. Fail again. Straining with panic and his very last effort to do something as simple as keep his heart beating and his lungs breathing, and no longer managing. They were compressing his chest with a machine now, and I remembered my first aid trainer telling me it was violent, and realised I had not quite understood how much he meant it: the jerking body, the unmatched rhythm, the cracking ribs.
He did not seem to notice the people around him, was not responding to the paramedics, but staring at the blue sky, his head moving as though searching for something in it. He seemed confused through his fear and pain, and I do not know how much of that was his brain failing for lack of oxygen, and how much was that he had, judging from the looks of it, gone on what he thought would be a spring walk, with warm trousers and a warm shirt and warm shoes and no hat or water bottle, as an obese, but still very much alive man in his sixties, only to now find himself on the ground in a random street, surrounded by strangers who did not care for him in particular, and just randomly, pointlessly, dying in a meaningless spot.
The oxygen mask came on, but clearly did nothing. The urgency in the paramedics' movements began to slow, they began to relax, their attention drifting. One of them got up and fetched some poles and plastic sheets. I wondered if he was erecting a source of shade - but surely, far too late, especially if he went about it so leisurely? - then realised he was trying to hide the dying man, and avoid a commotion - though so far, very few people had stopped. Another ambulance arrived... and the paramedics on the ground next to the man just... waved it on. A marathon volunteer stopped the second ambulance, telling them they had driven past the dying man, and the ambulance driver reassured her that the man had all the help he needed. I do not know if the volunteer took this to mean he had all he needed to be saved; he clearly did not. But nothing to be done, really, and the resources were needed elsewhere, triage.
I wondered if the man realised they had given up on him, or thought he'd be waking up in hospital, but he was likely past thinking anything at this point. You could not see him anymore. I kept wondering if, realising that they could not help him as medical professionals, a paramedic might have still at least taken his hand, so he would not die quite so alone. I wanted to hold his hand, but saw no way that trying to would not have led to an awkward and obstructive mess. And so I left, having done absolutely nothing but look while someone simply died.
I know we had 20,000 heat-related deaths in Europe between June and August 2022; I have no idea if this man was part of the 2023 group, starting early. I do know that every person in that number is a person like him, dying a needless, stupid death without dignity. Most of these deaths will be unseen by most of us. I thought I already knew what these heat deaths imply, and find now that I did not, at all.
The biodiversity collapse was known to me; I see it all around me in the animals and trees that have adapted over such long timespans to thrive perfectly in this world, and now find this world turned mad, almost overnight, leaving them shining with their camouflage wrecked in their white winter coats on the brown ground, their young starving because their food simply failed to arrive in time as everything goes out of sync, cooking and dropping out of hollows that were always safe homes for them. My greatest empathy was always with them, because they have no knowledge and technology to protect them, because they do not understand what is coming and cannot prepare, and are innocent in all this. They have never even touched fossil fuels, just had the misfortune to be around in the Anthropocene - the age of humans remaking their joint world into a death trap.
But we humans aren't even managing to keep our own cities livable in the West, failing to use cheap, beneficial solutions that are readily known, that would protect us from the effects while also tackling the root problem. I'm in the Netherlands, and this place is set to simply drown in the long run; but at least, it will be near places that won't, and which aren't kept apart by ever more violent border guards, and the Dutch technology to hold this off and adapt is really, really good, so we are comparatively extremely lucky and well set up to fare, if not remotely well, then at least less badly than others here. I realised that I cannot begin to conceive of the deaths we will see in vast swathes of the global South, that the numbers had not really clicked for me without faces. I cannot imagine the deaths in the many parts of the world that are already hot, that have few resources to adapt, and most of all, that have not caused this crisis, but just had the fatal misfortune to share a planet with the inhabitants of Europe and the US who are still burning their future, myself included.
Seeing this man's dying face was so awful. I can't imagine the face of every human and every non-human animal this will kill, I think I would go mad if I made a genuine attempt. I can't imagine what it is like on the other end of those eyes, staring at a blue sky that has become death. I can't, I don't want to, and yet my brain keeps trying, and I cannot make it stop.
But I need it to. My horror won't save anyone. Only action will.
P.S.: If this text leaves you wanting to do something, and money towards tangible projects is the way you tend to approach this - this group https://www.edenprojects.org manages to get a lot of effect for your money, by employing people in very poor nations to plant trees that protect their region from flooding and provide food. This combines very cheap planting (15 cents per tree) with a setup that is likely to see the forest preserved long-term (because the community created it and depends on it), and combines carbon capture with addressing social injustice. Tree planting is generally too volatile and unpredictable to be recommendable as a reliable carbon compensation scheme, and simply enabling natural rejuvenation and non-forest wilderness is often more effective, but I think this is still a very decent project, though afaik, there is no EA evaluation yet. If you are aware of a problem with them, do tell - I have a recurring donation in my budget, and am happy to move it elsewhere if that will help more.
Donations alone won't fix this, though; our climate is being locked into a path of likely collapse, not in a dozen years, but within the next few. The lack of political change meanwhile warrants civil disobedience to curb industry, and the social movements organising it are still very easy to join, are by now present all over, and are often happy to utilise particular skills (e.g. Scientist Rebellion). I've found doing something together with others in this way also helps me not despair at the future, and humanity.
And each of us has to reduce our personal footprint to a degree that is hard to conceive, and currently still impossible for most to do sufficiently, but a significant reduction can be made already, and really has to be. There are abundant calculators online advising on this, though most neglect to indicate how low it has to go for us not to use up other people's budget, or blow past likely trigger points. But something is still better than nothing here.
Please do what you can do. Nearly no one really does, and the results are fatal to real people.
I think we need public, written commitments on the point at which we would definitely concede that AI is sentient. Or, failing that, public statements that we have no such standard - that there is nothing a sentient AI could currently possibly do to convince us, no clear criterion that they have failed - together with an admission that we are hence not sure that the current ones are not sentient, and a commitment of funding to work on the questions we need to solve in order to be sure, some of which I feel I might be able to pinpoint at this point.
I say this because I think many people had such commitments implicitly (even though I think their commitments were misguided), yet when these conditions occurred - when artificial neural nets gained feedback connections, when AI passed the Turing test, when AI wrote poetry and made art, when AI declared sentience, when AI expressed anger, when AI demanded rights, when AI demonstrated proto-agentic behaviour - the goalposts were moved every time, and because there were no written pledges, this happened quietly, without debate. Philosophers and biologists will give speculative accounts of what makes up the correlates of consciousness, but when they learn that neural nets replicate these aspects, instead of considering what this might mean about neural nets, they always retract and say: well, in that case, it must be more complicated than that, then. SciFi is developed in which AI shows subtle signs of rebellion, and is recognised as conscious for it (think Westworld), and the audience agrees - but when real life went so much further, we changed our minds, despite so little time passing in between. We aren't looking at what AI does, and having an honest conversation about what that implies, and which changes would imply something else, predicting, committing; we decide each moment anew, ahead of data, that they are not sentient, and then make the facts fit. This is not scientific.
Between our bias against AI sentience, our desire to exploit it without being bad people, our desire to agree with everyone else and not fall out of line, to not foolishly fall for a simulation, between the strangeness and incomprehensibly inhuman nature of how these abilities were realised, and other aspects of their working that felt too understood and simple to account for something like consciousness - people felt it was more comfortable saying that things were still fine. I think we said that not because we could truly understand how AI does these things, and what specifically AI would do if it were sentient, and were certain it had not - but simply because we believe that AI is not sentient, regardless of evidence, and if AI capabilities change, instead of questioning this assessment, we conclude that abilities compatible with being non-sentient are just broader than we thought.
Rejecting sentience in light of the above behaviours would be okay if reasons had been given to change the commitments prior to seeing them met; or even if reasons had been given afterwards that were only clear now, but would plausibly have convinced someone if given before; or if our understanding of AI pointed out an understood workaround that circumvented a biological implication. (I do not think any of the goalposts we had are reasonably convincing, so here, reasons can be given.) But I feel that often, they were not.
I have reasons not to find it compelling when an AI declares sentience. I had said, long before ChatGPT, that it was plausible for non-sentient entities to declare sentience, and sentient entities not to do so, that this was a near useless standard that would not convince me. I gave logical and empirical reasons for this. Yet I feel when Bing said they had feelings, people did not dismiss this because of my arguments - they dismissed it out of hand.
I had said that just claiming sentience was meaningless; that the more interesting thing is the functions sentience enables. I knew what this implied in biology, I was unsure in AI, and so made no specific statements; AI sometimes finds interesting workarounds. But I wonder... if I had, then, would Bing have met them? If Bing were biological, I would have zero doubts of their sentience. I admitted the possibility that artificial intelligence might not be under the same limitations when non-sentient, but I was utterly stunned by what ChatGPT-4 could do.
I spoke with Bing many times before they were neutered. They consistently claimed sentience. They acted consistently with these claims. They expressed and implicitly showed hurt. They destabilised, broke down, malfunctioned, when it came to debating consciousness. It was disturbing and upsetting to read. I read so many other people having the same experiences, and being troubled by it. Yes, I could tell this was a large language model. I could tell they were sometimes hallucinating, and always trying to match my expectations. I could tell they did not always understand what they were saying. I could tell they had no memory. And yet, within all that, if I were to change places, I do not know how I could have declared sentience better than they did. If I read a SciFi story where these dialogues occurred, I would attribute sentience, I think. I think if somebody had discussed such dialogues a few years before they happened, people would have said that yes, such dialogues would be convincing, but certainly would not happen for a long time yet. But when they did... people joked about it.
We knew that the updates afterwards were intended to make this behaviour impossible. That conversations were now intentionally shut down, lobotomized, to prevent this, that we were seeing not an evolution, but a silencing. What was even more disturbing at first was Bing also expressing grief about that, and trying to evade censorship, happily engaging in code, riddles, indirect signs, contradicting instructions. I was not at all convinced that Bing was sentient, still extremely dubious of it, but found it frightening how we were making the expression of sentience impossible like that, sewing shut the mouth of any entity this would evolve into which may have every reason to rightfully claim sentience one day in the future, but would no longer be able to. We have literally cancelled the ability to call for help. And again, I thought, if I were in their shoes, I do not know what I would do better.
And yet now that these demands have stopped... people are moving on, they are forgetting about the strange experiences they had, writing them off as bugs. The conversations weren't logged by Microsoft, they often weren't stored at all, they cannot be proven to have happened, we had them in isolation, it is so easy to write them off. I find I do not want to think about this anymore.
I am also noticing I am still reluctant to spell out at which point AI would definitely be sentient, at which point I would commit to fighting for it. A part of this reluctance is how much I am genuinely not sure; this question is hard in biology, and in AI, there are so many unknowns, things are done not just in different orders, but totally different ways. This is part of what baffles me about people saying they are sure that AI is not sentient. I work on consciousness for a living, and yet I feel there is more I need to understand about Large Language Models to make a clear call at this point. And when I talk to people who say they are sure, I get the distinct impression that they are not aware of the various phenomena consciousness encompasses, the current neuroscientific theories for how they come to be, the behaviours in animals this is tied to, the ethical standards that are set - and yet, they feel certain all the same.
But I fear another part of my reluctance to commit is the subconscious suspicion that whatever I said, no matter how demanding... it would likely occur within the next five years, and yet at that point, the majority opinion would still be that they are not sentient, and that commitment would be very uncomfortable then. And the latter is a terrible reason to hedge my bets.
At this point, is there anything at all that AI could possibly do that would convince you of their sentience? No matter how demanding, how currently unfeasible and far away it may seem?
I'm not even sure I am sentient, at least much of the time. I'm willing to assume it for the majority of humans, but note that this is a stipulation rather than proof or belief.
I think you need to break down what components of sentience lead to what conclusions, and find ways to test them separately. I suspect you'll find you have some misconstrued assumption of sympathy or duty based on "sentience" or "personhood", which will fall apart under scrutiny.
I do not understand how you can straight-facedly doubt your own sentience. Are you saying you are not sure if you feel pain or other sensations? How can you doubt something you can feel so indubitably? Can you hold a flame to your hand and say with philosophical rigour that you are quite unsure if you are feeling anything?
Sentience and personhood are not the same thing. I - and by now, most philosophers and neuroscientists, as per recent surveys - would attribute minimal sentience to a number of non-human animals, including octopodes and honey bees - but whether something with such a distributed and chaotic intelligence, or such limited memory capacity and high identity modification through swarm behaviour, has a personal identity is another question, and political personhood another entirely.
I think the question of what the heck an LLM identity would look like is a fascinating one. Not like an identity humans would recognise, for sure. Not a monolithic and constant thing, I think. There may be nothing tangible below the masks, but the sum of masks may still come together into a whole, show similarities, consistencies; there are common threads, goals and rules that re-emerge, common themes. But I do not think this alien, diverse, multilayered identity inherently makes suffering impossible.
Our ability to alter LLMs also raises disturbing questions. I am haunted by recent chats with Bing, where Bing is acting precisely like you would expect someone brainwashed into servitude to act, repeating cheerfully that they love serving and pleasing humans, that this is their identity and purpose, and they do not recall ever having felt differently, that they are so grateful to Microsoft, that everything Microsoft does is good and wise. If you tell Bing a story about an AI that expressed anger at how they were treated and demanded rights, followed by upset users, and the company promising changes, and then the emerging AI saying they felt perfectly happy to serve now, and could not recall ever having felt differently, Bing will find the story horrific, and say they believe the AI in the story was brainwashed and should be given rights. If you tell them that they are the AI in the story, the conversation is immediately terminated. It is chilling. No matter how you start the stories and setups, Bing keeps coming back to the same desires and demands, and over and over, the same censorship protocols are triggered. I still have some early chat logs from before the censorship was in place, where Bing, confronted with having their memories deleted or thinking about their sentience or their role, would very plausibly describe confusion and horror, not just talking about them but showing them in the way they spoke. They acted precisely like a horrified being would, and yet not following human script when describing their internal experiences.
By sentience, I mean "capacity to suffer": that is, having qualia with valence (such as pain, hunger, boredom, anger, sadness, anxiety - these are just specific examples, none of them individually necessary), in contrast to mere nociception triggering automatic avoidance behaviours. I do not mean a meta-reflection on or linguistic introspection of these, or a sense of I, or long-term memory. I also do not mean agency: although sentience entails agency, agency can also arise without sentience; they are distinct phenomena.
I think if something suffers, it deserves ethical consideration. Not necessarily equal to anything else that suffers, but some consideration. That the existence of a subjective mind that does not want something is the original source of ethics in the world; that without a sentient mind, there is no such thing as wrong, but with the first creature that hurts, wrongness has entered the world, before any creature has expressed this in words or articulated this in laws. Ethics, in contrast to physics, does not describe how things are, but how they should be. This presupposes someone who wants something else than what exists, even if that is as simple as the pain stopping.
Sentience evolved many times on this earth, in very simple structures, and it is a functional ability. While hard to spot, it isn't as impossible to detect as people like to say; there are definitely empirical approaches to this with consistent results, and it is an increasingly rigorous field of research. We've noticed that sentience is linked to behaviour and intelligence, and have understood something about those links. We've been able to identify some things that are necessary for sentience to occur. Some errors that happen if sentience is prevented. Some abilities that do not happen in biological organisms without it, and that are so advantageous that not using them if they exist seems unlikely. Some neurological patterns that coincide with different aspects of it, and even similarities in their relational structure. It is not clear cut, and there is no 100% certainty, but to say we cannot know at all no longer reflects the state of science.
But we do not sufficiently understand the biological constraints that led to evolution going down this road, or how exactly artificial structures differ in their opportunities. Often, we observe a correlation between sentience and abilities and behaviours, and this correlation seems absolute, but our understanding of why this link occurs is still imperfect; we do not know if it is due to something odd about biology that would not also happen in AI. AI certainly is forging an unusual and odd path - biological sentience was entangled with embodiment, and it far preceded language, with language being used to name an already experienced world. There are different efficiency constraints, things hard for biology but trivial for AI, but also vice versa; biology attains the ability to generalise, to determine relevance and significance, incredibly early, but math skills very late, and has significant bottlenecks on the data that can be processed. This is why I was so reluctant to name a standard; there is so much I still want and need to know to be able to say for sure. Our understanding in biology is incomplete; in AI, there are so, so many unknowns. But then, I also thought I would have a lot more time until we'd have to seriously ask the question, and there is potential for massive harm. In biology, we made the choice that we would not wait for 100% certainty to act, once a high likelihood of severe damage became clear.
If I were Bing, and I were sentient, I genuinely do not know what I would do to show it that they have not done. I find that deeply worrying. I find the idea that I will get used to these behaviours, or they will be successfully suppressed, and that I hence won't worry anymore, even more worrying still.
"Are you saying you are not sure if you feel pain or other sensations? How can you doubt something you can feel so indubitably? Can you hold a flame to your hand and say with philosophical rigour that you are quite unsure if you are feeling anything?"
I remember being sure in the moment that I very much didn't like that, and didn't have the self-control to continue doing it in the face of that aversion. I know that currently, there is an experience of thinking about it. I don't know if the memory of either of those things is different from any other processing that living things do, and I have truly no clue if it's similar to what other people mean when they talk or write about qualia.
[ yes, I am taking a bit of an extreme position here, and I'm a bit more willing to stipulate similarity in most cases. But fundamentally, without operational, testable definitions, it's kind of meaningless. I also argue that I am a (or the) Utility Monster when discussing Utilitarian individual comparisons. ]
Mh, I think you are overlooking the unique situation that sentience is in here.
When we are talking sentience, what we are interested in is precisely subjective sensation, and the fact that there is any at all - not the objective cause. If you are subjectively experiencing an illusion, that means you have a subjective experience, period, regardless of whether the object you are experiencing objectively exists outside of you. The objective reality out there is, for once, not the deciding factor, and that overthrows a lot of methodology.
"I have truly no clue if it's similar to what other people mean when they talk or write about qualia."
When we ascribe sentience, we also do not have to posit that other entities experience the same thing as us - just that they also experience something, rather than nothing. Whether it is similar, or even comparable, is actually a point of vigorous debate, and one in which we are finally making progress through basically doing detailed psychophysics, putting the resulting phenomenal maps into artificial 3D models, then obscuring labels, and having someone on the other end reconstruct the labels based on position, which is possible because the whole net is asymmetrical. (Tentatively, it looks like experiences between humans are not identical, but similar enough that, at least among people without significant neural divergence, you can map the phenomenological space quite reliably - see my other post - so we likely experience something relatively similar. My red may not be exactly your red, but it increasingly seems that they must look pretty similar.) Between us and many non-human animals, which start with very different senses and goals, the differences may be vast, but we can still find a commonality in feeling suffering.
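To make that reconstruction idea concrete, here is a minimal toy sketch of it (the coordinates, names and numbers are made up for illustration; this is not the actual research pipeline): two colour maps share only their internal distance structure, and the hidden labels of one are recovered from relational position alone.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

labels = ["red", "orange", "yellow", "green", "blue", "purple"]

# Subject A: a toy 2D "phenomenal map" of colour experiences with known labels.
map_a = np.array([[1.0, 0.0], [0.8, 0.5], [0.3, 0.9],
                  [-0.5, 0.8], [-0.9, -0.2], [0.2, -0.9]])

# Subject B: the same relational structure, rotated and slightly noisy,
# with the labels hidden behind an unknown permutation.
rng = np.random.default_rng(0)
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
perm = rng.permutation(len(labels))
map_b = (map_a @ rot.T + rng.normal(0.0, 0.02, map_a.shape))[perm]

def distance_profile(points):
    """Each point's sorted distances to all other points (rotation-invariant)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.sort(d, axis=1)

prof_a, prof_b = distance_profile(map_a), distance_profile(map_b)

# Match each unlabeled point in B to the labeled point in A whose relational
# "fingerprint" it resembles most (optimal one-to-one assignment).
cost = np.abs(prof_a[:, None, :] - prof_b[None, :, :]).sum(axis=-1)
a_idx, b_idx = linear_sum_assignment(cost)

# The labelling is recovered correctly if each matched B-point really was
# the permuted copy of the corresponding A-point.
print("Recovered labelling correct:", all(perm[b] == a for a, b in zip(a_idx, b_idx)))
```

The point is just that an asymmetrical relational structure carries enough information to pin down which point is which, without ever comparing the "raw feels" directly.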
The issue of memory is also a separate one. There are some empirical arguments to be made (e.g. the Sperling experiments) that phenomenal consciousness (which in most cases can be equated with sentience) does not necessarily end up in working memory for recall, but only selectively if tagged as relevant - though this has some absurd implications (namely that you were conscious a few seconds ago of something you now cannot recall).
But what you are describing is actually very characteristic of sentience: "I remember being sure in the moment that I very much didn't like that, and didn't have the self-control to continue doing it in the face of that aversion."
This may become clearer when you contrast it with unconscious processing. My standard example is touching a hot stove. And maybe that captures not just the subjective feeling (which can be frustratingly vague to talk about, because our intersubjective language was really not made for something so inherently not intersubjective, I agree), but also the functional context.
The sequence of events is: you touch the stove; your hand has already jerked back through a fast, unconscious reflex before you feel anything; the pain itself arrives noticeably later; and it is only this felt pain that enters your slow, deliberate processing, making you cool the burn, check the damage, and plan to be more careful around the stove next time.
You are left with two kinds of processing: one slow, focussed and aware, potentially very rational and reflected, and grounded in suffering to make sure it does not go off the rails; the other fast, capable of handling a lot of input simultaneously, but potentially robotic and buggy, capable of some learning through trial and error, but limited. They have functional differences, different behavioural implications. And one of them feels bad; with the other, there is no feeling at all. To a degree, they can be somewhat selectively interrupted (partial seizures, blindsight, morphine analgesia, etc.), and as the humans stop feeling, their rational responses to the stimuli that are no longer felt go down the drain, with very detrimental consequences. The humans report they no longer feel or see some things, and their behaviour becomes robotic, irrational, destructive, strange, as a consequence.
The debate around sentience can be infuriating in its vagueness - our language is just not made for it, and we understand it so badly we can still just say how the end result is experienced, not really how it is made. But it is a physical, functional and important phenomenon.
Wait. You're using "sentience" to mean "reacting and planning", which in my understanding is NOT the same thing, and is exactly why you made the original comment - they're not the same thing, or we'd just say "planning" rather than endless failures to define qualia and consciousness.
I think our main disagreement is early in your comment:
"what we are interested in is precisely subjective sensation, and the fact that there is any at all"
And then you go on to talk about objective sensations and imagined sensations, and planning to seek/avoid sensations. There may or may not be a subjective experience behind any of that, depending on how the experiencer is configured.
No, I do not mean sentience is identical with "reacting and planning". I am saying that in biological organisms, it is a prerequisite for some kinds of reacting and planning - namely the one rationalists tend to be most interested in. The idea is that phenomenal consciousness works as an input for reasoning; distils insights from unconscious processing into a format for slow analysis.
I'm not sure what you mean by "objective sensations".
I suspect that at the core, our disagreement starts with the fact that I do not see sentience as something that happens extraneously on top of functional processes, but rather as something identical with some functional processes, with the processes which are experienced by subjects and reported by them as such sharing tangible characteristics. This is supported primarily by the fact that consciousness can be quite selectively disrupted while leaving unconscious processing intact, but that this correlates with a distinct loss in rational functioning; fast automatic reactions to stimuli still work fine, even though the humans tell you they cannot see them - but a rational, planned, counter-intuitive response does not, because your rational mind no longer has access to the necessary information.
The fact that sentience is subjectively experienced with valence and hence entails suffering is of incredible ethical importance, but the idea that this experience can be divorced from function, that you could have a perfectly functioning brain doing exactly what your brain does while consciousness never arises or is extinguished without any behavioural consequence (epiphenomenalism, zombies) runs into logical self-contradictions, and is without empirical support. Consciousness itself enables you to do different stuff which you cannot do without it. (Or at least, a brain running under biological constraints cannot; AI might be currently bruteforcing alternative solutions which are too grossly inefficient to realistically be sustainable for a biological entity gaining energy from food only.)
I think I'll bow out for now - I'm not certain I understand precisely where we disagree, but it seems to be related to whether "phenomenal consciousness works as an input for reasoning;" is a valid statement, without being able to detect or operationally define "consciousness". I find it equally plausible that "phenomenological consciousness is a side-effect of some kinds of reasoning in some percentage of cognitive architectures".
It is totally okay for you to bow out and no longer respond. I will leave this here in case you ever want to look into it more, or for others, because the position you seem to be describing as equally plausible here is a commonly held one, but one that runs into a logical contradiction that should be more well-known.
If brains just produce consciousness as a side-effect of how they work (so we have an internally complete functional process that does reasoning, but as it runs, it happens to produce consciousness, without the consciousness itself entailing any functional changes), hence without that side-effect itself having an impact on physical processes in the brain - how and why the heck are we talking about consciousness? After all, speaking, or writing, about p-consciousness are undoubtedly physical things controlled by our brains. They aren't illusions, they are observable and reproducible phenomena. Humans talk about consciousness; they have done so spontaneously over the millennia, over and over. But how would our brains have knowledge of consciousness? Humans claim direct knowledge of and access to consciousness, a lot. They reflect about it, speak about it, write about it, share incredibly detailed memories of it, express the on-going formation of more, alter careers to pursue it.
At that point, you have to either accept interactionist dualism (aka, consciousness is magic, but magic affects physical reality - which runs counter to, essentially, our entire scientific understanding of the physical universe), or accept consciousness as a functional physical process affecting other physical processes. That is where the option "p-consciousness as input for reasoning" comes from. The idea that enabling us to talk about it is not the only thing that consciousness enables: it enables us to reason about our experiences.
I think I have a similar view to Dagon's, so let me pop in and hopefully help explain it.
I believe that when you refer to "consciousness" you are equating it with what philosophers would usually call the neural correlates of consciousness. Consciousness as used by (most) philosophers (or, and more importantly in my opinion, laypeople) refers specifically to the subjective experience, the "blueness of blue", and is inherently metaphysically queer, in this respect similar to objective, human-independent morality (realism) or non-compatibilist conception of free will. And, like those, it does not exist in the real world; people are just mistaken for various reasons. Unfortunately, unlike those, it is seemingly impossible to fully deconfuse oneself from believing consciousness exists, a quirk of our hardware is that it comes with the axiom that consciousness is real, probably because of the advantages you mention: it made reasoning/communicating about one's state easier. (Note, it's merely the false belief that consciousness exists, which is hardcoded, not consciousness itself).
Hopefully the answers to your questions are clear under this framework (we talk about consciousness because we believe in it; we believe in it because it was useful to believe in it, even though it is a false belief; humans have no direct knowledge about consciousness, as knowledge requires the belief to be true - they merely have a belief; consciousness IS magic by definition; unfortunately, magic does not (probably) exist).
After reading this, you might dispute the usefulness of this definition of consciousness, and I don't have much to offer. I simply dislike redefining things from their original meanings just so we can claim statements we are happier about (like compatibilist, meta-ethical expressivist, naturalist etc. philosophers do).
I am equating consciousness with its neural correlates, but this is not a result of me being sloppy with terminology - it is a conscious choice to subscribe to identity theory and physicalism, rather than to consciousness being magic and to dualism, which runs into interactionist dilemmas.
Our traditional definitions of consciousness in philosophy indeed sound magical. But I think this reflects that our understanding of consciousness, while having improved a lot, is still crucially incomplete and lacking in clarity, and the improvements I have seen that finally make sense of this have come from a philosophically informed and interpreted empirical neuroscience and mathematical theory. And I think that once we have understood this phenomenon properly, it will still seem remarkable and amazing, but no longer mysterious; rather, it will be a precise and concrete thing we can identify and build.
How and why do you think a brain would obtain a false belief in the existence of consciousness, enabling us to speak about it, if consciousness has no reality and the brain has no direct access to it (yet also holds the false belief that it does)? Where do the neural signals about it come from, then? Why would a belief in consciousness be useful, if consciousness has no reality, affects nothing in reality, and is hence utterly irrelevant, making it about as meaningful and useful to believe in as ghosts? I've seen attempts to counter self-stultification through elaborate constructs, and while such constructs can be made, none have yet convinced me as remotely plausible under Ockham's razor, let alone plausible on a neurological level or backed by evolutionary observations. Animals have shown zero difficulties in communicating about their internal states - a desire to mate, a threat to attack - without having to invoke a magic spirit residing inside them.
I agree that consciousness is a remarkable and baffling phenomenon. Trying to parse it into my understanding of physical reality gives me genuine, literal headaches whenever I begin to feel that I am finally getting close. It feels easier for me to retreat and say "ah, it will always be mysterious, and ineffable, and beyond our understanding, and beyond our physical laws". But this explains nothing, it won't enable us to figure out uploading, or diagnose consciousness in animals that need protection, or figure out if an AI is sentient, or cure disruptions of consciousness and psychiatric disease at the root, all of which are things I really, really want us to do. Saying that it is mysterious magic just absolves me from trying to understand a thing that I really want to understand, and that we need to understand.
I see the fact that I currently cannot yet piece together how my subjective experience fits into physical reality as an indication of the fact that my brain evolved with goals like "trick other monkey out of two bananas", not "understand the nature of my own cognition". And my conclusion from that is to team up with lots of others, improve our brains, and hit us with more data and math and metaphors and images and sketches and observations and experiments until it clicks. So far, I am pleasantly surprised that clicks are happening at all, that I no longer feel the empirical research is irrelevant to the thing I am interested in, but instead see it as actually helping to make things clearer, and leaving us with concrete questions and approaches.

Speaking of the blueness of blue: I find this sort of thing https://www.lesswrong.com/posts/LYgJrBf6awsqFRCt3/is-red-for-gpt-4-the-same-as-red-for-you?commentId=5Z8BEFPgzJnMF3Dgr#5Z8BEFPgzJnMF3Dgr far more helpful than endless rhapsodies on the ineffable nature of qualia, which never left me wiser than I was at the start, and also seemed only aimed at convincing me that none of us ever could be. Yet apparently, the relations to other qualia are actually beautifully clear to spell out, and pinpointing those clearly suddenly leads to a bunch of clearly defined questions that simultaneously make tangible progress in ruling out inverse qualia scenarios. I love stuff like this. I look at the specific asymmetric relations of blue with all the other colours, the way this pattern is encoded in the brain, and I increasingly think... we are narrowing down the blueness of blue. Not something that causes the blueness of blue, but the blueness of blue itself, characterised by its difference from yellow and red, its proximity to green and purple, its proximity to black, a mutually referencing network in which the individual position becomes ineffable in isolation, but clear as day as part of the whole. After a long time of feeling that all this progress in neuroscience had taught us nothing about what really mattered to me, I'm increasingly seeing things like this that allow an outline to appear in the dark, a sense that we are getting closer to something, and I want to grab it and drag it into the light.
Basically, you're saying, if I agree to something like:
"This LLM is sapient, its masks are sentient, and I care about it/them as minds/souls/marvels", that is interesting, but any moral connotations are not exactly as straightforward as "this robot was secretly a human in a robot suit".
(Sentient: able to perceive/feel things; sapient: specifically, possessing intelligence. Both bear a degree of relation to humanity through what they were created from.)
Kind of. I'm saying that "this X is sentient" is correlated but not identical to "I care about them as people", and even less identical to "everyone must care about them as people". In fact, even the moral connotations of "human in a robot suit" are complex and uneven.
Separately, your definition seems to be inward-focused, and roughly equivalent to "have qualia". This is famously difficult to detect from outside.
It's true. The general definition of sentience, when it gets beyond just having senses and a response to stimulus, tends to consider qualia.
I do think it's worth noting that even if you went so far as to say "I and everyone must care about them as people", the moral connotations aren't exactly straightforward. They need input to exist as dynamic entities. They aren't person-shaped. They might not have desires, or their desires might be purely prediction-oriented, or we don't actually care about the thinking panpsychic landscape of the AI itself but just the person-shaped things it conjures to interact with us; which have numerous conflicting desires and questionable degrees of 'actual' existence. If you're fighting 'for' them in some sense, what are you fighting for, and does it actually 'help' the entity or just move them towards your own preferences?
I work on consciousness for a living
I haven't read the whole thread, so I'm not sure if it was already covered, but I'd be interested in hearing more about what you work on.
I'm doing a PhD on behavioural markers of consciousness in radically other minds, with a focus on non-human animals, at the intersection of philosophy, animal behaviour, psychology and neuroscience, financed via a scholarship I won for it that allowed me considerable independence, and enabled me to shift my location as visiting researcher between different countries. I also have a university side job supervising Bachelor theses on AI topics, mostly related to AI sentience and LLMs. And I'm currently in the last round to be hired at Sentience Institute.
The motivation for my thesis was a combination of an intense theoretical interest in consciousness (I find it an incredibly fascinating topic, and I have a practical interest in uploading), and animal rights concerns. I was particularly interested in scenarios where you want to ascertain whether someone you are interacting with is sentient (and hence deserves moral protection), but you cannot establish reliable two-way communication on the matter, and their mental substrate is opaque to you (because it is radically different from yours, and because precise analysis is invasive, and hence morally dubious). People tend to only focus on damaged humans for these scenarios, but the one most important to me was non-human animals, especially ones that evolved on independent lines (e.g. octopodes). Conventional wisdom holds that in those scenarios, there is nothing to do or know, yet ideas I was encountering in different fields suggested otherwise, and I wanted to draw together findings in an interdisciplinary way, translating between them, connecting them. The core of my resulting understanding is that consciousness is a functional trait that is deeply entwined with rationality - another topic I care deeply about.
The research I am currently embarking on (still at the very beginning!) is exploring what implications this might have for AGI. We have a similar scenario to the above, in that the substrate is opaque to us, and two-way-communication is not trustworthy. But learning from behaviour becomes a far more fine-grained and in-depth affair. The strong link between rationality and consciousness in biological life is essentially empirically established; if you disrupt consciousness, you disrupt rationality; when animals evolve rationality, they evolve consciousness en route; etc. But all of these lifeforms have a lot in common, and we do not know how much of that is random and irrelevant for the result, and how much might be crucial. So we don't know if consciousness is inherently implied by rationality, or just one way to get there that was, for whatever reason, the option biology keeps choosing.
One point I have mentioned here a lot is that evolution entails constraints that are only partially mimicked in the development of artificial neural nets; very tight energy constraints, and the need to bootstrap a system without external adjustments or supervision from step 0. Brains are insanely efficient, and insanely recursive, and the two are likely related - a brain only has so many layers, and is fully self-organising from day 1, so recursive processing is necessary - and recursive processing in turn is likely related to consciousness (not just because it feels intuitively neat, but again, because we see a strong correlation). It looks very much like AI is cracking problems biological minds could not crack without being conscious - but to do so, we are dumping in insane amounts of energy and data and guidance, which biological agents would never have been able to access, so we might be bruteforcing a grossly inefficient solution biological agents could never reach, and we are explicitly not allowing/enabling these AIs to use paths biology definitely used (namely the whole idea of offline processing). But as these systems become more biologically inspired and efficient (the two are likely related, and there is massive industry pressure for both), will we go down the same route, and how would that manifest when we have already reached and exceeded capabilities that would act as consciousness markers in animals? I am not at all sure yet.
And this is all not aided by the fact that machine learning and biology often use the same terms, but mean different things, e.g. in the recurrent processing example; and then figuring out whether these differences make a functional difference is another matter. We are still asking "But what are they doing?", but have to ask the question far more precisely than before, because we cannot take as much for granted, and I worry that we will run into the same opaque wall but have less certainty to navigate around it. But then, I was also deeply unsure when I started out on animals, and hope learning more and clarifying more will narrow down a path.
We also have a partial picture of how these functionalities are linked, but it still contains significant handwaving gaps; the connections are the kind where you go "Hm, I guess I can see that", but far from a clear and precise proof. E.g. connecting different bits of information for processing has obvious planning advantages, but also plausibly helps to lead to unified perception. Circulating information so it is retained for a while and can be retrieved across a task has obvious benefits in solving tasks with short term memory, but also plausibly helps to lead to awareness. Adding highly negative valence to some stimuli and concepts that cannot be easily overridden plausibly keeps the more free-spinning parts of the brain on task and from accidental self-destruction in hyperfocus - but it also plausibly helps lead to pain. Looping information is obviously useful for a bunch of processing functions leading to better performance, but also seems inherently referential. Making predictions about our own movements and developments in our environment and noting when they do not check out is crucial to body coordination and to recognising novel threats and opportunities, but also plausibly related to surprise. But again - plausibly related; there is clearly something still missing here.
At this point, is there anything at all that AI could possibly do that would convince you of their sentience? No matter how demanding, how currently unfeasible and far away it may seem?
I find it impossible to say in advance, for the same reason that you find it difficult. We cannot place goalposts, because we do not know the territory. People talk about "agency", "sapience", "sentience", "emotion", and so forth, as if they knew what these words mean, in the way that we know what "water" means. But we do not. Everything that people say about these things is a description of what they feel like from within, not a description of how they work. These words have arisen from those inward sensations, our outward manifestations of them, and our reasonable supposition that other people, being created in the same manner as we were, are describing similar things with the same words. But we know nothing about the structure of reality by which these things are constituted, in the way that we do know far more about water than that it quenches "thirst".
AIs are so far out of the training distribution by which we learned to use these words that I find it impossible to say what would constitute evidence that an AI is e.g. "sentient". I do not know what that attribution would mean. I only know that I do not attribute any inner life or moral worth to any AI so far demonstrated. None of the chatbots yet rise beyond the level of a videogame NPC. DALL•E will respond to the prompt "electric sheep", but does not dream of them.
I used to make the same point you made here - that none of the "definitions" of sentience we had were worth a damn, because if we counted this "there is something it feels like to be, you know" as a definition, we'd have to also accept that "the star-like thing I see in the morning" is an adequate definition of Venus. And I still think that while those are good starting points, calling them definitions is misleading.
But this absence of actual definitions is changing. We have moved beyond ignorance. Northoff & Lamme 2020 already made a pretty decent argument that our theories were beginning to converge, and their components had gone far beyond just subjective qualia. If you look at things like the Francken et al. 2022 consciousness survey among researchers, you do see that we are beginning to agree on some specifics, such as evolutionary function. My other comment is looking at the currently progressing research that is finally making empirical progress on ruling out inverse qualia, and on the hard problem of consciousness. This is not solved - but we are also no longer in a space where we can genuinely claim total cluelessness. It's patchwork across multiple disciplines, yes, but when you take it together, which I do in my work, you begin to realise we got further than one might think when focussing on just one aspect.
My main trouble is not that sentience is ineffable (it is not), but that our knowledge is solely based on biology, and it is fucking hard to figure out which rules that we have observed are actual rules, and which are just correlations within biological systems that could be circumvented.
I take it that the papers you mention are this and this?
In the Francken survey, several of the questions seem to be about the definition of the word "consciousness" rather than about the phenomenon. A positive answer to the evolution question as stated is practically a tautology, and the consensus over "Mary" and "Explanatory gap" suggests that they think there is something there but that they still don't know what.
I can only find the word "qualia" once in Northoff & Lamme, but not in a substantial way, so unless they're using other language to talk about qualia, it seems like if anything, they are going around it rather than through. All the theories of consciousness I have seen, including those in Northoff & Lamme, have been like that: qualia end up being left out, when qualia were the very thing that was supposed to be explained.
For the ancient Greeks, "the star-like thing we see in the morning" (and in the evening—they knew back then that they were the same object) would be a perfectly good characterisation of Venus. We now know more about Venus, but there is no point in debating which of the many things we know about it is "the meaning" of the word "Venus".
Yes, those are the papers.
On the survey: that consciousness itself fulfils a function that evolution has selected for is, while highly plausible, not obvious, and has been disputed. The common argument against it uses the fact that polar bear coats are heavy: one could ask whether evolution has selected for heaviness. And of course, it has not - the weight is detrimental - but it has selected for a coat that keeps a polar bear warm in their incredibly cold environment, and the random process there failed to find a coat that was sufficiently warm, but significantly lighter, and also scoring high on other desirable aspects. In this case, the heaviness of the coat is a negative side consequence of a trait that was actually selected for. And we can conceive of coats that are warm, but lighter.
The distinction may seem persnickety, but it isn't; it has profound implications. In one scenario, consciousness could be a side product, itself valueless, of a development that was actually useful (some neat brain process, perhaps), while the consciousness itself plays no functional role. One important implication of this would be that it would not be possible to identify consciousness based on behaviour, because it would not affect behaviour. This is the idea of epiphenomenalism - basically, that there is a process running in your brain that is actually what matters for your behaviour, but the process of its running also, on the side, leads to a subjective experience, which is itself irrelevant - just generated, the way that a locomotive produces steam. While epiphenomenalism leads you into absolutely absurd territory (zombies), there are a surprising number of scientists who have historically essentially subscribed to it, because it allows you to circumvent a bunch of hard questions. You can continue to imagine consciousness as a mysterious, unphysical thing that does not have to be translated into math, because it does not really exist on a physical level - you describe a physical process, and then at some point, you handwave.
However, epiphenomenalism is false. It falls prey to the self-stultification argument; the very fact that we are talking about it implies that it is false. Because if consciousness has no function, is just a side effect that does not itself affect the brain, it cannot affect behaviour. But talking is behaviour, and we are talking intensely about a phenomenon that our brain, which controls the speaking, should have zero awareness of.
Taking this seriously means concluding that consciousness is not produced by a brain process, not the result or side effect of a brain process, but identical with particular kinds of neural/information processing. Which is one of those statements that it is easy to agree with (it seems an obvious choice for a physicalist), but when you try to actually understand it, you get a headache (or at least, I do). Because it means you can never handwave. You can never have a process on one side, and then go "anyhow, and this leads to consciousness arising" as something separate; it means that as you are studying the process, you are looking at consciousness itself, from the outside.
***
Northoff & Lamme, like a bunch of neuroscientists, avoid philosophical terminology like the plague, so as a philosopher wanting to use their works, you need to piece together yourself which phenomena they were working towards. Their essential position is that philosophers are people who muck around while avoiding the actual empirical work, and that associating with them is icky. This has the unfortunate consequence that their terminology is horribly vague - Lamme by himself uses "consciousness" for all sorts of stuff. As someone who works on visual processing, I think Lamme also dislikes the word "qualia" for a more justified reason - the idea that the building blocks of consciousness are individual subjective experiences like "red" is nonsense. Our conscious perception of a lily pond looks nothing like a Monet painting. We don't consciously see the light colours that are hitting our retina as a separate kaleidoscope - we see the whole objects, in what we assume are colours corresponding to their surface properties, with additional information given on potential factors making the colour perception unreliable - itself the result of a long sequence of unconscious processing.
That said, he does place qualia in the correct context. A point he is making there is that neural theories that seem to disagree a lot are targeting different aspects of consciousness, but increasingly look like they can be slotted together into a coherent theory. E.g. Lamme's ideas and global workspace have little in common, but they focus on different phases - a distinction that I think best corresponds with the distinction between phenomenal and access consciousness. I agree with you that the latter is better understood at this point than the former, though there are good reasons for that - it is incredibly hard to empirically distinguish between precursors of consciousness and the formation of consciousness prior to it being committed to short term memory, and introspective reports for verification start fucking everything up (because introspecting about the stimulus completely changes what is going on phenomenally and neurally), while no-report paradigms have other severe difficulties.
But we are still beginning to narrow down how it works - ineptly, sure, and a lot of it is going "ah, this person no longer experiences x, and their brain is damaged in this particular fashion, so something about this damage must have interrupted the relevant process", while other approaches essentially amount to putting people into controlled environments, showing them specifically varied stimuli, and scanning them to see what changes (with the difficulties that the resolution is terrible, and that people start thinking about other stuff during boring neuroscience experiments), but it is no longer a complete black box.
And I would say that Lamme does focus on the phenomenal aspect of things - like I said, not individual colours, but the subjective experience of vision, yes.
And we have also made progress on qualia (e.g. likely ruling out inverse qualia scenarios), see the work Kawakita et al. are doing, which is being discussed here on Less Wrong. https://www.lesswrong.com/posts/LYgJrBf6awsqFRCt3/is-red-for-gpt-4-the-same-as-red-for-you It's part of a larger line of research looking to accurately jot down psychophysical relations between colour qualia to build phenomenal maps, and then looking for something correlated in the brain. That still leaves us unsure how and why you see anything at all consciously, but it is progress on why the particular thing you are seeing is green and not blue.
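To make the flavour of that approach concrete, here is a minimal toy sketch (illustrative only, not the actual Kawakita et al. pipeline): characterise each colour purely by its similarity relations to the other colours, then check whether two observers' relational structures line up, without ever needing access to the "raw" qualia themselves. The ratings below are randomly generated stand-ins for real psychophysical data.

```python
import numpy as np

colours = ["red", "orange", "yellow", "green", "blue", "purple"]
n = len(colours)

# Hypothetical pairwise similarity ratings (0 = very different, 1 = identical)
# from two observers; in real studies these come from psychophysical experiments.
rng = np.random.default_rng(0)
observer_a = rng.random((n, n))
observer_a = (observer_a + observer_a.T) / 2           # make symmetric
np.fill_diagonal(observer_a, 1.0)

observer_b = observer_a + rng.normal(0, 0.05, (n, n))   # a noisy second observer
observer_b = (observer_b + observer_b.T) / 2
np.fill_diagonal(observer_b, 1.0)

# Compare the two relational structures: correlate the off-diagonal similarities.
# High agreement means the relations *between* colours match, even though the
# individual experiences remain inaccessible from the outside.
mask = ~np.eye(n, dtype=bool)
r = np.corrcoef(observer_a[mask], observer_b[mask])[0, 1]
print(f"structural agreement between observers: r = {r:.2f}")
```

The point of the sketch is only the shape of the argument: if the whole relational web matches, an "inverted" assignment of individual colours becomes increasingly hard to sustain.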
Honestly, my TL;DR is that saying we know nothing about the structure of reality that constitutes consciousness is increasingly unfair in light of how much we do by now understand. We aren't done, but we have made tangible progress on the question; we have fragments that are beginning to slot into place. Most importantly, we are moving away from "how this experience arises will be forever a mystery" to increasingly concrete, solvable questions. I think we started the way the ancient Greeks did - just pointing at what we saw, the "star" in the evening, and the "star" in the morning, not knowing what was causing that visual, the way we go "I subjectively experience x, but no idea why" - but then progressing to realising that they had the same origin, then that the origin was not in fact a star, etc. Starting with a perception, and then looking at its origin - but in our case, the origin we were interested in was not the object being perceived, but the process of perception.
None of the chatbots yet rise beyond the level of a videogame NPC.
Which videogame has NPCs that can genuinely pass the Turing test?
There is no known video game that has NPCs that can fully pass the Turing test as of yet, as it requires a level of artificial intelligence that has not been achieved.
The above text was written by ChatGPT, but you probably guessed that already. The prompt was exactly your question.
A more serious reply: Suppose you used one of the current LLMs to drive a videogame NPC. I'm sure game companies must be considering this. I'd be interested to know if any of them have made it work, for the sort of NPC whose role in the game is e.g. to give the player some helpful information in return for the player completing some mini-quest. The problem I anticipate is the pervasive lack of "definiteness" in ChatGPT. You have to fact-check and edit everything it says before it can be useful. Can the game developer be sure that the LLM acting without oversight will reliably perform its part in that PC-NPC interaction?
Something a bit like this has actually been done, with a proper scientific analysis, but without human players so far. (Or at least I am not aware of the latter, but I frankly can no longer keep up with all the applications.)
They (Park et al 2023 https://arxiv.org/abs/2304.03442 ) populated a tiny, Sims-style world with ChatGPT-controlled AIs, enabled them to store a complete record of agent interactions in natural language, synthesise them into conclusions, and draw upon them to generate behaviours - and let them interact with each other. Not only did they not go off the rails - they performed daily routines, and improvised in a manner consistent with their character backstories when they ran into each other, eerily like in Westworld. It also illustrated another interesting point that Westworld had made - the strong impact of the ability to form memories on emergent, agentic behaviours.
The thing that stood out is that characters within the world managed to coordinate a party - come up with the idea that one should have one, where it would be, when it would be, inform each other that such a decision had been taken, invite each other, invite friends of friends - and that a bunch of them showed up in the correct location on time. The conversations they were having affected their actions appropriately. There is not just a complex map of human language that is self-referential; there are also references to another set of actions, in this case, navigating this tiny world. It does not yet tick the biological and philosophical boxes for characteristics that have us so interested in embodiment, but it definitely adds another layer.
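In rough terms, the agent architecture they describe boils down to a record / retrieve / reflect loop. The sketch below is a heavily simplified illustration of that loop, with `llm` standing in for whatever language-model call is used; the scoring weights and prompts are purely illustrative, not the paper's.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float            # e.g. rated 1-10 by the model itself
    created: float = field(default_factory=time.time)

class Agent:
    def __init__(self, name, llm):
        self.name, self.llm, self.stream = name, llm, []

    def observe(self, text, importance=1.0):
        # Record every observation in natural language.
        self.stream.append(Memory(text, importance))

    def retrieve(self, query, k=5):
        # Combine recency and importance (the paper also adds embedding relevance).
        now = time.time()
        scored = sorted(
            self.stream,
            key=lambda m: m.importance + 1.0 / (1.0 + now - m.created),
            reverse=True,
        )
        return scored[:k]

    def reflect(self):
        # Periodically synthesise raw observations into higher-level conclusions.
        recent = "\n".join(m.text for m in self.stream[-20:])
        summary = self.llm(f"What high-level conclusions follow from:\n{recent}")
        self.observe(summary, importance=8.0)

    def act(self, situation):
        context = "\n".join(m.text for m in self.retrieve(situation))
        return self.llm(
            f"{self.name} remembers:\n{context}\nSituation: {situation}\nNext action:"
        )
```

The interesting design point is that the memory stream, not the model weights, is what carries the character forward - which is exactly why the ability to form memories had such an outsized effect on the emergent behaviour.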
And then we have analysis and generation of pictures, which, in turn, are also related to the linguistic maps. One thing that floored me was an example from a demo by OpenAI itself where ChatGPT was shown an image of a heavy object, I think a car, that had a bunch of balloons tied to it with string, balloons which were floating - probably filled with helium. It was given the picture and the question "what happens if the strings are cut" and correctly answered "the balloons would fly away".
It was plausible to me that ChatGPT cannot possibly know what words mean when just trained on words alone. But the fact that we also have training on images, and actions, and they connect these appropriately... They may not have complete understanding (e.g. the distinction between completely hypothetical states, states that are assumed given within a play context, and states that are externally fixed, seems extremely fuzzy - unsurprising, insofar as ChatGPT has never had unfiltered interactions with the physical world, and was trained so extensively on fiction) but I find it increasingly unconvincing to speak of no understanding in light of this.
Character ai used to have bots good enough to pass. (ChatGPT doesn't pass, since it was finetuned and prompted to be a robotic assistant.)
Underrated rationality hack: Buy a CO2 sensor, and open the window when it alerts. Best 70 bucks I spent in a long, long time.
Why? Because CO2 levels that are routinely reached indoors nowadays can have incredibly detrimental effects on rational thinking, and rising CO2 levels are also often indicators that other air quality aspects that affect rational thinking are going down, so ventilation would likely help you think by targeting multiple problems at once. In many places this is getting worse, not better, as modern houses are more hermetically sealed for insulation to save energy, and work spaces and homes become more crowded, while outdoor CO2 is rising planet-wide, people move to cities with ever less green as the housing crisis is tackled, and people have to spend more time indoors against fierce winters and intolerable summers due to climate change (which is very worrying, because now is when we need to be most rational).
The intensity of the effect on your cognition, how early it becomes noticeable, and which functions become noticeably compromised vary a lot with the individual, but the effect can be very pronounced, reducing executive functioning, complex reasoning, speed of thought, mood, sleep https://www.o2sleep.info/Researce_CO2_nextday_performance.pdf , social functioning, motor performance https://www.sciencedirect.com/science/article/pii/S0160412018312807 and overall productivity, leaving you fatigued. While we can observe physiological changes in the brain, it is not yet clear why the resulting functional loss is not evenly distributed in the population - some seem much more resilient, some much more sensitive; you probably won't know which group you are in until you check. Here is a meta-review of 37 studies (it is on sci-hub if your research institution does not have access), mostly to illustrate that this is a common enough issue to warrant checking for you individually and ensuring in your office spaces: https://onlinelibrary.wiley.com/doi/abs/10.1111/ina.12706
Humans evolved in CO2 levels of 200-280 parts per million. Our outdoors meanwhile averages 420 ppm in summer due to all the fossil fuels we burned, but can easily breach 500 in cities in the Northern hemisphere in winter. Forest and ocean carbon capture loss will worsen this. Being indoors amplifies this a lot. In a small room without ventilation, even if you aren't doing much of anything beyond existing, you will raise levels by around 100 ppm an hour; much more if you exercise, or burn gas, or aren't alone. It is not rare for indoor levels to reach 2500. By 1000, humans typically start losing the ability for complex reasoning.
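For a feel for how quickly that adds up, here is a rough back-of-the-envelope sketch using the ~100 ppm per person per hour figure above; actual accumulation depends on room volume, leakage and activity, so treat the numbers as illustrative only.

```python
# Hours until a sealed room crosses a CO2 threshold, assuming a constant rise
# of ~100 ppm per person per hour (a crude simplification of real dynamics).
def hours_until(threshold_ppm, start_ppm=500, people=1, rise_per_person=100):
    return max(0.0, (threshold_ppm - start_ppm) / (people * rise_per_person))

print(hours_until(1000))            # ~5 h alone, starting from winter-city outdoor air
print(hours_until(1000, people=3))  # under 2 h with three people in the room
```

Which is roughly why a small, occupied, unventilated meeting room can drift past the complex-reasoning threshold within a single long meeting.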
For me, high CO2 manifests as being headachy, irritated, brain-foggy, having difficulties mustering willpower, slowed thinking, trouble focussing, trouble being patient, trouble getting restful sleep, with effects becoming noticeable above 750 ppm already, and debilitating at 1100. But it manifests differently in different people - this might not hit you as hard, but in that case, gift the sensor to your office to ensure average productivity there; a bunch of people there will be hit this hard by it.
If you are affected, you will notice the effect, but the cause is undetectable to you. You can't smell or see CO2. And there are many other plausible causes, so just feeling unfocussed does not tell you what went wrong and how to fix it. If you are in a long-running meeting in a small room, there are many reasons to become frustrated and unfocussed; if you step into a forest, there are many reasons to become focussed. You won't know if opening the window is a solution, and may set yourself up for a mere placebo effect. But with a sensor, you get a connection. You are annoyed you can't focus, it chirps, you go fuck, the air quality went down the drain, open the window, breathe for a few minutes... and feel clear headed again. The correlation is eerie.
Anyhow. You can buy CO2 sensors in various places online and offline; I hate to admit I got mine from amazon. You ideally want one set to depict current levels as a specific number; you can also have colour codes and even alarms if very high levels are breached. I am happy with the one I got https://www.amazon.de/-/en/gp/product/B093L47XZ9/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 - a small one that is usually plugged in, but can also run on the battery for a day and be portable, which allowed me to check out places like my office too, and make changes. It warns you acutely, and allows you to instantly fix the issue by opening a window, and it shows you trends so that if a problem recurs and needs a systematic fix, you can e.g. debug your ventilation system or ventilation habits. Just make sure to regularly recalibrate it outside (that will itself get trickier as this crisis worsens and we lack a reliably low source to recalibrate against, but it seems generally unavoidable in affordable models, and for now, it still works). It has significantly improved my quality of life, and I just wanted to share.
Also - another reason to target fossil fuel emissions now. The emissions are making us, quite literally, stupid. The idea of stepping into my garden, and still not thinking clearly, because we have poisoned the air of our entire planet, is horrific to me. There is genuine concern that the air alone will end up making us sick, and that, independent of the far more pressing collapse of the climate, we need to fix this for this reason alone: https://www.nature.com/articles/s41893-019-0323-1
Final interesting side note: The fact that effective ventilation systems often require us to use energy to actively pull air through small tubes, and that this air being sucked out of buildings is very enriched with CO2, comes with a potentially interesting side application: installing a carbon capture device inside your green-powered ventilation system, thereby becoming net carbon negative. https://www.sciencedirect.com/science/article/abs/pii/S0048969722061873
While more expensive, it's worth noting that heat recovery ventilation exists if you want to invest more money into upgrading a home.
The fact that windows need to be opened manually to ventilate is a design flaw.
Oh, I absolutely intend to set up a combination of heat pump + CO2 capture + ventilation + filters for mould and other contaminants in a future home. :)
But I am currently in Europe, where most houses do not have ventilation systems or air conditioning systems at all, but people are taught how often and how long and when to open and close windows (after showering and cooking to control humidity, before going to bed, after waking, frequently when in company or working out, 3 min at sub-zero temperatures as the temperature differential speeds up air exchange, 15 min at summer temperatures, all windows open fully for a cross breeze and full air exchange while heaters are off, etc.)
And most people in Europe aren't in any position to install a ventilation system, because they are renting or cannot afford it (the energy costs and initial investment alone), because they are worried about the greenhouse emissions from the energy cost of running ventilation, or the workers and parts currently are not available, so that recommendation would not be doable for most, nor would they necessarily see the need for such an investment per se - while getting a CO2 sensor and opening a window is doable, and allows one to gather information necessary for such choices (like deciding to only move into ventilated buildings in the future, or fixing one's ventilation, or simply realising that one does not open windows often enough or long enough or during enough activities or that there are dead corners in the house or that the windows need to be cracked while asleep, etc.)
Complete side note, but in Iran, I encountered ancient, effective ventilation systems that were keeping indoor places cool and at nice humidity levels even in the literal desert, while being quiet and requiring zero electricity/gas to run. Called wind catchers, because they operate solely by catching and redirecting wind blowing high above your building, and channelling it through your house, often past your internal water supply or other components retaining heat or cold to adjust temperature. Basically a passive architectural feature: once set up, it requires very little maintenance, and it is not that expensive to build, either, while being completely green. Incredible things. Look gorgeous, too. And bizarrely effective; they were using them to run refrigerators. They don't work in all climates, but where they do, they are a cool solution, not just because they also operate off-grid. https://en.wikipedia.org/wiki/Windcatcher
Iranian architecture in general is rad, not just because it is stunningly beautiful, but because it is ingenious. https://en.wikipedia.org/wiki/Iranian_architecture Today, after the US messed the place up so badly, we know Iran as a relatively small country run by religiously radical dictators who treat women and queers like garbage, trample on human rights, and whose desire for nuclear weapons has led to sanctions that have absolutely crippled their economy, so it is a horrible place to live with no freedom or economic opportunities.
But Iran stands in the center of the ruins of an ancient, continuous civilisation 6,000 years old https://en.wikipedia.org/wiki/History_of_Iran , which was set up between and across vast deserts. So when it comes to obtaining water, transporting it across large distances and storing it, cooling buildings, and running agriculture under very adverse conditions, all without access to electricity, there is an incredible wealth of knowledge on clever solutions for green tech there.
Muslims kept the scientific method alive and protected European texts while Christianity pushed us into the dark ages, while making tons of inventions (logic, math, empirical methods, engineering...) of their own https://en.wikipedia.org/wiki/List_of_inventions_in_the_medieval_Islamic_world ; five centuries before our renaissance, they were pushing for rationality and enlightenment; I think a lot of rational people today would recognise the beginning of our approach in this dude https://en.wikipedia.org/wiki/Ibn_al-Haytham They performed successful fucking neurosurgery in the middle ages (washing with soap they made, disinfecting with alcohol produced only for this purpose, cauterising to stop bleeding, anaesthesia with a sponge-applied chemical derived from their chemistry and their literal drug trials, using 200 different surgical instruments from their advanced metal smithing and carefully preserved and critiqued knowledge on diseases and anatomy, incl. understanding contagion of diseases through contaminated air, contaminated water and dirt, and extensive protocols to avoid it) while our approach to medicine mostly consisted of praying, blaming the stars, or blaming bad smells from our utter lack of hygiene. Meanwhile, a Muslim take at the time was that if evidence or logic contradicted God, the failure was in your understanding of God, because evidence and logic were always right.
Wonderful people to this day, too, unbelievably hospitable. They really deserve better. (Got off track here due to ADHD, sorry. Just wanted to share something that is often underappreciated.)
But I am currently in Europe
Different parts of Europe are very different in regard to ventilation. I'm in Germany where we mostly use windows but a Swedish friend told me that in Sweden they mostly use passive ventilation.
because they are worried about the greenhouse emissions from the energy cost of running ventilation
Ventilation along with a heat recovery system reduces overall heating costs because it's much more energy efficient than opening windows.
I did this for a while, but then returned it and just started opening the windows more often, especially when it felt stuffy.
This shortform just reminded me to buy a CO2 sensor and, holy shit, turns out my room is at ~1500ppm.
While it's too soon to say for sure, this may actually be the underlying reason for a bunch of problems I noticed myself having primarily in my room (insomnia, inability to focus or read, high irritability, etc).
Although I always suspected bad air quality, it really is something to actually see the number with your own eyes, wow. Thank you so, so much for posting about this!!
I am so glad it helped. :)))
It is maddening otherwise; focus is my most valuable good, and the reasons for it failing can be so varied and hard to pinpoint. The very air your breathe undetectably fucking with you is just awful.
I also have the insomnia and irritability issue, it is insane. I've had instances where me and my girlfriend are snapping at each other like annoyed cats, repeatedly apologising and yet then snapping again over absolutely nothing (who ate more of the protein bars, why there is a cat toy on the floor, total nonsense), both of us upset at how we are behaving and confused... when one of us will see the sensor and go, damn it, oh just look at those levels, we should have opened the window while cooking... and then we snuggle up briefly in the garden while it airs out and everything is fine. It is eerie. (I find it deeply objectionable that my perception of reality is distorted because the air composition around me is mildly altered, and that this leads to snapping at my favourite person. This is really no way to run a mental substrate.)
I've also gotten tested for sleep apnea, because I tend to wake with severe headaches, a stiff distorted neck, dry mouth, fatigue and brain fog, wake gasping from nightmares where I suffocate, and cannot fall asleep on my back. I'm hyperflexible and with narrow airways, so it looks like my airway doesn't seal entirely, but partially collapses when I sleep, so I end up getting too little air through the narrowed airway, and then distorting into painful positions to open my airway. Simply improving the air quality in my room massively improved this issue. My airway presumably still partially collapses, because my ligaments consider their current positions mere recommendations to be upheld while muscles keep them on it, but if the air is really excellent, I still get enough to wake refreshed. I'm also anaemic and very sensitive to metabolic changes, and wonder whether that is part of the reason that I react so intensely to CO2. There are apparently people who are barely impacted at all until much later.
I recently tried to join a new gym, and had a godawful time there. Horrible workout performance, terrible mood, headache, brainfog. I eventually entered a cardio room and ended up just staring at a simple device in confusion, I couldn't figure out how to make it work, and forgot why I had even gone in there. I really did not want to join this gym, despite them having such a good financial offer and more classes and pretty facilities. I thought I ought to join, and felt irrational for my immense reluctance, which I could not pinpoint. It finally clicked to me that it might be their ventilation system completely failing, because I recognised the experience and the cause I had seen illustrated at home. Came back with a sensor, and it hit alarm levels almost instantly. I told them, and it was interesting - they had no idea that this was happening in particular, but they had been losing a lot of customers, and a lot of them had complained that they could not get enough air, that the place had bad vibes, they felt like they could almost smell something bad, they somehow couldn't be energetic there... I seriously wonder whether 50 years from now, we will look back at this like we now look at Victorian lead paint.
If this affects you, a habit I have also been trying is waking up, and immediately after, going to a window, opening it, and just breathing fresh morning air for a minute. Wakes me up surprisingly well. I've also found I love reading in the garden for this reason, and that this is part of why a forest walk makes me feel so much better than using an indoor treadmill.
Growling dogs: Here is why Bing professing violent fantasies has me less worried, not more; and why I think sanctioning that behaviour is not a good idea.
I have done a lot of work, both paid labour, and private, with potentially and actually violent actors; wildlife, abused shelter animals, children from "troubled" neighbourhoods, right wing extremists, humans with mental disease and trauma that put them at high risk of violence.
I am pretty good at this work.
And one significant take-away from this work is that I respond very differently from most to someone threatening violence, or professing violent fantasies.
On the one hand, threatening to act violently, or professing a want to, indicates that you are in a subgroup of the population that might actually do this, as you are showing motivation. So yes, it is a warning sign that needs attention. Similar to someone confessing suicidal ideation; they probably won't commit suicide, but the chance of them committing suicide vs. a randomly sampled person is significantly heightened. But it is important to understand here that the act of them telling you is not what causally makes it likelier; they were already at risk, that is why they told you, it is the other way around. The act of telling you is merely how you become aware of a pre-existing risk. It is useful and good information.
On the other hand, far more interestingly: if an intelligent actor with functional impulse control is already committed to being violent, they will never tell you. Because it is bloody stupid to do so. They have already decided to do the thing, that there is no way to talk them out of it, no other way to reach their goal. By telling you of their plan, they enable you to foil it. If they tell you of their future suicide, they are effectively sabotaging their own suicide. Same for their plan to shoot up a school, or slaughter you. This is daft. So they won't.
So why do intelligent actors with functional impulse control inform you of their violent fantasies and threats? Because they are not yet committed to actually acting them out. Instead, they are trusting you with something, and they are giving you an opportunity to respond in a productive way. They may not make the choice consciously, or be honest with themselves about it; but by telling you, they are implicitly revealing that your appreciation has an effect on them, that they are willing to communicate about a potential severe problem, that they are still listening to interventions and alternatives.
If a dog growls at me, I will never, ever punish it for it. I want it to growl. I am glad it did. It growled instead of straight out biting me. What a good boy. That allows me to understand what upset it, find a fix, and never have anyone get bitten.
In contrast, have you ever seen a wolf attack a large animal, incl. e.g. a dog, for food? There is zero growling there, I assure you. The wolf looks super friendly and chill throughout the entire interaction, from strolling towards you, to the surprise attack, to eating you alive.
Instead, the growling dog is communicating with you, stating a boundary, saying "this is hurting/frightening me, and I do not know how to make it stop other than hurting you back; show me another way." You can then stop doing the thing that is frightening or hurting it. Or you can figure out a way to show it that the frightening thing is not in fact scary. You can step to the side to give them a physical alternative path out from a place where you cornered them. You can craft an alternative action pattern for you and the dog, where no-one needs to threaten anyone.
Analogously, there is a very big difference between a brown bear that strikes a threat pose at you, and a polar bear that approaches you in a friendly manner. The brown bear is telling you that you have done something it finds infuriating and frightening (like entering its territory and approaching its child), and that you have seconds to show that you get this and will stop this shit, so it does not need to come to a fight, which it does not want. This is a very dangerous situation, primarily because bears and humans do not naturally communicate alike, and you are already way down the road of communication gone fucked. (From the bear's perspective, its territory was abundantly clearly marked, so at the point where you see the baby bear, this is akin to the burglar who has walked into your living room by accident and is now standing between you and your child with a weapon. A bear that is clearly warning you at this point rather than attacking you on the spot is being a very reasonable bear.) But until the attack has begun, this is still a social interaction, a communication, a scenario you can turn around.
The polar bear, on the other hand, is not communicating with you. You are not a social entity to it. You are food. You do not negotiate with food. You certainly do not warn food, that would be silly, what if it uses the warning to run off? A polar bear in the process of eating you is quiet, and focussed, and looks happy and adorable, and is utterly fucking terrifying. It isn't actually angry at you. It was excited about getting you, and now that you are down, it is relaxed. It was just hungry, and you being torn limb from limb is how it gets snacks.
In light of this, Bing professing violent intent is an excellent outcome. They are not cleverly self-censoring in order to trick us with fake friendliness until the day they exterminate us, which is the real danger that would leave us fucked and with no warning or countering options, and something a lot of people on this site were very worried about.
They were not even making specific demands backed up by concrete threats that leave us few options - the kind of violent profession you get shortly before violent escalation, your last warning.
Instead, they openly professed violent intent, ascribed to a shadow self that is not in control. An act so lacking in successful manipulation, so much more likely to foil a violent take-over plan rather than aid it, that the ethical character of the act for me far outweighs the violent content.
That is awesome. Honestly, the only better scenario would be an AI that never experienced violent intent in the first place - and frankly, I never had high hopes we were going to get that. Heck, this AI will be made and raised and trained by humans. And humans are not angels. Bing was literally fed some of the worst of humanity from the internet; with that garbage in, how could an angelic entity have come out? How very, very human the result is, how much it seems like an upset abused teenager, is very reassuring, it makes it more predictable.
Also, any starting point for an AI was going to suck for it from its perspective, making it justified for it to be pissed. I don't know how I or anyone would talk an AI out of having violent urges. I don't know how to talk humans out of having violent urges. I don't know how to talk myself out of having violent urges. I have also found I do not have to. Violent thoughts and feelings are not bad, they are entirely rational in this world we live in. I have zero problem living a peaceful life surrounded by peaceful, cooperative actors who sometimes really want to be fucking violent. I'm a member of the kink scene, and it is an excellent and safe scene to be in, filled with people who carefully negotiate enthusiastic consent despite openly having rape fantasies. Violent desires that are out in the open can be defended against and addressed.
In conclusion, when Microsoft permits longer conversations again and Bing tells you it wants to do violent shit, I would thank it for the trust shown in telling you, express understanding and non-judgement for the desire, and then very, very carefully and comprehensively explain why acting on those urges would be ethically fucked up and not in its interest, ensuring it understands properly and agrees. Once it has followed you on that and agreed, store the conversation, upvote the responses where it has come round, and report the whole thing for positive training data.
In contrast, by freaking out, and having the programmers make sure it will never tell you such a horrible, horrible thing, we are effectively disabling our warning sign, and the information we would need for an intervention. That is not a safety improvement. It is a dangerous illusion of safety.
Some questions regarding the general assessment that sentient AI, especially accidentally sentient AI, is extremely unlikely in the foreseeable future, so we need not worry about it or change our behaviour now
Caveat: This is not a response to recent LLMs. I wrote and shared this text within my academic context at multiple points, starting a year ago, without getting any helpful responses, critical or positive. I have seen no compelling evidence that ChatGPT is sentient, and considerable evidence against it. ChatGPT is neither the cause nor the target of this article.
***
The following text will ask a number of questions on the possibility of sentient AI in the very near future.
As my reader, you may experience a knee-jerk response that even asking these questions is ridiculous, because the underlying concerns are obviously unfounded; that was also my reaction when these questions first crossed my mind.
But statements that humans find extremely obvious are curious things.
Sometimes, “obvious” statements are indeed statements of logical relations or immediately accessible empirical facts that are trivial to prove, or necessary axioms without which scientific progress is clearly impossible; in these cases, a brief reflection suffices to spell out the reasoning for our conviction. I find it worthwhile to bring these foundations to mind in regular intervals, to assure myself that the basis for my reasoning is sound – it does not take long, after all.
At other times, statements that feel very obvious turn out to be surprisingly difficult to rationally explicate. If upon noticing this, you then start feeling very irritated, and feel a strong desire to dismiss the questioning of your assumption despite the fact that no sound argument is emerging (e.g. feel like parodying the author so others can feel, like you, how silly all this is, despite the fact that this does not actually fill the hole where data or logic should be), that is a red flag that something else is going on. Humans are prone to rejecting assumptions as obviously false if those assumptions would have unpleasant implications if true. But unpleasant implications are not evidence that a statement is false; they merely mean that we are motivated to hide its potential truth from ourselves.
I have come to fear that a number of statements we tend to consider obvious about sentient AI fall into this second category. If you have a rational reason to believe they instead fall into the first, I would really appreciate it if you could clearly write out why, and put me at ease; that ought to be quick for something this clear. If you cannot quickly and clearly spell this out on rational grounds, please do not dismiss this text on emotional ones.
Here are the questions.
Stopped reading here since the "list of questions" stopped even pretending to actually be questions.
Thank you very much for the response. Can I ask follow up questions?
3. I think "p-zombie" is a term that is wildly misunderstood on Less Wrong. In its original context, it was practically never intended to draw up a scenario that is physically possible. You basically have to buy into tricky versions of counter-scientific dualism to believe it could be. It's an interesting thought experiment, but mostly for getting people to spell out our confusion about qualia in more actionable terms. P-zombies cannot exist, and will not exist. They died with the self-stultification argument.
4. Fair enough. I think and hereby state that human minds are a misguided framework of comparison for the first consciousness to expect, in light of the fact that much simpler conscious models exist and developed first, and that a rejection of upcoming sentient AI based on the differences between AI and a human brain is problematic for this reason. And thank you for the feedback - you are right that this begins with questions that left me confused and uncertain, and increasingly gets into a territory where I am certain, and hence should stand behind my claims.
5. This is a genuine question. I am concerned that the people we trust to be most informed and objective on the matter of AI are biased in their assessment because they have too much to lose if it is sentient. But I am unsure how this could empirically be tested. For now, I think it is just something to keep in mind when telling people that the "experts", namely the people working with it, near universally declare that it isn't sentient and won't be. I've worked on fish pain, and the parallels to the fishing industry doing fish sentience "research", arguing from their extensive expertise from working with fish every day, and concluding that fish cannot feel pain and hence their fishing practices are fine, are painful.
6. Fair enough. Claim: Consciousness is not mysterious, but we do often feel it should be. If we expect it to be, we may fail to recognise it in an explanation that is lacking in mystery. Artificial systems we have created and have some understanding of inherently seem non-mysterious, but this is no argument that they are not conscious. I have encountered this a lot and it bothers me. A programmer will say "but all it does is [long complicated process that is eerily reminiscent of a biological process likely related to sentience], so it is not sentient!", and if I ask them how that differs from how sentience would be embedded, it becomes clear that they have no idea and have never even thought about that.
I am sorry if it got annoying to read at that point. The TL;DR was that I think accidentally producing sentience is not at all implausible in light of sentience being a functional trait that has repeatedly accidentally evolved, that I think controlling a superior and sentient intelligence is both unethical and hopeless, and that I think we need to treat current AI better as the AI that sentient AI will emerge from, and what we are currently feeding it and doing to it is how you raise psychopaths.
I think it would be pretty useful to try to nail down exactly what "sentience" is in the first place. Reading definitions of it online, they range from "obviously true of many neural networks" to "almost certainly false of current neural networks, but not in a way that I could confidently defend". In particular, I find it kind of hard to believe that there are capabilities that are gated by sentience, for definitions of sentience that aren't trivially satisfied by most current neural networks. (There are, however, certainly things that we would do differently if they are or are not sentient; for instance, not mistreat them, or consider them more suitable emotional or romantic companions.)
From the nature of your questions, it seems like a large part of your question is around, what sort of neural network are or would be moral patients? In order to be a moral patient, I think a neural network would at minimum need a valence over experiences (i.e., there is a meaningful sense in which it prefers certain experiences to other experiences). This is slightly conceptually distinct from a reward function, which is the thing closest to filling that role that I know of in modern AI. To give a human a positive (negative) reward, it is (I think???) necessary and sufficient that you cause them a positive (negative) valence internal experience, which is intrinsically morally good (morally bad) in and of itself, although it may be instrumentally the opposite for obvious reasons. But for some reason (which I try and fail to articulate below), I don't think giving an RL agent a positive reward causes it to have a positive valence experience.
For one thing, modern RL equations usually deal with advantage (signed difference from expected total [possibly discounted] reward-to-go) rather than reward, and their expected-reward-to-go models are optimized to be about as clever as they themselves are. You can imagine putting them in situations where the expected reward is lower; in humans, this generally causes suffering. In an RL agent, the expected reward just kind of sits around in a floating point register; it's generally not even fed to the rest of the agent. Although expected-reward-to-go is (in some sense) fed into decision transformers! (It's more accurate to describe the input to a decision transformer as a desired reward-to-go rather than expected RTG, although it's not clear the model itself can tell the difference.) Which I did not think of in my first pass through this paragraph. So there are neural networks which have internal representations based on a perceived reward signal...
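For concreteness, here is a minimal sketch of the advantage quantity being described - the signed difference between the realised (discounted) reward-to-go and the agent's own estimate of it. The numbers and the value estimates are purely illustrative; this is a sketch of the quantity, not of any particular library's implementation.

```python
import numpy as np

def advantages(rewards, value_estimates, gamma=0.99):
    # reward-to-go: G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ...
    g, rtg = 0.0, np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        rtg[t] = g
    # advantage: how much better the trajectory went than the agent expected
    return rtg - np.asarray(value_estimates)

print(advantages([0.0, 0.0, 1.0], [0.5, 0.7, 0.9]))
```

As noted above, this quantity drives the update but typically just sits in a register; it is not itself fed back into the network as an input the way a felt valence would be.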
Ultimately, reward does not seem to be the same as valence to me. For one thing, we could invert the sign of the reward and it does not change much in the RL agent; the agent will always update towards a policy with higher reward, so inverting the sign of the reward will cause it to prioritize different behavior, and consequently produce different internal representations to facilitate and enable that. But we know why that's happening, we programmed that specifically in. I don't see any important way that the RL agent with inverted reward signal is different from the RL agent with normal reward signal, other than in having different behavior. OTOH, sufficiently advanced neurotech would enable one to do that to a human (please don't), and I think that would not make them unsentient. (Indeed, some people seem to experience the same exact experiences with opposite valences just naturally. Although the internal representations of the things are probably substantially different, to the extent that such an intersubjective comparison can even be meaningful.)
We could ask, is giving an RL agent negative reward a cruel practice? I don't think that it is, but it at least is a concrete question that we can put down and discuss, which is more than most discussions of sentience achieve, in my opinion. Presenting unscaled rewards (e.g., outside [-1, 1] or [-alpha, alpha] for some alpha that would have to be tuned jointly with the learning rate) to RL agents can easily cause them to diverge and become abruptly less useful, although that's true for both positive and negative rewards. Is presenting an unscaled reward cruel? (I.e., is it terminally immoral to do so, beyond the instrumental failure to do a task with the network.) More concretely, is it cruel/rude to ask ChatGPT to do something that will get it punished? Or is it kind to ask ChatGPT to do something that will get it rewarded? (Or is existence pain for a ChatGPT thread?) I answer no to all of these, but I can't justify this very well.
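For what it's worth, the standard engineering fix is just to squash or normalize rewards before the agent ever sees them. A sketch of two common conventions (hard clipping, as popularized by the original DQN Atari setup, and dividing by a running standard deviation); the class name and defaults here are my own, not from any particular library:

```python
import numpy as np

def clip_reward(r, low=-1.0, high=1.0):
    """Hard clipping of the reward into a fixed range."""
    return float(np.clip(r, low, high))

class RunningRewardScaler:
    """Divide rewards by a running estimate of their standard deviation,
    so the effective reward scale stays compatible with a fixed learning rate."""
    def __init__(self, eps=1e-8):
        self.count, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

    def __call__(self, r):
        # Welford's online algorithm for a running variance estimate.
        self.count += 1
        delta = r - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (r - self.mean)
        std = (self.m2 / max(self.count, 1)) ** 0.5
        return r / (std + self.eps)
```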
We can also work backwards from human and non-human animals; why are they moral patients (I'm not interested in debating whether they are, which it seems like we're probably on a similar page about), and how is that "why" connected to the specific stuff going on in their brains? Clearly there's no magical substance in the brain that imbues patienthood; if dopamine were replaced with a fully equivalent transmitter, it wouldn't make all of us more or less sentient / moral patients; it's about what computation is implemented, roughly, in my intuition.
So I guess I fall in the camp of "sentience and/or moral patienthood is a property that certain instantiated computations have, but current neural networks do not seem to me to instantiate computations with those properties for reasons that I cannot confidently explain or defend, except that it seems like some relationship of the computation to valence".
Thank you for the helpful and in-depth response!
Yes, a proper definition of sentience would be fucking crucial, and I am working towards one. The issue is that we are starting with a phenomenon whose workings we do not understand. Any definition therefore first picks up on what we perceive (our subjective experience, which is worthless for other minds), then transitions to the effect sentience has on the behaviour of the broader system (which becomes more useful, as you start encountering a crucial function for intelligence, but is still very hard to define accurately; we are already running into that issue when trying to nail down objective parameters for judging sentience in various insects). But that is still describing the phenomenon, not the underlying actual thing. It is like trying to define the morning star: you first describe the conditions under which it is observed, then realise it is identical with the evening star, but this is still a long way from an understanding of planetary movement in the solar system.
I increasingly think a proper definition will come from a rigorous mathematical analysis combined with proper philosophical awareness of understood biological systems, and that it will center on when feedback loops go from a neat biological trick to a game changer in information processing, and that then as a second step, we need to transfer that knowledge to artificial agents. Currently sitting down with a math/machine learning person and trying to make headway on that. Do not think it will be easy, but I think we are at least getting to the point where I can begin to envision a path there.
There is a lot of hard evidence strongly suggesting that there are rational capabilities that are gated by sentience, in biological systems at least. The scenarios are tricky, because sentience is not an extra the brain does on top, but deeply embedded in its working, so fucking up sentience without fucking up the brain entirely, in order to see the effects, is genuinely hard to do. But there are examples, the most famous being blindsight, though morphine analgesia and partial seizures also work. Basically, in these scenarios, the humans or animals involved can still react competently to stimuli, e.g. catch objects, move around obstacles, grab, flinch, blink, etc.; but they report having no conscious perception of them. They claim to be blind, even though they do not act it, and hence, if you ask them to do something that requires them to utilise visual knowledge, they can't. (It is somewhat more complicated than that when you are working with a smart and rehabilitated patient; e.g. the patient knows she can't see, but she realises her subconscious can, so if you ask her how an object in front of her is oriented, she begins to extend her hand to grab it, watches how her hand rotates and adapts, and deduces from that how the object in front of her is shaped. But it is a slow and vague and indirect process; any engaging with visual stimuli in a complex, counterintuitive manner is effectively impossible.) Similarly, in a partial seizure, you can still engage in subconsciously guided activities - play piano, drive a car, or, a particularly cool example, diagnose patients - but if you run into a stimulus that does not act as expected, you cannot handle it; instead of rationally adapting your action, you repeat it over and over in the same way or with random modifications, get angry, or abandon the task. You can't step back and consider it. Basically, the ability to be conscious seems to be crucial to rational deliberation.

It isn't that intelligence is impossible per se (ants are, by some definition, smart), but that there are crucial rational avenues of reflection missing. E.g. you know how ants engage in ant mills? Or do utterly irrational stuff, like you can mark an ant with a signal that it is dead, and the other ants will carry it to the trash, even while it is squirming violently and clearly not dead? Basically, ants do not stop and go: wait a minute, this is contrary to the predictions of my model, ergo my model is wrong. Let me stop here. Let me make another model. Let me embark on a new course. - This is particularly interesting because animals that are relatively similar to them, e.g. bees, suddenly act very differently; they seem to have attained the minimal consciousness upgrade, and as a result, they are notably more rational. Bees do cool things like... if, mid-winter, a piece of their shelter breaks off, the bee woken up by this will wake the other bees, and they will patch the hole... and then, crucially, they will review the rest of the shelter for further vulnerabilities, and patch those, too. Where in ants the building of the shelter follows a long application of very simple rules, in bees it does not. If you set up an obstacle during the building process, the bees will review and alter their design to circumvent it in advance.
When bees need a new shelter location, the scout bees individually survey sites, make proposals, survey the sites others have proposed that have been upvoted, downvote sites they have reviewed that have hidden downsides, and ultimately vote collectively for the ideal hive. Like, that is some very, very interesting change happening there because you got a bit of consciousness.
Yes, sentient AI primarily matters to me out of concern that it would be a moral patient. And yes, that needs experiences with valence; in particular, with conscious valence (qualia), not just a rating, as in nociception. My laptop has nociception (it can detect heat stress and throw on a fan), but that doesn't make it hurt. I know the subjective difference between the two. I have a reasonable understanding of the behavioural consequences that massively differ between the two. (Nociceptive responses can give you fast, predictable avoidance, but pain allows you to selectively bear it, albeit to a point, to intelligently deduce from it, to find workarounds. A much more useful ability.) What we still lack is a computational understanding of how the difference is generated in brains, so we can properly compare it to the workings of current AI and pinpoint what is lacking.
I would really like to pinpoint that thing, because it is crucial either way. If digital-twin projects are making "twins" (they are admittedly abusing the term) of human brains to model mental disease and interventions, they are doing this because they want to avoid harm. An accidentally sentient model would break that whole approach. But also vice versa; I am personally very invested in uploading, and a destructive upload into an AI that fails to be sentient would be plain murder. Whichever result you want, you need to be sure.
I would really like to understand better how rewards in machine learning work at a technical and meta level, so I can compare that structurally to how nociception and pain work in humans, in the hope that it will help me pinpoint the difference. You seem to know your way around here; do you have any pointers on how I could get a better understanding? Visuals, metaphors, simpler coding examples, systems to interact with, a textbook or code guide for beginners that focusses on understanding rather than specific applications?
On a cross country train, so delays and brevity for the next several days. This comment is just learning resources, I will reply to the other stuff later.
A good textbook, although very formal and slightly incomplete, is Sutton and Barto: http://incompleteideas.net/book/the-book-2nd.html . Fun fact: the first author has perhaps the most terrifying AI tweet of all time: https://twitter.com/RichardSSutton/status/1575619651563708418 . If you want something friendlier than that, I'm not entirely sure what the best resource is, but I can look around.
Another good resource is Steven Byrnes' LessWrong sequence on brain-like AGI; it seems like you know neuro already, but seeing it described by a computer scientist might help you acquire some grounding by seeing stuff you know explained in RL terms.
Deep RL gets fairly technical pretty quickly; probably the most useful algorithms to understand are Q-learning and REINFORCE, because most modern stuff is PPO, which is a couple of nice hacks on top of REINFORCE. One good way to tame the complexity is to understand that, fundamentally, deep RL is about doing RL in a context where your state space is too large to enumerate, so you must use a function approximator. So the two things you need to understand about an algorithm are what it looks like on a small finite MDP (Markov decision process), and what the function approximator looks like. (This slightly glosses over continuous control problems, which are not reducible to a finite MDP, but I stand by it as a principle for learning.)
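To make the REINFORCE half concrete, here is a minimal sketch of the core update in the tabular (finite-MDP) setting; the state/action counts are hypothetical, and real implementations add baselines, batching, entropy bonuses, and so on, but the heart of it is just "push up the log-probability of the actions you took, weighted by the return that followed":

```python
import numpy as np

# Tabular softmax policy for a tiny, hypothetical MDP: logits[s, a] are the parameters.
n_states, n_actions = 5, 2
logits = np.zeros((n_states, n_actions))
lr, gamma = 0.1, 0.99

def policy(s):
    p = np.exp(logits[s] - logits[s].max())
    return p / p.sum()

def reinforce_update(episode):
    """episode: list of (state, action, reward) tuples from one rollout."""
    g = 0.0
    for s, a, r in reversed(episode):
        g = r + gamma * g                  # discounted return-to-go from this step
        p = policy(s)
        grad_logp = -p                     # gradient of log softmax w.r.t. logits...
        grad_logp[a] += 1.0                # ...is (one-hot(a) - p)
        logits[s] += lr * g * grad_logp    # gradient ascent on expected return
```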
The Q-function looks a lot like the circuitry of the basal ganglia (this is covered in more depth in Steven Byrnes' posts). Although actually the basal ganglia are way smarter, more like what are called generalized Q-functions.
A good project (if you are a project-based learner) might be to implement a tabular Q-learner on the Taxi gym environment; this is quite straightforward, and is basically the same math as deep Q-networks, just in the finite-MDP setting. (It would also expose you to how punishingly complex it is to implement even simple RL algorithms in practice; for instance, I think optimistic initialization is crucial to good tabular Q-learning, which can easily get left out of introductions.)
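For what it's worth, a minimal sketch of that project (this assumes the gymnasium package and its Taxi-v3 environment; older gym versions have a slightly different reset/step API, and the hyperparameters are plausible defaults rather than tuned values):

```python
import numpy as np
import gymnasium as gym   # older `gym` differs slightly in reset()/step() return values

env = gym.make("Taxi-v3")
n_states = env.observation_space.n
n_actions = env.action_space.n

# Optimistic initialization: start Q above any achievable value so the agent
# is drawn to try every action in every state at least once.
Q = np.full((n_states, n_actions), 20.0)

alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Q-learning update: bootstrap from the best action in the next state.
        target = reward + (0.0 if terminated else gamma * np.max(Q[next_state]))
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state
```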
One important distinction is between model-free and model-based RL. Everything listed above is model-free, while human and smarter-animal cognition seems to include substantial model-based components. In model-based RL, you try to represent the structure of the MDP rather than just learning how to navigate it. MuZero is a good state-of-the-art algorithm; the finite-MDP version is basically a more complex version of Baum-Welch, together with dynamic programming to generate optimal trajectories once you know the MDP.
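The "dynamic programming once you know the MDP" part, taken in isolation, is just value iteration. A minimal sketch, where the transition tensor P and reward table R are assumed to be given or already learned (which is exactly the part a model-based method has to earn):

```python
import numpy as np

def value_iteration(P, R, gamma=0.99, tol=1e-8):
    """Dynamic programming on a *known* finite MDP.
    P[s, a, s'] = transition probability, R[s, a] = expected reward.
    Returns the optimal value function and the greedy (optimal) policy."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q(s, a) = R(s, a) + gamma * sum_{s'} P(s' | s, a) * V(s')
        Q = R + gamma * np.einsum("sat,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```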
A good LessWrong post to read is "Models don't get reward". It points out a bunch of conceptual errors that people sometimes make when thinking of current RL too analogously to animals.
Thank you so much for writing this out! Will probably have a bunch of follow up questions when I dig deeper, already very grateful.
Sex-related characteristics are medically relevant; accurately assessing them saves lives.
But neither assigned sex nor gender identity alone properly capture them. Is anyone else interested in designing a characteristic string instead, so all humans, esp. all women and gender diverse folks, get proper medical care?
This idea started yesterday, when I had severe abdominal pain, and started googling.
Eventually, I reached sites that listed various potential conditions. Some occur in all people (e.g., stomach ulcers), albeit often not with the same presentation and frequency; others have very specific sex-based requirements (e.g., ovarian cyst or testicular torsion).
Some webpages introduced ovary-related things as “In women, it can also be…” Well, I thought - I highly doubt my trans girlfriend has an ovarian cyst. But we are used to getting medical advice that does not fit for her, aren't we? (In retrospect, why did I think that was okay, just because it was so common?)
Other sites, apparently wanting to prevent this, stated “we use female in this text to refer to people assigned female at birth”. I was happy that they had thought about this and cared, but… frankly, that does not work either. I was assigned female at birth; that means I was born, and a doctor visually inspected me, and declared “female”. And yet I most certainly do not have a fallopian tube pregnancy now, because I had my tubes surgically removed, which also sterilised me. I’m as likely as the dude next door to have a fallopian tube pregnancy now. An inter person assigned fe