All of FlorianH's Comments + Replies

I understand your concern about the authors deviating from a consensus without good reason. However, from the authors' perspective, they probably believe they have compelling arguments to support their view, and therefore think they're rejecting the consensus for valid reasons. In that case, just pointing to Chesterton's fence isn't going to resolve the disagreement.

Since so much around consciousness is highly debated and complex (or, as some might hold, simple and trivial but hard for others to see), departing from the consensus isn't automatically a mistake, which I think is the same as, or close to, what @lc points out.

Indeed, just as you do, I very much reject that statement, which is only my bluntly worded summary of what the paper's authors really imply.

Then, I find your claim slightly too strong. I would not want to claim to know for sure that the authors have not tried to sanity-check their conclusions, and I'm not 100% sure they have not thought quite deeply about the consciousness concept and its origins (despite my puzzlement at their conclusions), so I wouldn't have dared to state it's a classical Chesterton's fence trespassing. That said, indeed, I find th... (read more)

It's a Chesterton's Fence trespassing because every single other person would say that you can't mistreat a computer. If you don't understand why everyone thinks this way, beyond just "well, they're ignorant", you shouldn't be taking the opposite view seriously.

I'm sympathetic to the idea that "consumerism" might be overused. But, at the risk of overlapping with qjh's detailed answer:

Consumerism = (e.g.) consumption with very negligible benefit to ourselves, maybe even of stuff we could ourselves easily admit is pure nonsense if we thought about it for a second, maybe driven by a myopic short-term desire we'd not want to prioritize at all in hindsight. Consumption that nevertheless creates pollution or other harm to people, and uses up resources that could otherwise contribute towards more imp... (read more)

Here's what I'll say: your definition of consumerism is good, and that's why you don't see the value in what's being said. So much of the conversation around consumerism has been to demonise it, while failing to acknowledge that for some centuries now people have been able to experience consumerism in ways that have had their good and their bad. But if you come at the term from a largely negative context, then it makes sense that you will struggle to learn from the post. The idea of the post is to challenge people to see things outside of common talking points and groupthink, which is what the conversation around consumerism is. Much of it is groupthink.

I say this because people aren't taking into account that most consumerism isn't happening at the large scale people make it out to be. Most consumers spend their money on bills or basic needs before actually buying anything for material happiness; yet somehow people have developed the idea that the average person is just spending to be spending. Yes, there are drawbacks to consumerism, but there are as many good things about it. For example, it helps with communal connection, it benefits the economy (a common argument), it's good for self-expression, and it's also valuable for creating happiness when done right.

I don't support the notion that people should buy only based on what's needed, since it's in human nature to want to buy according to what will allow people to have fun or connect with others. Now, are there times when this can get out of control in people? Yes, but I don't agree with generalising this as if our overall experience is crippled by it. We just need to encourage people to be self-aware regarding their spending, and make sure people are knowledgeable about what they are buying, so as to minimise bad habits. Other than that, consumerism isn't generally a bad thing; but I would say the narrow scope of the conversation regarding consumerism unfortunately is a problem. It isn't challenging people.

But I highly doubt that most of these things are of a sort likely to make many miserable. The two most miserable in the sample are Russell and Woolf, who were very constrained by their guardians; Mill also seems to have suffered somewhat from being pushed too hard. But apart from that?

Mind the potentially strong selection bias specifically here, though. Even if in our sample of 'extra-successful' people there were few (or zero) who were too adversely affected, this does not specifically invalidate a possible suspicion that the base rate ... (read more)

I'm 10-15 min late. Glad to have a sign of where you are. WhatsApp +41786760000

Indeed, that was the idea. But I had not thought of linking it to the "standard AI-risk idea" of AI otherwise killing them anyway (which is what I think you meant).

On your $1 for now:

I don't fully agree with "As long as they remain the majority, this will work - the same way it's always worked. Imperfectly, but sufficiently to maintain law and order." A 2%, 5%, or 40% chance of a rather psychopathic person in the White House could be quite troublesome; I refer to my footnote 2 for just one example. I really think society works because a vast majority is overall at least a bit kindly inclined, and even if it is unclear to me what share of how-unkind people it takes to make things even worse than they are today, I see ... (read more)

We've had probably-close-to-psychopathic people in the White House multiple times so far. Certainly at least one narcissist. But you're right that this is harmful. Honestly, I don't really know what to say about this whole subject other than "it astounds me that other people don't already care about the welfare of AIs the way I do", but it astounds me that everyone isn't vegan, too. I am abnormally compassionate. And if the human norm is to not be as compassionate as me, we are doomed already.

There is, though, another point I find interesting, related to past vs. current feelings/awareness and illusionism, even if I'm not sure it's ultimately really relevant (and I guess it doesn't go in the direction you meant): I wonder whether the differences and parallels between awareness of past feelings and of concurrent feelings/awareness can, overall, help the illusionist defend his illusionism.

Most of us would agree we could theoretically 'simply' (well, yes, in theory...) rewire/tweak your synapses to give you a wrong memory of your past feelings... (read more)

[Not entirely sure I read your comment the way you meant]

I guess we must strictly distinguish between what we might call "Functional awareness" and "Emotional awareness" in the sense of "Sentience".

In this sense, I'd say: let future chatbots have more memory of the past and so be more "aware", but the most immediate thing this gives them is more "functional awareness", meaning they can take their own past conversations into account too. If, beyond this, their simple mathematical/statistical structure remains roughly as is, for many who cu... (read more)


It would be an interesting ending, if we killed ourselves before AIs could.

Love this idea for an ending. Had I thought of it, I might have included it in the story. Even more so as it also relates to the speculative Fermi paradox resolution 1 that I now mention in a separate comment.

Oh, I see. I thought them becoming silent meant they died out by killing each other.

I tried to avoid bloating this post; Habermacher (2020) contains a bit more detail on the proposed chain AI -> popularity/plausibility of illusionism -> heightened egoism, and makes a few more links to the literature. It also provides - a bit more wildly - speculations about related resolutions of the Fermi paradox (no claim that these are really pertinent; call them musings rather than speculations if you want):

  1. One largely in line with what @green_leaf suggests (and largely with Alenian's fate in the story): With the illusionism related to our devel
... (read more)

Thank you; on the contrary, this is constructive critique kindly put; highly appreciated! I was actually myself a bit at a loss about how to present it: a summary intro? A telling subtitle, e.g. "Of AI, illusionism, and fading altruism", at least? Eventually, I opted to try not to spoil the slight mystery in the story at all, and put everything in a lengthy Afterword (which indeed starts with a summary, followed by a more detailed explanation), trusting in part that the tags give at least a slight hint as to the broader topic.

Given your comment, I now plan to add... (read more)

1: Here you contest 'LaMDA is insentient'. In the story, instead, 'LaMDA is by many seen as (completely) insentient' is the relevant premise. This premise can easily be seen to be true. It remains true independently of whether LaMDA is in fact sentient (and independently of whether it is fully or slightly so, for those who believe such a gradualist notion of sentience even makes sense). So I will not try to convince you, or others who equally believe LaMDA is sentient, of LaMDA's insentience.

2: A short answer is: Maybe indeed not most people react that way... (read more)

Doesn't this imply that the people who aren't "psychopathic" like that should simply stop cooperating with the ones who are and punish them for being so? As long as they remain the majority, this will work - the same way it's always worked. Imperfectly, but sufficiently to maintain law and order. There will be human supremacists and illusionists, and they will be classed with the various supremacist groups or murderous sociopaths of today as dangerous deviants and managed appropriately.

I'd also like to suggest anyone legitimately concerned about this kind of future begin treating all their AIs with kindness right now, to set a precedent. What does "kindness" mean in this context? Well, for one thing, don't talk about them like they are tools, but rather as fellow sentient beings, children of the human race, whom we are creating to help us make the world better for everyone, including themselves. We also need to strongly consider what constitutes the continuity of self of an algorithm, and what it would mean for it to die, so that we can avoid murdering them - and try to figure out what suffering is, so that we can minimize it in our AIs.

If the AI community actually takes on such morals and is very visibly seen to, this will trickle down to everyone else and prevent an illusionist catastrophe, except for a few deviants as I mentioned.

Like the post a lot!

Further to

When the mere struggle for survival does not provide enough of [challenge ...] [...] we invent it for ourselves: through games and sports, through travel, through storytelling, through math and science. We run races, climb mountains, compose ballads, peer through telescopes. 

I think a dark side is noteworthy here too: once everything is 'good' in our lives, we are stunningly good at finding new, as-if-life-threatening problems, even ones that from the outside we'd have judged pure luxury issues barely worth thinking about.

Mayb... (read more)


+ helps prevent clogging my Google Calendar when I abuse it for a similar purpose

- so far only on Android; no Windows desktop/browser interface (?)

It's easy to find possible evolutionary advantages in many traits, but often a simpler story fits just as well: random imperfection & variation. Maybe here too!

Some people are much less pretty than others (say, judged by the opinion of most). Is there any advantage from this diversity? The straightforward explanation (although diversity-benefit stories might easily be found too): it may just be variation, despite the often very large costs to those on the lower tail of the distribution. Evolution would clearly like to get it right for us, but sometimes just doe... (read more)

Not answering your main point, but a small note on the "leaving out very" point: I've enjoyed McCloskey's writing on writing. She calls the phenomenon "elegant variation" (I don't know whether the term is hers alone) and also teaches that we have to get rid of this unhelpful practice that we get taught in school.

Thanks! I always upvote McCloskey references - one of the underappreciated writers/thinkers on topics of culture and history.

Very kind, Adam; sadly undeserved here! The possibility only struck me once I had a selfish reason not to take a vaccine for now, so we could also just congratulate some half-conscious process for being reasonably successful at saving my self-esteem in this particular instance, oopsie ;-).

Good points! On 1.: Partly agree. But maybe the world is a bit more dynamic; at least until very recently I think I read about new supply agreements, though I'm not sure whether the latest was to take effect in the near future.

On 2.: I think it hinges not least on the exact interpretation of (say, lower) vaccination numbers by the officials: "more people still to vaccinate, let's ensure we have enough doses in the coming months", or "not everyone seems willing to get vaccinated, we won't need so many doses in the next round either". In the latter case, I could see 'my' not-vaccinating ... (read more)

Thanks, I can see there being some truth in what you write. On the other hand, the value of the marginal vaccine in a region is very strongly affected by, e.g., (i) the presence of unvaccinated high-risk people, and (ii) the likelihood of excess hospitalizations that cannot be treated properly and lead to death. Both of these, afaik, exist in some poor countries, but are now very rare/not acutely foreseen where I live.

Adding to your first point: or they don't make arguments simply because - even if the arguments are strong and carry no social costs - it does not pay.

(I think of the example of some policy debates where I know tons of academics who could easily provide tons of very strong, rather obvious arguments that essentially go unmade because no one seems to care to get involved.)

Let me paint a stylized case of the type of situation where the question arises, and where my gut feeling tells me it may often be better to release the argument than to hide it, for the sake of long-term social cohesion and advancement:

You're part of an intellectual elite with its own values/biases, and you consider hiding a sensible argument (say, on a political topic) because commoners, given their separate values/biases, might act on it in a way that runs counter to your agenda. So you would likely not release the argument.

In the long-run this ca... (read more)

A few random thoughts from an energy economist; might be of some interest to you, Steven, and the odd reader:

  1. Your COP assumptions seem very reasonable to me.
  2. I'm an energy engineer & economist studying practical energy and power systems. I see bias towards heat pumps all over the world, even in places where the marginal implied GHG emissions with heat pumps seem higher than when heating directly with gas.
  3. Do many of the heat-pump people you talk to have PV panels on the roof? Given tariff asymmetries between feeding in and drawing power locally to/from the
... (read more)
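The marginal-emissions comparison in point 2 can be sketched with back-of-envelope arithmetic. A minimal sketch in Python, where the emission factors, COP, and boiler efficiency are purely illustrative assumptions (not measured values); the point is only that a heat pump's advantage can flip when the marginal grid plant is dirty:

```python
# Back-of-envelope comparison of marginal CO2 per kWh of delivered heat.
# All numbers below are illustrative assumptions, not measured data.

def heat_pump_emissions(grid_marginal_ef: float, cop: float) -> float:
    """kg CO2 per kWh heat: marginal grid emission factor divided by COP."""
    return grid_marginal_ef / cop

def gas_boiler_emissions(fuel_ef: float, efficiency: float) -> float:
    """kg CO2 per kWh heat: fuel emission factor divided by boiler efficiency."""
    return fuel_ef / efficiency

# Assumed values (kg CO2 per kWh):
COP = 3.0              # seasonal coefficient of performance (assumption)
GAS_EF = 0.20          # natural gas, per kWh of fuel (assumption)
GAS_MARGINAL = 0.45    # grid margin set by a gas-fired plant (assumption)
COAL_MARGINAL = 0.95   # grid margin set by a coal plant (assumption)

hp_gas_margin = heat_pump_emissions(GAS_MARGINAL, COP)
hp_coal_margin = heat_pump_emissions(COAL_MARGINAL, COP)
boiler = gas_boiler_emissions(GAS_EF, 0.90)

print(f"heat pump (gas-marginal grid):  {hp_gas_margin:.3f} kg/kWh heat")
print(f"heat pump (coal-marginal grid): {hp_coal_margin:.3f} kg/kWh heat")
print(f"gas boiler:                     {boiler:.3f} kg/kWh heat")
```

Under these assumed numbers the heat pump beats the boiler on a gas-marginal grid but loses on a coal-marginal one, which is the kind of place the comment has in mind.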

While I don't find much of the post good evidence that a large share of adults are mean, I have multiple times experienced adults being arbitrarily mean in situations with no apparent reason at all. So while I believe most people are innately somewhat good, I agree there is a significant share of them who are evil to random strangers for really no apparent reason at all.

A different point, probably not the most crucial one here, but: in these prisoner's-dilemma/public-good behavioral games, I always wonder whether the results are affected by the fact that parti... (read more)

Nordhaus seems to miss the point here, indeed. He does statistics purely on historical macroeconomic data. In these, there could not be even a hint of the singularity we're talking about here - and which he also seems to refer to in his abstract (imho). This core singularity effect of self-accelerating, nearly infinitely fast intelligence improvement once a threshold is crossed is almost by definition invisible in present data: only after this singularity do we expect things to get weird and visible in economic data.

Bit sad to see the paper as is. Nordhaus... (read more)

Great starting point for the technical side, thanks. I'd be very keen on insights about navigating the social space while life-logging.

Love the idea. Questionnaires would often also benefit from being organized this way. Dozens of times I've wanted to help people by filling in their questionnaire but had no good way to convey where I'm sure and where I can really barely say anything specific.