FlorianH


I'm sympathetic to the idea that "Consumerism" might be an overused term. But, at the risk of overlapping with qjh's detailed answer:

Consumerism = (for example) consuming stuff with very negligible benefit to ourselves - maybe even stuff we could easily admit is pretty much pure nonsense if we thought about it for a second, maybe driven by a myopic short-term desire we ourselves would not want to prioritize at all in hindsight. Consumption that nevertheless creates pollution or other harm to people, and uses up resources that could otherwise go towards more important aims - resources we could so easily have used to help poorer people. Things along those lines. And I have the impression such types of consumption are not rare - and they remain as sad a part of society after this post as ever before, no?

So I struggle to see what to learn from this post.

But I doubt highly that most of the things are of a sort that is likely to lead many to be miserable. The two who are the most miserable in the sample are Russell and Woolf who were very constrained by their guardians; Mill also seems to have taken some toll by being pushed too hard. But apart from that?

Mind the potentially strong selection bias here, though. Even if, in our sample of 'extra-successful' people, few (or none) were badly affected, that does little to rule out the suspicion that the base rate of bad outcomes from the treatment is very high - if those with bad outcomes have only a small chance of ever becoming famous. A toy calculation below illustrates this.
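To make the selection effect concrete, here is a toy calculation with entirely made-up numbers (nothing taken from the post): suppose the 'treatment' harms 80% of those exposed, harmed people have a 1% chance of becoming famous, and unharmed ones a 20% chance. Then among the famous we would see only

```latex
\[
P(\text{harmed}\mid\text{famous})
  = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.2 \times 0.2}
  = \frac{0.008}{0.048} \approx 0.17 ,
\]
```

i.e. roughly one in six harmed, even though the underlying base rate of harm is 80%.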

(This does not mean I disagree with your conclusions in general in any way; nice post!)

I'm 10-15 min late. Would be glad for a sign of where you are. WhatsApp +41786760000

Indeed, that was the idea. But I had not thought of linking it to the "standard AI-risk idea" of AI otherwise killing them anyway (which is what I think you meant).

On your $1 for now:

I don't fully agree with "As long as they remain the majority, this will work - the same way it's always worked. Imperfectly, but sufficiently to maintain law and order." A 2%, 5%, or 40% chance of a rather psychopathic person in the White House could be quite troublesome; I refer to my Footnote 2 for just one example. I really think society works because a vast majority is overall at least a bit kindly inclined, and even if it is unclear what share of unkind people - and how unkind - it would take to make things even worse than they are today, I see any reduction in our already too often too limited kindness as a serious risk.

More generally, I'm at least very skeptical about your "it's always worked" at a time when many of us agree that, as a society, we're running at rather full speed towards multiple abysses, with little standing in the way of our reaching them.

There is, though, another point I find interesting, related to past vs. current feelings/awareness and illusionism, even if I'm not sure it's ultimately really relevant (and I guess it does not go in the direction you meant): I wonder whether the differences and parallels between awareness of past feelings and of concurrent feelings can, overall, help the illusionist defend his illusionism:

Most of us would agree that we could theoretically 'simply' (well yes, in theory..) rewire/tweak your synapses to give you a wrong memory of your past feelings. We could tweak your brain's memory so that you believe you have had experiences with certain feelings in the past, even if that past experience never took place.

If we defend our current view of having had certain past feelings as much as we tend to defend that we now have the sentience/feelings we feel we have, this is interesting: we then have two categories of insight into our qualia (the past and the current ones) that we are equally willing to defend, all while knowing some of them could have been purely fabricated and never existed.

Do we defend our knowledge of having had past feelings/sentience just as strongly as we defend our knowledge of concurrent feelings/sentience? I'm not sure.

Clearly, being aware of the rewiring possibility described above, we'd readily say: OK, I might be wrong. More relevant could be whether, say, historic humans without awareness of their brain structure, of neurons, etc. (and thus without the rewiring possibility in mind) would not have insisted just as much that their knowledge of having felt past feelings is just as infallible as their knowledge of their current feelings. So far I see this as some sort of support for the possibility of illusionism despite our outrage against it, though I'm not yet sure it's really watertight.

If the first paragraph of your comment were entirely true, this could make that line of pro-illusionist argumentation even simpler in theory (though I'm personally not entirely sure your first paragraph really can be stated as simply as that).

[Not entirely sure I read your comment the way you meant]

I guess we must strictly distinguish between what we might call "Functional awareness" and "Emotional awareness" in the sense of "Sentience".

In this sense, I'd say: let future chatbots have more memory of the past and so be more "aware", but the most immediate thing this gives them is more "Functional awareness", meaning they can also take their own past conversations into account (a toy sketch below). If, beyond this, their simple mathematical/statistical structure remains roughly as it is, then for many who currently deny LaMDA sentience there is no immediate reason to believe the new, memory-enhanced bot is sentient. But yes, it might seem much more like it when we interact with it.
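To make that distinction a bit more tangible, here is a purely illustrative sketch (all names hypothetical, not any real system's architecture): the only thing "functional awareness" adds here is that the bot conditions on its stored past exchanges; the generating mechanism itself stays as simple as before.

```python
# Purely illustrative toy sketch - not any real chatbot's architecture.
# "Functional awareness" here just means: the bot carries its past exchanges
# forward as extra context; the generating mechanism itself is unchanged.

class MemoryAugmentedBot:
    def __init__(self):
        self.memory = []  # past exchanges, kept across turns

    def respond(self, user_message):
        # The underlying model conditions on the stored history plus the new message.
        context = self.memory + [f"User: {user_message}"]
        reply = self._generate(context)
        # Remember both sides of the exchange so later turns can refer back to it.
        self.memory.append(f"User: {user_message}")
        self.memory.append(f"Bot: {reply}")
        return reply

    def _generate(self, context):
        # Stand-in for the unchanged statistical model; it merely reports how much
        # remembered context it is conditioning on.
        n_user_turns = sum(1 for line in context if line.startswith("User:"))
        return f"(reply conditioned on {n_user_turns} user turn(s) of remembered context)"


bot = MemoryAugmentedBot()
print(bot.respond("Hello"))                                # conditions on 1 user turn
print(bot.respond("Do you remember what I said before?"))  # conditions on 2 user turns
```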

Thanks!

It would be an interesting ending, if we killed ourselves before AIs could.

Love this idea for a closing. Had I thought of it, I might have included it in the story - even more so as it also relates to the speculative Fermi Paradox resolution 1 that I now mention in a separate comment.

I tried to avoid bloating this post; Habermacher (2020) contains a bit more detail on the proposed chain AI -> popularity/plausibility of illusionism -> heightened egoism, and makes a few more links to the literature. It also provides - a bit more wildly - speculations about related resolutions of the Fermi paradox (no claim that these are really pertinent; call them musings rather than speculations if you want):

  1. One largely in line with what @green_leaf suggests (and largely with Alenian's fate in the story): with the illusionism that comes with our development of & knowledge about advanced AI, we kill ourselves (or send ourselves back to the stone age) even before we can build smarter-than-us, independently evolving AGI.
  2. Without illusionism (and a related upholding of altruism), we cannot even develop high enough intelligence without becoming too lethal to one another to sustain peaceful co-living & collaboration. Hence, advanced intelligence is even less likely than one could otherwise think, as more 'basic' creatures who become more intelligent (without illusionism) cannot collaborate so well; they're too dangerous to each other!
    1. There is some link here to evolutionary biology, parts of which maintain that broad (non-kin) altruism is itself not evolutionarily stable in many environments; but maybe independently of that, one can ask: what about a species that had generally altruistic instincts, but which evolves to be highly dominated by an abstract mind able to call all sorts of instincts into question - and that might then also relativize its own altruistic instinct, unless there is something very special directly telling its abstract mind that kindness is important...
    2. Afaik, in most species an individual cannot effortlessly kill a peer (?); humans (spears etc.) arguably can. Without genuine mutual kindness, i.e. in a tribe of rather psychopathic peers, it would often have been particularly unpleasant to fall asleep as a human.
    3. Admittedly, this entire theory would help resolve the Fermi Paradox mostly on a rather abstract level: conditional on the observation that we did evolve to be intelligent in due time despite the point made here, the probability of advanced intelligence evolving on other planets need not necessarily be affected by this reflection.