Comments

Unconvinced. Bottom line seems to be an equation of Personal Care with Moral Worth.

But I don't see how the text really supports that: the fact that we feel more attached to entities we interact with doesn't inherently elevate their sentience, i.e. their objective moral worth.

Example: Our lesser emotional attachment or greater physical distance to chickens in factory farms does not diminish their sentience or moral worth, I'd think. The same goes for (future) AIs.

At best I could see this equation more or less working out in a perfectly illusionist reality, where there is no objective moral relevance. But then I'd rather not invoke the concept of moral relevance at all; instead we'd have to make do with mere subjective care as the only thing there might be.

This page also provides a neat summary of Zulip's advantages, mostly pointing in a similar direction as here: https://stackshare.io/stackups/slack-vs-zulip

Interesting and good to hear, as I was thinking of using it for a class too (I'm also surprised: I don't remember the slightest hint of counter-intuitiveness when I used Zulip and its threads myself).

This discussion could be made more fruitful by distinguishing between phenomenal consciousness (sentience) and access/reflective consciousness ('independent cognition' in the author's terminology). The article mainly addresses the latter, which narrows its ethical implications for AI.

"[ChatGPT] would therefore be conscious by most definitions" should be caveated; the presence of advanced cognition may (arguably) be convincingly attributed to ChatGPT by the article, but this does not hold for the other, ethically interesting phenomenal consciousness, involving subjective experience.

Maybe "I'm interested in the hypothesis/possibility..."

A file explorer where I don't have to type all of D:/F1/F2/F3/F4/X to get/open folder or file X; instead I type (part of) F2 and F4 and it immediately (yes, with indexing etc.) offers me X as a result (and maybe the few other items that fit the pattern).

If I have an insane number of subfolders/files, maybe it prioritizes indexing those with recent or regular access.

Extension: a version on steroids might even index file contents and weight my most commonly searched words, though maybe that's a bit too extravagant.

Useful in traditional file structures, as we then type/think/remember less. Plus, it might encourage a step towards more tag-based file organization, which I feel might be useful more generally, though that's just a potential side effect and not the basic aim.
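For concreteness, here is a minimal sketch of the core matching idea in Python. All names here are hypothetical and mine, not any existing tool's API: build a path index once, then match query fragments, in order, against path components.

```python
import os

def build_index(root):
    """Walk the tree once and collect full paths.
    A real tool would persist and incrementally update this index,
    and could prioritize recently/regularly accessed entries."""
    paths = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            paths.append(os.path.join(dirpath, name))
    return paths

def matches(tokens, path):
    """True if each query token appears, in order, as a substring of
    successive path components (e.g. 'F2 F4' matches D:/F1/F2/F3/F4/X)."""
    components = path.lower().split(os.sep)
    i = 0
    for token in tokens:
        token = token.lower()
        # Scan forward for the next component containing this token.
        while i < len(components) and token not in components[i]:
            i += 1
        if i >= len(components):
            return False
        i += 1  # the next token must match a later component
    return True

def search(index, query, limit=10):
    """Return up to `limit` indexed paths matching the space-separated query."""
    tokens = query.split()
    return [p for p in index if matches(tokens, p)][:limit]

# Usage: index a directory tree, then type fragments of two folder names.
index = build_index(os.path.expanduser("~"))
for hit in search(index, "F2 F4"):
    print(hit)
```

Ranking by access recency/frequency, persisting the index, and full-text indexing would layer on top of this; the ordered-fragment match is the essential trick.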

Love the post. One relevant possibility that I think would be worth considering w.r.t. the discussion about human paperclippers/inter-human compatibility:

Humans may not be as misaligned as our highly idiosyncratic value theories might suggest, if a typical individual's value theory is really mainly what she uses to justify/explain/rationalize her present-day local intuitions & actions, without really driving those actions as much as we think. A fooming human might then be more likely to simply update her theories to again become compatible with what her more basic underlying intuitions dictate. So the basic instincts that today often give us reasonably compatible practical aims might still keep us more compatible than our individual explicit and more abstract value theories, i.e. rationalizations, would seem to suggest.

I think there are some observations that might point in that direction. Give humans a new technology, and initially some will call it the devil's tool to be abstained from, but ultimately we all converge on using it, updating our theories and beliefs.

Survey people on whether it's okay to actively put one life in acute danger for the sake of saving 10, and they will strongly diverge on the topic, based on their abstract value theories. Put them in the corresponding leadership position, where that moral question becomes a regular real choice that has to be made, and you might observe them act much more homogeneously, according to the more fundamental and pragmatic instincts.

(I think Jonathan Haidt's The Righteous Mind would also support some of this.)

In this case, the Yudkowskian AI alignment challenge may keep a bit more of its specialness in comparison to the human paperclipper challenge.

You had suggested that the issues with free-riding/insufficient contributions to public goods might not be much of a problem. The linked post suggests otherwise, as it beautifully highlights some of the horrors that come from these issues. Its point is that even if humans are not all bad in themselves, within larger societies strong incentives tend to arise for the individual to act against the interest of society.
