Yeah, making a program claim to be sentient is trivially easy.
printf("I am sentient\n");
Yeah, I basically see this episode as anti-science propaganda.
The "friendship lesson" basically says "make-believe is a good thing and should be respected".
Either that, or accepting the "supernatural" as such without further inquiry. Because it's by definition beyond the realm of science, duh.
(Whether it's intentional anti-science-propaganda is another question)
You could use the "zombie argument" to "prove" that any kind of machine is more than the sum of its parts.
For example, imagine a "zombie car" which is the same on an atom-by-atom basis as a normal car, except it doesn't drive.
In this context, the absurdity of the zombie argument should be more obvious.
EDIT: OK, it isn't quite the same kind of argument, since the car wouldn't behave exactly the same, but it's pretty similar.
EDIT2: Another example to illustrate the absurdity of the zombie argument:
You could imagine an alternative world that's exactly the same as ours, except humans (who are also exactly the same as in our world) don't perceive light with a wavelength of 700 nanometers as red. This "proves" that there is more to redness than the wavelength of light.
"Regarding the first question: evolution hasn’t made great pleasure as accessible to us as it has made pain. Fitness advantages from things like a good meal accumulate slowly but a single injury can drop one’s fitness to zero, so the pain of an injury is felt stronger than the joy of pizza. But even pizza, though quite an achievement, is far from the greatest pleasure imaginable.
Humankind has only recently begun exploring the landscape of bliss, compared to our long evolutionary history of pain. If you can’t imagine a pleasure great enough to make the trade-off worthwhile, consider that you may be falling prey to the availability heuristic. Pain is a lot more plentiful and salient, but it’s not a lot more important. The fact that pleasure is rare should only make it more valuable when offsetting pain, and an hour is a lot longer than 5 minutes."
What makes you think there's a symmetry where the greatest pleasure imaginable is exactly as good as the greatest suffering imaginable is bad (that's at least what I think you think)? I think there's an asymmetry insofar as truly great suffering is hard to outweigh with great happiness. However, since no finite suffering can be infinitely bad, there has to be some amount of pleasure that outweighs 5 minutes of the greatest suffering imaginable, but I don't think 1 hour of the greatest pleasure is enough. Something like 1,000,000 years may be enough.
EDIT: 1,000,000 years might be over-the-top. Assuming 100 years of greatest pleasure outweigh 5 seconds of greatest suffering, 6,000 years of greatest pleasure should be enough.
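The arithmetic behind that revised figure, assuming (as the EDIT does) that the trade-off scales linearly with duration, is just:

```python
# Assumption from the EDIT: 100 years of greatest pleasure
# offsets 5 seconds of greatest suffering, scaling linearly.
years_per_chunk = 100          # pleasure-years per 5-second chunk of suffering
chunks = (5 * 60) / 5          # 5 minutes = 60 five-second chunks
print(years_per_chunk * chunks)  # 6000.0 years
```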
"Taking seriously the position that life is not worth living should lead one to a philosophy of extinctionism – the stance that it would be pretty great if all humans died in their sleep tonight."
If you subscribe to timeless decision theory, you may still be against extinctionism even if you think life is net-negative, because if people expected to die painlessly in their sleep, they would be absolutely terrified, and this would be bad.
If I understand correctly, you may also reach your position without using a non-causal decision theory if you mix utilitarianism with the deontological constraint of being honest (or at least meta-honest [see https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases]) about the moral decisions you would make.
If people asked you whether you would kill/did kill a patient, and you couldn't confidently say "No" (because of the deontological constraint of (meta-)honesty), that would be pretty bad, so you must not kill the patient.
EDIT: honesty must mean keeping promises (to a reasonable degree -- it is always possible that something unexpected happens which you didn't even consider as an improbable possibility when making the promise) to avoid Parfit's Hitchhiker-like problems.
Slightly modified version:
Instead of choosing at once whether you want to take one box or both boxes, you first take box 1 (and see whether it contains $0 or $1,000,000), and then you decide whether you want to also take box 2.
Assume that you only care about the money, you don't care about doing the opposite of what Omega predicted.
slightly related:
Suppose Omega forces you to choose a number 0&lt;p&lt;=1, and then, with probability p, you get tortured for 1/(p²) seconds.
Assume for any T, being tortured for 2T seconds is exactly twice as bad as being tortured for T seconds.
Also assume that your memory gets erased afterwards (this is to make sure there won't be additional suffering from something like PTSD).
The expected number of seconds of torture is p * 1/(p²) = 1/p, so, in terms of expected value, you should choose p=1 and be tortured for 1 second. The smaller the p you choose, the higher the expected number of seconds of torture.
Would you actually choose p=1 to minimize the expected torture, or would you rather choose a very low p (like 1/3^^^^3)?
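The trade-off above can be sketched in a few lines of Python (`expected_torture_seconds` is just an illustrative name for the formula from the setup):

```python
def expected_torture_seconds(p):
    # With probability p you are tortured for 1/p**2 seconds, else 0 seconds,
    # so the expectation is p * (1/p**2) = 1/p.
    return p * (1 / p**2)

# A smaller p means a less likely but disproportionately longer torture,
# so the expected duration grows as p shrinks:
print(expected_torture_seconds(1.0))   # 1.0 second
print(expected_torture_seconds(0.01))  # 100.0 seconds
```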
I think this could be considered one of the very basics of rational thinking. Like, if someone asked what rationality/being rational means and wanted a short answer, this Litany is a pretty good summary.
I once thought I could prove that the set of all natural numbers is as large as its power set. However, I was smart enough to acknowledge my limitations (What's more likely: that I made a mistake in my thinking I haven't yet noticed, or that a theorem pretty much any professional mathematician accepts as true is actually false?), so I actively searched for errors in my thinking. Eventually, I noticed that my method only works for finite subsets (the set of all natural numbers is, indeed, as large as the set of all FINITE subsets), but not for infinite subsets.
Eliezer's method also works for all finite subsets, but not for infinite subsets.
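The claim that the naturals are as large as the set of all finite subsets can be made concrete with the standard binary-encoding bijection (a hypothetical Python illustration, not the specific method mentioned above): each natural number n corresponds to the finite set of positions of its 1-bits.

```python
def nat_to_finite_subset(n):
    # Bit i of n is set  <=>  i is a member of the subset.
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

def finite_subset_to_nat(s):
    return sum(1 << i for i in s)

print(nat_to_finite_subset(5))        # {0, 2}, since 5 = 0b101
print(finite_subset_to_nat({0, 2}))   # 5

# Round trip: every natural maps to a unique finite subset and back,
# so this is a bijection between the naturals and the FINITE subsets.
assert all(finite_subset_to_nat(nat_to_finite_subset(n)) == n
           for n in range(1000))
```

An infinite subset (like the set of all even numbers) would need infinitely many 1-bits, which no natural number has, so the encoding visibly breaks down exactly where the argument does.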
Jeez, "Collapse of Western Civilisation", that's some serious clickbait.