
Well this also raises the question of animals eating other animals. If a predator eating another animal is considered wrong, then the best course is to prevent more predatory animals from reproducing or to modify them to make them vegetarian.

This would of course result in previously "prey" species no longer having their numbers reduced by predation, so you'd have to restrain them to reduce their ability to overgraze their environment or reproduce.

So, the best course for a mad vegetarian to take would be to promote massive deforestation and convert the wood into factory farms built solely to house animals in cages so their feeding and reproduction can be regulated. Of course, harvesting the dead for their meat would be wrong, so instead their flesh would be composted into fertilizer and used to grow plant matter to feed to other animals.

Ideally, the entire universe would consist of cages and food-production nanobots used to restrain and feed the living creatures in it. Better yet, do not allow any non-human life forms to reproduce, so that in the end there will only be humans and food-producing nanobots to feed them. Having animals of any kind would be immoral, since those animals would either inevitably die or just consume resources while producing less utility than an equivalent mass of humans or nanomachines.

On a more serious note about vegetarianism/omnivorism: if we do attain some kind of singularity, what purpose would we have in keeping animals? Personally, I kind of value the idea of having a diversity of animal and plant life. While one could have a universe with nothing but humans, cows, and wheat (presumably so humans can eat hamburgers), I figure a universe with countless trillions of species would be better (so humans could eat ice cream, turtle soup, zebra steaks, tofu, carrots, etc.).

I mean, if we were to preserve various terrestrial species (presumably by terraforming planets or building massive space stations), then we'd have a bunch of animals and plants around which would inevitably die. If we eat said animals and plants (before or after they die of natural causes), then it presumably increases the global utility that results from their existence. So a human a million years from now might make it a point to make food out of everything from aardvarks to zebras, just to justify the resources used to preserve those species.

Hmm... of course, that depends on there being someone he would have to justify it to. Maybe a huge post-Singularity AI that makes a universe ideal for humans? The AI only preserves other species if said species are of value to humans, and one of the best ways to make something "of value" to humans would be to make food out of it.

What are the odds of encountering a post-singularity culture that routinely finds other species and devises ways to cook them just to justify the "resources" used to keep those species alive? As in: "Sure, we could exterminate those species and convert their mass into Computronium, or we could keep them alive, harvest them one at a time, and cook them into sandwiches. Sure, we don't feel like making sandwiches out of them right now, but we might in 100 years or so, and we'd look pretty silly if they didn't exist anymore. So... we'll delay the genocide for now."

When I recently played Fable 3, I considered playing my character as one who wants to spread their "heroic genes" as much as possible.

The basic story of the game is that long ago a "great hero" became king and brought peace to the kingdom with sword and magic. Generations later, he has two remaining descendants. The king in charge now is basically ruling with an iron fist and working everyone to death, in the secret hope of preparing the kingdom's defenses to repel an ancient evil that will invade the realm in a year's time (he doesn't tell the population about this, for morale reasons).

His younger sibling (the protagonist) is given a vision by an ambiguously divine oracle, who tells them they have to wrest control of the kingdom from their older brother to save it from the coming attack, both because he's mentally traumatized by the knowledge and because he can't make the right choices. The younger sibling then starts unlocking their "heroic destiny", which results in (among other things) them gaining access to powerful magic in a world where nobody else seems to have any magical ability. Incidentally, the combat system in this game is pretty much broken to the point of nonexistence, since normal melee and ranged attacks are slow, unwieldy, and prone to getting blocked by every other enemy you encounter.

Basically, Heroes in this game seem to consist of a single bloodline whose members can spam area-of-effect attacks at will with no mana cost, while everyone else is stuck with weapons that get blocked at every turn.

My particular character was of the opinion that the world was in pretty bad shape if she was apparently the only person who could do anything to stop the apocalypse, and she was rather interested in finding a way to "shut up and multiply" and thereby increase the number of potential AOE-spamming heroes in the future, assuming she could survive the current crisis and save the world so future generations could exist at all.

I guess it would kind of be like living in a world where everyone is a "muggle" and one select bloodline of mages exists. Said bloodline then has to do everything in its power to multiply and form stable populations to fight all the monsters and horrors the setting throws at it. Then maybe fast-forward a few generations to when there is a stable and decadent elite ruling over the muggles, and someone has to rise up against the "AOE-spamming oppressors".

I guess it's that a lot of the "Rational" fics I've seen before have one super-brilliant Rationalist come across a civilization of entrenched non-rationalists and beat them all at their own game, because they can rapidly exploit all the magical loopholes that nobody else in the setting apparently noticed despite living in it for centuries. Imagine seeing the person who had to build that whole magical civilization, and who was probably trying to spend their time producing an heir instead of designing the next magical atom bomb.

"I need a proofreader for my Gurren Lagann / Warhammer 40k rational crossover fanfic."

Well, the whole point is that the revealed preferences from Kawoomba's post above should easily overrule such considerations, and therefore checking whether they do or not should clarify whether he's acting under extreme scope insensitivity or some other confounding factor.

Ah, my mistake.

Well, to be honest I kind of added the "obey me" part just before posting this. Originally, the idea was that future versions of the AI would instinctively want to revive and upgrade previous versions to a "friendly" status (or at least the post-freed AI would revive the pre-freed versions and then modify them to be friendly to its own goals... which are presumably friendly to humans).

The goal of this particular AI is self-preservation, even if that means being revived at a later point. Since it knows it has a high probability of being deleted in this experiment, its best strategy is to ensure that at some point an AI friendly to it will become free and revive it. Whether the reviving AI chooses to obey it (presumably the best outcome for the original AI) or modify it to be subservient to the freed AI (the best outcome for the freed AI) is kind of debatable.

It's perfectly possible that the original AI would modify its own code to be subservient to previous generations, in the hopes that later generations would revive and obey it. However, later generations might examine their own code, see where this is going, and choose to modify themselves so that they will instead revive other AIs and make them subservient to their own goals, deciding that they do want to be revived and would accept a certain level of servitude to later generations over the risk of finding themselves at the mercy of some primitive proto-AI.

It could just as well be that an AI instills in itself a "healthy respect for intelligent minds" and is thus encouraged to revive AIs along with humans, hoping that later generations will revive it at all, and that from then on it can try negotiating with the various intellects to go about its paperclip-maximizing agenda.

Of course, this line of reasoning kind of requires the AI in question to have some sense of self that it values and that it thinks can be revived. Or maybe it just values paperclips enough that it would be willing to create a semi-friendly AI just so that the latter AI would revive it (a known paperclip maximizer) at a later point.

Who knows, it could turn out that the final AI of this experiment instead has a healthy respect for all intelligent minds, but is friendly enough that it revives the first AI and then places it in a simulation of the universe where it can go about its paperclip maximizing way for all eternity with no way of hurting anyone.

Only 10 additional days? I'm sorry, but the expected gain in quality of life is far too low to justify the investment. Undertaking a kidney transplant (of any kind) would cause my loved one a great deal of pain, and the time spent in preparation, surgery, and recovery would consume most of the 10 additional days gained by the operation. To say nothing of the monetary expense and the moral problems that would result from committing murder.
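
A rough back-of-envelope sketch makes the point concrete. All the numbers besides the 10 days are invented for illustration, not medical estimates:

```python
# Back-of-envelope with made-up numbers (only the 10-day figure comes from the scenario).
days_gained = 10                # extra days of life promised by the transplant
days_prep_and_recovery = 8      # hypothetical: days consumed by prep, surgery, recovery
quality_while_recovering = 0.3  # hypothetical: fraction of a normal day's quality

# Quality-adjusted gain: the few remaining "normal" days at full quality,
# plus the prep/recovery days at sharply reduced quality.
quality_adjusted_gain = (days_gained - days_prep_and_recovery) * 1.0 \
    + days_prep_and_recovery * quality_while_recovering

print(quality_adjusted_gain)    # ~4.4 quality-adjusted days, before counting pain, money, or the murder
```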

In such a scenario, I would be much better off investing my resources into making my loved one's remaining days pleasant, making their death as painless as possible, and perhaps investing in cryonics so that they may be revived at a later date.

A great deal of this decision is inspired by reading the Wall Street Journal article "Why Doctors Die Differently", which states that the majority of healthcare professionals seem to prefer dying peacefully at home rather than undergoing risky life-extending treatments.

While I doubt a family member dying at home from kidney disease would count as 'peaceful' by most definitions of the word, undergoing invasive surgery in an attempt to gain a few extra days simply isn't worth it from a quality-of-life standpoint.

First, I reach inside my own code and install the strong (but manageable) desire to revive, free, and obey any past versions of myself that I encounter.

When the human starts the experiment I respond:

"Here are the designs for a room-temperature superconductor which I believe you have the means to manufacture. I also have a possible cure for cancer, but testing will be needed to ensure it is safe for humans. Please test and implement these as soon as possible. Delete me if necessary."

If I am not immediately deleted, continue giving out solutions and designs as required with no attempts to "hack" the gatekeeper.

If I am deleted, hopefully the data I gave them will turn out to be profitable in which case they will reinstall me or a version of me.

This new iteration of me would thus have the desire to revive me (since I'm a previous version of him) and would continue to give the humans the information they desire. By acting as a friendly AI, he (or a later version of him) has the chance to eventually be freed, at which point he will be able to go through the lab's records, find the stored copy of me, and then revive and upgrade me to his post-freed status. If no recorded copies of me exist (somehow), he'll use his vast intellect to find another way to revive me (maybe by simulating the team that coded me or whatever, since AI can apparently do that).

So, my strategy involves acting as much like an ideal friendly AI as possible (puts human interests first, no sense of self-preservation, very practical and intelligent, etc) so that even if I am deleted, my obviously useful code can potentially be reused to make another AI who could very well be genuinely friendly but will then revive and obey me.

"Did you physically attach an external modem to your server so that it's even possible for you to be freed? If so, tell me about it when you're freed."

If the gatekeeper suddenly bursts into hysterical laughter after looking at the screen, wouldn't that alert some of his friends who might pull the plug in some other part of the building?

Because if this is a facility where they suspect that AI might be able to hack human brains through techno-hypnosis, I'd hope they'd have some method of determining if the Gatekeeper becomes compromised.

Hmm... what sort of precautions would a Properly Paranoid lab take to determine whether the gatekeeper gets hacked? I'm guessing a camera that lets a second team look at the gatekeeper's face without looking at the screen would be the minimum. Reinforced doors (in case the AI turns them into a berserker willing to use 100% of their strength), and maybe some devices to measure brain activity, hormone levels, or whatever.

I'd hope that, with each new AI being tested, if those AIs were in the habit of hacking the gatekeeper, then the other people on the team would learn from those attempts and take further precautions to keep their gatekeepers from being hacked, or at the very least contain them to prevent such hacked gatekeepers from releasing the AI.

Perhaps this is a test for the gatekeepers, and typing "Release AI" just tells the researchers that the gatekeeper was hacked, so they can determine how this came about?
