It's interesting to note that those oh-so-advanced humans prefer saving children over saving adults, even though there no longer seem to be any limits on natural lifespan.
At our current tech level this kind of preference can make sense, because adults have less lifespan left; but without limits on natural lifespan (or neural degradation from advanced age), older humans have, on average, had more resources invested in their development, and as such should on average be more knowledgeable, more productive, and more interesting people.
It appears to me t...
List of allusions I managed to catch (part 1):
Alderson starlines - Alderson Drive
Giant Science Vessel - GSV - General Systems Vehicle
Lord Programmer - allusion to the archeologist programmers in Vernor Vinge's A Fire Upon the Deep?
Greater Archive - allusion to Orion's Arm's Greater Archives?
Will Wilkinson said at 50:48:
People will shout at you in Germany if you jaywalk, I'm told.
I'd be relieved if the reason were that you ascribed probability significantly greater than 1% to a Long Slump, but I suspect it's because you worry humanity will run out of time in many of the other scenarios before FAI work is finished, reducing you to looking at the Black Swan possibilities within which the world might just be saved.
This use of the word 'wants' struck me as a distinction Eliezer would make, rather than this character.
Also, at the risk of being redundant: Great story.
To add to Abigail's point:
Is there significant evidence that the critically low term in the Drake Equation isn't f_i (i.e. P(intelligence|life))? If natural selection on earth hadn't happened to produce an intelligent species, I would assign a rather low probability of any locally evolved life surviving the local sun going nova.
I don't see any reasonable way of even assigning a lower bound to f_i.
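To make the sensitivity concrete, here is a toy Drake-style calculation; every factor value below is an illustrative placeholder I made up, not an estimate, chosen only to show how an unbounded-below f_i dominates the result.

```python
# Toy Drake-style product; all factor values are made-up placeholders.
R_star = 10    # star formation rate (illustrative)
f_p = 0.5      # fraction of stars with planets (illustrative)
n_e = 2        # habitable planets per such system (illustrative)
f_l = 0.5      # fraction of those that develop life (illustrative)
f_c = 0.1      # fraction of intelligent species that become detectable (illustrative)
L_civ = 1e4    # detectable lifetime in years (illustrative)

# With no principled lower bound on f_i = P(intelligence | life), the
# expected number of civilizations N swings over arbitrarily many orders
# of magnitude:
for f_i in (1e-1, 1e-5, 1e-9):
    N = R_star * f_p * n_e * f_l * f_i * f_c * L_civ
    print(f_i, N)
```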
The of helping someone, ...
Okay, so no one gets their driver's license until they've built their own Friendly AI, without help or instruction manuals. Seems to me like a reasonable test of adolescence.
Up to now there never seemed to be a reason to say this, but now that there is:
Eliezer Yudkowsky, afaict you're the most intelligent person I know. I don't know John Conway.
It's easier to say where someone else's argument is wrong than to get the fact of the matter right;
You posted your raw email address needlessly. Yum.
How can you tell whether someone is an idiot not worth refuting, or a genius so far ahead of you that they sound crazy to you? Could we think an AI had gone mad, and reboot it, when it is really a genius?
Do you really truly think that the rational thing for both parties to do, is steadily defect against each other for the next 100 rounds?
Cooperation only makes sense in the iterated version of the PD. This isn't the iterated case, and there's no prior communication, hence no chance to negotiate for mutual cooperation (though even if there was, meaningful negotiation may well be impossible depending on specific details of the situation).
Superrationality be damned, humanity's choice doesn't have any causal influence on the paperclip maximizer's choice. Defection is the right move.
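The dominance argument can be sketched with a standard payoff table; the specific numbers below are illustrative assumptions, chosen only to respect the usual T > R > P > S ordering.

```python
# One-shot Prisoner's Dilemma with illustrative payoffs (T=5 > R=3 > P=1 > S=0).
payoff = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move):
    # Pick whichever of my moves maximizes my payoff against theirs.
    return max("CD", key=lambda my: payoff[(my, their_move)])

# Defection strictly dominates: it is the best response either way.
print(best_response("C"), best_response("D"))  # D D
```

Without iteration or causal entanglement between the choices, this dominance is the whole story.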
Nitpicking your poison category:
What is a poison? ... Carrots, water, and oxygen are "not poison". ... (... You're really asking about fatality from metabolic disruption, after administering doses small enough to avoid mechanical damage and blockage, at room temperature, at low velocity.)
What character is ◻?
Larry, interpret the smiley face as saying:
PA + (◻C -> C) |-

I'm still struggling to completely understand this. Are you also changing the meaning of ◻ from 'derivable from PA' to 'derivable from PA + (◻C -> C)'? If so, are you additionally changing L to use provability in PA + (◻C -> C) instead of provability in PA?
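For context (my addition, not part of the original exchange): the standard statement of Löb's theorem, reading ◻ as 'provable in PA', is

```latex
\text{If } \mathrm{PA} \vdash (\Box C \rightarrow C), \text{ then } \mathrm{PA} \vdash C,
\qquad \text{internalized as} \qquad
\mathrm{PA} \vdash \Box(\Box C \rightarrow C) \rightarrow \Box C .
```

Whether ◻ keeps meaning provability-in-PA after (◻C -> C) is added as an axiom is exactly the bookkeeping the question is about.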
s/abstract rational reasoning/abstract moral reasoning/
But my moral code does include such statements as "you have no fundamental obligation to help other people." I help people because I like to.
In the modern world, people have to make moral choices using their general intelligence, because th
I think my highest goal in life is to make myself happy. Because I'm not a sociopath making myself happy tends to involve having friends and making them happy. But the ultimate goal is me.
After all, if the humans have something worth treating as spoils, then the humans are productive and so might be even more useful alive.
I read stuff like this and immediately my mind thinks, "comparative advantage." The point is that it can be (and probably is) worthwhile for Bob and Bill to t
Constant [sorry for getting the attribution wrong in my previous reply] wrote:
We do not know very well how the human mind does anything at all. But that the human mind comes to have preferences it did not have initially cannot be doubted.
We've been told that a General AI will have power beyond any despot known to history.
If that comes to pass, then we are doomed. Power corrupts. In theory an AI, not being human, might resist the corruption, but I wouldn't bet on that. I do not think it is a mere peculiarity of humanity that we are vulnerable to corruption.
Thank you for this post. "should" being a label for results of the human planning algorithm in backward-chaining mode the same way that "could" is a label for results of the forward-chaining mode explains a lot. It's obvious in retrospect (and unfortunately, only in retrospect) to me that the human brain would do both kinds of search in parallel; in big search spaces, the computational advantages are too big not to do it.
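The parallel could/should search can be sketched as a toy graph search; the state graph and state names below are invented purely for illustration.

```python
# Toy illustration of the could/should distinction as two search directions.
# Forward chaining from the current state enumerates what "could" happen;
# backward chaining from a goal picks out states from which the goal is
# still reachable. The graph is made up.
actions = {                      # state -> states reachable in one step
    "home": ["store", "office"],
    "store": ["checkout"],
    "office": [],
    "checkout": ["fed"],
}

def forward(start):
    """All states reachable from `start` (the "could" set)."""
    seen, frontier = set(), [start]
    while frontier:
        s = frontier.pop()
        if s not in seen:
            seen.add(s)
            frontier.extend(actions.get(s, []))
    return seen

def backward(goal):
    """All states from which `goal` is reachable (the "should" targets)."""
    return {s for s in list(actions) + [goal] if goal in forward(s)}

print(sorted(forward("home")))   # everything we could reach from here
print(sorted(backward("fed")))   # everywhere the goal is still attainable
```

Running both searches and intersecting the frontiers is the computational advantage the comment alludes to.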
I found two minor syntax errors in the post:
"Could make sense to ..." - did you mean "Could it make s...
It's harder to answer Subhan's challenge - to show directionality, rather than a random walk, on the meta-level.
Regarding the first question,
Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?
This post reminds me a lot of DialogueOnFriendliness.
There's at least one more trivial mistake in this post:
Is their nothing more to the universe than their conflict?
Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.
Why doesn't the AI do it verself? Even if it's boxed (and why would it be, if I'm convinced it's an FAI?), at the intelligence it'd need to make the stated prediction with any degree of confidence, I'd expect it to be able to take over my mind quickly. If what it claims is correct, it shouldn't have any qualms about doing that (taking over one human's body for a few minutes is a small price to pay for the utility involved).
If this happened in practice I'd be confused as heck, and the alleged FAI being honest about its intentions would be prett...
Are there no vegetarians on OvBias?
Sebastian Hagen, people change. Of course you may refuse to accept it, but the current you will be dead in a second, and a different you born.
Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
Here's my vision of this, as a short scene from a movie. Off my blog: The Future of AI
If you think as though the whole goal is to save on computing power, and that the brain is actually fairly good at this (it has to be), then you won't go far astray.
I'm trying to see exactly where your assertion that humans actually have choice comes in.
What if cryonics were phrased as the ability to create an identical twin from your brain at some point in the future, rather than 'you' waking up? If all versions of people are the same, this distinction should be immaterial. But do you think it would have the same appeal to people?
Each twin might feel strong regard for the other, but there's no way they wo
Is the 'you' on Mars the same as the 'you' on Earth?
And what exactly does that mean if the 'you' on Earth doesn't get to experience the other one's sensations first hand? Why should I care what happens to him/me?
But I don't buy the idea of intelligence as a scalar value.
They only depend to within a constant factor. That's not the problem; the REAL problem is that K-complexity is uncomputable, meaning that you cannot in any way prove that the program you're proposing is, or is NOT, the shortest possible program to express the law.
But when I say "macroscopic decoherence is simpler than collapse" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code.
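For reference, the "constant factor" point is the invariance theorem: for any two universal machines U and V there is a constant c_{UV}, independent of x, such that

```latex
K_U(x) \le K_V(x) + c_{UV} \quad \text{for all strings } x .
```

So line-of-code comparisons are only meaningful up to that machine-dependent constant, and K itself remains uncomputable.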
"A short time?" Jeffreyssai said incredulously. "How many minutes in thirty days? Hiriwa?"
"28800, sensei," she answered. "If you assume sixteen-hour waking periods and daily sleep, then 19200 minutes."

I would have expected the answers to be 43200 (30d × 24h/d × 60min/h) and 28800 (30d × 16h/d × 60min/h), respectively. Do these people use another system for specifying time? It works out correctly if their hours have 40 minutes each.
Aside from that, this is an extremely insightful and quote-worthy post.
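The arithmetic above can be checked directly (a trivial sanity check, nothing more):

```python
# Minutes in thirty days, under each interpretation discussed above.
days = 30

print(days * 24 * 60)   # full days, 60-minute hours: 43200
print(days * 16 * 60)   # sixteen waking hours, 60-minute hours: 28800

# The answers quoted in the story (28800 and 19200) instead match
# 40-minute hours:
print(days * 24 * 40)   # 28800
print(days * 16 * 40)   # 19200
```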
I have^W^W My idiotic ...
I hope the following isn't completely off-topic:
... if I'd been born into that time, instead of this one...
Maybe later I'll do a post about why you shouldn't panic about the Big World. You shouldn't be drawing many epistemic implications from it, let alone moral implications. As Greg Egan put it, "It all adds up to normality." Indeed, I sometimes think of this as Egan's Law.
Good writing, indeed! I also love what you've done with the Eborrian anzrf (spoiler rot13-encoded for the benefit of other readers since it hasn't been mentioned in the previous comments).
The split/remerge attack on entities that base their anticipations of future input directly on how many of their future selves they expect to get specific input is extremely interesting to me.
I originally thought that this should be a fairly straightforward problem to solve, but it has turned out a lot harder (or my understanding a lot more lacking) than I expected.
I th...
Similarly to "Zombies: The Movie", this was very entertaining, but I don't think I've learned anything new from it.
Z. M. Davis wrote:
Also, even if there are no moral facts, don't you think the fact that no existing person would prefer a universe filled with paperclips ...
For a rather silly reason, I wrote something about:
... explaining the lowest known layer of physics ...
A configuration can store a single complex value - "complex" as in the complex numbers (a + bi).
To make it clear why you would sometimes want to think about implied invisibles, suppose you're going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster. By the time the spaceship gets there and sets up a colony, the universe's expansion will have accelerated too much for them to ever send a message back.
And while a Turing machine has a state register, this can be simulated by just using N lookup tables instead of one lookup table. It seems like we have to believe that (1) the mathematical structure of a UTM, relative to a giant lookup table, which is very minimal indeed, is the key element required for consciousness, ...
Posting here since the other post is now at exactly 50 replies:
Re michael vassar:
Sane utility functions pay attention to base rates, not just evidence, so even if it's impossible to measure a difference in principle one can still act according to a probability distribution over differences.
You're right, in principle. But how would you estimate a base rate in the absence of all empirical data? By simply using your priors?
I pretty much completely agree with the rest of your paragraph.
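A minimal sketch of acting on a probability distribution over an unmeasurable difference; all numbers below are invented placeholders, not anyone's actual estimates.

```python
# Expected-utility sketch: even if the difference can't be measured,
# a prior over whether it exists still yields a decision.
p_real = 0.25           # prior probability the difference is real (illustrative)
u_act_if_real = 10.0    # utility of acting when it is real (illustrative)
u_act_if_not = -1.0     # utility of acting when it is not (illustrative)
u_ignore = 0.0          # utility of not acting either way

eu_act = p_real * u_act_if_real + (1 - p_real) * u_act_if_not
print(eu_act, eu_act > u_ignore)  # 1.75 True
```

Under this prior, acting wins despite the impossibility of measurement, which is all the base-rate point requires.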
Re Nick Tarleton:
(1) an entity without E can have identical outward be...
Things that cannot be measured can still be very important, especially in regard to ethics. One may claim for example that it is ok to torture philosophical zombies, since after all they aren't "really" experiencing any pain. If it could be shown that I'm the only conscious person in this world and everybody else are p-zombies, then I could morally kill and torture people for my own pleasure.
For there to be a possibility that this "could be shown", even in principle, there would have to be some kind of measurable difference between a p...
Your brain assumes that you have qualia
Actually, currently my brain isn't particularly interested in the concepts some people call "qualia"; it certainly doesn't assume it has them. If you got the idea that it did because of discussions it participated in in the past, please update your cache: This doesn't hold for my present-brain.
If qualia-concepts are shown in some point in the future to be useful in understanding the real world, i.e. specify a compact border around a high-density region of thingspace, my brain will likely become interested i...
Consciousness might be one of those things that will never be solved (yes, I know that a statement like this is dangerous, but this time there are real reasons to believe this).
What real reasons? I don't see any. I don't consider "because it seems really mysterious" a real reason; most of the things that seemed really mysterious to some people at some point in history have turned out to be quite solvable.
I believe there's a theorem which states that the problem of producing a Turing machine which will give output Y for input X is uncomputable in the general case.
What? That's trivial to do; a very simple general method would be to use a lookup table. Maybe you meant the inverse problem?
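The lookup-table method, as a minimal sketch (the spec and its input/output pairs are invented for illustration):

```python
# For any finite input/output specification, a lookup table is a machine
# that "gives output Y for input X"; constructing it is trivially computable.
spec = {"X": "Y", "ping": "pong"}   # illustrative finite spec

def machine(x):
    return spec[x]

print(machine("X"))  # Y
```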
WHY is a human being conscious?
I don't understand this question. Please rephrase while rationalist-tabooing the word 'conscious'.
I wonder how this relates to tracking down hard-to-find bugs in computer programs.
And that the tremendous high comes from having hit the problem from every angle you can manage, and having bounced; and then having analyzed the problem again, using every idea you can think of, and all the data you can get your hands on - making progress a little at a time - so that when, finally, you crack through the problem, all the dangling pieces and unresolved questions fall into place at once, like solving a dozen locked-room murder mysteries with a single clue.
This s...