I think that's a cognitive illusion, but I understand that it can generate positive emotions that are not an illusion, by any means.
More a legacy kind of consideration, really - I do not imagine any meaningful part of myself, other than my genes (which frankly I was just borrowing), living on. But - if I have done my job right, the attitudes and morals that I have should be reflected in my children, and so I have an effect on the world in some small way that lingers, even if I am not around to see it. And yes - that's comforting, a bit. Still would rather not die, but hey.
So - I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
The success of the self-modifying AI would make the observations of that AI's builders extremely rare... why? Because the AI's observations count, and it is presumably many orders of magnitude faster?
For a moment, I will assume I...
Ah - I'd seen the link, but the widget just spun. I'll go look at the PDF. The below is before I have read it - it could be amusing and humility-inducing if I read it and it makes me change my mind on the below (and I will surely report back if that happens).
As for the SSA being wrong on the face of it - the DA wiki page says "The doomsday argument relies on the self-sampling assumption (SSA), which says that an observer should reason as if they were randomly selected from the set of observers that actually exist." Assuming this is true (I do no...
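To make the SSA arithmetic concrete (my own toy numbers, not the wiki's), the doomsday-style inference can be sketched as:

```python
# Toy sketch of the doomsday-style arithmetic. Under SSA you reason as if
# your birth rank were drawn uniformly at random from all humans who will
# ever exist - so with probability `confidence` you are NOT in the earliest
# (1 - confidence) sliver, which bounds the total population from above.

def max_total_given_rank(birth_rank, confidence):
    """With probability `confidence`, my rank falls in the last `confidence`
    fraction of all ranks, i.e. N_total <= birth_rank / (1 - confidence)."""
    return birth_rank / (1 - confidence)

rank = 60e9  # roughly the number of humans born so far (ballpark figure)
print(max_total_given_rank(rank, 0.95))  # 95% upper bound on total humans
```

The numbers are illustrative; the point is only that the bound follows mechanically once you accept the uniform-rank assumption - which is exactly the part I'm unsure about.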
I have an intellectual issue with using "probably" before an event that has never happened before, in the history of the universe (so far as I can tell).
And - if I am given the choice between slow, steady improvement in the lot of humanity (which seems to be the status quo), and a dice throw that results in either paradise, or extinction - I'll stick with slow steady, thanks, unless the odds were overwhelmingly positive. And - I suspect they are, but in the opposite direction, because there are far more ways to screw up than to succeed, and once ...
The techniques are useful, in and of themselves, without having to think about utility in creating a friendly AI.
So, yes, by all means, work on better skills.
But - the point I'm trying to make is that while they may help, they are insufficient to provide any real degree of confidence in preventing the creation of an unfriendly AI, because the emergent effects likely responsible for such an AI are not amenable to planning ahead of time.
It seems to me your original proposal is the logical equivalent to "Hey, if we can figure out how to be...
So - there's probably no good reason for you - as a mind - to care about your genes, unless you have reason to believe they are unique or somehow superior in some way to the rest of the population.
But as a genetic machine, you "should" care deeply, for a very particular definition of "should" - simply because if you do not, and that turns out to have been genetically related, then yours will indeed die out. The constant urge and competition to reproduce your particular set of genes is what drives evolution (well, that and some other st...
Exactly. Having a guaranteed low-but-livable-income job as a reward for serving time and not going back is hardly a career path people will aim for - but it might be attractive to someone who is out and sees few alternatives but to go back to a life of crime.
I actually think training and new-deal type employment guarantees for those in poverty is a good idea aside from the whole prison thing - in that attempts to raise people from poverty would likely reduce crime to begin with.
The real issue here - running a prison being a profit-making business - has already been pointed out.
Dunning-Kruger - learn it, fear it. So long as you are aware of that effect, and aware of your tendency to arrogance (hardly uncommon, especially among the educated), you are far less likely to have it be a significant issue. Just be vigilant.
I have similar issues - I find it helpful to dive deeply into things I am very inexperienced with, for a while; realizing there are huge branches of knowledge you may be no more educated in than a 6th grader is humbling, and freeing, and once you are comfortable saying "That? Oh, hell - I don't know much about th...
I spent 7 years playing a video game that started to become as important to me as the real world, at least in terms of how it emotionally affected me. If I had spent the 6ish hours a day, on average, doing something else - well, it makes me vaguely sick to think of the things I might have better spent the time and energy on. Don't get me wrong - it was fun. And I did not sink nearly so low as so many others have, and in the end, when I realized what was going on - I left. I am simply saddened by the opportunity cost. FWIW - this is less about the "...
Perhaps instead of the prison, the ex-prisoner should be given the financial incentive to avoid recidivism. Reward good behavior, rather than punish bad.
We could do this by providing training, and giving them reasonable jobs. HA HA! I make myself laugh. Sigh.
It seems to me the issue is less one of recidivism, and more one of the prison-for-profit machine. Rather than address it by trying to make them profit either way (they get paid if the prisoner returns already - this is proposing they get paid if they stay out) - it seems simpler to remove profit as a...
Fair question.
My point is that if improving techniques could take you from (arbitrarily chosen percentages here) a 50% chance that an unfriendly AI would cause an existential crisis, to 25% chance that it would - you really didn't gain all that much, and the wiser course of action is still not to make the AI.
The actual percentages are wildly debatable, of course, but I would say that if you think there is any chance - no matter how small - of triggering ye olde existential crisis, you don't do it - and I do not believe that technique alone could get us a...
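To show the shape of that argument with made-up numbers (the function and values below are mine, purely illustrative - extinction is weighted as vastly worse than paradise is good, because it is irrecoverable):

```python
# Toy expected-value check for the "improve techniques vs. don't build it"
# question. All numbers are arbitrary, as in the comment above.

def gamble_value(p_doom, v_paradise=1.0, v_extinction=-1000.0):
    """Expected value of building the AI, given a probability of doom.
    Extinction is weighted far more heavily because it is unrecoverable."""
    return (1 - p_doom) * v_paradise + p_doom * v_extinction

for p in (0.50, 0.25, 0.01):
    print(p, gamble_value(p))  # still negative even at a 1% risk
```

Halving the risk from 50% to 25% improves the number, but under this weighting the gamble stays well below the "slow steady improvement" baseline of roughly zero - which is the point I was gesturing at.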
The article supports that agricultural diets were worse - but the hunter-gatherers' were, as well. Nobody ate a lot back then; abundance is fairly new to humanity. The important part about agriculture is not that it might be healthier - far from it.
Agriculture (and the agricultural diets that go with it) allowed humanity luxuries that the hunter-gatherer did not have - a dependable food supply, and moreover a supply where a person could grow more food than they actually needed for subsistence. This is the very foundation of civilization, and all of the ben...
It hits a nerve with me. I do computer tech stuff, and one of the hardest things for people to learn, seemingly, is to admit they don't actually know something (and that they should therefore consider, oh, doing research, or experimenting, or perhaps seeking someone with experience). The concept of "Well - you certainly can narrow it down in some way" is lovely - but you still don't actually know. The incorrect statement would be "I know nothing (about your number)" - but nobody actually says that.
I kinda flip it - we know nothing for sure ...
Actually - I took a closer look. The explanation is perhaps simpler.
Tide doesn't make a stand-alone fabric softener. Or if they do - Amazon doesn't seem to have it? There's Tide, and Tide with Fabric Softener, and Tide with a dozen other variants - but nothing that's not detergent plus.
So - no point in differentiating. The little ad-man in my head says "We don't sell mere laundry detergent - we sell Tide!"
To put it another way - did you ever go to buy detergent, and accidentally buy fabric softener? Yeah, me neither. So - the concern is perhaps unfounded.
While reading up on Jargon in the wiki (it is difficult to follow some threads without it), I came across:
http://wiki.lesswrong.com/wiki/I_don%27t_know
The talk page does not exist, and I have no rights to create it, so I will ask here: If I say "I am thinking of a number - what is it?" - would "I don't know" be not only a valid answer, but the only answer, for anyone other than myself?
The assertion the page makes is that "I don't know" is "Something that can't be entirely true if you can even formulate a question." ...
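For what it's worth, the wiki's "narrow it down" point can be made concrete with a toy sketch (the survey numbers below are invented for illustration):

```python
# Sketch of the "you can narrow it down" claim: even before any answer,
# "a number a person is thinking of" is not total ignorance - some guesses
# are more probable than others. The survey data is hypothetical.

from collections import Counter

# Hypothetical survey of numbers people pick when asked for "a number".
survey = [7, 7, 3, 7, 13, 42, 7, 100, 3, 42]
prior = Counter(survey)
total = sum(prior.values())

def p(n):
    """Estimated probability the asker is thinking of n (0 if never seen)."""
    return prior[n] / total

print(p(7), p(9999))
```

This is the sense in which "I know nothing" isn't entirely true - though, as I say above, a non-uniform prior is still a long way from actually knowing.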
Ah - that's much clearer than your OP.
FWIW - I suspect it violates causality under nearly everyone's standards.
You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is "no".
So - you are suggesting that if the AI generates enough simulations of the "prime" reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?
If so - the flaw lies in orders of infinity. For every wa...
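For reference, here is the counting I'm attributing to the proposal, in toy form (my framing, not the OP's) - note it assumes a finite number of simulations, which is exactly where I think the orders-of-infinity problem bites:

```python
# Toy version of the counting argument as I understand it: one "prime"
# reality plus k indistinguishable simulations, with an observer reasoning
# as if randomly placed among all k + 1 copies.

def p_simulated(k):
    """Probability a random observer is in a simulation, given k sims."""
    return k / (k + 1)

for k in (1, 10, 1_000_000):
    print(k, p_simulated(k))  # approaches 1 as k grows
```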
No, the entire point is not to know whether you are simulated before the Singularity. Afterwards, the danger is already averted.
Then perhaps I simply do not understand the proposal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
This is where I am confused. The "of course" is not very "of coursey" to me. Can you explain how a self-modifying AI would be risky in this regard (a citation is fine, you do not need to repeat a well known argu...
Fair enough. I should mention my "Why" was more nutsy-and-boltsy than asking about motive; it would perhaps more accurately have been asked as "What do you observe about lesswrong, as it stands, that makes you believe it can or should be improved". I am willing to take the desire for it as a given.
The goal of the why, fwiw, was to encourage self-examination, to help perhaps ensure that the "improvement" is just that. Fairly often, attempts to improve things are not as successful as hoped (see most of world history), and as I g...
The flaws leading to an unexpectedly unfriendly AI certainly might lead back to a flaw in the design - but I think it is overly optimistic to think that the human mind (or a group of minds, or perhaps any mind) is capable of reliably creating specs that are sufficient to avoid this. We can and do spend tremendous time on this sort of thing already, and bad things still happen. You hold the shuttle up as an example of reliability done right (which it is) - but it still blew up, because not all of shuttle design is software. In the same way, the issue could ...
Strawman?
"... idea for an indirect strategy to increase the likelihood of society acquiring robustly safe and beneficial AI." is what you said. I said preventing the creation of an unfriendly AI.
Ok. valid point. Not the same.
I would say the items described will do nothing whatsoever to "increase the likelihood of society acquiring robustly safe and beneficial AI."
They are certainly of value in normal software development, but it seems increasingly likely as time passes without a proper general AI actually being created that such a tas...