Strawman?

"... idea for an indirect strategy to increase the likelihood of society acquiring robustly safe and beneficial AI." is what you said. I said preventing the creation of an unfriendly AI.

OK, valid point. Not the same.

I would say the items described will do nothing whatsoever to "increase the likelihood of society acquiring robustly safe and beneficial AI."

They are certainly of value in normal software development, but as time passes without a proper general AI actually being created, it seems increasingly likely that the task is far, far more difficult than anyone expected, and that if one does come into being, it will happen in a manner other than the typical software development process as we do things today. My guess is it will be an incremental process of change and refinement seeking a goal. A great starting point might presumably reduce the iterations a bit, but other than a head start toward the finish line, I cannot imagine it would affect the course much.

If we drop single-cell organisms on a terraformed planet and come back a hundred million years or so later, we might well expect to find higher life forms evolved from them - but finding human beings is basically not gonna happen. If we repeat the experiment, we get the same general outcome (higher life forms), but wildly differing specifics. The initial state of the system ends up being largely unimportant - what matters is evolution: the ability to reproduce, mutate, and adapt. Direction during that process could well guide it, but the exact configuration of the initial state (the exact type of organisms we used as a seed) is largely irrelevant.

Re: computer security - I actually do that for a living. Small security rant - my apologies:

You do not actually try to get every layer "as right and secure as possible." The whole point of defense in depth is that any given security measure can fail, so to ensure protection, you use multiple layers of different technologies so that when (not if) one layer fails, the other layers are there to "take up the slack", so to speak.

The goal on each layer is not "as secure as possible", but simply "as secure as reasonable" (you seek a "sweet spot" that balances security and other factors like cost), and you rely on the whole to achieve the goal. Considerations include cost to implement and maintain, the value of what you are protecting, the damage caused should security fail, who your likely attackers will be and their technical capabilities, performance impact, customer impact, and many other factors.

Additionally, security costs at a given layer do not increase linearly, so making a given layer more secure, while often possible, quickly becomes inefficient. Example - most websites use a 2k SSL key; 4k is more secure, and 8k is even more so. Except 8k doesn't work everywhere, and the bigger keys come with a performance impact that matters at scale - and the key size is usually not the reason a key is compromised. So the entire world (for the most part) does not use the most secure option, simply because it's not worth it - the additional security is swamped by the drawbacks. (Similar issues occur regarding cipher choice, FWIW.)
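To put rough numbers on that non-linearity, here is a minimal sketch using Python's `cryptography` library; the key sizes and workload are illustrative assumptions, not a benchmark of any real deployment:

```python
# Minimal sketch: how RSA key size affects generation and signing cost.
# Illustrative only - real TLS performance also depends on hardware, cipher
# suites, session reuse, etc. The 8192-bit case can take a while to generate.
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

message = b"example handshake payload"

for bits in (2048, 4096, 8192):
    start = time.perf_counter()
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    gen_seconds = time.perf_counter() - start

    start = time.perf_counter()
    for _ in range(20):
        key.sign(message, padding.PKCS1v15(), hashes.SHA256())
    sign_ms = (time.perf_counter() - start) / 20 * 1000

    print(f"{bits}-bit key: generate {gen_seconds:.2f}s, sign {sign_ms:.1f}ms")
```

Signing cost grows much faster than linearly with modulus size, while the practical attack surface (stolen keys, fraudulent certificates, misconfiguration) barely changes - which is why the "sweet spot" sits well below the maximum.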

In reality - in nearly all situations, human beings are the weak link. You can have awesome security, and all it takes is one bozo and it all comes down. SSL is great, until someone manages to get a key signed fraudulently and bypasses it entirely. Packet filtering is dandy, except that Fred in accounting wanted to play Minecraft and opened up an SSH tunnel, incorrectly. MFA is fine, except the secretary who logged into the VPN using MFA just plugged the thumb drive they found in the parking lot into her PC and actually ran "Elf Bowling", and now your AD is owned and the attacker is escalating privilege from inside - so it doesn't matter that much about your hard candy shell, he's in the soft, chewy center. THIS, by the way, is where things like education are of the most value - not in making the very skilled more skilled, but in making the clueless somewhat more clueful. If you want to make a friendly AI - remove human beings from the loop as much as possible...

Ok, done with rant. Again, sorry - I live this 40-60 hours a week.

I think that's a cognitive illusion, but I understand that it can generate positive emotions that are not an illusion, by any means.

More a legacy kind of consideration, really - I do not imagine any meaningful part of myself, other than genes (which frankly I was just borrowing), will live on. But if I have done my job right, the attitudes and morals that I have should be reflected in my children, and so I have an effect on the world in some small way that lingers, even if I am not around to see it. And yes - that's comforting, a bit. Still would rather not die, but hey.

So - I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.

"The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare."

The success of the self-modifying AI would make the observations of that AI's builders extremely rare... why? Because the AI's observations count, and it is presumably many orders of magnitude faster?

For a moment, I will assume I have interpreted that correctly. So? How is this risky, and how would creating billions of simulated humanities change that risk?

I think the argument is that the overwhelming number of simulated humanities somehow makes it likely that the original builders are actually a simulation of the original builders running under an AI? How would this make any difference? How would this be expected to "percolate up" through the stack? Presumably somewhere there is the "original" top-level group of researchers still, no? How are they not at risk?

How is it that a builder's observations are ok, the AI's are bad, but the simulated humans running in the AI are suddenly good?

I think, after reading what I have, that this is the same fallacy I talked about in the other thread - the idea that if you find yourself in a rare spot, it must mean something special, and that you can work the probability of that rareness backwards to a conclusion. But I am by no means sure, or even mostly confident, that I am interpreting the proposal correctly.

Anyone want to take a crack at enlightening me?

Ah - I'd seen the link, but the widget just spun. I'll go look at the PDF. What follows was written before I read it - it could be amusing and humility-inducing if reading it makes me change my mind (and I will surely report back if that happens).

As for the SSA being wrong on the face of it - the DA wiki page says "The doomsday argument relies on the self-sampling assumption (SSA), which says that an observer should reason as if they were randomly selected from the set of observers that actually exist." Assuming this is accurate (I do not know enough to judge yet), if the SSA is false, then the DA is unsupported.

So - let's look at the SSA. In a nutshell, it revolves around how unlikely it is that you were born in the first small percentage of history - and ergo, doomsday must be around the corner.
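For reference, here is my paraphrase of the standard formulation (my own reconstruction of what the wiki page is gesturing at, not a quote): the SSA treats your birth rank $r$ as a uniform draw from the $N$ humans who will ever live, so

$$P\!\left(\frac{r}{N} \le 0.05\right) \approx 0.05 \quad\Rightarrow\quad N \le 20\,r \text{ with 95\% ``confidence''},$$

and with roughly $10^{11}$ people born so far, that caps the total at around $2 \times 10^{12}$ - hence "doomsday must be around the corner."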

I can think of 2 very strong arguments for the SSA being untrue.

First - this isn't actually how probability works. Take a fair coin and decide to flip it. The probabilities of heads and tails are the same: 1/2, 50% for each. Flip the coin, and note the result. The probability is now unity - there is no magic way to get that 50/50 back. That coin toss result is now and forevermore heads (or tails). You cannot look at a given result, work backwards to how improbable it was, and then use that - because it is no longer improbable; it's history. Probability does not actually work backwards in time, although it is convenient in some cases to pretend it does.

Another example - what is the probability that I was born at the exact second, minute, hour, and day, at the exact location I was born at, out of the countless other places and times that humanity has existed that I could have been born in? The answer, of course: unity. And nil at all other places and times, because it has already happened - the waveform, if you will, has collapsed; Elvis has left the building.

So - what is the probability that you were born so freakishly early in a 5-million-year reign of humanity, in the first 0.000001% of all the people who will ever live? Unity. Because it's history. And the only thing making this position any different whatsoever from the others is blind chance. There is nothing one bit special about being in the first bit, other than that it allows you to notice that. (Feel free to substitute anything for 5 million above - it's all the same.)

Second - there are also logical issues: you can spin the argument on its head, and it still works (with less force, to be sure). What are the chances of me being alive for doomsday? Fairly small - despite urban legend, the number of people alive today is a fairly small percentage (6-7%) of all who have ever lived. Ergo - doomsday cannot be soon, because it was unlikely I would be born to see it. (Again, flawed - right now, the likelihood that I was born at that time is unity.)
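The back-of-the-envelope arithmetic behind that 6-7% figure, using the commonly cited ballpark estimates (the exact numbers are assumptions, not census data):

```python
# Rough arithmetic for the "inverted" doomsday argument above.
# Population figures are ballpark estimates, not precise data.
ever_born = 110e9   # commonly cited estimate: ~100-120 billion humans ever born
alive_now = 7.5e9   # roughly the current world population

fraction_alive = alive_now / ever_born
print(f"Share of all humans ever born who are alive today: {fraction_alive:.1%}")
# ~7% - so "being alive to see doomsday" is itself an unlikely observation,
# which is the same style of argument pointed the other way.
```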

Quite aside from the probability issue, an argument that can be used to "prove" both T and ~T is flawed and should be discarded. "Prove" here is being used very loosely, because this is nowhere close to proof - which is good, because I like things like math working.

Time to go read a PDF.

Update: Done. That was quite enjoyable, thank you. A great deal of food for thought, and like most good, crunchy, info-filled things, there were bits I quite agreed with and bits I quite disagreed with (and that's fine).

I took some notes; I will not attempt to post them here, because I have already run into comment length issues, and I'm a wordy SOB. I can post them to a gist or something if anyone is interested, I kept them mostly so I could comment intelligently after reading it. Scanning back thru for the important bits:

Anthropomorphic reasoning would be useless, as suggested - unless the AI was designed by and for humans to use. Which it would be. So it may well be useful in the beginning, because presumably we would be modeling desired traits (like "friendliness") on human traits. That could easily fail catastrophically later, of course.

The comparison between evolution and AI, in terms of relation to humans on page 11 was profound, and very well said.

There are an awful lot of assumptions presented as givens, and then used to assert other things. If any of them are wrong - the chain breaks. There were also a few suggestions that would violate physics, but the point being made was still valid ("With molecular nanotechnology, the AI could (potentially) rewrite the solar system unopposed." was my favorite; it is probably beneficial to separate what is possible and impossible, given things like distances and energy and time, not to mention "why?").

There is an underlying assumption that intelligence can increase without bound. I am by no means sure this is true - I can think of no other trait that does so; you run into limits (again) of physics and energy and so on. It is very possible that things like speed-of-light propagation delay, heat, and the inherent difficulty of certain tasks such as factoring would end up imposing an upper limit on the intelligence of an AI before it reached the w00 w00 god-power magic stage. Not that it matters that much - if its goal is to harm us, you don't need to be too smart to do that...

Anyone thinking an AI might want my body for its atoms is not thinking clearly. I am made primarily of carbon, hydrogen, and oxygen - all are plentiful, in much easier-to-work-with forms, elsewhere. An early-stage AI bootstrapping production would almost certainly want metals, some basic elements like silicon, and hydrocarbons (which we keep handy). Oh, and likely fissionables for power. Not us. Later on, all bets are off, but there are still far better places to get atoms than people.

Finally - the flaw in assuming an AI will predate mind uploading is motivation. Death is a powerful, powerful motivator. A researcher close to being able to do it, about to die, is damn well going to try, no matter what the government says they can or can't do - I would. And the guesses as to the fidelity required are just that - guesses. Life extension is a powerful, powerful draw. Uploading may also ultimately be easier - hand-waving away a ton of details, it's just copying and simulation; it does not require new, creative inventions, just refinements of current ones. You don't need to totally understand how something works to scan and simulate it.

Enough. If you have read this far - more power to you, thank you much for your time.

PS. I still don't get the whole "simulated human civilizations" bit - the paper did not seem to touch on that. But I rather suspect it's the same backwards probability thing...

I have an intellectual issue with using "probably" before an event that has never happened before, in the history of the universe (so far as I can tell).

And - if I am given the choice between slow, steady improvement in the lot of humanity (which seems to be the status quo) and a dice throw that results in either paradise or extinction - I'll stick with slow and steady, thanks, unless the odds were overwhelmingly positive. And I suspect the odds are overwhelming, but in the opposite direction, because there are far more ways to screw up than to succeed, and once the AI is out, you no longer have much chance to change it. I'd prefer to wait it out, slowly refining things, until paradise is assured.

Hmm. That actually brings a thought to mind. If an unfriendly AI was far more likely than a friendly one (as I have just been suggesting) - why aren't we made of computronium? I can think of a few reasons, with no real way to decide. The scary one is "maybe we are, and this evolution thing is the unfriendly part..."

The techniques are useful, in and of themselves, without having to think about utility in creating a friendly AI.

So, yes, by all means, work on better skills.

But - the point I'm trying to make is that while they may help, they are insufficient to provide any real degree of confidence in preventing the creation of an unfriendly AI, because the emergent effects that would likely be responsible for such are not amenable to planning ahead of time.

It seems to me your original proposal is the logical equivalent of "Hey, if we can figure out how to better predict where lightning strikes, we could go there ahead of time and be ready to stop the fires quickly, before they spread." Well, sure - except that sort of prediction would depend on knowing ahead of time the outcome of very unpredictable events ("where, exactly, will the lightning strike?") - and it would be far more practical to spend the time and effort on things like lightning rods and firebreaks.

So - there's probably no good reason for you - as a mind - to care about your genes, unless you have reason to believe they are unique or somehow superior in some way to the rest of the population.

But as a genetic machine, you "should" care deeply, for a very particular definition of "should" - simply because if you do not, and that indifference turns out to be genetically linked, then your genes will indeed die out. The constant urge and competition to reproduce your particular set of genes is what drives evolution (well, that and some other stuff, like mutations). I like what evolution has come up with so far, and so it behooves me to help it along.

On a more practical note - I take a great deal of joy from my kids. I see in them echoes of people who are no longer with us, and it's delightful when they echo back things I have taught them, and even more so when they come up with something totally unexpected. Barring transhumanism, your kids and your influence upon them are one of the only ways to extend your influence past death. My mother died over a decade ago - and I see elements of her personality in my daughters, and it's comforting.

I don't hold a lot of hope for eternal life for myself - I'm 48 and not in the greatest health, and I am not what the people on this board would consider optimistic about technology saving my mentation when my body fails, by any means (and I dearly would love to be wrong, but until that happens, you plan for the worst). But - I think there's a strong possibility my daughters will live forever. And that is extremely comforting. The spectre of death is greatly lessened when you think there is a good chance that things you love will live on after you, remembering, maybe forever.

Exactly. Having a guaranteed low-but-livable-income job as a reward for serving time and not going back is hardly a career path people will aim for - but it might be attractive to someone who is out but sees few alternatives other than going back to a life of crime.

I actually think training and New Deal-type employment guarantees for those in poverty are a good idea aside from the whole prison thing - in that attempts to raise people out of poverty would likely reduce crime to begin with.

The real issue here - running a prison being a profit-making business - has already been pointed out.

Dunning-Kruger - learn it, fear it. So long as you are aware of that effect, and aware of your tendency to arrogance (hardly uncommon, especially among the educated), you are far less likely to have it be a significant issue. Just be vigilant.

I have similar issues - I find it helpful to dive deeply into things I am very inexperienced with, for a while; realizing there are huge branches of knowledge in which you may be no more educated than a 6th grader is humbling, and freeing, and once you are comfortable saying "That? Oh, hell - I don't know much about that, and will never find the time to," you can let it go and relax a bit. Or - I have. (My favorites are microbiology and advanced mathematics. I fancy myself smart, but it is super easy to get so totally in over my head that it may as well be mystic sorcery they're talking about. Humbles you right out.)

Big chunks of this board do that as well, FWIW.


I spent 7 years playing a video game that started to become as important to me as the real world, at least in terms of how it emotionally affected me. If I had spent the 6-ish hours a day, on average, doing something else - well, it makes me vaguely sick to think of the things I might have better spent the time and energy on. Don't get me wrong - it was fun. And I did not sink nearly so low as so many others have, and in the end, when I realized what was going on, I left. I am simply saddened by the opportunity cost. FWIW - this is less about the "virtual" nature of things - I had good, real human beings as friends - and more about not having the presence of mind and fortitude to spend that time, oh, learning an instrument, or developing a difficult skill, or simply doing things in the real world to help society as a whole. I mean - 6 hours a day (average, 7 days a week) for 7 years is what, a doctorate program? Not that I value the paper and all, but the education means something.
