All of TRIZ-Ingenieur's Comments + Replies

After Go, what games should be next for DeepMind?

Why not check out the AGI capabilities of AlphaGo... It might be possible to train chess without architectural modifications. Each chessboard square could be modelled by a 2x2 three-state Go field storing information about the chess piece type. How good can AlphaGo get? How much of its Go playing ability will it lose?
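
A speculative sketch of that encoding idea. The piece-to-pattern mapping below is arbitrary and only shows that a 2x2 block of three-state cells (3^4 = 81 patterns) has room for 6 piece types x 2 colours plus an empty square; it says nothing about how AlphaGo actually represents boards.

```python
# Illustrative encoding: one chess square -> a 2x2 block of three-state Go cells.
EMPTY, BLACK, WHITE = 0, 1, 2

# Arbitrary illustrative patterns, one 2x2 tuple per piece type (white by default).
PIECE_PATTERNS = {
    "P": (WHITE, EMPTY, EMPTY, EMPTY),   # pawn
    "N": (WHITE, WHITE, EMPTY, EMPTY),   # knight
    "B": (WHITE, EMPTY, WHITE, EMPTY),   # bishop
    "R": (WHITE, WHITE, WHITE, EMPTY),   # rook
    "Q": (WHITE, WHITE, WHITE, WHITE),   # queen
    "K": (WHITE, EMPTY, EMPTY, WHITE),   # king
    ".": (EMPTY, EMPTY, EMPTY, EMPTY),   # empty square
}

def encode_square(piece: str, is_white: bool) -> tuple:
    """Return the 2x2 cell pattern for one chess square."""
    pattern = PIECE_PATTERNS[piece]
    if piece != "." and not is_white:
        # Swap stone colour for black pieces.
        pattern = tuple(BLACK if c == WHITE else c for c in pattern)
    return pattern

print(encode_square("Q", is_white=False))  # (1, 1, 1, 1) -> black queen
```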

gjm (6y): This isn't at all the same thing, but it might amuse you: Gess the game [http://www.archim.org.uk/eureka/53/gess.html].
If there IS alien superintelligence in our own galaxy, what could it be like?

Obviously singleton AIs run a high risk of going extinct through low-probability events before they initiate the cosmic endowment. Otherwise we would have found evidence. Given the foom development speed, a singleton AI might decide after a few decades that it does not need human assistance any more. It extinguishes humankind to maximize its resources. Biological life had billions of years to optimize even against the rarest events. A gamma-ray burst or any other stellar event could have killed this singleton AI. The way we are currently designing AI will definitely not lead to a singleton AI that wrestles with its own mind for 10 million years until it decides about the future of humankind.

[Link] Using Stories to Teach Human Values to Artificial Agents

For real story understanding, more complex models than off-the-shelf convolutional deep NNs will be necessary. If these complex network structures were subjected to a traumatic event, the networks would, after some time, appear to work properly as before. But if something triggers the memory of this traumatic event, subnetworks will run wild: their outputs will reach extremes and bias all other subnetworks. These biases could be: everything you observe is the opposite of what you think - you cannot trust your teacher, you cannot trust anybody, everyt... (read more)

The map of global catastrophic risks connected with biological weapons and genetic engineering

All risks from existing viral/bacterial sources have proven to be non-existential for humanity. If the mortality rate is close to 100%, the spread is slowed down because potential disease carriers are killed off. In addition, global measures will prevent mass spreading.

Regarding human- or AI-designed bio-weapons: the longer the incubation period, the more dangerous a bio-weapon will be. To extinguish the entire human race, the incubation time has to be in the range of years, together with an almost 100% successful termination functionality. From observation of th... (read more)

turchin (6y): OK, let me be a devil's advocate. The map is about future possible bioweapons created using genetic engineering, not existing ones. Let's imagine that a rogue country created 100 different pathogens with 50 per cent lethality each, which seems to be possible with current technologies. These pathogens include different variants of flu, smallpox, anthrax and so on, 100 species in total. Then the rogue country sends 200 letters with a mixture of all these pathogens to each large city in the world. The result would be a multipandemic with mortality 1 - 0.5^100 = 0.99999... Such a multipandemic would wipe out most of humanity, and survivors would die of starvation. By playing with incubation periods and different environmental carriers, as well as adding artificial fungal infections, which are known to wipe out species, this rogue country could make such a multipandemic very difficult to stop. The map includes many more ideas on how to make such a multipandemic even stronger. All this means that we should take such a possibility seriously and invest in its prevention.
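
A quick check of the arithmetic above, as a minimal sketch assuming the 100 pathogens act independently and each kills an exposed person with probability 0.5:

```python
# Probability of surviving all 100 independent pathogens, each 50% lethal.
survival = 0.5 ** 100
mortality = 1 - survival
print(survival)    # ~7.9e-31
print(mortality)   # prints 1.0 (the difference is below float precision)
```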
[Link] Using Stories to Teach Human Values to Artificial Agents

We teach children simple morality rules with stories of clearly good and evil behaviour. We protect children from disturbing movies that are not appropriate for their age. Why?

Because children might lose their compass in the world. First they have to build a settled moral compass. Fairy tales are told to widen children's personal experience with examples of good and evil behaviour. When the moral base is settled, children are ready for real-life stories without these black/white distinctions. Children who experience a shocking event that changes... (read more)

Gunnar_Zarncke (6y): I have commented before on the need for something comparable to a caregiver for an AI: http://lesswrong.com/lw/ihx/rationality_quotes_september_2013/9r1f I don't mean that necessarily literally, but in the sense of providing a suitable learning context at the right development phase. Think of training different layers of a NN with differently advanced patterns. I'd like to know in what sense you mean an AI to be traumatized. Getting stuck in a 'bad' local maximum of the search space?
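
A rough sketch of Gunnar_Zarncke's "differently advanced patterns" suggestion, read here as curriculum-style training. The data, the tiny logistic-regression "layer" and all hyperparameters are made up for illustration; this is not how anyone proposes to raise an AGI.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, separation):
    """Two Gaussian classes whose means are `separation` apart."""
    x0 = rng.normal(-separation / 2, 1.0, size=(n, 2))
    x1 = rng.normal(+separation / 2, 1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def train(X, y, w, b, lr=0.1, epochs=200):
    """Plain gradient descent on logistic loss, starting from (w, b)."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = np.zeros(2), 0.0
X_easy, y_easy = make_data(200, separation=6.0)   # clearly separated "fairy-tale" patterns
X_hard, y_hard = make_data(200, separation=1.5)   # ambiguous "real-life" patterns
w, b = train(X_easy, y_easy, w, b)                # phase 1: easy curriculum
w, b = train(X_hard, y_hard, w, b)                # phase 2: continue on harder data

acc = np.mean(((X_hard @ w + b) > 0) == y_hard)
print(f"accuracy on hard data after curriculum: {acc:.2f}")
```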
Yoshua Bengio on AI progress, hype and risks

But people underestimate how much more science needs to be done.

The big thing that is missing is meta-cognitive self-reflection. It might turn out that even today's RNN structures are sufficient and the only missing answer is how to interconnect multi-columnar networks with meta-cognition networks.

it’s probably not going to be useful to build a product tomorrow.

Yes. Given that the architecture is right and capable, little further science is needed to train this AGI. It will learn on its own.

The amount of safety-related research is surely underestimated. Evolu... (read more)

To contribute to AI safety, consider doing AI research

So the AI turns its attention to examining certain blobs of binary code - code composing operating systems, or routers, or DNS services - and then takes over all the poorly defended computers on the Internet. [AI Foom Debate, Eliezer Yudkowsky]

Capturing resource bonanzas might be enough to make an AI go FOOM. It is even more effective if the bonanza is not just a dumb computing resource but also offers useful data, knowledge and AI capabilities.

Therefore attackers (humans, AI-assisted humans, AIs) may want:

  • take over control to use existing capabilities
  • extract
... (read more)
To contribute to AI safety, consider doing AI research

My idea of a regulatory body is not that of a powerful institution that deeply interacts with all ongoing projects, because its members are known to be fallible and could misuse their power.

My idea of a regulatory body is more that of a TÜV interconnected with institutions that do AI safety research and develop safety standards, test methods and test data. Going back to the TÜV's founding task: pressure vessel certification. Any qualified test institution in the world can check whether it is safe to use a given pressure vessel, based on established design tests... (read more)

To contribute to AI safety, consider doing AI research

Do you have any idea how to make development teams invest substantially in safety measures?

Lumifer (6y): To start with you need some sort of a general agreement about what "safety measures" are, and that should properly start with threat analysis. Let me point out that the Skynet/FOOM theory isn't terribly popular in the wide world out there (outside of Hollywood).
To contribute to AI safety, consider doing AI research

Because all regulation does is redistribute power between fallible humans.

Yes. The regulatory body takes power away from the fallible human. If this human teams up with his evil AI, he will become master of the universe - above all of us, including you. The redistribution will take power from the synergetic entity of human and AI, and all human beings on earth will gain power except the few entangled with that AI.

Who is that "we"?

Citizens concerned about possible negative outcomes of the Singularity. Today this "we" is only a sm... (read more)

Lumifer (6y): The "regulatory body" is the same fallible humans. Plus power corrupts. Why wouldn't a "regulatory body" team up with an evil AI? Just to maintain the order, you understand... Colour me sceptical. In fact, I'll just call this hopeful idiocy. In the real world? Do tell.
To contribute to AI safety, consider doing AI research

Why is regulation ungood? I want to understand why other LWers think regulation is not wanted. Safe algorithms can only be evaluated if they are fully disclosed. There are many arguments against regulation - I know:

  • Nobody wants to disclose algorithms and test data.
  • Nobody wants projects being delayed.
  • Nobody wants to pay extra costs for external independent safety certification.
  • Developers do not want to "waste" their time with unproductive side issues.
  • Nobody wants to lose against a non-regulated competitor.
  • Safety concepts are com
... (read more)
Lumifer (6y): Because all regulation does is redistribute power between fallible humans. Who is that "we"? LOL. So, do you think I have problems finding torrents of movies to watch? Why would the politicians need AI professionals when they'll just hijack the process for their own political ends?
To contribute to AI safety, consider doing AI research

What happens inside an AI can hardly be understood, especially if structures get very complex and large. How the system finds solutions is mathematically clear and reproducible, but huge amounts of data make it incomprehensible to human beings. Today's researchers do not really know why a certain net configuration performs better than others. They define a metric to measure total performance - and do trial and error. Algorithms already assist with this: they play around with hyperparameters and see how learning improves. Given that the improvement was a su... (read more)

To contribute to AI safety, consider doing AI research

The recent advances of deep learning projects, combined with easy access to powerful tools like Torch or TensorFlow, might trigger a different path: start-ups will strive for low-hanging fruit. Whoever is fastest gets the whole cake; whoever is second has lost. The results of this were on display at CES: IoT systems full of security holes were pushed into the market. Luckily, AI hardware and software are not yet capable of creating an existential risk. Imagine you are a researcher on a project that turns out to make your bosses billionaires... what are your chances of being heard when you come up with your risk assessment: "Boss, we need 6 extra months to design safeguards..."

Superintelligence 16: Tool AIs

Yes. Tool AIs built solely for AGI safeguarding will become essential for FAI:

AIs can monitor AIs [Stephen Omohundro 2008, 52:45min]

Encapsulated tool AIs will be the building blocks of a safety framework around AGI. Aircraft safety regulations require full redundancy through independently developed control channels from different suppliers, based on separate hardware. If an aircraft fails, a few hundred people die. If the safety control of a highly capable AGI fails, humankind is in danger.
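
A minimal sketch of the redundancy principle described above. The three monitors stand in for independently developed control channels and are deliberately trivial placeholders, not a real safety framework:

```python
from typing import Callable, Sequence

def majority_vote(monitors: Sequence[Callable[[str], bool]], action: str) -> bool:
    """Allow the action only if a majority of independent monitors approve it."""
    approvals = sum(1 for monitor in monitors if monitor(action))
    return approvals > len(monitors) // 2

# Three independently implemented (here: toy) safety checks.
monitor_a = lambda action: "delete" not in action
monitor_b = lambda action: len(action) < 100
monitor_c = lambda action: not action.startswith("sudo")

print(majority_vote([monitor_a, monitor_b, monitor_c], "read sensor data"))   # True
print(majority_vote([monitor_a, monitor_b, monitor_c], "sudo delete logs"))   # False
```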

Superintelligence 16: Tool AIs

Agent, oracle and tool are not clearly differentiated. I question whether we should differentiate these types the way Bostrom does. Katja last week drew a 4-quadrant classification scheme with the dimensions "goal-directedness" and "oversight". Realisations of AI would be classified into sovereign|genie|autonomous tool|oracle(tool) by some arbitrarily defined thresholds.

I love her idea to introduce dimensions, but I think this entire classification scheme is not helpful for our control debate. AI realisations will have a multitude of dimen... (read more)

Superintelligence 15: Oracles, genies and sovereigns

Also in this future, the monitoring software the AI's owner might use would also be near AI level intelligent.

A set of specialized oracles could be used to monitor inputs, internal computations and outputs. One oracle keeps records of every input and output. The question to this oracle is always the same: is the AI lying? Another oracle is tasked with input stream analysis to filter out any taboo chunks. Other oracles can serve to monitor internal thought processes and self-improvement steps.

If these safeguarding oracles are strictly limited in their ca... (read more)
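
A speculative sketch of such an oracle arrangement; all class and method names are invented and the checks are deliberately trivial, only showing how a record keeper and an input filter could wrap a boxed system:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RecordKeeper:
    """Oracle that keeps records of every input and output."""
    log: List[Tuple[str, str]] = field(default_factory=list)

    def record(self, question: str, answer: str) -> None:
        self.log.append((question, answer))

    def is_contradiction(self, question: str, answer: str) -> bool:
        """Very crude lie check: same question, different answer than before."""
        return any(q == question and a != answer for q, a in self.log)

@dataclass
class InputFilter:
    """Oracle that filters taboo chunks out of the input stream."""
    taboo: Tuple[str, ...] = ("operating system internals", "network exploits")

    def allowed(self, question: str) -> bool:
        return not any(t in question.lower() for t in self.taboo)

def monitored_query(ai, question: str, keeper: RecordKeeper, filt: InputFilter) -> str:
    if not filt.allowed(question):
        return "[blocked by input oracle]"
    answer = ai(question)                          # the boxed AI is just a callable here
    if keeper.is_contradiction(question, answer):
        return "[flagged by lie-detection oracle]"
    keeper.record(question, answer)
    return answer

# Usage with a toy "AI":
keeper, filt = RecordKeeper(), InputFilter()
toy_ai = lambda q: "42"
print(monitored_query(toy_ai, "What is 6 x 7?", keeper, filt))
print(monitored_query(toy_ai, "Explain network exploits", keeper, filt))
```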

Superintelligence 14: Motivation selection methods

WBE is not necessarily the starting point for augmentation. A safe AI path should avoid the slippery slope of self-improvement. An engineered AI with years of testing could be a safer starting point for augmentation because its value and safeguard system is traceable - which is impossible for a WBE. Other methods have to be implemented before starting augmentation.

Augmentation starting from a WBE of a decent human character could end in a treacherous turn. We know from brain injuries that character can change dramatically. The extra abilities offered by exten... (read more)

diegocaleiro (7y): The human mind is very sensitive to small modifications in its constituent parts. Up to a third of US inmates have some sort of neurological condition that undermines the functioning of their frontal cortexes or amygdalas. It is of utmost importance to realize how much easier an unpredictable modification would be in a virtual world - both because the simulation may be imperfect, and its imperfections have cumulative effects, and because the augmentation itself changes the Markov network structures in such a way that brain-tumoresque behavior emerges.
Superintelligence 13: Capability control methods

Whistleblowing and self-declarations will not help. Successful FAI development at MIRI will not help either - UFAI will be faster and have more impact. A UFAI explosion can be stopped only at extremely high cost: switching off all computers and networks, and a global blackout for days. Computer hardware worth billions will have to be disposed of. Companies worth trillions will go bankrupt. A global financial depression will last for several years. Millions will die. After this experience the values of "them" and us come closer together and a global regulatory body can be established.

Superintelligence 13: Capability control methods

The taboo on lying is vital for thought monitoring. This taboo covers only the AGI's internal world representation: based on its knowledge, it never lies. By monitoring input and output channels the stunting system can detect lying and apply appropriate stunting measures.

If the stunting system manipulates input channels, memories or output channels, the result will look like lying to an outside observer. The AGI is not capable of telling the truth when the stunting system has removed or manipulated information for safety reasons. The outside observer can chec... (read more)

Superintelligence 13: Capability control methods

Fear is one of the oldest driving forces for keeping away from dangers. Fear is different from negative motivation. Motivation and goals are attractors; fears, bad conscience and prohibitions are repellers. These repellent drives could count as a third pillar of the solution to the control problem.

Superintelligence 13: Capability control methods

The high dimensionality of stunting options makes it easier to find the "right amounts", because we can apply digital stunting measures without the need for fine-tuning based on context. For some contexts stunting applies, for others it does not.

Bostrom lists several stunting means, each of which can include a multitude of inner dimensions (see the configuration sketch after this list):

  • limit intellectual faculties (per capability/skill)
  • limit access to information (per capability/skill)
  • limit processing speed (per capability/skill)
  • limit memory (per capability/skill)
  • limit sensory input channels (stunting/boxing
... (read more)
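
A hypothetical configuration sketch for such per-capability, per-context stunting; the fields and values are invented, and the point is only that stunting can be switched digitally per capability and context rather than fine-tuned globally:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StuntingLimits:
    max_flops: float            # processing-speed limit for this capability
    max_memory_gb: float        # memory limit
    info_access: bool           # may this capability query external information?
    sensory_channels: int       # number of permitted input channels

# Digital (on/off per context) stunting table: (capability, context) -> limits.
STUNTING_POLICY = {
    ("language", "sandbox"):   StuntingLimits(1e12, 64.0, True,  2),
    ("language", "deployed"):  StuntingLimits(1e11, 16.0, False, 1),
    ("planning", "sandbox"):   StuntingLimits(1e12, 64.0, True,  2),
    ("planning", "deployed"):  StuntingLimits(1e10,  4.0, False, 0),
}

def limits_for(capability: str, context: str) -> StuntingLimits:
    return STUNTING_POLICY[(capability, context)]

print(limits_for("planning", "deployed"))
```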
Liso (7y): This could be a bad mix -> our action: 1a) channel manipulation (other sound, other image, other data) & a taboo for the AI: lying. The taboo on "structured programming languages" could be impossible, because understanding and analysing structure is probably an integral part of general intelligence. The AI could not reprogram itself in a lower-level programming language, but it could emulate and improve itself in its "memory". (It could not have access to its code segment but could create a stronger intelligence in its data segment.)
Superintelligence 13: Capability control methods

Boxing and stunting combined can be very effective when an easily controllable weak-AI gatekeeper restricts the information that is allowed to get into the box. If we manage to educate an AI with humanistic experiences and values, without any knowledge of classical programming languages, OSes and hardware engineering, we minimize the risk of escape. For self-improvement we could teach it how to influence and improve cognitive systems like its own. This system should use structures significantly different from any known sequential programming language.

The growing AI will have no idea how our IT infrastructure works and even less how to manipulate it.

Superintelligence 12: Malignant failure modes

The "own best interest" in a winner- takes-all scenario is to create an eternal monopoly on everything. All levels of Maslow's pyramide of human needs will be served by goods and services supplied by this singleton.

Superintelligence 12: Malignant failure modes

With very little experimenting an AGI can instantly find out, given that it has unfalsified knowledge about the laws of physics. For today's virtual worlds: take a second mirror into a bathroom. If you see yourself many times in the mirrored mirror, you are in the real world. Simulated raytracing cancels rays after a finite number of reflections. Other physical phenomena will show similar discrepancies with their simulated counterparts.

An AGI can easily distinguish where it is: it will use its electronic hardware for some experimenting. Similarly, it could be possible to detect a nested simulation.
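
A toy calculation of the mirror test above, with purely illustrative constants for the renderer's bounce cap, the mirror reflectivity, and the eye's visibility threshold:

```python
import math

# A renderer with a hard recursion cap truncates nested reflections at MAX_BOUNCES;
# physical mirrors merely fade, so far more nested images remain visible.
MAX_BOUNCES = 8              # assumed recursion cap of the simulated raytracer
REFLECTIVITY = 0.95          # assumed fraction of light surviving each reflection
VISIBILITY_THRESHOLD = 0.01  # assumed relative brightness still visible to the eye

simulated_images = MAX_BOUNCES
physical_images = math.floor(math.log(VISIBILITY_THRESHOLD) / math.log(REFLECTIVITY))
print(simulated_images, physical_images)   # e.g. 8 vs. ~89 nested reflections
```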

selylindi (7y): That would depend on it knowing what real-world physics to expect.
Superintelligence 12: Malignant failure modes

I fully agree. Resource limitation is a core principle of every purposeful entity. Matter, energy and time never allow maximization. For any project the constraints boil down to: within a fixed time and fiscal budget, the outcome must be of sufficiently high value to enough customers to achieve ROI and make profits soon. A maximizing AGI would never stop optimizing and simulating. No one would pay the electricity bill for such an indecisive maximizer.

Satisficing and heuristics should be our focus. Gerd Gigerenzer (Max Planck/Berlin) published this year his excell... (read more)

Superintelligence 12: Malignant failure modes

A major intelligence agency recently announced that it would replace human administrators with "software". Their job is infrastructure profusion. Government was removed from its controlling post in 2001 at the latest. Competing agencies know that the current development points directly towards AGI that disrespects human property rights - they have to strive for similar technology.

Superintelligence 11: The treacherous turn

Changing one's mind typically happens in an emotional conflict. An AGI might have planned to influence its parent researchers and administrators. The AI pretends to be nice and non-mighty for the time being. Conflicts arise when humans do not do what the AI expects them to do. If the AI is mighty enough, it can drop its concealing behavior and reveal its real nature. This will happen in a sudden flip.

Superintelligence 11: The treacherous turn

No. Openly available knowledge is not enough to obtain a decisive advantage. For this, close cooperation with humans and human-led organizations is absolutely necessary. Trust-building will take years, even for AGIs. In the meantime competing AGIs will appear.

Ben Goertzel does not want to waste time debating any more - he pushes open AGI development to prevent any hardware overhang. Other readers of Bostrom's book might start other projects against singleton AI development. We do not have a ceteris paribus condition - we can shape what the default outcome will be.

artemium (7y): But who are "we"? There are many agents with different motivations doing AI development. I'm afraid that it will be difficult to control each of these agents (companies, governments, militaries, universities, terrorist groups) in the future, and the decreasing cost of technology will only increase the problem over time.
Superintelligence 11: The treacherous turn

To prevent human children from taking a treacherous turn we spend billions: we isolate children from dangers, complexity, perversity, drugs, porn, aggression and depictions of these. Creating a utility function that covers many years of caring social education is AI-complete. A utility function is not enough - we also have to create the opposite: the taboo and fear function.

Superintelligence 11: The treacherous turn

What about hard-wired fears, taboos and bad-conscience triggers? Recapitulating Omohundro's "AIs can monitor AIs": assume we implement conscience as an agent that listens to all thoughts and takes action when needed. For safety reasons we should educate this conscience agent with utmost care. Conscience agent development is an AI-complete problem. After development the conscience functionality must be locked against any kind of modification or disabling.

Liso (7y): Positive emotions are useful too. :)
Superintelligence 10: Instrumentally convergent goals

Your additional instrumental values - spreading one's values and social influence - become very important if we avoid the rise of a decisive-advantage AI.

In a society of AI-enhanced humans and other AGIs, altruism will become an important instrumental value for AGIs. The wide social network will recognize good behavior and anti-social behavior without forgetting. Facebook gives a faint glimpse of what will be possible in the future.

Stephen Omohundro said it in a nutshell: “AIs can control AIs.”

Superintelligence 8: Cognitive superpowers

Your argument that we could be the first intelligent species in our past light-cone is quite weak because of its extreme extent. You put your own argument aside by saying:

We might still run into aliens later ...

A time frame for our discussion covers maybe dozens of millennia, but not millions of years. The Milky Way's diameter is about 100,000 light-years. The Milky Way and its surrounding satellite and dwarf galaxies have a radius of about 900,000 light-years (300 kpc). Our nearest neighbour galaxy, Andromeda, is about 2.5 million light-years away.

If we run int... (read more)

Superintelligence 8: Cognitive superpowers

Yes indeed. Adaptability and intelligence are enabling factors. The human capabilities of making diverse stone tools, clothing and fire were sufficient to settle in other climate zones. Modern humans have many more capabilities: agriculture, transportation, manipulation of physical matter from atomic scales up to earth-spanning infrastructures; control of energies from quantum-mechanical condensation up to fusion-bomb explosions; information storage, communication, computation, simulation and automation up to narrow AI.

Change of human intelligenc... (read more)

Superintelligence 8: Cognitive superpowers

I fully agree with you. We are surely not alone in our galaxy. But I disagree with Bostrom's instability thesis of either extinction or cosmic endowment. This bipolar final outcome is reasonable only if the world is modelled by differential equations, which I doubt. AGI might help us make our world a self-stabilizing, sustainable system. An AGI that follows goals of sustainability is far safer than an AGI striving for cosmic endowment.

Sebastian_Hagen (7y): That is close to the exact opposite of what I wrote; please re-read. There are at least three major issues with this approach, any one of which would make it a bad idea to attempt. 1. Self-sustainability is very likely impossible under our physics. This could be incorrect - there's always a chance our models are missing something crucial - but right now, the laws of thermodynamics strongly point at a world where you need to increase entropy to compute, and so the total extent of your civilization will be limited by how much negentropy you can acquire. 2. If you can find a way to avoid 1., you still risk someone else (read: independently evolved aliens) with a less limited view gobbling up the resources, and then knocking on your door to get yours too. There's some risk of this anyway, but deliberately leaving all these resources lying around means you're not just exposed to greedy aliens in your past, you're also exposed to ones that evolve in the future. The only sufficient response to that would be if you could not just get unlimited computation and storage out of limited material resources, but also get an insurmountable defense to let you keep it against a less restrained attacker. This is looking seriously unlikely! 3. Let's say you get all of these, unlikely though they look right now. OK, so what leaving the resources around does in that scenario is to relinquish any control over what newly evolved aliens get up to. Humanity's history is incredibly brutal and full of evil. The rest of our biosphere most likely has a lot of it, too. Any aliens with similar morals would have been incredibly negligent to simply let things go on naturally for this long. And as for us, with other aliens, it's worse; they're fairly likely to have entirely incompatible value systems, and may very well develop into civilizations that we would consider a blight on our universe.
Liso (7y): When we discussed evil AI [http://lesswrong.com/lw/l0o/superintelligence_reading_group_2_forecasting_ai/bdoa?context=3] I was thinking (and still count it as plausible) about the possibility that self-destruction could be a non-evil act - that the Fermi paradox could be explained as a natural law, i.e. the best moral answer for a superintelligence at some level. Now I am thankful because your comment enlarges the possibilities for thinking about Fermi. We need not think only of self-destruction - we could think of modesty and self-sustainability. Sauron's ring could be superpowerful, but clever Gandalf could (and did!) resist the offer to use it (and used another ring to destroy the strongest one). We could think of hidden places (like Lothlorien or Rivendell) in the universe where clever owners use limited but non-destructive powers.
Superintelligence 8: Cognitive superpowers

Poor contextual reasoning happens many times a day among humans. Our threads are full of it. In many cases the consequences are negligible. If the context is unclear and a phrase can be interpreted one way or the other, no magical wisdom is available:

  • Clarification is existential: ASK
  • Clarification is nice to have: say something that does not reveal that you have no idea what is meant, and try to get the other person to reveal contextual information.
  • Clarification unnecessary or even unintended: stay in the blind or keep the other in the blind.

Correct associ... (read more)

Superintelligence 8: Cognitive superpowers

Let us try to free our minds from associating AGIs with machines. They are totally different from automata. AGIs will be creative, will learn to understand sarcasm, and will understand that women in some situations say no and mean yes.

On your command to add 10 to x, an AGI would reply: "I love working for you! At least once a day you try to fool me - I am not asleep and I know that +100 would be correct. Shall I add 100?"

Liso (7y): Very good! But be honest! Aren't we (sometimes?) more machines that serve genes/instincts than spiritual beings with free will?
Superintelligence 8: Cognitive superpowers

Why not copy the concepts of how children learn ethical codes?

Inherited are: fear of death, blood, disintegration and harm caused by overexcitation of any of the five senses. Aggressive actions of a young child against others will be sanctioned. The learning effect is "I am not alone in this world - whatever I do can turn against me". A short-term benefit might cause overreaction and long-term disadvantages. Simplified ethical codes can be instilled even though a young child cannot yet reason about them.

Children between the ages of 7 and 12 years

... (read more)
Viliam_Bur (7y): Because the AI is not a child [http://lesswrong.com/lw/sp/detached_lever_fallacy/], so doing the same thing would probably give different results. The essence of the problem is that the difference between "interpreting" and "misinterpreting" only exists in the mind of the human. If I as a computer programmer say to a machine "add 10 to X" -- while I really meant "add 100 to X", but made a mistake -- and the machine adds 10 to X, would you call that "misinterpreting" my command? Because such things happen every day with the existing programming languages, so there is nothing strange about expecting a similar thing happening in the future. From the machine point of view, it was asked to "add 10 to X", it added 10 to X, so it works correctly. If the human is frustrated because that's not what they meant, that's bad for the human, but the machine worked correctly according to its inputs. You may be assuming a machine with a magical source of wisdom [http://lesswrong.com/lw/rf/ghosts_in_the_machine/] which could look at command "add 10 to X" and somehow realize that the human would actually want to add 100, and would fix its own program (unless it is passively aggressive and decides to follow the letter of the program anyway). But that's not how machines work.
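
A minimal illustration of Viliam_Bur's point; the variable and values are arbitrary:

```python
# The machine executes the literal instruction; nothing in the program can
# recover the unstated intention that 100 was meant instead of 10.
x = 5
x += 10       # does exactly what was written
print(x)      # 15 - "correct" from the machine's point of view
```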
Superintelligence 8: Cognitive superpowers
  • We need a global charter for AI transparency.
  • We need a globally funded global AI nanny project, as Ben Goertzel suggested.
  • Every AGI project should spend 30% of its budget on the safety and control problem: 2/3 project-related, 1/3 general research.

We must find a way for the financial value created by AI (today narrow AI, tomorrow AGI) to compensate for technology-driven collective redundancies and to support a sustainable economy and social model.

Superintelligence 8: Cognitive superpowers

No. The cosmic endowment and the related calculations do not make any sense to me. If these figures were true, this would tell us that all alien societies in our galaxy went directly extinct. If not, they would have managed the cosmic endowment and we should have found von Neumann probes. We haven't. And we won't.

Instead of speculating about how much energy could be harvested when a sphere of solar cells is constructed around a star, I would love to have found a proper discussion about how our human society could manage the time around crossover.

Sebastian_Hagen (7y): I see no particular reason to assume we can't be the first intelligent species in our past light-cone. Someone has to be (given that we know the number is >0). We've found no significant evidence for intelligent aliens. None of them being there is a simple explanation, it fits the evidence, and if true then indeed the endowment is likely ours for the taking. We might still run into aliens later, and either lose a direct conflict or enter into a stalemate situation, which does decrease the expected yield from the CE. How much it does so is hard to say; we have little data on which to estimate probabilities on alien encounter scenarios.
Superintelligence 8: Cognitive superpowers

Yudkowsky wanted to break it down to the culminating point that a single collaborator is sufficient. For the sake of the argument that is understandable. From the AI's viewpoint it is not rational.

Our supply chains are based on division of labor. A chip fab would not ask what a chip design is good for as long as it knows how to test it. A PCB manufacturer needs test software and specifications. A company specializing in burn-in testing will assemble any arrangement and even connect it to the internet. If an AI arranges generous payments in advance, no one in the supply ch... (read more)

Superintelligence 8: Cognitive superpowers

The capabilities of a Homo sapiens sapiens 20,000 years ago were more chimp-like than comparable to those of a modern internet- and technology-amplified human. Our base human intelligence seems to be only a little above the threshold necessary to develop cultural technologies that allow us to accumulate knowledge over generations. Standardized languages, the invention of writing and further technological developments improved our capabilities far above this threshold. Today children need years until they acquire enough cultural technologies and knowledge to beco... (read more)

SilentCal (7y): I'm not a prehistorian or whatever the relevant field is, but didn't paleolithic humans spread all over the planet in a way chimps completely failed to? Doesn't that indicate some sort of very dramatic adaptability advantage?
Superintelligence 8: Cognitive superpowers

It is less likely that AI algorithms will happen to be especially easy if a lot of different algorithms are needed. Also, if different cognitive skills are developed at somewhat different times, then it's harder to imagine a sudden jump when a fully capable AI suddenly reads the whole internet or becomes a hugely more valuable use for hardware than anything being run already. [...] Overall it seems AI must progress slower if its success is driven by more distinct dedicated skills.

To me the skill set list in table 8 (p. 94) was most interesting. Superintel... (read more)

Superintelligence 7: Decisive strategic advantage

Your argument based on the orthogonality principle is clear to me. But even if the utility function includes human values (fostering humankind, preserving a sustainable habitat on earth for humans, protecting humans against unfriendly AI developments, solving the control problem), strong egoistic traits are needed to remain superior to other upcoming AIs. Ben Goertzel coined the term "global AI Nanny" for a similar concept.

How would we even learn of the existence of a little-interfering FAI singleton?

Do we accept that this FAI wages military war against a sandboxed secret unfriendly AI development project?

Sebastian_Hagen (7y): The AI's values would likely have to be specifically chosen to get this outcome; something like "let human development continue normally, except for blocking existential catastrophes". Something like that won't impact what you're trying to do, unless that involves destroying society or something equally problematic. Above hypothetical singleton AI would end up either sabotaging the project, or containing the resulting AI. It wouldn't have to stop the UFAI before release, necessarily; with enough of a hardware headstart, later safe containment can be guaranteed fine. Either way, the intervention needn't involve attacking humans; interfering with just the AI's hardware can accomplish the same result. And certainly the development project shouldn't get much chance to fight back; terms like "interdiction", "containment", "sabotage", and maybe "police action" (though that one has unfortunate anthropomorphic connotations) are a better fit than "war".
Superintelligence 7: Decisive strategic advantage

Probably not: some nerdy superintelligent AI systems will emerge, but humans will try their utmost to shut them off early enough. Humankind will become very creative at socializing AGI. The highest risk is that a well-funded intelligence agency (e.g. the NSA) will be first. Its AI system could make use of TAO knowledge to kill all competing projects. Being nerdily intelligent, it could even manipulate competing AI projects so that their AIs develop "mental" illnesses. This AI will need quite a long time of learning and trust-building until it could take over world d... (read more)

Superintelligence 7: Decisive strategic advantage

Diametrically opposed to the theft and war scenarios is what you discuss in your paper "Rational Altruist - Why might the future be good?":

How much altruism do we expect?

[...] my median expectation is that the future is much more altruistic than the present.

I fully agree with you, and this aspect is lacking in Bostrom's book. The FOOM-singleton theory intrinsically assumes egoistic AIs.

Altruism is for me one of the core ingredients for sustainably incorporating friendly AIs into society. I support your view that the future will be more altruistic tha... (read more)

Sebastian_Hagen (7y): No, that's wrong. The speed of takeoff is largely a technical question; from a strategic planning POV, going through a rapid takeoff likely makes sense regardless of what your goals are (unless your friendliness design is incomplete /and/ you have corrigibility aspects; but that's a very special case). As for what you do once you're done, that does indeed depend on your goals; but forming a singleton doesn't imply egoism or egocentrism of any kind. Your goals can still be entirely focused on other entities in society; it's just that if you have certain invariants you want to enforce on them (could be anything, really; things like "no murder", "no extensive torture", "no destroying society" would be inoffensive and relevant examples) - or indeed, more generally, certain aspects to optimize for - it helps a lot if you can stay in ultimate control to do these things. As Bostrom explains in his footnotes, there are many kinds of singletons. In general, it simply refers to an entity that has attained and keeps ultimate power in society. How much or how little it uses that power to control any part of the world is independent of that, and some singletons would interfere little with the rest of society.
Superintelligence 7: Decisive strategic advantage

Non-immunity to illnesses is very important to us. Our computer and network infrastructure is more or less immune to script kiddies, polymorphic viruses and standard attack schemes.

Our systems are not immune to tailored attacks from intelligence agencies or AIs.

Superintelligence 7: Decisive strategic advantage

Yes, indeed I am convinced that 30 years of learning is a minimum for running a large company or a government. I compiled data on 155 government leaders from five countries. On average they took office for their first term at the age of 54.3 years.

For my above statement, allow me to subtract 2 standard deviations (2 x 8.5 = 17 years). A government leader is therefore with 97.7% probability older than 37.3 years when he takes office for the first time. The probability of a government leader being younger than 30 years is 0.22%, calculated from the st... (read more)
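
A short check of these figures, assuming first-inauguration ages are normally distributed with mean 54.3 and standard deviation 8.5 years, as stated in the comment:

```python
from statistics import NormalDist

age = NormalDist(mu=54.3, sigma=8.5)

print(1 - age.cdf(54.3 - 2 * 8.5))   # P(older than 37.3) ~ 0.977 (97.7%)
print(age.cdf(30))                   # P(younger than 30) ~ 0.002 (roughly the 0.22% quoted)
```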

Lumifer (7y): Your data shows what typically happens. That's a bit different from what "a human is capable" of. Technically speaking, a single counter-example overturns your claim.
Superintelligence 7: Decisive strategic advantage

Bostrom underestimates the complexity of learning; compare Robin Hanson's criticism of his book, "I Still Don't Get Foom".

Assume the following small-team scenario that could reach a decisive advantage: a hedge fund seeks world dominion and secretly develops a self-improving AI. The following skills shall reach superhuman capability:

  • cyberattack and cryptanalysis
  • semantic comprehension of tech and business documents
  • trading strategy

At the latest when this AI reaches a decisive strategic advantage over other market players, they will acknowledge it instantly. ... (read more)

Lumifer (7y): LOL [http://en.wikipedia.org/wiki/Alexander_the_Great]. Do you really [http://en.wikipedia.org/wiki/Bill_Gates] think so [http://en.wikipedia.org/wiki/Mark_Zuckerberg]?
Superintelligence 6: Intelligence explosion kinetics

The price-performance charts document averaged development results from successive technological S-curves, as documented by Genrich Altshuller, inventor of TRIZ (short article, [Gadd2011]). At the beginning, high investment does not result in a direct effect. The initial slope rises slowly because of poor technological understanding, lacking research results and high recalcitrance. More funding might start alternative S-curves, but these newly developing technologies are still in their infancy and give no immediate results.

Recalcitrance in renewable ener... (read more)

Superintelligence 6: Intelligence explosion kinetics

For the Dornier Do X it took 5 km to reach takeoff speed in 1929. To me, many mental connections match the intended sense:

  • high tech required (aircraft)
  • high power needed to accelerate
  • long distance to reach takeoff speed
  • losing contact with the ground in a soft transition
  • steep rising and gaining further speed
  • switching to rocket drive

... and travel to Mars...

Other suggestions welcome! "Foom" is OK for the closed LW community but looks strange to outsiders.
