All of asciilifeform's Comments + Replies

We can temporarily disrupt language processing through magnetically-induced electric currents in the brain. As far as anyone can tell, the study subjects suffer no permanent impairment of any kind. Would you be willing to try an anosognosia version of the experiment?

Perhaps such a test would become part of an objective method to measure rationality.

Would you be willing to show a reference or back-of-the-envelope calculation for this?

The last time I checked, the manufacture of large photovoltaic panels was energy-intensive and low-yield (their current price suggests that these problems persist). They were also rated for a useful life of around two decades.

I do not believe that these problems have been corrected in any panel currently on the market. There is no shortage of vaporware.

Manufacturers hold the details of their processes close to the chest, but you can use their sale price for a back-of-the-envelope upper bound. Searching Google for "photovoltaics", the first price I found was $2.38/W (in one of the ads). Assume the manufacturer only breaks even, and that the entire price was spent on the energy used to manufacture the panel and its materials. Then at 5 cents/kWh (a reasonable industrial price), making the panel couldn't have used more than about 47 kWh/W. If that much energy was used, and the solar panel operated at rated capacity for 8 hours/day, it would take about 16 years to produce the energy used to make it. By comparison, the warranty on the same panel guarantees at least 80% capacity for 25 years.

However, this is a very loose upper bound, in that I assumed the entire purchase price was spent on electricity. That is probably off by an order of magnitude, since the cost of solar cells is dominated by labor, R&D, factory equipment, and profit, not by energy use.
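The bound above can be reproduced in a few lines of Python. The $2.38/W price, 5 cents/kWh rate, and 8 h/day output are the figures assumed in the comment, and are rough assumptions rather than measurements:

```python
# Back-of-the-envelope upper bound on solar-panel energy payback time.
price_per_watt = 2.38      # $/W, first advertised price found
electricity_cost = 0.05    # $/kWh, assumed price of industrial electricity
hours_per_day = 8          # hours/day the panel runs at rated capacity

# If the ENTIRE sale price had gone to electricity for manufacturing:
max_energy_kwh_per_watt = price_per_watt / electricity_cost  # 47.6 kWh per rated watt

# Energy a 1 W panel produces per year at 8 h/day:
annual_output_kwh = hours_per_day * 365 / 1000               # ~2.92 kWh/year

payback_years = max_energy_kwh_per_watt / annual_output_kwh
print(f"{max_energy_kwh_per_watt:.1f} kWh/W, payback {payback_years:.1f} years")
# prints "47.6 kWh/W, payback 16.3 years"
```

Since the real energy share of the price is likely an order of magnitude smaller, the actual payback time would be correspondingly shorter.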

solar panels take more energy to manufacture than they'll produce in their lifetime

Do you mean to say that this is false?

I haven't done the necessary investigation to tell whether or not it's false, although I'm inclined to believe that recent technological advances support jimrandomh's position, but that was aside from the point. I was merely saying that I have heard people argue that every one of the points is a fantasy, and solar energy was one of them. I'm not the one who connected it to gay marriage and evolution, so its inclusion among two things I believe I have enough knowledge to say are not fantasies is not meant to imply an endorsement of solar energy.
It's easy to check: which costs more, a new solar panel or the electricity it's able to produce over its lifetime? If the panel costs less, then the energy used to make it costs less as well.
It is false for all panels on the market today. There may in the past have been solar panels that cost a lot more energy to manufacture and produced a lot less energy, but no one would buy a panel like that anymore.
It doesn't seem like she has a good grasp on what people are doing with Etsy and what it's about. If you want to make a 'profitable' business, you're already looking in the wrong place on Etsy. But if your time isn't worth much and you want to sell some crafts, it seems to work fine.

This comes to mind. The author claims that "the winner was accurate to six decimal places."

Could you give more examples about things you like about Mathematica?

1) Mathematica's programming language does not confine you to a particular style of thinking. If you are a Lisp fancier, you can write entirely Lispy code. Likewise Haskell. There is even a capability for relatively painless dataflow programming.

2) Wolfram Inc. took great pains to make interfacing with the outside world from within the app as seamless as possible. For example, you can suck in a spreadsheet file directly into a multidimensional array. There is import and export capabil... (read more)

intelligence doesn't necessarily have anything to do with our capacity to detect lies

Do you actually believe this?

Yep. Higher intelligence implies a greater capacity to work out the logical consequences of assertions, and thus potentially to detect inconsistencies between two assertions, or between an assertion and an action. It doesn't imply that people will have the drive to look for such contradictions, or that a detected contradiction will be interpreted properly; nor does it imply that intelligence helps in detecting lies that contain no logical contradiction.
Yes. A new fact is much more likely to be wrong or misunderstood than the entirety of your previous experience. Updating is a cumulative process.

I do not know of a working society-wide solution. Establishing research institutes in the tradition of Bell Labs would be a good start, though.

Do you mean that organizations aren't very good at selecting the best person for each job?

Actually, no. What I mean is that human society isn't very good at realizing that it would be in its best interest to assign as many high-IQ persons as possible the job of "being themselves" full-time and freely developing their ideas - without having to justify their short-term benefit.

Hell, forget "as many as possible", we don't even have a Bell Labs any more.

This, I think, is a special case of what I meant. A simple, crude, way to put the general point is that people don't defer enough to those who are smarter. If they did, smart folks would be held in higher esteem by society, and indeed would consequently have greater autonomy.
How should society implement this? I repeat my claim that other personal characteristics are as important as IQ.

How does increasing "the marginal social status payoff from an increase in IQ" help?

The implication may be that persons with high IQ are often prevented from putting it to a meaningful use due to the way societies are structured: a statement I agree with.

Do you mean that organizations aren't very good at selecting the best person for each job? I agree with that statement, but it's about much, much more than IQ. It is a tough nut to crack, but I have given some thought to how we could improve honest signaling of people's skills.

But there is no evidence that any pill can raise the average person's IQ by 10 points

Please read this short review of the state of the art of chemical intelligence enhancement.

We probably cannot reliably guarantee 10 added points for every subject yet. Quite far from it, in fact. But there are some promising leads.

if some simple chemical balance adjustment could have such a dramatic effect on fitness

Others have made these points before, but I will summarize: fitness in a prehistoric environment is a very different thing from fitness in the world of ... (read more)

I will accept that "AGI-now" proponents should carry the blame for a hypothetical Paperclip apocalypse when Friendliness proponents accept similar blame for an Earth-bound humanity flattened by a rogue asteroid (or leveled by any of the various threats a superintelligence - or, say, the output of a purely human AI research community unburdened by Friendliness worries - might be able to counter. I previously gave Orlov's petrocollapse as yet another example.)

I cannot pin down this idea as rigorously as I would like, but there seems to exist such a trait as liking to think abstractly, and that this trait is mostly orthogonal to IQ as we understand it (although a "you must be this tall to ride" effect applies.) With that in mind, I do not think that any but the most outlandishly powerful and at the same time effortless intelligence amplifier will be of much interest to the bulk of the population.

I did not address the issue of actually getting people to take cognitive enhancers in my post. It is a huge can of worms that would take at least a whole post to get into. Let's concentrate on the hypothetical here: IF we could get people to do this, then it would be a good thing.
I think this is called need for cognition. (I first saw this phrase somewhere here on LW.)

ASCII - the onus is on you to give compelling arguments that the risks you are taking are worth it

Status quo bias, anyone?

I presently believe, not without justification, that we are headed for extinction-level disaster as things are; and that not populating the planet with the highest achievable intelligence is in itself an immediate existential risk. In fact, our current existence may well be looked back on as an unthinkably horrifying disaster by a superintelligent race (I'm thinking of Yudkowsky's Super-Happies.)

Since your justification is omitted here, I'll go ahead and suspect it's at least as improbable as this one. The question isn't simply "do we need better technology to mitigate existential risk"; it's "are the odds that technological suppression due to Friendliness concerns wipes us out greater than the corresponding AGI risk". If you assume Friendliness is not a problem, AI is obviously a beneficial development. Is that really the major concern here? All this talk of the benefits of scientific and technological progress seems wasted: take Friendliness out of the picture and I doubt many here would disagree with the general point that progress mitigates long-term risk. So please, be more specific. The argument "lack of progress contributes to existential risk" contains no new information. Either tell us why this risk is far greater than we suspect, or why AGI is less risky than we suspect.

It's highly non-obvious that it would have significant effects

The effects may well be profound if sufficiently increased intelligence will produce changes in an individual's values and goal system, as I suspect it might.

At the risk of "argument from fictional evidence", I would like to bring up Poul Anderson's Brain Wave, an exploration of this idea (among others.)

not quite what I was aiming at

I am curious what you had in mind. Please elaborate.

I had in mind average Joe the truck driver who cannot understand an argument like "Corn ethanol is a bad idea because the energy conversion efficiency of corn plants is extremely low, so the energy output of the process, including all the farming and processing, may be negative", but who instead falls victim to "Corn ethanol is good because you should SUPPORT OUR FARMERS!" You're talking about enhancing the efficiency of the smartest people (like you), I'm talking about enhancing the efficiency of the average person.
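The corn-ethanol point can be made concrete with a toy energy balance. The input figures below are my own illustrative assumptions (the published estimates are contested, and some studies put the balance slightly positive); the point is the method, not the exact numbers:

```python
# Toy net-energy calculation for corn ethanol.
# All figures are rough, illustrative assumptions, not measurements.
ethanol_energy_mj_per_l = 21.2  # approx. lower heating value of ethanol, MJ/liter

# Assumed energy inputs per liter of ethanol produced:
inputs_mj_per_l = {
    "farming": 7.0,      # tractor fuel, irrigation, drying
    "fertilizer": 4.5,   # synthetic nitrogen, from natural gas
    "processing": 10.5,  # fermentation and (mostly) distillation
}

total_input = sum(inputs_mj_per_l.values())  # 22.0 MJ/l
net = ethanol_energy_mj_per_l - total_input
print(f"net energy: {net:.1f} MJ per liter")
# prints "net energy: -0.8 MJ per liter"
```

Under these assumptions the balance is slightly negative, which is the argument Joe the truck driver never hears.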

Software programs for individuals.... prime association formation at a later time.... some short-term memory aid that works better than scratch paper

I have been obsessively researching this idea for several years. One of my conclusions is that an intelligence-amplification tool must be "incestuously" user-modifiable ("turtles all the way down", possessing what programming language designers call reflectivity) in order to be of any profound use, at least to me personally.

Or just biting the bullet and learning Mathematica to an exper

... (read more)
Could you give more examples about things you like about Mathematica? Years ago, I resolved to become an expert at it after reading A New Kind of Science (will you guys forgive me?) and liked it for a while, but then noticed some things were needlessly complicated or refused to spit out the right results (it was a long time ago, so I can't give examples). Btw, I learned about Lisp after Mathematica, and was like, "wow, that must have been where Wolfram got the idea."
Thanks for the motivation, by the way -- I have toyed with the idea of getting Mathematica many times in the past but the $2500 price tag dissuaded me. Now I see that they have a $295 "Home Edition", which is basically the full product for personal use. I bought it last night and started playing with it. Very nifty program.
Cool stuff. Good luck with your research; if you come up with anything that works I'll be in line to be a customer!
Sounds cool, but this is not quite what I was aiming at.

I have located a paper describing Lenat's "Representation Language Language", in which he wrote Eurisko. Since no one has brought it up in this thread, I will assume that it is not well-known, and may be of interest to Eurisko-resurrection enthusiasts. It appears that a somewhat more detailed report on RLL is floating around public archives; I have not yet been able to track down a copy.

Fair enough. It may very well take both.

How are truly fundamental breakthroughs made?

Usually by accident, by one or a few people. This is a fine example.

ought to be more difficult than building an operating system

I personally suspect that the creation of the first artificial mind will be more akin to a mathematician's "aha!" moment than to a vast pyramid-building campaign. This is simply my educated guess, however, and my sole justification for it is that a number of pyramid-style AGI projects of heroic proportions have been attempted and all failed miserably. I disagree with Le... (read more)

"Reversed Stupidity is Not Intelligence." If AGI takes deep insight and a pyramid, then we would expect those projects to fail.

Do you agree that you hold a small minority opinion?

Yes, of course.

Do you have any references where the arguments are spelled out in greater detail?

I was persuaded by the writings of one Dmitry Orlov. His work focuses on the impending collapse of the U.S.A. in particular, but I believe that much of what he wrote is applicable to the modern economy at large.

Please attack my arguments. I truly mean what I say. I can see how you might have read me as a troll, though.

In the next century I think it is unlikely that (1) resource scarcity will dramatically lower economic growth across the world, or that (2) competition for resources will lead to devastating war between major powers, e.g. the U.S. and China, because each country has too much to lose. I believe my opinions are shared by most economists, political scientists, and politicians. Do you agree that you hold a small minority opinion? Do you have any references where the arguments are spelled out in greater detail?

There were some extremely dedicated and obsessive people involved in Traveller, back then

How many of them made use of any kind of computer? How many had any formal knowledge applicable to this kind of optimization?

the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research

Science as priestcraft: a historic dead end, the Pythagoreans and the few genuine finds of the alchemists notwithstanding. I am astounded by the arrogance of people who consider themselves worthy of membership in such a secret club, believing themselves more qualified than "the rabble" to decide the fate of all mankind.

Name three. Not being rhetorical, genuinely curious here.
This argument mixes up the question of the factual utilitarian efficiency of science, the claim of overconfidence in science's efficiency, and a moral judgment about breaking the egalitarian attitude based on said confidence in efficiency. Also, the argument is for some reason about science in general, and not just the controversial claim about hypothetical FAI researchers.

if you are not dead as a result

I am profoundly skeptical of the link between Hard Takeoff and "everybody dies instantly."

ad-hoc tinkering is expected to lead to disaster

This is the assumption which I question. I also question the other major assumption of Friendly AI advocates: that all of their philosophizing and (thankfully half-hearted and ineffective) campaign to prevent the "premature" development of AGI will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.

Ad-... (read more)

To discuss it, you need to address it explicitly. You might want to start from here, here and here.

That's a wrong way to see it: the argument is simply that lack of disaster is better than a disaster (note that the scope of this category is separate from the first issue you raised; that is, if it's shown that ad-hoc AGI is not disastrous, by all means go ahead and do it). Suicide is worse than pending death from "natural" causes. That's all. Whether it's likely that a better way out will be found, or even possible, is almost irrelevant to this position. But we ought to try to do it, even if it seems impossible, even if it is quite improbable.

True, but if you expect a failure to kill civilization, the trial-and-error methodology must be avoided, even if it's otherwise convenient, almost indispensable, and has proven itself over the centuries.

Thank you for the link.

I concede that a post-collapse society might successfully organize and attempt to resurrect civilization. However, what I have read regarding surface-mineral depletion and the mining industry's forced reliance on modern energy sources leads me to believe that if our attempt at civilization sinks, the game may be permanently over.

Why would we need to mine for minerals? It's not as though iron or copper permanently stop being usable as such when they're alloyed into structural steel or semiconductors. The wreckage of an industrial civilization would make better ore than any natural stratum.
Possible, but again your reply doesn't contain an argument, it can't change anyone's beliefs.

I view the teenager's success as simultaneously more probable and more desirable than that of a centralized bureaucracy. I should have made that more clear. And my "goal" in this case is simply the creation of superintelligence. I believe the entire notion of pre-AGI-discovery Friendliness research to be absurd, as I already explained in other comments.

You consider the creation of an unFriendly superinelligence a step on the road to understanding Friendliness?
You are using the wrong terminology here. If the consequences of whatever AGI got developed are seen as positive, if you are not dead as a result, it is already almost FAI; that is how it's defined: that the effect is positive. Deeper questions play on what it means for the effect to be positive, and how one can be wrong about considering a certain effect positive even though it's not, but let's leave that aside for the moment.

If the teenager implemented something that has a good effect, it's FAI. The argument is not that whatever ad-hoc tinkering leads to falls outside some strange concept of "Friendly AI", but that ad-hoc tinkering is expected to lead to disaster, whatever you call it.

The logic of mutually assured destruction would be clear and compelling even to the general public

When was the last time a government polled the general public before plunging the nation into war?

Now that I think about it, the American public, for instance, has already voted for petrowar: with its dollars, by purchasing SUVs and continuing to expand the familiar suburban madness which fuels the cult of the automobile.

I encourage you to write more serious comments... or find some other place to rant.

have 10% of the population do science

Do you actually believe that 10% of the population are capable of doing meaningful science? Or that post-collapse authority figures will see value in anything we would recognize as science?

This addresses the wrong issue: the question I answered was about capability of the pre-industrial society to produce enough surplus for enough people to think professionally, while your nitpick is about a number clearly intended to serve as a feasible upper bound being too high. See also: Least convenient possible world.

we have nuclear, wind, solar and other fossil fuels

Petrocollapse is about more than simply energy. Much of modern industry relies on petrochemical feedstock. This includes the production and recycling of the storage batteries which wind/solar enthusiasts rely on. On top of that, do not forget modern agriculture's non-negotiable dependence on synthetic fertilizers.

Personally I think that the bulk of the coming civilization-demolishing chaos will stem from the inevitable cataclysmic warfare over the last remaining drops of oil, rather than from direct effects of the shortage itself.

You can synthesize petrol from water and CO2 given a large energy input. One way to do this is to first turn water into hydrogen, then heat the hydrogen with CO2 to make alkenes, etc. Chemists, please feel free to correct. But I repeat: when do you think the petrocollapse will be? How soon? When you say ASAP for AGI, we need numbers.
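For a rough sense of scale on that synthesis route, here is a back-of-the-envelope sketch. The per-liter energy content of petrol is a textbook approximation; the two conversion efficiencies are my own assumptions, not figures from the thread:

```python
# Rough energy cost of synthetic liquid fuel made via electrolysis.
petrol_energy_mj_per_l = 34.0    # approx. lower heating value of petrol, MJ/liter
electrolysis_efficiency = 0.70   # electricity -> hydrogen (assumed)
synthesis_efficiency = 0.60      # hydrogen + CO2 -> liquid fuel (assumed)

# Electricity needed to end up with one liter-equivalent of petrol:
electricity_mj = petrol_energy_mj_per_l / (electrolysis_efficiency * synthesis_efficiency)
electricity_kwh = electricity_mj / 3.6   # 1 kWh = 3.6 MJ
print(f"~{electricity_kwh:.0f} kWh of electricity per liter-equivalent of petrol")
# prints "~22 kWh of electricity per liter-equivalent of petrol"
```

At 5 cents/kWh that is on the order of a dollar per liter in electricity alone, before capital costs, which is why the route only matters once cheap oil is gone.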

AGI is a really hard problem

It has successfully resisted solution thus far, but I suspect that it will seem laughably easy in retrospect when it finally falls.

If it ever gets accomplished, it's going to be by a team of geniuses who have been working on the project for years

This is not how truly fundamental breakthroughs are made.

Will they be so immersed in the math that they won't have read the deep philosophical tracts?

Here is where I agree with you - anyone both qualified and motivated to work on AGI will have no time or inclination to pontifi... (read more)

This is not how truly fundamental breakthroughs are made.

Hmm---now that you mention it, I realize my domain knowledge here is weak. How are truly fundamental breakthroughs made? I would guess that it depends on the kind of breakthrough---that there are some things that can be solved by a relatively small number of core insights (think Albert Einstein in the patent office) and some things that are big collective endeavors (think Human Genome Project). I would guess furthermore that in many ways AGI is more like the latter than the former, see below.


... (read more)

Is that your plan against intelligence stagnation?

I'll bet on the bored teenager over a sclerotic NASA-like bureaucracy any day. Especially if a computer is all that's required to play.

This is an answer to a different question. A plan is something implemented to achieve a goal, not something that is just more likely to work (especially against you).

The intro section of my site (Part 1, Part 2) outlines some of my thoughts regarding Engelbartian intelligence amplification. For what I regard as persuasive arguments in favor of the imminence of petrocollapse, I recommend Dmitry Orlov's blog and dead-tree book.

As for my thoughts regarding AGI/FAI, I have not spoken publicly on the issue until yesterday, so there is little to read. My current view is that Friendly AI enthusiasts are doing the equivalent of inventing the circuit breaker before discovering electricity. Yudkowsky stresses the importance of... (read more)

You seem to think an FAI researcher is someone who does not engage in any AGI research. That would certainly be a rather foolish researcher. Perhaps you are being fooled by the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research.

Dying is the default.

I maintain that there will be no FAI without a cobbled-together-ASAP (before petrocollapse) AGI.

You make a lot of big claims in this thread. I'm interested in reading your detailed thoughts on these. Could you please point to some writings?
i.e. you think we can use AGI without a Friendly goal system as a safe tool? If you found Value Is Fragile persuasive, as you say, I take it you then don't believe hard takeoff occurs easily?
but when do you think the petrocollapse is? Personally, I don't think that the end of oil will be so bad; we have nuclear, wind, solar and other fossil fuels. Also, look at the incentives: each country is individually incentivized to develop alternative energy sources.

How is blindly looking for AGI in a vast search space better than stagnation?

No amount of aimless blundering beats deliberate caution and moderation (see 15th century China example) for maintaining technological stagnation.

How does working on FAI qualify as "stagnation"?

It is a distraction from doing things which are actually useful in the creation of our successors.

You are trying to invent the circuit breaker before discovering electricity; the airbag before the horseless carriage. I firmly believe that all of the effort currently put int... (read more)

AGI is a really hard problem. If it ever gets accomplished, it's going to be by a team of geniuses who have been working on the project for years. Will they be so immersed in the math that they won't have read the deep philosophical tracts?---maybe. But your bored teenager scenario makes no sense.
That truly would be a sad day. Are you seriously suggesting hypothetical AGIs built by bored teenagers in basements are "things which are actually useful in the creation of our successors"? Is that your plan against intelligence stagnation?
Earlier: In other words, Friendly AI is an ineffective effort even compared to something entirely hypothetical.
Oh. It might be too late, but as a Russian I feel obliged to warn you: when reading texts written by Russians, try to ignore the charm of darkness and depression. We are experts at this.

How about thinking about ways to enhance human intelligence?

I agree entirely. It is just that I am quite pessimistic about the possibilities in that area. Pharmaceutical neurohacking appears to be capable of at best incremental improvements, often at substantial cost. Our best bet was probably computer-aided intelligence amplification, and it may be a lost dream.

If AGI even borders on being possible with known technology, I would like to build our successor race. Starting from scratch appeals to me greatly.

Dying doesn't appeal to me, hence the desire to build an FAI.

I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes. Barring that, biological doomsday-weapon technology is becoming cheaper and will eventually be accessible to individuals. Unaugmented humans have proven themselves catastrophically stupid as a mass, and continue in behaviors which logically lead to extinction. In the latter I include not only ecological mismanagement but, for example, our continued failure to solve the protein folding problem, to create countermeasures to nuclear weapons, and to create a universal weapon against viruses. Not to mention our failure of the ultimate planetary IQ test - space colonization.

That doesn't make guaranteed destruction any better. It just makes FAI harder, because the time limit is closer. Also, excellent example with the "planetary IQ test" thing.
What convinced you and how convinced are you?
So you, like me are a "Risk transhumanist" - someone who thinks that existential risk motivates the enhancement of the intelligence of those humans who do the substantial information processing in our society (i.e. politicians, economists, scientists, etc). I completely agree with this position. However, creating an uFAI doesn't make things any better. How about thinking about ways to enhance human intelligence?

If humans manage to invent a virus that wipes us out, would you still call that intelligent?

Super-plagues and other doomsday tools are possible with current technology. Effective countermeasures are not. Ergo, we need more intelligence, ASAP.

catastrophic social collapse seems to require something like famine

Not necessarily. When the last petroleum is refined, rest assured that the tanks and warplanes will be the very last vehicles to run out of gas. And bullets will continue to be produced long after it is no longer possible to buy a steel fork.

R&D... efficient services... economy of scale... new technologies will appear

Your belief that something like present technological advancement could continue after a cataclysmic collapse boggles my mind. The one historical precedent we have ... (read more)

The key is that some group would set up some form of government. My best guess is that governments which established rule of law, including respect for private property, would become more powerful relative to other governments. Technological progress would begin again. Also, see what I just wrote to Roko about why resource scarcity is unlikely to be as a great a problem as you think and why wars and famines are unlikely to affect wealthy countries as a result of resource scarcity.
IIRC, Robert Wright argued in his book Nonzero that technological development had stagnated when the Roman Empire reached its apex, and that the Dark Ages actually brought several important innovations. These included better harnesses, better plows, and nailed iron horseshoes, all of which increased agricultural yield. The Dark Ages also saw improvements to water-wheel technology, which led to much wider use of it. He also makes the case that all the fractured polities led to greater innovation in the social and economic spheres as well.
It could - and most probably would - rise up again, eventually. Rising up from the half-buried wreckage of modern civilization is easier than building it from scratch. But I don't go as far as Vladimir and say it's virtually guaranteed. One scenario is that the survivors could all fall to a new religion that preached that technology itself was evil. This religion might suppress technological development for longer than Christianity suppressed it in the dark ages - which was 1000 years. I still think it is likely that technology would eventually make it through, but perhaps it would be used to create a global totalitarian state?

permanently put us back in the stone age

Exactly. The surface-accessible minerals are entirely gone, and pre-modern mining will have no access to what remains. Even meaningful landfill harvesting requires substantial energy and may be beyond the reach of people attempting to "pick up the pieces" of totally collapsed civilization.

I agree with the opinion presented in this comment.
Even thrown back to a stone age, the second arc of development doesn't need to repeat the first. There are plenty of ways to systematically develop technology by other routes, especially if you don't implement mass production for the planetary civilization in the process, working only on improving technology on small scale, up to a point of overcoming the resource problem.

resource depletion (as alluded to by RWallace) is a strong possible threat. But so is a negative singularity.

Resource depletion is as real and immediate as gravity. You can pick up a pencil and draw a line straight through present trends to a horse-and-cart world (or the smoking, depopulated ruins from a cataclysmic resource war.) The negative singularity, on the other hand, is an entirely hypothetical concept. I do not believe the two are at all comparable.

Sure, this is a fair point. I think this is unfair: resource depletion is also a hypothetical concept, because it hasn't happened yet. Both resource depletion and the technological singularity are based upon trend extrapolation and uncertain theoretical arguments. It is also the case that resource depletion is being addressed with billions of dollars of research money.
That is your present position, not a good argument for it. It could be valuable as a voice of dissent, if many other people shared your position but hesitated to voice it. My position, for example, is that resource depletion isn't really an issue, it may only lead to some temporary hardship, but not to something catastrophic and civilization-stopping, while negative AGI is a very real show-stopper. Does my statement change your mind? If not, what's the message of your own statement for people who disagree?

Would you have hidden it?

You cannot hide the truth forever. Nuclear weapons were an inevitable technology. Likewise, whether or not Eurisko was genuine, someone will eventually cobble together an AGI. Especially if Eurisko was genuine, and the task really is that easy. The fact that you seem persuaded of the possibility of Lenat having danced on the edge of creating hard takeoff gives me more interest than ever before in a re-implementation.

Reading "Value is Fragile" almost had me persuaded that blindly pursuing AGI is wrong, but shortly after... (read more)

Would you have hidden it?

I hope so. It was the right decision in hindsight, since the Nazi nuclear weapons program shut down when the Allies, at cost of some civilian lives, destroyed their source of deuterium. If they'd known they could've used purified graphite... well, they probably still wouldn't have gotten nuclear weapons in this Everett branch but they might have somewhere else.

Before 2001 I would probably have been on Fermi's side, but that's when I still believed deep down that no true harm could come to someone who was only faithfully trying... (read more)

How is blindly looking for AGI in a vast search space better than stagnation? How does working on FAI qualify as "stagnation"?
What do you mean by this?

I was going to reply, but it appears that someone has eloquently written the reply for me.

I'd like to take my chances of being cooked vs. a world without furnaces, thermostats or even rooms - something I believe we're headed for by default, in the very near future.

This reminds me of the response I got when I criticized an acquaintance for excessive, reckless speeding: "Life is all about taking risks."

Well, Lenat did. Whether or in what capacity a computer program was involved is an open question.

It's useful evidence that EURISKO was doing something. There were some extremely dedicated and obsessive people involved in Traveller, back then. The idea that someone unused to starship combat design of that type could come and develop fleets that won decisively two years in a row seems very unlikely. It might be that EURISKO acted merely as a generic simulator of strategy and design, and Lenat did all the evaluating, and no one else in the contest had access to simulations of similar utility, which would negate much of the interest in EURISKO, I think.

Aside from that: if I had been following your writings more carefully, I might already have learned the answer to this, but: just why do you prioritize formalizing Friendly AI over achieving AI in the first place?

This was addressed in "Value is Fragile."

It seems to me that if any intelligence, regardless of its origin, is capable of wrenching the universe out of our control, it deserves it.

I don't think you understand the paperclip maximizer scenario. An UnFriendly AI is not necessarily conscious; it's just this device that tiles the light... (read more)

Until Yudkowsky releases the chat transcripts for public review, the AI Box experiment proves nothing.

EURISKO accomplished it in fits and starts

Where is the evidence that EURISKO ever accomplished anything? No one but the author has seen the source code.
