

The OP is basically the standard basis of American-style libertarianism.

It doesn't particularly "defy consequentialism" any more than listing the primary precepts of utilitarian consequentialist groups defies deontology.

But I don't think the moral intuitions you list are terribly universal.

The closest parallel I can think of is someone taking contemporary American copyright law and listing its norms as if they were some kind of universally accepted system of morals.

"but you are definitely not allowed to kill one"

Johnny Thousand-Livers is of course an exception.

Or put another way, if you say to most people,

"Ok, so you're in a scenario a little bit like the films Armageddon or Deep Impact. Things have gone wrong, but it's a smaller rock, and all you can do at this point is divert it or not. It's on course for New York City; ten million+ will die. You have the choice to divert it to a sparsely populated area of the Rocky Mountains... but there's at least one person living there."

Most of the people who would normally declare that in the trolley problem, 1 vs 5, it is unethical to throw that one person in front of the trolley... will change their view once the difference in the trade is large enough.

1 vs 5 isn't big enough for them, but the prospect of tens of millions will suddenly turn them into consequentialists.

"You are not required to save a random person"

Also, this is a very non-universal viewpoint. Show people that video of the Chinese toddler being run over repeatedly while people walk past ignoring her cries, and many will declare that the passers-by who ignored the child committed a very clear moral infraction.

"Duty of care" is not popular in American libertarianism, but it and its variations are a common concept in many countries.

The deliberate failure to provide assistance in the event of an accident is a criminal offence in France.

In many countries if you become aware of a child suffering sexual abuse there are explicit duties to report.

And once you accept the fairly commonly held concept of "duty of care", the idea that you actually do have duties to others, the absolutist property stance largely falls apart. It becomes entirely reasonable to require people to give up some fraction of their property to provide care for those around them, just as it's reasonable to expect them to help an injured toddler out of the street, to help the victim of a car accident, or to let the authorities know if they find out that a child is being raped.

"Duty" and similar "social contract" precepts, which imply that you have some positive duties purely by dint of being a human with the capacity to intervene, tend to be rejected by the American libertarian viewpoint, but they are a very common part of the moral intuitions of a large fraction of the world's population.

It's not unlimited, and it tends towards Newtonian Ethics, but moral intuitions aren't known for being perfectly fair.

Yes, our ancestors could not build a nuclear reactor; the Australian natives spent forty thousand years without constructing a bow and arrow. Neither the Australian natives nor anyone else has built a cold fusion reactor. Running halfway doesn't mean you've won the race.

Putting ourselves in the category of "entities who can build anything" is like putting yourself in the category "people who've been on the moon" when you've never actually been to the moon but really, really want to be an astronaut one day. You might even become an astronaut one day, but aspirations don't put you in the category with Armstrong until you actually do the thing.

Your pet collie might dream vaguely of building cars; perhaps in 5,000,000 years its descendants will have self-selected for intelligence and we'll have collie engineers, but that doesn't make it an engineer today.

Currently, by the definition in that book, humans are not universal constructors; at best we might one day become universal constructors, if we don't all get wiped out by something first. It would be nice if we did. But right now we're merely closer to being universal constructors than unusually bright ravens and collies are.

Feelings are not fact. Hopes are not reality.

Assuming that nothing will stop us based on a thin sliver of history is shaky extrapolation:

The Adam and Eve AIs. The pair are designed such that they can automatically generate large numbers of hypotheses, design experiments that could falsify the maximum possible number of hypotheses, and then run those experiments in an automated lab.

Rather than being designed to do X with yeast, it's basically told "go look at yeast"; it then develops hypotheses about yeast and yeast biology, and it successfully re-discovered a number of elements of cell biology. Later iterations were given access to databases of already-known genetic information and discovered new information about a number of genes.
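The experiment-selection step described here can be sketched in miniature: treat each hypothesis as a set of predictions, and pick the experiment that falsifies the most hypotheses even under its least informative outcome. The hypothesis and experiment names below are invented for illustration; this is not the real system's code.

```python
# Toy model of choosing the most discriminating experiment: each
# hypothesis predicts an outcome per experiment, and we pick the
# experiment whose worst-case outcome leaves the fewest hypotheses
# unfalsified. Names here are made up purely for the sketch.
from collections import Counter

hypotheses = {
    "H1": {"knock_out_gene_A": "no_growth",   "add_nutrient_B": "growth"},
    "H2": {"knock_out_gene_A": "growth",      "add_nutrient_B": "growth"},
    "H3": {"knock_out_gene_A": "slow_growth", "add_nutrient_B": "no_growth"},
}

def worst_case_survivors(experiment):
    """Largest group of hypotheses sharing one prediction: that many
    survive no matter which outcome the lab actually observes."""
    counts = Counter(preds[experiment] for preds in hypotheses.values())
    return max(counts.values())

def best_experiment(experiments):
    # Minimising worst-case survivors = maximising guaranteed falsifications.
    return min(experiments, key=worst_case_survivors)

print(best_experiment(["knock_out_gene_A", "add_nutrient_B"]))
# knock_out_gene_A: every one of its outcomes falsifies two of the three
```

The same greedy "split the hypothesis space" idea scales up: with probabilistic predictions it becomes an expected-information-gain calculation, but the worst-case version above is enough to show the shape of it.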

It's a remarkable system and could be extremely useful to scientists in many sectors, but it's a 1.1 on the 1-to-10 scale where 10 is a credible paperclipper or Culture-Mind-style AI.

This AI is not a pianist robot and doesn't play chess, but it has broad potential applications across many areas of science.

It blows a hole in the side of the "Universal Knowledge Creator" idea, since it's a knowledge creator beyond most humans in a number of areas but is never going to be controlling a pianist robot or running a nail salon. The belief that there's some magical UKC line or category (which humans technically don't qualify for yet anyway) is based on literally nothing except feelings; there's not an ounce of logic or evidence behind it.

It's pretty common for groups of people to band together around confused beliefs.

Millions of people have incorrect beliefs about vaccines, millions more are part of new-age groups which have embraced confused and wrong beliefs about quantum physics (often rooted in utterly misunderstanding the term "observer" as used in physics), and millions more have banded together around incorrect beliefs about biology. Are you smarter than all of those people combined? Are you smarter than every single individual in those groups? Probably not, but...

The man who replaced me on the commission said, “That book was approved by sixty-five engineers at the Such-and-such Aircraft Company!”

I didn’t doubt that the company had some pretty good engineers, but to take sixty-five engineers is to take a wide range of ability–and to necessarily include some pretty poor guys! It was once again the problem of averaging the length of the emperor’s nose, or the ratings on a book with nothing between the covers. It would have been far better to have the company decide who their better engineers were, and to have them look at the book. I couldn’t claim that I was smarter than sixty-five other guys–but the average of sixty-five other guys, certainly!

I couldn’t get through to him, and the book was approved by the board.

— from “Surely You’re Joking, Mr. Feynman” (Adventures of a Curious Character)

This again feels like one of those definitions that creeps the second anyone points you to examples.

If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research and cryonics, you'd just declare that those weren't different enough domains because they're all science, and then demand that it also be able to control pianist robots, scuba dive, and run a nail salon.

Nothing to see here everyone.

This is just yet another boring iteration of the forever-shifting goalposts of AI.

First: if I propose that humans can sing any possible song, or that humans are universal jumpers who can jump any height, the burden is not on everyone else to prove that humans cannot, because I'm the one making the absurd proposition.

He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as they currently are cannot construct; as we currently are, we cannot actually arrange atoms arbitrarily to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie can.

He merely guesses that we'll be able to do so in future, or that we'll be able to build something that will be able to build something that will be able to, but that border collies never will. (That is based on little more than faith.)

From this he concludes that we're "universal constructors", despite us quite trivially falling short of the definition of "universal constructor" he proposes.

When you start talking about "reach" you utterly cancel out all the claims made about AI in the OP. Suppose a superhuman AI with a brain the size of a planet, made of pure computation, can just barely comprehend some horribly complex problem, and there's a slim chance that humans might one day build AIs which might build AIs which might build AIs that might build that AI. That doesn't mean that humans have fully comprehended that thing, or could fully comprehend it, any more than slime mould could be said to comprehend the building of a nuclear power station because it could potentially produce offspring which produce offspring which produce offspring... [repeat many times] who could potentially design and build one.

His arguments are full of gaping holes. How does this not jump out at other readers?

This argument seems chosen to make it utterly unfalsifiable.

If someone provides examples of animal X solving novel problems in creative ways, you can just say "that's just the 'some flexibility' bit".

You're describing what's known as general game playing.

You program an AI which will play a set of games, but you don't know in advance what the rules of those games will be: build an AI which can accept a set of rules for a game and then teach itself to play.

This is in fact a field in AI.
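The setup can be sketched with a toy example: the move chooser below knows nothing about any particular game; it is handed a rules object (legal moves, transitions, winner) and rates moves by random self-play (flat Monte Carlo). The Nim rules are just a stand-in for "a set of rules the AI is given"; none of this is the AlphaZero algorithm itself.

```python
# A rule-generic player: it only ever calls legal_moves / play / winner,
# so any game exposing that interface can be plugged in unchanged.
import random

class Nim:
    """Nim variant: take 1-3 stones; whoever takes the last stone wins."""
    def __init__(self, stones=7, player=0):
        self.stones, self.player = stones, player
    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]
    def play(self, n):
        return Nim(self.stones - n, 1 - self.player)
    def winner(self):
        # Once no stones remain, the player who just moved has won.
        return 1 - self.player if self.stones == 0 else None

def choose_move(state, playouts=400, seed=0):
    """Rate each legal move by the win rate of purely random
    continuations starting from it (flat Monte Carlo search)."""
    rng = random.Random(seed)
    def rollout(s):
        while s.winner() is None:
            s = s.play(rng.choice(s.legal_moves()))
        return s.winner()
    me = state.player
    return max(state.legal_moves(),
               key=lambda m: sum(rollout(state.play(m)) == me
                                 for _ in range(playouts)))

print(choose_move(Nim(7)))  # takes 3, leaving the opponent a multiple of 4
```

AlphaZero replaces the random rollouts with a learned evaluation network plus tree search, but the rule-generic shape, an algorithm that accepts the game as input rather than being written for it, is the same.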

Also note the recent news that AlphaGoZero has been converted to AlphaZero, which can handle other games and rapidly taught itself to play chess, shogi, and Go (beating its ancestor AlphaGoZero), hinting that they're generalising it very successfully.

...ok so I don't get to find the arguments out unless I buy a copy of the book?

Right... looking at a pirated copy of the book, the phrase "universal knowledge creator" appears nowhere in it, nor does "knowledge creator".

But let's have a read of the chapter "Artificial Creativity".

A big long spiel about ELIZA being crap, and the same generic qualia arguments as ever.

One minor gem in there for which the author deserves to be commended:

"I have settled on a simple test for judging claims, including Dennett’s, to have explained the nature of consciousness (or any other computational task): if you can’t program it, you haven’t understood it"


A claim that genetic algorithms and similar learning systems aren't really inventing or discovering anything because they reach local maxima, and thus the design is really just coming from the programmer. (Presumably, then, the developers of AlphaGo must be the world's best grandmaster Go players.)
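For what it's worth, the local-maxima behaviour being invoked is real and easy to demonstrate with a minimal greedy search (a stand-in here for a full genetic algorithm) on a two-peaked fitness function; the function and step size below are invented for the sketch:

```python
# A bimodal fitness landscape: local peak at x = -1 (fitness 1),
# global peak at x = 3 (fitness 4).
def fitness(x):
    return max(1 - (x + 1) ** 2, 4 - (x - 3) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    """Greedy local search: move only to a strictly better neighbour."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if fitness(best) <= fitness(x):
            return x  # stuck: no neighbour improves on the current point
        x = best
    return x

# Starting near the lower peak, the search stalls there (fitness ~1)...
print(round(hill_climb(-2.0), 1))
# ...while a different start finds the global peak (fitness ~4).
print(round(hill_climb(2.0), 1))
```

Where the search ends up depends on the landscape and the starting point, not on the programmer already knowing the answer, which is rather the point against "the design really comes from the programmer".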

I see the phrase "universal constructors", where the author claims that human bodies are able to turn anything into anything. This argument appears to rest squarely on the idea that while there may be some things we actually can't do, or ideas we actually can't handle, we should one day be able either to alter ourselves or to build machines (AIs?) that can handle them. Thus we are universal constructors and can do anything.

On a related note, I am in fact an office block: while I may not actually be 12 stories tall and covered in glass, I could in theory build machines which build machines which could be used to build an office block, and thus, by this book's logic, that makes me an office block. From this point forward in the comments we can make arguments based on the assumption that I can contain at least 75 office workers along with their desks and equipment.

The fact that we haven't actually managed to create machines that can turn anything into anything strangely doesn't get a look-in in the argument about why we're currently universal constructors but dolphins are not.

The author brings up the idea of things we may genuinely not be able to understand, and dismisses it with literally nothing except the objection that it claims things could be inexplicable and hence should be dismissed. (On a related note, the president of the Tautology Club is the president of the Tautology Club.)

Summary: I'd give it a C- but upgrade it to a C for being better than the Geocities website selling it.

Also, the book doesn't actually address my objections.

I started this post off trying to be charitable but gradually became less so.

"This means we can create any knowledge which it is possible to create."

Is there any proof that this is true? Anything rigorous? The human mind could have some notable blind spots. For all we know there could be concepts that happen to cause normal human minds to suffer lethal epileptic fits, similar to how certain patterns of flashing light can in some people. Or simple concepts that are incredibly inefficient to encode in a normal human mind but could be easily encoded in a mind of similar scale with a different architecture.

"There is no such thing as a partially universal knowledge creator."

What is this based upon? Some animals can create novel tools to solve problems. Some humans can solve very simple problems but are quickly and utterly stumped beyond a certain point. Dolphins can be demonstrated to form hypotheses and test them, but they stop at simple hypotheses.

Is a human a couple of standard deviations below average, who refuses to entertain hypotheticals, a "universal knowledge creator"? Can the author point to any individuals on the border, or below it, whether due to brain damage or developmental problems?

Just because a Turing machine can in theory run all computable computations, that doesn't mean a given mind can solve every problem that Turing machine could merely because it understands the basics of how a Turing machine works. The programmer is not a super-set of their programs.

"These ideas imply that AI is an all-or-none proposition."

You've not really established that at all; you've simply claimed it with basically no support.

Your arguments seem poorly grounded and poorly supported; simply stating things as if they were fact does not make them so.

"Humans do not use the computational resources of their brains to the maximum."

Interesting claim. So these ruthlessly evolved brains aren't fully used even when our lives and the lives of our progeny are in jeopardy? It would be odd to evolve all that expensive excess capacity and then not use it.

"Critical Rationalism, then, says AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning but only in the same way that humans are able to become smarter"

Ok, here's a challenge. We both set up a chess AI but I get to use the hardware that was recently used to run AlphaZero while you only get to use a 486. We both get to use the same source code. Standard tournament chess rules with time limits.

You seem to be mentally modelling all potential AI as basically just a baby, based on literally nothing whatsoever.

Your TCS link seems to be fluff and buzzwords irrelevant to AI.

"Some reading this will object because CR and TCS are not formal enough — there is not enough maths"

That's an overly charitable way of putting it. Backing up none of your claims and then building a gigantic edifice of argument on thin air is not great for the formal support of something.

"Not yet being able to formalize this knowledge does not reflect on its truth or rigor."

"We have no problem with ideas about the probabilities of events but it is a mistake to assign probabilities to ideas. The reason is that you have no way to know how or if an idea will be refuted in the future. Assigning a probability is to falsely claim some knowledge about that. Furthermore, an idea that is in fact false can have no objective prior probability of being true. The extent to which Bayesian systems work at all is dependent on the extent to which they deal with the objective probability of events (e.g., AlphaGo). In CR, the status of ideas is either "currently not problematic" or "currently problematic", there are no probabilities of ideas. CR is a digital epistemology. "

The space of potentially-true things that are actually completely false is infinite. If you just pick ideas out of the air and don't bother testing them and showing them to be correct, you provide about as much useful insight to those around you as the average screaming madman on the street corner preaching that the Robot Lizardmen are working with the CIA to put radio transmitters in his teeth to hide the truth about 9/11.

Proving your claims to actually be true or to have some meaningful chance of being true matters.
