All of timtyler's Comments + Replies

We don't think it has exactly the probability of 0, do we?

It isn't a testable hypothesis. Why would anyone attempt to assign probabilities to it?

private_messaging (10y, +3)
Suppose there's a special kind of stone you can poke with a stick, and it does something, and any time it does something it defies the pattern: you have to add more code to your computable model of it. Meanwhile there's a simple model of those kinds of stones as performing one or another kind of hypercomputation, and while it doesn't let you use a computer to predict anything, it lets you use one such stone to predict another, or lets you use a stone to run certain kinds of software much faster.

edit: to be more specific, let's say we got random-oracle magic stones. That's the weakest kind of magic stone, and it is already pretty cool. Suppose that any time you hit a magic stone, it'll flash or not, randomly. And if you take a stone and break it in two, the binary flash sequences for both halves are always the same. Obvious practical applications (one-time pad). So people say: ohh, those stones are random oracles, and paired stones do not violate locality. After some hypothesis testing, they're reasonably skeptical that you can build an FTL communicator or an uberdense hard drive from those magic stones.

Except a particularly bone-headed AI. Any time it taps a stone, it needs to add one bit to the stone's description. The AI is not entirely stupid - it can use paired stones as a one-time pad too. Except, according to the AI, the first stone to be tapped transmits bits to the second stone. The AI is ever hopeful that it'll build an FTL communicator on those stones one day, testing more and more extravagant theories of how stones communicate. edit: or it has a likewise screwed-up model with incredible data storage density in the stones.
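The paired-stone one-time pad in this thought experiment can be sketched in a few lines. The stone model here is an invented stand-in, of course: each pair of halves is treated as a shared random bitstream.

```python
import secrets

def split_stone(n_bits):
    """Breaking a stone in two yields halves with identical flash sequences.
    Model: one shared random bitstream, observed by both halves."""
    shared = [secrets.randbits(1) for _ in range(n_bits)]
    return shared, list(shared)  # the two halves

def otp(bits, pad):
    """XOR a bit message with a pad; encryption and decryption are the same op."""
    return [b ^ p for b, p in zip(bits, pad)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
half_a, half_b = split_stone(len(message))

ciphertext = otp(message, half_a)    # sender taps their half
recovered = otp(ciphertext, half_b)  # receiver taps the paired half
assert recovered == message
```

Nothing here requires the halves to "transmit" anything to each other - which is the locality point the commenter's skeptics are making.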

Hypercomputation doesn't exist. There's no evidence for it - and nor will there ever be. It's an irrelevance that few care about. Solomonoff induction is right about this.

private_messaging (10y, +2)
We don't think it has exactly the probability of 0, do we? Or that it's totally impossible that the universe is infinite, or that it's truly non-discrete, and so on. A lot of conceptually simple things have no exact representation on a Turing machine, and unduly complicated approximate representations. edit: also, it strikes me as dumb that the Turing machine has an infinite tape, yet it is not possible to make an infinite universe on it with a finite amount of code.
gedymin (10y, +5)
You're right that it probably doesn't exist. You're wrong that no one cares about it. Humans have a long history of caring about things that do not exist. I'm afraid that hypercomputation is one of those concepts that open the floodgates to bad philosophy. My primary point is the confusion about randomness, though. Randomness doesn't seem to be magical, unlike other forms of uncomputability.

Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it's (arguably) not so likely to kill everybody. MIRI appears to be focussing on the "killing everybody case". That is because - according to them - that is a really, really bad outcome.

The idea that losing 99% of humans would be acceptable losses may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.

Right. So, if we are playing the game of giving counter-intuitive technical meanings to ordinary English words, humans have thrived for millions of years - with their "UnFriendly" peers and their "UnFriendly" institutions. Evidently, "Friendliness" is not necessary for human flourishing.

Rob Bensinger (10y, 0)
I agree with this part of Chrysophylax's comment: "It's not necessary when the UnFriendly people are humans using muscle-power weaponry." Humans can be non-Friendly without immediately destroying the planet because humans are a lot weaker than a superintelligence. If you gave a human unlimited power, it would almost certainly make the world vastly worse than it currently is. We should be at least as worried, then, about giving an AGI arbitrarily large amounts of power, until we've figured out reliable ways to safety-proof optimization processes.
Chrysophylax (10y, 0)
It's not necessary when the UnFriendly people are humans using muscle-power weaponry. A superhumanly intelligent self-modifying AGI is a rather different proposition, even with only today's resources available. Given that we have no reason to believe that molecular nanotech isn't possible, an AI that is even slightly UnFriendly might be a disaster.

Consider the situation where the world finds out that DARPA has finished an AI (for example). Would you expect America to release the source code? Given our track record on issues like evolution and whether American citizens need to arm themselves against the US government, how many people would consider it an abomination and/or a threat to their liberty? What would the self-interested response of every dictator (for example, Kim Jong Il's successor) with nuclear weapons be? Even a Friendly AI poses a danger until fighting against it is not only useless but obviously useless, and making an AI Friendly is, as has been explained, really freakin' hard.

I also take issue with the statement that humans have flourished. We spent most of those millions of years being hunter-gatherers. "Nasty, brutish and short" is the phrase that springs to mind.

"8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon"

jsteinhardt (10y, +1)
I don't think you should form your opinion of Anna from this video. It gave me an initially very unfavorable impression that I updated away from after a few in-person conversations. (If you read the other things I write you'll know that I'm nowhere close to a MIRI fanatic, so hopefully the testimonial carries some weight.)
lukeprog (10y, +8)
Pulling this number out of the video and presenting it by itself, as Kruel does, leaves out important context, such as Anna's statement "Don't trust this calculation too much. [There are] many simplifications and estimated figures. But [then] if the issue might be high stakes, recalculate more carefully." (E.g. after purchasing more information.) However, Anna next says: And that is something I definitely disagree with. I don't think the estimate is anywhere near that robust.
V_V (10y, -10)

Nor does the fact that evolution 'failed' in its goals in all the people who voluntarily abstain from reproducing (and didn't, e.g., hugely benefit their siblings' reproductive chances in the process) imply that evolution is too weak and stupid to produce anything interesting or dangerous.

Failure is a necessary part of mapping out the area where success is possible.

Being Friendly is of instrumental value to barely any goals. [...]

This is not really true. See Kropotkin and Margulis on the value of mutualism and cooperation.

Rob Bensinger (10y, +1)
Friendliness is an extremely high bar. Humans are not Friendly, in the FAI sense. Yet humans are mutualist and can cooperate with each other.

Uploads first? It just seems silly to me.

The movie features a Luddite group assassinating machine learning researchers - not a great meme to spread around IMHO :-(

Slightly interestingly, their actions backfire, and they accelerate what they seek to prevent.

Overall, I think I would have preferred Robopocalypse.

One other point I should make: this isn't just about "someone" being wrong. It's about an author frequently cited by people in the LessWrong community on an important issue being wrong.

Not experts on the topic of diet. I associated with members of the Calorie Restriction Society some time ago. Many of them were experts on diet. IIRC, Taubes was generally treated as a low-grade crackpot by those folk: barely better than Atkins.

To learn more about this, see "Scientific Induction in Probabilistic Mathematics", written up by Jeremy Hahn

This line:

Choose a random sentence from S, with the probability that O is chosen proportional to u(O) - 2^-length(O).

...looks like a subtraction operation to the reader. Perhaps use "i.e." instead.
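For clarity, the quoted line apparently intends a product, not a difference: the probability of choosing O is proportional to u(O) multiplied by 2^-length(O). A minimal sketch of that sampling step (the utility function u and the sentence set here are placeholder choices, not from the paper):

```python
import random

def sample_sentence(sentences, u):
    """Sample O with P(O) proportional to u(O) * 2**(-len(O)).
    The dash in the quoted line is punctuation, not subtraction:
    the intended operation is this product."""
    weights = [u(o) * 2.0 ** (-len(o)) for o in sentences]
    return random.choices(sentences, weights=weights, k=1)[0]

S = ["0", "10", "110"]
# With a uniform u, shorter sentences are exponentially favored,
# as in a length-weighted (Occam-style) prior.
pick = sample_sentence(S, u=lambda o: 1.0)
```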

The paper appears to be arguing against the applicability of the universal prior to mathematics.

However, why not just accept the universal prior - and then update on learning the laws of mathematics?

why did you bring up the 'society' topic in the first place?

A society leads to a structure with advantages of power and intelligence over individuals. It means that we'll always be able to restrain agents in test harnesses, for instance. It means that the designers will be smarter than the designed - via collective intelligence. If the designers are smarter than the designed, maybe they'll be able to stop them from wireheading themselves.

If wireheading is plausible, then it's equally plausible given an alien-fearing government, since wireheading

... (read more)

We can model induction in a monistic fashion pretty well - although at the moment the models are somewhat lacking in advanced inductive capacity/compression abilities. The models are good enough to be built and actually work.

Agents wireheading themselves or accidentally performing fatal experiments on themselves will probably be handled in much the same way that biology has handled it to date - e.g. by liberally sprinkling aversive sensors around the creature's brain. The argument that such approaches do not scale up is probably wrong - designers will al... (read more)

Rob Bensinger (10y, +1)
Band-aids as a solution to catastrophes require that we're able to see all the catastrophes coming. Biology doesn't care about letting species evolve to extinction, so it's happy to rely on hacky post-hoc solutions. We do care about whether we go extinct, so we can't just turn random AGIs loose on our world and worry about all the problems after they've arisen.

Odd comment marked in bold. Why do you think that? I'm confused. Doesn't this predict that no undesirable technology will ever be (or has ever been) invented, much less sold?

We can't rely on a superintelligence to provide solutions to problems that need to be solved as a prerequisite to creating an SI it's safe to ask for help on that class of solutions. Not every buck can be passed to the SI.

What about the vision of an agent improving on its design and then creating the new model of itself? Are you claiming that there will never be AIs used to program improved AIs? Because any feasible AI will want to self-replicate? Or because its designers will desire a bunch of copies?

What's the relevant difference between a society of intelligent machines, and a singular intelligent machine with a highly modular reasoning and decision-making architecture? I.e., why did you bring up the 'society' topic in the first place? I'm not seeing it.

If wireheading is plausible, then it's equally plausible given an alien-fearing government, since wireheading the human race needn't get in the way of putting a smart AI in charge of neutralizing potential alien threats. Direct human involvement won't always be a requirement.

Naturalized induction is an open problem in Friendly Artificial Intelligence. The problem, in brief: Our current leading models of induction do not allow reasoners to treat their own computations as processes in the world.

I checked. These models of induction apparently allow reasoners to treat their own computations as modifiable processes:

... (read more)

This was discussed on Facebook. I'll copy-paste the entire conversation here, since it can only be viewed by people who have Facebook accounts.

Kaj Sotala: Re: Cartesian reasoning, it sounds like Orseau & Ring's work on creating a version of the AIXI formalism in which the agent is actually embedded in the world, instead of being separated from it, would be relevant. Is there any particular reason why it hasn't been mentioned?

Tsvi BT: [...] did you look at the paper Kaj posted above?

Luke Muehlhauser: [...] can you state this open problem using the notat... (read more)

Deutsch is interesting. He seems very close to the LW camp, and I think he's someone LWers should at least be familiar with.

Deutsch seems pretty clueless in the section quoted below. I don't see why students should be interested in what he has to say on this topic.

It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techn

... (read more)
Vaniver (10y, +4)
He's clever enough to get a lot of things right, and I think the things that he gets wrong he gets wrong for technical reasons. This means it's relatively quick to dispense with his confusions if you know the right response, but if you can't it points out places you need to shore up your knowledge. (Here I'm using the general you; I'm pretty sure you didn't have any trouble, Tim.) I also think his emphasis on concepts- which seems to be rooted in his choice of epistemology- is a useful reminder of the core difference between AI and AGI, but don't expect it to be novel content for many (instead of just novel emphasis).

There never was a bloggingheads - AFAIK. There is: Yudkowsky vs Hanson on the Intelligence Explosion - Jane Street Debate. However, I'd be surprised if Yudkowsky makes the same silly mistake as Deutsch. Yudkowsky knows some things about machine intelligence.

But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences.

My estimate is 80% prediction, with the rest evaluation and tree pruning.

He also says confusing things about induction being inadequate for creativity which I'm guessing he couldn't support well in this short essay (perhaps he explains better in his books).

He does - but it isn't pretty.

Here is my review of The Beginning of Infinity: Explanations That Transform the World.

I remember Eliezer making the same point in a bloggingheads video with Robin Hanson.

A Hanson/Yudkowsky bloggingheads?!? Methinks you are mistaken.

knb (10y, +2)
I looked it up, and I couldn't find a Hanson/Yudkowsky bloggingheads. I'm not sure if it was taken down or if the video was not done through bloggingheads.

So:

  • What most humans tell you about their goals should be interpreted as public relations material;
  • Most humans are victims of memetic hijacking;

To give an example of a survivalist, here's an individual who proposes that we should be highly prioritizing species-level survival:

As you say, this is not a typical human being - since Nick says he is highly concerned about others.

There are many other survivalists out there, many of whom are much more concerned with personal survival.

If you're dealing with creatures good enough at modeling the world to predict the future and transfer skills, then you're dealing with memetic factors as well as genetic. That's rather beyond the scope of natural selection as typically defined.

What?!? Natural selection applies to both genes and memes.

I suppose there are theoretical situations where that argument wouldn't apply

I don't think you presented a supporting argument. You referenced "typical" definitions of natural selection. I don't know of any definitions that exclude culture. H... (read more)

The question's more about what function's generating the fitness landscape you're looking at (using "fitness" now in the sense of "fitness function"). "Survival" isn't a bad way to characterize that fitness function -- more than adequate for eighth-grade science, for example. But it's a short-term and highly specialized kind of survival [...]

Evolution is only as short-sighted as the creatures that compose its populations. If organisms can do better by predicting the future (and sometimes they can) then the whole process is... (read more)

Nornagest (10y, +2)
If you're dealing with creatures good enough at modeling the world to predict the future and transfer skills, then you're dealing with memetic factors as well as genetic. That's rather beyond the scope of natural selection as typically defined. Granted, I suppose there are theoretical situations where that argument wouldn't apply -- but I'm having trouble imagining an animal smart enough to make decisions based on projected consequences more than one selection round out, but too dumb to talk about it. We ourselves aren't nearly that smart individually.

Even to the extent that natural selection can be said to care about anything, saying that survival is that thing is kind of misleading.

Well, I have gone into more details elsewhere.

It's perfectly normal for populations to hill-climb themselves into a local optimum and then get wiped out when it's invalidated by changing environmental conditions that a more basal but less specialized species would have been able to handle, for example.

Sure. Optimization involves going uphill - but you might be climbing a mountain that is sinking into the sea. How... (read more)
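The local-optimum point under discussion is easy to exhibit with a toy greedy hill-climber; the landscape below is an invented example with a low nearby peak and a much higher distant one.

```python
def hill_climb(f, x0, step=1, iters=100):
    """Greedy local search: move to a neighbor whenever it improves fitness."""
    x = x0
    for _ in range(iters):
        best = max([x - step, x, x + step], key=f)
        if best == x:
            break  # local optimum: no neighbor is better
        x = best
    return x

def fitness(x):
    # A low peak at x=2 (height 5) and a much higher peak at x=20 (height 20),
    # separated by a flat valley the greedy climber cannot cross.
    return max(5 - 2 * abs(x - 2), 20 - 2 * abs(x - 20), 0)

local = hill_climb(fitness, x0=0)
assert local == 2                     # stuck on the nearby low peak
assert fitness(local) < fitness(20)   # the higher peak is never reached
```

If the low peak then "sinks into the sea" (the environment changes), the population parked on it is in trouble - which is the scenario the grandparent comment describes.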

Nornagest (10y, +2)
The question's more about what function's generating the fitness landscape you're looking at (using "fitness" now in the sense of "fitness function"). "Survival" isn't a bad way to characterize that fitness function -- more than adequate for eighth-grade science, for example. But it's a short-term and highly specialized kind of survival, and generalizing from the word's intuitive meaning can really get you into trouble when you start thinking about, for example, death.
[anonymous] (10y, +4)
Don't be silly. Any human who hasn't fallen straight into the Valley of Bad (Pseudo-)Rationality can tell you they've got all kinds of goals other than survival. If you think you don't, I recommend you possibly think about how you're choosing what to eat for breakfast each morning, since I guarantee your morning meals are not survival-optimal.
Nornagest (10y, +2)
Even to the extent that natural selection can be said to care about anything, saying that survival is that thing is kind of misleading. It's perfectly normal for populations to hill-climb themselves into a local optimum and then get wiped out when it's invalidated by changing environmental conditions that a more basal but less specialized species would have been able to handle, for example. (Pandas are a good example, or would be if we didn't think they were cute.)
ialdabaoth (10y, +8)
True, but nature's goals are not our own. The reason sexual reproduction is acceptable is that Nature doesn't care about the outcome, as long as the outcome includes 'be fruitful and multiply'. If we have an agent with its own goals, it will need more robust strategies to avoid its descendants' behaviors falling back to Nature's fundamental Darwinian imperatives.
ialdabaoth (10y, +8)
Aren't these actually the same question? "Exploitation by parasites" is actually a behavior, so it's a subset of the general trust question.

I'm pretty sure that we suck at prediction - compared to evaluation and tree-pruning. Prediction is where our machines need to improve the most.

search is not the same problem as prediction

It is when what you are predicting is the results of a search. Prediction covers searching.

It is interesting that his view of AI is apparently that of a prediction tool [...] rather than of a world optimizer.

If you can predict well enough, you can pass the Turing test - with a little training data.

If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.

Note that humans haven't "taken over the world" in many senses of the phrase. We are massively outnumbered and out-massed by our own symbionts - and by other creatures.

Machine intelligence probably won't be a "secret" technology for long - due to the economic pressure to embed it.

While it's true that things will go faster in the future, that applies about equally to all players - in a phenomenon commonly known as "internet time".

nshepperd (10y, 0)
Don't be obnoxious. I linked to two posts that discuss the issue in depth. There's no need to reduce my comment to one meaningless word.

Doesn't someone have to hit the ball back for it to be "tennis"? If anyone does so, we can then compare reference classes - and see who has the better set. Are you suggesting this sort of thing is not productive? On what grounds?

nshepperd (10y, 0)
Looks like someone already did. And I'm not just suggesting this is not productive, I'm saying it's not productive. My reasoning is standard: see here and also here.
Brian_Tomasik (10y, +1)
If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world. At an object level, if AI research goes secret at some point, it seems unlikely, though not impossible, that if team A develops human-level AGI, then team B will develop super-human-level AGI before team A does. If the research is fully public (which seems dubious but again isn't impossible), then these advantages would be less pronounced, and it might well be that many teams could be in close competition even after human-level AGI. Still, because human-level AGI can be scaled to run very quickly, it seems likely it could bootstrap itself to stay in the lead.

As has been pointed out numerous times on LessWrong, history is not a very good guide for dealing with AI since it is likely to be a singular (if you'll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself [...]

What, a new thinking technology? You can't be serious.

nshepperd (10y, +4)
Yes, let's engage in reference class tennis instead of thinking about object level features.
passive_fist (10y, +3)
As has been pointed out numerous times on LessWrong, history is not a very good guide for dealing with AI since it is likely to be a singular (if you'll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself, and we currently have no information about how it arose (did the first self-replicating molecule lead to all life as we know it? Or were there many competing forms of life, one of which eventually won?)

whoever builds the first AI can take over the world, which makes building AI the ultimate arms race.

As the Wikipedians often say, "citation needed". The first "AI" was built decades ago. It evidently failed to "take over the world". Possibly someday a machine will take over the world - but it may not be the first one built.

Brian_Tomasik (10y, +2)
In the opening sentence I used the (perhaps unwise) abbreviation "artificial general intelligence (AI)" because I meant AGI throughout the piece, but I wanted to be able to say just "AI" for convenience. Maybe I should have said "AGI" instead.

I didn't buy the alleged advantage of a noise-free environment. We've known since von Neumann's paper titled:

PROBABILISTIC LOGICS AND THE SYNTHESIS OF RELIABLE ORGANISMS FROM UNRELIABLE COMPONENTS

...that you can use unreliable computing components to perform reliable computation - with whatever level of precision and reliability that you like.
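Von Neumann's result can be illustrated in miniature with majority voting over replicated unreliable gates - a sketch assuming independent component failures; the gate and the parameters are invented for illustration.

```python
import random

def unreliable_not(bit, error=0.1):
    """A NOT gate that gives the wrong answer with probability `error`."""
    out = 1 - bit
    return out ^ 1 if random.random() < error else out

def redundant_not(bit, copies=101, error=0.1):
    """Run `copies` independent unreliable gates and majority-vote the outputs."""
    votes = sum(unreliable_not(bit, error) for _ in range(copies))
    return 1 if votes * 2 > copies else 0

random.seed(0)
trials = 1000
single_errors = sum(unreliable_not(1) != 0 for _ in range(trials))
voted_errors = sum(redundant_not(1) != 0 for _ in range(trials))
assert voted_errors < single_errors  # redundancy drives the error rate down
```

With a 10% per-gate error rate and 101 copies, the voted output is essentially never wrong; adding more copies pushes the residual error as low as you like, which is the paper's point.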

Plus the costs of attaining global synchrony and determinism are large and massively limit the performance of modern CPU cores. Parallel systems are the only way to attain large computing capacities - and you can'... (read more)

The point I was trying to make was more along the lines that choosing which parameters to model allows you to control the outcome you get. Those who want to recruit people to causes associated with preventing the coming robot apocalypse can selectively include competitive factors, and ignore factors leading to cooperation - in order to obtain their desired outcome.

Today, machines are instrumental in killing lots of people, but many of them also have features like air bags and bumpers, which show that the manufacturers and their customers are interested in... (read more)

People have predicted that corporations will be amoral, ruthless psychopaths too. This is what you get when you leave things like reputations out of your models.

Skimping on safety features can save you money. However, a reputation for privacy breaches, security problems and accidents doesn't do you much good. Why model the first effect while ignoring the second one? Oh yes: the axe that needs grinding.

There are techniques for managing reputation, and those techniques are also amoral. For example, a powerful psychopath caring about his reputation may use legal threats and/or assassination against people who want to report about his evil acts. Alternatively, he may spread false rumors about his competitors. He may pay or manipulate people to create a positive image of him.

Just because the reputation is used, it does not guarantee the results will be moral.

Reputational concerns apply to psychopaths too, and that's why not all of them turn violent. However it doesn't prevent all of them from turning violent.

Bayesian methods certainly require relative parsimony, in the sense that the model complexity needs to be small compared to the quantity of information being modeled.

Not really. Bayesian methods can model random noise. Then the model is of the same size as the data being modeled.
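One way to see the point about noise: a general-purpose compressor - used here as a crude stand-in for minimum model/description size - cannot shrink random data, while structured data collapses to a tiny "model".

```python
import random
import zlib

random.seed(0)
n = 10_000
noise = bytes(random.getrandbits(8) for _ in range(n))  # incompressible
patterned = bytes(i % 7 for i in range(n))              # highly regular

# Compressed length as a rough proxy for the size of the best model.
assert len(zlib.compress(noise, 9)) > 0.9 * n      # model ~ as big as the data
assert len(zlib.compress(patterned, 9)) < 0.1 * n  # model far smaller than data
```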

I often use simple models - because they are less effort to fit and, especially, to understand. But I don't kid myself that they're better than more complicated efforts!

Recommended reading: Boyd and Richerson's Simple Models of Complex Phenomena.

The reason the Solomonoff prior doesn't apply to social sciences is because knowing the area of applicability gives you more information.

That doesn't mean it doesn't apply! "Knowing the area of applicability" is just some information you can update on after starting with a prior.

Losing information isn't a crime. The virtues of simple models go beyond Occam's razor. Often, replacing a complex world with a complex model barely counts as progress - since complex models are hard to use and hard to understand.

Parsimony is good except when it loses information, but if you're losing information you're not being parsimonious correctly.

So: Hamilton's rule is not being parsimonious "correctly"?
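For reference, Hamilton's rule says an altruistic trait is favored when r*b > c: relatedness times benefit to the recipient exceeding cost to the actor. A minimal sketch (the example values are illustrative):

```python
def altruism_favored(r, b, c):
    """Hamilton's rule: an altruistic trait spreads when r*b > c, where
    r = genetic relatedness, b = benefit to recipient, c = cost to actor."""
    return r * b > c

assert altruism_favored(r=0.5, b=3.0, c=1.0)        # full sibling: 1.5 > 1
assert not altruism_favored(r=0.125, b=3.0, c=1.0)  # cousin: 0.375 < 1
```

The rule deliberately discards information (population structure, timing, variance) in exchange for usability - which is exactly the trade-off at issue in this exchange.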

drethelin (10y, 0)
Probably not. I'm not exactly sure what you mean by this question, since I don't fully understand Hamilton's rule, but in general evolutionary stuff only needs to be close enough to correct rather than actually correct.

Shane Legg prepared this graph.

It was enough to convince him that there was some super-exponential synergy.

There's also a broader point to be made about why evolution would've built humans to be able to benefit from better software in the first place, that involves the cognitive niche hypothesis.

I think we understand why humans are built like that. Slow-reproducing organisms often use rapidly-reproducing symbiotes to help them adapt to local environments. Humans using cultural symbionts to adapt to local regions of space-time is a special case of this general principle.

Instead of the cognitive niche, the cultural niche seems more relevant to humans.

ChrisHallquist (10y, 0)
Ah, that's a good way to put it. But it should lead us to question the value of "software" improvements that aren't about being better-adapted to the local environment.

On the other hand, I think the evolutionary heuristic casts doubt on the value of many other proposals for improving rationality. Many such proposals seem like things that, if they worked, humans could have evolved to do already. So why haven't we?

Most such things would have had to evolve by cultural evolution. Organic evolution makes our hardware, cultural evolution makes our software. Rationality is mostly software - evolution can't program such things in at the hardware level very easily.

Cultural evolution has only just got started. Education is sti... (read more)

Furslid (10y, +8)
I think that this is an application of the changing circumstances argument to culture. For most of human history the challenges faced by cultures were along the lines of "How can we keep 90% of the population working hard at agriculture?" "How can we have a military ready to mobilize against threats?" "How can we maintain cultural unity with no printing press or mass media?" and "How can we prevent criminality within our culture?" Individual rationality does not necessarily solve these problems in a pre-industrial society better than blind duty, conformity and superstitious dread. It's been less than 200 years since these problems stopped being the most pressing concerns, so it's not surprising that our culture hasn't evolved to create rational individuals.
ChrisHallquist (10y, 0)
It's been suggested that the Flynn effect is mostly a matter of people learning a kind of abstract reasoning that's useful in the modern world, but wasn't so useful previously. There's also a broader point to be made about why evolution would've built humans to be able to benefit from better software in the first place, that involves the cognitive niche hypothesis. Hmmm... I may need to do a post on the cognitive niche hypothesis at some point.
fubarobfusco (10y, +2)
I thought a lot of that was accounted for by nutrition and other health factors — vaccination and decline in lead exposure come to mind.
torekp (10y, +2)
Cultural evolution is also an answer to

I usually try to avoid the term "moral realism" - due to associated ambiguities - and abuse of the term "realism".

The thesis says:

more or less any level of intelligence could in principle be combined with more or less any final goal.

The "in principle" still allows for the possibility of a naturalistic view of morality grounding moral truths. For example, we could have the concept of: the morality that advanced evolutionary systems tend to converge on - despite the orthogonality thesis.

It doesn't say what is likely to happen. It says what might happen in principle. It's a big difference.

ChrisHallquist (10y, +2)
Note that on Eliezer's view, nothing like "the morality that advanced evolutionary systems tend to converge on" is required for moral realism. Do you think it's required?

We're just saying that AGI is an incredibly powerful weapon, and FAI is incredibly difficult. As for "baseless", well... we've spent hundreds of pages arguing this view, and an even better 400-page summary of the arguments is forthcoming in Bostrom's Superintelligence book.

It's not mudslinging, it's Leo Szilard pointing out that nuclear chain reactions have huge destructive potential even if they could also be useful for power plants.

Machine intelligence is important. Who gets to build it using what methodology is also likely to have a signif... (read more)

It is true that there might not be all that much insight needed to get to AGI on top of the insight needed to build a chimpanzee. The problem that Deutsch is neglecting is that we have no idea about how to build a chimpanzee.

lukeprog (10y, +3)
Oh I see what you mean. Well, I certainly agree with that!