timtyler's Comments

An additional problem with Solomonoff induction

We don't think it has exactly the probability of 0, do we?

It isn't a testable hypothesis. Why would anyone attempt to assign probabilities to it?

An additional problem with Solomonoff induction

Hypercomputation doesn't exist. There's no evidence for it, nor will there ever be. It's an irrelevance that few care about. Solomonoff induction is right about this.

A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk

Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it's (arguably) not so likely to kill everybody. MIRI appears to be focusing on the "killing everybody" case. That is because - according to them - that is a really, really bad outcome.

The idea that losing 99% of humans would be acceptable losses may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.

The genie knows, but doesn't care

Right. So, if we are playing the game of giving counter-intuitive technical meanings to ordinary English words, humans have thrived for millions of years - with their "UnFriendly" peers and their "UnFriendly" institutions. Evidently, "Friendliness" is not necessary for human flourishing.

Another Critique of Effective Altruism

"8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon"

The genie knows, but doesn't care

Nor does the fact that evolution 'failed' in its goals in all the people who voluntarily abstain from reproducing (and didn't, e.g., hugely benefit their siblings' reproductive chances in the process) imply that evolution is too weak and stupid to produce anything interesting or dangerous.

Failure is a necessary part of mapping out the area where success is possible.

The genie knows, but doesn't care

Being Friendly is of instrumental value to barely any goals. [...]

This is not really true. See Kropotkin and Margulis on the value of mutualism and cooperation.

A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk

Uploads first? It just seems silly to me.

The movie features a luddite group assassinating machine learning researchers - not a great meme to spread around IMHO :-(

Slightly interestingly, their actions backfire, and they accelerate what they seek to prevent.

Overall, I think I would have preferred Robopocalypse.

Critiquing Gary Taubes, Part 4: What Causes Obesity?

One other point I should make: this isn't just about "someone" being wrong. It's about an author frequently cited by people in the LessWrong community on an important issue being wrong.

Not experts on the topic of diet. I associated with members of the Calorie Restriction Society some time ago. Many of them were experts on diet. IIRC, Taubes was generally treated as a low-grade crackpot by those folk: barely better than Atkins.

Results from MIRI's December workshop

To learn more about this, see "Scientific Induction in Probabilistic Mathematics", written up by Jeremy Hahn

This line:

Choose a random sentence from S, with the probability that O is chosen proportional to u(O) - 2^-length(O).

...reads to the eye as a subtraction operation, when the dash is presumably meant appositively. Perhaps use "i.e." instead.

The paper appears to be arguing against the applicability of the universal prior to mathematics.

However, why not just accept the universal prior - and then update on learning the laws of mathematics?
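The 2^-length(O) weighting quoted above can be sketched concretely. A minimal, hypothetical illustration (the sentences and their u(O) scores below are invented for the example; the real construction in the paper is over encoded sentences, not raw strings):

```python
import random

# Hypothetical sentence -> u(O) plausibility scores (illustrative only).
sentences = {"A": 1.0, "B or C": 0.5, "not D": 0.25}

def weight(sentence, u):
    # Probability proportional to u(O) * 2^-length(O): the dash in the
    # quoted line denotes this product/apposition, not a subtraction.
    # Shorter sentences receive exponentially more prior mass.
    return u * 2.0 ** (-len(sentence))

weights = {s: weight(s, u) for s, u in sentences.items()}
total = sum(weights.values())
probs = {s: w / total for s, w in weights.items()}

def sample():
    # Draw one sentence with probability proportional to its weight.
    r = random.random() * total
    for s, w in weights.items():
        r -= w
        if r <= 0:
            return s
    return s  # guard against floating-point round-off
```

Under this toy weighting, the one-character sentence "A" dominates the distribution, which is the intended Occam-style behaviour.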
