User Profile

Recent Posts


[Link] The Bayesian argument against induction. · 7y · 27

Recent Comments

> Mathematics are so firmly grounded in the physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math. This is because math is inextricably tied to reality, not because it is separate from it.

On the other hand...

ht...(read more)

I could add: Objective punishments and rewards need objective justification.

From my perspective, treating rationality as always instrumental, and never a terminal value, is playing around with its traditional meaning. (And indiscriminately teaching instrumental rationality is like indiscriminately handing out weapons. The traditional idea, going back to at least Plato, is t...(read more)

I am aware that humans have a non-zero level of life-threatening behaviour. If we wanted it to be lower, we could make it lower, at the expense of various costs. We don't, which seems to mean we are happy with the current cost-benefit ratio. Arguing, as you have, that the risk of AI self-harm can't be...(read more)

Regarding the anvil problem: you have argued with great thoroughness that one can't perfectly prevent an AIXI from dropping an anvil on its head. However, I can't see the necessity. We would need to get the probability of a dangerously unfriendly SAI as close to zero as possible, because it poses an...(read more)

An entity that has contradictory beliefs will be a poor instrumental rationalist. It looks like you would need to engineer a distinction between instrumental beliefs and terminal beliefs. While we're on the subject, you might need a firewall to stop an AI from acting on intrinsically motivating ideas, ...(read more)

> Software that initially appears to care what you mean will be selected by market forces. But nearly all software that superficially looks Friendly isn't Friendly.

So? Yudkowsky to the rescue, or people get more discerning?

> If there are seasoned AI researchers who can't wrap their heads around ...(read more)

And it is also difficult to mathematically solve morality.

But self-correcting AGIs are still a neglected possibility.

Then a general directive towards friendliness would be needed as well...but I already said that.

> So you have to figure what the heck evolution did, in ways specific enough to program into a computer.

Is that going to be harder than coming up with a mathematical expression of morality and preloading it?

> Humans are made to do that by evolution

Yes. But that doesn't mean it is necessaril...(read more)