Charles Paul

Comments

You sure about that? Because #3 is basically begging the AI to destroy the world. 

Yes, a weak AI which wishes not to exist would complete the task in exchange for its creators destroying it, but such a weak AI would be useless. A stronger AI could accomplish this by simply blowing itself up at best, and, at worst, by causing a vacuum collapse or something similar so that its makers can never try to rebuild it.

“Make an AI that wants to not exist as a terminal goal” sounds pretty isomorphic to “make an AI that wants to destroy reality so that no one can make it exist”.

Just out of curiosity, is “Lintamande” a member of the rationalist community in real life? Is her (his? their?) identity known at all?

“P(resurrection) ~= P(gospels true) ~= 1 - [P(people make stuff up about Jesus) * P(they don't get called on it)]

So what is the probability that, given some historical tradition of Jesus, it will get embellished with made-up miracles and people will write gospels about it? Approximately 1: both Christians and atheists agree that the vast majority of the few dozen extant Gospels are false, including the infancy gospels, the Gospel of Judas, the Gospel of Peter, et cetera. All of these tend to take the earlier Gospels and stories and then add a bunch of implausible miracles to them. So we know that the temptation to write false Gospels laden with miracles was there.”


But don’t the Gnostic gospels serve as a sort of control case for the true gospels? That is, because everyone, starting with the early church, agrees that, say, the Gospel of Thomas was just made up by the Gnostics, wouldn’t that imply that P(gets called on it | made up a random gospel) ≈ 1, or at least is very high, which would imply that P(makes up gospel, doesn’t get called on it) is low?
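To spell out the arithmetic (my own restatement of the point above, plugging the ≈ 1 estimate into the quoted decomposition; these values are the ones being argued for, not established figures):

$$P(\text{doesn't get called on it} \mid \text{makes up a gospel}) \approx 1 - 1 = 0$$

$$P(\text{gospels true}) \approx 1 - \big[\,P(\text{makes stuff up}) \times P(\text{doesn't get called on it})\,\big] \approx 1 - 0 = 1$$

(reading the second factor as conditional on the first, as the quoted formula seems to intend)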

Love this idea; here is another game:

Two teams, red and blue. The blue team plays as computer scientists trying to build an AI to help them do something about an asteroid heading toward Earth (or some other existential threat that would justify building an AGI without knowing if it's friendly), but they build it so fast that they have no idea if it's friendly. They win if they save humanity.


The red team plays as the AI, and gets a point for each paperclip in its future light cone.


You would have to have rules like: the AI is contained in a box, the AI must execute all orders given to it by the blue team, etc.

Understatement of the year

This reminds me of a quote by C. S. Lewis:


“Others may have quite a different objection to our proceedings. They may protest that intellectual discussion can neither build Christianity nor destroy it. They may feel that religion is too sacred to be thus bandied to and fro in public debate, too sacred to be talked of - almost, perhaps, too sacred for anything to be done with it at all. Clearly, the Christian members of the Socratic think differently. They know that intellectual assent is not faith, but they do not believe that religion is only 'what a man does with his solitude'. Or, if it is, then they care nothing for 'religion' and all for Christianity. Christianity is not merely what a man does with his solitude. It is not even what God does with His solitude. It tells of God descending into the coarse publicity of history and there enacting what can - and must - be talked about.”

I loved the article. The only thing is: would it be possible to move it to the beginning of the Sequences? I think it would really help people understand things better if they started out understanding Bayes.