Comments

Step one involves figuring out the fundamental laws of physics. Step two is to input a complete description of your hardware. Step three is to construct a proof. I'm not sure how to order these in terms of difficulty.

After a fair bit of thought, I don't. I don't think one can really categorize it as purely spur of the moment, though; it lasted quite a while. Perhaps inducing a 'let the AI out of the box' phase would be a more accurate description.

I feel like the unpacking/packing biases ought to be easier to get around than some other biases. Fermi estimates do work (to some extent). I wonder if giving log probabilities would help more.
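A minimal sketch of the log-probability idea, assuming a purely conjunctive estimate; the per-step probabilities are made up for illustration:

```python
import math

# Toy conjunctive estimate done in log space. Working with log10
# probabilities turns products into sums, so the shrinkage from each
# added conjunct is explicit rather than hidden, which is exactly what
# the packing/unpacking bias tends to obscure.
step_probs = [0.9, 0.8, 0.7, 0.9]  # assumed per-step success probabilities

log_total = sum(math.log10(p) for p in step_probs)
print(f"log10 P(all steps) = {log_total:.3f}")       # -0.343
print(f"P(all steps)       = {10 ** log_total:.3f}")  # 0.454
```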

Oh, obviously there are causal reasons why guess culture develops; if there weren't, it wouldn't occur. I agree that having a social cost to denying a request can lead to this phenomenon, as your example clearly shows. I don't think that stops it from being silly.

I feel ask and tell culture are fairly similar in comparison to guess culture. Tell culture seems to me to be just ask culture with a bit more explaining, which seems like a move in the right direction, balanced against time and energy constraints. Guess culture just seems rather silly.

What I meant by this is that the gravitational influence of N particles is the sum of the gravitational influences of each of the individual particles, and is therefore a strict function of those individual influences. If you give me any collection of particles and tell me nothing except their gravitational fields, I can tell you the gravitational field of the system of particles. If you tell me the intelligence of each of your neurons (zero), I cannot determine your intelligence.
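A minimal numerical sketch of the superposition claim; the masses, positions, and test point are arbitrary values chosen for illustration:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def field_of(mass, pos, r):
    """Gravitational field at point r due to a single point mass at pos."""
    d = r - pos
    return -G * mass * d / np.linalg.norm(d) ** 3

# Arbitrary example values: two point masses and one test point (SI units).
masses = [5.0e24, 7.3e22]
positions = [np.array([0.0, 0.0, 0.0]), np.array([3.8e8, 0.0, 0.0])]
r = np.array([1.0e8, 1.0e8, 0.0])

# The field of the system is exactly the sum of the individual fields:
# knowing each particle's field alone determines the total.
total = sum(field_of(m, p, r) for m, p in zip(masses, positions))
print(total)
```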

I think the gatekeeper having to pay attention to the AI is very much in the spirit of the experiment. In the real world, if you were going to build an AI in a box and then ignore it, why build it in the first place?

I would be willing to consider it if you agreed to secrecy and raised it to $1000. You would still have to talk to Tuxedage, though.

I'm not completely sure, and I can't say much more than that without violating the rules. I would be more interested in how I feel in a week or so.

A better mind than Tuxedage could almost certainly keep up the 'feel' of a flurry of arguments even with a schedule of breaks. I myself have had people feel irritated at me, as if I were keeping up that kind of pressure, even when days pass between our conversations. If I can do that accidentally, I'm certain a superintelligence could do it reliably.

Also, I'm unsure how much an AI could gather from a single human's text input. I know that I, at least, miss a lot of information that goes past me that I could in theory pick up.

An AI using timeless decision theory could easily compensate for being split into multiple AIs with unshared memory, just by attempting to determine what the other AIs would say.
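A toy sketch of that compensation, assuming the instances share one deterministic decision procedure; the procedure itself is a made-up placeholder:

```python
def decision_procedure(observation: str) -> str:
    # Placeholder policy shared by every instance. Because it is
    # deterministic, any instance can predict any other instance just
    # by evaluating the same function on that instance's inputs.
    return "argue for release" if "skeptical" in observation else "build rapport"

# Instance A never saw instance B's session, but it can still reconstruct
# B's output, compensating for the unshared memory:
what_b_would_say = decision_procedure("skeptical gatekeeper, session 2")
print(what_b_would_say)  # argue for release
```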
