All of GMHowe's Comments + Replies

Bad intent is a disposition, not a feeling

That may well be true, but I should clarify that neither of my hypotheticals requires or suggests that bad faith communication was more common in the past. They do suggest that assumptions of bad faith may have been significantly more common than actual bad faith, and that this hypersensitivity, while perhaps adaptive in the ancestral environment, may be maladaptive now.

Bad intent is a disposition, not a feeling

It would be surprising, if bad intent were so rare in the relevant sense, that people would be so quick to jump to the conclusion that it is present. Why would that be adaptive?

You may not be wrong, but I don't think it would necessarily be surprising. We adapted under social conditions radically different from those that exist today. It may no longer be adaptive.

Hypothesis: In small tribes and family groups, assumptions of bad faith may have served to help negotiate away from unreasonable positions, while strong familial ties and respected third parties mos... (read more)

0 Benquo 5y: My guess is that our exposure to bad faith communication is more frequent than in the past, rather than less, because of mass media; many more messages we receive are from people who do not expect to have to get along with us in twenty years.
Crony Beliefs

I really liked this post. I thought it was well written and thought-provoking.

I do want to push back a bit on one thing though. You write:

What makes for a crony belief is how we're rewarded for it. And the problem with beliefs about climate change is that we have no way to act on them — by which I mean there are no actions we can take whose payoffs (for us as individuals) depend on whether our beliefs are true or false.

It is true that most of us probably won't take actions whose payoffs depend on beliefs about global warming, but it is not true that th... (read more)

Scott Aaronson: Common knowledge and Aumann's agreement theorem

Maybe I'm confused, but in the 'muddy children puzzle' it seems it would be common knowledge from the start that at least 98 children have muddy foreheads. Each child sees 99 muddy foreheads, so each child can reason that every other child must see at least 98: 100, minus their own forehead (which they cannot see), minus the other child's forehead (which that child cannot see), equals 98.

What am I missing?

1 gjm 6y: Common knowledge means I know, and I know that you know, and I know that you know that he knows, and she knows that I know that you know that he knows, and so on -- any number of iterations. Each child sees 99 muddy foreheads and therefore knows n >= 99. Each child can tell that each other child knows n >= 98. But, e.g., it isn't true that A knows B knows C knows that n >= 98; only that A knows B knows C knows that n >= 97: each link in the chain reduces the number by 1. So for no k > 0 is it common knowledge that n >= k.
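A quick way to see the bookkeeping in gjm's reply is to simulate the nesting depth directly. This is a minimal sketch of my own (not from the thread), assuming 100 muddy children; each level of nested knowledge costs exactly one from the provable lower bound.

```python
# Minimal sketch (my own illustration, not from the thread): with 100
# muddy children, each level of nesting in "A knows that B knows that
# C knows ..." weakens the provable lower bound on the number of muddy
# foreheads by 1, because each child in the chain is uncertain about
# their own forehead.

TOTAL_MUDDY = 100

def nested_bound(depth: int) -> int:
    """Strongest bound of the form n >= k provable at this nesting depth."""
    return TOTAL_MUDDY - depth

for depth in range(1, 6):
    print(f"depth {depth}: n >= {nested_bound(depth)}")
# depth 1: n >= 99  (what each child sees directly)
# depth 2: n >= 98  (what each child knows the others know)
# ...
# By depth 100 the bound falls to n >= 0, so no n >= k with k > 0
# survives arbitrarily deep nesting, i.e. none is common knowledge.
```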
Rationality Quotes Thread August 2015

Desire is a contract you make with yourself to be unhappy until you get what you want.

Naval Ravikant

2 Viliam 6y: Sometimes you are born into an existing contract.
[Link]: The Unreasonable Effectiveness of Recurrent Neural Networks

You can see more results here: Image Annotation Viewer

Judging generously, but based on only about two dozen image captions, I estimate it gives a passably accurate caption about one third of the time. This may be impressive given the simplicity of the model, but it doesn't seem unreasonably effective to me, and I don't immediately see the relevance to strong AI.

Why isn't the following decision theory optimal?

Let's say you precommit to never paying off blackmailers. The advantage is that you are no longer an attractive target for blackmailers, since they will never get paid off. However, if someone blackmails you anyway, your precommitment now puts you at a disadvantage, so now (under NDT) you would act as if you had a precommitment to comply with the blackmailers all along, since at this point that would be the advantageous precommitment to have made.

0 internety 7y: I think my definition of NDT above was worded badly. The problematic part is "if he had previously known he'd be in his current situation." Consider this definition: You should always make the decision that a CDT-agent would have wished he had precommitted to, if he had previously considered the possibility of his current situation and had the opportunity to costlessly precommit to a decision. The key is that the NDT agent isn't behaving as if he knew for sure that he'd end up blackmailed when he made his precommitment (since his precommitment affects the probability of his being blackmailed), but rather he's acting "as if" he precommitted to some behavior based on reasonable estimates of the likelihood of his being blackmailed in various cases.
2 Epictetus 7y: It seems that this is more of a bluff than a true precommitment.
0 DanielLC 7y: But if you pay the blackmailer, then you didn't precommit to not paying him, in which case you'll wish you had, since then you probably wouldn't have been blackmailed. You'll act as if you precommitted if and only if you did not. Perhaps you'd end up precommitting to some probability of paying the blackmailer?
4 VoiceOfRa 7y: The harder part is precisely defining what constitutes blackmail.
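DanielLC's idea of precommitting to some probability of paying can be made concrete with a toy expected-value sketch. The numbers below are illustrative assumptions of mine, not anything from the thread:

```python
# Toy model of a probabilistic precommitment (all figures are
# hypothetical assumptions, not from the thread).

def blackmailer_profit(p_pay: float, demand: float, cost: float) -> float:
    """Blackmailer's expected profit against a victim precommitted to
    paying with probability p_pay, where `cost` covers the blackmailer's
    effort and risk in making the threat."""
    return p_pay * demand - cost

# With a demand of 1000 and a per-attempt cost of 100, any pay
# probability below cost/demand = 0.1 makes blackmail unprofitable
# in expectation, so the victim can sometimes pay without making
# blackmail attempts worthwhile.
for p in (0.0, 0.05, 0.1, 0.5, 1.0):
    print(f"p_pay={p:.2f}: expected profit {blackmailer_profit(p, 1000, 100):+.0f}")
```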
Request for Steelman: Non-correspondence concepts of truth

It's a funny joke, but it's beside the point. Knowing that he is in a balloon about 30 feet above a field is actually very useful; it's just useless to tell him what he clearly already knows.

Open thread, Mar. 16 - Mar. 22, 2015

I recall an SF story that took place on a rotating space station orbiting Earth that had several oddities. The station had greater-than-Earth gravity. Each section was connected to the next by a confusing set of corridors. The protagonist did some experiments draining water out of a large vat and discovered a Coriolis effect.

So, spoiler alert: it turned out that the space station was a colossal fraud. It was actually on a massive centrifuge on Earth.

Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116

Due to the finite speed of sound, the explosion would have had to occur approximately 20 seconds before they heard it. So if Voldemort's death was coincident with the explosion, it would have had to happen about 20 seconds before Harry said it did.

She'd just about decided that this had to all be a prank in unbelievably poor taste, when a distant but sharp CRACK filled the air. [...] "It worked," Harry Potter gasped aloud, "she got him, he's gone." [...] "I think it's in that direction." Harry Potter pointed in the rough dir

... (read more)
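As a sanity check on the 20-second figure (my own arithmetic, not from the thread): sound travels at roughly 343 m/s in air at sea level, so a 20-second delay would put the explosion about 7 km away.

```python
# Sanity check (my own arithmetic; ~343 m/s is the approximate speed
# of sound in air at sea level).
delay_s = 20
speed_of_sound_m_s = 343
print(delay_s * speed_of_sound_m_s)  # 6860 m, i.e. roughly 7 km away
```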
False thermodynamic miracles

Why would it backtrack (and what do you mean by backtrack)? Eventually, it observes that w = false (that "ON" went through unchanged) and that its actions are no longer beneficial, so it just stops doing anything, right? Does the process terminate, or does it go to standby?

I think the presumption is that the case where the "ON" signal goes through normally and the case where the "ON" signal is overwritten by a thermodynamic miracle... into exactly the same "ON" signal are equivalent. That is, after the "ON" sign... (read more)

I tried my hardest to win in an AI box experiment, and I failed. Here are the logs.

I was not aware of Tuxedage's ruleset. However, any ruleset that allows the AI to win without being explicitly released by the gatekeeper is problematic.

If asd had won due to the gatekeeper leaving, it would only have demonstrated that being unpleasant can cause people to disengage from conversation, which is different from demonstrating that it is possible to convince a person to release a potentially dangerous AI.

0 wobster109 7y: I kind of agree upon reflection. Tuxedage's ruleset seems tailored for games where there is money on the line, and in that case it feels very unfair to say GK can leave right away. GK would be heavily incentivized to leave immediately, since that would get GK's charity a guaranteed donation.
I tried my hardest to win in an AI box experiment, and I failed. Here are the logs.

That's not really in the spirit of the experiment. For the AI to win, the gatekeeper must explicitly release the AI. If the gatekeeper fails to abide by the rules, that merely invalidates the experiment.

0 wobster109 7y: In Tuxedage's rule set, if the gatekeeper leaves before 2 hours, it counts as an AI win, so it's a viable strategy. That said, I am sure it would work against some opponents, but my feeling is it would not work against people on Less Wrong. It was a good try though.
3 passive_fist 7y: Not just that, it's a futile strategy, because you just encourage them to look away from the monitor and do nothing for 2 hours (which is entirely fair game).
A List of Nuances

Everything is actually about signalling.

Counterclaim: Not everything is actually about signalling.

Almost everything can be pressed into use as a signal in some way. You can conspicuously overpay for things to signal affluence or good taste or whatever. Or you can put excessive amounts of effort into something to signal commitment or the right stuff or whatever. That almost everything can be used as a signal does not mean that almost everything is being used primarily as a signal all of the time.

Signalling only makes sense in a social environment, so thi... (read more)