james.lucassen

Using Rationality in Mafia

In my experience, trying to apply rationality to hidden-role games such as Mafia tends to break them pretty quickly - not in the sense of making rationalist players extremely powerful, but in the much less fun sense of making the game basically unrecognizable. I played a hidden-role game called Secret Hitler with a group of friends, a few of whom were familiar with some Sequences content, and the meta very quickly converged to a boring fixed point.

The problem is that rationality is all about being asymmetric towards truth, which is great for playing town but terrible for playing mafia. After a couple of games, people start to know when you're town and when you're mafia, because you can't really use rationalist techniques when you're mafia. So then, in the interest of preserving your ability to play mafia, you can't play transparently as town. Behavioral signal fades, optimal strategies become widely known, meaningful choices disappear, and the game gets less interesting.

There can definitely be room for twists and turns (we've had some really clever players dance perfectly around the meta), but it basically becomes a game of trying to guess everyone's Simulacrum Level. Personally, I find the shouting and wild accusations more fun ¯\_(ツ)_/¯

Prefer the British Style of Quotation Mark Punctuation over the American

That's great. If I ever attempt to design my own conlang, I'm using this rule.

Three enigmas at the heart of our reasoning

The first enigma seems either very closely related or identical to Hume's problem of induction. If that is a fair rephrasing, then I don't think the key problem is that empiricism can neither be justified nor refuted by empiricism. Principles like "don't believe in kludgy, unwieldy things" and "empiricism is a good foundation for belief" can in fact be supported by empiricism - those heuristics have worked well in the past, and have helped us build houses and whatnot.

I think the key problem is that empiricism both supports and refutes the claim "I know empiricism works because empirically it's always worked well in the past". This statement is empirically supported because empiricism has worked well in the past, but it's also circular, and circular reasoning has not generally worked well in the past.

This can also be rephrased as a conflict between object-level and meta-level reasoning. On the object level, empiricism supports empiricism. But on the meta level, empiricism rejects circular reasoning.

Taboo "Outside View"

This is great, and feels like a very good catch. I'm attempting to start a comment thread here to do a post-mortem on why this happened and what measures might make this sort of clarity-losing definition drift less likely in the future.

One thing I'm a bit surprised by is that the definition on the tag page for inside/outside view was very clearly the original one, and included a link to the Wikipedia article on reference class forecasting in its second sentence. This suggests that the drifted definition was probably not held as an explicit belief by a large number of highly involved LessWrongers, which in turn makes two mechanisms seem most plausible to me:

  1. Maybe there was a sort of doublethink going on among experienced LW folks, where everyone used "outside view" differently in practice from how they would have explicitly defined it if asked. This would probably be driven mostly by status dynamics, and attempts to fix it would just be a special case of trying to find ways not to create applause lights.
  2. Maybe the mistake was mainly among relatively new/inexperienced LW folks, who tried to infer the definition from context rather than checking the tag page. In that case, fixes would mostly look like making LW discourse more legible to new/inexperienced readers, possibly by making tag definition pages easier to find and click through to, or just reducing the proliferation of jargon.

Against intelligence

I think a lot of this discussion becomes clearer if we taboo "intelligence" and replace it with something like "the ability to search a large pool of strategies and select a highly-ranked option".

  • Agree that the rate-limiting step for a superhuman intelligence trying to affect the world will probably be stuff that does not scale very well with intelligence, like large-scale transport, construction, smelting widgets, etc. However, I'm not sure it would be so severe a limitation as to produce situations like what you describe, where a superhuman intelligence sits around for a month waiting for more niobium. The more strategies you are able to search over, the more likely it is that you'll hit on a faster way of getting niobium.
  • Agree that being able to maneuver in human society and simulate/manipulate humans socially would probably be much harder for a non-human intelligence than other tasks humans might rate as equally difficult, since humans have a bunch of special-purpose machinery for that kind of thing. That said, I'm not convinced it's so hard as to be practically impossible for any non-human to do. The amount of search power it took evolution to find those abilities isn't so staggering that it could never be matched.
  • I'm pretty surprised by the position that "intelligence is [not] incredibly useful for, well, anything". This seems much more extreme than the position that "intelligence won't solve literally everything", and it seems to require an alternative explanation for the success of Homo sapiens.

Thank you for posting this! There's a lot I'm not mentioning, since confirming agreement at every point makes for a lot of comment clutter, but there's plenty to chew on here. In particular, the historical rate of scientific progress seems like a real puzzle that requires some explanation.