Kenny's Comments

Book Review: Design Principles of Biological Circuits

Thanks for this post!

I was excited to read the book reviewed just based on the first few sentences!

Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons.

Specialized cabins seem like they would hurt this idea – where would people store all of their cabins?

When do you start looking for a Boston apartment?

I'm really confused. I'm used to the NYC rental market, particularly Brooklyn, and, aside from lining up apartment-mates, the rule is that you look for a new apartment right before you're ready to move. I can't even remember seeing apartments listed for rent months in advance, tho I wouldn't be entirely surprised if that happens, e.g. for students.

Where are you getting your listings and how can you tell when the lease is intended or expected to start from the listing?

Maybe Lying Doesn't Exist

This is a great post! A lot of these points have been addressed, but this is what I wrote while reading this post:

It's not immediately clear that an 'appeal to consequences' is wrong or inappropriate in this case. Scott was explicitly considering the policy of expanding the definition of a word, not just which definition is better.

If the (chief) purpose of 'categories' (i.e. words) is to describe reality, then we should only ever invent new words, not modify existing ones. Changing words seems like a strict loss of information.

It also seems pretty evident that there are ulterior motives (e.g. political ones) behind overt or covert attempts to change the common shared meaning of a word. It's certainly appropriate to object to those motives, and to object to the consequences of the desired changes with respect to those motives. One common motive for such changes seems to be to exploit the current valence or 'mood' of a word and use it against people that would otherwise be immune under its current meaning.

Some category boundaries should reflect our psychology and the history of our ideas in the local 'category space', and not be constantly revised to be better Bayesian categories. For one, it doesn't seem likely that Bayesian rationalists will be deciding the optimal category boundaries of words anytime soon.

But if the word "lying" is to actually mean something rather than just being a weapon, then the ingroup and the outgroup can't both be right.

This is confusing in the sense that it's obviously wrong but I suspect intended in a much more narrow sense. It's a demonstrated fact that people assign different meanings to the 'same words'. Besides otherwise unrelated homonyms, there's no single unique global community of language users where every word means the same thing for all users. That doesn't imply that words with multiple meanings don't "mean something".

Given my current beliefs about the psychology of deception, I find myself inclined to reach for words like "motivated", "misleading", "distorted", &c., and am more likely to frown at uses of "lie", "fraud", "scam", &c. where intent is hard to establish. But even while frowning internally, I want to avoid tone-policing people whose word-choice procedures are calibrated differently from mine when I think I understand the structure-in-the-world they're trying to point to.

You're a filthy fucking liar and you've twisted Scott Alexander's words while knowingly ignoring his larger point; and under cover of valuing 'epistemic rationality' while leveraging your privileged command of your cult's cant.

[The above is my satire against being against tone policing. It's not possible to maintain valuable communication among a group of people without policing tone. In particular, LessWrong is great in part because of its tone.]

Maybe Lying Doesn't Exist

This is a bad example, because whether something is a crime is, in fact, fully determined by whether “we” (in the sense of “we, as a society, expressing our will through legislation, etc.”) decide to label it a ‘crime’.

I think it's still a good example, perhaps because of what you pointed out. It seems pretty clear to me that there's a sometimes significant difference between the legal and colloquial meanings of 'crime' and even bigger differences for 'criminal'.

There are many legal 'crimes' that most people would not describe as such and vice versa. "It's a crime!" is inevitably ambiguous.

Maybe Lying Doesn't Exist

It's important to be very clear on what actually happened (incl. about violations), AND to avoid punishing people. Truth and reconciliation.

I think this is a very much underrated avenue to improve lots of things. I'm a little sad at the thought that neither is likely without the looming threat of possible punishment.

Maybe Lying Doesn't Exist

I think we, and others too, are already constructing rules, tho not as a single grand taxonomy completed as a single grand project, but piecemeal, e.g. like common law.

There have been recent shifts in ideas about what counts as 'epistemically negligent' [and that's a great phrase by the way!], at least among some groups of people with which I'm familiar. I think the people of this site, and the greater diaspora, have much more stringent standards today in this area.

Link: An exercise: meta-rational phenomena | Meaningness

I think that links to Chapman's texts should contain some disclaimer that "rationality" as defined by Chapman is something completely different from "rationality" as defined by Less Wrong.

I am of many minds about this. Sometimes I feel as you've expressed: that Chapman undersells 'rationality' and misrepresents its possibilities. Certainly LW!rationality is (mostly) aware of his specific criticisms. But I still find his writing immensely insightful as-is. And given that his audience is very different from LW's, I'm inclined to accept his writing as-is too.

As for him using 'rationality' differently – that general phenomenon (of words being used differently by different people) is something that I'm all too aware of, among all the things I read and all the conversations I have. I certainly don't find his writing as painful to read as others do.

And maybe we should add disclaimers to all of our pages pointing out that our use of 'rationality' is idiosyncratic (with respect to everyone else in the world). I don't think there's a good solution to this.

I agree that "The LW!rationality already contains its own meta." but I think Chapman has a point that meta-rationality is something distinct from ('regular') rationality. Hence the utility of a lot of the advice that both Chapman and the LW sequence writers provide.

Chapman warns people against going from straw rationalism to nihilism (unless they accept the Buddhism-inspired wisdom). But I don't see nihilism promoted on Less Wrong. We have "something to protect". And the stories of "beisutsukai" are obviously written to inspire.

Maybe that's missing from LW? I agree that LW doesn't promote nihilism, but maybe it should do more to help otherwise-intelligent people avoid it.

And more generally, (intelligent) people really do get stuck at "straw rationality" ("level 4"), i.e. 'trapped' in the specific formalisms of which they're aware and in which they can 'operate'. We don't worship science, but lots of other people sure seem to do so.

I think the best 'trick' LW!rationality incorporated into its 'canon' is the idea of instrumental rationality. Coupled with a consequentialism scoped to our 'entire future light cone', that idea alone acts like a source of intellectual free energy capable of pushing us out of any particular formalism when (we suspect) it's not good enough for our purposes. But it's not clear, to me anyways, that that itself is 'rational'. (It is LW!rational, obviously.)

Also, I'm not sure if "the Buddhism-inspired wisdom" was dismissive, but I really enjoy his writing about Buddhism (and it's mostly published on other sites of his). From what I've read of that, he's not a Buddhist – certainly not a 'traditional' (or folk) Buddhist. He seems mostly interested in very specific schools, has his own idiosyncratic interpretations, wants a better 'modern synthesis' drawing on his favored insights, and is actively experimenting with various practices for his own purposes. He definitely rejects 'woo' (and his favorite schools seem to be relatively light on that anyways). But there's a lot of insight available too. Just off the top of my head: the tantric Buddhist "practice of views", e.g. the charnel ground and the pure land. Traditional rationality, i.e. straw rationality, is pretty dismissive of emotions. LW!rationality is much better. Chapman is mining popular religion and philosophy, in particular the branches of Buddhism he likes, for interesting and sometimes-useful info, often pertaining to emotions and what to do about them.

So, ironically, from my perspective, it is like if straw rationality is level 4, and Chapman's "meaningness" is level 5, then Less Wrong would be level 6. (Yeah, I can play this game, too.)

How seriously are you playing this game (ha)? Somewhat seriously, we're definitely around (or aiming for) his level 5. You've pointed out a lot of 'meta-rational' advice from this site (and most of it several years old now too). What would level 6 be, to you (besides 5 + 1)?

Thermal Mass Thermos

My bad – I read that follow-up and was disappointed in the last sentence:

To be safe, though, I'm going to keep using the thermal mass thermos approach.

It doesn't seem like the thermal mass thermos is strictly necessary "to be safe", but I understand your abundance of caution.