Vaniver

Comments

FWIW I like this comment much more than some of the others you've written on this page, because it feels like it's gotten past the communication difficulty and foregrounds your objection.

I am a little suspicious of the word 'should' in the parent comment. I think we have differing models of reader buy-in / how authors should manage it, where you're expecting it to be more correlated with "how much you want them to read the post" than I am. 

This line was also quite salient to me:

that makes it hard to take [an audience who actually gives a crap and doesn't need to be infinitely "sold" on every little thing] for granted

There's an ongoing tradeoff-fight over which things should count as appropriate context ('taken for granted') in which posts. The ideal careful reader does have to be finitely sold on the appropriate details, and writing with that reader in mind is part of how writing posts sharpens thinking. But we have the distribution of potential readers that we actually have.

I want to simultaneously uphold and affirm (you writing for the audience and assumed context you want) and (that not obviously being the 'rationalist' position or 'LessWrong position'). When 'should' comes out in a discussion like this, it typically seems to me like it's failing to note that distinction, or attempting to set the norm or obvious position (in a way that naturally invites opposition). [Most of the time you instead write about Duncan culture, where it seems appropriate.]

I think this has been one of the sources of conflict between Duncan and the mod team, yes.

I mean, I have a deep and complicated view, and this is a deep and complicated view, and compressing down the combination of those into "agree" or "disagree" seems like it loses most of the detail. For someone coming to LW with very little context, this seems like a fine introduction to me. It generally seems like a set of straightforward corollaries of "the map is not the territory".

Does it seem to me like the generators of why I write what I write / how I understand what I read? Well... not that close, with the understanding that introspection is weak and often things can be seen more clearly from the outside. I'll also note as an example that I did not sign on to Sabien's Sins when it was originally posted.

Some specific comments:

I have a mixed view of 3 and 4, in that I think there's a precision/cost tradeoff with being explicit or operationalizing beliefs. That said, I think the typical reader would benefit from moving in that direction, especially when writing online.

I think 5 is a fine description of 'rationalist discourse between allies'. I think the caveat (i.e. the first sentence of the longer explanation) is important enough that it probably should have made it into the guideline, somehow.

I think 6 blends together a problem (jumping to conclusions) and a cure (maintaining at least two hypotheses). Not only will that cure not work especially well for everyone, but it's also very easy to deploy gracelessly ("I do have two hypotheses: they're either evil or stupid!"). Other cures, like being willing to ask "say more?", seem like they might be equally useful.

I think 10 (and to a lesser extent, 7) seem like they're probably directionally correct for many people, but they're pointing at an important area requiring deep skill and saying "be careful" instead of being, like, actually reliable guidelines.

Uh, I think their point is that the site UI ignores the part that explicitly says "stop reading here", and thus your "unless you ignore the part" is irrelevant to the post's perceived length; it would be reasonable for a reader to filter whether or not they read the post by something on the second line they see, rather than by a sentence that's 1500 words in. [IMO your stronger defense is that the introductory paragraphs try to make clear that the necessary payload of the post is small and frontloaded.]

I'm dinging the gears to ascension on writing clarity, here, but... were you doing a bit, or should I also be dinging you on reading comprehension / ability to model multiple hypotheses? [Like, to be clear, I don't think your first response was clearly bad, but it felt like something had gone wrong when you repeated the same first line in your second response.]

Good GPUs feels kind of orthogonal.

IMO it's much easier to support high investment numbers in "AI" if you consider lots of semiconductor / AI hardware startup stuff as "AI investments". My suspicion is that while GPUs were primarily a crypto thing for the last few years, the main growth outlook driving more investment is them being an AI thing. 

Given how hard it is to make people see such things even with explicit information, even when they are curious? Seems tough.

One interesting example here is Ian Malcolm from Jurassic Park (played by Jeff Goldblum); they put a character on-screen explaining part of the philosophical problem behind the experiment. ("Life finds a way" is, if you squint at it, a statement that the park's creators had insufficient security mindset.) But I think basically no one was convinced by this, or found it compelling. Maybe if they had someone respond to "we took every possible precaution" with "no, you took every precaution that you imagined necessary, and reality is telling you that you were hopelessly confused," it would be more likely to land?

My guess is it would end up on a snarky quotes list and not actually convince many people, but I might be underestimating the Pointy-Haired Boss effect here. (Supposedly Dilbert cartoons made it much easier for people to anonymously criticize bad decision-making in offices, leading to bosses behaving better so as to not be made fun of.)

This suggests that their network isn't really big enough to capture legality and good strategy at the same time, I guess?

I'm not sure this is a network size issue. I think it's also plausible that there are multiple different rule sets for Othello compatible with high-level Othello play, and so it's not obvious which rules are actually generating the dataset, whereas with random play there's a crisper distinction between legal and illegal moves. (Like, lots of legal moves will never be played in the high-level play dataset, but that isn't true of the random-play dataset.)
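To make that parenthetical concrete, here's a minimal sketch of the coverage argument. It uses tic-tac-toe as a stand-in for Othello (implementing Othello's capture rules would bloat the example), and everything in it — the toy policies, the `coverage` metric — is my own illustration, not anything from the Othello-GPT work. The claim it demonstrates: a near-deterministic "strong play" dataset leaves many position→legal-move pairs unobserved, while a random-play dataset should cover substantially more of them.

```python
# Toy illustration: measure what fraction of (position, legal move)
# pairs ever show up in a dataset of games. Tic-tac-toe stands in for
# Othello; win detection is omitted since it doesn't affect the point.
import random

EMPTY = "."

def legal_moves(board):
    # In tic-tac-toe, any empty square is a legal move.
    return [i for i, c in enumerate(board) if c == EMPTY]

def random_policy(board, moves, rng):
    return rng.choice(moves)

def strong_policy(board, moves, rng):
    # Mostly-deterministic heuristic standing in for "high-level play":
    # take the center, then corners, then edges, with 10% random noise.
    if rng.random() < 0.1:
        return rng.choice(moves)
    for preferred in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if preferred in moves:
            return preferred

def play_game(policy, rng):
    # Returns the (position, move) pairs produced during one game.
    board, pairs = EMPTY * 9, []
    for turn in range(9):
        moves = legal_moves(board)
        move = policy(board, moves, rng)
        pairs.append((board, move))
        mark = "X" if turn % 2 == 0 else "O"
        board = board[:move] + mark + board[move + 1:]
    return pairs

def coverage(games):
    # Of all legal moves at positions appearing in the dataset, what
    # fraction is ever actually played somewhere in the dataset?
    positions, played = set(), set()
    for game in games:
        for board, move in game:
            positions.add(board)
            played.add((board, move))
    total_legal = sum(len(legal_moves(b)) for b in positions)
    return len(played) / total_legal

rng = random.Random(0)
for name, policy in [("random", random_policy), ("strong", strong_policy)]:
    games = [play_game(policy, rng) for _ in range(2000)]
    print(f"{name}-play legal-move coverage: {coverage(games):.3f}")
```

Running this, the random-play dataset covers a noticeably larger fraction of the legal-move space than the strong-play dataset, which is the sense in which random play gives the model a crisper legal/illegal signal.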

Unfortunately, there's no way to publicly examine, measure, or discuss group differences in a way that doesn't disproportionately attract those who'd misinterpret and misuse this against the group(s) in question, and therefore without those groups legitimately feeling attacked by those giving visibility to the topic.

I feel like this might be downstream of activist-caused evaporative cooling, tho. If you say that anyone who studies group differences must be motivated by animus, people unmotivated by animus will be disproportionately likely to leave the field.

One of the current controversies in medicine is over whether race should be used as a factor in diagnostic decisions (one example). Now, you or I as patients might want our doctors to use all available information to provide us with the best treatment possible, and you or I as people interested in good outcomes for all might be worried that this will lead to people being predictably mistreated (which, if set according to population-level averages, will disproportionately affect minority groups). It seems pretty implausible to me that the people who set up race-sensitive diagnostics and treatments for kidney diseases were motivated by ill will towards individuals or groups, and much more likely that they were motivated by good will.

Similarly, you could imagine people who want to come up with policies and procedures which are motivated by good will towards everyone, and want to use the most effective information available to do so. Is it really productive to militate them out of existence?
