All of komponisto's Comments + Replies

From the perspective taken in this post, "location" means observer-moment: the entire submanifold of "simultaneous" locations in your sense is represented by a single point in the space I mean.

(To be sure, both your space and mine are "Tegmark V" spaces; "Tegmark V" here is not a specific mathematical object, but an interpretation-type.)

A subsequent post may provide helpful context.

It seems hard to envision a society wherein belonging and esteem could be satisfied via physical cognition

Not hard to envision at all; only hard, perhaps, to implement. It shouldn't take all that much imagination to summon the thought of a society in which people were better rewarded with status (and all its trappings) for things like solving mathematical problems, or composing complexly-structured music, as opposed to all the various generalized forms of pure politics that determine the lion's share of status in the world we know, than they actually are...

Like others, you seem to be interpreting my comments as if they were stating conclusions intended to be only one or two inferential steps away (from your current epistemic state). This is not at all necessarily the case!

In particular, when I state a proposition X, I expect readers not only to ask themselves whether they already think X is true (i.e. conditioned on all their knowledge before my statement), but also to ask themselves why I might believe X. To engage, in other words, in at least a cursory search for inferential chains leading to X -- resulting...

It seems hard to envision a society wherein belonging and esteem could be satisfied via physical cognition, at least until we can make building an AIBO pet dog robot in one's garage a common enough pastime. So, the only realistic possibility for a meaningful change is in how self-actualization is pursued.

But is it actually true that "social" paths to self-actualization are less collectively desirable than "physical" paths to the same? Well, for a start, there are certainly "fine things in life" that are best understood in social terms; for a handy example that fits squarely in the realm of art, consider so-called "literary" fiction. Now I obviously cannot claim that writing literary fiction could ever be considered an "achievement" of the purest sort (in my preferred sense), since its value is not something that can be generally assessed in any widely-agreed upon way. And yet, it is certainly the case that, to the extent that works of literary fiction are widely considered to be valuable accomplishments, this is due to what they imply about the social universe, as opposed to the physical one!

The belief that I am implicitly denying here seems to be, as quoted directly from the parent comment: "To effectively create value requires skill in analytical/"near-mode" thinking" (emphasis added). And that's certainly true in many cases (it's also true, as you rightly point out, that many of the "finer things in life" are far from entirely social!) but not in general. This matters here, because it seems to lead you to incorrect conclusions about what exactly makes "self-actualization" value-creating and collectively desirable. It's not the absence of "social cognition" in its entirety but rather, of a few undesirable aspects of social interaction that are rather more pervasive at the level of "esteem" and "belonging". Vassar's essay is even quite clear that these aspects exist, and are important to his point!

With no more context than your earlier comment where (so far as I know) you first used the term [...] I am just saying that it seems unreasonable to complain of someone "rounding off concepts" when you have made no apparent effort to clarify what you do mean

In my original comment, I linked to the essay that was the source of the concepts of "physical" and "social cognition" as I used them in that comment. Without the context of that essay, there is no reason to expect my remarks in this discussion to be intelligible.

OK. Then I have a confession and three complaints. The confession is that when I wrote what I did above, I hadn't noticed that you offered that link as further explanation of the term "physical cognition". The complaints are (1) that having now followed the link, I think it leaves the meaning of "physical cognition" still less than perfectly clear; and, in so far as it does explain what the term means, (2a) it actually seems not so far from "real-world achievement" and (2b) I think it probably doesn't include music and art. (Whereas in your usage it seems like it must.) I'll elaborate a bit.

Here is what that essay says about physical and social cognition. (The only actual instance of those terms is at the end of the second quoted paragraph, but I think the preceding stuff is necessary to make sense of that.)

So. We start with Maslow's hierarchy. Vassar (the author of the essay) puts a division between the two "lowest" levels, which he calls "physical" (meaning that they are concerned with our physical needs and wants) and the next two, which he calls "social" (meaning that they are concerned with our interactions with others). The topmost level ("self-actualization") I think Vassar classifies as "social", which I think mostly indicates that his terminology isn't great. And he says that attempts to address needs and wants higher up in the Maslow hierarchy tend to involve vague fuzzy socially-mediated things, which may "be fairly easily hacked" and "constitute a poor foundation for universal cognition" by comparison with activity directed at the lower levels, which tend to involve precise specific details and "constitute a good substrate for digital, and thus potentially abstract, cognition". And, finally, he says that the lower-level ones seem to be endorsed over the higher-level by the likes of Newton and Feynman.

All fair enough (though I'm not at all convinced). But now you want to say that artistic endeavour belongs in the category of "physical cognition"? ...

The expected return from a reader doing something like that is way too low, even in a community like this one. Most new ideas are wrong, and if your idea is wrong then people trying to traverse the same inferential path will get nowhere

I disagree with these statements. (Even in the case of "most new ideas are wrong", I would ADBOC.)

You're basically just stating the view that "false positives are a bigger problem than false negatives", which I already disagreed with explicitly (as applied to this context) in my previous comment.


...
There is a kind of pleasure, when one performs a complex movement "just so", that attracts some people to e.g. martial arts without the goal of learning to defend themselves. (It was so with me, but, well, socioeconomic reasons.) There's a kind of a message that some people get out of poetry, besides the 'prosaic sense' of it, which sometimes gets related in another piece of poetry or even a very different way. I used to wonder, what exactly is its impact on different people's understanding of the whole, & might not 'understanding' be an umbrella word for some orthogonal things... Some of which get called 'spiritual' for lack of a better term:)

From the fourth paragraph:

These programs seem to have been disfavored by history's great scientific innovators, who tend to make statements like "I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble..." or "What do you care what other people think", which sound like endorsements of physical over social cognition. 

For some reason, it's not overly surprising to me that both Isaac Newton and Richard Feynman would directly endorse physical cognition - what with them being natural philosophers/physicists. It's less clear however that such "physical cognition" is directly relevant to e.g. music composition, except inasmuch as both physics and music composition are linked to self-actualization - as opposed to 'mere' love, belonging and self-esteem, which (if pursued in excess, due to a lack of "self-actualizing" pursuits) might "lead[] to increased unethical behavior" or "produce anti-social narcissism" according to the essay you link to.

As I said above,

populism seems anticorrelated with both good aesthetics and good science

Thus, by "a society that tied status more closely to such skills", I do not mean the typical conditions leading to, and resulting from, a peasant revolt.

Do you think you use the term physical cognition in the way it's used in the literature?

"The literature" that is relevant here consists of Michael Vassar's 2013 Edge essay.

It's relevant in the way that it doesn't use the term "physical cognition"?

I don't understand what "physical cognition" in this context points to

See here. (This was linked in the original comment...)

Sorry, still don't understand it. gjm has a fairly detailed list of complaints and I concur with them.

I do mean ballet and piano, and also "the kind of hacking background that Wei Dai has".

I did not expect this to be completely outside of your hypothesis space, in the way it appears to be. This is worth reflecting on.

Creating a distinct new concept in one's mind is an expensive operation (with both short term and long term costs), so I think it's only to be expected that people will try to match a supposedly new concept to an existing one and see if they can get away with just reusing the existing concept.

Right, but I was reacting to a prior history with that particular commenter, who has been especially prone to doing this (very often where, in my view, it isn't appropriate).

But also: I regard concept-creation as being a large part of what we're in the business of...

Wei Dai · 7y
I don't think that's a reasonable expectation or norm. The expected return from a reader doing something like that is way too low, even in a community like this one. Most new ideas are wrong, and if your idea is wrong then people trying to traverse the same inferential path will get nowhere, and not even know if it's their own fault or not. If you write it down then people can figure out where you went wrong and point it out. Even if your idea is right and your reader can be sure of that, why shouldn't you write a good explanation once, which will then save time for potentially hundreds or thousands of readers? By trying to save that time for yourself, you cause other people to waste their time, and then you end up having to answer their confusions and perhaps not even save time for yourself. You could make an exception to this if you just had a new idea and you want to find out if anyone else already had a similar idea or can see an obvious flaw in it, before deciding to invest more time into explaining it fully, but that doesn't seem to be what you're doing here.

I have some uncertainty here, but not that much. I took one semester of piano and one semester of electronic music in high school, and it was intuitively clear that the return from that time spent wasn't nearly as valuable as say reading science fiction or economics textbooks. There's obviously a lot of individual differences here, so if my kid naturally has an interest or talent in music or art and wants to study it, I'm not going to stop her. But if your position is that we should more vigorously encourage an interest in artistic pursuits, I'm going to need more evidence and/or better arguments.

This is totally unclear to me. I guess even if it's true, it would be hard for me to figure out on my own since I probably haven't studied music enough to be familiar with the kind of "physicality" that you're talking about. Nor do I understand what forms of thought you're suggesting are related to such physic...

Thanks for the link; that'll be useful to refer to.

Of course, I on the contrary do think the hierarchy of needs is suggestive of this, as evidenced by the fact that I specifically interpreted it that way!

Is this approximately right?

Probably close enough for present purposes.

I still think that if someone is doing math or programming, they already have their dose of "games with nature" there.

Of course, but these pursuits themselves are often described as artistic in character, especially by their most elite practitioners.

I update that if actual upper-class people want their child to play piano, there may be actually a very healthy instinct behind that. (Or may be just blindly copying what their neighbors do.)

They probably are copying w...

Mao prohibited farm ownership and no amount of understanding the actual skill of baking or growing crops would have convinced him that private ownership is a good idea.

What makes you so sure of this? More to the point, what makes you sure that a society that tied status more closely to such skills wouldn't have promoted someone better than Mao to the top?

Lysenko's success is also not simply about lack of farming knowledge but about having an intellectual climate that's not well-suited to separating true theories from those that aren't.

The point he...

There have been enough revolutions and (temporarily successful) peasant revolts to demonstrate how that usually turns out. Lenin famously said that "Any cook should be able to run the country" and I don't think it worked well.
Mao was the son of a farmer. Mao actually worked on his father's farm instead of learning the piano and was bullied for his farmer background in high school. I don't think good aesthetics tell you how to grow crops or bake bread.

It seems like the ideal leisure activities, then, should combine the social games with games against nature.

Exactly! Hence arts (and sports).

Generally speaking, whenever we think of something as being "technical", we're talking about the involvement of physical cognition. Art is social, yes, but it is also highly technical.

(in the sense in which I understand "physical cognition" -- the body is intimately involved

That is not what I meant -- as the excerpt you quoted was intended to communicate.

Musical composition is one of the archetypal instances of a physical-cognition-loaded activity (in the sense that I mean), and yet there your physical tools are a pencil/pen and paper (or, sometimes, indeed, a mouse).

So what do you mean, then? I don't understand what "physical cognition" in this context points to. What is the word "physical" doing in there? It failed.
Do you think you use the term physical cognition in the way it's used in the literature? Or do you think you use it in a different way?

I would describe this more generally as real-world achievement, which is a lot clearer than a label like "physical cognition"

There you go again, compulsively trying to round concepts off to something else!

"Real-world achievement" is considerably less clear as a way of pointing to what I am trying to point to than "physical cognition". It evokes all kinds of distracting side-issues about what constitutes the "real world". (Is pure mathematics "real-world achievement"? Et cetera, et cetera.)

I can't tell w...

For me, I don't see how "physical cognition" is better, because just what "physical" means here is as unclear to me as what "real-world" means in bogus's comment, and in rather similar ways. Is doing pure mathematics "physical cognition"? What about physics? With no more context than your earlier comment where (so far as I know) you first used the term, I'd have taken "physical cognition" to mean something like "applying one's brain directly to the real world in ways involving planning and subtlety and the like", with playing a musical instrument being an example. But I now have the impression that you intend it more broadly than that, perhaps including e.g. musical composition (even if done in one's head). But exactly what you mean remains unclear to me, as does why (if I'm understanding you right) you consider "physical cognition" a more fruitful category of things to lump together than "real-world achievement". (Note for the avoidance of doubt: I am not claiming that "physical cognition" is not a more fruitful category, nor that bogus's thinking in this area is better than yours, nor anything of that kind. I am just saying that it seems unreasonable to complain of someone "rounding off concepts" when you have made no apparent effort to clarify what you do mean, and that your specific objection to "real-world achievement" seems to apply equally to "physical cognition".)
It would be helpful if you tried to define what you mean by "art" or "physical cognition" when you see people thinking you mean something different than you do.
Nope. More formally, I'm saying that the relation between the "physical" nature of cognition and the social benefits you talk about is essentially screened off by the more immediate fact that such physical activities are far more likely to feature a widely-agreed standard of achievement. Thus, the fact that humanities scholarship is in some sense "non-physical" (which it obviously is, since it is properly about human cultures, as opposed to physical phenomena such as the mechanics of playing an instrument) is practically irrelevant to whether or not we should consider it to be "intellectually stimulating", at least inasmuch as the merit of such scholarship is sometimes widely agreed upon.

To some extent, these issues seem to be unavoidable. One reason why pure math academia is in such a "bad" shape socially is that it is only directly valued by a tiny minority. Within the subculture that values it, though, achievement is reasonably clear and thus it can at least escape the negative connotations of "social cognition". A similar situation seems to apply in newly-composed "serious" music, even though the subculture that values that might be even smaller, and the standard of "what makes this new piece worthwhile enough that I should be paying attention to it" somewhat less than clear.
Wei Dai · 7y
Creating a distinct new concept in one's mind is an expensive operation (with both short term and long term costs), so I think it's only to be expected that people will try to match a supposedly new concept to an existing one and see if they can get away with just reusing the existing concept. I suggest that if you don't want people to do that, you should define your new concept as clearly as possible, give lots of both positive and negative examples, explain how it differs from any nearby concepts that people might try to "round off" to, and why it makes sense to organize one's thinking in terms of the new concept. (It would also help to give it a googleable name so people can find all that information. Right now, Google defines physical cognition as "Physical cognition, or 'folk physics', is a common sense understanding of the physical world around us and how different objects interact with each other." which is obviously not what you're talking about.)

I think I've avoided rounding off your physical cognition to an existing concept, but I still don't understand how the concept is defined exactly or why it's a useful way of organizing one's thinking as it relates to the question of what kinds of children's activities are most valuable. Clearly there are distinct skills within what you call physical cognition, and all those skills are not equally valuable, nor does practicing one physical cognition skill improve all physical cognition skills equally (e.g., if you practice math skills you improve math skills more than piano skills, and vice versa). Given that, why does it make sense to group a bunch of different skills together into "physical cognition" and then say that practicing piano is valuable because it exercises physical cognition? Wouldn't it make more sense to talk about exactly what skills are improved by practicing piano, and how valuable the increases in those specific skills are?

Maybe it would help if we taboo art. What do you mean by the term when ballet and playing the piano are art but the kind of hacking you find at a hackerspace isn't?

I was not, in fact, using the term in such a way, but you failed to notice this! This is cliché-rounding.

The first line of your post is a quote about teaching ballet and piano to children as opposed to the kind of hacking background that Wei Dai has. Why use the term "art" when you don't mean ballet and piano, without making it explicit that you don't mean it?

You seem to have misunderstood my comment as some kind of salvo in a STEM vs. arts rivalry, with the result that your comment reads like a counter-attack in such a battle. This is probably due to cliché-rounding.

In point of fact, a perceived opposition between STEM and arts is a manifestation of the very thing I was complaining about. Thus, to have written the kind of comment that you appear to be responding to would have been the very last of my intentions.

I would direct your attention to the sentence immediately following the excerpt you quoted:

That is

...
If I go into a hackerspace I don't see performance that can be modeled as an ordered list but rather as a highly complex graph. A ballet competition, on the other hand, does produce an ordered list. Maybe it would help if we taboo art. What do you mean by the term when ballet and playing the piano are art but the kind of hacking you find at a hackerspace isn't?

Why do artistic pursuits constitute practice in physical cognition as opposed to social cognition? It seems obvious to me that artistic pursuits are (among other things) a type of status signaling, so I'm confused why you're contrasting the two

Artistic pursuits involve a synthesis of physical and social cognition. (This is essential to their nature and is what makes them special among human activities.) There is certainly a social aspect, but it's crucial that that isn't all there is. That there is also a physical aspect is also pretty obvious, if you c...

For certain arts -- e.g. music -- this is true (in the sense in which I understand "physical cognition" -- the body is intimately involved). But a counter-example would be something like digital art where your tools are on Photoshop palettes. The physical skill involved is moving a mouse and I don't think this qualifies. And yet, digital art is highly "technical".

Basically, Maslow's hierarchy of needs is a myth, and everyone would be better off forgetting about it entirely.

Not necessarily; it depends on what one's default or alternative theory would be. Let's be Bayesian, after all.

As I interpret it, "Maslow's hierarchy of needs" is little more than the claim that people's goals depend on their internal sense of security and status (in addition to whatever else they might depend on).

When I speak about it, I'm usually talking about something like a spectrum of exogenous vs. endogenous motivation: at one...

Self-determination theory is the standard alternative theory I usually point to (which also incorporates the spectrum of exogenous vs. endogenous motivation, but which I don't think the hierarchy of needs as usually conceived does).

The first and last sentences of the parent comment do not follow from the statements in between.

That sort of subject is inherently implicit in the kind of decision-theoretic questions that MIRI-style AI research involves. More generally, when one is thinking about astronomical-scale questions, and aggregating utilities, and so on, it is a matter of course that cosmically bad outcomes are as much of a theoretical possibility as cosmically good outcomes.

Now, the idea that one might need to specifically think about the bad outcomes, in the sense that preventing them might require strategies separate from those required for achieving good outcomes, may depend on additional assumptions that haven't been conventional wisdom here.

Right, I took this idea to be one of the main contributions of the article, and assumed that this was one of the reasons why cousin_it felt it was important and novel.

What Alex said doesn't seem to refute or change what I said.

But also: I disagree with the parent. I take conventional wisdom here to include support for MIRI's agent foundations agenda, which includes decision theory, which includes the study of such risks (even if only indirectly or implicitly).

As the expression about knowing "how the sausage is made" attests, generally the more people learn about it, the less they like it.

Of course, veganism is very far from being an immediate consequence of disliking factory farming. (Similarly, refusing to pay taxes is very far from being an immediate consequence of disliking government policy.)

That's not obvious to me. I agree that the more people are exposed to anti-factory-farming propaganda, the more they are influenced by it, but that's not quite the same thing, is it?

Decision theory (which includes the study of risks of that sort) has long been a core component of AI-alignment research.

No, it doesn't. Decision theory deals with abstract utility functions. It can talk about outcomes A, B, and C where A is preferred to B and B is preferred to C, but doesn't care whether A represents the status quo, B represents death, and C represents extreme suffering, or whether A represents gaining lots of wealth and status, B represents the status quo, and C represents death, so long as the ratios of utility differences are the same in each case. Decision theory has nothing to do with the study of s-risks.
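The invariance being claimed here can be made concrete with a toy sketch (all names and numbers below are made up for illustration, not drawn from the thread): two utility assignments that agree on the ratios of utility differences — i.e. are related by a positive affine transformation — yield identical choices, regardless of whether the middle outcome is labeled "death" or "status quo".

```python
def best_action(actions, utility):
    """Pick the action maximizing expected utility.

    actions: dict mapping action name -> list of (probability, outcome).
    utility: dict mapping outcome name -> real-valued utility.
    """
    def expected(lottery):
        return sum(p * utility[o] for p, o in lottery)
    return max(actions, key=lambda a: expected(actions[a]))

# Two utility assignments for outcomes A, B, C with the same ratio of
# utility differences: (u(A)-u(B)) / (u(B)-u(C)) = 1/9 in both cases.
# In fact u2 = 10*u1 + 10, a positive affine transformation of u1.
u1 = {"A": 0, "B": -1, "C": -10}    # e.g. status quo / death / suffering
u2 = {"A": 10, "B": 0, "C": -90}    # e.g. wealth / status quo / death

actions = {
    "safe":   [(1.0, "B")],                  # B for certain
    "gamble": [(0.95, "A"), (0.05, "C")],    # mostly A, small chance of C
}

# Both assignments rank the actions identically: the labels attached to
# the outcomes never enter the calculation, only the utility differences.
assert best_action(actions, u1) == best_action(actions, u2)
```

This is just the standard observation that expected-utility theory is invariant under positive affine rescaling, which is the formal content of the comment's point that decision theory per se does not single out suffering-flavored outcomes for special treatment.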
That doesn't seem to refute or change what Alex said?

I feel a weird disconnect on reading comments like this. I thought s-risks were a part of conventional wisdom on here all along. (We even had an infamous scandal that concerned one class of such risks!) Scott didn't "see it before the rest of us" -- he was drawing on an existing, and by now classical, memeplex.

It's like when some people spoke as if nobody had ever thought of AI risk until Bostrom wrote Superintelligence -- even though that book just summarized what people (not least of whom Bostrom himself) had already been saying for years.

Huh, I feel very differently. For AI risk specifically, I thought the conventional wisdom was always "if AI goes wrong, the most likely outcome is that we'll all just die, and the next most likely outcome is that we get a future which somehow goes against our values even if it makes us very happy." And besides AI risk, other x-risks haven't really been discussed at all on LW. I don't recall seeing any argument for s-risks being a particularly plausible category of risks, let alone one of the most important ones. It's true that there was That One Scandal, but the reaction to that was quite literally Let's Never Talk About This Again - or alternatively Let's Keep Bringing This Up To Complain About How It Was Handled, depending on the person in question - but then people always only seemed to be talking about that specific incident and argument. I never saw anyone draw the conclusion that "hey, this looks like an important subcategory of x-risks that warrants separate investigation and dedicated work to avoid".

I guess I didn't think about it carefully before. I assumed that s-risks were much less likely than x-risks (true) so it's okay not to worry about them (false). The mistake was that logical leap.

In terms of utility, the landscape of possible human-built superintelligences might look like a big flat plain (paperclippers and other things that kill everyone without fuss), with a tall sharp peak (FAI) surrounded by a pit that's astronomically deeper (many almost-FAIs and other designs that sound natural to humans). The pit needs to be compared to the peak, not the plain. If the pit is more likely, I'd rather have the plain.

Was it obvious to you all along?

Thanks for voicing this sentiment I had upon reading the original comment. My impression was that negative utilitarian viewpoints / things of this sort had been trending for far longer than cousin_it's comment might suggest.
Yes, but the claim that that risk needs to be taken seriously is certainly not conventional wisdom around here.
Fair enough. I guess I didn't think carefully about it before. I assumed that s-risks were much less likely than x-risks (true) and so they could be discounted (false). It seems like the right way to imagine the landscape of superintelligences is a vast flat plain (paperclippers and other things that kill everyone without fuss) with a tall thin peak (FAIs) surrounded by a pit that's astronomically deeper (FAI-adjacent and other designs). The right comparison is between the peak and the pit, because if the pit is more likely, I'd rather have the plain.

Piano and ballet seem like upper-class costly signalling. "I am so rich I can spend tons of time doing unproductive activities."

Well, no need to speculate about a future Malthusian dystopia, since it appears to be already here, psychologically!

Allow me to refer you to this comment of mine, and the ensuing discussion, on Sarah Constantin's blog. Artistic pursuits may be "upper-class", but they are not unproductive. They serve to keep the upper classes practiced in physical cognition, counteracting a tendency to shift entirely into soc...

I would describe this more generally as real-world achievement, which is a lot clearer than a label like "physical cognition". Eric S. Raymond has a nice post which details how the beneficial effects of having a shared standard of achievement can play out socially, at least in the strictly technical realm. Oh, and by the way, good scholarship can definitely count as legitimate "achievement" in many circumstances. This most likely explains how even the most stereotypical "humanities academia" can sometimes manage to be both intellectually engaging and socially healthy. Yes, there are lots of worrying dynamics in the "X Studies" part of academia, but sometimes good work still happens there.
There are ballet competitions and I think parents do care about how their children perform in them. The kind of parent that forces their child to play piano every day also cares about performance. The kind of hacking that Wei Dai did that led him to write the b-money paper also isn't about winning. It's exploring ideas and having fun with them. Having a kid spend time on computer programming means that he's much more likely to engage in innovation than having the kid spend time on piano or ballet. Both piano and ballet are heavily codified and don't encourage innovation.

Most discussions on LessWrong are also not about direct winning but about free exploration. The fact that people spend their free time chatting on LessWrong instead of working for the Man suggests they already understand that working for the Man isn't everything.
Upper class folks don't spend all their time in consumption and gossip, with art as their only lifeline to the real world. They do business and politics as well.
Ok, allow me to say it using my own words: Roughly, human pursuits can be divided into "social games" such as gossip or conspiracies, which are usually zero-sum, or even negative-sum as they often compete in sacrificing to Moloch everything that does not provide immediate social value, and "games with nature" such as work, science, but also sports and that part of art which requires skill, e.g. playing the piano (as opposed to "modern art", which is merely about who makes a media hype around you, so it requires allies instead of technical skills). The word "game" is used here as in "game theory", i.e. it may or may not refer to playful activities.

And there is a risk that when people climb the social ladder, they lose touch with "games with nature", because they delegate it to people lower than them on the social ladder. With the horrifying consequence that people who rule the world may actually understand it the least. I mean, they certainly understand the social aspects of the world, that's what they specialize in, so they are good at e.g. organizing a revolution; but they have no idea how to grow grain or cook bread, so the revolution is typically followed by bread shortage and a lot of suffering.

Having upper-class people spend some time doing "games with nature" may keep them more sane, and as a result keep the whole society more sane. But, frankly, the "games with nature" are typically motivated, directly or indirectly, by survival (you grow grain and cook bread to avoid starvation, you learn science inter alia to achieve job safety, which is to avoid starvation), and this motivation does not apply to the upper class. Having them do sports or (skill-based) art may be the only chance to get them in contact with non-social aspects of reality. Of these two, sports are more about body, and are quite repetitive, while art is more about mind and creativity.

Is this approximately right? I still think that if someone is doing math or programming, they already have their dose of "games with nature" there.
0 · Wei Dai · 7y
I'm having trouble understanding this. Why do artistic pursuits constitute practice in physical cognition as opposed to social cognition? It seems obvious to me that artistic pursuits are (among other things) a type of status signaling, so I'm confused why you're contrasting the two. Please explain? (Aside from not being sure how valid the Maslow hierarchy is) I agree with this. But I don't see art/music/dance classes as a particularly good way to prepare most kids to fulfill their level 4 and 5 needs, mostly because there is too much competition from other parents pushing their kids into artistic pursuits. The amount of talent, time, and effort needed to achieve recognition or a feeling of accomplishment seem too high, compared to other possible pursuits.

You don't seem to be addressing what I said very much if at all, but rather to mostly be giving your reaction to 18239018038528017428's comments. This is demonstrated by the fact that you take for granted various assumptions that it was the purpose of my comment to call into question.

In particular, the speech is not being allowed "to the chagrin of all other users". I am notably non-chagrinned by the speech being allowed, and I advocate that people be less chagrinned by such speech being allowed.

Needless to say, to be allowed is not to be approved.

Positive reinforcement for noticing your confusion. It does indeed seem that we are working from different models -- perhaps even different ontologies -- of the situation, informed by different sets of experiences and preoccupations.

communities where conversations are abrasive attract a lower caliber of person than one where they aren't. Look at what happened to LW.

To whatever extent this is accurate and not just a correlation-causation conversion, this very dynamic is the kind of thing that LW exists (existed) to correct. To yield to it is essentially to give up the entire game.

What it looks like to me is that LW and its associated "institutions" and subcultures are in the process of dissolving and being absorbed into various parts of general society. You are basically ... (read more)

All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.

Is the implication that they're not reasonable under the assumption that truth, too, trades off against other values?

What the points I presented (perhaps along with other things) convinced me of was not that truth or information takes precedence over all other values, but rather simply that it had been sacrificed too much in service of other values. The pendulum has swung too far in a certain direction.

Above, I made it sound like the overshoot... (read more)

I agree with all of this. (Except "this is obviously false," but this is not a real disagreement with what you are saying. When I said people do not choose goals, that was in fact about ultimate goals.)

I don't think "Did you know symptoms X and Y are signs of clinical mental illness Z?" is appreciably different from "You very possibly have mental illness Z", which is the practical way that "You have mental illness Z" would actually be phrased in most contexts where this would be likely to come up.

Nevertheless, your first and third paragraphs seem right.

In a conversation, you get another reaction if you ask a question that indirectly implies that the other person has a mental illness than if you are direct about it. The phrasing of information matters.

Because you've publicly expressed assent with extreme bluntness

Who said anything about "extreme"?

You are unreasonably fixated on the details of this particular situation (my comment clearly was intended to invoke a much broader context), and on particular verbal features of the anonymous critic's comment. Ironically, however, you have not picked up on the extent to which my disapproval of censorship of that comment was contingent upon its particular nature. It consisted, in the main, of angrily-expressed substantive criticism of the "Ber... (read more)

Well, you've left me pretty confused about the level of importance you place on good-faith discussion norms :P

My other comment should hopefully clarify things, as least with regard to politicization in particular.

To spell out the implications a bit more: the problem with political discourse, the reason it kills minds, is not that it gets heated; rather, it freezes people's mental categories in ways that prevent them from making ontological updates or paradigm shifts of any kind. In effect, people switch from using physical cognition to think about arguments (modus ponens, etc.), to using social cognition instead (who wins, who loses, etc.). (Most people, of course... (read more)

Your principal mistake lies here:

"socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect

Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term. Moreover, the cost is not the same for everyone: for some people "diplomatic" communication comes much more naturally than for others; as I indicate in another comment, this often has to do with their status, which, the higher it is, the less necessary dire... (read more)

As does allowing people to be unduly abrasive. But on top of that, communities where conversations are abrasive attract a lower caliber of person than ones where they aren't. Look at what happened to LW. It's fairly common for this cost to go down with practice. Moreover, it seems like there's an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness. I'm not necessarily claiming that you or any specific person is acting this way; I'm just saying that this incentive gradient exists in this community, and economically rational actors would be expected to follow it. That's a horrible framing. Niceness is sometimes important, but what really matters is establishing a set of social norms that incentivize behaviors in a way that leads to the largest positive impact. Sometimes that involves prioritizing communicative clarity (when suggesting that some EA organizations are less effective than previously thought), and sometimes that involves, say, penalizing people for acting on claims they've made to others' emotional resources (reprimanding someone for being rude when that rudeness could have reasonably been expected to hurt someone and was entirely uncalled for). Note that the set of social norms used by normal folks would have gotten both of these cases mostly right, and we tend to get them both mostly wrong.

See this comment; most particularly, the final bullet point.


What convinced you of this?

A constellation of related realizations.

  • A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the "norms of discourse" of what I took to be my "ingroup" or "social context"; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.

  • A sense that I myself, despite being capable of producing interestin

... (read more)

Cool. Let's play.

I notice you make a number of claims, but that of the ones I disagree with, none of them have "crux nature" for me. Which is to say, even if we were to hash out our disagreement such that I come to agree with you on the points, I wouldn't change my stance.

(I might find it worthwhile to do that hashing out anyway if the points turn out to have crux nature for you. But in the spirit of good faith, I'll focus on offering you a pathway by which you could convince me.)

But if I dig a bit, I think I see a hint of a possible double crux.... (read more)

All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible. But people don't choose goals. They only choose various means to bring about the goals that they already have. This applies both to individuals and to communities. And since they do not choose goals at all, they cannot choose goals by the particular method of saying, "from now on our goal is going to be X," regardless what X is, unless it is already their goal. Thus a community that says, "our goal is truth," does not automatically have the goal of truth, unless it is already their goal. Most people certainly care much more about not being attacked physically than discovering truth. And most people also care more about not being rudely insulted than about discovering truth. That applies to people who identify as rationalists nearly as much as to anyone else. So you cannot take at face value the claim that LW is "an internet forum concerned with truth-seeking," nor is it helpful to talk about what LW is "supposed to be optimizing for." It is doing what it is actually doing, not necessarily what people say it is doing. That people should be sensitive about tone is taken in relation to goals like not being rudely insulted, not in relation to truth. And even the argument of John Maxwell that "Truthseeking tends to arise in violence-free environments," is motivated reasoning; what matters for them is the absence of violence (including violent words), and the benefits to truth, if there are any, are secondary.

I'm gonna address these thoughts as they apply to this situation. Because you've publicly expressed assent with extreme bluntness, I might conceal my irritation a little less than I normally do (but I won't tell you you should kill yourself).

A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the "norms of discourse" of what I took to be my "ingroup" or "social context"; despite being far more interest

... (read more)

norms of good discourse are more important than the content of arguments

In what represents a considerable change of belief on my part, this now strikes me as very probably false.

I'm open. Clarify?

For the record: at the risk of being a lonely dissenter, I strongly disagree with any notion that any of this discussion should have been censored in any way. (I was even grateful for the current impossibility of downvoting.)

Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like. These norms of sensitivity are used to subtly restrict information flow. Ultimately Dunc... (read more)

As someone who doesn't live in the Bay Area, has no intention of moving there in the near future, and who resents the idea that anyone who wants to be part of what ought to be a worldwide rationality community needs to eventually move to the Bay Area to do so: I'm part of the rationality and effective altruism communities, and I too have taken to task community members in the Bay Area for acting as though they can solve community coordination problems with new projects when acknowledgement of the underwhelming success or failure of prior projects never seems to take place. I do that on Facebook, though, where not only my civilian identity but also a track record of my behaviour is on display. There are closed groups or chats where things are less open, so it's not as damaging, and even if I make a post on my own Facebook feed for over one thousand people to see, if I say something wrong, at least it's out in the open so I may face the full consequences of my mistakes. I know lots of people mentioned in '18239018038528017428's comment. I either didn't know those things about them, or I wouldn't characterize what I did know in such terms. Based on their claims, '18239018038528017428' seems to have more intimate knowledge than I do, and I'd guess is also in or around the Bay Area rationality community as well. Yet they're on this forum anonymously, framing themselves as some underdog taking down high-status community members, when the criteria for such haven't been established other than "works at MIRI/CFAR", and what they're doing is just insulting and accusing regular people like the rest of us on the internet. They're not facing the consequences of their actions. The information provided isn't primarily intended to resolve disputes, which I would think ought to be the best application of truth-seeking behaviour in this regard, and which is expected as a, if not the, primary purpose of discourse here. Primary purposes of '18239018038528017428's comment were to express frustration, slander cert
Yeah but exposure therapy doesn't work like that though. If people are too sensitive, you can't just rub their faces in the thing they're sensitive about and expect them to change. In fact, what you'd want to desensitize people is the exact opposite - really tight conversation norms that still let people push slightly outside their comfort zone.
I'm also curious to hear what made you update. It's true that sensitivity norms can have subtle effects on a conversation, but nastiness norms can too. If you look at the study cited in the "hold off on proposing solutions" essay, you can see a case where politicizing a topic restricts the space of ideas that are explored. (I think this is actually a more natural takeaway from the study than "hold off on proposing solutions".) Nasty conversations also often see evaporative cooling effects where you are eventually just left with hardliners on each side. In general, I think nasty conversations tend to leave any line of reasoning that doesn't clearly support the position of one side or the other under-explored. (This is a pretty big flaw in my opinion, because I think divided opinions are usually an indicator of genuinely mixed evidence. If the evidence is mixed, the correct hypothesis is probably one that finds a way to reconcile almost all of it.) Furthermore, I would predict that arguments in nasty conversations are less creative and generally just less well thought through. Here's another argument. Imagine 18239018038528017428 showed you their draft comment minus the very last sentence. Then they showed you the last sentence: "The world would be concretely better off if the author, and anyone like him, killed themselves." Would you tell them to add it in or not? If not, I suspect there's status quo bias, or something like it, in operation here. Anyway, I think there are better ways to address the issue you describe than going full vitriol. For example, I once worked at a company that had a culture of employees ribbing each other, and sometimes we would rib each other about things other employees were doing wrong that would be awkward if they were brought up in a serious manner. I think that worked pretty well. I just want to point out that Duncan did in fact put a tremendous amount of time into engaging with this critic (more time than he put into engaging with any
What convinced you of this?

I agree, and wish to state for the record, that to be told one is "overthinking" is about the least helpful (certainly least actionable) criticism one can receive.

In many cases, the one who says this wishes to communicate that their knowledge is tacit, and to contrast this with the other's attempt to use explicit reasoning. But tacit knowledge does not magically appear when you stop "thinking"!

What is the best way? It's not like you can trick them into it.

A more serious issue, I would have thought, would be that the "professional help" won't actually be effective.

If you don't have any specific tools, I would advocate a mix of asking questions to help the other person clarify their thinking and providing information. "Did you know symptoms X and Y are signs of clinical mental illness Z?" is likely more effective than telling the person "You have mental illness Z." If the other person doesn't feel judged but can explore the issue in a safe space where they are comfortable working through an ugh-field, it's more likely that they will end up doing what's right afterwards.

Here's what it looks like to me, after a bit of reflection: you're in a state where you think a certain proposition P has a chance of being true, which it is considered a violation of social norms to assert (a situation that comes up more often than we would like).

In this sort of situation, I don't think it's necessarily correct to go around loudly asserting, or even mentioning, P. However, I do think it's probably correct to avoid taking it upon oneself to enforce the (epistemically-deleterious) social norm upon those weird contrarians who, for whatever ... (read more)

4 · [DEACTIVATED] Duncan Sabien · 7y
I was not aiming to do "that above." To the extent that I was/came across that way, I disendorse, and appreciate you providing me the chance to clarify. Your models here sound correct to me in general.

I do not reach the point of telling the...humans I know that they're e.g. dumb or wrong or sick or confused

If you'll allow me, I would like to raise a red-flag alert at this sentence. It seems poorly worded at best, and in worse scenarios indicative of some potentially-bad patterns of thought.

Presumably, as a member of a community of aspiring rationalists, not to mention the staff of CFAR, telling the people you know when (you think) they're wrong or confused is, or should be...your daily bread. (It goes without saying that this extends to noticing your... (read more)

Duncan's original wording here was fine. The phrase "telling the humans I know that they're dumb or wrong or sick or confused" is meant in the sense of "socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect". To put it another way, my view is that Duncan is trying to refrain from adopting behavior that lumps in values (boo trans people) with claims (trans people disproportionately have certain traits). I think that's a good thing to do for a number of reasons, and have been trying to push the debate in that direction by calling people out (with varying amounts of force) when they have been quick to slip in propositions about values into their claims. I'm frustrated by your comment, komponisto, since raising a red-flag alert, saying that something is poorly worded at best, and making a large number of more subtle negative implications about what they've written are all ways of socially discouraging someone from doing something. I think that Duncan's comment was fine, I certainly think that he didn't need to apologize for it, and I'm fucking appalled that this conversation as a whole has managed to simultaneously promote slipping value propositions into factual claims, and promote indirectly encouraging social rudeness, and then successfully assert in social reality that a certain type of overtly abrasive value-loaded proposition making is more cooperative and epistemically useful than a more naturally kind style of non-value-loaded proposition making, all without anyone actually saying something about this.
5 · [DEACTIVATED] Duncan Sabien · 7y
This is a fair point. I absolutely do hold as my "daily bread" letting people know when my sense is that they're wrong or confused, but it becomes trickier when you're talking about very LARGE topics that represent a large portion of someone's identity, and I proceed more carefully because of both a) politeness/kindness and b) a greater sense that the other person has probably thought things through. I don't have the spoons to reformulate the thought right now, but I think your call-out was correct, and if you take it on yourself to moderately steelman the thing I might have been saying, that'll be closer to what I was struggling to express. The impulse behind making the statement in the first place was to try to highlight a valuable distinction between pumping against the zeitgeist/having idiosyncratic thoughts, and just being a total jerk. You can and should try to do the former, and you can and should try to avoid the latter. That was my main point.
In most cases calling someone sick when the person suffers from a mental issue isn't the best way to get them to seek professional help for it.

"Which few did you have in mind, Majesty?"

It plays back at the link! (Synthesized rendering, but not too bad.)

This was the point of putting it on MuseScore (otherwise I would have just linked a PDF I had already typeset with Finale).

how does this compare to your other work?

If we take this piece to be broadly similar to works like this or this (yes I know: as if), then my other work might be compared to something like this or this.

At least, it will once it exists. (I currently only really have an undergraduate portfolio's worth of "other work", and barely that. In progress!)

My reaction to these pieces (the former, especially).
I see. That's really neat! Thanks! Isn't there some choir-for-hire somewhere on the web that will record a score for a couple of bucks? It seems that, logically, there should be.
Really? There seems a little overlap to me, but plenty of mismatch as well. Like, MM says Bayesians are on crack, as one of the main points of the article.

Oh, you meant "might made right".

See Scott's "The Goddess of Everything Else" for a poetical exposition on the subject.
Yes, that's a better way of putting it, thanks.

might makes right

Might is perhaps a necessary condition for right, but I would not be inclined to call it a sufficient one.

... says the guy whose ancestors successfully survived and reproduced.

This is also why the distinction between "triad" and "modality" is rather beside the point, in practical usage.

Not at all. It strongly implicates the distinction between the chord model and the line model of musical data; thinking of the Stufe as a triad has the severely unfortunate effect of encouraging the chord model. This is why almost no one has noticed that Schenkerian theory, like Westergaardian theory, uses the line model. It is for this reason that I am so insistent on the distinction between Stufen and triads, and what you... (read more)
