All of sarahconstantin's Comments + Replies

My current theory is that self-esteem isn't about yourself at all!

Self-esteem is your estimate of how much help/support/contribution/love you can get from others.

This explains why a person needs to feel a certain amount of "confidence" before trying something that is obviously their best bet. By "confidence" we basically just mean "support from other people or the expectation of same." The kinds of things that people usually need "confidence" to do are difficult and involve the risk of public failure and blame, even if they're clearly the best option from an individual perspective.

This makes a lot of sense to me. It fits in with my sense that:

  • people with low self-esteem don’t expect others to help/support them long-term, as they feel they’ll be ‘found out’ as being worthless/terrible
  • whenever anyone is helpful to them they respond disproportionately effusively and say how kind the other person is.

In future, I will model surprisingly low self-esteem as a failure to accurately read signals about one's level of respect/power. And people with appropriately low self-esteem should focus on being useful to the people and communities around them.

Basically, AI professionals seem to be trying to manage the hype cycle carefully.

Ignorant people tend to be more all-or-nothing than experts. By default, they'll see AI as "totally unimportant or fictional", "a panacea, perfect in every way" or "a catastrophe, terrible in every way." And they won't distinguish between different kinds of AI.

Currently, the hype cycle has gone from "professionals are aware that deep learning is useful" (c. 2013) to "deep learning is AI and it is wonderful in every way ... (read more)

Eli Tyre (2y):
What? How exactly is this a way of dealing with the hype bubble bursting? It seems like if it bursts for AI, it bursts for "AI governance"? Am I missing something?

Re: 2: nonprofits and academics have even more incentives than business to claim that a new technology is extremely dangerous. Think tanks and universities are in the knowledge business; they are more valuable when people seek their advice. "This new thing has great opportunities and great risks; you need guidance to navigate and govern it" is a great advertisement for universities and think tanks. Which doesn't mean AI, narrow or strong, doesn't actually have great opportunities and risks! But nonprofits and academics aren't imm... (read more)

Some examples of valuable true things I've learned from Michael:

  • Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
  • Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you're not any smarter.
  • Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything fro
... (read more)
Dr_Manhattan (4y):
http://paulgraham.com/genius.html seems to be promoting a similar idea.

Thanks! Here are my reactions/questions:

Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.

Seems right to me, as I was never tied to such a narrative in the first place.

Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.

What kind of risks is he talking about here? Also does he

... (read more)

I'm not actually asking for people to do a thing for me, at this point. I think the closest to a request I have here is "please discuss the general topic and help me think about how to apply or fix these thoughts."

I don't think all communication is about requests (that's a kind of straw-NVC), only that when you are making a request, it's often easier to get what you want by asking than by indirectly pressuring.

Elo (5y):
The salient part for me in your post was: Start with the request - don't build the argument case. If necessary, come back with layers of consideration, validation, justification. Explain in detail, expound considerations and concerns. But start with the request and trust the other party to listen to your request or seek justification if they need it. Trust is hard. But I'm in support of your post.

That's flattering to Rawls, but is it actually what he meant?

Or did he just assume that you don't need a mutually acceptable protocol for deciding how to allocate resources, and you can just skip right to enforcing the desirable outcome?

Can you explain why return on cash vs. return on equity matters?

Benquo (5y):
Oops, I used the wrong term. I meant return on marginal investment vs returns on equities as an investment vehicle (i.e. stock market returns). Will edit to clarify.

I'm struck by the assumption in this essay that you have a clear distinction between your own values and other people's.

I think that having a clear sense of personal identity can be difficult and not everyone may be able to hold on to their own perspective. I am concerned that this might be especially hard in an era of social media, when opinions are shared almost as soon as they are formed. Think of a blog/tumblr/fb that consists almost entirely of content copied from other sources: it is nominally a space curated/created by "you", bu... (read more)

Benquo (5y):
Related: Be secretly wrong (http://benjaminrosshoffman.com/be-secretly-wrong/)

Note that the examples in the essay of mechanisms that produce inefficiency are union work rules, non-compete agreements between firms, tariffs, and occupational licensing laws. The former three are not federal regulations on industries, and so would not show up in a comparison of industry dynamism vs. regulatory stringency.

Benquo (5y):
The vast majority of occupational licensing laws in the US are not federal regulation either.
Benquo (5y):
Yeah, the gap between companies' marginal return on cash investment and sector-specific [EDITED TO SAY: stock market performance] would be a better metric. This would still be incomplete; it leaves out factors like:

  • "Hollywood accounting" (systematic underassessment of profits in order to extract more money from counterparties with compensation tied to profits)
  • Labor's share of income also being elevated by e.g. union work rules

Overall it seems like summary statistics are a pretty limited tool here relative to engaging with the concrete details of what's going on in specific situations. (This is why I love Jane Jacobs so much.)

Ok, this is a counterargument I want to make sure I understand.

Is the following a good representation of what you believe?

When you divide GDP by the price of a commodity with a nearly fixed supply (like gold or land), we'd expect the price of the commodity to go up over time in a society that's getting richer -- in other words, if you have better tech and better and more abundant goods, but not more gold or land, you'd expect other goods to become cheaper relative to gold or land. Thus, a GDP/gold or GDP/land value that d
... (read more)
paulfchristiano (5y):
Yes. The detailed dynamics depend a lot on the particular commodity, and how elastic we expect demand to be; for example, over the long run I expect GDP/oil to go way up as we move to better substitutes, but over a short period where there aren't good substitutes it could stay flat.
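The fixed-supply intuition above can be made concrete with a toy model (the assumptions and numbers here are mine, purely illustrative, not from the thread): if spending on the commodity stays a constant share of income, the commodity's price scales with GDP, so GDP denominated in the commodity is exactly flat no matter how fast the economy grows.

```python
# Toy model -- illustrative assumptions only.
# If spending on a fixed-supply commodity stays a constant share of income,
# the commodity's price scales with GDP, so GDP measured *in units of the
# commodity* stays flat even while real GDP grows.

FIXED_SUPPLY = 1000.0  # ounces of gold in existence; assumed constant
SPEND_SHARE = 0.02     # fraction of GDP spent on gold; assumed constant

def gold_price(gdp):
    # Market-clearing price: total spending on gold divided by fixed supply.
    return SPEND_SHARE * gdp / FIXED_SUPPLY

for gdp in [1e6, 2e6, 4e6]:        # economy doubling twice
    print(gdp / gold_price(gdp))   # GDP denominated in gold: 50000.0 each time
```

Under these (strong) assumptions, GDP/gold equals supply divided by spending share, a constant -- which is the flat-line behavior the parent comment describes for inelastic commodities.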

I agree that Carnegie's US Steel is not the type of "monopoly" that I consider socially harmful. I seem to remember that there is empirical evidence (though I don't know where) that monopolies due to superior product quality/price are actually fragile, and long-term monopolies must be maintained by legal privileges to survive. (If anybody remembers where, I'd appreciate a reference.)

quanticle (5y):
Leaving aside Carnegie Steel, would you consider either AT&T or Microsoft to be socially beneficial monopolies?

In this context, thinking about whether you are "good" is not "constructive."

Thinking about whether you're doing something "constructive" is, by contrast, extremely constructive.

lionhearted (Sebastian Marshall) (5y):
Well said. At the risk of asking for elaboration on an obvious point, do you have any examples of when this has paid off for you? Or perhaps write a top-level post? On the one hand, it's very easy to get one's mind around what you wrote... but I'd speculate there might be some non-obvious takeaways? It's a fascinating point. It'd be cool to read more about your perspective on it.

Here's my trajectory:

1.) Worry a lot about "I'm not good"

2.) Improve in some dimensions, also refactor my moral priorities so that I no longer believe some of my 'bad traits' are really bad

3.) Still worry a lot about "I'm not good" where "good" refers to some eldritch horror that I no longer literally endorse

4.) Learn the mental motion of going "fuck it", where I just rest my brain and self-soothe. Do that until I deeply do not give a fuck whether I'm good or not.

5.) Notice a mild but c... (read more)

I'm a little more optimistic about calorie restriction mimetics than Aubrey, but I think everybody sensible has pretty low confidence about this.

Practical constraints. The main contributor to the cost of a lifespan study is the cost of upkeep for the mice -- so it's proportional to the number of mice and the length of the study. Testing 50 compounds at once means raising 50x the money at once (which is out of reach at the moment) and may also run into capacity constraints at labs/CROs.
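The scaling described here can be sketched with a back-of-the-envelope calculation. All of the dollar figures and mouse counts below are hypothetical, made up for illustration (they are not LRI's actual budget):

```python
# Back-of-the-envelope cost model with made-up numbers.
# Upkeep cost scales with (number of mice) x (study length), so testing
# 50 compounds at once requires ~50x the money up front.

UPKEEP_PER_MOUSE_MONTH = 2.0  # dollars per mouse per month; hypothetical
MICE_PER_COMPOUND = 50        # mice per treatment arm; hypothetical
STUDY_MONTHS = 36             # roughly a full mouse lifespan

def study_cost(n_compounds):
    n_mice = MICE_PER_COMPOUND * n_compounds
    return n_mice * STUDY_MONTHS * UPKEEP_PER_MOUSE_MONTH

print(study_cost(1))   # 3600.0 -- one compound
print(study_cost(50))  # 180000.0 -- fifty compounds at once
```

The point is just the linear scaling: since per-mouse upkeep dominates, cost grows proportionally with the number of compounds tested in parallel, which is why the funding has to be raised all at once.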

Yep, that is my position.

(I've talked a bunch with Aubrey de Grey and he is very much supportive of the LRI's program. We're complements, not substitutes.)

Thanks; I think I was just wrong here, I didn't think of that.

Raemon (5y):
I think it might be good to update the title of the post with an [Edit: Updated] tag or something. It might also be useful for the disclaimer you added to the top to include a more complete explanation of your current take on the situation (although that's maybe a bit more work, and depends on what your actual current take is). AFAICT, Ben's comment was a decent synthesis of the point you were originally intending to make and gwern's counterpoint. While I don't think the point as originally worded was correct, it was still pointing at a fact that I don't think most people have fully integrated into their model, and that seemed important enough that just saying 'refuted' didn't quite seem right either.

This is not normal behavior on her part. This is domestic violence. The standard advice is to leave people who hit you. Possibly after clearly stating that you are not okay with being hit and you will leave if it continues, and giving her a chance to change her ways. Maybe she should work with a professional to help with her anger problems. But there is a significant risk that a person who regularly attacks you will escalate.

Vaniver is right.

The mainstream biogerontology perspective is that there's an evolutionarily conserved "survival program", probably developed for surviving famines, that can slow the aging process somewhat. This is the stuff you'll find in Cynthia Kenyon's research, for instance. The hope is that you can find drugs that stimulate these pathways, and thereby slow down the incidence of age-related diseases. This is the approach LRI is taking.

The SENS position, as I understand it, is that this won't work. As you go up from yeast t... (read more)

ChristianKl (5y):
As far as I understand the SENS position, it's: approaches like getting a few substances that act on a few pathways might add a few years of life, but they won't provide long-term immortality. Do you think it's different, and that you actually have different assumptions about the number of life-years that can be gained by a few of the substances you might test?

I really don't relate to the externalization people use about "lotus-eating", like, "Facebook is making me addicted, even though I want to be productive." Implicitly that means the "real" me is into "good" meaningful stuff. And that's not how it feels. It feels like I have very strong drives towards the bad stuff (like "contacting exes to annoy them") and Facebook is just a tool that enables me to do what I want, which is why I deleted my account a year ago, because some of my wants harm other pe... (read more)

Elo (5y):
When I want to do something cravey I work on my quantified self stuff. Forms and graphs.

I can usually tailor the level of jargon correctly. What I can't do that well is figure out how to not make my presence burdensome -- I can feel that I need to "come up with something to say" that makes it worth talking to me, and I'm not great at coming up with those quickly. (When a kid says "tell me a story", I can't do that either. I'm great at discussions, where you have to speak off the cuff in relation to some subject, but open-ended improv is hell.)

Elo (5y):
You have the option of repeating back to people what they have just said or asked: "Your question was xyz." "Oh, you want x." It's good for validating what they say, and it's good for giving you a bit more time to talk. You might like to read the book "Impro" to understand how spontaneous responses are supposed to work.
ESRogs (5y):
Interesting -- I wouldn't have connected that to seeing from another's point of view. Is there a connection there I'm missing? They seem like separate skills to me.

I really like this.

Let me try to apply it to an example in my own life. I'm frequently telling people about a project I'm working on. I'd like it to be well received, to make a good impression, and also to enlist help or advice.

This is probably consultation, collaboration, or delegation, depending on whom I'm talking to, right?

And "how to win people to your way of thinking" clearly seems to apply.

"Never say you're wrong" confuses me -- yes, there are people you can't afford to flatly contradict, but what d... (read more)

Elo (5y):
You want to make sure you are doing collaborative truth-seeking, not adversarial truth-seeking. Some statements to bring people to the same side, with the intent to seek truth, can help: "We agree that we both want to get to the bottom of this, and with the right evidence we'd definitely change our minds. Let's work together to find out what the true state of affairs is." Make smaller claims. I make a lot of progress by only taking small steps; you may find that you have made inferential leaps in reasoning if you make smaller steps. Vulnerability suggests you need to dare greatly and take the risk of sharing the things that are hard to say, in order to reap the rewards of connecting well. You need to be able to say, "I understand that you say x," and have them agree. You don't have to say, "You are so right when you say x."
Elo (5y):
Silly question: which "this"? It depends what you want. The four were specifically about making a decision. Say, "Should we fire Bob?" or "We are firing you" vs. "This is your chance to tell us why we aren't going to fire you" -- where it's important to be on the same page about the type of conversation you are having. For telling people about a project, you probably want to share and connect: give some understanding, be vulnerable. The rest are a summary of the book htwfim. I disagree with a few of them and find NVC better. "You are wrong" is a lot like "your map is wrong". It is less accusatory to say instead "my map says this". The optimum form would be: "I am confused. If I am understanding you correctly, you said x, where my understanding was y. Can you explain why you think x?"
ESRogs (5y):
Which parts of this are hardest -- are you able to do things like guess at when a jargon word would be unfamiliar to the person you're talking to? Or is the difficulty more in having to remind yourself something like, "Oh right, some people honestly believe X," and then ask yourself, "So what would that make them think about Y?" while the conversation is ongoing?

Wanted to make a testable prediction that would be resolved soon.

sarahconstantin (5y):
And... yep, 33% objective response rates, which is great. https://www.google.com/amp/s/immuno-oncologynews.com/2018/04/20/dynavax-immunotherapy-and-keytruda-fight-head-and-neck-cancer-trial-shows/%3famp

You took the update “subjective emotional states aren’t very important, because they can happen when objectively everything is fine.” From the same observation, I took the update “objective conditions aren’t very important, because I can still feel lousy when objectively everything is fine, or great when it isn’t.” Is there a reason you took the former approach?

lionhearted (Sebastian Marshall) (5y):
From a problem-solving perspective, conditions like dehydration, lack of sleep, and the like actually need remedying objectively. With subjective states, it's not always true that anything needs to change objectively.

"You can't pick winners in drug development" rhymes with a cluster of memes that are popular in the zeitgeist today:

  • "Complicated things can't be understood from first principles"
  • "Collecting a lot of data without models is better than building models"
  • "People don't engage in abstract reasoning much, they do things by feel and instinct"
  • "Don't overthink it"
  • "What it means to be human" refers to what distinguishes us from machines, not what distinguishes us from animals

Once you clari... (read more)

romeostevensit (5y):
Evidence in support of first-principles reasoning generally resorts to cherry-picking, IME. In contrast, when I look through what methodology I can find on breakthrough thinkers in biographies and autobiographies, I find something less like 'a flash of inside-view brilliance' and more like 'tried something over and over again in the presence of feedback loops and kept trying to find simple models that would explain most/the core of the data' (to account for noise in the data-gathering process). Once a simple model was found, it was tested and extended to establish its domain of validity. These thinkers themselves often point out multiple false starts where elegant inside-view models were developed but eventually needed to be abandoned. We don't see as many of those looking back, since people rarely record them unless their abandonment was noisy. Scott points to several in the history of depression models, IIRC.

Which I suppose is to say that I don't think you can pick winners using first-principles reasoning, even though first-principles reasoning is how we move forward. Like an exploratory/confirmatory thing.

I do agree that 'thinking isn't so great' serves much more as an excuse to avoid the 99% perspiration than as a claim about the 1% inspiration. The 'thinking isn't so great' frame can be helpful when it points people toward the idea that 'summon sapience' includes more than symbolic analytic techniques. Presence is expensive, especially at first. So people try to avoid it.

I don't believe so (at least I've never heard of a public one; sometimes large companies have internal prediction markets).

ryan_b (5y):
Related through the prediction of which drugs will succeed and which won't: are you familiar with Roger M. Stein? He does financial engineering research with MIT, and has done some work related to different ways to fund drug research. In particular, he suggested a fund for securities made up of pharmaceutical IP, which would work by re-securitizing a batch of drugs after each stage in the trials (as I understand it). The pitch for the fund in a TED talk is here: https://www.youtube.com/watch?v=9H38oQBw2HU. The list of publications from his website is here: http://www.rogermstein.com/publication-list/. I think the relevant papers are "Commercializing biomedical research through securitization techniques" (http://www.rogermstein.com/wp-content/uploads/FernandezSteinLo_NBT_2012.pdf), "Can Financial Engineering Cure Cancer?" (http://www.rogermstein.com/wp-content/uploads/AER2013_Pub1.pdf), and "Financing drug discovery for orphan diseases" (http://www.rogermstein.com/wp-content/uploads/1-s2.0-S1359644613004030-main-1.pdf). My knowledge of finance is not good, so I am hoping someone can verify whether this passes the sniff test.

Yes, I think this absolutely does count as rhetoric in the classical sense (being concise, expressing the right points, good body language, and good delivery).

See here: https://en.m.wikipedia.org/wiki/De_Oratore https://en.m.wikipedia.org/wiki/Rhetoric

It’s not meaningless if you view rhetoric as “how to speak well” rather than “how to speak artificially and misleadingly.”

The best persuasive speakers I’ve ever seen in person are, unsurprisingly, lawyers. I saw Robert P. George speak once and thought “This is an atom bomb in the form of a man; I want that power.”

It’s not mere demagoguery. There’s structure to the arguments. And I’m pretty sure the same places that trained him to make arguments also trained him to speak effectively.

I read it and didn’t see much there I didn’t know before, but I can give it another shot.

I kind of want to learn elocution — any thoughts on how?

lbThingrb (5y):
Toastmasters? Never tried them myself, but I get the impression that they aim to do pretty much the thing you're looking for.

This matches my experience.

I think academic math has a problem where it's more culturally valorized to *be really smart* than to teach well, to the point that effective communication actually gets stigmatized as catering too much to dumb people.

Having left academic math, I am no longer terrified of revealing my stupidity, so I can now admit that I learned intro probability theory from a class in the operations research department (that used an actual syllabus and lecture notes! unlike most math classes!), that I learned more about solving ODEs from m... (read more)

totallybogus (5y):
I don't think that's the issue exactly. My guess is that academic math has a culture of teaching something quite different from what most applied practitioners actually want. The culture is to focus really hard on how you reliably prove new results, and to get as quickly as possible to the frontier of things that are still a subject of research and aren't quite "done" just yet. Under this POV, focusing on detailed explanations about existing knowledge, even really effective ones, might just be a waste of time and effort that's better spent elsewhere!

I did update away from believing Dragon Army was dangerous after visiting there.

It was all people whom I’d expect to be able to hold their own against strong personalities. And they clearly didn’t have a strongly authoritarian culture by global or historical standards.

Thanks.

I'll explicitly note that I felt extremely fairly treated by your criticisms at the time of the original post, and found constructive value in them. You are not in the reference class I was snarking at above.

I do wish more of the people who had strong concerns had taken steps such as investigating who else was involved; the list was known to within a few people even back when the charter first went up, but as far as I can recall no one publicly stated that they thought that might be crucial.

The gland thing seems weird to me. Most internet sources associate the first chakra with the adrenals, which sit on top of the kidneys and aren’t physically anywhere near the usually pictured location of the first chakra (at the base of your spine.) Most sources associate second chakra with the ovaries (which are in the right place) or testes (which aren’t, afaik, in anyone’s lower belly area.) I’d been thinking of second chakra as basically my uterus, but altering hip posture is relevant for e.g. relieving uterine pain.

Chakra stuff that seems empirically true to me:

  • The pelvic floor (first chakra) is connected to the emotional sense of safety. You instinctively tighten it when nervous and relax it when secure. Learning to feel and move your pelvic floor is useful; the correct way to push in childbirth is to tense your abs but relax your pelvic floor, and having practiced this a lot beforehand was helpful for me, since it's counterintuitive for most people.
  • Your abs (roughly third chakra) are obviously connected to willpower and strength, since you use them for all f... (read more)

leggi (4y):
Spurred on by your willingness to consider the anatomy of your body and its connection to what you experience... try focusing on the 5 main muscles made easy (https://www.lesswrong.com/posts/mThK4M25rARrPrj8X/the-5-main-muscles-made-easy) as you move (and rest). Build the connection; feel for their relative state and positioning. I've come to believe chakras are trying to describe the feeling/experience of the body when it is being used 'correctly', and that these muscles are the key -- if I can just get a few people to give it some thought. Working with these muscles, regaining my range of movement, and releasing the tensions, both physical and mental, has made life so much better.
Elo (5y):
This matches Daniel Goleman in his book Emotional Intelligence. There's a link between physiological state and emotional state, and it goes in both directions. This is evident when you can calm down by concentrating on your breathing, among other examples.

I'd really love a friendly, narrative-style introduction to military history or space travel. (Audio, video, book, blog, in-person infodumping, all welcome.)

I have simple questions like "what are the parts of a rocket and how do they work?" and "what actually made Napoleon a great general?" and would like a level of depth that's higher than children's books but lower than Wikipedia.

Can you clarify what you mean by "renunciation of the self"?

In David Chapman's writing, I think he makes the claim that selves do not exist, and he's a Tantra practitioner. (My perspective is that he has a different definition of "exist" than me, but that we're pointing at the same observations.) He doesn't believe in, and Tantra doesn't preach, a renunciate lifestyle -- they think it's okay to eat meat, have sex, earn money, and so on.

David_Chapman (5y):
I would not say that selves don't exist (although it's possible that I have done so somewhere, sloppily). Rather, selves are both nebulous and patterned ("empty forms," in Tantric terminology). Probably the clearest summary of that I've written so far is "Selfness" (https://meaningness.com/self), which is supposed to be the introduction to a chapter of the Meaningness book that does not yet otherwise exist. Renouncing the self is characteristic of Sutrayana (https://vividness.live/2013/10/22/sutrayana/).

Things I've learned over the years related to this:

  1. If someone argues with great force and convincingness that you shouldn't help them because they'll never get better, believe them.
  2. Never, ever hire someone for a job they can't do as a "favor." This causes more trouble than simply giving them money.
  3. You can be friends with someone who has serious life problems. (It would be awful and callous if you couldn't.) But you need to both acknowledge that you probably won't be able to singlehandedly fix their whole life, and
... (read more)

I'm not sure exactly what "weird" is made of, on a gears level.

I do have a guess, taken from personal experience. I remember recently being in the orientation session for a new job, and I resolved to fit in and get along with people. Within a few days, though, I was "out of step" with the group and was very clearly more isolated than others.

It wasn't that I had done something especially shocking or unconventional, though. I decided that I wanted to get up early and exercise, so I went to the gym one morning; as a result,... (read more)

Connor_Flexman (5y):
I'm not confident these are the right gears, and you might be asking for more refined gears than mine, but my working hypothesis is something like: the umbrella concept of weirdness is about whether people can predict your actions, since this is extremely useful information to track for a social animal. Predictability, and therefore weirdness, are tracked on a variety of levels -- you can be weird because of your sleep schedule, or weird because of your nervous tics and body language, or weird because you talk in a very normal manner about the impending alien rapture, or weird just because you're a foreigner. The weirdness of an action registers as flags on various mental levels to help you predict when that person might later not do the canonical action, and it registers with a magnitude and some metadata to help you track their weird trait(s) for inner simming.

To answer the question of how much disconformity is "enough" to be labeled weird, I have to hand-wave and say that typical people's social neural nets just get very good at inferring which infractions correspond to how much likelihood of what level of difficulty coordinating with them. (If this is the meat of the question, I could say more later.)

Unfortunately, "weird" has had some semantic drift, since unpredictability often happens to correlate with "being a less valuable ally" in a variety of ways, for systemic or intrinsic reasons. Two important subtypes of weird where this is evident are 1) the people you mention who are just kind of loners, and 2) the people who actually provide frequent disvalue. The loners are "weird" because they can and do take actions the group hasn't decided on, which makes them harder to coordinate with and significantly less predictable. But this also correlates with them being weird in other ways, and so it is rightly seen as Bayesian evidence of other problems by their peers -- and further, people who sometimes leave the group are just less valuable allies (for dependability, fo

The problems with believing in fate or Providence start to become real when bad things happen to you.

If you imagine that the universe is conspiring to help you when things go right, you can also imagine that the universe is conspiring to hurt you when things go wrong, and that’s terrifying. Ordinary failure and misfortune is easier to recover from than the creeping fear that you’ve angered God. I’ve been there; it sucks.

That seems to depend on the nature of the belief, though. Some people with a belief of fate seem to gain strength from it even during misfortune, thinking not "the universe is out to get me", but something like "well I guess this was the universe's way of [setting me on a better path / reminding me not to take for granted what I have / insert-some-other-benefit-here]".

If you have sufficiently strong faith in the universe being benevolent, you can probably find some positive angle from any event and focus on that.

Ok, here’s a 2x2 that captures a lot of the variation in OP:

abstract/concrete x intuitive/methodical.

Intuitive vs. Methodical is what Atiyah, Klein, and Poincare are talking about. Abstract vs Concrete is what Gowers, Rota, and Dyson are talking about.

Abstract and intuitive is like Grothendieck.

Concrete and intuitive is like geometry or combinatorics.

Concrete and methodical is like analysis.

Abstract and methodical — I don’t know what goes in this space.

PeterBorah (5y):
This seems good. I was definitely getting the sense there were at least two axes, and these seem to capture a lot of it. Could Abstract/Methodical be something like Russell and Whitehead's Principia Mathematica? Also, I'm interested that Concrete/Methodical is analysis, given the Corn post. I would have expected it to be Intuitive? (I don't actually do higher math, so I don't know from personal experience.)

The chakra thing sounds right; another way of putting it is that algebra is more verbal & geometry is more visual/spatial. (IMO, analysis is visual/spatial too.)

Qiaochu_Yuan (5y)
I'm not sure what my brain is doing when it thinks about analysis, but I'm not convinced it's visuospatial. More concretely, let's suppose I'm trying to analyze the asymptotic behavior of some function which is a sum of terms that have different growth rates, say f(x) = 1/x + e^x. What I could be doing, if I were doing this visually, is visualizing the asymptotic behavior of e^x ("grows fast as x gets big") as a curve that curves up really fast, and similarly for 1/x ("grows fast as x gets small, goes to zero as x gets big") as a curve that starts big and gets small.

But I think that's not what I'm actually doing first, although that is a mode of thought I can use and find helpful. I think I am actually working with something like a primitive felt sense of bigness and smallness, which is not about visual tallness (to triangulate, another metaphor for bigness that isn't visual is weight: e^x gets "heavy" and 1/x gets "light"). Not sure though, because the visual thought also happens pretty quickly after this and everything is correlated (heavy things are large in my visual field, etc.).
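(The asymptotic claim in the comment above is easy to check numerically. This is a minimal sketch added for illustration, not part of the original comment: for f(x) = 1/x + e^x, the 1/x term dominates for small x and the e^x term dominates for large x.)

```python
import math

def f(x):
    # f(x) = 1/x + e^x: 1/x dominates as x -> 0+, e^x dominates as x -> infinity
    return 1 / x + math.exp(x)

# Small x: f(0.001) = 1000 + e^0.001, so almost all of the value comes from 1/x.
print(f(0.001))               # ≈ 1001.0

# Large x: f(10) / e^10 is within a few millionths of 1, so e^x dominates.
print(f(10) / math.exp(10))   # ≈ 1.0
```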

Thanks for quantifying!

Yep, I can do that too.

I eat corn like an analyst, and I am an analyst. I also use vim over emacs, like Lisp, and find object-oriented programming weirdly distasteful.

However I don’t think analysis and algebra are usually lumped together and opposed to geometry; my understanding was that traditionally algebra, analysis, and geometry were the three main fields of math.

I tend to think of the distinctions within math as about how much we posit that we know about the objects we work with. The objects of study of mathematical logic are very general and thus can be very “pervers... (read more)

drossbucket (5y)
No, I also definitely wouldn't lump mathematical analysis in with algebra... I've edited the post now as that was confusing, also see this reply [https://www.lesserwrong.com/posts/5QnvHZpy4pGgCo3Pp/two-types-of-mathematician#CspHMKc7bw8hmTHpR]. Your 'how much we know about the objects' distinction is a good one and I'll think about it. Also vim over emacs for me, though I'm not actually great at either. I've never used Lisp or Haskell so can't say. Objects aren't distasteful for me in themselves, and I find Javascript-style prototypal inheritance fits my head well (it's concrete-to-abstract, 'examples first'), but I find Java-style object-oriented programming annoying to get my head around.

Okay, I think that’s a difference between us. I hear that kind of language not as saying something denotatively, but as more like “casting a spell” on the audience. It doesn’t throw up the “error: that doesn’t make sense/seem fair” response because I’m not expecting it to be communication in the first place.

Someone who wants me to relax, say, and is putting verbal and nonverbal optimization pressure into getting me to relax, is going to cause me to relax, just because I want to be compliant in general. For me, only a totally expressionless and artificial... (read more)

Ah!

You aren’t in fact charmed (or overawed) by people who use feelings-heavy, mystical, or salesy talk — you instead hear it as an explicit/denotative request for you to be charmed, which you think is unjustified. Is that right?

cousin_it (5y)
Yes!

I’m trying to pinpoint where you think asking leading questions like “how do you feel” is different from smiles, dance, and poetry. They do seem different, but I’m not sure why.

cousin_it (5y)
Smiles and poetry appeal to the PR department. Asking "how do you feel" is a request to bypass the PR department. Many of my comments in these threads (like the fish comment, or the one about hippie dreams) are trying to argue that no one is entitled to bypass anyone else's PR department. You've got to go through proper channels. If you're charming, then charm me.

Just posting to record that this post successfully alarmed me, by raising the possibility that I might be missing really important things.

Yeah, to me it feels like "sure, you can do 'magic' and make me cry and hug and shudder, but that has very little to do with my long-term behavior patterns, it's just a transient effect." It feels like being flipped onto the mat by a skilled martial artist; I'm being a guinea pig for someone to demonstrate a cool trick.

PeterBorah (5y)
My experience is that the cluster of experiences around "cry and hug and shudder" are what it feels like to become aware of something that's important to my system 1, and that those moments are intervention points for shifting system 1's heuristics. Progress on reducing akrasia, unendorsed social anxiety, etc. has often come from moments like that. I don't know you well, but I model you as someone with strong willpower and a general "mind over matter" attitude. This may make it less salient what your system 1 is up to?

Yep!

You can prompt someone to "open up" about their desires or inner experiences in order to know them better, and knowing them better allows you to more precisely and smoothly do nice things for them.

Can this feel scary and vulnerable? Yep! I totally feel uncomfortable when someone is learning all about me in order to, unprompted, do me favors. Somebody who wanted to hurt me could definitely use that knowledge maliciously. It's just that sometimes that fear is unfounded.
