This is a special post for short-form writing by sudo -i. Only they can create top-level comments. Comments here also appear on the Shortform Page and All Posts page.
Signalling Considered Harmful.
I want to write an essay about how we so dramatically overvalue signalling that it might be good to completely taboo it for oneself.
Tabooing the word, or tabooing the action? I'd love to read an exploration of what types of signaling one COULD avoid - certainly we can recognize and apply less weight to them, but many (the majority, IMO) of signals are unavoidable, because there are actual goals and evaluations they impact.
Action, but also advice to do the action. I think you're somewhat right.
It seems like an important concept to me. Tabooing it sounds to me like becoming less aware of why things happen.
I need to think about this.
I think LW folks often underestimate the importance of serendipity, especially for pre-paradigmatic fields like AI Alignment.
You want people learning functional programming and compiler design and writing kernels and playing around with new things, instead of just learning the innards of ML models and reading other people’s alignment research.
You even want people to go very deep into tangential things, and become expert kernel designers or embedded systems engineers. This is how people become capable.
People really should try to not have depression. Depression is bad for your productivity. Being depressed for, e.g., a year means you lose a year of time, AND it might be bad for your IQ too.
A lot of EAs get depressed or have gotten depressed. This is bad. We should intervene early to stop it.
I think there should be someone EAs reach out to when they’re depressed (maybe this is Julia Wise?), who then tells them the ways they’re probably right and wrong so their brain can update a bit, and gives them a reasonable action plan to get them onto therapy or meds or whatever.
I think this is probably good to just 80/20 with like a weekend of work? So that there’s a basic default action plan for what to do when someone goes “hi designated community person, I’m depressed.”
I don't disagree, but I don't think it's limited to EA or Rationalist community members, and I wouldn't expect that designated group helper contacts will reach most of the people who need it. It's been my experience (for myself and for a number of friends) that when someone can use this kind of help, they tend not to "reach out" for it.
Your framing of "we should intervene" may have more promise. Having specific advice on HOW lay-people can intervene would go a long way toward shifting our norms of discourse from "you seem depressed, maybe you should seek help" to "this framing may indicate a depressive episode or negative emotional feedback loop - please take a look at <this page/thread> to help figure out who you can talk with about it".
Sometimes when you purchase an item, the cashier will randomly ask you if you’d like additional related items. For example, when purchasing a hamburger, you may be asked if you’d like fries.
It is usually a horrible idea to agree to these add-ons, since the cashier does not inform you of the price. I would like fries for free, but not for $100, and not even for $5.
The cashier’s decision to withhold pricing information from you should be evidence that you do not, in fact, want to agree to the deal.
You could always ask.
I ignore upsells because I've already decided what I want and ordered that, whether it's extra fries or a hotel room upgrade.
For most LW readers, it's usually a bad idea, because many of us obsessively put cognitive effort into unimportant choices like what to order at a hamburger restaurant, and reminders or offers of additional things don't add any information or change our modeling of our preferences, so are useless.

For some, they may not be aware that fries were not automatic, or may not have considered whether they want fries (at the posted price; if price is the decider, they can ask), and the reminder adds salience to the question, so they legitimately add fries.

Still others feel it as a (light, but real) pressure to fit in or please the cashier by accepting, and accept the add-on out of guilt or whatever.
Some of these reasons are "successes" in terms of mutually-beneficial trade, some are "predatory" in that the vendor makes more money and the customer doesn't get the value they'd hoped. Many are "irrelevant" in that they waste a small amount of time and change no decisions.
I think your heuristic of "decline all unsolicited offers" is pretty strong, in most aspects of the world.
Did some math today, and remembered what I love about it. Being able to just learn, without the pressure and anxiety of school, is so wonderfully joyful. I'm going back to basics, and making sure that I understand absolutely everything.
I'm feeling very excited about my future. I'm going to learn so much. I'm going to have so much fun. I'm going to get so good.
When I first started college, I set myself the goal of looking, by now, like an absolute wizard to the me of a year ago. To be advanced enough to be indistinguishable from magic.
A year in, I can now do things that I couldn't have done a year ago. I'm more lucid, I'm more skilled, I'm more capable, and I'm more mature than I was a year ago. I think I did it.
I'm setting myself the same goal again. I'm so excited to hit it out of the park.
I was watching some clips of Aaron Gwin (the American professional mountain bike racer) riding recently. Reflecting on how amazing humans are. How good we can get, with training and discipline.
It takes a certain degree of maturity and thought to see that a lot of advice from high-profile advice-givers is bad.
It could be valuable to do point-by-point critiques of popular advice from high-profile technologists.
Enlightened:
Terminal goal -> Instrumental goal -> Planning -> Execution
Buffoonery:
Terminal goal -> Instrumental goal -> Planning -> wait what did [insert famous person] do? Guess I need to get a PhD.
There's something really tyrannical about externally imposed KPIs.
I can't stop thinking about my GPA even if I make a conscious choice to stop optimizing for it.
Choosing to not optimize for it actually made it worse. A lower number is louder in my mind.
There's something about a number being used for sorting that completely short circuits my brain, and makes me agonize over it.
Yeah, most sane humans seem to have a deep-seated drive for comparisons with others. And numeric public comparisons trigger this to a great degree. GPA is competition-porn. Karma, for some, is social status junk-food.
This measure ALSO has some real value in feedback to you, and in signaling for future academic endeavors. The trick, like with any modern over-stimulus, is in convincing your system 1 to weight the input appropriately.