All of riceissa's Comments + Replies

I found that if my tongue doesn’t block my mouth I can only breathe through my mouth and if it does I can only breathe through my nose.

Huh, this isn't what happens when I try it. If I keep my tongue out or at the base of my mouth, I can still definitely choose whether to make the air go through my nose or mouth. If I try to block my mouth with my tongue, that does obstruct the airflow through my mouth but I can still breathe mostly okay (even if I plug my nose).

I used to have a model of breathing that went something like this: when breathing in, the lungs somehow get bigger, creating lower air pressure inside the lungs causing air to flow in. Then when breathing out the lungs get smaller, creating higher air pressure inside the lungs and causing air to flow out. How do the lungs get bigger and smaller? Eventually I learned that there's a muscle called the diaphragm that is attached to the bottom of the lungs (??) that pulls or pushes the lungs. If I keep my nose plugged but my mouth open, the air will travel thro... (read more)

2ChristianKl5d
This is the kind of question that ChatGPT can answer really well. 
1O O5d
I did some quick experimentation. I found that if my tongue doesn’t block my mouth I can only breathe through my mouth and if it does I can only breathe through my nose. I then didn’t block my mouth airway with my tongue and blocked my mouth with my hand. It seems air doesn’t go through my nose in that case unless I breathe in really hard, in which case I hear and feel something opening in the back of my nose. I’m guessing there is another valve in the nose. If you had allergies growing up, you’d already know all of this.

I don't think "throw every explanation possible" is the right takeaway from your experience. To me, it seems like the teacher was failing to model what you were getting stuck on, and so the takeaway would be something more like "try to model the learner better, so as to produce better (not more!) explanations".

"Throw every explanation possible" might still be learning-complete in some sense, so might be worth exploring.

Back in the 2010s, EAs spent a long time dunking on doctors for not having such a high impact (I'm going off memory here, but I think "instead of becoming a doctor, why don't you do X instead" was a common career pitch). I basically mostly unreflectively agreed with these opinions for a long time, and still think that doctors have less impact compared to stuff like x-risk reduction. But after having more personal experience dealing with the medical world (3 primary care doctors, ~10 specialist doctors, 2 psychiatrists, 2 naturopaths, 3 therapists, 2 nutrit... (read more)

2Viliam24d
The model for dunking on doctors was something like: there is a limited number of doctor positions, so even if the hypothetical best doctor ever chooses a different career, it will not mean fewer doctors; it will just mean that the second-best doctor takes their place instead. But the second-best doctor ever is also a very good doctor, so the difference in outcome will be very small.

Now, I am not sure I remember the argument correctly. But if I do, it is obviously flawed. Not only is the previous job of doctor#1 now taken by doctor#2, but the previous job of doctor#2 is now taken by doctor#3, and so on, until we reach the hypothetical limit: the previous job of doctor#N is now taken by a person who previously wouldn't have gotten the license, but who now becomes doctor#N+1. So the overall change for the field of medicine is losing doctor#1 and gaining doctor#N+1 (and shifting the remaining doctors). The difference between doctor#1 (the best doctor ever) and doctor#N+1 (who barely gets the license), multiplied by the length of their careers, could indeed mean a difference of many lives saved. It is just not very visible, because all those lives are not saved at the same place, but distributed along the chain.

The same reasoning also applies to the effective altruists, of course. It's just that there is no guarantee the hypothetical best doctor ever would become the most impactful effective altruist ever. They might just as well become a mediocre one.
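The replacement-chain point can be made concrete with a toy model (all skill numbers below are made up for illustration): when the best doctor leaves, everyone shifts up one slot and a marginal candidate joins, so the field's net loss is the gap between the best doctor and the marginal one, not the gap between the best and second-best.

```python
def field_after_exit(skills, marginal_skill):
    """Toy replacement-chain model. `skills` lists current doctors'
    skill levels, best first. If the best doctor leaves the field,
    every remaining doctor shifts up one position and a marginal
    candidate (who previously just missed the license cutoff) joins."""
    return skills[1:] + [marginal_skill]

# Made-up skill numbers, best doctor first.
skills = [10, 9, 8, 7]
marginal = 6  # doctor#N+1, who barely gets the license

after = field_after_exit(skills, marginal)

# Net loss to the field is skill(#1) - skill(#N+1), here 10 - 6 = 4,
# not the small gap skill(#1) - skill(#2) = 1.
loss = sum(skills) - sum(after)
```

The loss is real but invisible at any single hospital, because it is spread one slot at a time along the whole chain.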

Typographers focus almost exclusively on designing texts that are meant to be read linearly (and typography guidelines follow this as well, telling writers to limit line length, use a certain font size, etc.). But if you look at the actual stuff happening in the reader's mind as they interact with a book or webpage, linear reading is only one of many possible ways of interacting with a text. In particular, searching for things, flipping around, cross-referencing, and other "movement" tasks are quite common. For such movement tasks, the standard typographic... (read more)

A while ago a PDF article was posted in the EA space (written by people who are pretty deep into EA) which used the Computer Modern font (the default font used in LaTeX) but which was clearly created using Microsoft Word. The cynical interpretation is that (on some level) the authors wanted to deceive readers into thinking that LaTeX was used to typeset the paper when in fact it was not. I do believe such deception will work, because very few people seem to know anything about typography. (I don't claim to be much better; I've learned just a little bit more than the default state of zero knowledge.) I wonder how people feel about this sort of thing.

5Steven Byrnes1mo
I vote for "it's fine". Maybe the authors just like how that font looks. And if somebody is judging a paper based on how it was typeset, I think that's stupid and I don't care if they get the wrong answer.

I found this Wikipedia article pretty interesting. Even in a supposedly copyright-maximalist country like the US, the font shapes themselves cannot be copyrighted, and design patents only last 15 years. Popular fonts like Helvetica have clones available for free. Other countries like Japan are similar, even though a full Japanese font requires designing 50,000+ glyphs! That is an insane amount of work that someone else can just take by copying all the shapes and repackaging it as a free font. In my experience there are only like a few main Japanese fonts, ... (read more)

Not quite sure what you are asking, but if you mean something like taking some of my points and editing them into your own post, that's fine with me.

This post was apparently translated to Chinese, and there is some discussion there. I can't quite tell if it's actually humans writing the comments (and Chrome's translation is just not very good) or if the content and discussion is all AI-generated.

Here's the list I came up with when I did something similar (I was thinking about written explanations in general, which I called "word explanations" on that page). I have an older attempt here. And here's a similar thing I did for a specific textbook.

2niplav1mo
Nice. Would you mind if I took inspiration from your list (crediting you, of course)?

I didn't log the time I spent on the original blog post, and it's kinda hard to assign hours to this since most of the reading and thinking for the post happened while working on the modeling aspects of the MTAIR project. If I count just the time I sat down to write the blog post, I would guess maybe less than 20 hours.

As for the "convert the post to paper" part, I did log that time and it came out to 89 hours, so David's estimate of "perhaps another 100 hours" is fairly accurate.

4Davidmanheim3mo
I probably put in an extra 20-60 hours, so the total is probably closer to 150, which surprises me. I will add that a lot of the conversion time was spent on writing more, and on LaTeX figures and citations, which were all, I think, substantive valuable additions. (Changing to a more scholarly style was not substantively valuable, nor was struggling with LaTeX margins and TikZ for the diagrams, and both took some part of the time.)

This post by Brian Hamrick makes a similar point about how organizational mottos should prioritize a single thing (but leaves the "large company" part implicit).

Can someone say more about what is meant by credit allocation in this conversation? The credit allocation section here just talks about BATNAs and I don't see how BATNAs are related to what I imagine "credit allocation" might mean. I searched Michael Vassar's Twitter account but there are only three instances of the term and I couldn't quickly understand any of the tweets. I also don't understand what "being able to talk about deceptive behavior" has to do with credit allocation.

I upvoted this post because I think it's talking about some important stuff in ways (or tone or something) I somehow like better than what some previous posts in the same general area have done.

But also something feels really iffy about the way the word "fun" is used in this post. If I think back to the only time in my life I actually had fun, which was my childhood, I sure did not have fun in the ways described in this post. I had fun by riding bikes (but never once stopping to get curious about how bike gears work), playing Pokemon with my friends (but n... (read more)

6TsviBT4mo
Thanks. You make an important point. Fun in general is a broader thing than playful thinking (and deeper and more sacred in some ways), so playful thinking doesn't at all encompass all of fun. Fun and playful thinking are related though; playful thinking is supposed to be fun, and at least for me, the issue with playful thinking is that the fun is being stifled. So following on your last paragraph, the deeper thing is fun simpliciter.

Another point, only hinted at by the phrase "serious play", is that the concept of playful thinking is not supposed to imply unseriousness. Seriousness is not the same as explicit-usefulness-justification, because play can be serious but it's almost impossible for activity driven by explicit-usefulness-justification to be genuine, fully deep fun. (It can be somewhat fun, and some people are blessed to have explicit-usefulness-justifications that spur them into activity that then becomes genuine, fully deep fun. I can sort of do that but not fully, especially because my explicit-usefulness-justifications are pretty demanding and don't want me getting confused about what counts as success.) Serious play, in its seriousness, can involve instruction and taste. It could involve a mentor giving you harsh feedback. It could involve, for example, you saying to yourself: the thing I'm learning about right now, in the way I'm learning about it, does it access [what intuitively feels like] the living, underlying, hidden structure of the world? And then modifying how you're engaging to heighten that sense. It could involve your case of learning a mode of thinking from someone else.

My two cents (although I'm worried about intruding on this, and worried about other people retroactively intruding such that the process is distorted): if at some point you realize that you've gained a lot on claiming it as your own, it would be very valuable to describe that to others. (If you'll allow a flight of fancy: We can only send messages backwards in t

I agree. It seems awfully convenient that all of the “fun” described in this post involves the legibly-impressive topics of physics and mathematics. Most people, even highly technically competent people, aren’t intrinsically drawn to play with intellectually prestigious tasks. They find fun in sports, drawing, dancing, etc. Even when they adopt an attitude of intellectual inquiry to their play, the insights generated from drawing techniques or dance moves are far less obviously applicable to working on alignment than the insights generated from stu... (read more)

I think it's often easiest/most tempting to comment specifically on a sketchy thing that someone says instead of being like "I basically agree with you based on your strongest arguments" and leaving it at that (because the latter doesn't seem like it's adding any value). (I think there's been quite a bit of discussion about the psychology of nitpicking, which is similar to but distinct from the behavior you mention, though I can't find a good link right now.) Of course it would be better to give both one's overall epistemic state plus any specific counter-... (read more)

2Daniel Kokotajlo4mo
Yeah idk, what you say makes sense too. But in at least some cases it seemed like the takeaway they had at the end of the conversation, their overall update or views on timelines, was generated by averaging the plausibility of the various arguments rather than by summing them or doing something more complex. (And to be clear I'm not complaining that this is unreasonable! For reasons Ronny and others have mentioned, sometimes this is a good heuristic to follow.)

This doesn't seem to be what I or the people I regularly interact with do... I wish people would give some examples or link to conversations where this is happening.

My own silly counter-model is that people take the sum, but the later terms of the sum only get added if the running total stays above some level of plausibility. This accounts for idea inoculation (where people stop listening to arguments for something because they have already heard of an absurd version of the idea). It also explains the effect Ronny mentions about how "you may very quickly find that everyone perceives the anti-T-ers as being much more reasonable": people stopped listening to the popular-and-low-quality arguments in favor of T.
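That counter-model is simple enough to sketch. In the toy code below, all the numbers are made up: positive strengths stand for good arguments, and negative ones for dubious arguments that cost the speaker credibility.

```python
def perceived_support(argument_strengths, tune_out_below=0.0):
    """Toy model: a listener sums argument strengths in the order given,
    but stops listening once the running total drops below a threshold,
    so later arguments never get counted (idea inoculation)."""
    total = 0.0
    for strength in argument_strengths:
        total += strength
        if total < tune_out_below:
            break  # the listener has tuned out
    return total

# Leading with two strong reasons and stopping there:
short_list = perceived_support([2.0, 4.0])       # -> 6.0
# Inserting a dubious reason between them:
full_list = perceived_support([2.0, -3.0, 4.0])  # -> -1.0
```

Under this model, curation matters a lot: the dubious middle argument doesn't merely fail to add support, it prevents the strong later argument from being heard at all, which matches the anecdote about two good reasons outperforming the full list.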

3Daniel Kokotajlo4mo
I've noticed it happening a bunch in conversations about timelines. People ask me why my timelines are 'short' and I start rattling off reasons, typically in order from most to least important, and then often I've got the distinct impression that I would have been more persuasive if I had just given the two most important reasons instead of proceeding down the list. As soon as I say something somewhat dubious, people pounce, and make the whole discussion about that, and then if I can't convince them on that point they reject the whole set of arguments. Sometimes this can be easily explained by motivated cognition. But it's happened often enough with people who seem fairly friendly & unbiased & curious (as opposed to skeptical) that I don't think that's the only thing that's going on. I think Ronny's explanation is what's going on in those cases.

I bought these after seeing Wei Dai's post. Everyone in my family and in-person friend group refuses to wear this mask because it makes them look like a duck (besides one person, who refuses to wear it because it is harder to breathe through compared to a surgical mask). So I am the only one wearing this mask. So I agree with your assessment that "the main problem is that they make you look a bit like a duck", but I would add that this is apparently a very strong effect. People really would prefer to be less comfortable or increase their risk of COVID than to look weird.

1Rana Dexsin4mo
I wonder if anyone's tried targeting the avian furry market with these, since those would seem to be the most obvious class of “people who might not mind looking like a duck”. I can't seem to get a good grip on that intersection via search engines, mostly because I get a lot more results related to the non-protective masks more visibly associated with that group.

I think I agree with everything in your comment. Seems like there was less disagreement here than I initially thought. Moving on... :)

I think it's often hard to tell whether something is a psychological problem for an individual or instead a cultural problem with the group. Past social progress can be framed as "society used to think certain individuals had a psychological problem, but then it turned out that the society's rules/norms/culture was the problem". It currently seems to me that a lot of what people view as "psychological problems" are actually an individual's way of saying "something about the culture I find myself in doesn't seem right". I read this post as kinda ignoring this whole issue and making it seem like it's obvious whose problem it is, which I think avoids the hard core of these situations.

Thanks for the comment!

I agree with you that there are situations where the issue comes from a cultural norm rather than psychological problems. That's one reason for the last part of this post, where we point to generally positive and productive norms that try to avoid these cultural problems and make it possible to discuss them. (One of the issues I see in my own life with cultural norms is that they are way harder to discuss when psychological problems additionally compound them and make them feel sore and emotional.) But you might be right that it's ... (read more)

I think that if the umbrella blog post to which a user's shortform posts (which are just comments) get added was created before 2022-06-23, then it won't have agree/disagree votes, whereas ones created on or after that date do.

If you regularly paste sensitive data such as a password or card number, consider other options, such as browser autofill or a password manager.

Some password managers like KeePassXC automatically clear the clipboard after 10 seconds or when you close the program (whichever comes first).
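The auto-clear behavior is easy to sketch. Below is a toy in-memory stand-in for a clipboard (not a real system clipboard; the 10-second figure just mirrors KeePassXC's default) showing the timer-based clearing:

```python
import threading

class ToyClipboard:
    """In-memory stand-in for a clipboard that clears itself after a delay."""

    def __init__(self):
        self.contents = ""
        self._timer = None

    def copy(self, text, clear_after=10.0):
        self.contents = text
        if self._timer is not None:
            self._timer.cancel()  # the newest copy controls the clear timer
        self._timer = threading.Timer(clear_after, self.clear)
        self._timer.daemon = True  # don't keep the process alive just for this
        self._timer.start()

    def clear(self):
        self.contents = ""
```

A real implementation would also clear on program exit, as KeePassXC does (whichever comes first), and would talk to the OS clipboard rather than a Python attribute.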

Some stuff I've encountered that I mostly haven't looked much into and haven't really tried but seem potentially useful to me: heart rate variability biofeedback training, getting sunlight at specific times of day, photobiomodulation (e.g. Vielight), red light therapy, neurofeedback, transcranial magnetic stimulation, specific supplement regimes (example), green powders like Athletic Greens, certain kinds of meditation.

Agreed on epistemically questionable info. I've seen a range of canned advice including defeatist ones.

Lynette's post was interesting because I think I also have something like POTS, but her post is very unlike something I would write myself, and I wouldn't have found the post useful when I was starting out (I actually probably even read the post when it first came out and probably didn't find it useful). I am puzzled at what this means for how generalizable people's experiences are.

And thanks, I'd be interested in introductions to potential collaborators!

Agreed on the epistemic standards of random health groups, and yeah, I'd be interested in a Discord server. I am aware of this Facebook group, if you use Facebook, though it's not very active.

I've been having a mysterious chronic health problem for the past several years and have learned a bunch of things that I wish I knew back when all of this started. I am thinking about how to write down what I've learned so others can benefit, but what's tricky here is that while the knowledge I've gained seems wide-ranging, it's also extremely specific to whatever my problems are, so I don't know how well it generalizes to other people. I welcome suggestions on how to make my efforts more useful to others. I also welcome pointers to books/articles/posts t... (read more)

2avturchin1mo
Yes. I have something like ME/CFS, and all you said resonates well.
1Pat Myron8mo
mind elaborating? 
4wunan9mo
I'm also dealing with chronic illness and can relate to everything you listed. I've been thinking that a discord server specifically for people with chronic illness in the rationality community might be helpful to make it easier for us to share notes and help each other. There are different discord servers for various conditions unaffiliated with the rationality community, but they tend to not have great epistemic standards and generally have a different approach than what I'm looking for. Do you have any interest in a discord server?
4VipulNaik9mo
Somewhat related, though different in various ways, is this post by Bryan Caplan: https://www.econlib.org/the-cause-of-what-i-feel-is-what-i-do-how-i-eliminate-pain/
6mingyuan9mo
This all seems great honestly, I would love if there were more posts about this kind of thing. I'm especially into the rationality lessons angle (first bullet point), but the rest all seems useful too. I've seen a lot of people face this situation and have to figure it out from scratch, and I don't think much has been written about this kind of thing on LessWrong (though there is this: https://www.lesswrong.com/posts/LPbFoQ2AbNeLyzJmu/being-productive-with-chronic-health-conditions). Sure, lots has been written about it in general / not on LessWrong, but I found the vast majority of that to be extremely epistemically questionable, and/or really defeatist, like, "just accept that you will spend the remaining decades of your life entirely bed-ridden". I would say that I'd be interested in collaborating on a sequence about this, but I am already way overcommitted. But I could ask some rationalist friends who have gone through this, if you wanted collaborators.

Seems like you were right, and the Peter in question is Peter Eckersley. I just saw in this post:

The Alignment Problem is dedicated to him, after he convinced his friend Brian Christian of it.

That post did not link to a source, but I found this tweet where Brian Christian says:

His influence in both my intellectual and personal life is incalculable. I dedicated The Alignment Problem to him; I knew for many years that I would.

Did you end up running it through your internal infohazard review and if so what was the result?

You have my permission!

I see, thank you for the response!

I am curious what you think of my old comment here that I made on Anna's post (some related discussion here).

4[DEACTIVATED] Duncan Sabien10mo
According to me, that is a succinct and exactly apt summary.

For me, the thing that distinguishes exposition from teaching is that in exposition one is supposed to produce some artifact that does all the work of explaining something, whereas in teaching one is allowed to jump in and e.g. answer questions or "correct course" based on student confusion. This ability to "use a knowledgeable human" in the course of explanation makes teaching a significantly easier problem (though still a very interesting one!). It also means though that scaling teaching would require scaling the creation of knowledgeable people, which i... (read more)

1MSRayne1y
Ah, I see! My immediate instinct is to say "okay, design a narrow AI to play the role of a teacher" but 1. a narrow AI may not be able to do well with that, though maybe a fine-tuned language model could after it becomes possible to guarantee truthfulness, and 2. that's really not the point lol. There is something to be said for interactivity though. In my experience, the best explanations I've seen have been explorable explanations, like the famous one about the evolution of cooperation. Perhaps we can look into what makes those good and how to design them more effectively. Also, something like a market for explanations might be desirable. What you'd need is three kinds of actors: testers seeking people who possess a certain skill; students seeking to learn the skill; and explainers who generate explorable explanations which teach the skill. Testers reward the students who do best at the skill, and students reward the explanations which seem to improve their success with testers the most. Somehow I feel like that could be massaged into a market where the best explanations have the highest values. (Failure mode: explainers bribe testers to design tests in such a way that students who learned from their explanations do best.)

That's very exciting to me! I personally study how science worked and failed historically, and epistemic progress and vigilance in general, to make alignment go faster and better, so I'll be interested to discuss exposition as a science with you (and maybe give feedback on your follow-up posts if you want. ;) )

Cool! I just shared my draft post with you that goes into detail about the "exposition as science" strategy (ETA for everyone else: the post has now been published); if that post seems interesting to you, I'd be happy to discuss more with you (or you c... (read more)

2adamShimi1y
Thanks! I will look at the post soonish. Sorry for the delay in answering, I was in holidays this week. ^^

Doesn't do what? I understand Eliezer to be saying that he figured out AI risk via thinking things through himself (e.g., writing a story that involved outcome pumps; reflecting on orthogonality and instrumental convergence; etc.), rather than being argued into it by someone else who was worried about AI risk. If Eliezer didn't do that, there would still presumably be someone prior to him who did that, since conclusions and ideas have to enter the world somehow. So I'm not understanding what you're modeling as ridiculous.

My understanding of the history is ... (read more)

Would you say you are traumatized/did unschooling traumatize you/did attending the public high school and college traumatize you?

Do you have a sense of where your anxiety/distractability/"minor mental health problems" came from?

3iceplant1y
I honestly don't know. I'm inclined to think there's a strong genetic component, since almost all of my genetic first cousins have some level of clinical anxiety/depression/ADHD traits. Possibly the unschooling/family dynamic played a role too, but it's hard to tell.

What was the chain of events leading up to you discovering LessWrong/the rationality community?

7iceplant1y
A friend in college who was very involved in the community kept bringing up interesting ideas/events he encountered. When I was commuting a lot, I started listening to the Rationally Speaking and SSC/ACX podcasts, then started following Zvi's COVID updates and engaging with 80k hours career coaching. I still don't feel like I'm "part" of the community, and would like to be more involved!

Vipul Naik has discovered that Alfred Marshall had basically the same idea (he even used the phrase "burn the mathematics"!) way back in 1906 (!), although he only described the procedure as a way to do economics research, rather than for decision-making. I've edited the wiki page to incorporate this information.

Lately I have been daydreaming about a mathematical monastery. I don't know how coherent the idea is, and would be curious to hear feedback.

A mathematical monastery is a physical space where people gather to do a particular kind of math. The two main activities taking place in a mathematical monastery are meditative math and meditation about one's relationship to math.

  • Meditative math: I think a lot of math that people do happens in a fast-paced and unreflective way. What I mean by this is that people solve a bunch of exercises, and then move on quickly to
... (read more)
1[comment deleted]1y

What does "±8 relationships" mean? Is that a shorthand for 0±8, and if so, does that mean you're giving the range 0-8, or are you also claiming you've potentially had a negative number of relationships (and if so what does that mean)? Or does it mean "8±n relationships", for some value of n?

2niplav2y
I interpreted it the way I would normally write "~8 relationships".

I collected more links a while back at https://causeprioritization.org/Eliezer_Yudkowsky_on_the_Great_Stagnation though most of it is not on LW so can't be tagged.

Author's note: this essay was originally published pseudonymously in 2017. It's now being permanently rehosted at this link. I'll be rehosting a small number of other upper-quintile essays from that era over the coming weeks.

Have you explained anywhere what brought you back to posting regularly on LessWrong/why you are now okay with hosting these essays on LessWrong? Did the problems you see with LessWrong get fixed in the time since when you deleted your old content? (I haven't noticed any changes in the culture or moderation of LessWrong in that timefram... (read more)

7[DEACTIVATED] Duncan Sabien2y
I have not explained, no. I think this comment is entirely prosocial, and is not breaking any rules of mine.  That being said, I don't have a legible answer prepared, and my partner's about to have surgery, so I think the best way to get one out of me is to bug me through other channels in a week or something.

I recently added some spaced repetition prompts to this essay so that while you read the essay you can answer questions, and if you sign up with the Orbit service you can also get email reminders to answer the prompts over time. Here's my version with these prompts. (My version also has working footnotes.)

Thanks. I read the linked book review but the goals seem pretty different (automating teaching with the Digital Tutor vs trying to quickly distill and convey expert experience (without attempting to automate anything) with the stuff in Accelerated Expertise). My personal interest in "science of learning" stuff is to make self-study of math (and other technical subjects) more enjoyable/rewarding/efficient/effective, so the emphasis on automation was a key part of why the Digital Tutor caught my attention. I probably won't read through Accelerated Expertise, but I would be curious if anyone else finds anything interesting there.

Robert Heaton calls this (or a similar enough idea) the Made-Up-Award Principle.

Maybe this? (There are a few subthreads on that post that mention linear regression.)

I think Discord servers based around specific books are an underappreciated form of academic support/community. I have been part of such a Discord server (for Terence Tao's Analysis) for a few years now and have really enjoyed being a part of it.

Each chapter of the book gets two channels: one to discuss the reading material in that chapter, and one to discuss the exercises in that chapter. There are also channels for general discussion, introductions, and a few other things.

Such a Discord server has elements of university courses, Math Stack Exchange, Redd... (read more)

2Raemon2y
This is a pretty cool concept. 

I learned about the abundance of available resources this past spring.

I'm curious what this is referring to.

There's apparently a lot of funding looking for useful ways to reduce AI X-risk right now.
