All Posts

Sorted by Magic (New & Upvoted)

Sunday, August 18th 2019

Shortform [Beta]
39Buck12h I think that an extremely effective way to get a better feel for a new subject is to pay an online tutor to answer your questions about it for an hour. It turns out that there are a bunch of grad students on Wyzant who mostly work tutoring high school math or whatever, but who are very happy to spend an hour answering your weird questions.

For example, a few weeks ago I had a session with a first-year Harvard synthetic biology PhD. Before the session, I set a ten-minute timer and wrote down things that I currently didn't get about biology. (This is an exercise worth doing even if you're not going to have a tutor, IMO.) We spent the time talking about some mix of the questions I'd prepared, various tangents that came up during those explanations, and his sense of the field overall. I came away with a whole bunch of my minor misconceptions fixed, a few pointers to topics I wanted to learn more about, and a way better sense of what the field feels like and what the important problems and recent developments are.

There are a few reasons that having a paid tutor is a way better way of learning about a field than trying to meet people who happen to be in that field. I really like that I'm paying them, so I can aggressively direct the conversation to wherever my curiosity is, whether it's about their work or some minor point or whatever. I don't need to worry about them getting bored with me, so I can just keep asking questions until I get something.

Conversational moves I particularly like:

* "I'm going to try to give the thirty second explanation of how gene expression is controlled in animals; you should tell me the most important things I'm wrong about."
* "Why don't people talk about X?"
* "What should I read to learn more about X, based on what you know about me from this conversation?"

All of the above are way faster with a live human than with the internet. I think that doing this for an hour or two weekly will make me substantially more knowledgeable.
21Swimmer9634h Fiction writing ramble, 1 of ??: I've been trying to introspect lately on my fiction writing process. My brain is opaque about what it's doing, which I guess makes sense – I've been telling stories since literally before I knew how to write, and I've never had any formal instruction in it. Current question: why is it so intensely a superstimulus for me? Writing fiction is pretty much the only thing I will eagerly and endorsedly do for 16 hours straight. My best and most enjoyable writing periods feel very much like "chasing the shiny" – what "the shiny" is, is hard to describe, but it feels more "substantial" than other second-to-second dopamine-hit-seeking behaviours like chasing links on Wikipedia or TV Tropes. Somewhere between pushing towards an exercise high, and drinking water when I'm thirsty, whereas Internet dopamine-seeking is more like eating sugar. Thinking about a story I'm writing, and especially talking meta about it with friends, is also extremely rewarding, to the point that sometimes I'll find myself off in daydreams or having a chat conversation instead of, you know, actually writing. Reaching the shiny thing on the page is effortful, much of the time – the feeling is often that it's "far away", and I'm sort of hill-climbing towards it, but often spotting smaller shiny things along the way. The big shiny thing can be a particularly clever or cool plot event that I'm setting the groundwork for, a felt sense theme-shaped-thing I'm groping towards, a lesson I want to convey, or the resolution of foreshadowing I've been setting up for dozens of chapters. (Or, let's be honest, my biggest self-indulgence in writing: gratuitous medical emergencies). The opportunities snatched up along the way can be an amusing interaction between characters, a bit of neat worldbuilding my brain generated on the spot, a snippet of dialogue that feels deliciously in-character, or just a sentence or word use that feels poetic and satisfying. 
(Things I'm not chasing t
12Evan Rysdam12h I'm told that there was a period of history where only the priests were literate and therefore only they could read the Bible. Or maybe it was written in Latin and only they knew how to read it, or something. Anyway, as a result, they were free to interpret it any way they liked, and they used that power to control the masses. Goodness me, it's a good thing we Have Science Now and can use it to free ourselves from the overbearing grip of Religion! Oh, totally unrelatedly, the average modern person is scientifically illiterate and absorbs their knowledge of what is "scientific" through a handful of big news sources and through cultural osmosis. Hmm. [https://i.imgur.com/yw4hlST.png] Moral: Be wary of packages labeled "science" and be especially wary of social pressure to believe implausible-sounding claims just because they're "scientific". There are many ways for that beautiful name to get glued onto random memes.
9Benito19h I've finally moved into a period of my life where I can set guardrails around my slack without sacrificing the things I care about most. I currently am pushing it to the limit, doing work during work hours, and not doing work outside work hours. I'm eating very regularly, 9am, 2pm, 7pm. I'm going to sleep around 9-10, and getting up early. I have time to pick up my hobby of classical music. At the same time, I'm also restricting the ability of my phone to steal my attention. All social media is blocked except for 2 hours on Saturday, which is going quite well. I've found Tristan Harris's advice [https://humanetech.com/resources/take-control/] immensely useful - my phone is increasingly not something that I give all of my free attention to, but instead something I give deliberate attention and then stop using. Tasks, not scrolling. Now I have weekends and mornings though, and I'm not sure what to do with myself. I am looking to get excited about something, instead of sitting, passively listening to a comedy podcast while playing a game on my phone. But I realise I don't have easy alternative options - Netflix is really accessible. I suppose one of the things that a Sabbath is supposed to be is an alarm, showing that something is up, and at the minute I've not got enough things I want to do for leisure that don't also feel a bit like work. So I'm making lists of things I might like (cooking, reading, improv, etc) and I'll try those.
5mr-hire4h I've had a draft sitting in my posts section for months about shallow, deep, and transfer learning. I just made a Twitter thread that gets at the basics, and figured I'd post it here to gauge interest in a longer post with examples.

Love Kindle, love Evernote. But never highlight good ideas. That's level one reading. Instead, use written notes and link important ideas to previous concepts you know.

Level 1: What's important? What does this mean?
Level 2: How does this compare/contrast to previous concepts or experiences? Do I believe this?
Level 3: How is this a metaphor for seemingly unrelated concepts? How can this frame my thinking?

4 questions to get to level 2:

* How is this similar to other things I know?
* How is this different from other things I know?
* What previous experiences can I relate this to?
* In what circumstances would I use this knowledge? How would I use it?

3 questions to get to level 3:

* How does it feel to view the world through this lens?
* How does this explain everything?
* What is this a metaphor for?

Saturday, August 17th 2019

Shortform [Beta]
20Raemon2d I've posted this on Facebook a couple times but seems perhaps worth mentioning once on LW: A couple weeks ago I registered the domain LessLong.com and redirected it to LessWrong.com/shortform. :P
12Benito2d I block all the big social networks from my phone and laptop, except for 2 hours on Saturday, and I noticed that when I check Facebook on Saturday, the notifications are always boring and not something I care about. Then I scroll through the newsfeed for a bit and it quickly becomes all boring too. And I was surprised. Could it be that, all the hype and narrative aside, I actually just wasn’t interested in what was happening on Facebook? That I could remove it from my life and just not really be missing anything? On my walk home from work today I realised that this wasn’t the case. Facebook has interesting posts I want to follow, but they’re not in my notifications. They’re sparsely distributed in my newsfeed, such that they appear a few times per week, randomly. I can get a lot of value from Facebook, but not by checking once per week - only by checking it all the time. That’s how the game is played. Anyway, I am not trading all of my attention away for such small amounts of value. So it remains blocked.
12Davis_Kingsley2d One concept people talk about in game design is "pendulum swing", where something that is too powerful or too weak is overcorrected in balance patches and becomes the opposite -- something too powerful becomes too weak, while something too weak becomes too powerful. A similar concept can be present in other sectors as well -- often, noticing one problem can lead to an overcorrection that brings you the opposite problem. For instance, an early stage organization might notice that they aren't systematic enough in their processes, overcorrect, and become too rigid and doctrinaire. (Duncan Sabien uses this concept of pendulum swing a lot, and while I was aware of it prior to his use he's done a lot to bring it to attention as a relevant rationality concept.)
10Jacobian19h There's been a lot of noise lately about affirmative consent, a standard of consent which requires explicit verbal confirmation for every escalation of romantic or sexual interaction. It has been adopted as a standard by many college campuses [https://endrapeoncampus.org/yes-means-yes], and efforts have been made to turn it into actual law [https://www.city-journal.org/affirmative-consent]. Most of the discussion has centered on the use of affirmative consent as a legal standard, and as such it is quite terrible: unfair, unjust, and impossible to interpret in a consistent way that stops bad behavior without criminalizing normal conduct. But what I haven't seen mentioned is that adopting affirmative consent as a loosely enforced social norm is really good for nerds. If you're not great at reading body language and subtle signs, the expectation that you'll ask for verbal consent makes flirting much easier. You're no longer an awkward geek, you're hip to the times! I've personally erred on the side of asking explicitly in the past, and I think it has worked out great for me. Most women were happy to give consent when asked, the momentary awkwardness of asking quickly forgotten. A few said "no", in which case it's a good thing I asked! And I doubt that even a single one was poised so evenly on the fence that asking for verbal consent turned her off me. What do y'all think? And is this actually making life better or worse for women who date nerds?
9Raemon1d I think there's a preformal / formal / post-formal thing going on with Double Crux. My impression is the CFAR folk who created the doublecrux framework see it less as a formal process you should stick to, and more as a general set of guiding principles. The formal process is mostly there to keep you oriented in the right direction. But I see people (sometimes me) trying to use it as a rough set of guiding principles, and then easily slipping back into all the usual failure modes of not understanding each other, or not really taking seriously the possibility that they might be the wrong one. Right now in some contexts I've come across as a bit anal about sticking to "Formal Doublecrux rules". Model share. Check for cruxes. Recurse until you find common cruxes. Look for experiments you can actually run to gain new evidence. Aim to converge on truth. And it does clearly seem that these steps aren't always the best approach for a given conversation. But I often perceive what feel like basic errors, which would have been caught if you were following the formal rules. So I'm currently, like, on a crusade to make sure the people around me that I end up in the most disagreements with are able to nail the Formal Doublecrux Framework, and once we're all roughly on that page I'll trust us to do a post-formal version of it where we trust each other to get the basics right, so we can relax about the rules.

Friday, August 16th 2019

No posts for August 16th 2019
Shortform [Beta]
14Matthew Barnett2d I get the feeling that for AI safety, some people believe that it's crucially important to be an expert in a whole bunch of fields of math in order to make any progress. In the past I took this advice and tried to deeply study computability theory, set theory, type theory -- with the hopes of it someday giving me greater insight into AI safety. Now, I think I was taking the wrong approach.

To be fair, I still think being an expert in a whole bunch of fields of math is probably useful, especially if you want very strong abilities to reason about complicated systems. But my model for the way I frame my learning is quite different now. The main model describing my current perspective is that employing a lazy style of learning is superior for AI safety work. Lazy [https://en.wikipedia.org/wiki/Lazy_evaluation] is meant in the computer science sense of only learning something when it seems like you need to know it in order to understand something important. I will contrast this with the model that one should learn a set of solid foundations first before going any further.

Obviously neither model can be correct in an absolute, extreme sense. I don't, as a silly example, think that people who can't do basic arithmetic should go into AI safety before building a foundation in math. And on the other side of the spectrum, I think it would be absurd to think that one should become a world-renowned mathematician before reading their first AI safety paper. That said, even though both models are wrong, my current preference is for the lazy model rather than the foundations model. Here are some points in favor of both, informed by my first-person experience.

Points in favor of the foundations model:

* If you don't have solid foundations in mathematics, you may not even be aware of things that you are missing.
* Having solid foundations in mathematics will help you to think rigorously about things rather than having a vague, non-reductionist understanding.
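For readers unfamiliar with the borrowed term: a minimal sketch of what "lazy" means in the computer science sense the post is analogizing to. The function names here are illustrative, not from the post.

```python
# Eager vs. lazy evaluation: the eager version computes everything up
# front (learn all foundations first); the lazy version computes a
# value only when it is actually requested (learn on demand).

def eager_squares(n):
    # Materializes the entire list immediately.
    return [i * i for i in range(n)]

def lazy_squares(n):
    # A generator: each value is produced only when asked for.
    for i in range(n):
        yield i * i

gen = lazy_squares(10**9)                     # returns instantly; no work done yet
first_three = [next(gen) for _ in range(3)]   # computes only the 3 values requested
```

The analogy: `eager_squares(10**9)` would exhaust memory before you used any of it, while the lazy version pays only for what you actually consume.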
4Matthew Barnett2d Sometimes people will propose ideas, and those ideas are met immediately afterward with harsh criticism. A very common tendency for humans is to defend our ideas and work against these criticisms, which often gets us into a state that people refer to as "defensive."

According to common wisdom, being in a defensive state is a bad thing. The rationale here is that we shouldn't get too attached to our own ideas. If we do get attached, we become liable to become crackpots [https://www.lesswrong.com/posts/qRWfvgJG75ESLRNu9/the-crackpot-offer] who can't give an idea up because it would make them look bad if they did. Therefore, the common wisdom advocates treating ideas as being handed to us by a tablet from the clouds rather than as products of our brain's thinking habits. Taking this advice allows us to detach ourselves from our ideas so that we don't confuse criticism with insults.

However, I think the exact opposite failure mode is not often enough pointed out and guarded against. Specifically, the failure mode is being too willing to abandon beliefs based on surface-level counterarguments. To alleviate this, I suggest we shouldn't be so ready to give up our ideas in the face of criticism. This might sound irrational -- why should we get attached to our beliefs? I'm certainly not advocating that we should actually associate criticism with insults to our character or intelligence. Instead, my argument is that the process of defending against criticism generates a productive adversarial structure.

Consider two people. Person A desperately wants to believe proposition X, and person B desperately wants to believe not-X. If B comes up to A and says, "Your belief in X is unfounded. Here are the reasons...", person A can either admit defeat or fall into defensive mode. If A admits defeat, they might indeed get closer to the truth.
On the other hand, if A gets into defensive mode, they might also get closer to the truth in the process of desperately searching for evidence

Thursday, August 15th 2019

Shortform [Beta]
21DonyChristie4d I'm really noticing how the best life improvements come from purchasing or building better infrastructure, rather than trying permutations of the same set of things and expecting different results. (Much of this results from having more money, granting an expanded sense of possibility to buying useful things.) The guiding question is, "What upgrades would make my life easier?" In contrast with the question that is more typically asked: "How do I achieve this hard thing?" It seems like part of what makes this not just immediately obvious is that I feel a sense of resistance (that I don't really identify with). Part of that is a sense of... naughtiness? Like we're supposed to signal how hardworking we are. For me this relates to this fear I have that if I get too powerful, I will break away from others (e.g. skipping restaurants for a Soylent Guzzler Helmet, metaphorically) as I re-engineer my life and thereby invite conflict. There's something like a fear that buying or engaging in nicer things would be an affront to my internalized model of my parents? The infrastructure guideline relates closely to the observation that to a first approximation we are stimulus-response machines reacting to our environment, and that the best way to improve is to actually change your environment, rather than continuing to throw resources past the point of diminishing marginal returns in adaptation to the current environment. And for the same reasons, the implications can scare me, for it may imply leaving the old environment behind, and it may even imply that the larger the environmental change you make, the more variance you have for a good or bad update to your life. That would mean we should strive for large positive environmental shifts, while minimizing the risk of bad ones. 
(This also gives me a small update towards going to Mars being more useful for x-risk, although I may still need to propagate a larger update in the other direction, away from space marketing.) Of course
13Matthew Barnett3d I think there are some serious low-hanging fruits for making people productive that I haven't seen anyone write about (not that I've looked very hard). Let me just introduce a proof of concept: final exams in university are typically about 3 hours long, and many people are able to do multiple finals in a single day, performing well on all of them.

During a final exam, I notice that I am substantially more productive than usual. I make sure that every minute counts: I double check everything and think deeply about each problem, making sure not to cut corners unless absolutely required because of time constraints. Also, if I start daydreaming, then I am able to immediately notice that I'm doing so and cut it out. I also believe that this is the experience of most other students in university who care even a little bit about their grade. Therefore, it seems like we have an example of an activity that can just automatically produce deep work.

I can think of a few reasons why final exams would bring out the best of our productivity:

1. We care about our grade in the course, and the few hours in that room are the most impactful to our grade.
2. We are in an environment where distractions are explicitly prohibited, so we can't make excuses to ourselves about why we need to check Facebook or whatever.
3. There is a clock at the front of the room which makes us feel like time is limited. We can't just sit there doing nothing because then time will just slip away.
4. Every problem you do well on benefits you by a little bit, meaning that there's a gradient of success rather than a binary pass or fail (though sometimes it's binary). This means that we care a lot about optimizing every second because we can always do slightly better.

If we wanted to do deep work for some other desired task, all four of these reasons seem like they could be replicable.
Here is one idea (related to my own studying), although I'm sure I could come up with a better one if I thought about it more deeply
8Raemon4d Draft/WIP: The Working Memory Hypothesis re: Intellectual Progress. Strong claim, medium felt.

So I'm working with the hypothesis that working memory (or something related) is a major bottleneck on progress within a given field. This has implications for what sort of things fields need.

The basic idea is that you generally need to create new concepts out of existing sub-concepts. You can only create a concept if you can hold the requisite sub-concepts in your head at once. The default working memory limit is 4-7 chunks. You can expand that somewhat by writing things down on whiteboards. I'm not sure exactly what the practical limits are, but there is some upper limit on how complex a concept people can work with, even the smartest people in the field. So there are some ideas you just can't invent, if they require 30 moving parts to consider at once. If an idea has 30 moving parts, it's necessary to find some way to conceptualize it as having fewer parts.

Fortunately, this is possible. When I first learn a new concept, it's unfamiliar, and I have to track all of its individual pieces separately – if it has 5 sub-concepts it takes 5 chunks of working memory. But eventually I learn to think of it as a single concept, and then it only takes 1. So an important aspect of a field is distillation – improving explanations to help people think of clusters-of-ideas as a single concept they can reason about as if it took up only one chunk.

Distillation is a sort of tedious process, though. So part of why Bell Labs is a great idea is that you can have all these specialists of different fields working in the same building, and whenever one of them wants to learn something they can just walk down the hall and ask the guy who invented it "hey, how does this work?", and they get an explanation. And if they don't understand it fully, they can immediately, frictionlessly, ask followup questions. This doesn't just help them understand new ideas – it reduces those ideas' conceptual
4FactorialCode3d I notice that there's a fair bit of "thread necromancy" on LessWrong. I don't think it's a bad thing, but I think it would be cool to have an option to filter comments based on the time gap between when the post was made and when the comment was made. That way it's easier to see what the discussion was like around the time when the post was made. On a related note, does LessWrong record when upvotes are made? It would also be cool to have a "time-machine" to see how up-votes and down-votes in a thread evolve over time. Could be good for analysing the behaviour of threads in the short term, and a way to see how community norms change in the long term.
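The proposed filter is simple enough to sketch in a few lines. The field names (`posted_at`, `created_at`) and the data shape are illustrative assumptions, not LessWrong's actual API:

```python
from datetime import datetime, timedelta

def early_comments(post, comments, max_gap=timedelta(days=7)):
    # Keep only comments made within `max_gap` of the post's creation,
    # hiding later "necro" comments from the view.
    return [c for c in comments
            if c["created_at"] - post["posted_at"] <= max_gap]

post = {"posted_at": datetime(2019, 8, 1)}
comments = [
    {"id": 1, "created_at": datetime(2019, 8, 2)},   # one day later
    {"id": 2, "created_at": datetime(2019, 9, 15)},  # thread necromancy
]
recent = early_comments(post, comments)  # keeps only comment 1
```

The "time-machine" idea would need vote timestamps as well, which is exactly the open question the post raises.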

Wednesday, August 14th 2019

Shortform [Beta]
13jacobjacob5d What important book that needs fact-checking is nobody fact-checking?
8Hazard4d Noticing an internal dynamic. As a kid I liked to build stuff (little catapults, modified nerf guns, slingshots, etc). I entered a lot of those projects with the mindset of "I'll make this toy and then I can play with it forever and never be bored again!" When I would make the thing and get bored with it, I would be surprised and mildly upset, then forget about it and move on to another thing. Now I think that when I was imagining the glorious cool toy future, I was actually imagining having a bunch of friends to play with (I didn't live around many other kids).

When I got to middle school and high school and spent more time around other kids, I encountered the idea of "That person talks like they're cool, but they aren't." When I got into sub-cultures centering around a skill or activity (magic), I experienced the more concentrated form: "That person acts like they're good at magic, but couldn't do a show to save their life." I got the message, "To fit in, you have to really be about the thing. No half-assing it. No posing."

Why, historically, have I gotten so worried when my interests shift? I'm not yet at a point in my life where there are that many logistical constraints (I've switched majors three times in three years without a hitch). I think it's because in the back of my head I expect every possible group or social scene to say, "We only want you if you're all about doing XYZ all the time." And when I'm super excited about XYZ, it's fine. But when I feel like "Yeah, I need a break," I get nervous. Yeah, there is a hard underlying problem of "How to not let your culture become meaningless", but I think my extra problem is that I gravitated towards the groups that defined themselves by "We put in lots of time mastering this specific hard skill and applying it."
Though I expect it to be the case that for the rest of my life I want to have thoughtful, engaging discussion with intellectually honest people (a piece of what I want from LessWrong), I feel less reason to be sure that
5Slider4d Magic colors and errors. Reading a writers' guild policy doc, there was a principle of "the vase is already broken". The whole document is very much about how you make a red organization, and most of the principles are anti-white. The principle makes sense, but I found it foreign to my culture. Things are made to be replaced, and if something is done wrong today, we will try to do right the next day.

In contrast, the blue way is much more familiar to me: accept only true things, set things up for perpetuity. In the contrast I noticed that the blue way is focused on avoiding making mistakes. You slow down enough that everything can be verified. In the red way, you speed up so things can be corrected. In blue, if you make a mistake you will build a machine that will do damage for the foreseeable future. In red, if you feel the wrong way today, you will not be inspired to feel so tomorrow, and the energy will peter out. "Energy denial" seemed like a new perspective that I hadn't considered.

Other colors probably also have ways to treat errors. Green probably adapts: there are no "mistakes", only "happy little accidents". The oxygen catastrophe is not armageddon but new conditions to build upon. Never say "no" but always "yes, and", and just keep the harmony in front of your mind, keep the scene going. Black is probably ready to backstab their friends to death should they turn on them. It's not about what damage the mistake does, but whether it proves your character, whether you are safe to keep around. On the flip side, having good enough goals means arbitrarily bad means can be swallowed. White probably does courts and maybe a balance of rights: even if we set a monumentally bad law, it will need to resolve itself against other core principles. A bad nomination is limited by the powers of the position being limited.

Tuesday, August 13th 2019

Shortform [Beta]
31elityre5d New post: What is mental energy? [https://wordpress.com/post/musingsandroughdrafts.wordpress.com/398]

[Note: I’ve started a research side project on this question, and it is already obvious to me that this ontology is importantly wrong.]

There’s a common phenomenology of “mental energy”. For instance, if I spend a couple of hours thinking hard (maybe doing math), I find it harder to do more mental work afterwards. My thinking may be slower and less productive, and I feel tired, or drained (mentally, instead of physically).

Mental energy is one of the primary resources that one has to allocate in doing productive work. In almost all cases, humans have less mental energy than they have time, and therefore effective productivity is a matter of energy management more than time management. If we want to maximize personal effectiveness, mental energy seems like an extremely important domain to understand. So what is it?

The naive story is that mental energy is an actual energy resource that one expends and then needs to recoup. That is, when one is doing cognitive work, they are burning calories, depleting their body’s energy stores; as they use energy, they have less fuel to burn. My current understanding is that this story is not physiologically realistic. Thinking hard does consume more of the body’s energy than baseline, but not that much more, and we experience mental fatigue long before we even get close to depleting our calorie stores. It isn’t literal energy that is being consumed. [The Psychology of Fatigue, pg. 27]

So if not that, what is going on here? A few hypotheses (the first few are all of a cluster, so I labeled them 1a, 1b, 1c, etc.):

Hypothesis 1a: Mental fatigue is a natural control system that redirects our attention to our other goals. The explanation that I’ve heard most frequently in recent years (since it became obvious that much of the literature on ego depletion was off the mark) is the following: a human mind is composed of a bunch
18Matthew Barnett5d Occasionally, I will ask someone who is very skilled in a certain subject how they became skilled in that subject so that I can copy their expertise. A common response is that I should read a textbook in the subject. Eight years ago, Luke Muehlhauser wrote [https://www.lesswrong.com/posts/xg3hXCYQPJkwHyik2/the-best-textbooks-on-every-subject]:

For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures [http://academicearth.org/], peer-reviewed papers, Teaching Company [http://www.teach12.com/] courses, and Cliff's Notes. How inefficient! I've since discovered that textbooks are usually the quickest and best way to learn new material.

However, I have repeatedly found that this is not good advice for me [https://www.lesswrong.com/posts/6NvbSwuSAooQxxf7f/beware-of-other-optimizing]. I want to briefly list the reasons why I don't find sitting down and reading a textbook that helpful for learning. Perhaps, in doing so, someone else might appear and say, "I agree completely. I feel exactly the same way," or someone might appear to say, "I used to feel that way, but then I tried this..." This is what I have discovered:

* When I sit down to read a long textbook, I find myself subconsciously, constantly checking how many pages I have read. For instance, if I have been sitting down for over an hour and I find that I have barely made a dent in the first chapter, much less the book, I have a feeling of hopelessness that I'll ever be able to "make it through" the whole thing.
* When I try to read a textbook cover to cover, I find myself much more concerned with finishing rather than understanding. I want the satisfaction of being able to say I read the whole thing, every page. This means that I will sometimes cut corners in my understanding just to make it through a difficult part. This ends in disaster once the next chapter requires a solid
14Hazard5d Lol, one reason it's hard to talk to people about something I'm working through when there's a large inferential gap, is that when they misunderstand me and tell me what I think I sometimes believe them.
6avturchin5d Kardashev – the creator of the Kardashev's scale of civilizations – has died at 87. Here [https://www.youtube.com/watch?v=Fj0QLW026yA&t=12s] is his last video, which I recorded in May 2019. He spoke about the possibility of SETI via wormholes.
6Hazard5d [Everything is "free" and we inundate you in advertisements] feels bad. The first alternative that comes to mind is something like paid subscriptions, or micropayments per thing consumed. But that raises the question: how does anyone find out about the sites they want to subscribe to? If only there was some website aggregator that was free for me to use so that I could browse different possible subscriptions... Oh no. Or if not oh no, it seems like the selling-eyeballs model won't go away just because alternatives exist, if only from the "people need to somehow find out about the thing they are paying for" side. I could probably do with getting a stronger sense of why selling eyeballs feels bad. I'm also probably thinking about this too abstractly and could do with getting more concrete.

Monday, August 12th 2019

Shortform [Beta]
19adam_scholl7d I was surprised to find a literature review [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5056568/] about probiotics which suggested they may have significant CNS effects. The tl;dr of the review seems to be: 1) You want doses of at least 10^9 or 10^10 CFU, and 2) You want, in particular, the strains B. longum, B. breve, B. infantis, L. helveticus, L. rhamnosus, L. plantarum, and L. casei. I then sorted [https://docs.google.com/spreadsheets/d/1TVlA8WpO2htb9hhMRZNKHcaZR4C6v-1dzKGAdgsPdxE/edit?usp=sharing] the top 15 results on Amazon for "probiotic" by these desiderata, and found that this one [https://smile.amazon.com/gp/product/B07KLVKQLL/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1] seems to be best. Some points of uncertainty:

* Probiotic manufacturers generally don't disclose the strain proportions of their products, so there's some chance they mostly include e.g. whatever's cheapest, plus a smattering of other stuff.
* One of the reviewed studies [https://sci-hub.tw/https://www.ncbi.nlm.nih.gov/pubmed/17151594] suggests L. casei may impair memory. I couldn't find a product that didn't have L. casei but did have at least 10^9 CFU of each other recommended strain, so if you take the L. casei/memory concern seriously your best option [https://docs.google.com/spreadsheets/d/1TVlA8WpO2htb9hhMRZNKHcaZR4C6v-1dzKGAdgsPdxE/edit#gid=1471634937&range=A1] might be combining this [https://smile.amazon.com/Probiotic-Prebiotic-Probiotics-Digestive-Prebiotics/dp/B071L8D4DQ/ref=sr_1_3?keywords=longum&qid=1565569519&s=gateway&sr=8-3] and this [https://smile.amazon.com/Pure-Encapsulations-PureProbiotic-Allergen-Free-Gastrointestinal/dp/B00H578NZ0/ref=sr_1_4?keywords=Bifidobacterium+breve&qid=1565570107&s=gateway&sr=8-4].
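The sorting procedure described above (keep products that hit the dose threshold for each recommended strain, then rank by how many recommended strains they cover) could be sketched roughly like this. The product names and strain data below are invented for illustration only; they are not taken from the linked spreadsheet:

```python
# Minimal sketch of ranking probiotics by the review's desiderata.
# RECOMMENDED and MIN_CFU follow the review's tl;dr; product data is made up.
RECOMMENDED = {"B. longum", "B. breve", "B. infantis",
               "L. helveticus", "L. rhamnosus", "L. plantarum", "L. casei"}
MIN_CFU = 1e9  # at least 10^9 CFU per strain

products = [
    {"name": "Product A", "strains": {"B. longum": 4e9, "L. casei": 2e9}},
    {"name": "Product B", "strains": {"B. breve": 5e8}},  # under-dosed
    {"name": "Product C", "strains": {s: 1e10 for s in RECOMMENDED}},
]

def score(product):
    # Count recommended strains present at or above the dose threshold.
    return sum(1 for strain, cfu in product["strains"].items()
               if strain in RECOMMENDED and cfu >= MIN_CFU)

ranked = sorted(products, key=score, reverse=True)
print([p["name"] for p in ranked])  # best coverage first
```

This is just the mechanical part; the uncertainty Adam notes (undisclosed strain proportions) means the input numbers themselves may not be trustworthy.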
9William_Darwin6d Looking for HARSH criticism on some musings about the application of leverage points from systems analysis to the idea generation process:

## Leverage Points

Systems analysis is defined by the Merriam-Webster dictionary as "the process of studying a procedure or business in order to identify its goals and purposes and create systems and procedures that will achieve them in an efficient way". The concept of 'leverage points' is synonymous with the idea of single points of power - whether that be a single hero in an action movie or a miracle cure for disease. Change these points and you change the system in an enormous way. This type of systems thinking provides a highly applicable toolkit which allows us to view and disassemble the idea generation process, identifying which areas within this 'system' can be shifted in small ways to produce large results. By mapping this type of conceptual framework to the creative process, we can systematically break down where opportunities lie in creating more ideas of our own. This is about recognising what it is that Type B thinkers do more successfully than those classified as Type A [explained in a redacted introduction for brevity; Type A = people who are primarily idea consumers, Type B = people who begin to 'produce' ideas]. The leverage points below are listed in order from least effective to most effective. Note that the concept of these points is not a concrete law; this is only a mental model which allows us to map reality as accurately as possible to a system of cognitive behaviour. Note: This list of leverage points is not exhaustive, as some points are inapplicable to this particular post; all references have been noted at the bottom of this page.

### 10. Constants, Parameters and Numbers

a. Number of unique situations

To create more ideas, you must first be exposed to more situations which will provide the basis for creation.
Increase the number of books, articles, movies, music, people and experiences you interact with
7Raemon7d I've lately been talking a lot [https://www.lesswrong.com/posts/BmyoYkr7u2oas4ukn/keeping-beliefs-cruxy] about doublecrux [https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-resolving-disagreement]. It seemed good to note some updates I'd also made over the past few months about debate. For the past few years I've been sort of annoyed at debate because it seems like it doesn't lead people to change their opinions – instead, the entire debate framework seems more likely to prompt people to try to win, meanwhile treating arguments as soldiers and digging in their heels. I felt some frustration at the Hanson/Yudkowsky Foom Debate [https://intelligence.org/files/AIFoomDebate.pdf] because huge amounts of digital ink were spilled, and neither party changed their mind much. The counterpoint that's been pointed out to me lately is: while debate may have that effect, it also produces lots of positive externalities. The process of Hanson and Yudkowsky spelling out their intuitions and arguments and preferred debate frameworks led to a lot of interesting facts and frameworks to chew on.
6Matthew Barnett6d Forgive me for cliche scientism, but I recently realized that I can't think of any major philosophical developments in the last two centuries that occurred within academic philosophy. If I were to try to list major philosophical achievements since 1819, these would likely appear on my list, but none of them were from those trained in philosophy:

* A convincing, simple explanation for the apparent design we find in the living world (Darwin and Wallace).
* The unification of time and space into one fabric (Einstein).
* A solid foundation for axiomatic mathematics (Zermelo and Fraenkel).
* A model of computation, and a plausible framework for explaining mental activity (Turing and Church).

By contrast, if we go back to previous centuries, I don't have much of an issue citing philosophical achievements from philosophers:

* The identification of the pain-pleasure axis as the primary source of value (Bentham).
* Advanced notions of causality, reductionism, and scientific skepticism (Hume).
* Extension of moral sympathies to those in the animal kingdom (too many philosophers to name).
* An emphasis on the value of wisdom and learned debate (Socrates, and others).

Of course, this is probably caused by my bias towards Lesswrong-adjacent philosophy. If I had to pick philosophers who have made major contributions, these people would be on my shortlist: John Stuart Mill, Karl Marx, Thomas Nagel, Derek Parfit, Bertrand Russell, Arthur Schopenhauer.

Sunday, August 11th 2019

Shortform [Beta]
3mr-hire7d I had one of my pilot students for the akrasia course I'm working on point out today that something I don't cover in my course is indecision. I used to have a bit of a problem with that, but not enough to have sunk a lot of time into determining the qualia and mental moves related to defeating it. Has anyone reading this gone from being really indecisive (and procrastinating because of it) to much more decisive? Or are you currently working on making the switch? I'd love to talk to you/model you. As a bonus thank you, you'll of course get a free version of the course (along with all the guided meditations and audios) when it's complete.

Saturday, August 10th 2019

Shortform [Beta]
3DonyChristie9d Do humans actually need breaks from working, physiologically? How much of this is a cultural construct? And if it is, can those assumptions be changed? Could a person be trained to enjoyably have 100-hour workweeks? (Assume, if the book Deep Work is correct that you get at most 4 hours of highly productive work in a domain, that my putative powerhuman is working on 2-4 different skill domains that synergize.)

Friday, August 9th 2019

Shortform [Beta]
22Unreal9d When I talk about "model-building skill" I think I mean three separate skills: Skill A) Model-building skill proper Skill B) Introspection skill Skill C) Communication skill There are probably a lot of people who are decent at model-building proper. (Situation #1) I'm imagining genius-level programmers. But then when you try to get some insight into what their models actually are or why they do things one way vs another, it's opaque. They don't know how to explicate any of it. They get annoyed by having to try to verbalize their models or even know what they are—they'd rather get back to coding and are frustrated having to "waste time" convincing other people. Then there are other people who might be decent at model-building and introspecting on their models, but when they try to communicate their models to you, it comes out as gibberish (at least to your ears). And asking them questions doesn't seem to go anywhere. (Situation #2) Then there's the situation where people are really articulate and able to communicate very clear, explicit, verbal models—but when it comes to implementing those models on a somatic-emotional level, they run into trouble. But it sounds like they have the model-building skill because they can talk about their models and share genuine insights about them. (Situation #3) An example is having a bunch of insightful theories about social dynamics, but when actually in a situation where they could put those theories into practice, there is some kind of block. The models are not acting like felt models. ... I've been in Situation #3 and Situation #1 before. Overcoming Situation #3 is a scary thing. Being able to see, make sense of, and articulate models (from afar) was a way of distancing myself from reality. It was a preemptive defense mechanism. It helped me feel superior / knowledgable / satisfied. And then I continued to sit and watch rather than participate, engage, run experiments, etc. Or I'd play with "toy models" like ga
15Matthew Barnett9d Related to: The Lottery of Fascinations [https://slatestarcodex.com/2013/06/30/the-lottery-of-fascinations/], other posts probably When you are older, you will learn that the first and foremost thing which any ordinary person does is nothing. * Professor Quirrell in HPMOR Ch. 73 [http://www.hpmor.com/chapter/73] I will occasionally come across someone who I consider to be extraordinarily productive, and yet when I ask what they did on a particular day they will respond, "Oh I basically did nothing." This is particularly frustrating. If they did nothing, then what was all that work that I saw! I think this comes down to what we mean by doing nothing. There's a literal meaning to doing nothing. It could mean sitting in a chair, staring blankly at a wall, without moving a muscle. More practically, what people mean by doing nothing is that they are doing something unrelated to their stated task, such as checking Facebook, chatting with friends, browsing Reddit etc. When productive people say that they are "doing nothing" it could just be that they are modest, and don't want to signal how productive they really are. On the other hand, I think that there is a real sense in which these productive people truly believe that they are doing nothing. Even if their "doing nothing" was your "doing work", to them it's still a "doing nothing" because they weren't doing the thing they explicitly set out to do. I think, therefore, there is something of a "do nothing" differential, which helps explain why some people are more productive than others. For some people who are less productive than me, their "doing nothing" might just be playing video games. For me, my "doing nothing" is watching people debate the headline of a Reddit news article (and I'm not proud of this). For those more productive than me, perhaps their "doing nothing" is reading blog posts that are tangentially related to what they are working on. 
For people more productive still, it might be obsessively re-re
13mr-hire9d There's a pattern I've noticed in myself that's quite self-destructive. It goes something like this:

* Meet new people that I like, try to hide all my flaws and be really impressive, so they'll love me and accept me.
* After getting comfortable with them, noticing that they don't really love me if they don't love the flaws that I haven't been showing them.
* Stop taking care of myself, downward spiral, so that I can see they'll take care of me at my worst and I know they REALLY love me.
* People justifiably get fed up with me not taking care of myself, and reject me. This triggers the thought that I'm unlovable.
* Because I'm not lovable, when I meet new people, I have to hide my flaws in order for them to love me.

This pattern is destructive, and has been one of the main things holding me back from becoming as self-sufficient as I'd like. I NEED to be dependent on others to prove they love me. What's interesting about this pattern is how self-defeating it is. Does people's not wanting to support me mean that they don't love me? No, it just means that they don't want to support another adult. Does hiding all my flaws help people accept me? No, it just sets me up for a crash later. Does constantly crashing from successful ventures help any of this? No, it makes it harder to seem successful, AND harder to be able to show my flaws without having people run away.
8Matthew Barnett10d I've often wished that conversation norms shifted towards making things more consensual. The problem is that when two people are talking, it's often the case that one party brings up a new topic without realizing that the other party didn't want to talk about that, or doesn't want to hear it. Let me provide an example: Person A and person B are having a conversation about the exam that they just took. Person A bombed the exam, so they are pretty bummed. Person B, however, did great and wants to tell everyone. So then person B comes up to person A and asks "How did you do?" fully expecting to brag the second person A answers. On its own, this question is benign. This happens frequently without question. On the other hand, if person B had said, "Do you want to talk about the exam?" person A might have said "No." This problem can be alleviated by simply asking people whether they want to talk about certain things. For sensitive topics, like politics and religion, this is already the norm in some places. I think it can be taken further. I suggest the following boundaries, and could probably think of more if pressed:

* Ask someone before sharing something that puts you in a positive light. Make it explicit that you are bragging. For example, ask "Can I brag about something?" before doing so.
* Ask someone before talking about something where you know there's high variance in difficulty and success. This applies to a lot of things: school, jobs, marathon running times.
