All Posts

Sorted by Magic (New & Upvoted)

Week Of Sunday, December 8th 2019

Frontpage Posts
Personal Blogposts
4 [Event] San Francisco Meetup: Projects – 170 Hawthorne St, San Francisco, CA 94107, USA – Dec 17th
Shortform [Beta]
40BrienneYudkowsky1d Suppose you wanted to improve your social relationships on the community level. (I think of this as “my ability to take refuge in the sangha”.) What questions might you answer now, and then again in one year, to track your progress? Here’s what’s come to mind for me so far. I’m probably missing a lot and would really like your help mapping things out. I think it’s a part of the territory I can only just barely perceive at my current level of development.

* If something tragic happened to you, such as a car crash that partially paralyzed you or the death of a loved one, how many people can you name whom you'd find it easy and natural to ask for help with figuring out your life afterward?
* For how many people is it the case that if they were hospitalized for at least a week you would visit them in the hospital?
* Over the past month, how lonely have you felt?
* In the past two weeks, how often have you collaborated with someone outside of work?
* To what degree do you feel like your friends have your back?
* Describe the role of community in your life.
* How do you feel as you try to describe the role of community in your life?
* When's the last time you got angry with someone and confronted them one on one as a result?
* When's the last time you apologized to someone?
* How strong is your sense that you're building something of personal value with the people around you?
* When's the last time you spent more than ten minutes on something that felt motivated by gratitude?
* When a big change happens in your life, such as losing your job or having a baby, how motivated do you feel to share the experience with others?
* When you feel motivated to share an experience with others, how satisfied do you tend to be with your attempts to do that?
* Do you know the love languages of your five closest friends? To what extent does that influence how you behave toward them?
* Does it seem to you that your friends know your love
9aaq2h Scattered thoughts on how the rationalist movement has helped me:

On the topic of rationalist self-improvement [https://www.lesswrong.com/posts/8xLtE3BwgegJ7WBbf/is-rationalist-self-improvement-real], I would like to raise the point that simply feeling as though there's a community of people who get me and that I can access when I want to has been hugely beneficial to my sense of happiness and belonging in the world. That generates a lot of hedons for me, which then on occasion allow me to "afford" doing other things I wouldn't otherwise, like spending a little more time studying mathematics or running through Anki flashcards. There's a part of me that feels like I'm not just building up this knowledge for myself, but for the future possible good of "my people". I might tie together stuff in a way that other people find interesting, or insightful, or at least enjoy reading about, and that's honestly fricking awesome and blows standard delayed-gratification "self improvement" tactics outta the water. 10/10, would recommend.

Also, there's the whole thing where Ozy, who is rat-almost-maybe-adjacent, wrote the greatest summary of the greatest dating advice book I ever read, and I literally read that effortpost every day for like 8 months while I was learning how to be a half-decent romantic option, and holy SHIT is my life better for that. But again, nothing specific to the rationalist techniques themselves there; the value of the community was pointing me to someone who thinks and writes in a way my brain sees and says "mmm yes tasty good word soup i liek thanke", and then that person happened to write a post that played a big role in helping me with a problem that was causing me a ton of grief.

TLDR: rationalists > rationalism.
6AlexMennen2d Theorem: Fuzzy beliefs (as in https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem#X6fFvAHkxCPmQYB6v ) form a continuous DCPO. (At least I'm pretty sure this is true; I've only given proof sketches so far.)

The relevant definitions: A fuzzy belief over a set X is a concave function ϕ:ΔX→[0,1] such that sup(ϕ)=1 (where ΔX is the space of probability distributions on X). Fuzzy beliefs are partially ordered by ϕ≤ψ ⟺ ∀μ∈ΔX: ϕ(μ)≥ψ(μ). The inequalities reverse because we want to think of "more specific"/"less fuzzy" beliefs as "greater", and these are the functions with lower values; the most specific/least fuzzy beliefs are ordinary probability distributions, which are represented as the concave hull of the function assigning 1 to that probability distribution and 0 to all others; these should be the maximal fuzzy beliefs. Note that, because of the order-reversal, the supremum of a set of functions refers to their pointwise infimum.

A DCPO (directed-complete partial order) is a partial order in which every directed subset has a supremum. In a DCPO, define x<<y to mean that for every directed set D with supD≥y, ∃d∈D such that d≥x. A DCPO is continuous if for every y, y=sup{x∣x<<y}.

Lemma: Fuzzy beliefs form a DCPO. Proof sketch: Given a directed set D, (supD)(μ)=inf{d(μ)∣d∈D} is concave, and {μ∣(supD)(μ)=1}=⋂d∈D{μ∣d(μ)=1}. Each of the sets in that intersection is non-empty, hence so are finite intersections of them since D is directed, and hence so is the whole intersection since ΔX is compact.

Lemma: ϕ<<ψ iff {μ∣ψ(μ)=1} is contained in the interior of {μ∣ϕ(μ)=1} and for every μ such that ψ(μ)≠1, ϕ(μ)>ψ(μ). Proof sketch: If supD≥ψ, then ⋂d∈D{μ∣d(μ)=1}⊆{μ∣ψ(μ)=1}, so by compactness of ΔX and directedness of D, there should be d∈D such that {μ∣d(μ)=1}⊆int({μ∣ϕ(μ)=1}). Similarly, for each μ such that ψ(μ)≠1, there should be dμ∈D s
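One step the first lemma's proof sketch leans on is that the pointwise infimum of a family of concave functions is again concave. A short derivation in the notation above (my own sketch, not from the original shortform):

```latex
% Pointwise infimum of concave functions is concave.
% Let each d \in D be concave, \mu_1, \mu_2 \in \Delta X, t \in [0,1]:
\begin{align*}
(\sup D)(t\mu_1 + (1-t)\mu_2)
  &= \inf_{d \in D} d(t\mu_1 + (1-t)\mu_2) \\
  &\ge \inf_{d \in D} \bigl( t\, d(\mu_1) + (1-t)\, d(\mu_2) \bigr)
     && \text{(each $d$ is concave)} \\
  &\ge t \inf_{d \in D} d(\mu_1) + (1-t) \inf_{d \in D} d(\mu_2)
     && \text{(infimum of a sum $\ge$ sum of infima)} \\
  &= t\,(\sup D)(\mu_1) + (1-t)\,(\sup D)(\mu_2).
\end{align*}
```

(Recall that, because of the order-reversal, sup D here denotes the pointwise infimum.)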

Week Of Sunday, December 1st 2019

Frontpage Posts
Shortform [Beta]
53Buck8d [I'm not sure how good this is; it was interesting to me to think about, idk if it's useful, I wrote it quickly.]

Over the last year, I internalized Bayes' Theorem much more than I previously had; this led me to notice that when I applied it in my life it tended to have counterintuitive results; after thinking about it for a while, I concluded that my intuitions were right and I was using Bayes wrong. (I'm going to call Bayes' Theorem "Bayes" from now on.)

Before I can tell you about that, I need to make sure you're thinking about Bayes in terms of ratios rather than fractions. Bayes is enormously easier to understand and use when described in terms of ratios. For example: Suppose that 1% of women have a particular type of breast cancer, and a mammogram is 20 times more likely to return a positive result if you do have breast cancer, and you want to know the probability that you have breast cancer given that positive result. The prior odds are 1:99, and the likelihood ratio is 20:1, so the posterior odds are 1∗20 : 99∗1 = 20:99, so you have a probability of 20/(20+99) ≈ 17% of having breast cancer. I think that this is absurdly easier than using the fraction formulation. I think that teaching the fraction formulation is the single biggest didactic mistake that I am aware of in any field.

Anyway, a year or so ago I got into the habit of calculating things using Bayes whenever they came up in my life, and I quickly noticed that Bayes seemed surprisingly aggressive to me. For example, the first time I went to the Hot Tubs of Berkeley, a hot tub rental place near my house, I saw a friend of mine there. I wondered how regularly he went there. Consider the hypotheses "he goes here three times a week" and "he goes here once a month". The likelihood ratio is about 12x in favor of the former hypothesis.
So if I previously was ten to one against the three-times-a-week hyp
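The odds-form update described above (posterior odds = prior odds × likelihood ratio) can be sketched in a few lines; the function name and tuple representation are my own, not from the post:

```python
from fractions import Fraction

def posterior_odds(prior, likelihood_ratio):
    """Multiply prior odds by the likelihood ratio, componentwise."""
    (a, b), (c, d) = prior, likelihood_ratio
    return a * c, b * d

# Mammogram example: 1% base rate -> prior odds 1:99; a positive result is
# 20x likelier given cancer -> likelihood ratio 20:1.
for_, against = posterior_odds((1, 99), (20, 1))
print(for_, against)                   # 20 99
print(Fraction(for_, for_ + against))  # 20/119, about 17%
```

Note that no normalization is needed until the very end, which is what makes the ratio form so much lighter than the fraction form.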
42BrienneYudkowsky3d Some advice to my past self about autism:

Learn about what life is like for people with a level 2 or 3 autism diagnosis. Use that reference class to predict the nature of your problems and the strategies that are likely to help. Only after making those predictions, adjust for your own capabilities and circumstances. Try this regardless of how you feel about calling yourself autistic or seeking a diagnosis. Just see what happens.

Many stereotypically autistic behaviors are less like symptoms of an illness, and more like excellent strategies for getting shit done and having a good life. It’s just hard to get them all working together. Try leaning into those behaviors and see what’s good about them. For example, you know how when you accidentally do something three times in a row, you then feel compelled to keep doing it the same way at the same time forever? Studying this phenomenon in yourself will lead you to build solid and carefully designed routines that allow you to be a lot more reliably vibrant.

You know how some autistic people have one-on-one aides, caretakers, and therapists who assist in their development and day-to-day wellbeing? Read a bit about what those aides do. You’ll notice right away that the state of the art in this area is crap, but try to imagine what professional autism aides might do if they really had things figured out and were spectacular at their jobs. Then devote as many resources as you can spare for a whole year to figuring out how to perform those services for yourself.

It seems to me that most of what’s written about autism by neurotypicals severely overemphasizes social stuff. You’ll find almost none of it compelling. Try to understand what’s really going on with autism, and your understanding will immediately start paying off in non-social quality of life improvements.
Keep at it, and it’ll eventually start paying off in deep and practical social insights as well (which I know you don’t care about right now, but it’s true). I
34BrienneYudkowsky5d Here’s what Wikipedia has to say about monographs [https://en.wikipedia.org/wiki/Monograph]: “A monograph is a specialist work of writing… or exhibition on a single subject or an aspect of a subject, often by a single author or artist, and usually on a scholarly subject… Unlike a textbook, which surveys the state of knowledge in a field, the main purpose of a monograph is to present primary research and original scholarship ascertaining reliable credibility to the required recipient. This research is presented at length, distinguishing a monograph from an article.”

I think it’s a bit of an antiquated term. Either that or it’s chiefly British, because as an American I’ve seldom encountered it. I know the word because Sherlock Holmes is always writing monographs. In *A Study In Scarlet*, he says, “I gathered up some scattered ash from the floor. It was dark in colour and flakey—such an ash as is only made by a Trichinopoly. I have made a special study of cigar ashes—in fact, I have written a monograph upon the subject. I flatter myself that I can distinguish at a glance the ash of any known brand, either of cigar or of tobacco.” He also has a monograph on the use of disguise in crime detection, and another on the utilities of dogs in detective work.

When I tried thinking of myself as writing “monographs” on things, I broke through some sort of barrier. The things I wrote turned out less inhibited and more… me. I benefited from them myself more as well. What I mean by “monograph” is probably a little different from what either Sherlock or academia means, but it’s in the same spirit. I think of it as a photo study or a character sketch, but in non-fiction writing form.

Here are my guidelines for writing a monograph.

1. Pick a topic you can personally investigate. It doesn’t matter whether it’s “scholarly”.
It’s fine if other people have already written dozens of books on the subject, regardless of whether you’ve read them, just as long as you can stick your own
29Ben Pace8d Good posts you might want to nominate in the 2018 Review

I'm on track to nominate around 30 posts from 2018, which is a lot. Here is a list of about 30 further posts I looked at that I think were pretty good but didn't make my top list, in the hopes that others who did get value out of the posts will nominate their favourites. Each post has a note I wrote down for myself about the post.

* Reasons compute may not drive AI capabilities growth [https://www.lesswrong.com/posts/hSw4MNTc3gAwZWdx9/reasons-compute-may-not-drive-ai-capabilities-growth]
  * I don’t know if it’s good, but I’d like it to be reviewed to find out.
* The Principled-Intelligence Hypothesis [https://www.lesswrong.com/posts/Tusi9getaQ2o6kZsb/the-principled-intelligence-hypothesis]
  * Very interesting hypothesis generation. Unless it’s clearly falsified, I’d like to see it get built on.
* Will AI See Sudden Progress? [https://www.lesswrong.com/posts/AJtfNyBsum6ZzWxKR/will-ai-see-sudden-progress] DONE
  * I think this post should be considered paired with Paul’s almost-identical post. It’s all exactly one conversation.
* Personal Relationships with Goodness [https://www.lesswrong.com/posts/7xQAYvZL8T5L6LWyb/personal-relationships-with-goodness]
  * This felt like a clear analysis of an idea and coming up with some hypotheses. I don’t think the hypotheses really capture what’s going on, and most of the frames here seem like they’ve caused a lot of people to do a lot of hurt to themselves, but it seemed like progress in that conversation.
* Are ethical asymmetries from property rights? [https://www.lesswrong.com/posts/zf4gvjTkbcJ5MGsJk/are-ethical-asymmetries-from-property-rights]
  * Again, another very interesting hypothesis.
* Incorrect Hypotheses Point to Correct Observations [https://www.lesswrong.com/posts/MPj7t2w3nk4s9EYYh/incorrect-hypotheses-point-to-correct-observations]
21Raemon4d Over in this thread, Said asked [https://www.lesswrong.com/posts/5zSbwSDgefTvmWzHZ/affordance-widths#iM4Jfa3ThJcFii2Pm] the reasonable question "who exactly is the target audience for this Best of 2018 book?"

I'm working on a post that goes into a bit more detail about the Review Phase, and, to be quite honest, the whole process is a bit in flux – I expect us (the LW team as well as site participants) to learn, over the course of the review process, which aspects of it are most valuable. But here is a quick "best guess" answer for now. I see the overall review process as having two major phases:

* Phase 1: Nomination/Review/Voting/Post-that-summarizes-the-voting
* Phase 2: Compilation and Publication

I think the first phase should be oriented entirely around "internal consumption" – figuring out what epistemic standard to hold ourselves to, and how, so that we can do better in the future (as well as figuring out what ideas we've developed that should be further built upon). Any other benefits are incidental.

The final book/sequence is at least somewhat externally facing. I do expect it to be some people's first introduction to LessWrong, and other people's "one thing they read from LW this year". And at least some consideration should be given to those people's reading experience (which will be lacking a lot of context). But my guess is that should come more in the form of context-setting editor commentary than in decisions about what to include. I think “here are the fruits of our labors; take them and make use of them” is more of what I was aiming for. (Although "what standards are we internally holding ourselves to, and what work should we build towards?" is still an important function of the finished product.)

It'd be nice if people were impressed, but a better frame for that goal is "outsiders looking in can get an accurate picture of how productive our community is, and what sort of things we do", and maybe they are impressed by that or maybe not. (I re

Week Of Sunday, November 24th 2019

Frontpage Posts
Shortform [Beta]
21Chris_Leong13d Hegel – A Very Short Introduction by Peter Singer – Book Review Part 1: Freedom

Hegel is a philosopher who is notorious for being incomprehensible. In fact, for one of his books he signed a contract that assigned a massive financial penalty for missing the publishing deadline, so the book ended up being a little rushed. While there was a time when he was dominant in German philosophy, he now seems to be held in relatively poor regard, and his main importance is seen as historical. So he's not a philosopher I was really planning to spend much time on. Given this, I was quite pleased to discover this book promising A Very Short Introduction, especially since it is written by Peter Singer, a philosopher who writes and thinks rather clearly.

After reading this book, I still believe that most of what Hegel wrote was pretentious nonsense, but the one idea that struck me as most interesting was his conception of freedom. A rough definition of freedom might be ensuring that people are able to pursue whatever it is that they prefer. Hegel is not a fan of abstract definitions of freedom which treat all preferences the same and don't enquire where they come from. In his perspective, most of our preferences are purely a result of the context in which we exist, and so such an abstract definition of freedom is merely the freedom to be subject to social and historical forces. Since we did not choose our desires, he argues that we are not free when we act from our desires. Hegel argues that "every condition of comfort reveals in turn its discomfort, and these discoveries go on for ever".
One such example would be the marketing campaigns to convince us that sweating was embarrassing (https://www.smithsonianmag.com/history/how-advertisers-convinced-americans-they-smelled-bad-12552404/).
12TurnTrout11d From my Facebook:

My life has gotten a lot more insane over the last two years. However, it's also gotten a lot more wonderful, and I want to take time to share how thankful I am for that. Before, life felt like... a thing that you experience, where you score points and accolades and check boxes. It felt kinda fake, but parts of it were nice. I had this nice cozy little box that I lived in, a mental cage circumscribing my entire life. Today, I feel (much more) free.

I love how curious I've become, even about "unsophisticated" things. Near dusk, I walked the winter wonderland of Ogden, Utah with my aunt and uncle. I spotted this gorgeous red ornament hanging from a tree, with a hunk of snow stuck to it at north-east orientation. This snow had apparently decided to defy gravity. I just stopped and stared. I was so confused. I'd kinda guessed that the dry snow must induce a huge coefficient of static friction, hence the winter wonderland. But that didn't suffice to explain this. I bounded over and saw the smooth surface was iced, so maybe part of the snow melted in the midday sun, froze as evening advanced, and then the part-ice part-snow chunk stuck much more solidly to the ornament. Maybe that's right, and maybe not. The point is that two years ago, I'd have thought this was just "how the world worked", and it was up to physicists to understand the details. Whatever, right? But now, I'm this starry-eyed kid in a secret shop full of wonderful secrets. Some secrets are already understood by some people, but not by me. A few secrets I am the first to understand. Some secrets remain unknown to all. All of the secrets are enticing.

My life isn't always like this; some days are a bit gray and draining. But many days aren't, and I'm so happy about that. Socially, I feel more fascinated by people in general, more eager to hear what's going on in their lives, more curious what it feels like to be them that day.
In particular, I've fallen in love with the rationalist and
12ofer14d--Daniel Kahneman, Thinking, Fast and Slow To the extent that the above phenomenon tends to occur, here's a fun story that attempts to explain it: At every moment our brain can choose something to think about (like "that exchange I had with Alice last week"). How does the chosen thought get selected from the thousands of potential thoughts? Let's imagine that the brain assigns an "importance score" to each potential thought, and thoughts with a larger score are more likely to be selected. Since there are thousands of thoughts to choose from, the optimizer's curse [https://www.lesswrong.com/posts/5gQLrJr2yhPzMCcni/the-optimizer-s-curse-and-how-to-beat-it] makes our brain overestimate the importance of the thought that it ends up selecting.
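The selection effect ofer describes can be simulated directly. A toy model with assumed numbers (every candidate thought has true importance 0, and the brain sees that value plus unit Gaussian noise):

```python
import random

random.seed(0)
N_THOUGHTS, TRIALS = 1000, 200

selected_scores = []
for _ in range(TRIALS):
    # Every candidate thought has true importance 0; the brain only sees a
    # noisy estimate, then attends to the argmax of the noisy scores.
    noisy = [random.gauss(0.0, 1.0) for _ in range(N_THOUGHTS)]
    selected_scores.append(max(noisy))

avg = sum(selected_scores) / TRIALS
print(round(avg, 2))  # roughly 3.2: the selected thought looks about 3 sigma
                      # more important than it really is (true importance is 0)
```

The more candidate thoughts competing for attention, the larger the overestimate, which is exactly the optimizer's curse applied to attention.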
10eigen10d Eliezer has the Sequences, Scott the Codex; what does Robin Hanson have? Can someone point me in a direction where I could start reading his posts in a manner that makes sense? I found this post, which may be helpful: https://www.lesswrong.com/posts/SSkYeEpTrYMErtsfa/what-are-some-of-robin-hanson-s-best-posts – does anyone have an opinion on it?
10romeostevensit10dThe most important inversion I know of is cause and effect. Flip them in your model and see if suddenly the world makes more sense.

Week Of Sunday, November 17th 2019

Frontpage Posts
Shortform [Beta]
15Ruby18d Why I'm excited by the 2018 Review

I generally fear that perhaps some people see LessWrong as a place where people just read and discuss "interesting stuff", not much different from a subreddit on anime or something. You show up, see what's interesting that week, chat with your friends. LessWrong's content might be considered "more healthy" relative to most internet content, and many people say they browse LessWrong to procrastinate but feel less guilty about it than about other browsing, but the use-case still seems to be mostly entertainment.

None of the above is really a bad thing, but in my mind, LessWrong is about much more than a place for people to hang out and find entertainment in sharing joint interests. In my mind, LessWrong is a place where the community makes collective progress on valuable problems. It is an ongoing discussion where we all try to improve our understanding of the world and ourselves. It's not just play or entertainment – it's about getting somewhere. It's as much like an academic journal where people publish and discuss important findings as it is like an interest-based subreddit.

And all this makes me really excited by the LessWrong 2018 Review. The idea of the review is to identify posts that have stood the test of time and made lasting contributions to the community's knowledge and meaningfully impacted people's lives. It's about finding the posts that represent the progress we've made.

During the design of the review (valiantly driven by Raemon), I was apprehensive that people would not feel motivated by the process and put in the necessary work. But less than 24 hours after launching, I'm excited by the nominations [https://www.lesswrong.com/nominations] and what people are writing in their nomination comments. Looking at the list of nominations so far and reading the comments, I'm thinking "Yes! This is a list showing the meaningful progress the LW community has made.
We are not just a news or entertainment site
10mr-hire17d In response to a "sell LW to me" post:

I think that the thing LW is trying to do is hard. I think there's a legitimate split in the community around the things you're calling "cyber-bullying" – I think there should be a place for Crocker's-rules-style combat-culture reasoning, but I also want a community that is charitable and respectful and kind while maintaining good epistemics. I also think there's a legitimate split in the community around the things you're calling "epistemically sketchy" – I think there should be a place for post-rational ponderings, but I also think there should be a place that doesn't cater to them.

I have an impression that LW is trying to cater to both sides of these splits, and basically ending up in a middle ground that no one wants, driving a lot of the most interesting posters away.

That being said, I'm quite impressed by the team running LW. I'm quite impressed by the product that is LW. I'm also quite impressed by the experiments and direction of LW – I perceive it as actively getting better over time, and grappling with hard questions. I don't know a better place to put things to create common knowledge about things I wish were common knowledge in the rationalist community, and I expect that things I put there will benefit from the improvements over time.

I think that the mods are justifiably being very careful about imposing norms, because splitting the community is very dangerous, but I do have a small amount of faith that they'll navigate it correctly – enough to make posting there worth it.
9TurnTrout19dI feel very excited by the AI alignment discussion group I'm running at Oregon State University. Three weeks ago, most attendees didn't know much about "AI security mindset"-ish considerations. This week, I asked the question "what, if anything, could go wrong with a superhuman reward maximizer which is rewarded for pictures of smiling people? Don't just fit a bad story to the reward function. Think carefully." There was some discussion and initial optimism, after which someone said "wait, those optimistic solutions are just the ones you'd prioritize! What's that called, again?" (It's called anthropomorphic optimism [https://www.readthesequences.com/Anthropomorphic-Optimism]) I'm so proud.
7mingyuan18d [This is in shortform because I haven't looked into any of the existing literature on the subject]

I've been thinking for a while about how people's mental machinery works. Specifically, I've been thinking about spelling and reading. It's reasonable to assume that everyone has roughly the same mental machinery for spoken language, as this is something that has been a part of human experience for tens of thousands of years. Similarly, you'd expect everyone to have the same mental machinery for loving others, feeling hungry, and other things that were present in the ancestral environment. Reading, on the other hand, is an 'intellectual technology' that's only been around for a couple thousand years. And of course, along with reading comes writing and spelling.

My housemates and I had a conversation once a couple years ago where we each tried to describe our internal experience of spelling words, and they were so vastly different as to be incomprehensible. For example, for me, words are basically indistinguishable from their spellings – each English word is a chunk, and when I think of a word I just also think of a picture of that word. If asked to spell it, I can read it off from the picture, or I can just say the correct sequence of letters in a way that's introspectively opaque to me. On the other hand, some of my housemates described having to sound out the word each time (it was stored verbally rather than visually), or other things that were even more foreign to me and that I can't remember because the conversation was two years ago. (But you can imagine another person who has to imagine looking the word up in a dictionary, or typing it on a keyboard, in order to spell it.)

I see this also with the task of memorizing text. For me this is basically trivial – I can memorize hundreds of lines of text in a day or two if I just read through it enough times.
One of my housemates, on the other hand, has basically nothing memorized at all, and it's very hard for him
6strangepoop20d The expectations you do not know you have control your happiness more than you know. High expectations that you currently have don't look like high expectations from the inside; they just look like how the world is/would be.

But "lower your expectations" can often be almost useless advice, kind of like "do the right thing". Trying to incorporate "lower expectations" often amounts to "be sad". How low should you go? It's not clear at all if you're using territory-free un-asymmetric simple rules like "lower". Like any other attempt at truth-finding, it is not magic. It requires thermodynamic work.

The thing is, the payoff is rather amazing. You can just get down to work. As soon as you're free of a constant stream of abuse from beliefs previously housed in your head, you can Choose without Suffering. The problem is, I'm not sure how to strategically go about doing this, other than using my full brain with Constant Vigilance.

Coda: A large portion of the LW project (or at least, more than a few offshoots) is about noticing you have beliefs that respond to incentives other than pure epistemic ones, and trying not to reload when shooting your foot off with those. So unsurprisingly, there's a failure mode here: when you publicly declare really low expectations (e.g. "everyone's an asshole"), it works to challenge people, urges them to prove you wrong. It's a cool trick for winning games of Chicken, but as usual, it works by handicapping you. So make sure you at least understand the costs and the contexts it works in.
