All Posts


Friday, August 23rd 2019

Shortform [Beta]
1Matthew Barnett15h "Immortality is cool and all, but our universe is going to run down from entropy eventually" I consider this argument wrong for two reasons. The first is the obvious reason, which is that even if immortality is impossible, it's still better to live for a long time. The second reason why I think this argument is wrong is because I'm currently convinced that literal physical immortality is possible in our universe. Usually when I say this out loud I get an audible "what" or something to that effect, but I'm not kidding. It's going to be hard to explain my intuitions for why I think real immortality is possible, so bear with me. First, this is what I'm not saying:
* I'm not saying that we can outlast the heat death of the universe somehow
* I'm not saying that we just need to shift our conception of immortality to be something like, "We live in the hearts of our countrymen" or anything like that.
* I'm not saying that I have a specific plan for how to become immortal personally, and
* I'm not saying that my proposal has no flaws whatsoever and that this is a valid line of research to be conducting at the moment.
So what am I saying? A typical model of our life as humans is that we are something like a worm in 4 dimensional space. On one side of the worm there's our birth, and on the other side of the worm is our untimely death. We 'live through' this worm, and that is our life. The length of our life is measured by considering the length of the worm in 4 dimensional space, measured just like a yardstick. Now just change the perspective a little bit. If we could somehow abandon our current way of living, then maybe we can alter the geometry of this worm so that we are immortal. Consider: a circle has no starting point and no end. If someone could somehow 'live through' a circle, then their life would consist of an eternal loop through experiences, repeating endlessly. The idea is that we somehow construct a physical manifestation of this immorta

Thursday, August 22nd 2019

Frontpage Posts
Shortform [Beta]
7mr-hire18h A frequent failure mode that I have as a leader:
* Someone comes on to a new project, and makes a few suggestions.
* All of those suggestions are things we/I have thought about and discussed in detail, and we have detailed reasons why we've made the decisions we have.
* I tell the person those reasons.
* The person comes away feeling like the project isn't really open to criticism or feedback, and their ideas won't be heard.
I think a good policy is to just say yes to WHATEVER experiment someone who is new to the project proposes, and let them take their own lumps or be pleasantly surprised. But, despite having known this for a bit, I always seem to forget to do this when it matters. I wonder if I can add this to our onboarding checklists.
4Matthew Barnett2d I generally agree with the heuristic that we should "live on the mainline", meaning that we should mostly plan for events which capture the dominant share of our probability. This heuristic causes me to have a tendency to do some of the following things:
* Work on projects that I think have a medium-to-high chance of succeeding and quickly abandon things that seem like they are failing.
* Plan my career trajectory based on where I think I can plausibly maximize my long term values.
* Study subjects only if I think that I will need to understand them at some point in order to grasp an important concept. See more details here [https://www.lesswrong.com/posts/MnrQMLuEg5wZ7f4bn/matthew-barnett-s-shortform#tdTgyEf2Giy6SAZ7n].
* Avoid doing work that leverages small probabilities of exceptionally bad outcomes. For example, I don't focus my studying on worst-case AI safety risk (although I do think that analyzing worst-case failure modes is useful from the standpoint of a security mindset [https://arbital.com/p/AI_safety_mindset/]).
I see a few problems with this heuristic, however, and I'm not sure quite how to resolve them. More specifically, I tend to float freely between different projects because I am quick to abandon things if I feel like they aren't working out (compare this to the mindset that some game developers have when they realize their latest game idea isn't very good). One case where this shows up is when I change my beliefs about where the most effective ways to spend my time as far as long-term future scenarios are concerned. I will sometimes read an argument about how some line of inquiry is promising and for an entire day believe that this would be a good thing to work on, only for the next day to bring another argument. And things like my AI timeline predictions vary erratically, much more than I expect most people's: I sometimes wake up and think that AI might be just 10 years away and other days I wake up and wond

Wednesday, August 21st 2019

Shortform [Beta]
32Buck3d A couple weeks ago I spent an hour talking over video chat with Daniel Cantu, a UCLA neuroscience postdoc who I hired on Wyzant.com [https://www.wyzant.com/match/tutor/87443576?fbclid=IwAR3n91qFP_ijKlfMHrw1UmOVOhdw3jyG1r1A-whIJBaFPzpBWtWCmzBe414] to spend an hour answering a variety of questions about neuroscience I had. (Thanks Daniel for reviewing this blog post for me!) The most interesting thing I learned is that I had quite substantially misunderstood the connection between convolutional neural nets and the human visual system. People claim that these are somewhat bio-inspired, and that if you look at early layers of the visual cortex you'll find that it operates kind of like the early layers of a CNN, and so on. The claim that the visual system works like a CNN didn’t quite make sense to me though. According to my extremely rough understanding, biological neurons operate kind of like the artificial neurons in a fully connected neural net layer--they have some input connections and a nonlinearity and some output connections, and they have some kind of mechanism for Hebbian learning or backpropagation or something. But that story doesn't seem to have a mechanism for how neurons do weight tying, which to me is the key feature of CNNs. Daniel claimed that indeed human brains don't have weight tying, and we achieve the efficiency gains over dense neural nets by two other mechanisms instead: Firstly, the early layers of the visual cortex are set up to recognize particular low-level visual features like edges and motion, but this is largely genetically encoded rather than learned with weight-sharing. One way that we know this is that mice develop a lot of these features before their eyes open. These low-level features can be reinforced by positive signals from later layers, like other neurons, but these updates aren't done with weight-tying. So the weight-sharing and learning here is done at the genetic level. Secondly, he thinks that we get around the need for
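To make the "weight tying" point in the post above concrete, here is a minimal sketch with illustrative sizes (the numbers and names are mine, not from the post) comparing the parameter count of a fully connected layer with that of a convolutional layer on the same input. The efficiency gain being discussed comes entirely from reusing one small kernel at every spatial position.

```python
# Minimal sketch (illustrative sizes, not from the post): why weight tying makes
# a convolutional layer far cheaper than a fully connected layer on the same input.
H, W = 32, 32        # input image height and width
K = 3                # kernel size (K x K)
n_units = 8          # hidden units / filters

# Fully connected: every unit learns its own weight for every input pixel.
dense_weights = (H * W) * n_units      # 32*32*8 = 8192 weights (biases ignored)

# Convolutional: each filter learns one K x K kernel and reuses ("ties") it
# across all spatial positions of the image.
conv_weights = (K * K) * n_units       # 3*3*8 = 72 weights (biases ignored)

print(dense_weights, conv_weights)     # 8192 vs 72
```

Dropping weight tying, as the post says the brain does, means either paying something closer to the dense cost or getting the low-level filters some other way, e.g. genetically encoded rather than learned.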
25G Gordon Worley III2d Some thoughts on Buddhist epistemology. This risks being threatening, upsetting, and heretical within a certain point of view I commonly see expressed on LW for reasons that will become clear if you keep reading. I don't know if that means you shouldn't read this if that sounds like the kind of thing you don't want to read, but I put it out there so you can make the choice without having to engage in the specifics if you don't want to. I don't think you will be missing out on anything if that warning gives you a tinge of "maybe I won't like reading this".
My mind produces a type error when people try to perform deep and precise epistemic analysis of the dharma. That is, when they try to evaluate the truth of claims made by the dharma this seems generally fine, but when they go deep enough that they end up trying to evaluate whether the dharma itself is based on something true, I get the type error. I'm not sure what people trying to do this turn up. My expectation is that their results look like noise if you aggregate over all such attempts. The reason being that the dharma is not founded on episteme.
As a quick reminder, there are at least three categories of knowledge worth considering [https://mapandterritory.org/doxa-episteme-and-gnosis-ea35e4408edd]: doxa, episteme, and gnosis. Doxa might translate as "hearsay" in English; it's about statements of the truth. Episteme is knowledge you come to believe via evaluation of the truth. Gnosis is direct, unmediated-by-ontology knowledge of reality. To this I'll also distinguish techne from episteme, the former being experienced knowledge and the latter being reasoned knowledge.
I'll make the probably not very bold claim that most LW rationalists value episteme above all else, accept techne as evidence, accept doxa as evidence about evidence and only weak evidence of truth itself, and mostly ignore gnosis because it is not "rational" in the sense that it cannot be put into words and it can only be pointed at by wor
18Raemon2d Crossposted from my Facebook timeline (and, in turn, crossposted there from vaguely secret, dank corners of the rationalsphere)
“So Ray, is LessLong ready to completely replace Facebook? Can I start posting my cat pictures and political rants there?”
Well, um, hmm.... So here’s the deal. I do hope someday someone builds an actual pure social platform that’s just actually good, that’s not out-to-get you, with reasonably good discourse. I even think the LessWrong architecture might be good for that (and if a team wanted to fork the codebase, they’d be welcome to try). But LessWrong shortform *is* trying to do a bit of a more nuanced thing than that.
Shortform is for writing up early stage ideas, brainstorming, or just writing stuff where you aren’t quite sure how good it is or how much attention to claim for it. For it to succeed there, it’s really important that it be a place where people don’t have to self-censor or stress about how their writing comes across. I think intellectual progress depends on earnest curiosity, exploring ideas, sometimes down dead ends. I even think it involves clever jokes sometimes.
But... I dunno, if I looked ahead 5 years and saw that the Future People were using shortform mostly for cat videos and political rants, I’d have a sense that we fucked up somehow. Hopefully the karma system will naturally reward shortform that’s thought provoking and interesting. If we get deluged in politics and cats and worse things I can’t predict, we might add some features that affect what shortform content gets most highly promoted.
But the bottom line is the whole thing is an experiment. The important bits to know for now are:
1) use some good judgment
2) don’t stress too much about it – if it turns out to be a problem we’ll most likely try to solve it via positive-selection tech
3) Shortform is one piece of the overall intellectual pipeline. We’re also looking into things we can do to improve LessWrong’s reward for the upper end of the r
11Chris_Leong2d One thing I'm finding quite surprising about shortform is how long some of these posts are. It seems that many people are using this feature to indicate that they've just written up these ideas quickly in the hope that the feedback is less harsh. This seems valuable; the feedback here can be incredibly harsh at times and I don't doubt that this has discouraged many people from posting.
6mr-hire2d Here are some of the common criticisms I get of myself. If you know me, either in person or through secondhand accounts, feel free to comment with your thoughts on which ones feel correct to you and any nuance or comments you'd like to make. Full license for this particular thread to operate on Crocker's rules and not take my feelings into account. If you don't feel comfortable commenting publicly, also feel free to message me with your thoughts.
* I have too low epistemic rigor.
* Too confident in myself.
* Not confident enough in myself.
* Too focused on status.
* I don't keep good company.
* I'm too impulsive.
* Too risk seeking.

Tuesday, August 20th 2019

Personal Blogposts
4 [Event] San Francisco Meetup: Shallow Questions · 170 Hawthorne St, San Francisco, CA 94107, USA · Aug 27th
1 [Event] Lisbon SSC Meetup #1 · Largo Rafael Bordalo Pinheiro 29 R/C, Lisboa · Aug 28th
Shortform [Beta]
10Davis_Kingsley4d Strategy mini-post: One thing that tends to be weak in strategy games is "opponent's choice" effects, where an ability has multiple possible effects and an opponent chooses which is resolved. Usually, each effect is stronger than what you would normally get for a card with that price, but in practice these cards are often quite weak. For instance, the Magic: the Gathering card "Book Burning" [https://gatherer.wizards.com/pages/card/Details.aspx?multiverseid=35891] looks quite strong in theory, as it either does 6 damage or mills 6 cards (both strong effects that might well be worth more than the card's cost, since this was a set where having cards in your graveyard was quite relevant). However, in fact it is quite weak, because in practice you will always get the effect that is less relevant; if the opponent has life to spare they'll take damage, and if the mill is no longer relevant they'll let you mill instead. This pattern holds true across multiple games. In Legend of the Five Rings, Levy [https://l5r.gamepedia.com/Levy] is similarly weak despite the fact that a card that did only one of its effects would likely be overpowered, as one effect or the other is likely to be much less relevant at any point in the game and the opponent can always choose the less relevant effect.
6Davis_Kingsley3d There are a pair of things in the rationalist community which I like to call "The Two Bad Polys" -- polyphasic sleep and polyamory. Both seem appealing to many people and have been experimented with pretty widely in the community despite being quite harmful; I strongly advise against trying either. In practice they seem to lead to lots of problems for most people who try them. (Attribution note: I'm not sure whether I was the first to come up with this term to describe the pair -- I think the two were first referred to as a dangerous pair by someone else but I might have come up with this particular name for them.)
6Swimmer9633d Fiction writing ramble, #2: Worldbuilding. This is an attempt to walk through the mental process I follow when writing fiction. [Goals: I'd like to better understand what my brain is doing, and put out ideas for other people who might be interested in writing fiction.] Historically, worldbuilding (I'm talking mainly about fantasy settings here, but sci-fi as well; earthfic applies less) has been one of the planning steps that I most struggle to do alone; I've tended to do it via brainstorming with friends. Figuring out how magic systems and societal norms work is shiny, but apparently less shiny to me than just writing. This method has some obvious downsides; it's harder to get a setting that feels lawful and consistent, and I can write myself into a corner; though the upside is that I don't tend to get nerdsniped on the worldbuilding step and never end up actually writing anything. (I've known several aspiring authors with this problem). There are two basic ways that I've approached worldbuilding for fantasy: 1) Top-down: posit some rules for a world, and simulate out the consequences; can include what magic is possible and easy/hard, but also the consequences it would have on society. 2) Bottom-up: posit some facts about that world, and try to reverse-engineer the underlying rule-set that would generate those observations. Thanks to my tendency to do worldbuilding "as I go" while already halfway into a story, rather than figuring out all the rules of the setting in advance, (and my more recent choice to write fanfiction in a setting where the rules were very poorly explained), I've done more of the latter. Both feel "generative" in a sense that's hard to describe, but has some similarities to e.g. doing murphyjitsu on an upcoming event I'm running – I'm building up a model of a scenario, asking a lot of "if X, then what?", and trying to poke at my assumptions and edge cases. (I'm assuming there are multiple ways that fantasy authors do their worldbuildin

Monday, August 19th 2019

Shortform [Beta]
16elityre4d Old post: A mechanistic description of status [https://musingsandroughdrafts.wordpress.com/2018/07/13/a-mechanistic-description-of-status/]
[This is an essay that I’ve had bopping around in my head for a long time. I’m not sure if this says anything usefully new, but it might click with some folks. If you haven’t read Social Status: Down the Rabbit Hole [https://meltingasphalt.com/social-status-down-the-rabbit-hole/] on Kevin Simler’s excellent blog, Melting Asphalt [https://meltingasphalt.com/], read that first. I think this is pretty bad and needs to be rewritten and maybe expanded substantially, but this blog is called “musings and rough drafts.”]
In this post, I’m going to outline how I think about status. In particular, I want to give a mechanistic account of how status necessarily arises, given some set of axioms, in much the same way one can show that evolution by natural selection must necessarily occur given the axioms of 1) inheritance of traits, 2) variance in reproductive success based on variance in traits, and 3) mutation. (I am not claiming any particular skill at navigating status relationships, any more than a student of sports-biology is necessarily a skilled basketball player.) By “status” I mean prestige-status.
Axiom 1: People have goals. That is, for any given human, there are some things that they want. This can include just about anything. You might want more money, more sex, a ninja-turtles lunchbox, a new car, to have interesting conversations, to become an expert tennis player, to move to New York, etc.
Axiom 2: There are people who control resources relevant to other people achieving their goals. The kinds of resources are as varied as the goals one can have. Thinking about status dynamics and the like, people often focus on the particularly convergent resources, like money. But resources that are only relevant to a specific goal are just as much a part of the dynamics I’m about to describe. Knowing a bunch about late 16th century Swed
4Matthew Barnett4d Related to: Realism about rationality [https://www.lesswrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality]
I have talked to some people who say that they value ethical reflection, and would prefer that humanity reflected for a very long time before colonizing the stars. In a sense I agree, but at the same time I can't help but think that "reflection" is a vacuous feel-good word that has no shared common meaning.
Some forms of reflection are clearly good. Epistemic reflection is good if you are a consequentialist, since it can help you get what you want. I also agree that narrow forms of reflection can be good. One example of a narrow form of reflection is philosophical reflection where we compare the details of two possible outcomes and then decide which one is better. However, there are much broader forms of reflection which I'm more hesitant to endorse: namely, the vague types of reflection, such as reflecting on whether we really value happiness, or whether we should really truly be worried about animal suffering.
I can perhaps sympathize with the intuition that we should really try to make sure that what we put into an AI is what we really want, rather than just what we superficially want. But fundamentally, I have skepticism that there is any canonical way of doing this type of reflection that leads to non-arbitrariness. I have heard something along the lines of "I would want a reflective procedure that extrapolates my values as long as the procedure wasn't deceiving me or had some ulterior motive" but I just don't see how this type of reflection corresponds to any natural class. At some point, we will just have to put some arbitrariness into the value system, and there won't be any "right answer" about how the extrapolation is done.
3Matthew Barnett4d In discussions about consciousness I find myself repeating the same basic argument against the existence of qualia [https://en.wikipedia.org/wiki/Qualia] constantly. I don't do this just to be annoying: it is just my experience that
1. People find consciousness really hard to think about, and it has been known to cause a lot of disagreements.
2. Personally I think that this particular argument dissolved perhaps 50% of all my confusion about the topic, and was one of the simplest, clearest arguments that I've ever seen.
I am not being original either. The argument is the same one that has been used in various forms across Illusionist/Eliminativist literature that I can find on the internet. Eliezer Yudkowsky used a version of it [https://www.lesswrong.com/posts/7DmA3yWwa6AT5jFXt/zombies-redacted] many years ago. Even David Chalmers, who is quite the formidable consciousness realist, admits in The Meta-Problem of Consciousness [https://philpapers.org/archive/CHATMO-32.pdf] that the argument is the best one he can find against his position.
The argument is simply this: if we are able to explain why you believe in, and talk about, qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis. This is the standard debunking argument. It has a more general form which can be used to deny the existence of a lot of other non-reductive things: distinct personal identities, gods, spirits, libertarian free will, a mind-independent morality, etc. In some sense it's just an extended version of Occam's razor, showing us that qualia don't do anything in our physical theories, and thus can be rejected as things that actually exist out there in any sense.
To me this argument is very clear, and yet I find myself arguing it a lot. I am not sure how else to get people to see my side of it other than sending them a bunch of articles which more-or-less make the exact same argument but from different perspectives. I
3emmab5d spectrum of qualia of rapid muscle movement:
1. wiggle back and forth with no delay
2. effort required to make individual movements one in a row
some actions are only sometimes available to me in form #2 e.g. glossolalia, patterns of rapid eye movement
sometimes it seems like a matter of training e.g. learning to wiggle my ears

Sunday, August 18th 2019

Personal Blogposts
4 [Event] San Francisco Meetup: Projects · 170 Hawthorne St, San Francisco, CA 94107, USA · Aug 20th
Shortform [Beta]
50Buck5d I think that an extremely effective way to get a better feel for a new subject is to pay an online tutor to answer your questions about it for an hour. It turns out that there are a bunch of grad students on Wyzant who mostly work tutoring high school math or whatever but who are very happy to spend an hour answering your weird questions.
For example, a few weeks ago I had a session with a first-year Harvard synthetic biology PhD. Before the session, I spent a ten-minute timer writing down things that I currently didn't get about biology. (This is an exercise worth doing even if you're not going to have a tutor, IMO.) We spent the time talking about some mix of the questions I'd prepared, various tangents that came up during those explanations, and his sense of the field overall. I came away with a whole bunch of my minor misconceptions fixed, a few pointers to topics I wanted to learn more about, and a way better sense of what the field feels like and what the important problems and recent developments are.
There are a few reasons that having a paid tutor is a way better way of learning about a field than trying to meet people who happen to be in that field. I really like it that I'm paying them, and so I can aggressively direct the conversation to wherever my curiosity is, whether it's about their work or some minor point or whatever. I don't need to worry about them getting bored with me, so I can just keep asking questions until I get something.
Conversational moves I particularly like:
* "I'm going to try to give the thirty second explanation of how gene expression is controlled in animals; you should tell me the most important things I'm wrong about."
* "Why don't people talk about X?"
* "What should I read to learn more about X, based on what you know about me from this conversation?"
All of the above are way faster with a live human than with the internet. I think that doing this for an hour or two weekly will make me substantially more knowl
24Swimmer9635d Fiction writing ramble, 1 of ??: I've been trying to introspect lately on my fiction writing process. My brain is opaque about what it's doing, which I guess makes sense – I've been telling stories since literally before I knew how to write, and I've never had any formal instruction in it. Current question: why is it so intensely a superstimulus for me? Writing fiction is pretty much the only thing I will eagerly and endorsedly do for 16 hours straight. My best and most enjoyable writing periods feel very much like "chasing the shiny" – what "the shiny" is, is hard to describe, but it feels more "substantial" than other second-to-second dopamine-hit-seeking behaviours like chasing links on Wikipedia or TV Tropes. Somewhere between pushing towards an exercise high, and drinking water when I'm thirsty, whereas Internet dopamine-seeking is more like eating sugar. Thinking about a story I'm writing, and especially talking meta about it with friends, is also extremely rewarding, to the point that sometimes I'll find myself off in daydreams or having a chat conversation instead of, you know, actually writing. Reaching the shiny thing on the page is effortful, much of the time – the feeling is often that it's "far away", and I'm sort of hill-climbing towards it, but often spotting smaller shiny things along the way. The big shiny thing can be a particularly clever or cool plot event that I'm setting the groundwork for, a felt sense theme-shaped-thing I'm groping towards, a lesson I want to convey, or the resolution of foreshadowing I've been setting up for dozens of chapters. (Or, let's be honest, my biggest self-indulgence in writing: gratuitous medical emergencies). The opportunities snatched up along the way can be an amusing interaction between characters, a bit of neat worldbuilding my brain generated on the spot, a snippet of dialogue that feels deliciously in-character, or just a sentence or word use that feels poetic and satisfying. (Things I'm not chasing t
12Hazard5d Memex Thread: I've taken copious notes in notebooks over the past 6 years, I've used evernote on and off as a capture tool for the past 4 years, and for the past 1.5 years I've been trying to organize my notes via a personal wiki. I'm in the process of switching and redesigning systems, so here's some thoughts.
12Evan Rysdam5d I'm told that there was a period of history where only the priests were literate and therefore only they could read the Bible. Or maybe it was written in Latin and only they knew how to read it, or something. Anyway, as a result, they were free to interpret it any way they liked, and they used that power to control the masses. Goodness me, it's a good thing we Have Science Now and can use it to free ourselves from the overbearing grip of Religion! Oh, totally unrelatedly, the average modern person is scientifically illiterate and absorbs their knowledge of what is "scientific" through a handful of big news sources and through cultural osmosis. Hmm. [https://i.imgur.com/yw4hlST.png] Moral: Be wary of packages labeled "science" and be especially wary of social pressure to believe implausible-sounding claims just because they're "scientific". There are many ways for that beautiful name to get glued onto random memes.
9mr-hire5d I've had a draft sitting in my posts section for months about shallow, deep, and transfer learning. Just made a Twitter thread that gets at the basics. And figured I'd post here to gauge interest in a longer post with examples.
Love kindle, love Evernote. But never highlight good ideas. It's level one reading. Instead use written notes and link important ideas to previous concepts you know.
Level 1: What's important? What does this mean?
Level 2: How does this link to compare/contrast to previous concepts or experiences? Do I believe this?
Level 3: How is this a metaphor for seemingly unrelated concepts? How can this frame my thinking?
4 questions to get to level 2:
* How is this similar to other things I know?
* How is this different from other things I know?
* What previous experiences can I relate this to?
* In what circumstances would I use this knowledge? How would I use it?
3 questions to ask to get to level 3:
* How does it feel to view the world through this lens?
* How does this explain everything?
* What is this a metaphor for?

Saturday, August 17th 2019

Personal Blogposts
1 [Event] South Bay Meetup · San Jose · Aug 31st
Shortform [Beta]
20Raemon7d I've posted this on Facebook a couple times but seems perhaps worth mentioning once on LW: A couple weeks ago I registered the domain LessLong.com and redirected it to LessWrong.com/shortform. :P
14Davis_Kingsley7d One concept people talk about in game design is "pendulum swing", where something that is too powerful or too weak is overcorrected in balance patches and becomes the opposite -- something too powerful becomes too weak, while something too weak becomes too powerful. A similar concept can be present in other sectors as well -- often, noticing one problem can lead to an overcorrection that brings you the opposite problem. For instance, an early stage organization might notice that they aren't systematic enough in their processes, overcorrect, and become too rigid and doctrinaire. (Duncan Sabien uses this concept of pendulum swing a lot, and while I was aware of it prior to his use he's done a lot to bring it to attention as a relevant rationality concept.)
12Benito7d I block all the big social networks from my phone and laptop, except for 2 hours on Saturday, and I noticed that when I check Facebook on Saturday, the notifications are always boring and not something I care about. Then I scroll through the newsfeed for a bit and it quickly becomes all boring too. And I was surprised. Could it be that, all the hype and narrative aside, I actually just wasn’t interested in what was happening on Facebook? That I could remove it from my life and just not really be missing anything? On my walk home from work today I realised that this wasn’t the case. Facebook has interesting posts I want to follow, but they’re not in my notifications. They’re sparsely distributed in my newsfeed, such that they appear a few times per week, randomly. I can get a lot of value from Facebook, but not by checking once per week - only by checking it all the time. That’s how the game is played. Anyway, I am not trading all of my attention away for such small amounts of value. So it remains blocked.
10Jacobian6d There's been a lot of noise lately about affirmative consent, a standard of consent which requires explicit verbal confirmation for every escalation of romantic or sexual interaction. It has been adopted as a standard by many college campuses [https://endrapeoncampus.org/yes-means-yes], and efforts have been made to turn it into actual law [https://www.city-journal.org/affirmative-consent]. Most of the discussion has centered around the use of affirmative consent as a legal standard, and as such it is quite terrible: unfair, unjust, and impossible to interpret in a consistent way that stops bad behavior without criminalizing normal conduct. But, what I haven't seen mentioned, is that adopting affirmative consent as a loosely enforced social norm is really good for nerds. If you're not great at reading body language and subtle signs, the expectation that you'll ask for verbal consent makes flirting much easier. You're no longer an awkward geek, you're hip to the times! I've personally erred on the side of asking explicitly in the past, and I think it has worked out great for me. Most women were happy to give consent when asked, the momentary awkwardness of asking quickly forgotten. A few said "no", in which case it's a good thing I asked! And I doubt that even a single one was poised so evenly on the fence that asking for verbal consent turned her off me. What do y'all think? And is this actually making life better or worse for women who date nerds?
8Raemon6d I think there's a preformal / formal / post-formal thing going on with Double Crux. My impression is the CFAR folk who created the doublecrux framework see it less as a formal process you should stick to, and more as a general set of guiding principles. The formal process is mostly there to keep you oriented in the right direction. But I see people (sometimes me) trying to use it as a rough set of guiding principles, and then easily slipping back into all the usual failure modes of not understanding each other, or not really taking seriously the possibility that they might be the wrong one. Right now in some contexts I've come across as a bit anal about sticking to "Formal Doublecrux rules". Model share. Check for cruxes. Recurse until you find common cruxes. Look for experiments you can actually run to gain new evidence. Aim to converge on truth. And it does clearly seem that these steps aren't always the best approach for a given conversation. But I often perceive what feel like basic errors, which would have been caught if you were following the formal rules. So I'm currently, like, on a crusade to make sure the people around me that I end up in the most disagreements with are able to nail the Formal Doublecrux Framework, and once we're all roughly on that page I'll trust us to do a post-formal version of it where we trust each other to get the basics right, so we can relax about the rules.

Friday, August 16th 2019

Personal Blogposts
1 [Event] Nashville SSC September Meetup · 2509 12th Avenue South, Nashville · Sep 18th
Shortform [Beta]
17Matthew Barnett7d I get the feeling that for AI safety, some people believe that it's crucially important to be an expert in a whole bunch of fields of math in order to make any progress. In the past I took this advice and tried to deeply study computability theory, set theory, type theory -- with the hopes of it someday giving me greater insight into AI safety. Now, I think I was taking a wrong approach. To be fair, I still think being an expert in a whole bunch of fields of math is probably useful, especially if you want very strong abilities to reason about complicated systems. But my model for the way I frame my learning is much different now.
My main model, which describes my current perspective, is that employing a lazy style of learning is superior for AI safety work. Lazy [https://en.wikipedia.org/wiki/Lazy_evaluation] is meant in the computer science sense of only learning something when it seems like you need to know it in order to understand something important. I will contrast this with the model that one should learn a set of solid foundations first before going any further.
Obviously neither model can be absolutely correct in an extreme sense. I don't, as a silly example, think that people who can't do basic arithmetic should go into AI safety before building a foundation in math. And on the other side of the spectrum, I think it would be absurd to think that one should become a world renowned mathematician before reading their first AI safety paper. That said, even though both models are wrong, I think my current preference is for the lazy model rather than the foundation model. Here are some points in favor of both, informed by my first-person experience.
Points in favor of the foundations model:
* If you don't have solid foundations in mathematics, you may not even be aware of things that you are missing.
* Having solid foundations in mathematics will help you to think rigorously about things rather than having a vague non-reductionis
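For readers unfamiliar with the computer-science sense of "lazy" borrowed above, here is a minimal, purely illustrative Python sketch (function names are mine, not from the post): the eager version does all the work up front, while the lazy version only does work for the items actually requested.

```python
# Minimal illustration of eager vs. lazy evaluation (the CS sense borrowed above).
def eager_squares(n):
    return [i * i for i in range(n)]   # computes every value immediately

def lazy_squares(n):
    return (i * i for i in range(n))   # generator: computes each value on demand

xs = lazy_squares(10**12)              # returns instantly; nothing computed yet
print(next(xs), next(xs), next(xs))    # only the first three squares are ever computed
```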
5Matthew Barnett7d Sometimes people will propose ideas, and then those ideas are met immediately after with harsh criticism. A very common tendency for humans is to defend our ideas and work against these criticisms, which often gets us into a state that people refer to as "defensive." According to common wisdom, being in a defensive state is a bad thing. The rationale here is that we shouldn't get too attached to our own ideas. If we do get attached, we become liable to become crackpots [https://www.lesswrong.com/posts/qRWfvgJG75ESLRNu9/the-crackpot-offer]who can't give an idea up because it would make them look bad if we did. Therefore, the common wisdom advocates treating ideas as being handed to us by a tablet from the clouds rather than a product of our brain's thinking habits. Taking this advice allows us to detach ourselves from our ideas so that we don't confuse criticism with insults. However, I think the exact opposite failure mode is not often enough pointed out and guarded against. Specifically, the failure mode is being too willing to abandon beliefs based on surface level counterarguments. To alleviate this I suggest we shouldn't be so ready to give up our ideas in the face of criticism. This might sound irrational -- why should we get attached to our beliefs? I'm certainly not advocating that we should actually associate criticism with insults to our character or intelligence. Instead, my argument is that the process of defensively defending against criticism generates a productive adversarial structure. Consider two people. Person A desperately wants to believe proposition X, and person B desperately wants to believe not X. If B comes up to A and says, "Your belief in X is unfounded. Here are the reasons..." Person A can either admit defeat, or fall into defensive mode. If A admits defeat, they might indeed get closer to the truth. On the other hand, if A gets into defensive mode, they might also get closer to the truth in the process of desperately for evidence

Thursday, August 15th 2019

Shortform [Beta]
21DonyChristie9d I'm really noticing how the best life improvements come from purchasing or building better infrastructure, rather than trying permutations of the same set of things and expecting different results. (Much of this results from having more money, granting an expanded sense of possibility to buying useful things.) The guiding question is, "What upgrades would make my life easier?" In contrast with the question that is more typically asked: "How do I achieve this hard thing?" It seems like part of what makes this not just immediately obvious is that I feel a sense of resistance (that I don't really identify with). Part of that is a sense of... naughtiness? Like we're supposed to signal how hardworking we are. For me this relates to this fear I have that if I get too powerful, I will break away from others (e.g. skipping restaurants for a Soylent Guzzler Helmet, metaphorically) as I re-engineer my life and thereby invite conflict. There's something like a fear that buying or engaging in nicer things would be an affront to my internalized model of my parents? The infrastructure guideline relates closely to the observation that to a first approximation we are stimulus-response machines reacting to our environment, and that the best way to improve is to actually change your environment, rather than continuing to throw resources past the point of diminishing marginal returns in adaptation to the current environment. And for the same reasons, the implications can scare me, for it may imply leaving the old environment behind, and it may even imply that the larger the environmental change you make, the more variance you have for a good or bad update to your life. That would mean we should strive for large positive environmental shifts, while minimizing the risk of bad ones. (This also gives me a small update towards going to Mars being more useful for x-risk, although I may need to still propagate a larger update in the other direction away from space marketing. ) Of course
16Matthew Barnett8d I think there are some serious low hanging fruits for making people productive that I haven't seen anyone write about (not that I've looked very hard). Let me just introduce a proof of concept: Final exams in university are typically about 3 hours long. And many people are able to do multiple finals in a single day, performing well on all of them.
During a final exam, I notice that I am substantially more productive than usual. I make sure that every minute counts: I double check everything and think deeply about each problem, making sure not to cut corners unless absolutely required because of time constraints. Also, if I start daydreaming, then I am able to immediately notice that I'm doing so and cut it out. I also believe that this is the experience of most other students in university who care even a little bit about their grade. Therefore, it seems like we have an example of an activity that can just automatically produce deep work.
I can think of a few reasons why final exams would bring out the best of our productivity:
1. We care about our grade in the course, and the few hours in that room are the most impactful to our grade.
2. We are in an environment where distractions are explicitly prohibited, so we can't make excuses to ourselves about why we need to check Facebook or whatever.
3. There is a clock at the front of the room which makes us feel like time is limited. We can't just sit there doing nothing because then time will just slip away.
4. Every problem you do well on benefits you by a little bit, meaning that there's a gradient of success rather than a binary pass or fail (though sometimes it's binary). This means that we care a lot about optimizing every second because we can always do slightly better.
If we wanted to do deep work for some other desired task, all four of these reasons seem like they could be replicable. Here is one idea (related to my own studying), although I'm sure I can come up with a better one if I thought deeply
8Raemon9d Draft/WIP: The Working Memory Hypothesis re: Intellectual Progress
Strong claim, medium felt
So I'm working with the hypothesis that working memory (or something related) is a major bottleneck on progress within a given field. This has implications on what sort of things fields need.
Basic idea is that you generally need to create new concepts out of existing sub-concepts. You can only create a concept if you can hold the requisite sub-concepts in your head at once. The default working memory limit is 4-7 chunks. You can expand that somewhat by writing things down on whiteboards. I'm not sure exactly what the practical limits are. But there is some upper limit on how complex a concept people can work with, even the smartest people in the field.
So there are some ideas you just can't invent, if they require 30 moving parts to consider at once. If an idea has 30 moving parts, it's necessary to find some way to conceptualize it as having fewer parts. Fortunately, this is possible. When I first learn a new concept, it's unfamiliar, and I have to track all of its individual pieces separately – if it has 5 sub-concepts it takes 5 chunks of working memory. But eventually I learn to think of it as a single concept, and then it only takes 1.
So, an important aspect of a field is distillation – improving explanations to help people think of clusters-of-ideas as a single concept they can reason about as if it took up only one chunk.
Distillation is a sort of tedious process though. So part of why Bell Labs is a great idea is that you can have all these specialists of different fields working in the same building, and whenever one of them wants to learn something they can just walk down the hall and ask the guy who invented it "hey, how does this work", and they get an explanation. And if they don't understand it fully, they can immediately, frictionlessly, ask followup questions. This doesn't just help them understand new ideas – it reduces those ideas in conceptual
4FactorialCode8d I notice that there's a fair bit of "thread necromancy" on LessWrong. I don't think it's a bad thing, but I think it would be cool to have an option to filter comments based on the time gap between when the post was made and when the comment was made. That way it's easier to see what the discussion was like around the time when the post was made.
On a related note, does LessWrong record when upvotes are made? It would also be cool to have a "time-machine" to see how up-votes and down-votes in a thread evolve over time. Could be good for analysing the behaviour of threads in the short term, and a way to see how community norms change in the long term.
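A rough sketch of the filter being proposed above, under assumed field names (`posted_at`, `created_at`); this is not a description of LessWrong's actual schema or API, just an illustration of the idea: keep only the comments made within some window of the post's creation time.

```python
# Hypothetical sketch of the proposed comment filter; field names are assumptions
# for illustration, not LessWrong's real data model.
from datetime import timedelta

def early_comments(post, comments, window=timedelta(days=7)):
    """Return only the comments made within `window` of the post going up."""
    return [c for c in comments if c["created_at"] - post["posted_at"] <= window]
```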

Wednesday, August 14th 2019

Shortform [Beta]
13jacobjacob10d What important book that needs fact-checking is nobody fact-checking?
8Hazard9d Noticing an internal dynamic. As a kid I liked to build stuff (little catapults, modified nerf guns, sling shots, etc). I entered a lot of those projects with the mindset of "I'll make this toy and then I can play with it forever and never be bored again!" When I would make the thing and get bored with it, I would be surprised and mildly upset, then forget about it and move to another thing. Now I think that when I was imagining the glorious cool toy future, I was actually imagining having a bunch of friends to play with (didn't live around many other kids).
When I got to middle school and high school and spent more time around other kids, I ran into the idea of "That person talks like they're cool but they aren't." When I got into sub-cultures centering around a skill or activity (magic) I experienced the more concentrated form, "That person acts like they're good at magic, but couldn't do a show to save their life." I got the message, "To fit in, you have to really be about the thing. No half assing it. No posing."
Why, historically, have I gotten so worried when my interests shift? I'm not yet at a point in my life where there are that many logistical constraints (I've switched majors three times in three years without a hitch). I think it's because in the back of my head I expect every possible group or social scene to say, "We only want you if you're all about doing XYZ all the time." And when I'm super excited about XYZ, it's fine. But when I feel like "Yeah, I need a break" I get nervous.
Yeah, there is a hard underlying problem of "How to not let your culture become meaningless", but I think my extra-problem is that I gravitated towards the groups that defined themselves by "We put in lots of time mastering this specific hard skill and applying it." Though I expect it to be the case that for the rest of my life I want to have thoughtful engaging discussion with intellectually honest people (a piece of what I want from less wrong), I feel less reason to be sure th
5Slider9d Magic colors and errors
Reading the Writers guild policy doc, I came across a principle of "the vase is already broken". The whole document is largely about how you make a red organization, and most of the principles are anti-white. The principle makes sense but I found it to be foreign to my culture. Things are made to be replaced. And if something is done wrong today, we will try to do right the next day.
In contrast the blue way is much more familiar to me. Accept only true things, set up things for perpetuity. In the contrast I noticed that the blue thing is focused on avoiding making mistakes. You slow down enough that everything can be verified. In the red way you speed up so things can be corrected. In blue, if you make a mistake you will build a machine that will do damage for the foreseeable future. In contrast, in red, if you feel the wrong way today you will not be inspired to feel so tomorrow and the energy will whimper. "Energy denial" seemed like a new perspective that I hadn't considered.
Other colors probably also have ways to treat errors. Green probably adapts: there are no "mistakes", only "happy little accidents". The Oxygen catastrophe is not armageddon but new conditions to build upon. Never say "no" but always "yes and", and just keep the harmony in front of your mind, keep the scene going. Black is probably ready to backstab their friends to death should they turn on them. It's not about what damage the mistake does but whether it proves your character, whether you are safe to keep around. On the flip side, having good enough goals means arbitrarily bad means can be swallowed. White probably does courts and maybe a balance of rights: even if we set a monumentally bad law, it will need to resolve itself against other core principles. A bad nomination is limited by the powers of the position being limited.
