All Posts

Sorted by New

2019

Frontpage Posts
Shortform [Beta]
63 · habryka · 4mo

Thoughts on integrity and accountability

[Epistemic status: Early draft version of a post I hope to publish eventually. Strongly interested in feedback and critiques, since I feel quite fuzzy about a lot of this.]

When I started studying rationality and philosophy, I had the perspective that people in positions of power and influence should primarily focus on how to make good decisions in general, and that we should generally give power to people who have demonstrated a good track record of general rationality. I also thought of power as a mostly unconstrained resource, similar to having money in your bank account, and that we should make sure to allocate power primarily to the people who are good at thinking and making decisions.

That picture has changed a lot over the years. While I think there is still a lot of value in the idea of "philosopher kings", I've made a variety of updates that significantly changed my relationship to allocating power in this way:

* I have come to believe that people's ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a large variety of other domains in which their incentives are not well optimized.

* People's rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals than by their ability to have correct opinions while being subject to incentives they don't endorse. This is a tractable intervention, and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that "having accurate beliefs in one domain" doesn't straightforwardly generalize…
52 · Buck · 1mo

I think that an extremely effective way to get a better feel for a new subject is to pay an online tutor to answer your questions about it for an hour. It turns out that there are a bunch of grad students on Wyzant who mostly work tutoring high school math or whatever, but who are very happy to spend an hour answering your weird questions.

For example, a few weeks ago I had a session with a first-year Harvard synthetic biology PhD. Before the session, I spent a ten-minute timer writing down things that I currently didn't get about biology. (This is an exercise worth doing even if you're not going to have a tutor, IMO.) We spent the time talking about some mix of the questions I'd prepared, various tangents that came up during those explanations, and his sense of the field overall. I came away with a whole bunch of my minor misconceptions fixed, a few pointers to topics I wanted to learn more about, and a way better sense of what the field feels like and what the important problems and recent developments are.

There are a few reasons that having a paid tutor is a way better way of learning about a field than trying to meet people who happen to be in that field. I really like that I'm paying them, and so I can aggressively direct the conversation to wherever my curiosity is, whether it's about their work or some minor point or whatever. I don't need to worry about them getting bored with me, so I can just keep asking questions until I get something.

Conversational moves I particularly like:

* "I'm going to try to give the thirty-second explanation of how gene expression is controlled in animals; you should tell me the most important things I'm wrong about."
* "Why don't people talk about X?"
* "What should I read to learn more about X, based on what you know about me from this conversation?"

All of the above are way faster with a live human than with the internet. I think that doing this for an hour or two weekly will make me substantially more knowledgeable…
44 · elityre · 25d

Old post: RAND needed the "say oops" skill [https://musingsandroughdrafts.wordpress.com/2018/12/01/rand-needed-the-say-oops-skill/]

[Epistemic status: a middling argument]

A few months ago [https://musingsandroughdrafts.wordpress.com/2018/06/21/initial-comparison-between-rand-and-the-rationality-cluster/], I wrote about how RAND and the "Defense Intellectuals" of the Cold War represent another precious datapoint of "very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk."

Since then I spent some time doing additional research into what cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea being that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed. However, this proved somewhat less fruitful than I was hoping, and I've put it aside for the time being. I might come back to it in the coming months.

It does seem worth sharing at least one relevant anecdote and analysis, from Daniel Ellsberg's excellent book The Doomsday Machine, given that I've already written it up.

The missile gap

In the late nineteen-fifties it was widely understood that there was a "missile gap": that the Soviets had many more ICBMs ("intercontinental ballistic missiles" armed with nuclear warheads) than the US. Estimates varied widely on how many missiles the Soviets had. The Army and the Navy gave estimates of about 40 missiles, which was about at parity with the US's strategic nuclear force. The Air Force and the Strategic Air Command, in contrast, gave estimates of as many as 1000 Soviet missiles, 20 times more than the US's count. (The Air Force and SAC were incentivized to inflate their estimates of the…
42 · habryka · 5mo

Thoughts on voting as approve/disapprove and agree/disagree:

One of the things that I am most uncomfortable with in the current LessWrong voting system is how often I feel conflicted between upvoting something because I want to encourage the author to write more comments like it, and downvoting the same thing because I think the argument the author makes is importantly flawed and I don't want other readers to walk away with a misunderstanding about the world.

I think this effect quite strongly limits certain forms of intellectual diversity on LessWrong: many people will only upvote your comment if they agree with it, and will downvote comments they disagree with, which means that arguments supporting people's existing conclusions have a strong advantage in the current karma system. Whereas the most valuable comments are likely the ones that challenge existing beliefs and rigorously argue for unpopular positions.

A feature that has been suggested many times over the years is to split voting into two dimensions: one being "agree/disagree" and the other being "approve/disapprove". Only the "approve/disapprove" dimension matters for karma and sorting, but both are displayed relatively prominently on the comment (the agree/disagree dimension at the bottom, the approve/disapprove dimension at the top). I think this has some valuable things going for it, and in particular it would make me likely to upvote more comments, because I could simultaneously signal that while I think a comment was good, I don't agree with it.

An alternative way of doing this that Ray has talked about is the introduction of short reactions that users can click at the bottom of a comment, two of the most prominently displayed ones being "agree/disagree". Reactions would be non-anonymous by default and so would serve more as a form of shorthand comment than as an alternative voting system. Here is an example of how that kind of UI might look: I don't know pre…
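A minimal sketch of the two-axis scheme described above, in TypeScript (all names here are hypothetical illustrations, not the actual LessWrong schema): each vote stores the two axes independently, and only the approval axis feeds karma and sort order.

```typescript
// Hypothetical two-axis vote record: approval and agreement are stored
// independently, but only approval ever affects karma or sorting.
type TwoAxisVote = {
  voterId: string;
  approval: -1 | 0 | 1;   // disapprove / no vote / approve -- feeds karma
  agreement: -1 | 0 | 1;  // disagree / no vote / agree -- display only
};

// Karma (and therefore sort order) reads only the approval axis.
function karma(votes: TwoAxisVote[]): number {
  return votes.reduce((sum, v) => sum + v.approval, 0);
}

// Agreement is tallied separately, purely for display on the comment.
function agreementScore(votes: TwoAxisVote[]): number {
  return votes.reduce((sum, v) => sum + v.agreement, 0);
}
```

The point of the split is visible in the types: agreement never enters karma, so a reader can register disagreement without penalizing the author.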
38 · elityre · 1mo

New post: What is mental energy? [https://wordpress.com/post/musingsandroughdrafts.wordpress.com/398]

[Note: I've started a research side project on this question, and it is already obvious to me that this ontology is importantly wrong.]

There's a common phenomenology of "mental energy". For instance, if I spend a couple of hours thinking hard (maybe doing math), I find it harder to do more mental work afterwards. My thinking may be slower and less productive. And I feel tired, or drained (mentally, instead of physically).

Mental energy is one of the primary resources that one has to allocate in doing productive work. In almost all cases, humans have less mental energy than they have time, and therefore effective productivity is a matter of energy management more than time management. If we want to maximize personal effectiveness, mental energy seems like an extremely important domain to understand. So what is it?

The naive story is that mental energy is an actual energy resource that one expends and then needs to recoup. That is, when one is doing cognitive work, they are burning calories, depleting their body's energy stores; as they use energy, they have less fuel to burn. My current understanding is that this story is not physiologically realistic. Thinking hard does consume more of the body's energy than baseline, but not that much more, and we experience mental fatigue long before we even get close to depleting our calorie stores. It isn't literal energy that is being consumed. [The Psychology of Fatigue, pg. 27]

So if not that, what is going on here? A few hypotheses (the first few are all of a cluster, so I labeled them 1a, 1b, 1c, etc.):

Hypothesis 1a: Mental fatigue is a natural control system that redirects our attention to our other goals. The explanation that I've heard most frequently in recent years (since it became obvious that much of the literature on ego depletion was off the mark) is the following: A human mind is composed of a bunch…

2018

Frontpage Posts
Shortform [Beta]
45 · Raemon · 2y

Conversation with Andrew Critch today, in light of a lot of the nonprofit legal work he's been involved with lately. I thought it was worth writing up:

"I've gained a lot of respect for the law in the last few years. Like, a lot of laws make a lot more sense than you'd think. I actually think looking into the IRS codes would be instructive in designing systems to align potentially unfriendly agents."

I said, "Huh. How surprised are you by this? And curious if your brain was doing one particular pattern a few years ago that you can now see as wrong?"

"I think mostly the laws that were promoted to my attention were especially stupid, because that's what was worth telling outrage stories about. Also, in middle school I developed this general hatred for stupid rules that didn't make any sense, and generalized this to 'people in power make stupid rules', or something. But, actually, maybe middle school teachers are just particularly bad at making rules. Most of the IRS tax code has seemed pretty reasonable to me."
38 · Raemon · 2y

More neat/scary things Ray noticed about himself.

I set aside this week to learn about Machine Learning, because it seemed like an important thing to understand. One thing I knew going in is that I had a self-image as a "non-technical person" (or at least, non-technical relative to rationality-folk). I'm the community/ritual guy, who happens to have specialized in web development as my day job, but that's something I did out of necessity rather than a deep love.

So part of the point of this week was to "get over myself, and start being the sort of person who can learn technical things in domains I'm not already familiar with." And that went pretty fine.

As it turned out, after talking to some folk I ended up deciding that re-learning calculus was the right thing to do this week. I'd learned it in college, but not in a way that connected to anything or gave me a sense of its usefulness. And it turned out I had a separate image of myself as a "person who doesn't know calculus", in addition to "not a technical person". This was fairly easy to overcome, since I had already given myself a bunch of space to explore and change this week, and I'd spent the past few months transitioning into being ready for it. But if this had been at an earlier stage of my life, and if I hadn't carved out a week for it, it would have been harder to overcome.

Man. Identities. Keep that shit small, yo.
38 · Raemon · 2y

So there was a drought of content during Christmas break, and now... abruptly... I actually feel like there's too much content on LW. I find myself skimming down past the "new posts" section because it's hard to tell what's good and what's not, and it's a bit of an investment to click and find out. Instead I just read the comments, to find out where the interesting discussion is.

Now, part of that is because the front page makes it easier to read comments than posts. And that's fixable. But I think, ultimately, the deeper issue is with the main unit-of-contribution being The Essay.

A few months ago, mr-hire said (on writing that provokes comments [https://www.lesserwrong.com/posts/GHBLFPDhzeSQHx2eM/writing-that-provokes-comments]):

"Ideas should become comments, comments should become conversations, conversations should become blog posts, blog posts should become books. Test your ideas at every stage to make sure you're writing something that will have an impact."

This seems basically right to me. In addition to comments working as an early proving ground for an idea's merit, comments make it easier to focus on the idea, instead of getting wrapped up in writing something Good™. I notice essays on the front page starting with flowery words and generally trying to justify themselves as essays, when all they actually needed was to be a couple of short paragraphs. Sometimes even a sentence.

So I think it might be better if the default way of contributing to LW was via comments (maybe using something shaped sort of like this feed), which then appear on the front page; and if you end up writing a comment that's basically an essay, you can turn it into an essay later if you want.
32 · Hazard · 2y

Over the past few months I've noticed a very consistent cycle:

1. Notice something fishy about my models.
2. Struggle and strain until I was able to formulate the extra variable/handle needed to develop the model.
3. Re-read an old post from the Sequences and realize "Oh shit, Eliezer wrote a very lucid description of literally this exact same thing."

What's surprising is how much I'm surprised by how much this happens.
30 · Raemon · 2y

So, AFAICT, rational!Animorphs [http://archiveofourown.org/works/5627803/chapters/12963046] is the closest thing CFAR has to publicly available documentation. (The characters do a lot of Focusing and hypothesis generation-and-pruning. Also, I just got to the Circling chapter.)

I don't think I'd have noticed most of it if I wasn't already familiar with the CFAR material, though, so I'm not sure how helpful it is. If someone has an annotated "this chapter includes decent examples of Technique/Skill X, and examples of characters notably failing at Failure Mode Y", that might be handy.

2017

Frontpage Posts
Shortform [Beta]
50 · Raemon · 2y

Something struck me recently as I watched Kubo and Coco, two animated movies that both deal with death, and that highlight music and storytelling as mechanisms by which we can preserve people after they die.

Kubo begins: "Don't blink - if you blink for even an instant, if you miss a single thing, our hero will perish." This is not because there is something "important" that happens quickly that you might miss. Maybe there is, but it's not the point. The point is that Kubo is telling a story about people. Those people are now dead. And insofar as those people are able to be kept alive, it is by preserving as much of their personhood as possible, by remembering as much as possible from their life.

This is generally how I think about death. Cryonics is an attempt at the ultimate form of preserving someone's pattern forever, but in a world pre-cryonics, the best you can reasonably hope for is for people to preserve you so thoroughly in story that a young person from the next generation can hear the story and palpably feel the underlying character, rich with inner life. Can see the person so clearly that he or she comes to live inside them.

Realistically, this means a person degrades with each generation. Their pattern is gradually distorted. Eventually it is forgotten. Maybe this is horrendously unsatisfying; it should be. Stories are not a very high-fidelity storage device. Most of what made the person an agent is gone.

But not necessarily all of it: if you choose to not just remember humorous anecdotes about a person, but to remember what they cared about, you can be a channel by which that person continues to act upon the world. Someone recently pointed this out as a concrete reason to respect the wishes of the dead: as long as there are people enacting that person's will, there is some small way in which they meaningfully still exist.

This is part of how I chose to handle the Solstices that I lead myself: Little Echo [https://humanistculture.bandcamp.com/track/a-lit…
18 · Raemon · 2y

Musings on ideal formatting of posts (prompted by an argument with Ben Pace).

My thoughts:

1) Working memory is important. If a post talks about too many things, then in order for me to respond to the argument or do anything useful with it, I need a way to hold the entire argument in my head.

2) Less Wrong is for thinking. This is a place where I particularly want to read complex arguments, hold them in my head, and form new conclusions or actions based on them, or build upon them.

3) You can expand working memory with visual reference. Having larger monitors or notebooks to jot down thoughts makes it easier to think. The larger font size of LW main posts works against this currently, since there are fewer words on the screen at once, and scrolling around makes it easier to lose your train of thought. (A counterpoint is that the larger font size makes it easier to read in the first place without causing eyestrain.) But regardless of font size:

4) Optimizing a post for re-skimmability makes it easier to refer to. This is why, when I write posts, I make an effort to bold the key points, break things into bullets where applicable, and otherwise shape the post so it's easy to skim. (See Sunset at Noon [https://www.lesserwrong.com/posts/2x7fwbwb35sG8QmEt/sunset-at-noon] for an example.)

Ben's counter: Ben Pace noticed this while reviewing an upcoming post I was working on, and his feeling was "all this bold is making me skim the post instead of reading it." To which all I have to say is "Hmm. Yeah, that seems likely."

I am currently unsure of the relative tradeoffs.

2016

No posts for this year
