All Posts

Sorted by Magic (New & Upvoted)

Thursday, September 19th 2019

Shortform [Beta]
17habryka11h What is the purpose of karma? LessWrong has a karma system, mostly based off of Reddit's karma system, with some improvements and tweaks to it. I've thought a lot about more improvements to it, but one roadblock that I always run into when trying to improve the karma system is that it actually serves a lot of different uses, and changing it in one way often means completely destroying its ability to function in a different way. Let me try to summarize what I think the different purposes of the karma system are: Helping users filter content The most obvious purpose of the karma system is to determine how long a post is displayed on the frontpage, and how much visibility it should get. Being a social reward for good content This aspect of the karma system comes out more when thinking about Facebook "likes". Often when I upvote a post, it is more of a public signal that I value something, with the goal that the author will feel rewarded for putting their effort into writing the relevant content. Creating common knowledge about what is good and bad This aspect of the karma system comes out the most when dealing with debates, though it's present in basically any karma-related interaction. The fact that the karma of a post is visible to everyone helps people establish common knowledge of what the community considers to be broadly good or broadly bad. Seeing an insult downvoted does more than just filter it out of people's feeds; it also makes it so that anyone who stumbles across it learns something about the norms of the community. Being a low-effort way of engaging with the site On LessWrong, Reddit and Facebook, karma is often the simplest action you can take on the site. This means it's usually key for a karma system like that to be extremely simple, and not require complicated decisions, since that would break the basic engagement loop with the site. Problems with alternative karma systems Here are some of the most common alternatives to our current k
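As a concrete illustration of the first purpose (helping users filter content): since the post notes the system is mostly based off of Reddit's, a Reddit-style time-decayed "hot" score is one way karma could translate into frontpage visibility. The sketch below is only an assumed illustration, not LessWrong's actual karma or "Magic" sorting formula.

```python
import math
from datetime import datetime, timezone

# Illustrative sketch only: a Reddit-style "hot" score, since the post says
# LessWrong's karma system is based off of Reddit's. This is NOT the actual
# LessWrong "Magic" sorting formula.
def hot_score(karma: int, posted_at: datetime) -> float:
    order = math.log10(max(abs(karma), 1))            # karma counts logarithmically
    sign = 1 if karma > 0 else -1 if karma < 0 else 0
    seconds = posted_at.timestamp() - 1134028003      # recency bonus (Reddit's epoch)
    return sign * order + seconds / 45000

# A post needs roughly 10x the karma to make up for being ~12.5 hours older.
print(hot_score(50, datetime(2019, 9, 19, tzinfo=timezone.utc)))
```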
13romeostevensit15h A service where a teenager reads something you wrote slowly and sarcastically. The points at which you feel defensive are worthy of further investigation.

Wednesday, September 18th 2019

Shortform [Beta]
6TurnTrout18h Good, original thinking feels present to me - as if mental resources are well-allocated. The thought which prompted this: Sure, if people are asked to solve a problem and say they can't after two seconds, yes - make fun of that a bit. But that two seconds covers more ground than you might think, due to System 1 precomputation. Reacting to a bit of HPMOR here, I noticed something felt off about Harry's reply to the Fred/George-tried-for-two-seconds thing. Having a bit of experience noticing confusion, I did not think "I notice I am confused" (although this can be useful). I did not think "Eliezer probably put thought into this", or "Harry is kinda dumb in certain ways - so what if he's a bit unfair here?". Without resurfacing, or distraction, or wondering if this train of thought is more fun than just reading further, I just thought about the object-level exchange. People need to allocate mental energy wisely; this goes far beyond focusing on important tasks. Your existing mental skillsets already optimize and auto-pilot certain mental motions for you, so you should allocate less deliberation to them. In this case, the confusion-noticing module was honed; by not worrying about how well I noticed confusion, I was able to quickly have an original thought. When thought processes derail or brainstorming sessions bear no fruit, inappropriate allocation may be to blame. For example, if you're anxious, you're interrupting the actual thoughts with "what-if"s. To contrast, non-present thinking feels like a controller directing thoughts to go from here to there: do this and then, check that, come up for air over and over... Present thinking is a stream of uninterrupted strikes, the train of thought chugging along without self-consciousness. Moving, instead of thinking about moving while moving. I don't know if I've nailed down the thing I'm trying to point at yet.
1hunterglenn21h When you are in a situation, there are too many true facts about that situation for you to think about all of them at the same time. Whether you do it on purpose or not, you will inevitably end up thinking about some truths more than others. From a truth standpoint, this is fine, so long as you stick to true statements. From a truth perspective, you could also change which true facts you are thinking about, without sacrificing any truth. Truth doesn't care. But happiness cares, and utility cares. Which truths you happen to focus on may not affect how true your thoughts are, but it does affect your psychological state. And your psychological state affects your power over your situation, to make it better or worse, for yourself, and for everyone else. There's a chain of cause and effect, starting with what truth you hold in your mind, which then affects your emotional and psychological states, which then affects your actions, which, depending on if they're "good" or "bad," end up affecting your life. I harp on this because some people keep thinking thoughts that ruin their mood, their choices, and their lives, but they refuse to just STOP thinking those ruinous thoughts, and they justify their refusal on the truth of the thoughts. So let's say you're helping to pass food out to stranded orphans, and it occurs to you that this won't matter in a thousand years. Then it occurs to you that there are so many orphans that this won't make any appreciable difference. It occurs to you that you'll go about your life in the world afterwards, seeing and hearing many things, and probably none of those things will be any better as a result of what you're doing for these orphans. Not better, not even different. And what you're doing isn't a big enough change to be noticeable, no matter how hard you look. So, all of these are factually accurate, true ideas, let's say. Fine. Now what happens to your psychological and emotional state? You perceive the choices before you as suddenl
1sayan1d What gadgets have improved your productivity? For example, I started using a stylus a few days ago and realized it can be a great tool for a lot of things!

Tuesday, September 17th 2019

Personal Blogposts
Shortform [Beta]
6Spiracular3d One of my favorite little tidbits from working on this post [https://www.lesswrong.com/posts/ygFc4caQ6Nws62dSW/bioinfohazards]: realizing that idea inoculation and the Streisand effect are opposite sides of the same heuristic.
5FactorialCode3d Meta-philosophy hypothesis: Philosophy is the process of reifying fuzzy concepts that humans use. By "fuzzy concepts" I mean things where we can say "I know it when I see it" but we might not be able to describe what "it" is. Examples that I believe support the hypothesis: * This shortform is about the philosophy of "philosophy" and this hypothesis is an attempt at an explanation of what we mean by "philosophy". * In epistemology, Bayesian epistemology is a hypothesis that explains the process of learning. * In ethics, an ethical theory attempts to make explicit our moral intuitions. * A clear explanation of consciousness and qualia would be considered philosophical progress.
3Spiracular3d Bubbles in Thingspace It occurred to me recently that, by analogy with ML, definitions might occasionally be more like "boundaries and scoring-algorithms in thingspace" than clusters per se (messier! no central example! no guaranteed contiguity!). Given the need to coordinate around definitions, most of them are going to have a simple and somewhat-meaningful center... but for some words, I suspect there are dislocated "bubbles" that use the same word for a completely different concept. Homophones are one of the clearest examples.
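A toy sketch of the distinction (my own illustration, not from the shortform above): the cluster view asks how close a point is to a central example, while the boundary/scoring view only asks whether an arbitrary score crosses a threshold, so the accepted region can be several disconnected "bubbles" with no central example.

```python
import numpy as np

centroid = np.array([0.0, 0.0])

def cluster_member(x, radius=1.0):
    # Cluster view: membership is closeness to one central example.
    return np.linalg.norm(np.asarray(x) - centroid) < radius

def boundary_member(x, threshold=1.5):
    # Boundary/scoring view: membership is an arbitrary score crossing a
    # threshold; the accepted region can be several disconnected "bubbles".
    x = np.asarray(x)
    score = np.sin(3 * x[0]) + np.cos(2 * x[1])  # stand-in for a learned scorer
    return score > threshold

print(cluster_member([0.1, 0.2]), boundary_member([0.5, 0.1]))
```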

Monday, September 16th 2019

Personal Blogposts
2 [Event] SSC Meetups Everywhere (645 South Clark Street, Chicago), Sep 28th
Shortform [Beta]
15mr-hire3d Been mulling over doing a podcast in which each episode is based on acquiring a particular skillset (self-love, focus, making good investments) instead of just interviewing a particular person. I interview a few people who have a particular skill (e.g. self-love, focus, creating cash flow businesses), and model the cognitive strategies that are common between them. Then interview a few people who struggle a lot with that skill, and model the cognitive strategies that are common between them. Finally, model a few people who used to be bad at the skill but are now good, and model the strategies that are common for them to make the switch. The episode is cut to tell a narrative of what skills are to be acquired, what beliefs/attitudes need to be let go of and acquired, and the process to acquire them, rather than focusing on interviewing a particular person. If there's enough interest, I'll do a pilot episode. Comment with what skillset you'd love to see a pilot episode on. Upvote if you'd have 50% or more chance of listening to the first episode.
10Matthew Barnett3d There's a phenomenon I currently hypothesize to exist where direct attacks on the problem of AI alignment are criticized much more often than indirect attacks. If this phenomenon exists, it could be advantageous to the field in the sense that it encourages thinking deeply about the problem before proposing solutions. But it could also be bad because it disincentivizes work on direct attacks on the problem (if one is criticism averse and would prefer their work be seen as useful). I have arrived at this hypothesis from my observations: I have watched people propose solutions only to be met with immediate and forceful criticism from others, while other people proposing non-solutions and indirect analyses are given little criticism at all. If this hypothesis is true, I suggest it is partly or mostly because direct attacks on the problem are easier to defeat via argument, since their assumptions are made plain. If this is so, I consider it to be a potential hindrance on thought, since direct attacks are often the type of thing that leads to the most deconfusion -- not because the direct attack actually worked, but because in explaining how it failed, we learned what definitely doesn't work.
6TurnTrout3d I seem to differently discount different parts of what I want. For example, I'm somewhat willing to postpone fun to low-probability high-fun futures, whereas I'm not willing to do the same with romance.

Sunday, September 15th 2019

Shortform [Beta]
5An1lam4d Epistemic status: Thinking out loud. Introducing the Question: Scientific puzzle I notice I'm quite confused about: what's going on with the relationship between thinking and the brain's energy consumption? On one hand, I'd always been told that thinking harder sadly doesn't burn more energy than normal activity. I believed that and had even come up with a plausible story about how evolution optimizes for genetic fitness not intelligence, and introspective access is pretty bad as it is, so it's not that surprising that we can't crank up our brain's energy consumption to think harder. This seemed to jibe with the notion that our brain puts way more computational resources towards perceiving and responding to perception than abstract thinking. It also fit well with recent results calling ego depletion into question, and with the framework in which mental energy depletion is the result of a neural opportunity cost calculation [https://www.lesswrong.com/posts/9SSXcQ92ZJHgdqzDj/link-why-self-control-seems-but-may-not-be-limited] . Going even further, studies like this one [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4019873/] left me with the impression that experts tended to require less energy to accomplish the same mental tasks as novices. Again, this seemed plausible under the assumption that experts' brains developed some sort of specialized modules over the thousands of hours of practice they'd put in. I still believe that thinking harder doesn't use more energy, but I'm now much less certain about the reasons I'd previously given for this. Chess Players' Energy Consumption: This recent ESPN (of all places) article [https://www.espn.com/espn/story/_/id/27593253/why-grandmasters-magnus-carlsen-fabiano-caruana-lose-weight-playing-chess] about chess players' energy consumption during tournaments has me questioning this story. The two main points of the article are: 1. Chess players burn a lot of energy during tournaments, potentially on the order of 6000 ca
4Ruby4d Converting this from a Facebook comment to LW Shortform. A friend complains about recruiters who send repeated emails saying things like "just bumping this to the top of your inbox" when they have no right to be trying to prioritize their emails over everything else my friend might be receiving from friends, travel plans, etc. The truth is they're simply paid to spam. Some discussion of repeated messaging behavior ensued. These are my thoughts: I feel conflicted about repeatedly messaging people. All the following being factors in this conflict: * Repeatedly messaging can make you the asshole that gets through someone's unfortunate asshole filter [https://siderea.livejournal.com/1230660.html]. * There's an angle from which repeatedly, manually messaging people is a costly signal bid that their response would be valuable to you. Admittedly this might not filter in the desired ways. * I know that many people are in fact disorganized and lose emails or otherwise don't have systems for getting back to you, such that failure to get back to you doesn't mean they didn't want to. * There are other people who have extremely good systems. I'm always impressed by the super busy, super well-known people who get back to you reliably after three weeks. Systems. I don't always know where someone falls between "has no systems, relies on other people to message repeatedly" vs "has impeccable systems but due to volume of emails will take two weeks." * The overall incentives are such that most people probably shouldn't generally reveal which they are. * Sometimes the only way to get things done is to bug people. And I hate it. I hate nagging, but given other people's unreliability, it's either you bugging them or a good chance of not getting some important thing. * A wise, well-respected, business-experienced rationalist told me many years ago that if you want somethi
3Evan Rysdam4d I would appreciate an option to hide the number of votes that posts have. Maybe not hide entirely, but set them to only display at the bottom of a post, and not at the top nor on the front page. With the way votes are currently displayed, I think I'm getting biased for/against certain posts before I even read them, just based on the number of votes they have.
1crabman4d Many biohacking guides suggest using melatonin. Does liquid melatonin [https://iherb.com/pr/Now-Foods-Liquid-Melatonin-2-fl-oz-59-ml/18345] spoil under high temperature if put in tea (95 degrees Celsius)? More general question: how do I even find answers to questions like this one?

Saturday, September 14th 2019

Personal Blogposts
2 [Event] SSC Meetups Everywhere: Rochester, NY (200 East Avenue, Rochester), Sep 21st
2 [Event] Dublin SSC/LW/EA "Meetups Everywhere" Meetup (Stephen Court, GF1, Saint Stephen's Green, Dublin 2), Sep 21st
0 [Event] SSC Meetups Everywhere: Albany, NY (260 Lark Street, Albany), Sep 21st
0 [Event] SSC Meetups Everywhere: Auckland, New Zealand (184 Karangahape Road, Auckland CBD, Auckland), Sep 22nd
0 [Event] SSC Meetups Everywhere: Baltimore, MD (1000 Hilltop Circle, Baltimore), Sep 29th
0 [Event] SSC Meetups Everywhere: Berkeley, CA (2412 Martin Luther King Junior Way, Berkeley), Oct 11th
Shortform [Beta]
23Ruby6d Selected Aphorisms from Francis Bacon's Novum Organum I'm currently working to format Francis Bacon's Novum Organum [https://en.wikipedia.org/wiki/Novum_Organum] as a LessWrong sequence. It's a moderate-sized project as I have to work through the entire work myself, and write an introduction which does Novum Organum justice and explains the novel move of taking an existing work and posting it on LessWrong (short answer: NovOrg is some serious hardcore rationality and contains central tenets of the LW foundational philosophy notwithstanding being published back in 1620, not to mention that Bacon and his works are credited with launching the modern Scientific Revolution). While I'm still working on this, I want to go ahead and share some of my favorite aphorisms from it so far: 3. . . . The only way to command reality is to obey it . . . 9. Nearly all the things that go wrong in the sciences have a single cause and root, namely: while wrongly admiring and praising the powers of the human mind, we don’t look for true helps for it. Bacon sees the unaided human mind as entirely inadequate for scientific progress. He sees the way forward for scientific progress as constructing tools/infrastructure/methodology to help the human mind think/reason/do science. 10. Nature is much subtler than are our senses and intellect; so that all those elegant meditations, theorizings and defensive moves that men indulge in are crazy—except that no-one pays attention to them. [Bacon often uses a word meaning ‘subtle’ in the sense of ‘fine-grained, delicately complex’; no one current English word will serve.] 24. There’s no way that axioms •established by argumentation could help us in the discovery of new things, because the subtlety of nature is many times greater than the subtlety of argument. But axioms •abstracted from particulars in the proper way often herald the discovery of new particulars and point them out, thereby returning the sciences to their active status. Bacon repeat
15crabman5d A competition on solving math problems via AI is coming. https://imo-grand-challenge.github.io/ [https://imo-grand-challenge.github.io/] * The problems are from the International Mathematical Olympiad (IMO) * They want to formalize all the problems in Lean (theorem prover) language. They haven't figured out how to do that, e.g. how to formalize problems of the form "determine the set of objects satisfying the given property", as can be seen in https://github.com/IMO-grand-challenge/formal-encoding/blob/master/design/determine.lean [https://github.com/IMO-grand-challenge/formal-encoding/blob/master/design/determine.lean] * A contestant must submit a computer program that will take a problem's description as input and output a solution and its proof in Lean. I would guess that this is partly a way to promote Lean. I think it would be interesting to pose questions about this on Metaculus.
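For readers who haven't seen Lean, here is a trivial toy example of what a formalized statement and machine-checked proof look like; it is my own illustration, not an actual IMO problem or anything from the linked repository.

```lean
-- Toy Lean 3 example: state a claim and give a machine-checked proof.
-- "For every natural number n, n + 0 = n." (holds definitionally, so rfl works)
theorem toy_statement (n : nat) : n + 0 = n :=
rfl

-- The "determine the set of objects" problems mentioned above are harder to
-- encode, because the answer itself (not just the proof) must come from the
-- contestant's program.
```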
6TekhneMakre5d Referential distance or referential displacement. Like inferential distance [https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances] , but for reference instead of inference. Inferential distance is when one perspective holds conclusions that require many steps of inference to justify, and the other perspective hasn't gone through those steps of inference (perhaps because the inferences are unsound). Referential distance is when one perspective has a term that points to something, and that term is most easily defined using many other terms, and the other perspective doesn't know what those terms are supposed to point to. Referential distance, unlike normal distance, is directional: you are distant from an ancient Greek yarn spinner, who knows what a spindle and whorl are, and perhaps you don't; and separately, you know what a computer is, and they don't. (So really RD should be called referential displacement, and likewise for inferential displacement. It's the governor's job to rectify this.) Referential distance breaks up into translational distance (we have yet to match up the terms in our respective idiolects), and conceptual distance (you have concepts for things that I don't have concepts for, and/or we carve things up differently). Be careful not to mistakenly take a referential distance to be a translational distance when really it's conceptual; much opportunity to grow is lost in this way.
5Raemon6d I know I'll go to programmer hell for asking this... but... does anyone have a link to a github repo that tried really hard to use jQuery to build their entire website, investing effort into doing some sort of weird 'jQuery based components' thing for maintainable, scalable development? People tell me this can't be done without turning into terrifying spaghetti code but I dunno I feel sort of like the guy in this xkcd [https://xkcd.com/319/] and I just want to know for sure.

Friday, September 13th 2019

Shortform [Beta]
17habryka6d Thoughts on impact measures and making AI traps I was chatting with TurnTrout today about impact measures, and ended up making some points that I think are good to write up more generally. One of the primary reasons why I am usually unexcited about impact measures is that I have a sense that they often "push the confusion into a corner" in a way that actually makes solving the problem harder. As a concrete example, I think a bunch of naive impact regularization metrics basically end up shunting the problem of "get an AI to do what we want" into the problem of "prevent the agent from interfering with other actors in the system". The second one sounds easier, but mostly just turns out to also require a coherent concept and reference of human preferences to resolve, and you got very little from pushing the problem around that way, and sometimes get a false sense of security because the problem appears to be solved in some of the toy problems you constructed. I am definitely concerned that TurnTrout's AUP does the same, just in a more complicated way, but am a bit more optimistic than that, mostly because I do have a sense that in the AUP case there is actually some meaningful reduction going on, though I am unsure how much. In the context of thinking about impact measures, I've also recently been thinking about the degree to which "trap-thinking" is actually useful for AI Alignment research. I think Eliezer was right in pointing out that a lot of people, when first considering the problem of unaligned AI, end up proposing some kind of simple solution like "just make it into an oracle" and then consider the problem solved. I think he is right that it is extremely dangerous to consider the problem solved after solutions of this type, but it isn't obvious that there isn't some good work that can be done that is born out of the frame of "how can I trap the AI and make it marginally harder for it to be dangerous, basically pretending it's just a slightly smarter human
9Matthew Barnett7d I agree with Wei Dai that we should use our real names [https://www.lesswrong.com/posts/GEHg5T9tNbJYTdZwb/please-use-real-names-especially-for-alignment-forum] for online forums, including LessWrong. I want to briefly list some benefits of using my real name: * It means that people can easily recognize me across websites, for example from Facebook and LessWrong simultaneously. * Over time my real name has been stable whereas my usernames have changed quite a bit over the years. For some very old accounts, such as those I created 10 years ago, this means that I can't remember my account name. Using my real name would have averted this situation. * It motivates me to put more effort into my posts, since I don't have any disinhibition from being anonymous. * It often looks more formal than a silly username, and that might make people take my posts more seriously than they otherwise would have. * Similar to what Wei Dai said, it makes it easier for people to recognize me in person, since they don't have to memorize a mapping from usernames to real names in their heads. That said, there are some significant downsides, and I sympathize with people who don't want to use their real names. * It makes it much easier for people to dox you. There are some very bad ways that this can manifest. * If you say something stupid, your reputation is now directly on the line. Some people change accounts every few years, as they don't want to be associated with the stupid person they were a few years ago. * Sometimes disinhibition from being anonymous is a good way to spur creativity. I know that I was a lot less careful in my previous non-real-name accounts, and my writing style was different -- perhaps in a way that made my writing better. * Your real name might sound boring, whereas your online username can sound awesome.
9Hasturtimesthree7d Most people in the rationality community are more likely to generate correct conclusions than I am, and are in general better at making decisions. Why is that the case? Because they have more training data, and are in general more competent than I am. They actually understand the substrate on which they make decisions, and what is likely to happen, and therefore have reason to trust themselves based on their past track record, while I do not. Is the solution therefore just "git gud"? This sounds unsatisfactory; it compresses competence to a single nebulous attribute rather than recommending concrete steps. It is possible that there are in fact generalizable decisionmaking algorithms/heuristics that I am unaware of, that I can use to actually generate decisions that have good outcomes. It might then be possible that I haven't had enough training? When was the last time I actually made a decision unguided by the epistemic-modesty-shaped thing that I otherwise use, because relying on my own thought processes is known to have bad outcomes and mess things up? In which case the solution might be to have training in a low-stakes environment, where I can mess up without consequence, and learn from that. Problem: these are hard to generate in a way that carries over cross-domain. If I trust my decision process about which tech to buy in Endless Legend, that says nothing about my decision process about what to do when I graduate. Endless Legend is simple, and the world is complicated. I can therefore fully understand: "this is the best tech to research: I need to convert a lot of villages, so I need influence to spend on that, and this tech generates a lot of it". While figuring out what path to take such that the world benefits the most requires understanding what the world needs, an unsolved problem in itself, and the various effects each path is likely to have. Or even on the small scale, where to put a particular object in the REACH that doesn't seem to have an obv
1JustMaier6d I've recently started to try and participate more in online discussions like LessWrong (technically this is my first post here). However, in doing so, I've realized what feels like a gaping hole in digital identity. No one knows who I am, and how could they? They see my name, my photo, my short bio, and they have no way of knowing the complex person behind it all. In my experience, when I interact with others, I feel like I am often misunderstood because the people I'm interacting with don't have adequate context of me to understand where my perspective is coming from. This plays out positively and negatively. Ultimately it causes people to unwittingly apply bias because they don't have the information they need to make sense of why I'm saying what I'm saying and how who I am plays a factor in what I'm trying to communicate. It seems to me that currently, the most effective way to establish a digital identity is by surrounding yourself with individuals with similar affinities and building social networks that establish your identity within those fields. This seems like a complicated and inefficient process and I'm curious to hear if I'm way off base and what others see as ways to establish a powerful digital identity.

Thursday, September 12th 2019

Shortform [Beta]
22jimrandomh8d Eliezer has written about the notion of security mindset [https://www.lesswrong.com/posts/8gqrbnW758qjHFTrH/security-mindset-and-ordinary-paranoia] , and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it. An1lam's recent shortform post [https://www.lesswrong.com/posts/xDWGELFkyKdBpySAf/an1lam-s-short-form-feed#jBwdmYjPCkSCDngX6] talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that the reason for this is that engineer-types get to actually write software that might have security holes, and have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least of the type Eliezer described. My hypothesis is that to acquire security mindset, you have to: * Practice optimizing from a red team/attacker perspective; * Practice optimizing from a defender perspective; and * Practice modeling the interplay between those two perspectives. So a software engineer can acquire security mindset because they practice writing software which they don't want to have vulnerabilities, they practice searching for vulnerabilities (usually as an auditor simulating an attacker rather than as an actual attacker, but the cognitive algorithm is the same), and they practice going meta when they're designing the architecture of new projects. This explains why security mindset is very common among experienced senior engineers (who have done each of the three many times), and rare among junior engineers (who haven't yet)
7hereisonehand7d I keep seeing these articles about the introduction of artificial intelligence/data science to football and basketball strategy. What's crazy to me is that it's happening now instead of much, much earlier. The book Moneyball was published in 2003 (the movie came out in 2011), spreading the story of how use of statistics changed the game when it came to every aspect of managing a baseball team. After reading it, I and many others thought to ourselves "this would be cool to do in other sports" - using data would be interesting in every area of every sport (drafting, play calling, better coaching, clock management, etc.). But I guess I assumed - if I thought of it, why wouldn't other people? It's kind of a wild example of the idea that "if something works a little, you should do more of it and see if it works a lot, and keep doing that until you see evidence that it's running out of incremental benefit." My assumption that the "Moneyball" space was saturated back in 2011 was completely off given that in the time between 2011 and now, one could have trained themselves from scratch in the relevant data science methods and pushed for such jobs (my intuition is that 8 years of training could get you there). So, it's not even a "right place, right time" story given the timeline. It's just - when you saw the obvious trend, did you assume that everyone else was already thinking about it, or did you jump in yourself?
6Spiracular7d While I could rattle off the benefits of "delegating" or "ops people", I don't think I've seen a highly-discrete TAP + flowchart for realizing when you're at the point where you should ask yourself "Have my easy-to-delegate annoyances added up to enough that I should hire a full-time ops person? (or more)." Many people whose time is valuable seem likely to put off making this call until they reach the point where it's glaringly obvious. Proposing an easy TAP-like decision-boundary seems like a potentially high-value post? Not my area of specialty, though.
6elityre7d New (image) post: My strategic picture of the work that needs to be done [https://musingsandroughdrafts.wordpress.com/2019/09/12/my-strategic-picture-of-the-work-that-needs-to-be-done/]
4TekhneMakre7d Mentally crowding out possibilities gets you stuck in local maxima. To glean the benefits of temporarily inhabiting local maxima while garnering the benefits of even better points that can be discovered via higher-energy search, acquire the ability to prevent {the idea of existing forms of X} from mentally crowding out {the possibility of, specific visualizations of, and motivation to create} better forms of X.

Wednesday, September 11th 2019

Shortform [Beta]
5TekhneMakre9d There's that dumb cliche about a lost item you're searching for: it's always in the last place you look. A pattern I've noticed in me: I'll want to retrieve an item, so I'll check a few places I think it's likely to be. When I don't find it I check a few other places that are less likely but plausible. These checks are all quick, because I'm not frustrated and I expect the item to be easy enough to find. Then I start getting frustrated and start looking in implausible places, and I search those places thoroughly, as I'm now expecting to have to carefully eliminate areas. Eventually I get really frustrated, and in desperation I start meticulously rechecking places I had been treating as eliminated, at which point I quickly find the item in one of the first few places I originally looked. I suspect this pattern metaphorizes well: it's always in the first place you looked, but you had to look a bit harder.
4Chris_Leong8d Book Review: The Rosie Project: Plot summary: After a disastrous series of dates, autistic genetics professor Don Tillman decides that it’d be easier to just create a survey to eliminate all of the women who would be unsuitable for him. Soon after, he meets a barmaid called Rosie who is looking for help with finding out who her father is. Don agrees to help her, but over the course of the project Don finds himself increasingly attracted to her, even though the survey suggests that she is completely unsuitable. The story is narrated in Don’s voice. He tells us all about his social mishaps, while also providing some extremely straight-shooting observations on society. Should I read this?: If you’re on the fence, I recommend listening to a couple of minutes, as the tone is remarkably consistent throughout, but without becoming stale. My thoughts: I found it to be very humorous, but without making fun of Don. We hear the story from his perspective and he manages to be a very sympathetic character. The romance manages to be relatively believable since Don manages to establish himself as having many attractive qualities despite his limited social skills. However, I couldn’t believe that he’d think of Rosie as “the most beautiful woman in the world”; that kind of romantic idealisation is just too inconsistent with his character. His ability to learn skills quickly also stretched credibility, but it felt more believable after he dramatically failed during one instance. I felt that Don’s character development was solid; I did think that he’d struggle more to change his schedule after keeping it rigid for so long, but that wasn’t a major issue for me. I appreciated that by the end he had made significant growth (less strict on his expectations for a partner, not sticking so rigidly to a schedule, being more accommodating of other people’s faults), but he was still largely himself.
1TekhneMakre8d For any thing X, the group concept of X is an umbrella term for ideas regarding X that are shared by members of some (implicit) group. Two kinds of group concepts: the group mutual concept of X (mutual concept for short; mutual as in "mutual knowledge") is the intersection of all the group members' beliefs about X; so if everyone knows some thing Y about X, then Y is part of the mutual concept, but Y is less a part of the mutual concept to the extent that some members don't know or disagree with Y. The group common knowledge concept of X (CK concept for short) is all the beliefs about X that are in common knowledge for the group. So Y is in the CK concept of X to the extent that group members will, when discussing X, correctly expect each other to: have Y in mind; easily understand implications of Y; coordinate action based on those implications of Y; and expect other members to do likewise.
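A minimal sketch of the "mutual concept" half of this definition (my own illustration; the members and beliefs are made up): treat each member's beliefs about X as a set and intersect them. The CK concept can't be computed this way, since common knowledge also involves beliefs about each other's beliefs.

```python
# Mutual concept of X: the beliefs about X shared by every member of the group.
beliefs_about_x = {
    "alice": {"X is red", "X is round", "X is heavy"},
    "bob":   {"X is red", "X is round"},
    "carol": {"X is red", "X is cheap"},
}

mutual_concept = set.intersection(*beliefs_about_x.values())
print(mutual_concept)  # {'X is red'}: the only belief all three share
```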

Tuesday, September 10th 2019

Personal Blogposts
4 [Event] San Francisco Meetup: Board Games (170 Hawthorne St, San Francisco, CA 94107, USA), Sep 17th
1 [Event] SSC Atlanta October Meetup (720 Moreland Avenue Southeast, Atlanta), Oct 12th
Shortform [Beta]
29jp9d Do Anki while Weightlifting Many rationalists appear to be interested in weightlifting. I certainly have enjoyed having a gym habit. I have a recommendation for those who do: Try studying Anki cards [https://twitter.com/michael_nielsen/status/957763229454774272?lang=en] while resting between weightlifting sets. The upside is high. Building the habit of studying Anki cards is hard, and if doing it at the gym causes it to stick, you can now remember things by choice not chance. And the cost is pretty low. I rest for 90 seconds between sets, and do about 20 sets when I go to the gym. Assuming I get a minute in once the overheads are accounted for, that gives me 20 minutes of studying. I go through about 4 cards per minute, so I could do 80 cards per visit to the gym. In practice I spend only ~5 minutes studying per visit, because I don't have that many cards. I'm not too tired to concentrate. In fact, the adrenaline high makes me happy to have something mentally active to do. Probably because of this, it doesn't at all decrease my desire to go to the gym. I find I can add simple cards to my Anki deck at the gym, although the mobile app does make it slow. Give it a try! It's cheap to experiment and the value of a positive result is high.
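The arithmetic in the post, spelled out with the numbers given there:

```python
# Quick check of the numbers in the post above.
sets_per_visit = 20
usable_minutes_per_rest = 1    # ~1 usable minute of each 90 s rest, per the post
cards_per_minute = 4

study_minutes = sets_per_visit * usable_minutes_per_rest
cards_per_visit = study_minutes * cards_per_minute
print(study_minutes, cards_per_visit)  # 20 minutes, 80 cards
```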
11romeostevensit9d A short heuristic for self-inquiry: * write down things you think are true about important areas of your life * produce counter examples * write down your defenses/refutations of those counter examples * come back later when you are less defensive and review whether your defenses were reasonable * if not, why not? whence the motivated reasoning? what is being protected from harm?
5strangepoop9d Is metarationality about (really tearing open) the twelfth virtue? It seems like it says "the map you have of map-making is not the territory of map-making", and gets into how to respond to it fluidly, with a necessarily nebulous strategy of applying the virtue of the Void. (this is also why it always felt like metarationality seems to only provide comments where Eliezer would've just given you the code) The parts that don't quite seem to follow are where meaning-making and epistemology collide. I can try to see it as "all models are false, some models are useful" but I'm not sure if that's the right perspective.
5Adam Scholl9d TIL that (according to this study [https://www.mayoclinicproceedings.org/article/S0025-6196(11)61392-X/fulltext], at least) adenovirus serotype 36 is present in 30% of obese humans, but only 11% of non-obese humans. The virus appears to cause obesity in chickens, mice, rats and monkeys. It may work (paper [https://sci-hub.tw/https://www.ncbi.nlm.nih.gov/pubmed/24788832], pop summary [https://medium.com/@subC0smos/viruses-make-your-fat-cells-greedy-7028a14438f0]) by binding to and permanently activating the PI3K enzyme, causing it to activate the insulin signaling pathway even when insulin isn't present. Previous discussion [https://www.lesswrong.com/posts/YbT9yEdrZJLGAdsGw/rationality-case-study-ad-36#6Hv6Z7Cgyc9WyFB4R] on LessWrong.
4romeostevensit9d When young, you mostly play within others' reward structures. Many choose which structure to play in based on maximum reward. This is probably a mistake. You want to optimize for the opportunity to learn how to construct reward structures.
