All Posts

Sorted by Recent Comments

Sunday, September 22nd 2019

Shortform [Beta]
3TekhneMakre17h Self-deprecation good, absolution through self-flagellation bad. When a piece of software is deprecated, it often still sticks around, so that programs that use it will still work, and so that people can see how things used to work to make sense of history. The software is, nonetheless, marked as deprecated, so that other programmers don't use it in new programs, they expect it to change, and they don't view it as representing the current intent of the designers. If you want to deprecate parts of your mind, it makes sense to keep them around while you figure out the better thing, and it makes sense to tell yourself and other people that it's deprecated; self-deprecation is good when it's a tool of growth. Contrast: self-flagellation is when you criticize yourself with the intent of telling other people that you know you are bad and you are trying to get better, so that they won't view you as responsible for your behavior and won't punish you. Sometimes it makes sense to tell people that you know you did something wrong. But it doesn't make sense to whip yourself while you're alone. Whether or not absolution is something you actually want, real absolution comes from growth, not self-flagellation.
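To make the software side of the metaphor concrete, here is a minimal Python sketch (my own illustration, not part of the original shortform; the function names are made up) of what "deprecated but still around" looks like: the old entry point keeps working, warns its callers, and points to the replacement.

```python
# An illustrative sketch (mine, not the author's; function names are made up)
# of the software sense of "deprecated": the old function sticks around so
# existing callers keep working, but it is marked so nobody builds on it and
# everyone can see it no longer reflects the designers' current intent.
import warnings

def new_way(x):
    """The current intent of the designers."""
    return x * 2

def old_way(x):
    """Deprecated: kept for old callers and for the historical record."""
    warnings.warn("old_way() is deprecated; use new_way() instead",
                  DeprecationWarning, stacklevel=2)
    return new_way(x)

print(old_way(3))  # still works (prints 6), but emits a DeprecationWarning
```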
2Chris_Leong5h Book Review: Awaken the Giant Within Audiobook by Tony Robbins. First things first, the audiobook isn't the full book or anything close to it. The standard book is 544 pages, while the audiobook is a little over an hour and a half. The fact that it was abridged really wasn't obvious. We can split what he offers into two main categories: motivational speaking and his system itself. The motivational aspect of his speaking is very subjective, so I'll leave it to you to evaluate yourself. You can find videos of his on YouTube and you should know within a few minutes whether you like his style. Instead I'll focus on reviewing his system. The first key aspect Robbins focuses on is what he calls neuro-associations; that is, what experiences we link pleasure and pain to. While we may be able to maintain a habit using willpower in the short-term, Robbins believes that in order to maintain it over the long term we need to change our neuro-associations to link pleasure to actions that are good for us and pain to actions that are bad for us. He argues that we can attach positive or negative neuro-associations to an action by making the advantages or disadvantages as salient as possible. The images on packs of cigarettes are a good example of that principle in action, as would be looking at the scans of people who have lung cancer. In addition, we can reward ourselves for success (though he doesn't discuss the possibility of punishing yourself for failure). This seems like a plausible method for effecting change and one that seems worth experimenting with, although I've never experienced much motivation from rewarding myself as it doesn't really feel like the action is connected to the reward. The second key aspect of his system is to draw a distinction between decisions and preferences. Most of the time when we say that we've decided to do something, such as going to the gym, we're really just saying that we would prefer that to happen. We haven't really decided that we WILL do wh
2TurnTrout14h How does representation interact with consciousness? Suppose you're reasoning about the universe via a partially observable Markov decision process, and that your model is incredibly detailed and accurate. Further suppose you represent states as numbers, as their numeric labels. To get a handle on what I mean, consider the game of Pac-Man, which can be represented as a finite, deterministic, fully-observable MDP. Think about all possible game screens you can observe, and number them. Now get rid of the game screens. From the perspective of reinforcement learning, you haven't lost anything - all policies yield the same return they did before, the transitions/rules of the game haven't changed - in fact, there's a pretty strong isomorphism I can show between these two MDPs. All you've done is changed the labels - representation means practically nothing to the mathematical object of the MDP, although many algorithms (e.g., deep RL algorithms) should be able to exploit regularities in the representation to reduce sample complexity. So what does this mean? If you model the world as a partially observable MDP whose states are single numbers... can you still commit mindcrime via your deliberations? Is the structure of the POMDP in your head somehow sufficient for consciousness to be accounted for (like how the theorems of complexity theory govern computers both of flesh and of silicon)? I'm confused.
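A minimal sketch of the relabeling argument (my own illustration, not from the shortform; all names are made up): a bijective relabeling of a finite deterministic MDP's states, e.g. replacing "game screens" with integers, leaves every policy's return unchanged.

```python
# A minimal sketch (my own illustration, not from the shortform; all names
# are made up): relabelling the states of a finite, deterministic MDP with
# arbitrary numbers leaves every policy's return unchanged, which is the
# sense in which "representation means practically nothing" to the MDP.

def evaluate(policy, transitions, rewards, start, horizon):
    """Follow `policy` for `horizon` steps and sum the rewards.
    transitions[(s, a)] -> next state, rewards[(s, a)] -> reward."""
    total, s = 0.0, start
    for _ in range(horizon):
        a = policy[s]
        total += rewards[(s, a)]
        s = transitions[(s, a)]
    return total

def relabel(transitions, rewards, policy, start, label):
    """Apply a bijection `label` (e.g. game screen -> integer) to all states."""
    t = {(label[s], a): label[s2] for (s, a), s2 in transitions.items()}
    r = {(label[s], a): rew for (s, a), rew in rewards.items()}
    p = {label[s]: a for s, a in policy.items()}
    return t, r, p, label[start]

# Tiny two-state "game": the states could be rendered screens or bare numbers.
T = {("screen_A", "go"): "screen_B", ("screen_B", "go"): "screen_A"}
R = {("screen_A", "go"): 1.0, ("screen_B", "go"): 0.0}
pi = {"screen_A": "go", "screen_B": "go"}

label = {"screen_A": 0, "screen_B": 1}           # throw the screens away
T2, R2, pi2, s0 = relabel(T, R, pi, "screen_A", label)

# Same return before and after relabelling: the screens carried no extra info.
assert evaluate(pi, T, R, "screen_A", 10) == evaluate(pi2, T2, R2, s0, 10)
```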
1TekhneMakre9h My cause, not my tool. Causes are what we care about together, and models are tools for achieving the aims of our cause. In a world filling up with information cascades and emotive cascades [https://www.edge.org/response-detail/27181], it is worth maintaining separate mental threads for your cause and for the models (opinions, facts, judgements about people and policies) that you receive from other people who share your cause. Don't confuse the sense of "these people are my tribe, they care about what I care about, I can work with them" with their models being correct. Don't confuse (1) sharing models held by others in your cause, with (2) implementing in yourself a model that can update on stuff you see and direct your actions to achieve the outcomes you want. Don't confuse your desire to achieve the aims of the cause, with your agreeing with the opinions of others in the cause. When you're operating with something that is functioning as an ideology but not an effective model, remind yourself: My cause, not my tool.

Saturday, September 21st 2019

Shortform [Beta]
3TekhneMakre1d Vassar's Razor: preferences are about the future, not the present or the past. It makes sense to ask how past-you would've wanted the world to end up looking in the present, but it doesn't make sense to want the present world, which already exists, to be a certain way; it just is how it is.

Friday, September 20th 2019

Shortform [Beta]
12Jacobian2d Cross-tweeted [https://twitter.com/yashkaf/status/1174946434585546752]. TL;DR: In Utopia [https://www.nickbostrom.com/utopia.html], no one is Catholic. Politics, business, technology, even rationality: many important things preoccupy us but leave the soul lacking. People crave connection, beauty, purpose, meaning, transcendence. These things can be found. And so people turn to the many religions that seem to offer them. Tim Urban noted [https://waitbutwhy.com/2014/02/pick-life-partner.html] that people in non-great relationships are twice as far from having relationships figured out as single people. They have two hard steps to take instead of just one: first to realize their current relationship is bad and break it up, then to find a great one. I'm starting to feel that today's religions are the bad marriages of the pursuit of transcendence. Religions were not optimized to uplift humans; they evolved to serve their priests and their memes. They offer pieces of the good stuff, but at the heavy cost of false beliefs [https://putanumonit.com/2018/04/23/dont-believe-wrong-things/#jesus]. And how can the ideal state of your soul be one where you believe falsehoods? I have seen glimpses of my soul's ultimate goal. In poems, in woods, in cuddle parties. And I'm grateful that I could see them clearly, without the veil of one dogma or another obscuring my vision. I may or may not ever get there, but with religion I certainly won't. And when I hear about rationalist friends becoming religious, I grieve for them having fallen off the path. I think that this is Sam Harris' core truth, which is why he's so adamant about the benefits of individual spiritual practice and the horrors of organized religion. Each person's path is their own, and while the wisdom and advice of others is indispensable, any person (or holy book) claiming to already know the destination is going in the wrong direction.
10FactorialCode2d Inspired by the recent post on impact measures [https://www.lesswrong.com/posts/xCxeBSHqMEaP3jDvY/reframing-impact], I thought of an example illustrating the subjective nature of impact. Consider taking the action of simultaneously collapsing all the stars except our sun into black holes. (Suppose you can somehow do this without generating supernovas.) To me, this seems like a highly impactful event, potentially vastly curtailing the future potential of humanity. But to an 11th century peasant, all this would mean is that the stars in the night sky would slowly go out over the course of millennia. Which would have very little impact on the peasant's life.
7Matthew Barnett2d Signal boosting a Lesswrong-adjacent author from the late 1800s and early 1900s. Via a friend, I recently discovered the zoologist, animal rights advocate, and author J. Howard Moore. His attitudes towards the world reflect contemporary attitudes within effective altruism about science, the place of humanity in nature, animal welfare, and the future. Here are some quotes [https://en.wikiquote.org/wiki/J._Howard_Moore] which readers may enjoy: Oh, the hope of the centuries and the centuries and centuries to come! It seems sometimes that I can almost see the shining spires of that Celestial Civilisation that man is to build in the ages to come on this earth—that Civilisation that will jewel the land masses of this planet in that sublime time when Science has wrought the miracles of a million years, and Man, no longer the savage he now is, breathes Justice and Brotherhood to every being that feels. But we are a part of Nature, we human beings, just as truly a part of the universe of things as the insect or the sea. And are we not as much entitled to be considered in the selection of a model as the part 'red in tooth and claw'? At the feet of the tiger is a good place to study the dentition of the cat family, but it is a poor place to learn ethics. Nature is the universe, including ourselves. And are we not all the time tinkering at the universe, especially the garden patch that is next to us—the earth? Every time we dig a ditch or plant a field, dam a river or build a town, form a government or gut a mountain, slay a forest or form a new resolution, or do anything else almost, do we not change and reform Nature, make it over again and make it more acceptable than it was before? Have we not been working hard for thousands of years, and do our poor hearts not almost faint sometimes when we think how far, far away the millennium still is after all our efforts, and how long our little graves will have been forgotten when that blessed time gets here? The defect in this arg
7Evan Rysdam2d I just learned a (rationalist) lesson. I'm taking a course that has some homework that's hosted on a third party site. There was one assignment at the beginning of the semester, a few weeks ago. Then, about a week ago, I was wondering to myself whether there would be any more assignments any time soon. In fact, I even wondered if I had somehow missed a few assignments, since I'd thought they'd be assigned more frequently. Well, I checked my course's website (different from the site where the homework was hosted) and didn't see any mention of assignments. Then I went to the professor's website, and saw that they said they didn't assign any "formal homework". Finally, I thought back to the in-class discussions, where the third-party homework was never mentioned. "Ah, good," I thought. "I guess I haven't missed any assignments, and none are coming up any time soon either." Then, today, the third-party homework was actually mentioned in class, so just now I went to look at the third-party website. I have missed three assignments, and there is another one due on Sunday. I am not judged by the quality of my reasoning. I am judged by what actually happens, as are we all. In retrospect (read: "beware that hindsight bias might be responsible for this paragraph") I kind of feel like I wasn't putting my all into figuring out if I was missing any assignments, and was instead just nervously trying to convince myself that I wasn't. Obviously, I would rather have had that unpleasant experience earlier and missed fewer assignments -- aka, if I was missing assignments, then I should have wanted to believe that I was missing assignments. Oops.

Thursday, September 19th 2019

Shortform [Beta]
17habryka3d What is the purpose of karma? LessWrong has a karma system, mostly based off of Reddit's karma system, with some improvements and tweaks to it. I've thought a lot about more improvements to it, but one roadblock that I always run into when trying to improve the karma system is that it actually serves a lot of different uses, and changing it in one way often means completely destroying its ability to function in a different way. Let me try to summarize what I think the different purposes of the karma system are: Helping users filter content: The most obvious purpose of the karma system is to determine how long a post is displayed on the frontpage, and how much visibility it should get. Being a social reward for good content: This aspect of the karma system comes out more when thinking about Facebook "likes". Often when I upvote a post, it is more of a public signal that I value something, with the goal that the author will feel rewarded for putting their effort into writing the relevant content. Creating common knowledge about what is good and bad: This aspect of the karma system comes out the most when dealing with debates, though it's present in basically any karma-related interaction. The fact that the karma of a post is visible to everyone helps people establish common knowledge of what the community considers to be broadly good or broadly bad. Seeing an insult downvoted does more than just filter it out of people's feeds, it also makes it so that anyone who stumbles across it learns something about the norms of the community. Being a low-effort way of engaging with the site: On LessWrong, Reddit and Facebook, karma is often the simplest action you can take on the site. This means it's usually key for a karma system like that to be extremely simple, and not require complicated decisions, since that would break the basic engagement loop with the site. Problems with alternative karma systems: Here are some of the most common alternatives to our current k
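For the "filtering content" purpose, here is a hedged sketch of the general shape such a mechanism can take: an illustrative time-decayed ranking in Python. This is not LessWrong's actual algorithm, and all names and constants are made up; it only shows how karma can trade off against age to produce frontpage visibility.

```python
# A hedged sketch, NOT LessWrong's actual algorithm: one common shape for
# "karma determines frontpage visibility" is a time-decayed score, where
# upvotes push a post up and age steadily pushes it down. All names and
# constants here are illustrative.
import math
import time
from typing import Optional

def frontpage_score(karma: int, posted_at: float,
                    decay_hours: float = 24.0,
                    now: Optional[float] = None) -> float:
    """Return a sort key: log-scaled karma minus a penalty for age."""
    now = time.time() if now is None else now
    age_hours = (now - posted_at) / 3600.0
    return math.log(max(karma, 1), 2) - age_hours / decay_hours

# A three-day-old 64-karma post (score 3.0) now ranks below a fresh
# 16-karma post (score 4.0) with these illustrative constants.
old_post = frontpage_score(karma=64, posted_at=time.time() - 72 * 3600)
new_post = frontpage_score(karma=16, posted_at=time.time())
print(old_post, new_post)
```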
15romeostevensit4d A service where a teenager reads something you wrote slowly and sarcastically. The points at which you feel defensive are worthy of further investigation.
6Hazard3d Reverse-Engineering a World View. I've been having to do this a lot for Ribbonfarm's Mediocratopia [https://www.ribbonfarm.com/2019/02/08/mediocratopia-1/] blog chain. Rao often confuses me and I have to step up my game to figure out where he's coming from. It's basically a move of "What would have to be different for this to make sense?" Confusion: "But if you're going up in levels, stuff must be getting harder, so even though you're mediocre in the next tier, shouldn't you be losing slack, which is antithetical to mediocrity?" Resolution: "What if there are weird discontinuous jumps in both skill and performance, and taking on a new frame/strategy/practice bumps you to the next level, without your effort going up proportionally?"

Wednesday, September 18th 2019

Shortform [Beta]
13TurnTrout4d Good, original thinking feels present to me - as if mental resources are well-allocated. The thought which prompted this: Sure, if people are asked to solve a problem and say they can't after two seconds, yes - make fun of that a bit. But that two seconds covers more ground than you might think, due to System 1 precomputation. Reacting to a bit of HPMOR here, I noticed something felt off about Harry's reply to the Fred/George-tried-for-two-seconds thing. Having a bit of experience noticing confusion, I did not think "I notice I am confused" (although this can be useful). I did not think "Eliezer probably put thought into this", or "Harry is kinda dumb in certain ways - so what if he's a bit unfair here?". Without resurfacing, or distraction, or wondering if this train of thought is more fun than just reading further, I just thought about the object-level exchange. People need to allocate mental energy wisely; this goes far beyond focusing on important tasks. Your existing mental skillsets already optimize and auto-pilot certain mental motions for you, so you should allocate less deliberation to them. In this case, the confusion-noticing module was honed; by not worrying about how well I noticed confusion, I was able to quickly have an original thought. When thought processes derail or brainstorming sessions bear no fruit, inappropriate allocation may be to blame. For example, if you're anxious, you're interrupting the actual thoughts with "what-if"s. To contrast, non-present thinking feels like a controller directing thoughts to go from here to there: do this and then, check that, come up for air over and over... Present thinking is a stream of uninterrupted strikes, the train of thought chugging along without self-consciousness. Moving, instead of thinking about moving while moving. I don't know if I've nailed down the thing I'm trying to point at yet.
3hunterglenn4d When you are in a situation, there are too many true facts about that situation for you to think about all of them at the same time. Whether you do it on purpose or not, you will inevitably end up thinking about some truths more than others. From a truth perspective, this is fine, so long as you stick to true statements. From a truth perspective, you could also change which true facts you are thinking about, without sacrificing any truth. Truth doesn't care. But happiness cares, and utility cares. Which truths you happen to focus on may not affect how true your thoughts are, but it does affect your psychological state. And your psychological state affects your power over your situation, to make it better or worse, for yourself, and for everyone else. There's a chain of cause and effect, starting with what truth you hold in your mind, which then affects your emotional and psychological states, which then affects your actions, which, depending on if they're "good" or "bad," end up affecting your life. I harp on this so much because some people keep thinking thoughts that ruin their mood, their choices, and their lives, but they refuse to just STOP thinking those ruinous thoughts, and they justify their refusal on the truth of the thoughts. So let's say you're helping to pass food out to stranded orphans, and it occurs to you that this won't matter in a thousand years. Then it occurs to you that there are so many orphans that this won't make any appreciable difference. It occurs to you that you'll go about your life in the world afterwards, seeing and hearing many things, and probably none of those things will be any better as a result of what you're doing for these orphans. Not better, not even different. And what you're doing isn't a big enough change to be noticeable, no matter how hard you look. So, all of these are factually accurate, true ideas, let's say. Fine. Now what happens to your psychological and emotional state? You perceive the choices before you as suddenl
1sayan4d What gadgets have improved your productivity? For example, I started using a stylus a few days ago and realized it can be a great tool for a lot of things!

Tuesday, September 17th 2019

Shortform [Beta]
6Spiracular6d One of my favorite little tidbits from working on this post [https://www.lesswrong.com/posts/ygFc4caQ6Nws62dSW/bioinfohazards]: realizing that idea inoculation and the Streisand effect are opposite sides of the same heuristic.
5FactorialCode6d Meta-philosophy hypothesis: Philosophy is the process of reifying fuzzy concepts that humans use. By "fuzzy concepts" I mean things where we can say "I know it when I see it." but we might not be able to describe what "it" is. Examples that I believe support the hypothesis: * This shortform is about the philosophy of "philosophy" and this hypothesis is an attempt at an explanation of what we mean by "philosophy". * In epistemology, Bayesian epistemology is a hypothesis that explains the process of learning. * In ethics, an ethical theory attempts to make explicit our moral intuitions. * A clear explanation of consciousness and qualia would be considered philosophical progress.
3Spiracular6d Bubbles in Thingspace. It occurred to me recently that, by analogy with ML, definitions might occasionally be more like "boundaries and scoring-algorithms in thingspace" than clusters per se (messier! no central example! no guaranteed contiguity!). Given the need to coordinate around definitions, most of them are going to have a simple and somewhat-meaningful center... but for some words, I suspect there are dislocated "bubbles" and oddly-shaped "smears" that use the same word for a completely different concept. Homophones are one of the clearest examples; totally disconnected bubbles of substance. Another example is when a word covers all cases except those where a different word applies better; in that case, you can expect to see a "bite" taken out of its space, or even a multidimensional empty bubble, or a doughnut-like gap in the definition. If the hole is centered ("the strongest cases go by a different term" actually seems like a very common phenomenon), it even makes the idea of a "central" definition rather meaningless, unless you're willing to fuse or switch terms.
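An illustrative sketch of the contrast being drawn (my own, not from the post; the regions and numbers are made up): a "cluster"-style concept classifies by distance to a central example, while a "boundary/scoring"-style concept can have a bite or a doughnut-shaped hole where the strongest cases go by a different term.

```python
# An illustrative sketch (mine, not from the post; the regions and numbers
# are made up). A "cluster"-style concept classifies by closeness to a
# central example; a "boundary/scoring"-style concept can carve out a bite
# or a doughnut-shaped hole with no meaningful central example at all.

def cluster_concept(x, center=(0.0, 0.0), radius=1.0):
    """Prototype-style definition: 'sufficiently near the central example'."""
    dist = sum((a - b) ** 2 for a, b in zip(x, center)) ** 0.5
    return dist <= radius

def boundary_concept(x):
    """Scoring-style definition: inside the disc, but with the very center
    'bitten out' because the strongest cases go by a different term."""
    dist = sum(a ** 2 for a in x) ** 0.5
    return 0.3 <= dist <= 1.0   # a doughnut-shaped region in thingspace

print(cluster_concept((0.0, 0.0)))   # True: the center is the best example
print(boundary_concept((0.0, 0.0)))  # False: the center belongs to another word
print(boundary_concept((0.5, 0.0)))  # True: inside the ring
```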

Monday, September 16th 2019

Personal Blogposts
2 [Event] SSC Meetups Everywhere · 645 South Clark Street, Chicago · Sep 28th
Shortform [Beta]
15mr-hire6d Been mulling over the idea of doing a podcast in which each episode is based on acquiring a particular skillset (self-love, focus, making good investments) instead of just interviewing a particular person. I interview a few people who have a particular skill (e.g. self-love, focus, creating cash flow businesses), and model the cognitive strategies that are common between them. Then interview a few people who struggle a lot with that skill, and model the cognitive strategies that are common between them. Finally, interview a few people who used to be bad at the skill but are now good, and model the strategies that were common in making the switch. The episode is cut to tell a narrative of what skills are to be acquired, what beliefs/attitudes need to be let go of and acquired, and the process to acquire them, rather than focusing on interviewing a particular person. If there's enough interest, I'll do a pilot episode. Comment with what skillset you'd love to see a pilot episode on. Upvote if you'd have 50% or more chance of listening to the first episode.
10Matthew Barnett6d There's a phenomenon I currently hypothesize to exist where direct attacks on the problem of AI alignment are criticized much more often than indirect attacks. If this phenomenon exists, it could be advantageous to the field in the sense that it encourages thinking deeply about the problem before proposing solutions. But it could also be bad because it disincentivizes work on direct attacks on the problem (if one is criticism averse and would prefer their work be seen as useful). I have arrived at this hypothesis from my observations: I have watched people propose solutions only to be met with immediate and forceful criticism from others, while other people proposing non-solutions and indirect analyses are given little criticism at all. If this hypothesis is true, I suggest it is partly or mostly because direct attacks on the problem are easier to defeat via argument, since their assumptions are made plain. If this is so, I consider it to be a potential hindrance on thought, since direct attacks are often the type of thing that leads to the most deconfusion -- not because the direct attack actually worked, but because in explaining how it failed, we learned what definitely doesn't work.
6TurnTrout6d I seem to differently discount different parts of what I want. For example, I'm somewhat willing to postpone fun to low-probability high-fun futures, whereas I'm not willing to do the same with romance.

Sunday, September 15th 2019

Shortform [Beta]
5An1lam7d Epistemic status: Thinking out loud. Introducing the Question. Scientific puzzle I notice I'm quite confused about: what's going on with the relationship between thinking and the brain's energy consumption? On one hand, I'd always been told that thinking harder sadly doesn't burn more energy than normal activity. I believed that and had even come up with a plausible story about how evolution optimizes for genetic fitness not intelligence, and introspective access is pretty bad as it is, so it's not that surprising that we can't crank up our brain's energy consumption to think harder. This seemed to jibe with the notion that our brain's putting way more computational resources towards perceiving and responding to perception than abstract thinking. It also fit well with recent results calling ego depletion into question and into the framework in which mental energy depletion is the result of a neural opportunity cost calculation [https://www.lesswrong.com/posts/9SSXcQ92ZJHgdqzDj/link-why-self-control-seems-but-may-not-be-limited]. Going even further, studies like this one [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4019873/] left me with the impression that experts tended to require less energy to accomplish the same mental tasks as novices. Again, this seemed plausible under the assumption that experts' brains developed some sort of specialized modules over the thousands of hours of practice they'd put in. I still believe that thinking harder doesn't use more energy, but I'm now much less certain about the reasons I'd previously given for this. Chess Players' Energy Consumption. This recent ESPN (of all places) article [https://www.espn.com/espn/story/_/id/27593253/why-grandmasters-magnus-carlsen-fabiano-caruana-lose-weight-playing-chess] about chess players' energy consumption during tournaments has me questioning this story. The two main points of the article are: 1. Chess players burn a lot of energy during tournaments, potentially on the order of 6000 ca
4Ruby7d Converting this from a Facebook comment to LW Shortform. A friend complains about recruiters who send repeated emails saying things like "just bumping this to the top of your inbox" when they have no right to be trying to prioritize their emails over everything else my friend might be receiving from friends, travel plans, etc. The truth is they're simply paid to spam. Some discussion of repeated messaging behavior ensued. These are my thoughts: I feel conflicted about repeatedly messaging people. All the following being factors in this conflict: * Repeatedly messaging can be making yourself an asshole that gets through someone's unfortunate asshole filter [https://siderea.livejournal.com/1230660.html]. * There's an angle from which repeatedly, manually messaging people is a costly signal bid that their response would be valuable to you. Admittedly this might not filter in the desired ways. * I know that many people are in fact disorganized and lose emails or otherwise don't have systems for getting back to you such that failure to get back to you doesn't mean they didn't want to. * There are other people who have extremely good systems. I'm always impressed by the super busy, super well-known people who get back to you reliably after three weeks. Systems. I don't always know where someone falls between "has no systems, relies on other people to message repeatedly" vs "has impeccable systems but due to volume of emails will take two weeks." * The overall incentives are such that most people probably shouldn't generally reveal which they are. * Sometimes the only way to get things done is to bug people. And I hate it. I hate nagging, but given other people's unreliability, it's either you bugging them or a good chance of not getting some important thing. * A wise, well-respected, business-experienced rationalist told me many years ago that if you want somethi
3Evan Rysdam7d I would appreciate an option to hide the number of votes that posts have. Maybe not hide entirely, but set them to only display at the bottom of a post, and not at the top nor on the front page. With the way votes are currently displayed, I think I'm getting biased for/against certain posts before I even read them, just based on the number of votes they have.
1crabman7d Many biohacking guides suggest using melatonin. Does liquid melatonin [https://iherb.com/pr/Now-Foods-Liquid-Melatonin-2-fl-oz-59-ml/18345] spoil under high temperature if put in tea (95 degrees Celsius)? More general question: how do I even find answers to questions like this one?

Saturday, September 14th 2019

Personal Blogposts
0 [Event] SSC Meetups Everywhere: Brighton, UK · 201 Western Rd, Brighton, UK · Sep 21st
2 [Event] SSC Meetups Everywhere: Rochester, NY · 200 East Avenue, Rochester · Sep 21st
0 [Event] SSC Meetups Everywhere: St. Louis, MO · 3974 Hartford Street, St. Louis · Sep 21st
0 [Event] SSC Meetups Everywhere: San Antonio, TX · 7338 Louis Pasteur Drive #204, San Antonio · Sep 21st
0 [Event] SSC Meetups Everywhere: Rio de Janeiro, Brazil · Avenida Rio Branco, 143 - A - Centro, Rio de Janeiro · Sep 21st
0 [Event] SSC Meetups Everywhere: Riga, Latvia · Audēju iela 15, Centra rajons, Rīga · Sep 28th
Shortform [Beta]
23Ruby9d Selected Aphorisms from Francis Bacon's Novum Organum. I'm currently working to format Francis Bacon's Novum Organum [https://en.wikipedia.org/wiki/Novum_Organum] as a LessWrong sequence. It's a moderate-sized project as I have to work through the entire work myself, and write an introduction which does Novum Organum justice and explains the novel move of taking an existing work and posting it on LessWrong (short answer: NovOrg is some serious hardcore rationality and contains central tenets of the LW foundational philosophy notwithstanding being published back in 1620, not to mention that Bacon and his works are credited with launching the modern Scientific Revolution). While I'm still working on this, I want to go ahead and share some of my favorite aphorisms from it so far: 3. . . . The only way to command reality is to obey it . . . 9. Nearly all the things that go wrong in the sciences have a single cause and root, namely: while wrongly admiring and praising the powers of the human mind, we don’t look for true helps for it. Bacon sees the unaided human mind as entirely inadequate for scientific progress. He sees the way forward for scientific progress as constructing tools/infrastructure/methodology to help the human mind think/reason/do science. 10. Nature is much subtler than are our senses and intellect; so that all those elegant meditations, theorizings and defensive moves that men indulge in are crazy—except that no-one pays attention to them. [Bacon often uses a word meaning ‘subtle’ in the sense of ‘fine-grained, delicately complex’; no one current English word will serve.] 24. There’s no way that axioms •established by argumentation could help us in the discovery of new things, because the subtlety of nature is many times greater than the subtlety of argument. But axioms •abstracted from particulars in the proper way often herald the discovery of new particulars and point them out, thereby returning the sciences to their active status. Bacon repeat
15crabman8d A competition on solving math problems via AI is coming. https://imo-grand-challenge.github.io/ [https://imo-grand-challenge.github.io/] * The problems are from the International Mathematical Olympiad (IMO) * They want to formalize all the problems in the Lean theorem prover's language. They haven't figured out how to do that, e.g. how to formalize problems of the form "determine the set of objects satisfying the given property", as can be seen in https://github.com/IMO-grand-challenge/formal-encoding/blob/master/design/determine.lean [https://github.com/IMO-grand-challenge/formal-encoding/blob/master/design/determine.lean] * A contestant must submit a computer program that will take a problem's description as input and output a solution and its proof in Lean. I would guess that this is partly a way to promote Lean. I think it would be interesting to pose questions about this on Metaculus.
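A hedged sketch of one way such a "determine"-style problem could be encoded, written in Lean. This is my own illustration, not the official IMO Grand Challenge encoding (see the linked determine.lean for the actual design discussion), and all names are made up: a submission has to exhibit an explicit answer together with a machine-checked proof that the answer is correct.

```lean
-- A hedged sketch (mine, not the official IMO-grand-challenge encoding).
-- One way to phrase a "determine the set of objects satisfying P" problem
-- is to require both an explicit answer and a proof that it characterises P.

-- Toy problem: "determine all natural numbers n such that n + n = n."
def P (n : ℕ) : Prop := n + n = n

-- A solution pairs a concrete answer predicate with a correctness proof.
structure solution :=
(answer  : ℕ → Prop)
(correct : ∀ n, answer n ↔ P n)

-- What a contestant's program would have to output: the answer "n = 0"
-- plus a machine-checked proof (left as `sorry` here, since producing
-- that proof is exactly the contest task).
def submitted : solution :=
{ answer  := λ n, n = 0,
  correct := sorry }
```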
6TekhneMakre8d Referential distance or referential displacement. Like inferential distance [https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances], but for reference instead of inference. Inferential distance is when one perspective holds conclusions that require many steps of inference to justify, and the other perspective hasn't gone through those steps of inference (perhaps because the inferences are unsound). Referential distance is when one perspective has a term that points to something, and that term is most easily defined using many other terms, and the other perspective doesn't know what those terms are supposed to point to. Referential distance, unlike normal distance, is directional: you are distant from an ancient Greek yarn spinner, who knows what a spindle and whorl is, and perhaps you don't; and separately, you know what a computer is, and they don't. (So really RD should be called referential displacement, and likewise for inferential displacement. It's the governor's job to rectify this.) Referential distance breaks up into translational distance (we have yet to match up the terms in our respective idiolects), and conceptual distance (you have concepts for things that I don't have concepts for, and/or we carve things up differently). Be careful not to mistakenly take a referential distance to be a translational distance when really it's conceptual; much opportunity to grow is lost in this way.
5Raemon9d I know I'll go to programmer hell for asking this... but... does anyone have a link to a github repo that tried really hard to use jQuery to build their entire website, investing effort into doing some sort of weird 'jQuery based components' thing for maintainable, scalable development? People tell me this can't be done without turning into terrifying spaghetti code but I dunno I feel sort of like the guy in this xkcd [https://xkcd.com/319/] and I just want to know for sure.

Friday, September 13th 2019

Shortform [Beta]
17habryka9d Thoughts on impact measures and making AI traps. I was chatting with TurnTrout today about impact measures, and ended up making some points that I think are good to write up more generally. One of the primary reasons why I am usually unexcited about impact measures is that I have a sense that they often "push the confusion into a corner" in a way that actually makes solving the problem harder. As a concrete example, I think a bunch of naive impact regularization metrics basically end up shunting the problem of "get an AI to do what we want" into the problem of "prevent the agent from interfering with other actors in the system". The second one sounds easier, but mostly just turns out to also require a coherent concept and reference of human preferences to resolve, and you got very little from pushing the problem around that way, and sometimes get a false sense of security because the problem appears to be solved in some of the toy problems you constructed. I am definitely concerned that TurnTrout's AUP does the same, just in a more complicated way, but am a bit more optimistic than that, mostly because I do have a sense that in the AUP case there is actually some meaningful reduction going on, though I am unsure how much. In the context of thinking about impact measures, I've also recently been thinking about the degree to which "trap-thinking" is actually useful for AI Alignment research. I think Eliezer was right in pointing out that a lot of people, when first considering the problem of unaligned AI, end up proposing some kind of simple solution like "just make it into an oracle" and then consider the problem solved. I think he is right that it is extremely dangerous to consider the problem solved after solutions of this type, but it isn't obvious that there isn't some good work that can be done that is born out of the frame of "how can I trap the AI and make it marginally harder for it to be dangerous, basically pretending it's just a slightly smarter human
9Matthew Barnett10d I agree with Wei Dai that we should use our real names [https://www.lesswrong.com/posts/GEHg5T9tNbJYTdZwb/please-use-real-names-especially-for-alignment-forum] for online forums, including Lesswrong. I want to briefly list some benefits of using my real name, * It means that people can easily recognize me across websites, for example from Facebook and Lesswrong simultaneously. * Over time my real name has been stable whereas my usernames have changed quite a bit over the years. For some very old accounts, such as those I created 10 years ago, this means that I can't remember my account name. Using my real name would have averted this situation. * It motivates me to put more effort into my posts, since I don't have any disinhibition from being anonymous. * It often looks more formal than a silly username, and that might make people take my posts more seriously than they otherwise would have. * Similar to what Wei Dai said, it makes it easier for people to recognize me in person, since they don't have to memorize a mapping from usernames to real names in their heads. That said, there are some significant downsides, and I sympathize with people who don't want to use their real names. * It makes it much easier for people to dox you. There are some very bad ways that this can manifest. * If you say something stupid, your reputation is now directly on the line. Some people change accounts every few years, as they don't want to be associated with the stupid person they were a few years ago. * Sometimes disinhibition from being anonymous is a good way to spur creativity. I know that I was a lot less careful in my previous non-real-name accounts, and my writing style was different -- perhaps in a way that made my writing better. * Your real name might sound boring, whereas your online username can sound awesome.
9Hasturtimesthree10d Most people in the rationality community are more likely to generate correct conclusions than I am, and are in general better at making decisions. Why is that the case? Because they have more training data, and are in general more competent than I am. They actually understand the substrate on which they make decisions, and what is likely to happen, and therefore have reason to trust themselves based on their past track record, while I do not. Is the solution therefore just "git gud"? This sounds unsatisfactory, it compresses competence to a single nebulous attribute rather than recommending concrete steps. It is possible that there are in fact generalizable decisionmaking algorithms/heuristics that I am unaware of, that I can use to actually generate decisions that have good outcomes. It might then be possible that I haven't had enough training? When was the last time I actually made a decision unguided by the epistemic modesty shaped thing that I otherwise use, because relying on my own thought processes is known to have bad outcomes and mess things up? In which case the solution might be to have training in a low-stakes environment, where I can mess up without consequence, and learn from that. Problem: these are hard to generate in a way that carries over cross-domain. If I trust my decision process about which tech to buy in Endless Legend, that says nothing about my decision process about what to do when I graduate. Endless Legend is simple, and the world is complicated. I can therefore fully understand: "this is the best tech to research: I need to convert a lot of villages, so I need influence to spend on that, and this tech generates a lot of it". While figuring out what path to take such that the world benefits the most requires understanding what the world needs, an unsolved problem in itself, and the various effects each path is likely to have. Or on even the small scale, where to put a particular object in the REACH that doesn't seem to have an obv
1JustMaier10d I've recently started to try and participate more in online discussions like LessWrong (technically this is my first post here). However in doing so I've realized what feels like a gaping hole in digital identity. No one knows who I am, and how could they? They see my name, my photo, my short bio, and they have no way of knowing the complex person behind it all. In my experience, when I interact with others, I feel like I am often misunderstood because the people I'm interacting with don't have adequate context of me to understand where my perspective is coming from. This plays out positively and negatively. Ultimately it causes people to unwittingly apply bias because they don't have the information they need to make sense of why I'm saying what I'm saying and how who I am plays a factor in what I'm trying to communicate. It seems to me that currently, the most effective way to establish a digital identity is by surrounding yourself with individuals with similar affinities and building social networks that establish your identity within those fields. This seems like a complicated and inefficient process and I'm curious to hear if I'm way off base and what others see as ways to establish a powerful digital identity.
