All Posts

Sorted by Magic (New & Upvoted)

Thursday, December 12th 2019

No posts for December 12th 2019
Shortform [Beta]
15BrienneYudkowsky1hSangha: Part 1 In years past, the word “community” conjured for me images of many people talking to each other, as at a party or a bake sale. When I thought of “finding community”, I thought of looking for a group of people that would frequently interact with each other and also me. It didn’t really sound appealing — lots of chaos, too many people talking at once, constant misunderstandings, and so forth. But I knew that I felt worse and worse over time if I never saw other people. So I entered a “community” of people with a shared interest, and therefore an excuse to spend time together, and I gradually tried to figure out how to make “community” a better experience. I failed at that, over and over, for years. In 2019, I began to succeed. I know exactly why, but I feel a little embarrassed saying it, because it sounds so cheesy. I’ll say it anyway: I succeeded because I stopped looking for community outside of myself. My new year’s resolution for this year was to “take refuge in the sangha”. Literally, “sangha” is a group of monks and nuns living together at a monastery. When I spent a summer at a Zen temple, though, the abbess there used the term much more expansively. Sometimes she meant “everybody who comes to the temple”. Sometimes she meant “everyone who practices Buddhism”. Sometimes she meant “all sentient beings” (and she used “sentient” rather broadly as well, usually including all living things plus a variety of spirits). But whenever she said “sangha”, she always seemed to suggest something about her relationship to those beings, something with the flavor of monks and nuns practicing together day in and day out, washing floors together and meeting in the meditation hall well before the sun is up. In her view of the world, the grasshoppers in Germany are supporting her practice. When I resolved to “take refuge in the sangha”, I intended to do it no matter where I was or who I was with. If it’s possible to be supported by the grasshoppers in Germany,
11TekhneMakre20hWish: "Wonks, Bureaucrats, Sociopaths" hidden-identity game. I wish there were a hidden-identity politicking game, like Werewolf, Avalon, Secret Hitler, etc., that has the following properties: some kind of consensus will eventually be reached; some players want the consensus to match some "objective truth" (these are the Wonks, and the "objective truth" is an actual thing about which all players get some private information); some players want to bend the consensus to agree with something private and particular to that one player (these are the Sociopaths); and most players just want to agree with what the eventual consensus is, and not get shot in the process (these are the Bureaucrats). In my imagination, the Sociopaths have some power (maybe more sway on who gets assassinated or otherwise handicapped, or a codified greater ability to lie) such that by default, Bureaucrats are incentivized to try to latch on to whatever position the dominant Sociopath seems to be driving the consensus toward, contra the Bureaucrats' incentive to predict the truth; and the Wonks are incentivized to prevent the Sociopaths from distinguishing them from the Bureaucrats (and targeting them for assassination). One dynamic I'm interested in is Wonks signaling to each other that they're Wonks by choosing truth over power (that is, choosing to allow the group to keep aggregating info on what the truth is, over seizing power the way a non-Wonk would). Bureaucrats can choose to act like Sociopaths but with more leeway to shift their opinion (since they just care about agreeing with consensus, they can appear to care about truth so that they keep power, exerting a "clarifying force" that makes the consensus collapse prematurely, from the truth-seeking perspective). Wonks can get Bureaucrats to switch to behaving mostly like Wonks, except Bureaucrats will always prefer a clear consensus over a correct one. Would be cool to tune the game so that those strategies actually come apart in practice.
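A minimal sketch of the incentive structure described above, in code. The role names follow the post, but the payoff numbers, the scoring function, and the idea of a single numeric "consensus" are all invented here purely for illustration, not a proposed ruleset:

```python
# Hypothetical payoff sketch for the Wonk/Bureaucrat/Sociopath idea above.
# All values are placeholders; a real game would need mechanisms for how the
# consensus forms and who gets assassinated.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    role: str                 # "wonk", "bureaucrat", or "sociopath"
    private_target: int = 0   # only meaningful for sociopaths

def score(player: Player, consensus: int, truth: int, assassinated: set) -> int:
    """Wonks want consensus == truth; Sociopaths want consensus == their private
    target; Bureaucrats just want to survive having backed the consensus."""
    if player.name in assassinated:
        return 0
    if player.role == "wonk":
        return 2 if consensus == truth else 0
    if player.role == "sociopath":
        return 3 if consensus == player.private_target else 0
    return 1  # bureaucrat

players = [Player("A", "wonk"), Player("B", "bureaucrat"), Player("C", "sociopath", private_target=7)]
print([score(p, consensus=7, truth=4, assassinated={"A"}) for p in players])  # [0, 1, 3]
```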
10jacobjacob8hRATIONALITY HAS PUBS; WE NEED GYMS Consider the difference between a pub and a gym. You go to a pub with your rationalist friends to:
* hang out
* discuss interesting ideas
* maybe maths a bit in a notebook someone brought
* gossip
* get inspired about the important mission you're all on
* relax
* brainstorm ambitious plans to save the future
* generally have a good time
You go to a gym to:
* exercise
* that is, repeat a particular movement over and over, paying attention to the motion as you go, being very deliberate about using it correctly
* gradually try new or heavier moves to improve in areas you are weak in
* maybe talk and socialise -- but that is secondary to your primary focus of becoming stronger
* in fact, it is common knowledge that the point is to practice, and you will not get socially punished for trying really hard, or stopping a conversation quickly and then just focusing on your own thing in silence, or making weird noises or grunts, or sweating... in fact, this is all expected
* not necessarily have a good time, but invest in your long-term health, strength and flexibility
One key distinction here is effort. Going to a bar is low effort. Going to a gym is high effort. In fact, going to the gym requires such a high effort that most people have a constant nagging guilt about doing it. They proceed to set up accountability systems with others, hire personal trainers, use habit installer apps, buy gym memberships as commitment devices, use clever hacks to always have their gym bag packed and ready to go, introspect on their feelings of anxiety about it and try to find work-arounds or sports which suit them, and so forth... People know gyms are usually a schlep, yet they also know going there is important, so they accept that they'll have to try really hard to build systems which get them exercising. However, things seem different for rationality. I've often heard people go "this rationality stuff doesn't se
6toonalfrink1hI have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all. It seems quite obvious to me that these people equate their ability to empathize with an animal with the ability for the animal to be conscious, and it seems quite obvious to me that this is a case of the mind projection fallacy. Empathy is just a simulation. You can't actually see another mind. If you're going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find these correlates in animals. You don't look at animal behavior at all. We have absolutely no reason to believe that behavior correlates with consciousness. That's just your empathy getting in the way. The same empathy that attributes feelings to stuffed animals.
6TurnTrout4hGoing through an intro chem textbook, it immediately strikes me how this should be as appealing and mysterious as the alchemical magic system of Fullmetal Alchemist. "The law of equivalent exchange" ≈ "conservation of energy/elements/mass (the last two holding only for normal chemical reactions)", etc. If only it were natural to take joy in the merely real [https://www.readthesequences.com/Joy-In-The-Merely-Real]...

Wednesday, December 11th 2019

No posts for December 11th 2019
Shortform [Beta]
7Vanessa Kosoy1dIn the past I considered the learning-theoretic approach to AI theory [https://www.alignmentforum.org/posts/5bd75cc58225bf0670375575/the-learning-theoretic-ai-alignment-research-agenda] as somewhat opposed to the formal logic approach popular in MIRI [https://www.alignmentforum.org/posts/3qXE6fK47JhSfkpnB/do-sufficiently-advanced-agents-use-logic] (see also discussion [https://www.alignmentforum.org/posts/3qXE6fK47JhSfkpnB/do-sufficiently-advanced-agents-use-logic#fEKc88NbDWZavkW9o]):
* Learning theory starts from formulating natural desiderata for agents, whereas "logic-AI" usually starts from postulating a logic-based model of the agent ad hoc.
* Learning theory naturally allows analyzing computational complexity whereas logic-AI often uses models that are either clearly intractable or even clearly incomputable from the outset.
* Learning theory focuses on objects that are observable or finite/constructive, whereas logic-AI often considers objects that are unobservable, infinite and unconstructive (which I consider to be a philosophical error).
* Learning theory emphasizes induction whereas logic-AI emphasizes deduction.
However, recently I noticed that quasi-Bayesian reinforcement learning [https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem#X6fFvAHkxCPmQYB6v] and Turing reinforcement learning [https://www.alignmentforum.org/posts/3qXE6fK47JhSfkpnB/do-sufficiently-advanced-agents-use-logic#fEKc88NbDWZavkW9o] have very suggestive parallels to logic-AI. TRL agents have beliefs about computations they can run on the envelope: these are essentially beliefs about mathematical facts (but, we only consider computable facts and computational complexity plays some role there). QBRL agents reason in terms of hypotheses that have logical relationships between them: the order on functions corresponds to implication, taking the minimum of two functions corresponds to logical "and", taking the concave hull of two func
5DanielFilan1dBetter to concretise 3 ways than 1 if you have the time. Here's a tale I've heard but not verified: in the good old days, Intrade had a prediction market on whether Obamacare would become law, which resolved negative, due to the market's definition of Obamacare. Sometimes you're interested in answering a vague question, like 'Did Donald Trump enact a Muslim ban in his first term?' or 'Will I be single next Valentine's day?'. Standard advice is to make the question more specific [https://www.lesswrong.com/posts/XosKB3mkvmXMZ3fBQ/specificity-your-brain-s-superpower] and concrete, turning it into something that can be more objectively evaluated. I think that this is good advice. However, it's inevitable that your concretisation will miss out on aspects of the original vague question that you cared about. As such, it's probably better to concretise the question in multiple ways which have different failure modes. This is sort of obvious for evaluating questions about things that have already happened, like whether a Muslim ban was enacted, but seems to be less obvious or standard in the forecasting setting. That being said, sometimes it is done - OpenPhil's animal welfare series of questions [https://www.metaculus.com/questions/?search=cat:series--animal-welfare] seems to me to basically be an example - to good effect. This procedure does have real costs. Firstly, it's hard to concretise vague questions, and concretising multiple times is harder than concretising once. It's also hard to predict multiple questions, especially if they're somewhat independent as is necessary to get the benefits, meaning that each question will be predicted less well. In a prediction market context, this may well manifest in having multiple thin, unreliable markets instead of one thick and reliable one.
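As an illustration of the "concretise multiple ways" point (the operationalisations and probabilities below are invented, not real forecasts), forecasting several concretisations separately also tells you how sensitive the answer is to the choice of operationalisation:

```python
# Three hypothetical concretisations of one vague question, forecast separately.
# Probabilities are made up for the example.
concretisations = {
    "Executive order restricts entry from 3+ Muslim-majority countries": 0.80,
    "Order explicitly names religion as a criterion": 0.10,
    "Ban covers a majority of the world's Muslim population": 0.25,
}

avg = sum(concretisations.values()) / len(concretisations)
spread = max(concretisations.values()) - min(concretisations.values())
# A large spread is a warning that the vague question hasn't really been pinned down.
print(f"aggregate: {avg:.2f}, spread: {spread:.2f}")
```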
4mr-hire1dMy biggest win lately (courtesy of Elliot Teperman [https://www.facebook.com/elliot.teperman?__tn__=%2CdK-R-R&eid=ARDXTYGkvPIjD_TeT_Efe4IT5g5pXJ2SBqbv1M-ws-k0RiajAFSzorYCFXXxPukb5xThhwgL-7ELcrRO&fref=mentions]) in regards to self-love is to get in the habit of thinking of myself as the parent of a child (myself) who I have unconditional love for, and saying what that parent would say. An unexpected benefit of this is that I've started talking like this to others. Like, sometimes my friends just need to hear that I appreciate them as a human being, and am proud of them for what they accomplished, and it's not the type of thing I used to say at all. And so do I - I didn't realize how much I needed to hear that sort of thing from myself until I started saying it regularly. One could call this Internal Parent Systems. Not to be confused with the default installed one that many of us have, which judges, criticizes, or blames in our parents' voice :). A close cousin of Qiaochu Yuan [https://www.facebook.com/qiaochu?__tn__=%2CdK-R-R&eid=ARD9wYBDhpDinei064JF6oy43LuvbMqe-sbMqVT3fnKzU7DBedTodCFZI2Mk-96uBHSIiCydZeZqNH9j&fref=mentions]'s Internal Puppy Systems.
3TekhneMakre1dRepeated phrases stick in the mind. Repeated phrases heard in different contexts stick in the mind. Even if they're just repeated without additional information, repeated phrases stick in the mind. Repeated phrases stick in the mind and make themselves more available to say, more available to think, more available to use to recognize people who are "on the same wavelength as you". Other forces in the mind latch onto repeated phrases stuck in the mind. External forces with external goals repeat phrases so they'll stick in your mind.
2TekhneMakre1dWish: sememic translations. Most translations are not word-for-word, but instead are paraphrases of the original text; the translator interprets idioms, corrects apparent mistakes, rearranges the order of words and phrases to clarify the sentence structure according to the target language's usual structure, and so on. Obviously this is mostly good, but in some cases one wants to cleave more closely to the "raw" expression of the original. There are interlinear translations. But I've never seen, and wish to see, a good "sememic" translation. A sememe is a unit of meaning; e.g. the word "organizing" has (at least) three sememes: "organ", "iz" = make into the form of, "ing" = ongoing action (or something, IDK). A sememic translation (or maybe "hyperliteral translation") would aspire to directly translate the sememes of the source text into child-like gestures in the target language. For example, the German word "Wahrheit" might be translated as "True-ness", rather than the more standard "truth", and "Wahrscheinlichkeit" might be translated as "True-seeming-like-ness", as opposed to the more standard "probability". Obviously this would be a lot of work, and might only be worth it in cases where the author of the original is a master of language (choosing words with great care), but I think it would prove thought-provoking. One might even make a sememic translation of a text back into its own language! (Sort of related, and a fun read: https://warwick.ac.uk/fac/cross_fac/complexity/people/students/dtc/students2011/maitland/fun/ )

Tuesday, December 10th 2019

No posts for December 10th 2019
Shortform [Beta]
13ozziegooen2dI feel like I've long underappreciated the importance of introspectability in information & prediction systems. Say you have a system that produces interesting probabilities p_n for various statements. The value that an agent gets from them does not correlate directly with the accuracy of these probabilities, but rather with the expected utility gain they get after using the information in these probabilities in corresponding Bayesian-approximating updates. Perhaps more directly, something related to the difference between one's prior and posterior after updating on p_n. Assuming that prediction systems produce results of varying quality, agents will need to know more about these predictions to update optimally. A very simple example would be something like a bunch of coin flips. Say there were 5 coins flipped, I see 3 of them, and I want to estimate the number that were heads. A predictor tells me that their prediction has a mean probability of 40% heads. This is useful, but what would be much more useful is a list of which specific coins the predictor saw and what their values were. Then I could get a much more confident answer; possibly a perfect answer. Financial markets are very black-box-like. Many large changes in company prices never really get explained publicly. My impression is that no one really understands the reasons for many significant market moves. This seems really suboptimal and I'm sure no one wanted this property to be the case.[1] Similarly, when trying to model the future of our own prediction capacities, I really don't think they should be like financial markets in this specific way. [1] I realize that participants in the market try to keep things hidden, but I mean the specific point that few people think that "Stock Market being a black box" = "A good thing for society."
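A worked version of the coin example above (the observed coin values are invented for illustration): compare the posterior over total heads from one's own three observations alone versus after the predictor discloses which coins it saw and what they showed.

```python
# Toy version of the 5-coin example: I see 3 coins; the predictor saw the other 2.
# Compare my posterior over total heads with and without the predictor's
# disclosed observations. The observed values here are invented.
from itertools import product

mine = {0: 1, 1: 1, 2: 0}    # coin index -> observed outcome (1 = heads)
theirs = {3: 1, 4: 0}        # what the predictor saw, if it discloses

def posterior_heads(observed):
    """Fair-coin prior over unobserved coins; returns P(total heads = k)."""
    unseen = [i for i in range(5) if i not in observed]
    counts = {}
    for outcome in product([0, 1], repeat=len(unseen)):
        total = sum(observed.values()) + sum(outcome)
        counts[total] = counts.get(total, 0) + 1
    n = sum(counts.values())
    return {k: v / n for k, v in sorted(counts.items())}

print(posterior_heads(mine))                # {2: 0.25, 3: 0.5, 4: 0.25}
print(posterior_heads({**mine, **theirs}))  # {3: 1.0} -- a perfect answer
```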
12aaq3dScattered thoughts on how the rationalist movement has helped me: On the topic of rationalist self-improvement [https://www.lesswrong.com/posts/8xLtE3BwgegJ7WBbf/is-rationalist-self-improvement-real] , I would like to raise the point that simply feeling as though there's a community of people who get me and that I can access when I want to has been hugely beneficial to my sense of happiness and belonging in the world. That generates a lot of hedons for me, which then on occasion allow me to "afford" doing other things I wouldn't otherwise, like spend a little more time studying mathematics or running through Anki flashcards. There's a part of me that feels like I'm not just building up this knowledge for myself, but for the future possible good of "my people". I might tie together stuff in a way that other people find interesting, or insightful, or at least enjoy reading about, and that's honestly fricking awesome and blows standard delayed-gratification "self improvement" tactics outta the water 10/10 would recommend. Also there's the whole thing that Ozy who is rat-almost-maybe-adjacent wrote the greatest summary of the greatest dating advice book I ever read, and I literally read that effortpost every day for like 8 months while I was learning how to be a half-decent romantic option, and holy SHIT is my life better for that. But again - nothing specific to the rationalist techniques themselves there; the value of the community was pointing me to someone who thinks and writes in a way my brain sees and says "mmm yes tasty good word soup i liek thanke" and then that person happened to write a post that played a big role in helping me with a problem that was causing me a ton of grief. TLDR rationalists > rationalism
11dspeyer3dThis is a public key I wanted to document exists in a rationalist-associated place: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDSU/aJmWPV/lHsh5TiePzWimK0/Bj4VlsykTYucHv5PG+b3ogUe8zjcBqzW1Dl0pIJj+KYaEdxk5KYhEEImyaP6umMPnlKvL4VqR3lXebvTAnGxcWN27ZJDqcfgGI/Ilcf1JVEjA6g6DyvEOx3xeqBUH+oPvo8Z/VmyZjAFuuWwQ== dspeyer@dspeyerheim
6ozziegooen2dHere's an in-progress hierarchy of what's needed for information to be most useful to an organization or other multi-agent system. I'm sure there must be other very similar hierarchies out there, but I don't currently know of any quite like this. Say you've come up with some cool feature that Apple could include in its next phone. You think this is a great idea and they should add it in the future. You're outside of Apple, so the only way you have of interacting with them is by sending information through various channels. The question is: what things should you first figure out to understand how to do this? First, you need to have identified an improvement. You've done that, so you've gotten through the first step. Second, for this to be incorporated, it should make sense from Apple's perspective. If it comes out that the costs of adding the feature, including opportunity costs, outweigh the benefits, then it wouldn't make sense to them. Perhaps you could deceive them into incorporating the feature, but it would be against their interests. So you should hopefully get information about Apple's utility function and identify an intervention that would implement your improvement while being positive in expected value to them. Of course, just because it could be good for Apple does not mean that the people necessary to implement it would be in favor of doing so. Perhaps this feature involves the front-facing camera, and it so happens that the people in charge of the decisions around the front-facing camera have some strange decision function and would prefer not to be asked to do more work. To implement your change, these people would have to be convinced. A rough estimation for that would be an analysis that suggests that taking this feature on would have positive expected value for their utility functions. Again, it's possible that this isn't strictly a requirement, but if you skip it, you may need to effectively deceive people. Once you have expected value equations showing that a specifi
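A tiny sketch of the checks this hierarchy implies, with all numbers invented: an intervention should clear positive expected value for the organization and for each person who would have to act on it, and anyone it fails for is who you'd have to convince (or deceive).

```python
# Hypothetical figures only; the point is the shape of the check, not the numbers.
org_ev = 5_000_000 - 3_000_000   # benefit minus cost to the company

decision_maker_ev = {
    "camera team lead": +0.5,    # net utility of taking on the extra work
    "product manager": -0.2,     # net negative: would rather not do it
}

needs_convincing = [name for name, ev in decision_maker_ev.items() if ev <= 0]
print(org_ev > 0)        # True: worth it from the organization's perspective
print(needs_convincing)  # ['product manager']: positive org EV isn't enough
```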
4G Gordon Worley III2dAfter seeing another LW user (sorry, forgot who) mention this post [https://slatestarcodex.com/2014/03/02/the-comment-policy-is-victorian-sufi-buddha-lite/] in their commenting guidelines, I've decided to change my own commenting guidelines to the following, matching pretty closely the SSC commenting guidelines that I forgot existed until just a couple days ago: I like this because it's simple and it says what rather than how. My old guidelines were all about how: More generally, I think the SSC commenting guidelines might be a good cluster for those of us who want LW comment sections to be "nice" and so mark our posts as norm enforcing. If this catches on, it might help with finding the few clusters of commenting norms that people want, without having lots of variation between authors.

Monday, December 9th 2019

No posts for December 9th 2019

Sunday, December 8th 2019

No posts for December 8th 2019
Shortform [Beta]
40BrienneYudkowsky4dSuppose you wanted to improve your social relationships on the community level. (I think of this as “my ability to take refuge in the sangha”.) What questions might you answer now, and then again in one year, to track your progress? Here’s what’s come to mind for me so far. I’m probably missing a lot and would really like your help mapping things out. I think it’s a part of the territory I can only just barely perceive at my current level of development.
* If something tragic happened to you, such as a car crash that partially paralyzed you or the death of a loved one, how many people can you name whom you'd find it easy and natural to ask for help with figuring out your life afterward?
* For how many people is it the case that if they were hospitalized for at least a week you would visit them in the hospital?
* Over the past month, how lonely have you felt?
* In the past two weeks, how often have you collaborated with someone outside of work?
* To what degree do you feel like your friends have your back?
* Describe the role of community in your life.
* How do you feel as you try to describe the role of community in your life?
* When's the last time you got angry with someone and confronted them one on one as a result?
* When's the last time you apologized to someone?
* How strong is your sense that you're building something of personal value with the people around you?
* When's the last time you spent more than ten minutes on something that felt motivated by gratitude?
* When a big change happens in your life, such as losing your job or having a baby, how motivated do you feel to share the experience with others?
* When you feel motivated to share an experience with others, how satisfied do you tend to be with your attempts to do that?
* Do you know the love languages of your five closest friends? To what extent does that influence how you behave toward them?
* Does it seem to you that your friends know your love
7AlexMennen5dTheorem: Fuzzy beliefs (as in https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem#X6fFvAHkxCPmQYB6v [https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem#X6fFvAHkxCPmQYB6v] ) form a continuous DCPO. (At least I'm pretty sure this is true. I've only given proof sketches so far) The relevant definitions: A fuzzy belief over a set X is a concave function ϕ:ΔX→[0,1] such that sup(ϕ)=1 (where ΔX is the space of probability distributions on X). Fuzzy beliefs are partially ordered by ϕ≤ψ⟺∀μ∈ΔX:ϕ(μ)≥ψ(μ) . The inequalities reverse because we want to think of "more specific"/"less fuzzy" beliefs as "greater", and these are the functions with lower values; the most specific/least fuzzy beliefs are ordinary probability distributions, which are represented as the concave hull of the function assigning 1 to that probability distribution and 0 to all others; these should be the maximal fuzzy beliefs. Note that, because of the order-reversal, the supremum of a set of functions refers to their pointwise infimum. A DCPO (directed-complete partial order) is a partial order in which every directed subset has a supremum. In a DCPO, define x<<y to mean that for every directed set D with supD≥y, ∃d∈D such that d≥x. A DCPO is continuous if for every y , y=sup{x∣x<<y}. Lemma: Fuzzy beliefs are a DCPO. Proof sketch: Given a directed set D , (supD)(μ)=min{d(μ)∣d∈D} is convex, and {μ∣(supD)(μ)=1}=⋂d∈D{μ∣d(μ)=1}. Each of the sets in that intersection are non-empty, hence so are finite intersections of them since D is directed, and hence so is the whole intersection since ΔX is compact. Lemma: ϕ<<ψ iff {μ∣ψ(μ)=1} is contained in the interior of {μ∣ϕ(μ)=1} and for every μ such that ψ(μ)≠1, ϕ(μ)>ψ(μ). Proof sketch: If supD≥ψ, then ⋂d∈D{μ∣d(μ)=1}⊆{μ∣ψ(μ)=1} , so by compactness of ΔX and directedness of D, there should be d∈D such that {μ∣d(μ)=1}⊆int({μ∣ϕ(μ)=1}). Similarly, for each μ such that ψ(μ)≠1, there should be dμ∈D s
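The definitions above, restated in standard notation for readability (nothing new is added; since the excerpt is cut off, only what appears above is reproduced):

```latex
% Fuzzy belief over X: a concave function on distributions attaining supremum 1.
\[
  \phi : \Delta X \to [0,1] \ \text{concave}, \qquad \sup_{\mu \in \Delta X} \phi(\mu) = 1
\]
% The order is reversed so that more specific (less fuzzy) beliefs are greater.
\[
  \phi \le \psi \;\Longleftrightarrow\; \forall \mu \in \Delta X : \ \phi(\mu) \ge \psi(\mu)
\]
% Way-below relation and continuity of a DCPO.
\[
  x \ll y \;\Longleftrightarrow\; \text{for every directed } D \text{ with } \sup D \ge y,\ \exists d \in D : d \ge x
\]
\[
  \text{continuity:} \quad y = \sup \{\, x \mid x \ll y \,\} \ \text{for every } y
\]
```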

Saturday, December 7th 2019

No posts for December 7th 2019
Shortform [Beta]
11Adam Scholl6dSo apparently Ötzi the Iceman [https://en.wikipedia.org/wiki/%C3%96tzi] still has a significant amount [https://sci-hub.tw/https://link.springer.com/article/10.1007/s00018-013-1360-y] of brain tissue. Conceivably some memories are preserved?
10Raemon6dIn response to lifelonglearner's comment, I did some experimenting with making the page a bit bolder. Curious what people think of this screenshot where "unread" posts are bold, and "read" posts are "regular" (as opposed to the current world, where "unread" posts are "regular", and read posts are light-gray).

Friday, December 6th 2019

No posts for December 6th 2019
Shortform [Beta]
42BrienneYudkowsky6dSome advice to my past self about autism: Learn about what life is like for people with a level 2 or 3 autism diagnosis. Use that reference class to predict the nature of your problems and the strategies that are likely to help. Only after making those predictions, adjust for your own capabilities and circumstances. Try this regardless of how you feel about calling yourself autistic or seeking a diagnosis. Just see what happens. Many stereotypically autistic behaviors are less like symptoms of an illness, and more like excellent strategies for getting shit done and having a good life. It’s just hard to get them all working together. Try leaning into those behaviors and see what’s good about them. For example, you know how when you accidentally do something three times in a row, you then feel compelled to keep doing it the same way at the same time forever? Studying this phenomenon in yourself will lead you to build solid and carefully designed routines that allow you to be a lot more reliably vibrant. You know how some autistic people have one-on-one aides, caretakers, and therapists who assist in their development and day-to-day wellbeing? Read a bit about what those aides do. You’ll notice right away that the state of the art in this area is crap, but try to imagine what professional autism aides might do if they really had things figured out and were spectacular at their jobs. Then devote as many resources as you can spare for a whole year to figuring out how to perform those services for yourself. It seems to me that most of what’s written about autism by neurotypicals severely overemphasizes social stuff. You’ll find almost none of it compelling. Try to understand what’s really going on with autism, and your understanding will immediately start paying off in non-social quality of life improvements. Keep at it, and it’ll eventually start paying off in deep and practical social insights as well (which I know you don’t care about right now, but it’s true). I
21Raemon7dOver in this thread, Said asked [https://www.lesswrong.com/posts/5zSbwSDgefTvmWzHZ/affordance-widths#iM4Jfa3ThJcFii2Pm] the reasonable question "who exactly is the target audience with this Best of 2018 book?" I'm working on a post that goes into a bit more detail about the Review Phase, and, to be quite honest, the whole process is a bit in flux – I expect us (the LW team as well as site participants) to learn, over the course of the review process, what aspects of it are most valuable. But, a quick "best guess" answer for now. I see the overall review process as having two "major phases":
* Phase 1: Nomination/Review/Voting/Post-that-summarizes-the-voting
* Phase 2: Compilation and Publication
I think the first phase should be oriented entirely around "internal consumption" – figuring out what epistemic standard to hold ourselves to, and how, so that we can do better in the future. (As well as figuring out what ideas we've developed that should be further built upon). Any other benefits are incidental. The final book/sequence is at least somewhat externally facing. I do expect it to be some people's first introduction to LessWrong, and other people's "one thing they read from LW this year". And at least some consideration should be given to those people's reading experience (which will be lacking a lot of context). But my guess is that should come more in the form of context-setting editor commentary than in decisions about what to include. I think “here are the fruits of our labors; take them and make use of them” is more of what I was aiming for. (Although "what standards are we internally holding ourselves to, and what work should we build towards?" is still an important function of the finished product). It'd be nice if people were impressed, but a better frame for that goal is "Outsiders looking in can get an accurate picture of how productive our community is, and what sort of things we do", and maybe they are impressed by that or maybe not. (I re

Thursday, December 5th 2019

No posts for December 5th 2019
Shortform [Beta]
8BrienneYudkowsky7dThread on The Abolition of Man by C. S. Lewis
8Raemon8dAfter this week's stereotypically sad experience with the DMV.... (spent 3 hours waiting in lines, filling out forms, finding out I didn't bring the right documentation, going to get the right documentation, taking a test, finding out somewhere earlier in the process a computer glitched and I needed to go back and start over, waiting more, finally getting to the end only to learn I was also missing another piece of identification which rendered the whole process moot) ...and having just looked over a lot of 2018 posts [https://www.lesswrong.com/nominations] investigating coordination failure... I find myself wondering if it's achievable to solve one particular way in which bureaucracy is terrible: the part where each node/person in the system only knows a small number of things, so you have to spend a lot of time rehashing things, and meanwhile can't figure out if your goal is actually achievable. (While attempting to solve this problem, it's important to remember that at least some of the inconvenience of bureaucracy may be an active ingredient [https://slatestarcodex.com/2018/08/30/bureaucracy-as-active-ingredient/] rather than inefficiency. But at least in this case it didn't seem so: driver's licenses aren't a conserved resource that the DMV wants to avoid handing out. If I had learned early on that I couldn't get my license last Monday it would have not only saved me time, but saved DMV employee hassle) I think most of the time there's just no incentive to really fix this sort of thing (while you might have saved DMV employee hassle, you probably wouldn't save them time, since they still just work the same 8-hour shift regardless. And if you're the manager of a DMV you probably don't care too much about your employees having slightly nicer days. But, I dunno man, really!? Does it seem like at least Hot New Startups could be sold on software that, I dunno, tracks all the requirements of a bureaucratic process and tries to compile "will this work?" at sta
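A toy sketch of the "compile 'will this work?' at the start" idea, with invented requirement names (not real DMV rules): declare the whole requirement set up front, and check a visitor's documents against it before anyone queues.

```python
# Invented example requirements; the point is checking everything up front.
REQUIREMENTS = {
    "proof_of_identity": 1,    # number of documents needed
    "proof_of_residency": 2,
    "passed_written_test": 1,
}

def will_this_work(documents):
    """Return the unmet requirements; an empty list means the visit can succeed."""
    return [req for req, needed in REQUIREMENTS.items()
            if documents.get(req, 0) < needed]

print(will_this_work({"proof_of_identity": 1, "proof_of_residency": 1}))
# ['proof_of_residency', 'passed_written_test'] -- better to learn this before the queue
```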
4AABoyles7dAttention Conservation Warning: I envision a model which would demonstrate something obvious, and decide the world probably wouldn't benefit from its existence. The standard publication bias is that we must be 95% certain a described phenomenon exists before a result is publishable (at which time it becomes sufficiently "confirmed" to treat the phenomenon as a factual claim). But the statistical confidence of a phenomenon conveys interesting and useful information regardless of what that confidence is. Consider the space of all possible relationships: most of these are going to be absurd (e.g. the relationship between number of minted pennies and number of atoms in moons of Saturn), and exhibit no correlation. Some will exhibit weak correlations (in the range of p = 0.5). Those are still useful evidence that a pathway to a common cause exists! The universal prior on random relationships should be roughly zero, because most relationships will be absurd. What would science look like if it could make efficient use of the information disclosed by presently unpublishable results? I think I can generate a sort of agent-based model to imagine this. Here's the broad outline:
1. Create a random DAG representing some complex related phenomena.
2. Create an agent which holds beliefs about the relationship between nodes in the graph, and updates its beliefs when it discovers a correlation with p > 0.95.
3. Create a second agent with the same belief structure, but which updates on every experiment regardless of the correlation.
4. On each iteration have each agent select two nodes in the graph, measure their correlation, and update their beliefs. Then have them compute the DAG corresponding to their current belief matrix. Measure the difference between the DAG they output and the original DAG created in step 1.
I believe that both agents will converge on the correct DAG, but the un-publication-biased agent will converge much more rapidly. There a
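A rough sketch of the model outlined above, with two simplifications worth flagging: the "p > 0.95" confidence threshold from the outline is treated as a p-value cutoff of 0.05, and instead of recovering a full DAG each agent only tracks which node pairs it believes are related at all. Graph size, noise, and sample counts are arbitrary choices.

```python
# Simplified agent-based sketch: a random DAG generates data; one agent updates
# only on "significant" correlations (publication-biased), the other on every
# experiment. Accuracy is measured against which node pairs are truly related.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 8
adj = (rng.random((N, N)) < 0.25).astype(float)
edges = np.triu(adj, k=1)                      # random DAG over ordered nodes

def sample(n=40):
    """Draw n samples from a linear model consistent with the DAG."""
    x = np.zeros((n, N))
    for j in range(N):
        x[:, j] = rng.normal(size=n) + x[:, :j] @ edges[:j, j]
    return x

# Ground truth "related" pairs: one node is an ancestor of the other.
reach = edges.astype(int)
for _ in range(N):
    reach = ((reach + reach @ reach) > 0).astype(int)
truth = (reach + reach.T) > 0

def run(update_on_everything, steps=400):
    belief = np.full((N, N), 0.5)              # belief that a pair is related
    for _ in range(steps):
        i, j = rng.choice(N, size=2, replace=False)
        data = sample()
        r, p = stats.pearsonr(data[:, i], data[:, j])
        if update_on_everything or p < 0.05:
            target = 1.0 if p < 0.05 else 0.0
            belief[i, j] = belief[j, i] = 0.9 * belief[i, j] + 0.1 * target
    return np.mean((belief > 0.5) == truth)    # fraction of pairs judged correctly

print("publication-biased agent:", run(False))
print("updates-on-everything agent:", run(True))
```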
2Chris_Leong7dEDT agents handle Newcomb's problem as follows: they observe that agents who encounter the problem and one-box do better on average than those who encounter the problem and two-box, so they one-box. That's the high-level description, but let's break it down further. Unlike CDT, EDT doesn't worry about the fact that there may be a correlation between your decision and hidden state. It assumes that if the visible state before you made your decision is the same, then the counterfactuals generated by considering your possible decisions are comparable. In other words, any differences in hidden state, such as you being a different agent or money being placed in the box, are attributed to your decision (see my previous discussion here [https://www.lesswrong.com/posts/SbAofYCgKkaXReDy4/shortform#yKRZgXjt3qvzpWQEr]).
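Concretely, with the usual Newcomb payoffs ($1,000,000 in the opaque box iff one-boxing was predicted, $1,000 always in the transparent box) and an assumed 99%-accurate predictor, the conditional expected values EDT compares are:

```latex
% EDT conditions on the decision, so the predictor's accuracy shows up directly.
\begin{align*}
\mathbb{E}[U \mid \text{one-box}] &= 0.99 \cdot \$1{,}000{,}000 + 0.01 \cdot \$0 = \$990{,}000 \\
\mathbb{E}[U \mid \text{two-box}] &= 0.99 \cdot \$1{,}000 + 0.01 \cdot \$1{,}001{,}000 = \$11{,}000
\end{align*}
```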

Wednesday, December 4th 2019

No posts for December 4th 2019
Shortform [Beta]
34BrienneYudkowsky8dHere’s what Wikipedia has to say about monographs [https://en.wikipedia.org/wiki/Monograph]. “A monograph is a specialist work of writing… or exhibition on a single subject or an aspect of a subject, often by a single author or artist, and usually on a scholarly subject… Unlike a textbook, which surveys the state of knowledge in a field, the main purpose of a monograph is to present primary research and original scholarship ascertaining reliable credibility to the required recipient. This research is presented at length, distinguishing a monograph from an article.” I think it’s a bit of an antiquated term. Either that or it’s chiefly British, because as an American I’ve seldom encountered it. I know the word because Sherlock Holmes is always writing monographs. In *A Study In Scarlet*, he says, “I gathered up some scattered ash from the floor. It was dark in colour and flakey—such an ash as is only made by a Trichinopoly. I have made a special study of cigar ashes—in fact, I have written a monograph upon the subject. I flatter myself that I can distinguish at a glance the ash of any known brand, either of cigar or of tobacco.” He also has a monograph on the use of disguise in crime detection, and another on the utilities of dogs in detective work. When I tried thinking of myself as writing “monographs” on things, I broke through some sort of barrier. The things I wrote turned out less inhibited and more… me. I benefited from them myself more as well. What I mean by “monograph” is probably a little different from what either Sherlock or academia means, but it’s in the same spirit. I think of it as a photo study or a character sketch, but in non-fiction writing form. Here are my guidelines for writing a monograph.
1. Pick a topic you can personally investigate. It doesn’t matter whether it’s “scholarly”. It’s fine if other people have already written dozens of books on the subject, regardless of whether you’ve read them, just as long as you can stick your own
15TurnTrout9dListening to Eneasz Brodski's excellent reading of Crystal Society [http://www.hpmorpodcast.com/?page_id=1958], I noticed how curious I am about how AGI will end up working. How are we actually going to do it? What are those insights? I want to understand quite badly, which I didn't realize until experiencing this (so far) intelligently written story. Similarly, how do we actually "align" agents, and what are good frames for thinking about that? Here's to hoping we don't sate the former curiosity too early.
11mr-hire8dAs part of the Athena Rationality Project, we've recently launched two new prototype apps that may be of interest to LWers.
Virtual Akrasia Coach
The first is a Virtual Akrasia Coach [http://athenarationality.mattgoldenberg.net/project/akrasia-procrastination-coach/], which comes out of a few months of studying various interventions for akrasia, then testing the resulting ~25 habits/skills through internet-based lessons to refine them. We then took the resulting flowchart for dealing with akrasia, and created a "Virtual Coach" that can walk you through a work session, ensuring your work is focused, productive and enjoyable. Right now about 10% of people find it useful to use in every session, 10% of people find it useful to use when they're procrastinating, and 10% of people find it useful to use when they're practicing the anti-akrasia habits. The rest don't find it useful, or think it would be useful but don't tend to use it. I know many of you may be wondering how the idea of 25 skills fits in with the Internal Conflict model of akrasia. One way to frame the skills is that for people with chronic akrasia, we've found that they tend to have certain patterns that lead to internal conflict. For instance, one side thinks it would be good to work on something, but another side doesn't like uncertainty. You can solve this by internal double crux, or you can have a habit to always know your next action so there's no uncertainty. By using this and the other 24 tools you can prevent a good portion of internal conflict from showing up in the first place.
Habit Installer/Uninstaller App
The habit installer/uninstaller app is an attempt to create a better process for creating TAPs, and using a modified Murphyjitsu process to create setbacks for those TAPs. Here's how it works.
1. When you think of a new TAP to install, add it to your Habit Queue.
2. When the TAP reaches the top of the Habit Queue, it gives you a "Conditioning Session" - these are a set of au
6hamnox8dI could discuss everything within a few very concrete examples. A concrete example tends to create a working understanding in a way mathematical abstraction fails to. I want to give my readers real knowledge, so I do often insist on describing concepts in the world without numbers or equations or proofs. However, math exists for a reason. Some patterns generalize so strongly that you simply cannot communicate the breadth of their applications in concrete examples. You have to describe the shape of such a pattern by constraint. To do otherwise would render it a handful of independent parlor tricks instead of one sharp and heavy blade.
3cousin_it8dEdit: no point asking this question here.

Tuesday, December 3rd 2019

No posts for December 3rd 2019
