All Posts

Sorted by Magic (New & Upvoted)

Friday, December 13th 2019

Shortform [Beta]
5Matthew Barnett4hShould effective altruists be praised for their motives, or their results? It is sometimes claimed, perhaps by those who recently read The Elephant in the Brain, that effective altruists have not risen above the failures of traditional charity, and are every bit as mired in selfish motives as non-EA causes. From a consequentialist view, however, this critique is not by itself valid. To a consequentialist, it doesn't actually matter what one's motives are as long as the actual effect of one's actions is to do as much good as possible. This is the primary difference between the standard way of viewing morality, and the way that consequentialists view it. Now, if the critique were that by engaging in unconsciously selfish motives, we are systematically biasing ourselves away from recognizing the most important actions, then this critique becomes sound. Of course then the conversation shifts immediately towards what we can do to remedy the situation. In particular, it hints that we should set up a system which corrects our systematic biases. Just as a prediction market corrects for systematic biases by rewarding those who predict well, and punishing those who don't, there are similar ways to incentivize exact honesty in charity. One such method is to praise people in proportion to how much good they really achieve. Previously, it has been argued in the philosophical literature that consequentialists should praise people for motives rather than results, because punishing someone for accidentally doing something bad when they legitimately meant to help people would do nothing but discourage people from trying to do good. While clearly containing a kernel of truth, this argument is nonetheless flawed. Similar to how rewarding a student for their actual grades on a final exam will be more effective in getting them to learn the material than rewarding them merely for how hard they tried, rewarding effective altruists for the real results of their actions will incentivize
3rmoehn1dUpdated the Predicted AI alignment event/meeting calendar [https://www.lesswrong.com/posts/h8gypTEKcwqGsjjFT/predicted-ai-alignment-event-meeting-calendar]. New event: AI Safety Camp Toronto [https://aisafetycamp.com/ai-safety-camp-toronto/]
2An1lam4hLink post for a short post [https://an1lam.github.io/post/2019-12-13-alternative-explanation-of-simpsons-paradox/] I just published describing my way of understanding Simpson's Paradox.
2TekhneMakre20hThe blessing you give by asking a question. For a socially anxious overthinker, it's often hard to just, like, say stuff to people. Am I intruding on your thoughts, am I being an imposition on you, should I go? Am I secretly just trying to get something from you, and even though I really do want that thing I also don't want to trick you about it, so to be safely cooperative I should stay silent (should I go)? Now, even if that's all true, and even if all that should be solved intrinsically in the limit, it's still often good to grease the wheels of social relationships, as a bootstrapping move. So if you ask this type of nerd what they think of X, it bestows a small blessing on them: you've made the cooperation-point be solidly in the regime of yes-talking rather than of not-talking, and they can just answer without worrying so much that their choice of topic is suspicious and/or detrimental (they could still be manipulating you, but of course probably approximately not more than the norm of manipulativeness, which you're already at least in equilibrium about). (I personally have this type of anxiety, and also often ask questions / start threads.) Also, there's some mysterious(-to-me) way in which being asked a question by another human activates a mental thread / orientation that is much more difficult to access otherwise. So, by asking questions you are blessing your friends.
2TekhneMakre21hAmphicausal, autocausal. Two phenomena A and B are amphicausal if A causes B and also B causes A. Amphi = both sides, as in amphibian (= both kinds of bios (= life)). (We could also call this reciprocausal, as in reciprocal = back and forth, but that seems gauche.) Example: wind blows over the water's surface; a ripple forms, and because it sticks up above the surface, it catches the wind more, and so it grows to stick up a bit further above the surface, and so on. Height catches wind, caught wind increases height. A phenomenon is autocausal if it causes itself. We can coarsen our view of the wave example, and just say: the wave causes itself. Obviously there's no amphi- or auto-causality in a plain Bayes net. But the intuitive notion is good, and maybe makes sense in dynamic Bayes nets where we think of the phenomena A and B as being collections of variables that extend over time, so "A causes B and B causes A" cashes out as "A_t causes B_t+1, and B_t causes A_t+1". (I think there's more clarity to be had here regarding the difference between the mathematical "causes" = "has any effect on", vs the intuitive cause, which has something to do with surprising/specific conditions coming about, or something.)
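To make the dynamic-Bayes-net reading concrete, here is a tiny simulation of the wind-and-ripple example, where "A_t causes B_t+1 and B_t causes A_t+1" becomes a pair of coupled update rules. The update equations and coefficients are invented for illustration, not taken from the post.

```python
# Toy amphicausal dynamics: wave height catches wind, caught wind adds height.
# The coefficients are arbitrary illustrative choices, not a physical model.
height, caught_wind = 0.01, 0.0
for t in range(10):
    caught_wind_next = 0.5 * height           # A_t -> B_{t+1}: a taller wave catches more wind
    height_next = height + 0.8 * caught_wind  # B_t -> A_{t+1}: caught wind grows the wave
    height, caught_wind = height_next, caught_wind_next
    print(f"t={t+1}: height={height:.4f}, caught_wind={caught_wind:.4f}")
```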

Thursday, December 12th 2019

Personal Blogposts
3[Event] Seattle Secular Solstice 2019 | Seattle, WA, USA | Dec 22nd
1[Event] Bangalore SSC/LW Meetup # 20 - Dec | 2, Church Street, MG Road, Bengaluru | Dec 22nd
Shortform [Beta]
26BrienneYudkowsky1dSangha: Part 1 In years past, the word “community” conjured for me images of many people talking to each other, as at a party or a bake sale. When I thought of “finding community”, I thought of looking for a group of people that would frequently interact with each other and also me. It didn’t really sound appealing — lots of chaos, too many people talking at once, constant misunderstandings, and so forth. But I knew that I felt worse and worse over time if I never saw other people. So I entered a “community” of people with a shared interest, and therefore an excuse to spend time together, and I gradually tried to figure out how to make “community” a better experience. I failed at that, over and over, for years. In 2019, I began to succeed. I know exactly why, but I feel a little embarrassed saying it, because it sounds so cheesy. I’ll say it anyway: I succeeded because I stopped looking for community outside of myself. My new year’s resolution for this year was to “take refuge in the sangha”. Literally, “sangha” is a group of monks and nuns living together at a monastery. When I spent a summer at a Zen temple, though, the abbess there used the term much more expansively. Sometimes she meant “everybody who comes to the temple”. Sometimes she meant “everyone who practices Buddhism”. Sometimes she meant “all sentient beings” (and she used “sentient” rather broadly as well, usually including all living things plus a variety of spirits). But whenever she said “sangha”, she always seemed to suggest something about her relationship to those beings, something with the flavor of monks and nuns practicing together day in and day out, washing floors together and meeting in the meditation hall well before the sun is up. In her view of the world, the grasshoppers in Germany are supporting her practice. When I resolved to “take refuge in the sangha”, I intended to do it no matter where I was or who I was with. If it’s possible to be supported by the grasshoppers in Germany,
24jacobjacob2dRATIONALITY HAS PUBS; WE NEED GYMS Consider the difference between a pub and a gym. You go to a pub with your rationalist friends to: * hang out * discuss interesting ideas * maybe maths a bit in a notebook someone brought * gossip * get inspired about the important mission you're all on * relax * brainstorm ambitious plans to save the future * generally have a good time You go to a gym to: * exercise * that is, repeat a particular movement over and over, paying attention to the motion as you go, being very deliberate about using it correctly * gradually trying new or heavier moves to improve in areas you are weak in * maybe talk and socialise -- but that is secondary to your primary focus of becoming stronger * in fact, it is common knowledge that the point is to practice, and you will not get socially punished for trying really hard, or stopping a conversation quickly and then just focusing on your own thing in silence, or making weird noises or grunts, or sweating... in fact, this is all expected * not necessarily have a good time, but invest in your long-term health, strength and flexibility One key distinction here is effort. Going to a bar is low effort. Going to a gym is high effort. In fact, going to the gym requires such a high effort that most people have a constant nagging guilt about doing it. They proceed to set up accountability systems with others, hire personal trainers, use habit installer apps, buy gym memberships as commitment devices, use clever hacks to always have their gym bag packed and ready to go, introspect on their feelings of anxiety about it and try to find work-arounds or sports which suit them, and so forth... People know gyms are usually a schlep, yet they also know going there is important, so they accept that they'll have to try really hard to build systems which get them exercising. However, things seem different for rationality. I've often heard people go "this rationality stuff doesn't se
12TekhneMakre2dWish: "Wonks, Bureaucrats, Sociopaths" hidden-identity game. I wish there were a hidden-identity politicking game, like Werewolf, Avalon, Secret Hitler, etc., that has the following properties: some kind of consensus will eventually be reached; some players want the consensus to match some "objective truth" (these are the Wonks, and the "objective truth" is an actual thing about which all players get some private information); some players want to bend the consensus to agree with something private and particular to that one player (these are the Sociopaths); and most players just want to agree with what the eventual consensus is, and not get shot in the process (these are the Bureaucrats). In my imagination, the Sociopaths have some power (maybe more sway on who gets assassinated or otherwise handicapped, or a codified greater ability to lie) such that by default, Bureaucrats are incentivized to try to latch on to whatever position the dominant Sociopath seems to be driving the consensus toward, contra the Bureaucrats' incentive to predict the truth; and the Wonks are incentivized to prevent the Sociopaths from distinguishing them from the Bureaucrats (and targeting them for assassination). One dynamic I'm interested in is Wonks signaling to each other that they're Wonks by choosing truth over power (that is, choosing to allow the group to keep aggregating info on what the truth is, over seizing power the way a non-Wonk would). Bureaucrats can choose to act like Sociopaths but with more leeway to shift their opinion (since they just care about agreeing with consensus, so they can appear to care about truth so that they keep power and exert a "clarifying force" that makes the consensus collapse prematurely from the truth-seeking perspective). Wonks can get Bureaucrats to switch to behaving mostly like Wonks, except Bureaucrats will always prefer a clear consensus over a correct one. Would be cool to tune the game so that those strategies actually come apart in practice.
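A minimal sketch of how the three incentive structures could be encoded for playtesting. The role payoffs and numbers below are hypothetical choices made for illustration; nothing here comes from an existing game.

```python
def score(role, vote, consensus, truth, pet_claim=None, shot=False):
    """Toy payoffs: Wonks want consensus == truth, Sociopaths want consensus ==
    their private pet claim, Bureaucrats just want to have agreed with the
    eventual consensus (and nobody wants to get shot)."""
    if shot:
        return -5
    if role == "Wonk":
        return 3 if consensus == truth else 0
    if role == "Sociopath":
        return 3 if consensus == pet_claim else 0
    if role == "Bureaucrat":
        return 1 if vote == consensus else -1
    raise ValueError(f"unknown role: {role}")

# One example round: the hidden truth is "A" but the consensus lands on "B".
print(score("Wonk", vote="A", consensus="B", truth="A"))                      # 0
print(score("Sociopath", vote="B", consensus="B", truth="A", pet_claim="B"))  # 3
print(score("Bureaucrat", vote="B", consensus="B", truth="A"))                # 1
```

Tuning the game would then amount to adjusting these payoffs (and the assassination rules) until Wonk-like and Bureaucrat-like play actually come apart in practice.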
9toonalfrink1dI have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all. It seems quite obvious to me that these people equate their ability to empathize with an animal with the ability for the animal to be conscious, and it seems quite obvious to me that this is a case of the mind projection fallacy. Empathy is just a simulation. You can't actually see another mind. If you're going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find these correlates in animals. You don't look at animal behavior at all. We have absolutely no reason to believe that behavior correlates with consciousness. That's just your empathy getting in the way. The same empathy that attributes feelings to stuffed animals.
8ozziegooen1dCharity investigators could be time-effective by optimizing non-cause-neutral donations. There are a lot more non-EA donors than EA donors. It may also be the case that EA donation research is somewhat saturated. Say you think that $1 donated to the best climate change intervention is worth 1/10th that of $1 for the best AI-safety intervention. But you also think that your work could increase the efficiency of $10mil of AI donations by 0.5%, or that it could instead increase the efficiency of $50mil of climate change donations by 10%. Then, for you to maximize expected value, your time is best spent optimizing the climate change interventions. The weird thing here may be in explaining this to the donors. "Yea, I'm spending my career researching climate change interventions, but my guess is that all these funders are 10x less effective than they would be by donating to other things." While this may feel strange, both sides would benefit; the funders and the analysts would both be maximizing their goals. Separately, there's a second plus to teaching funders to be effectiveness-focused: it's possible that this will eventually lead some of them to optimize further. -------------------------------------------------------------------------------- I think this may be the case in our current situation. There honestly aren't too many obvious places for "effective talent" to go right now. There are a ton of potential funders out there who wouldn't be willing to go to core EA causes any time soon, but may be able to be convinced to give much more effectively in their given areas. There could potentially be a great deal of work to be done doing this sort of thing.
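Working through the numbers in the example above, treating the 1/10 exchange rate and the stated percentage improvements as given:

```python
# Value measured in "AI-safety-equivalent dollars"; climate dollars count for 1/10.
ai_value = 10_000_000 * 0.005 * 1.0      # improve $10M of AI donations by 0.5%
climate_value = 50_000_000 * 0.10 * 0.1  # improve $50M of climate donations by 10%, weighted 1/10
print(ai_value)       # ~50,000 equivalent dollars
print(climate_value)  # ~500,000 equivalent dollars, about 10x the impact
```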

Wednesday, December 11th 2019

Personal Blogposts
Shortform [Beta]
7Vanessa Kosoy2dIn the past I considered the learning-theoretic approach to AI theory [https://www.alignmentforum.org/posts/5bd75cc58225bf0670375575/the-learning-theoretic-ai-alignment-research-agenda] as somewhat opposed to the formal logic approach popular in MIRI [https://www.alignmentforum.org/posts/3qXE6fK47JhSfkpnB/do-sufficiently-advanced-agents-use-logic] (see also discussion [https://www.alignmentforum.org/posts/3qXE6fK47JhSfkpnB/do-sufficiently-advanced-agents-use-logic#fEKc88NbDWZavkW9o] ): * Learning theory starts from formulating natural desiderata for agents, whereas "logic-AI" usually starts from postulating a logic-based model of the agent ad hoc. * Learning theory naturally allows analyzing computational complexity whereas logic-AI often uses models that are either clearly intractable or even clearly incomputable from the outset. * Learning theory focuses on objects that are observable or finite/constructive, whereas logic-AI often considers objects that are unobservable, infinite and unconstructive (which I consider to be a philosophical error). * Learning theory emphasizes induction whereas logic-AI emphasizes deduction. However, recently I noticed that quasi-Bayesian reinforcement learning [https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem#X6fFvAHkxCPmQYB6v] and Turing reinforcement learning [https://www.alignmentforum.org/posts/3qXE6fK47JhSfkpnB/do-sufficiently-advanced-agents-use-logic#fEKc88NbDWZavkW9o] have very suggestive parallels to logic-AI. TRL agents have beliefs about computations they can run on the envelope: these are essentially beliefs about mathematical facts (but, we only consider computable facts and computational complexity plays some role there). QBRL agents reason in terms of hypotheses that have logical relationships between them: the order on functions corresponds to implication, taking the minimum of two functions corresponds to logical "and", taking the concave hull of two func
5DanielFilan2dBetter to concretise 3 ways than 1 if you have the time. Here's a tale I've heard but not verified: in the good old days, Intrade had a prediction market on whether Obamacare would become law, which resolved negative, due to the market's definition of Obamacare. Sometimes you're interested in answering a vague question, like 'Did Donald Trump enact a Muslim ban in his first term' or 'Will I be single next Valentine's day'. Standard advice is to make the question more specific [https://www.lesswrong.com/posts/XosKB3mkvmXMZ3fBQ/specificity-your-brain-s-superpower] and concrete into something that can be more objectively evaluated. I think that this is good advice. However, it's inevitable that your concretisation may miss out on aspects of the original vague question that you cared about. As such, it's probably better to concretise the question multiple ways which have different failure modes. This is sort of obvious for evaluating questions about things that have already happened, like whether a Muslim ban was enacted, but seems to be less obvious or standard in the forecasting setting. That being said, sometimes it is done - OpenPhil's animal welfare series of questions [https://www.metaculus.com/questions/?search=cat:series--animal-welfare] seems to me to basically be an example - to good effect. This procedure does have real costs. Firstly, it's hard to concretise vague questions, and concretising multiple times is harder than concretising once. It's also hard to predict multiple questions, especially if they're somewhat independent as is necessary to get the benefits, meaning that each question will be predicted less well. In a prediction market context, this may well manifest in having multiple thin, unreliable markets instead of one thick and reliable one.
4mr-hire2dMy biggest win lately (Courtesy of Elliot Teperman [https://www.facebook.com/elliot.teperman?__tn__=%2CdK-R-R&eid=ARDXTYGkvPIjD_TeT_Efe4IT5g5pXJ2SBqbv1M-ws-k0RiajAFSzorYCFXXxPukb5xThhwgL-7ELcrRO&fref=mentions] ) in regards to self love is to get in the habit of thinking of myself as the parent of a child (myself) who I have unconditional love for, and saying what that parent would say. An unexpected benefit of this is that I've started talking like this to others. Like, sometimes my friends just need to hear that I appreciate them as a human being, and am proud of them for what they accomplished, and it's not the type of thing I used to say at all. And so do I; I didn't realize how much I needed to hear that sort of thing from myself until I started saying it regularly. One could call this Internal Parent Systems. Not to be confused with the default installed one that many of us have that judges, criticizes, or blames in our parents' voice :). A close cousin of Qiaochu Yuan [https://www.facebook.com/qiaochu?__tn__=%2CdK-R-R&eid=ARD9wYBDhpDinei064JF6oy43LuvbMqe-sbMqVT3fnKzU7DBedTodCFZI2Mk-96uBHSIiCydZeZqNH9j&fref=mentions] 's Internal Puppy Systems
3TekhneMakre3dRepeated phrases stick in the mind. Repeated phrases heard in different contexts stick in the mind. Even if they're just repeated without additional information, repeated phrases stick in the mind. Repeated phrases stick in the mind and make themselves more available to say, more available to think, more available to use to recognize people who are "on the same wavelength as you". Other forces in the mind latch onto repeated phrases stuck in the mind. External forces with external goals repeat phrases so they'll stick in your mind.
2TekhneMakre3dWish: sememic translations. Most translations are not word-for-word, but instead are paraphrases of the original text; the translator interprets idioms, corrects apparent mistakes, rearranges the order of words and phrases to clarify the sentence structure according to the target language's usual structure, and so on. Obviously this is mostly good, but in some cases one wants to cleave more closely to the "raw" expression of the original. There are interlinear translations. But I've never seen, and wish to see, a good "sememic" translation . A sememe is a unit of meaning; e.g. the word "organizing" has (at least) three sememes: "organ", "iz" = make into the form of, "ing" = ongoing action (or something, IDK). A sememic translation (or maybe "hyperliteral translation") would aspire to directly translate the sememes of the source text into child-like gestures in the target language. For example, the German word "Wahrheit" might be translated as "True-ness", rather than the more standard "truth", and "Wahrscheinlichkeit" might be translated as "True-seeming-like-ness", as opposed to the more standard "probability". Obviously this would be a lot of work, and might only be worth it in cases where the author of the original is a master of language (choosing words with great care), but I think it would prove thought-provoking. One might even make a sememic translation of a text back into its own language! (Sort of related, and a fun read: https://warwick.ac.uk/fac/cross_fac/complexity/people/students/dtc/students2011/maitland/fun/ [https://warwick.ac.uk/fac/cross_fac/complexity/people/students/dtc/students2011/maitland/fun/] )

Tuesday, December 10th 2019

Personal Blogposts
4[Event] San Francisco Meetup: Projects | 170 Hawthorne St, San Francisco, CA 94107, USA | Dec 17th
1[Event] Rational Ottawa - Against Rationality | 283 Elgin Street, Ottawa | Dec 14th
Shortform [Beta]
13ozziegooen3dI feel like I've long underappreciated the importance of introspectability in information & prediction systems. Say you have a system that produces interesting probabilities p_n for various statements. The value that an agent gets from them does not correlate directly with the accuracy of these probabilities, but rather with the expected utility gain they get after using the information in these probabilities in corresponding Bayesian-approximating updates. Perhaps more directly, something related to the difference between one's prior and posterior after updating on p_n. Assuming that prediction systems produce varying levels of quality results, agents will need to know more about these predictions to update optimally. A very simple example would be something like a bunch of coin flips. Say there were 5 coins flipped, I see 3 of them, and I want to estimate the number that were heads. A predictor tells me that their prediction has a mean probability of 40% heads. This is useful, but what would be much more useful is a list of which specific coins the predictor saw and what their values were. Then I could get a much more confident answer; possibly a perfect answer. Financial markets are very black-box like. Many large changes in company prices never really get explained publicly. My impression is that no one really understands the reasons for many significant market moves. This seems really suboptimal and I'm sure no one wanted this property to be the case.[1] Similarly, when trying to model the future of our own prediction capacities, I really don't think they should be like financial markets in this specific way. [1] I realize that participants in the market try to keep things hidden, but I mean the specific point that few people think that "Stock Market being a black box" = "A good thing for society."
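A concrete version of the coin example, with made-up observations: the predictor's raw data supports a much sharper answer than its 40% summary alone. The specific coins and values below are invented for illustration.

```python
from itertools import product

# I flipped 5 fair coins and saw three of them: coins 0, 1, 2 came up H, T, H.
my_obs = {0: "H", 1: "T", 2: "H"}

# (a) Using only my own observations (unseen coins treated as fair), the total
# number of heads is 2 + Binomial(2, 0.5): spread over 2, 3, or 4.
dist = {}
for unseen in product("HT", repeat=2):
    total = sum(v == "H" for v in my_obs.values()) + unseen.count("H")
    dist[total] = dist.get(total, 0) + 0.25
print(dist)  # {4: 0.25, 3: 0.5, 2: 0.25}

# (b) The predictor reveals its raw data: it saw coins 3 and 4, both tails.
# Now the answer collapses to a point estimate.
predictor_obs = {3: "T", 4: "T"}
known = {**my_obs, **predictor_obs}
print(sum(v == "H" for v in known.values()))  # 2 heads, exactly
```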
12aaq4dScattered thoughts on how the rationalist movement has helped me: On the topic of rationalist self-improvement [https://www.lesswrong.com/posts/8xLtE3BwgegJ7WBbf/is-rationalist-self-improvement-real] , I would like to raise the point that simply feeling as though there's a community of people who get me and that I can access when I want to has been hugely beneficial to my sense of happiness and belonging in the world. That generates a lot of hedons for me, which then on occasion allow me to "afford" doing other things I wouldn't otherwise, like spend a little more time studying mathematics or running through Anki flashcards. There's a part of me that feels like I'm not just building up this knowledge for myself, but for the future possible good of "my people". I might tie together stuff in a way that other people find interesting, or insightful, or at least enjoy reading about, and that's honestly fricking awesome and blows standard delayed-gratification "self improvement" tactics outta the water 10/10 would recommend. Also there's the whole thing that Ozy who is rat-almost-maybe-adjacent wrote the greatest summary of the greatest dating advice book I ever read, and I literally read that effortpost every day for like 8 months while I was learning how to be a half-decent romantic option, and holy SHIT is my life better for that. But again - nothing specific to the rationalist techniques themselves there; the value of the community was pointing me to someone who thinks and writes in a way my brain sees and says "mmm yes tasty good word soup i liek thanke" and then that person happened to write a post that played a big role in helping me with a problem that was causing me a ton of grief. TLDR rationalists > rationalism
11dspeyer4dThis is a public key I wanted to document exists in a rationalist-associated place: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDSU/aJmWPV/lHsh5TiePzWimK0/Bj4VlsykTYucHv5PG+b3ogUe8zjcBqzW1Dl0pIJj+KYaEdxk5KYhEEImyaP6umMPnlKvL4VqR3lXebvTAnGxcWN27ZJDqcfgGI/Ilcf1JVEjA6g6DyvEOx3xeqBUH+oPvo8Z/VmyZjAFuuWwQ== dspeyer@dspeyerheim
6ozziegooen3dHere's an in-progress hierarchy of what's needed for information to be most useful to an organization or other multi-agent system. I'm sure there must be other very similar hierarchies out there, but I don't currently know of any quite like this. Say you've come up with some cool feature that Apple could include in its next phone. You think this is a great idea and they should add it in the future. You're outside of Apple, so the only way you have of interacting with them is by sending information through various channels. The question is: what things should you first figure out to understand how to do this? First, you need to have identified an improvement. You've done that, so you've gotten through the first step. Second, for this to be incorporated, it should make sense from Apple's perspective. If it comes out that the costs of adding the feature, including opportunity costs, outweigh the benefits, then it wouldn't make sense to them. Perhaps you could deceive them into incorporating the feature, but it would be against their interests. So you should hopefully get information about Apple's utility function and identify an intervention that would implement your improvement while being positive in expected value to them. Of course, just because it could be good for Apple does not mean that the people necessary to implement it would be in favor of doing so. Perhaps this feature involves the front-facing camera, and it so happens that people in charge of the decisions around the front-facing camera have some strange decision function and would prefer not being asked to do more work. To implement your change, these people would have to be convinced. A rough estimation for that would be an analysis that suggests that taking this feature on would have positive expected value for their utility functions. Again, it's possible that isn't a requirement, but if so, you may need to effectively deceive people. Once you have expected value equations showing that a specifi
4G Gordon Worley III3dAfter seeing another LW user (sorry, forgot who) mention this post [https://slatestarcodex.com/2014/03/02/the-comment-policy-is-victorian-sufi-buddha-lite/] in their commenting guidelines, I've decided to change my own commenting guidelines to the following, matching pretty closely to the SSC commenting guidelines that I forgot existed until just a couple days ago: I like this because it's simple and it says what rather than how. My old guidelines were all about how: More generally, I think the SSC commenting guidelines might be a good cluster for those of us who want LW comment sections to be "nice" and so mark our posts as norm-enforcing. If this catches on, this might help deal with finding the few clusters of commenting norms that people want, without having lots of variation between authors.

Sunday, December 8th 2019

Shortform [Beta]
40BrienneYudkowsky5dSuppose you wanted to improve your social relationships on the community level. (I think of this as “my ability to take refuge in the sangha”.) What questions might you answer now, and then again in one year, to track your progress? Here’s what’s come to mind for me so far. I’m probably missing a lot and would really like your help mapping things out. I think it’s a part of the territory I can only just barely perceive at my current level of development. * If something tragic happened to you, such as a car crash that partially paralyzed you or the death of a loved one, how many people can you name whom you'd find it easy and natural to ask for help with figuring out your life afterward? * For how many people is it the case that if they were hospitalized for at least a week you would visit them in the hospital? * Over the past month, how lonely have you felt? * In the past two weeks, how often have you collaborated with someone outside of work? * To what degree do you feel like your friends have your back? * Describe the role of community in your life. * How do you feel as you try to describe the role of community in your life? * When's the last time you got angry with someone and confronted them one on one as a result? * When's the last time you apologized to someone? * How strong is your sense that you're building something of personal value with the people around you? * When's the last time you spent more than ten minutes on something that felt motivated by gratitude? * When a big change happens in your life, such as losing your job or having a baby, how motivated do you feel to share the experience with others? * When you feel motivated to share an experience with others, how satisfied do you tend to be with your attempts to do that? * Do you know the love languages of your five closest friends? To what extent does that influence how you behave toward them? * Does it seem to you that your friends know your love
7AlexMennen6dTheorem: Fuzzy beliefs (as in https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem#X6fFvAHkxCPmQYB6v [https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem#X6fFvAHkxCPmQYB6v] ) form a continuous DCPO. (At least I'm pretty sure this is true. I've only given proof sketches so far) The relevant definitions: A fuzzy belief over a set X is a concave function ϕ:ΔX→[0,1] such that sup(ϕ)=1 (where ΔX is the space of probability distributions on X). Fuzzy beliefs are partially ordered by ϕ≤ψ⟺∀μ∈ΔX:ϕ(μ)≥ψ(μ). The inequalities reverse because we want to think of "more specific"/"less fuzzy" beliefs as "greater", and these are the functions with lower values; the most specific/least fuzzy beliefs are ordinary probability distributions, which are represented as the concave hull of the function assigning 1 to that probability distribution and 0 to all others; these should be the maximal fuzzy beliefs. Note that, because of the order-reversal, the supremum of a set of functions refers to their pointwise infimum. A DCPO (directed-complete partial order) is a partial order in which every directed subset has a supremum. In a DCPO, define x<<y to mean that for every directed set D with supD≥y, ∃d∈D such that d≥x. A DCPO is continuous if for every y, y=sup{x∣x<<y}. Lemma: Fuzzy beliefs are a DCPO. Proof sketch: Given a directed set D, (supD)(μ)=min{d(μ)∣d∈D} is concave, and {μ∣(supD)(μ)=1}=⋂d∈D{μ∣d(μ)=1}. Each of the sets in that intersection is non-empty, hence so are finite intersections of them since D is directed, and hence so is the whole intersection since ΔX is compact. Lemma: ϕ<<ψ iff {μ∣ψ(μ)=1} is contained in the interior of {μ∣ϕ(μ)=1} and for every μ such that ψ(μ)≠1, ϕ(μ)>ψ(μ). Proof sketch: If supD≥ψ, then ⋂d∈D{μ∣d(μ)=1}⊆{μ∣ψ(μ)=1}, so by compactness of ΔX and directedness of D, there should be d∈D such that {μ∣d(μ)=1}⊆int({μ∣ϕ(μ)=1}). Similarly, for each μ such that ψ(μ)≠1, there should be dμ∈D s
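A concrete instance of the order on a two-outcome space, using the definitions quoted above; the particular functions are my own illustrative choices, not from the post.

```latex
% Illustrative example: X = {H, T}, so \Delta X \cong [0,1] (p = probability of H).
\[
  \varphi_1(p) \equiv 1
  \qquad\text{(maximal Knightian uncertainty: every distribution fully plausible)}
\]
\[
  \varphi_2(p) = \min\!\left(1,\ \tfrac{p}{0.4},\ \tfrac{1-p}{0.4}\right)
  \qquad\text{(roughly: ``the bias is in $[0.4, 0.6]$'')}
\]
% Both are concave with sup = 1, and \varphi_1(\mu) \ge \varphi_2(\mu) for every \mu,
% so \varphi_1 \le \varphi_2 in the fuzzy-belief order: \varphi_2 is more specific, hence greater.
```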

Saturday, December 7th 2019

Personal Blogposts
Shortform [Beta]
11Adam Scholl7dSo apparently Otzi the Iceman [https://en.wikipedia.org/wiki/%C3%96tzi] still has a significant amount [https://sci-hub.tw/https://link.springer.com/article/10.1007/s00018-013-1360-y] of brain tissue. Conceivably some memories are preserved?
10Raemon7dIn response to lifelonglearner's comment I did some experimenting with making the page a bit bolder. Curious what people think of this screenshot where "unread" posts are bold, and "read" posts are "regular" (as opposed to the current world, where "unread" posts are "regular", and read posts are light-gray).

Friday, December 6th 2019

Personal Blogposts
Shortform [Beta]
42BrienneYudkowsky7dSome advice to my past self about autism: Learn about what life is like for people with a level 2 or 3 autism diagnosis. Use that reference class to predict the nature of your problems and the strategies that are likely to help. Only after making those predictions, adjust for your own capabilities and circumstances. Try this regardless of how you feel about calling yourself autistic or seeking a diagnosis. Just see what happens. Many stereotypically autistic behaviors are less like symptoms of an illness, and more like excellent strategies for getting shit done and having a good life. It’s just hard to get them all working together. Try leaning into those behaviors and see what’s good about them. For example, you know how when you accidentally do something three times in a row, you then feel compelled to keep doing it the same way at the same time forever? Studying this phenomenon in yourself will lead you to build solid and carefully designed routines that allow you to be a lot more reliably vibrant. You know how some autistic people have one-on-one aides, caretakers, and therapists who assist in their development and day-to-day wellbeing? Read a bit about what those aides do. You’ll notice right away that the state of the art in this area is crap, but try to imagine what professional autism aides might do if they really had things figured out and were spectacular at their jobs. Then devote as many resources as you can spare for a whole year to figuring out how to perform those services for yourself. It seems to me that most of what’s written about autism by neurotypicals severely overemphasizes social stuff. You’ll find almost none of it compelling. Try to understand what’s really going on with autism, and your understanding will immediately start paying off in non-social quality of life improvements. Keep at it, and it’ll eventually start paying off in deep and practical social insights as well (which I know you don’t care about right now, but it’s true). I
21Raemon8dOver in this thread, Said asked [https://www.lesswrong.com/posts/5zSbwSDgefTvmWzHZ/affordance-widths#iM4Jfa3ThJcFii2Pm] the reasonable question "who exactly is the target audience with this Best of 2018 book?" I'm working on a post that goes into a bit more detail about the Review Phase, and, to be quite honest, the whole process is a bit in flux – I expect us (the LW team as well as site participants) to learn, over the course of the review process, what aspects of it are most valuable. But, a quick "best guess" answer for now. I see the overall review process as having two "major phases": * Phase 1: Nomination/Review/Voting/Post-that-summarizes-the-voting * Phase 2: Compilation and Publication I think the first phase should be oriented entirely around "internal consumption" – figuring out what epistemic standard to hold ourselves to, and how, so that we can do better in the future. (As well as figuring out what ideas we've developed that should be further built upon). Any other benefits are incidental. The final book/sequence is at least somewhat externally facing. I do expect it to be some people's first introduction to LessWrong, and other people's "one thing they read from LW this year". And at least some consideration should be given to those people's reading experience (which will be lacking a lot of context). But my guess is that should come more in the form of context-setting editor commentary than in decisions about what to include. I think “here are the fruits of our labors; take them and make use of them” is more of what I was aiming for. (Although "what standards are we internally holding ourselves to, and what work should we build towards?" is still an important function of the finished product). It'd be nice if people were impressed, but a better frame for that goal is "Outsiders looking in can get an accurate picture of how productive our community is, and what sort of things we do", and maybe they are impressed by that or maybe not. (I re

Thursday, December 5th 2019

Frontpage Posts
Shortform [Beta]
8BrienneYudkowsky8dThread on The Abolition of Man by C. S. Lewis
8Raemon9dAfter this week's stereotypically sad experience with the DMV.... (spent 3 hours waiting in lines, filling out forms, finding out I didn't bring the right documentation, going to get the right documentation, taking a test, finding out somewhere earlier in the process a computer glitched and I needed to go back and start over, waiting more, finally getting to the end only to learn I was also missing another piece of identification which rendered the whole process moot) ...and having just looked over a lot of 2018 posts [https://www.lesswrong.com/nominations] investigating coordination failure... I find myself wondering if it's achievable to solve one particular way in which bureaucracy is terrible: the part where each node/person in the system only knows a small number of things, so you have to spend a lot of time rehashing things, and meanwhile can't figure out if your goal is actually achievable. (While attempting to solve this problem, it's important to remember that at least some of the inconvenience of bureaucracy may be an active ingredient [https://slatestarcodex.com/2018/08/30/bureaucracy-as-active-ingredient/] rather than inefficiency. But at least in this case it didn't seem so: driver's licenses aren't a conserved resource that the DMV wants to avoid handing out. If I had learned early on that I couldn't get my license last Monday it would have not only saved me time, but saved DMV employee hassle) I think most of the time there's just no incentive to really fix this sort of thing (while you might have saved DMV employee hassle, you probably wouldn't save them time, since they still just work the same 8 hour shift regardless, and if you're the manager of a DMV you probably don't care too much about your employees having slightly nicer days). But, I dunno man, really!? Does it seem like at least Hot New Startups could be sold on software that, I dunno, tracks all the requirements of a bureaucratic process and tries to compile "will this work?" at sta
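A minimal sketch of what compiling "will this work?" could mean in practice: represent the process's requirements declaratively and check them against what the applicant has before anyone waits in a line. The requirement names and accepted documents below are invented for illustration.

```python
# Hypothetical requirement checklist for a bureaucratic process (illustrative only).
REQUIREMENTS = {
    "proof_of_identity": {"passport", "birth_certificate"},
    "proof_of_residency": {"utility_bill", "lease"},
    "payment": {"card", "cash", "check"},
}

def will_this_work(documents):
    """Return the list of unmet requirements; empty means the visit can succeed."""
    return [req for req, accepted in REQUIREMENTS.items()
            if not (documents & accepted)]

missing = will_this_work({"passport", "card"})
print(missing or "Good to go")  # -> ['proof_of_residency']
```

The check itself is trivially computable up front; the hard part is getting the requirements into machine-readable form and keeping them current.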
4AABoyles8dAttention Conservation Warning: I envision a model which would demonstrate something obvious, and decide the world probably wouldn't benefit from its existence. The standard publication bias is that we must be 95% certain a described phenomenon exists before a result is publishable (at which time it becomes sufficiently "confirmed" to treat the phenomenon as a factual claim). But the statistical confidence of a phenomenon conveys interesting and useful information regardless of what that confidence is. Consider the space of all possible relationships: most of these are going to be absurd (e.g. the relationship between number of minted pennies and number of atoms in moons of Saturn), and exhibit no correlation. Some will exhibit weak correlations (in the range of p = 0.5). Those are still useful evidence that a pathway to a common cause exists! The universal prior on random relationships should be roughly zero, because most relationships will be absurd. What would science look like if it could make efficient use of the information disclosed by presently unpublishable results? I think I can generate a sort of agent-based model to imagine this. Here's the broad outline: 1. Create a random DAG representing some complex related phenomena. 2. Create an agent which holds beliefs about the relationship between nodes in the graph, and updates its beliefs when it discovers a correlation with p > 0.95. 3. Create a second agent with the same belief structure, but which updates on every experiment regardless of the correlation. 4. On each iteration have each agent select two nodes in the graph, measure their correlation, and update their beliefs. Then have them compute the DAG corresponding to their current belief matrix. Measure the difference between the DAG they output and the original DAG created in step 1. I believe that both agents will converge on the correct DAG, but the un-publication-biased agent will converge much more rapidly. There a
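A simplified skeleton of the model outlined above, assuming a linear-Gaussian DAG and using |correlation| as a loose proxy for the graded agent's belief update; it tracks pairwise-association beliefs rather than re-deriving a full DAG each iteration, so it is a sketch of the idea rather than the full proposal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 8  # nodes in the hidden causal graph

# Step 1: a random DAG with linear relationships (strictly upper-triangular weights).
edge_present = np.triu(rng.random((N, N)) < 0.3, k=1)
weights = edge_present * rng.normal(0.0, 1.0, (N, N))

def run_study(n=200):
    """Sample n observations from the linear-Gaussian DAG."""
    x = np.zeros((n, N))
    for j in range(N):  # columns are already in topological order
        x[:, j] = x @ weights[:, j] + rng.normal(0.0, 1.0, n)
    return x

# Steps 2-3: two agents with the same question but different update rules.
beliefs_thresholded = np.zeros((N, N))  # only updates when p < 0.05
beliefs_graded = np.zeros((N, N))       # records evidence strength every time

# Step 4: repeatedly pick a pair of nodes, measure their correlation, update both agents.
for _ in range(300):
    i, j = rng.choice(N, size=2, replace=False)
    x = run_study()
    r, p = stats.pearsonr(x[:, i], x[:, j])
    if p < 0.05:
        beliefs_thresholded[i, j] = beliefs_thresholded[j, i] = 1.0
    beliefs_graded[i, j] = beliefs_graded[j, i] = abs(r)

# Compare each agent's recovered association structure to the true (undirected) edges.
truth = (edge_present | edge_present.T).astype(float)
print("threshold-only agent error:", np.abs(truth - beliefs_thresholded).mean())
print("update-on-everything error:", np.abs(truth - (beliefs_graded > 0.2)).mean())
```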
2Chris_Leong9dEDT agents handle Newcomb's problem as follows: they observe that agents who encounter the problem and one-box do better on average than those who encounter the problem and two-box, so they one-box. That's the high-level description, but let's break it down further. Unlike CDT, EDT doesn't worry about the fact that there may be a correlation between your decision and hidden state. It assumes that if the visible state before you made your decision is the same, then the counterfactuals generated by considering your possible decisions are comparable. In other words, any differences in hidden state, such as you being a different agent or money being placed in the box, are attributed to your decision (see my previous discussion here [https://www.lesswrong.com/posts/SbAofYCgKkaXReDy4/shortform#yKRZgXjt3qvzpWQEr])
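A worked version of that computation with the usual Newcomb payoffs ($1M in the opaque box, $1k in the transparent box) and an assumed 99%-accurate predictor; these numbers are standard illustrative choices, not from the post.

```python
# EDT evaluates actions by conditioning on having taken them.
accuracy = 0.99          # assumed predictor accuracy (illustrative)
M, K = 1_000_000, 1_000  # opaque-box and transparent-box payoffs

# E[payoff | one-box]: the opaque box is full with probability `accuracy`.
ev_one_box = accuracy * M + (1 - accuracy) * 0

# E[payoff | two-box]: the opaque box is full only if the predictor erred.
ev_two_box = (1 - accuracy) * (M + K) + accuracy * K

print(ev_one_box, ev_two_box)  # 990000.0 vs 11000.0 -> EDT one-boxes
```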
