All Posts


Friday, January 24th 2020

Shortform [Beta]
15 mr-hire 7h FITTING IN AND THE RATIONALITY COMMUNITY

One of my biggest learning experiences over the last few years was moving to the Bay Area and attempting to be accepted into the "Rationality Tribe". When I first took my CFAR workshop years ago and interacted with the people in the group, I was enamored. A group of people who were into saving the world, self-improvement, understanding their own minds, connecting with others - I felt like I had found my people. A few short months later I moved to the Bay Area.

I had never been good at joining groups or tribes. From a very early age, I made my friend group (sometimes very small) by finding solid individuals who could connect with my particular brand of manic, ambitious, and open, and bringing them together through my own events and hangouts. In Portland, where I was before moving to the Bay, I really felt I had a handle on this: meeting people at events (knowing there weren't many who would connect with me in Portland), then regularly hosting my own events like dinner parties and meetups to bring together the best people.

Anyway, when I got to the Bay, I for the first time tried really hard to be accepted into existing tribes. Not only did I finally think I had found a large group of people I would fit in with, I was also operating under the assumption that I needed to be liked by all these people because they were allies in changing the world for the better. And honestly, this made me miserable. While I did find a few solid people I really enjoyed, trying to be liked and accepted by the majority of people in the rationality community was an exercise in frustration - being popular has always run counter to my ability to express myself honestly and openly, and I kept having to bounce between the two choices.

And the thing is, I would go as far as to say many people in the rationality community experience this same frustration. They found a group that they feel like should be their tribe, but they really don't feel a c
6 tragedyofthecomments 1d Probably not-so-hot take: The Doomsday Clock is an out-of-date metaphor and should be replaced. I think it was valuable during the Cold War for the Doomsday Clock to be a representation of risk that everyone could easily recognize and talk about, but I think it is now likely doing more harm than good.

Bad things:
- Trying to include many types of risks: The Doomsday Clock was originally a representation of the risk of large-scale nuclear war; now it includes climate, biorisk, and AI. Tracking progress and risk in these fields is complex. Tracking them all at once, it's not really clear what an increase or decrease in the clock means, or whether you should trust the risk evaluations from these disparate fields. (Oh, also, it looks like Anders Sandberg already said something like this.)
- Adding too much granularity (now with seconds!): This seems like a move made because they want the clock to move forward to give people a sense of urgency, but it was already really close to midnight. It probably should have been moved much further away from midnight when the Cold War ended, and increased or decreased depending on the stability of current nuclear deals.

Qualities I'd like in new symbols of potential global catastrophes:
- Representing specific global risks
- Easy to explain what heuristics/data are being used to alter the state of the new symbol
- Simple representation
4 Donald Hobson 9h BUT NOBODY WOULD BE THAT STUPID!

Here is a flawed dynamic in group conversations, especially among large groups of people with no common knowledge. Suppose everyone is trying to build a bridge.

Alice: We could make a bridge by just laying a really long plank over the river.
Bob: According to my calculations, a single plank would fall down.
Carl: Scientists Warn Of Falling Down Bridges, Panic.
Dave: No one would be stupid enough to design a bridge like that; we will make a better design with more supports.
Bob: Do you have a schematic for that better design?

And, at worst, the cycle repeats. The problem here is Carl. The message should be:

Carl: At least one attempt at designing a bridge is calculated to show the phenomenon of falling down. It is probable that many other potential bridge designs share this failure mode. In order to build a bridge that won't fall down, someone will have to check any designs for falling-down behavior before they are built.

This entire dynamic plays out the same whether the people actually deciding on building the bridge are incredibly cautious, never approving a design they weren't confident in, or totally reckless. The probability of any bridge actually falling down in the real world depends on their caution. But the process of cautious bridge builders finding a good design looks like them rejecting lots of bad ones. If the rejection of bad designs is public, people can accuse you of attacking a strawman; they can say that no one would be stupid enough to build such a thing. If they are right that no one would be stupid enough to build such a thing, it's still helpful to share the reason the design fails.
3 Linda Linsefors 11h I'm basically ready to announce the next Technical AI Safety Unconference (TAISU), but I have hit a bit of decision paralysis as to what dates it should be. If you are reasonably interested in attending, please help me by filling in this doodle [https://doodle.com/poll/gu5gk2uwraiuebt7]. If you don't know what this is about, have a look at the information for the last one [https://www.lesswrong.com/events/yuMuDGnJ8omGhMx9y/taisu-technical-ai-safety-unconference]. The venue will be the EA Hotel in Blackpool, UK, again.

Thursday, January 23rd 2020

Shortform [Beta]
4 rohinmshah 1d In my double descent newsletter [https://www.alignmentforum.org/posts/LYdvzXF6E4iXM2ZSD/an-77-double-descent-a-unification-of-statistical-theory-and], I said: One response you could have is to think that this could apply even at training time, because typical loss functions like cross-entropy loss and squared error loss very strongly penalize confident mistakes, and so initially the optimization is concerned with getting everything right; only later can it be concerned with regularization.

I don't buy this argument either. I definitely agree that cross-entropy loss penalizes confident mistakes very highly, and has a very high derivative, and so initially in training most of the gradient will be reducing confident mistakes. However, you can get out of this regime simply by predicting the frequencies of each class (e.g. uniform for MNIST). If there are N classes, the worst case loss is when the classes are all equally likely, in which case the average loss per data point is −ln(1/N) = ln(N) ≈ 2.3 when N = 10 (as for CIFAR-10, which is what their experiments were done on), which is not a good loss value but it does seem like regularization should already start having an effect. This is a really stupid and simple classifier to learn, and we'd expect that the neural net does at least this well very early in training, well before it reaches the interpolation threshold / critical regime, which is where it gets ~perfect training accuracy.

There is a much stronger argument in the case of L2 regularization on MLPs and CNNs with relu activations. Presumably, if the problem is that the cross-entropy "overwhelms" the regularization initially, then we should also see double descent if we first train only on cross-entropy, and then train with L2 regularization. However, this can't be true. When training on just L2 regularization, the gradient descent update is w ← w − λw = (1 − λ)w = cw for some constant c. For MLPs and CNNs with relu activations, if you multiply all the weights by a constant, t
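As a quick sanity check of the two claims above, here is a minimal numpy sketch (the toy two-layer relu network, its dimensions, and the λ value are illustrative assumptions, not from the original shortform): predicting the uniform distribution already gives a cross-entropy of ln(N), and a pure L2 step only rescales the weights, which for a bias-free relu network only rescales the outputs.

```python
import numpy as np

# Claim 1: predicting the class frequencies (uniform for a balanced dataset)
# already achieves a cross-entropy loss of ln(N), regardless of the true label.
N = 10                                   # number of classes (e.g. CIFAR-10)
uniform_probs = np.full(N, 1.0 / N)
loss = -np.log(uniform_probs[0])         # same loss whichever class is correct
print(loss)                              # ~2.303 = ln(10)

# Claim 2: a pure L2 (weight decay) step w <- w - lam*w = (1 - lam)*w = c*w
# just rescales the weights; for a bias-free relu net this only rescales outputs.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 4))             # toy two-layer relu net, no biases
W2 = rng.normal(size=(3, 5))

def logits(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)  # relu is positively homogeneous

x = rng.normal(size=4)
c = 1.0 - 0.1                            # one L2-only update with lam = 0.1
print(np.allclose(logits(c * W1, c * W2, x), c**2 * logits(W1, W2, x)))  # True
```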

Wednesday, January 22nd 2020

Shortform [Beta]
1 StoiaMillawi 2d I believe that we probably have infinite lives. And I’m not talking about religion here. Think about the life you have now. It happened once: you are alive as an intelligent being and aware of it. If it happened one time, it can happen a second time, and a third, right? Even if it takes an infinite number of years to happen again, even if you have to wait for infinite universes until it happens. The good thing is that if you are dead, infinity is a blink of an eye. Probably you won’t be the ‘you’ you recognize as yourself today, probably not even human; it could be on a different planet in a different universe, but still an intelligent and self-aware being. And in this sense I believe we probably have infinite lives.

Tuesday, January 21st 2020

No posts for January 21st 2020
Shortform [Beta]
12 ozziegooen 3d Communication should be judged for expected value, not intention (by consequentialists)

TLDR: When trying to understand the value of information, understanding the public interpretations of that information could matter more than understanding the author's intent. When trying to understand the information for other purposes (like reading a math paper to understand math), this does not apply.

If I were to scream "FIRE!" in a crowded theater, it could cause a lot of damage, even if my intention were completely unrelated. Perhaps I was responding to a devious friend who asked, "Would you like more popcorn? If yes, scream 'FIRE!'". Not all speech is protected by the First Amendment, in part because speech can be used for expected harm.

One common defense of incorrect predictions is to claim that their interpretations weren't their intentions. "When I said that the US would fall if X were elected, I didn't mean it would literally end. I meant more that..." These kinds of statements were discussed at length in Expert Political Judgment. But this defense rests on the idea that communicators should be judged on intention, rather than expected outcomes. In those cases, it was often clear that many people interpreted these "experts" as making fairly specific claims that were later rejected by their authors. I'm sure that much of this could have been predicted. The "experts" often definitely didn't seem to be going out of their way to make their after-the-outcome interpretations clear before-the-outcome.

I think it's clear that the intention-interpretation distinction is considered highly important by a lot of people, so much so as to argue that interpretations, even predictable ones, are less significant in decision-making around speech acts than intentions. I.e., "The important thing is to say what you truly feel; don't worry about how it will be understood." But for a consequentialist, this distinction isn't particularly relevant. Speech acts are judged on
4 Raemon 3d I'm not sure which of these posts is a subset of the other:
* The Backbone Bottleneck
* The Leadership Bottleneck
2 George 4d I wonder why people don't protect themselves from memes more. Just to be clear, I mean meme in the broad, memetic theory of spreading ideas/thoughts sense.

I think there's almost an intuitive understanding, or at least one existed in the environment I was brought up in, that some ideas are virulent and useless. I think that from this it's rather easy to conclude that those ideas are harmful, since you only have space for so many ideas, so holding useless ideas is harmful in the sense that it eats away at a valuable resource (your mind). I think modern viral ideas also tend more and more towards the toxic side, toxic in the very literal sense of "designed to invoke a rise in cortisol and/or dopamine that makes them more engaging yet is arguably provably harmful to the human body". Though I think this is a point I don't trust that much, speculation at best.

It's rather hard to figure out what memes one should protect themselves from under these conditions; some good heuristics I've come up with are:
* 1. Memes that are new and seem to be embedded in the minds of many people, yet don't seem to increase their performance on any metric you care about (e.g. wealth, lifespan, happiness).
* 2. Memes that are old and seem to be embedded in the minds of many people, yet seem to decrease their performance on any metric you care about.
* 3. Memes that are being recommended to you in an automated fashion by a capable algorithm you don't understand fully.

I think if a meme ticks one of these boxes, it should be taken under serious consideration as harmful. Granted, there are memes that tick all 3 (e.g. wearing a warm coat during winter), but I think those are so "common" it's pointless to bring them into the discussion; they are already deeply embedded in our minds, so it's pointless to discuss them.

A few examples I can think of:
* Crypto currency in 2017 & 2018: passes 2 and 3, passes or fails 1 depending on the people you are looking at => Depends
* All a

Monday, January 20th 2020

No posts for January 20th 2020
Shortform [Beta]
7 Chris_Leong 4d There appears to be something of a Sensemaking community developing on the internet, which could roughly be described as a spirituality-inspired attempt at epistemology. This includes Rebel Wisdom [https://www.youtube.com/channel/UCFQ6Gptuq-sLflbJ4YY3Umw], Future Thinkers [https://futurethinkers.org/], Emerge [https://www.whatisemerging.com/], and maybe you could even count post-rationality. While there are undoubtedly lots of critiques that could be made of their epistemics, I'd suggest watching this space as I think some interesting ideas will emerge out of it.
3 matthewhirschey 4d Just found this site, and am going through these ideas. Love the core ideas (thinking, creativity, decision making, etc.). I have recently started writing on some similar ideas (matthewhirschey.com), and look forward to the exchange!

Saturday, January 18th 2020

Shortform [Beta]
6 rohinmshah 6d I was reading Avoiding Side Effects By Considering Future Tasks [https://drive.google.com/file/d/0B3mY6u_lryzdMGpEbEljRmFIS2hZWno1clExMDRuVVZWMnJV/view], and it seemed like it was doing something very similar to relative reachability. This is an exploration of that; it assumes you have already read the paper and the relative reachability paper. It benefitted from discussion with Vika.

Define the reachability R(s_1, s_2) = E_{τ∼π}[γ^n], where π is the optimal policy for getting from s_1 to s_2, and n = |τ| is the length of the trajectory. This is the notion of reachability both in the original paper and the new one.

Then, for the new paper when using a baseline, the future task value V*_future(s, s') is E_{g, τ∼π_g, τ'∼π'_g}[γ^max(n, n')], where s' is the baseline state and g is the future goal. In a deterministic environment, this can be rewritten as:

V*_future(s, s')
= E_g[γ^max(n, n')]
= E_g[min(R(s, g), R(s', g))]
= E_g[R(s', g) − max(R(s', g) − R(s, g), 0)]
= E_g[R(s', g)] − E_g[max(R(s', g) − R(s, g), 0)]
= E_g[R(s', g)] − d_RR(s, s')

Here, d_RR is relative reachability, and the last line depends on the fact that the goal is equally likely to be any state. Note that the first term only depends on the number of timesteps, since it only depends on the baseline state s'. So for a fixed time step, the first term is a constant.

The optimal value function in the new paper is (page 3, and using my notation of V*_future instead of their V*_i):

V*(s_t) = max_{a_t ∈ A} [ r(s_t, a_t) + γ Σ_{s_{t+1} ∈ S} p(s_{t+1} | s_t, a_t) V*(s_{t+1}) + (1 − γ) β V*_future ].

This is the regular Bellman equation, but with the following augmented reward (here s'_t is the baseline state at time t):

Terminal states:
r_new(s_t) = r(s_t) + β V*_future(s_t, s'_t) = r(s_t) − β d_RR(s_t, s'_t) + β E_g[R(s'_t, g)]

Non-terminal states:
r_new(s_t, a_t) = r(s_t, a_t) + (1 − γ) β V*_future(s_t, s'_t) = r(s_t) − (1 − γ) β d_RR(s_t, s'_t) + (1 − γ) β E_g[R(s'_t, g)]

For comparison, the original relative reachability reward is:
r_RR(s_t, a_t) = r(s_t) − β d_RR(s_t, s'_t)

The first and third terms in r_new are very similar to the two te
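A minimal sketch of the deterministic-environment identity above, on a made-up toy environment (the state graph, γ = 0.9, the chosen states, and the uniform goal distribution are all illustrative assumptions, not from either paper): with R(s_1, s_2) = γ^n for the shortest path and goals uniform over states, E_g[min(R(s, g), R(s', g))] equals E_g[R(s', g)] − d_RR(s, s').

```python
from collections import deque

# Toy deterministic environment: a small directed graph of states (illustrative).
edges = {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: [4]}
states = list(edges)
gamma = 0.9

def shortest_path_len(src, dst):
    """BFS distance from src to dst; None if unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        s, d = queue.popleft()
        if s == dst:
            return d
        for t in edges[s]:
            if t not in seen:
                seen.add(t)
                queue.append((t, d + 1))
    return None

def R(s1, s2):
    """Reachability R(s1, s2) = gamma^n along the shortest trajectory, 0 if unreachable."""
    n = shortest_path_len(s1, s2)
    return 0.0 if n is None else gamma ** n

def d_RR(s, s_baseline):
    """Relative reachability: average shortfall in reachability versus the baseline state."""
    return sum(max(R(s_baseline, g) - R(s, g), 0.0) for g in states) / len(states)

s, s_baseline = 3, 0  # current state and baseline state (arbitrary illustrative choice)

# E_g[min(R(s, g), R(s', g))]  ==  E_g[R(s', g)] - d_RR(s, s')  for goals uniform over states
lhs = sum(min(R(s, g), R(s_baseline, g)) for g in states) / len(states)
rhs = sum(R(s_baseline, g) for g in states) / len(states) - d_RR(s, s_baseline)
print(abs(lhs - rhs) < 1e-12)  # True
```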
4 Mary Chernyshenko 6d The unshareable evidence.

I have a friend, a fellow biologist. A much more focused person, in terms of "gotta do this today", with lower barriers for action (e.g., I help her with simple English, but she is the one to tutor kids in it, and so on). I have known her for about ten years.

And over time, I learned that her cousin died at seventeen. It was the time when atypical pneumonia was around, and he died in a hospital a week after he fell ill with typical symptoms, but his certificate had another kind of pneumonia in it. Officially, there was no AP in the area. And his death changed the familial structure so that it is still unbalanced, in a way, years later. Her sister has recently lost half a finger, after an accident with a saw, when there was a good chance of saving it. Both her children (one 14, the other 3 years old) usually get horrifying allergic swellings and fever from even the common bugs, and then only slowly get better. In the city region where she lives, there is one neurologist for ten thousand people, and she can't get an appointment. I keep hearing about such things when I visit her.

Her kids are unvaccinated. We have talked about it, and she said all the usual things about vaccines causing autism, and the mercury, and the questionable quality, etc. The Kitchen Argument uniting people all over the world. Of course, the link between vaccines and autism was disproved, but this means that somebody did take it seriously. It's not one woman's struggle or suspicions, it's The Statistics. You can discuss it much like the weather - you're being polite!

It gives me an ugly feeling that a friend of mine should hide behind common and expected and false lore - she knows it's false - because she knows the script, and to know that it was I who forced her to it. I and people like me gave her this shield. But the pneumonia, the finger and the swellings, the life which she builds her thoughts around, never get mentioned. We've had the same education, we both know

Thursday, January 16th 2020

No posts for January 16th 2020
Shortform [Beta]
7 MichaelA 9d WAYS OF DESCRIBING THE “TRUSTWORTHINESS” OF PROBABILITIES

While doing research for a post on the idea of a distinction between “risk” and “(Knightian) uncertainty [https://en.wikipedia.org/wiki/Knightian_uncertainty]”, I came across a surprisingly large number of different ways of describing the idea that some probabilities may be more or less “reliable”, “trustworthy”, “well-grounded”, etc. than others, or things along those lines. (Note that I’m referring to the idea of different degrees of trustworthiness-or-whatever, rather than two or more fundamentally different types of probability that vary in trustworthiness-or-whatever.)

I realised that it might be valuable to write a post collecting all of these terms/concepts/framings together, analysing the extent to which some may be identical to others, highlighting ways in which they may differ, suggesting ways or contexts in which some of the concepts may be superior to others, etc.[1] But there’s already too many things I’m working on writing at the moment, so this is a low-effort version of that idea - this is basically just a collection of the concepts, relevant quotes, and links where readers can find more. Comments on this post will inform whether I take the time to write something more substantial/useful on this topic later (and, if so, precisely what and how). Note that this post does not explicitly cover the “risk vs uncertainty” framing itself, as I’m already writing a separate, more thorough post on that.

EPISTEMIC CREDENTIALS
Dominic Roser [https://link.springer.com/article/10.1007%2Fs11948-017-9919-x] speaks of how “high” or “low” the epistemic credentials of our probabilities are. He writes: He further explains what he means by this in a passage that also alludes to many other ways of describing or framing an idea along the lines of the trustworthiness of given probabilities:

RESILIENCE (OF CREDENCES)
Amanda Askell discusses the idea that we can have “more” or “less” res
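As a toy illustration of the resilience idea (my own illustrative numbers, not drawn from Askell's discussion): two agents can both report a credence of 0.5, yet the same new evidence can move those credences very differently, which is one way of cashing out how "trustworthy" a bare probability estimate is.

```python
# Two Beta priors with the same mean (credence 0.5) but different "resilience":
# Beta(1, 1) is near-ignorance; Beta(100, 100) encodes a lot of prior evidence.
# (Both the priors and the observed data are made-up illustrative numbers.)
priors = {"low resilience": (1, 1), "high resilience": (100, 100)}

heads_observed, tails_observed = 8, 2   # the same new evidence for both agents

for label, (a, b) in priors.items():
    prior_mean = a / (a + b)
    a_post, b_post = a + heads_observed, b + tails_observed
    post_mean = a_post / (a_post + b_post)
    print(f"{label}: credence {prior_mean:.2f} -> {post_mean:.2f}")

# low resilience:  0.50 -> 0.75   (credence swings a lot on the same evidence)
# high resilience: 0.50 -> 0.51   (credence barely moves)
```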
