All Posts

Sorted by Magic (New & Upvoted)

Friday, June 18th 2021

Shortform
6steven04612hThinking out loud about some arguments about AI takeoff continuity: If a discontinuous takeoff is more likely to be local to a particular agent or closely related set of agents with particular goals, and a continuous takeoff is more likely to be global, that seems like it incentivizes the first agent capable of creating a takeoff to make sure that that takeoff is discontinuous, so that it can reap the benefits of the takeoff being local to that agent. This seems like an argument for expecting a discontinuous takeoff and an important difference with other allegedly analogous technologies. I have some trouble understanding the "before there are strongly self-improving AIs there will be moderately self-improving AIs" argument for continuity. Is there any reason to think the moderate self-improvement ability won't be exactly what leads to the strong self-improvement ability? Before there's an avalanche, there's probably a smaller avalanche, but maybe the small avalanche is simply identical to the early part of the large avalanche. Where have these points been discussed in depth?
3MikkW20hThe Roman Kingdom and Roman Empire both fell because of ineffective leaders. The Roman Republic fell because of extremely competent, but autocratic, leaders.

Thursday, June 17th 2021

Shortform
18Raemon2dA thing I might have maybe changed my mind about: I used to think a primary job of a meetup/community organizer was to train their successor, and develop longterm sustainability of leadership. I still hold out for that dream. But, it seems like a pattern is: 1) community organizer with passion and vision founds a community 2) they eventually move on, and pass it on to one successor who's pretty closely aligned and competent 3) then the First Successor has to move on too, and then... there isn't anyone obvious to take the reins, but if no one does the community dies, so some people reluctantly step up. and.... ...then forever after it's a pale shadow of its original self. For semi-branded communities (such as EA, or Rationality), this also means that if someone new with energy/vision shows up in the area, they'll see a meetup, they'll show up, they'll feel like the meetup isn't all that good, and then move on. Whereas they (maybe??) might have founded a new one that they got to shape the direction of more. I think this also applies to non-community organizations (i.e. founder hands the reins to a new CEO who hands the reins to a new CEO who doesn't quite know what to do) So... I'm kinda wondering if second-generation successors should just... err on the side of shutting the thing down when they leave, rather than trying desperately to find a replacement. The answer isn't obvious. There is value that continues to be created by the third+ generation. I think I've mostly gone from "having a firm opinion that you should be proactively training your successor" to "man, I dunno, finding a suitable successor is actually pretty hard, mrrr?"
9MikkW1dAsking people to "taboo [X word]" is bad form, unless you already know that the other person is sufficiently (i.e. very) steeped in LW culture to know what our specific corner of internet culture means by "taboo" [https://www.lesswrong.com/tag/rationalist-taboo]. Without context, such a request to taboo a word sounds like you are asking the other person to never use that word, to cleanse it from their vocabulary, to go through the rest of their life with that word permanently off-limits. That's a very high, and quite rude, ask to make of someone. While that's of course not what we mean by "taboo", I have seen requests to taboo made where it's not clear that the other person knows what we mean by taboo, which means it's quite likely the receiving party interpreted the request as being much ruder than was meant. Instead of saying "Taboo [X word]", say "could you please say what you just said without using [X word]?" - it conveys the same request, without creating the potential to be misunderstood as making a rude and overreaching request.
5MikkW1dI may have discovered an interesting tool against lethargy and depression [1]: This morning, in place of my usual caffeine pill, I made myself a cup of hot chocolate (using pure cacao powder / baking chocolate from the supermarket), which made me very energetic (much more energetic than usual), which stood in sharp contrast to the past 4 days, which have been marked by lethargy and intense sadness. Let me explain: Last night, I was reflecting on the fact that one of the main components of chocolate is theobromine, which is very similar in structure to caffeine (theobromine is the reason why chocolate is poisonous to dogs & cats, for reasons similar to how caffeine evolved to kill insects that feed on plants), and is known to be the reason why eating chocolate makes people happy. Since I have problems with caffeine, but rely on it to have energy, I figured it would be worthwhile to try using chocolate instead as a morning pick-me-up. I used baking chocolate instead of Nesquik or a hot chocolate packet because I'm avoiding sugar these days, and I figured having chocolate as pure as possible would be ideal for my experiment. I was greeted with pleasant confirmation when I became very alert almost immediately after starting to drink the chocolate, despite having been just as lethargic as the previous days until I drank the chocolate. It's always suggestive when you form a hypothesis based on facts and logic, then test the hypothesis, and exactly what you expected to happen, happens. But of course, I can't be too confident until I try repeating this experiment on future days, which I will happily be doing after today's success. [1]: There are alternative hypotheses for why today was so different from the previous days: I attended martial arts class, then did some photography outside yesterday evening, which meant I got intense exercise, was around people I know and appreciate, and was doing stuff with intentionality, all of which could have contributed to my go

Wednesday, June 16th 2021

Personal Blogposts
6 [Event] Washington DC Meetup: Reading Discussion. 701 Pennsylvania Avenue Northwest, Washington. Jun 20th
Shortform
7Viliam2dI would like to see a page like TalkOrigins [http://www.talkorigins.org/origins/faqs-qa.html], but about IQ. So that any time someone confused but generally trying to argue in good faith posts something like "but wasn't the idea of intelligence disproved scientifically?" or "intelligence is a real thing, but IQ is not" or "IQ is just an ability to solve IQ tests" or "but Taleb's article/tweet has completely demolished the IQ pseudoscience" or one of the many other versions... I could just post this link. Because I am tired of trying to explain, and the memes are going to stay here for the foreseeable future.
6Daniel Kokotajlo2dI recommend The Meme Machine; it's a shame it didn't spawn a huge literature. I was thinking a lot about memetics before reading it, yet still I feel like I learned a few important things. Anyhow, here's an idea inspired by it: First, here is my favorite right way to draw analogies between AI and evolution: Evolution : AI research over time throughout the world; Gene : Bit of code on Github; Organism : The weights of a model; Past experiences of an organism : Training run of a model. With that as background context, I can now present the idea. With humans, memetic evolution is a thing. It influences genetic evolution and even happens fast enough to influence the learning of a single organism over time. With AIs, memetic evolution is pretty much not a thing. Sure, the memetic environment will change somewhat between 2020 and whenever APS-AI is built, but the change will be much less than all the changes that happened over the course of human evolution. And the AI training run that produces the first APS-AI may literally involve no memetic change at all (e.g. if it's trained like GPT-3). So. Human genes must code for the construction of a brain + learning procedure that works for many different memetic environments, and isn't overspecialized to any particular memetic environment. Whereas the first APS-AI might be super-specialized to the memetic environment it was trained in. This might be a barrier to building APS-AI; maybe it'll be hard to induce a neural net to have the right sort of generality/flexibility because we don't have lots of different memetic environments for it to learn from (and even if we did, there's the further issue that the memetic environments wouldn't be responding to it simultaneously) and maybe this is somehow a major block to having APS capabilities. More likely, I think, is that APS-AI will still happen but it'll just lack the human memetic generality. It'll be "overfit" to the current memetic landscape. Maybe.
5Willa3dSeveral very important to me people whom I love told me that they would rather die than live even a few hundred years or indefinitely, that they would not choose cryopreservation if life extension capabilities aren't advanced enough by their "natural time", and so on, when I asked them how they felt about immortality (scenario was: imagine that humanity figures out how to be immortal and there are no restrictions, anyone can have it if they want it, do you take it yes or no?). There's too much deathism in this world, aahhhhhhhhh. I've already started to mourn those people, and it hurts so fucking much, it literally is keeping me awake tonight...I was meditating then trying to sleep and reminders of their choice bubbled up and now here I am, typing away. Crying about and mourning the loss of loved ones who haven't died yet, but ultimately said that that was their preference over life extension / immortality. It hurts so bad. -------------------------------------------------------------------------------- Writing this felt helpful somewhat, at least I've channeled those feelings and temporarily diminished their intensity, somewhat...well, back to trying to sleep.
5ozziegooen3dOn Berkeley coworking: I've recently been looking through available Berkeley coworking places. The main options seem to be WeWork, NextSpace, CoWorking with Wisdom, and The Office: Berkeley. The Office seems basically closed now, CoWorking with Wisdom seemed empty when I passed by, and also seems fairly expensive, but nice. I took a tour of WeWork and NextSpace. They both provide 24/7 access for all members, both have a ~$300/m option for open coworking, a ~$375/m option for fixed desks, and more for private/shared offices. (At least now, with the pandemic. WeWork is typically $570/month for a dedicated desk apparently). Both WeWork and NextSpace were fairly empty when I visited, though there weren't many private offices available. The WeWork is much larger, but it's split among several floors that I assume barely interact with each other. Overall the NextSpace seemed a fair bit nicer to me. The vibe was more friendly, the receptionist much more friendly, there were several sit/stand desks, and I think I preferred the private offices. (They were a bit bigger and more separated from the other offices). That said, the WeWork seemed a bit more professional and quiet, and might have had a nicer kitchen. If you look at the Yelp reviews for them, note that the furniture of the NextSpace changed a lot in the last ~1.5 years, so many of the old photos are outdated. I remembered one NextSpace in SF that didn't seem very nice, but this one seemed better. Also, note that they have a promotion to work there for 1 day for free. https://www.yelp.com/biz/nextspace-coworking-berkeley-berkeley-3 [https://www.yelp.com/biz/nextspace-coworking-berkeley-berkeley-3] https://www.yelp.com/biz/wework-berkeley-berkeley-2?osq=WeWork [https://www.yelp.com/biz/wework-berkeley-berkeley-2?osq=WeWork]
2Daniel Kokotajlo2dHistorical precedents for general vs. narrow AI * Household robots vs. household appliances: Score one for Team Narrow * Vehicles on roads vs. a network of pipes, tubes, and rails: Score one for Team General * Ships that can go anywhere vs. a trade network of ships optimized for one specific route: Score one for Team General (On the ships thing -- apparently the Indian Ocean trade was specialized prior to the Europeans, with cargo being transferred from one type of ship to another to handle different parts of the route, especially the Red Sea, which was dangerous to the type of oceangoing ship popular at the time. But then the Age of Sail happened.) Obviously this is just three data points, two of which seem sorta similar because they both have to do with transporting stuff. It would be good to have more examples.

Tuesday, June 15th 2021

Shortform
9ChristianKl4dIt's amazing how mismanaged Evernote is. Their attempt to strategically pivot away from being about remembering information is deeply flawed. They updated their app to a new design, and for 3 months the app just crashed when I started it on my phone (I have a Google Pixel 3A, which isn't that non-standard). This Sunday, the app didn't save two notes I made, and now notes can't be saved.
2ChristianKl4dMedical researchers: We see that childhood cancers are associated with PGBD5, which causes a lot of mutations. What do we do with that knowledge? How about blocking the DNA repair of the mutations that are caused by PGBD5, so that the mutations kill some cancer cells? I would have guessed that preventing PGBD5 from creating the mutations would be a higher priority.

Monday, June 14th 2021

Frontpage Posts
Shortform
5Mati_Roy4dIn the book Superintelligence, box 8, Nick Bostrom says: 1. If the easily resource-satiable goals are persistent through time (i.e. the AI wants to fulfill them for the longest period of time possible), then the AI will either try to keep the simulation running for as long as possible (and so not grab its universe) or try to escape the simulation. 2. If the easily resource-satiable goals are NOT persistent through time (i.e. once the AI has created the 45 virtual paperclips, it doesn't matter if they get deleted, the goal has already been achieved), then once the AI has created the 45 paperclips, it has nothing to lose by grabbing more resources (gradually, until it has grabbed the Universe), but it has something to gain, namely: a) increasing its probability (arbitrarily close to 100%) that it did in fact achieve its goal, through further experiment and reasoning (i.e. because it could be mistaken about having created 45 virtual paperclips), and b) if it didn't, then remedying that.

Saturday, June 12th 2021

Shortform
12Alex Ray6dIntersubjective Mean and Variability. (Subtitle: I wish we shared more art with each other) This is mostly a reaction to the (10y old) LW post: Things you are supposed to like [https://www.lesswrong.com/posts/4tzEAgdbNTwB6nKyL/things-you-are-supposed-to-like]. I think there are two common stories for comparing intersubjective experiences: * "Mismatch": Alice loves a book, and found it deeply transformative. Beth, who otherwise has very similar tastes and preferences to Alice, reads the book and finds it boring and unmoving. * "Match": Charlie loves a piece of music. Daniel, who shares a lot of Charlie's taste in music, listens to it and also loves it. One way I can think of unpacking this is in terms of distributions: * "Mean" - the shared intersubjective experiences, which we see in the "Match" case * "Variability" - the difference in intersubjective experiences, which we see in the "Mismatch" case Another way of unpacking this is in terms of factors within the piece or within the subject: * "Intrinsic" - factors that are within the subject, things like past experiences and memories and even what you had for breakfast * "Extrinsic" - factors that are within the piece itself, and shared by all observers And one more ingredient I want to point at is question substitution [https://www.lesswrong.com/posts/LHtMNz7ua8zu4rSZr/the-substitution-principle]. In this case I think the effect is more like "felt sense query substitution" or "received answer substitution" since it doesn't have an explicit question. * When asked about a piece (of art, music, etc) people will respond with how they felt -- which includes both intrinsic and extrinsic factors. Anyways what I want is better social tools for separating out these, in ways that let people share their interest and excitement in things. * I think that these mismatches/misfirings (like the LW post that set this off) and the reactions to them cause a chilling effect, where t
12Alex Ray6dHow I would do a group-buy of methylation analysis. (N.B. this is "thinking out loud" and not actually a plan I intend to execute) Methylation is a pretty commonly discussed epigenetic factor related to aging. However, it might be the case that this is downstream of other longevity factors [https://www.lesswrong.com/posts/ui6mDLdqXkaXiDMJ5/core-pathways-of-aging?commentId=XWTXoxpf3kviZbx8o]. I would like to measure my epigenetics -- in particular approximate rates/locations of methylation within my genome. This can be used to provide an approximate biological age correlate [https://genomebiology.biomedcentral.com/articles/10.1186/gb-2013-14-10-r115#Sec31]. There are different ways to measure methylation [https://www.neb.com/applications/epigenetics/identifying-dna-methylation], but one I'm pretty excited about that I don't hear mentioned often enough is the Oxford Nanopore sequencer [https://nanoporetech.com/]. The mechanism of the sequencer is that it does direct-reads (instead of reading amplified libraries, which destroy methylation unless specifically treated for it), and off the device is a time-series of electrical signals, which are decoded into base calls with an ML model. Unsurprisingly, community members have been building their own base caller models, including ones that are specialized to different tasks. So the community made a bunch of methylation base callers, and they've been found to be pretty good [https://www.nature.com/articles/s41467-021-23778-6]. So anyways the basic plan is this: * Extract a bunch of cells (probably blood but could be other sources) * Extract DNA from cells * Prep the samples * Sequence w/ ONT and get raw data * Use the combined model approach [https://www.nature.com/articles/s41467-021-23778-6/figures/1] to analyze the targets from this analysis [https://genomebiology.biomedcentral.com/articles/10.1186/gb-2013-14-10-r115#Sec40] Why do I think this is cool? Mostly because ONT makes a $1k sequencer that can
7adamzerner7dThe other day Improve your Vocabulary: Stop saying VERY! [https://www.youtube.com/watch?v=PCoyTwltu5g] popped up in my YouTube video feed. I was annoyed. This idea that you shouldn't use the word "very" has always seemed pretentious to me. What value does it add if you say "extremely" or "incredibly" instead? I guess those words have more emphasis and a different connotation, and can be better fits. I think they're probably a good idea sometimes. But other times people just want to use different words in order to sound smart. I remember there was a time in elementary school when I was working on a paper with a friend. My job was to write it, and his job was to "fix it up and make it sound good". I remember him going in and changing words like "very", which I had used appropriately, to overly dramatic words like "stupendously". And I remember feeling annoyed at the end result of the paper because it sounded pretentious. Here I want to argue for something similar to "stop saying very" though. I want to argue for "stop saying think". Consider the following: "I think the restaurant is still open past 8pm". What does that mean? Are you 20% sure? 60%? 90%? Wouldn't it be useful if this ambiguity disappeared? I'm not saying that "I think" is always ambiguous and bad. Sometimes it's relatively clear from the context that you mean 20% sure, not 90%. E.g. "I thhhhhinkkk it's open past 8pm?" But you're not always so lucky. I find myself in situations where I'm not so lucky often enough. And so it seems like a good idea in general to move away from "I think" and closer to something more precise. I want to follow up with some good guidelines for what words/phrases you can say in various situations to express different degrees of confidence, as well as some other relevant things, but I am struggling to come up with such guidelines. Because of this, I'm writing this as a shortform rather than a regular post. I'd love to see someone else run with this idea and/or propose such gu
Wiki/Tag Page Edits and Discussion

Friday, June 11th 2021

Shortform
3kithpendragon7dWhat if we thought of the Almighty Org Chart of Bureaucracy as less of a pyramid (with Executive layers stacked on top) and more of a chandelier (with executives dangling uselessly below the functional bits)?
2bvbvbvbvbvbvbvbvbvbvbv7dA METRIC FOR COMPARING SOCIAL CIRCLES. Epistemic status: Just an idea I had on a walk; doesn't seem that stupid to me. I have been thinking a bit about this topic lately, had an idea for a solution, and figured LW would be interested in pointing out the unavoidable flaws in the reasoning. Here's the gist: Find a formula to quantify, as objectively as possible, your filter bubble (also called social bubble or even social circle). One could also see this as measuring by how much your social circle differs from random. The metric I chose to focus on is income in local currency units, but I think the idea is easily generalizable. For example, we could use the total number of years of education. But why? One could use it to compare one's own bubble to other people's. I can see it being used as a wake-up call (i.e. it's one way to find out how much you're privileged), or to judge a politician or something. Here's a simple algorithm I came up with: 1. Ask the person to write down the names of the 10 most influential people they see more than once every 2 months. It has to be people they physically interact with, exchange ideas with, and so on. Superficial friends don't count; one-way relationships (watching someone on YouTube, for example) don't count either. Family members don't count. Neither do neighbours (that would skew results too much). 2. Write down each person's income, or, if they live at their parents' expense, the average of their parents' income. 3. Sum the total income of the circle, add your own income, and divide by 11. The difference between that value and the median income of your area of residence is your SocialCircleScore. You can compare this number to other people's to better grasp the privileges that some may have without realizing it. What do you think? Any idea of a better formula? What is missing? How would you see this being used? Ever heard of something like that? If so, I'd love to read about it.
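A minimal Python sketch of the scoring procedure described in the post above (10 named people, own income added, divide by 11, compare to the local median). The function name and the example numbers are illustrative assumptions, not part of the original post.

```python
def social_circle_score(circle_incomes, own_income, local_median_income):
    """Difference between the average income of your circle (plus yourself)
    and the median income of your area of residence."""
    assert len(circle_incomes) == 10, "the post asks for exactly 10 people"
    mean_income = (sum(circle_incomes) + own_income) / 11
    return mean_income - local_median_income

# Illustrative numbers only: a circle of 55k earners, own income 48k,
# in an area whose median income is 30k.
print(social_circle_score([55_000] * 10, 48_000, 30_000))  # prints about 24364
```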
1SoerenMind7dFavoring China in the AI race. In a many-polar AI deployment scenario, a crucial challenge is to solve coordination problems between non-state actors: ensuring that companies don't cut corners, monitoring them, just to name a few challenges. And in many ways, China is better than western countries at solving coordination problems within its borders. For example, they can use their authority over companies as these tend to be state-owned or owned by some fund that is owned by a fund that is state-owned. Could this mean that, in a many-polar scenario, we should favor China in the race to build AGI? Of course, the benefits of China-internal coordination may be outweighed by the disadvantages of Chinese leadership in AI. But these disadvantages seem smaller in a many-polar world because many actors, not just the Chinese government, share ownership of the future.

Thursday, June 10th 2021

Shortform
6MikkW8dIn Zvi's most recent Covid-19 post [https://www.lesswrong.com/posts/xEFfbEMFHhtgseKz3/covid-6-10-somebody-else-s-problem], he puts the probability of a variant escaping mRNA vaccines and causing trouble in the US at no more than 10%. I'm not sure I'm so optimistic. One thing that gives reason to be optimistic is that we have yet to see any variant that has substantial resistance to the vaccines, which might lead one to think that resistance just isn't something that is likely to come up. On the other hand, the virus has had more than a year for more virulent strains to crop up while people were actively sheltering in place, and variants first came on the radar (at least for the population at large) around 9 months after the start of worldwide lockdowns, and a year after the virus was first noticed. In contrast, the vaccine has only been rolling out for half a year, and has only come into large-scale contact with the virus for maybe half that time, let's say a quarter of a year. It's maybe not so surprising that a resistant variant hasn't appeared yet. Right now, there's a fairly large surface area between non-resistant strains of Covid and vaccinated humans. Many vaccinated humans will be exposed to virus particles, which will for the most part be easily defended against by the immune system. However, if it's possible for the virus to change in any way to reduce the immune response it faces, we will see this happen, and particularly in areas where there's roughly half vaccinated people, half unvaccinated, such a variant will have at least a slight advantage over other variants, and will start to spread faster than non-resistant variants. Again, it's taken a while for other variants to crop up, so it's not much information that we haven't seen this happen yet. The faster we are able to get vaccines in most arms in all countries, the less likely this is to happen. If most humans worldwide are vaccinated 6 months from now, there likely won't be much opportunity fo
2Taleuntum8dTake a proxy P and a value V. Based on past observations, P is correlated with V. Increase P! (Either directly or by introducing a reward to the agents inside the system for increasing P, who cares) Two cases: (1) P does not cause V; (2) P causes V. Case 1: Wow, Goodhart is a genius! Even though I had a correlation, I increased one variable and the other did not increase! Case 2: Wow, you are pedantic. Obviously if the relationship between the variables is so special that P causes V, Goodhart's law won't apply. If I increase the amount of weight lifted (proxy), then obviously I will get visibly bigger muscles (value). Booring! (Also, I'm really good at seeing causal relationships even when they don't exist (human universal), so I will basically never feel surprised when I actually find one. That will be the expected outcome, so I will look strangely at anyone trying to test Goodhart's law on any pair of variables which have even a sliver of a chance of being in a causal relationship)
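To make case 1 concrete, here is a tiny Python toy simulation (my own illustrative setup, not from the post): P and V are correlated only through a hidden common cause, so forcing P up does nothing to V.

```python
import random

random.seed(0)

def world(intervene_on_p=None):
    c = random.gauss(0, 1)        # hidden common cause
    p = c + random.gauss(0, 0.1)  # proxy P tracks c
    v = c + random.gauss(0, 0.1)  # value V also tracks c
    if intervene_on_p is not None:
        p = intervene_on_p        # setting P directly leaves V untouched
    return p, v

observed = [world() for _ in range(10_000)]
forced = [world(intervene_on_p=10.0) for _ in range(10_000)]

# Mean of V is ~0 in both cases: pushing the proxy up bought us nothing.
print(sum(v for _, v in observed) / len(observed))
print(sum(v for _, v in forced) / len(forced))
```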
2ChristianKl8dI'm playing around with an evolutionary model for transposons and the transposons regularly kill my whole population...
1benwr8dI'm interested in concrete ways for humans to evaluate and verify complex facts about the world. I'm especially interested in a set of things that might be described as "bootstrapping trust". For example: Say I want to compute some expensive function f on an input x. I have access to a computer C that can compute f; it gives me a result r. But I don't fully trust C - it might be maliciously programmed to tell me a wrong answer. In some cases, I can require that C produce a proof that f(x) = r that I can easily check. In others, I can't. Which cases are which? A partial answer to this question is "the complexity class NP". But in practice this isn't really satisfying. I have to make some assumptions about what tools are available that I do trust. Maybe I trust simple mathematical facts (and I think I even trust that serious mathematics and theoretical computer science track truth really well). I also trust my own senses and memory, to a nontrivial extent. Reaching much beyond that is starting to feel iffy. For example, I might not (yet) have a computer of my own that I trust to help me with the verification. What kinds of proof can I accept with the limitations I've chosen? And how can I use those trustworthy proofs to bootstrap other trusted tools? Other problems in this bucket include "How can we have trustworthy evidence - say videos - in a world with nearly perfect generative models?" and a bunch of subquestions of "Does debate scale as an AI alignment strategy?" This class of questions feels like an interesting lens on some things that are relevant to some sorts of AI alignment work such as debate and interpretability. It's also obviously related to some parts of information security and cryptography. "Bootstrapping trust" is basically just a restatement of the whole problem. It's not exactly that I think this is a good way to decide how to direct AI alignment effort; I just notice that it seems somehow like a "fresh" way of viewing things.
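One concrete instance of the "easy-to-check certificate" case mentioned in the post above, offered as an illustrative Python sketch rather than anything from the post: finding a nontrivial factor of n may be expensive, but verifying a claimed factorization only needs a multiplication, so the untrusted computer can hand over a proof that is cheap to check.

```python
def untrusted_factor(n):
    # Stand-in for the untrusted computer C computing f(x); it might lie.
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return (d, n // d)
    return None  # no nontrivial factor found (or C refuses to answer)

def verify_factorization(n, claim):
    # Cheap check I can do myself: both factors nontrivial, product correct.
    if claim is None:
        return False
    a, b = claim
    return a > 1 and b > 1 and a * b == n

n = 1009 * 1013
claim = untrusted_factor(n)
print(claim, verify_factorization(n, claim))  # (1009, 1013) True
```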

Wednesday, June 9th 2021

Personal Blogposts
Shortform
3steven04619dAre We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth (William D. Nordhaus) [https://www.nber.org/system/files/working_papers/w21547/w21547.pdf] Has anyone looked at this? Nordhaus claims current trends suggest the singularity is not near, though I wouldn't expect current trends outside AI to be very informative. He does seem to acknowledge x-risk in section Xf, which I don't think I've seen from other top economists.
