All Posts

Sorted by Magic (New & Upvoted)

Wednesday, November 25th 2020

AllAmericanBreakfast (13 points, 13h): The Rationalist Move Club

Imagine that the Bay Area rationalist community did all want to move. But no individual is sure enough that others want to move to invest energy in making plans for a move. Nobody acts like they want to move, and the move never happens. Individuals are often willing to take some level of risk and make some sacrifice up-front for a collective goal with big payoffs. But not too much, and not forever. It's hard to gauge true levels of interest from attendance at a few planning meetings.

Maybe one way to solve this is to ask for escalating credible commitments. A trusted individual sets up a Rationalist Move Fund. Everybody who's open to the idea of moving puts $500 into a short-term escrow, which makes them part of the Rationalist Move Club. If the Move Club grows to a certain number of members within a defined period of time (say 20 members by March 2021), then they're invited to planning meetings for a defined period of time, perhaps one year. This is the first checkpoint. If the Move Club has not grown to that size by then, the money is returned and the project is cancelled.

By the end of the pre-defined planning period, there could be one of three majority consensus states, determined by vote (approval vote, obviously!):

1. Most people feel there is a solid timetable and location for a move, and want to go forward with that plan as long as half or more of the Move Club members also approve of this option. Casting a vote approving of this choice requires an additional $2,000 deposit per person into the Move Fund, which is returned along with the initial $500 deposit after they've signed a lease or bought a residence in the new city, or in 3 years, whichever is sooner.

2. Most people want to continue planning for a move, but aren't ready to commit to a plan yet. Casting a vote approving of this choice requires an additional $500 deposit per person into the Move Fund, unless they paid $2,000 to a
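The first checkpoint described above can be sketched in code. This is a minimal illustration only: the quorum and deadline are the post's example numbers, and the function name and return strings are hypothetical.

```python
from datetime import date

def first_checkpoint(members, today, deadline=date(2021, 3, 1), quorum=20):
    """First checkpoint of the Move Club: enough $500 deposits by the
    deadline, or everyone's escrowed money is returned and the project
    is cancelled."""
    if len(members) >= quorum:
        return "quorum reached: invite members to planning meetings"
    if today > deadline:
        return "quorum missed: refund deposits and cancel"
    return "still collecting commitments"
```

For example, `first_checkpoint(["alice", "bob"], date(2021, 4, 1))` falls into the refund branch, since only 2 members committed before the deadline passed.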
DanielFilan (11 points, 1d)

A rough and dirty estimate of the COVID externality of visiting your family in the USA for Christmas when you don't feel ill: You incur some number of μCOVIDs[*] a week; call it x. Since the incubation time is about 5 days, your chance of having COVID when you arrive at the home of your family with n other people is about 5x/7,000,000. The in-house attack rate is about 1/3, I estimate based off hazy recollections, so in expectation you infect 5xn/21,000,000 people, which is about xn/4,000,000 people.

How bad is it to infect one family member? Well, people tend to be most infectious about 1.5 days before symptoms show, which is about 3.5 days after they get infected. Furthermore, we empirically see that R is about 1 on average, so each person you infect infects one person, who goes on to infect one other person, etc. How long until the chain ends? It looks like vaccines will be widely distributed in the USA some time between the 1st of April and the 31st of December 2021. The median date looks like roughly the 1st of September. So let's say there are 8 months of transmission. A month has about 30.5 days, so that's 244 days of transmission, which is 244/3.5 ≈ 70 people. IFR is about 0.5%, so you get about 70 × 0.5% = 0.35 deaths. Each death loses maybe 13 life-years, although that's not quality-adjusted. Since I don't want to quality-adjust that number, that's 13 × 0.35 = 4.55 life-years lost. But some infections result in bad disability rather than death. I estimate the disability burden at about equal to the mortality burden, so that's 4.55 × 2 = 9.1 QALYs lost. A year is 365 days, so
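The arithmetic above can be packaged into a small function for sensitivity-testing. Every constant here is one of the post's own assumptions (incubation time, attack rate, chain length, IFR, life-years per death), not an independent estimate, and the function name is mine.

```python
def covid_externality_qalys(x_ucovids_per_week, n_family_members):
    """Expected QALYs lost by the visit, per the post's chain of assumptions."""
    # ~5-day incubation: chance of arriving infected is about 5x/7 per million
    p_arrive_infected = 5 * x_ucovids_per_week / (7 * 1_000_000)
    # in-house attack rate ~1/3 across n family members
    expected_infections = p_arrive_infected * n_family_members / 3
    chain_length = 244 / 3.5             # 8 months of R=1 spread, ~70 people/chain
    deaths_per_chain = chain_length * 0.005   # IFR ~0.5% -> ~0.35 deaths
    qalys_per_chain = 2 * deaths_per_chain * 13  # 13 life-years, doubled for disability
    return expected_infections * qalys_per_chain
```

Plugging in the post's numbers, each transmission chain costs about 9.1 QALYs, matching the text; the expected externality of the visit is that figure scaled by the (small) chance you arrive infected and pass it on.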
TurnTrout (6 points, 17h)

Over the last 2.5 years, I've read a lot of math textbooks. Not using Anki / spaced repetition systems over that time has been an enormous mistake. My factual recall seems worse-than-average among my peers, but when supplemented with Anki, it's far better than average (hence, I was able to learn 2000+ Japanese characters in 90 days, in college). I considered using Anki for math in early 2018, but I dismissed it quickly because I hadn't had good experiences using the application for things which weren't languages. I should have at least tried to see if I could repurpose my previous success! I'm now happily using Anki to learn measure theory and ring theory, and I can already tell that it's sticking far better.

This mistake has had real consequences. I've gotten far better at proofs and I'm quite good at real analysis (I passed a self-administered graduate qualifying exam in the spring), but I have to look some things up for probability theory. Not a good look in interviews. I might have to spend weeks of extra time reviewing things I could have already stashed away in an Anki deck. Oops!
Richard_Ngo (6 points, 1d)

I suspect that AIXI is misleading to think about, in large part because it lacks reusable parameters - instead it just memorises all inputs it's seen so far. Which means the setup doesn't have episodes, or a training/deployment distinction; nor is any behaviour actually "reinforced".
mr-hire (4 points, 16h)

Alright, now somebody needs to write the "Pain is a contextually useful unit of effort whose value varies depending on your situation, genetics, and upbringing" post. I sort of want to create a GPT-3 bot that automatically does this for any "X is Good" or "X is Bad" post.

Tuesday, November 24th 2020

MakoYass (2 points, 2d)

Idea: a screen-burn correction app that figures out how to exactly negate your screen's issues by essentially looking at itself in a mirror through the selfie cam: it tries to display pure white, remembers the imperfections it sees, then tints everything with the negation of those imperfections from then on. Nobody seems to have made this yet. I think there might be things for tinting your screen in general, but they don't know the specific quirks of your screen burn. Most of the apps for screen burn recommend that you just burn every color over the entire screen that isn't damaged yet, so that all areas get to be equally damaged, which seems like a really bad thing to be recommending.
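A sketch of the core correction step, assuming the calibration photo yields a per-pixel brightness reading of the panel displaying pure white. Since pixels can only be dimmed, not over-driven, the tint brings every pixel down to the level of the most burned-in one; the function and variable names are hypothetical.

```python
def correction_tint(measured_white):
    """Per-pixel scale factors (0..1) that equalise brightness to the
    dimmest pixel.

    measured_white: the brightness each pixel actually shows when asked
    for pure white, as captured in the mirror/selfie-cam calibration step.
    """
    floor = min(measured_white)
    return [floor / b for b in measured_white]
```

For example, one burned-in pixel measuring 200 out of 255 forces healthy pixels down to 200/255 of full drive, so the screen looks uniform again at the cost of some peak brightness.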
sguyrep (1 point, 1d): Testing and Quarantine
qyng (1 point, 2d): QON #2: Collective decision-making and the mechanics of trust

Arguably one thing that has emerged from 2020 is the concept of trust surfacing from its deep background operations of relationship maintenance. Due to some work on a project earlier in the year, I had the briefest of insights into the dynamics of trust on an organisational playing field, and I don't think I've heard the term batted about more: demands for more (or even less) of it between various parties, the consequences of its absence, multiple strategies for its restoration. At the same time, I don't think I heard any definitions being shared during these discussions. Stephen M. R. Covey's "The Speed of Trust" describes the notion as like the water in which fish swim: it's only noticed when it's absent. And only when it's absent do we notice its tenuousness. Trust is part of the oil on which society's engines run (how do I know the building won't collapse under my feet? do I really believe this restaurant cleaned their utensils properly?).

Throughout my project work, it became clear that people were thinking more holistically, more about the human collective as a unified system. But it also runs deeper than that. Every interaction, decision or non-decision shifts the fabric of 7.8 billion people (and counting) ever so slightly. Like a window pane on a rainy day: individual droplets pelt onto the glass gradually until the weight of the collective and gravity inevitably cascade into rivulets, tracking an unpredictable course as they pull other droplets into their path. The thought itself can become paralysing (but oh wait, that's also indecision, so may as well move). So the idea of systems thinking (considering that maybe this solution we're inventing is actually solving a problem left by some previous feedback loop, and that with this solution will come the development of another loop) is ever more important. Especially when it's within almost every discipline where

Monday, November 23rd 2020

Bucky (11 points, 3d): Oxford / AstraZeneca vaccine effectiveness

(Inferred numbers are marked with an asterisk.) Some interesting results from the latest vaccine trial. The treatment group was split in two: one half received 2 full doses of the vaccine, the other received a half dose followed by a full dose (separated by a month in both cases).

* In the control group there were 101 COVID infections in ~11,600* participants.
* With 2 full doses there were 28* infections in 8,895 participants.
* In the half-first-dose condition there were 2* infections in 2,741 participants.

So does having a low first dose actually improve the immune response? As best I can figure, the evidence is an 8:1 Bayes factor in favour of the "low first dose is better" hypothesis vs the "it doesn't matter either way" hypothesis. I'm not sure what a reasonable prior is for this, but I don't think it makes "low first dose is better" a strong winner on posterior. On the other hand, the evidence is 14:1 in favour of "it doesn't matter either way" vs "both full doses is better", so at the moment it probably makes sense to give a half dose first and have more doses in total. I'll be interested to see what happens as the data matures; the above is apparently based on protection levels 2 weeks after receiving the second dose.
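One way to sanity-check a Bayes-factor claim like this is a likelihood ratio of binomial models over the two treatment arms. The sketch below plugs maximum-likelihood rates into the "low first dose is better" hypothesis, which overstates the evidence relative to averaging over a prior, so it will land above Bucky's 8:1 rather than reproduce his calculation; it illustrates the method, not his exact numbers.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k infections among n participants at rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Infection counts from the post (asterisks in the text mark inferred numbers)
full_k, full_n = 28, 8895   # two full doses
half_k, half_n = 2, 2741    # half dose then full dose

# "It doesn't matter either way": both arms share one pooled infection rate
pooled = (full_k + half_k) / (full_n + half_n)
lik_same = binom_pmf(full_k, full_n, pooled) * binom_pmf(half_k, half_n, pooled)

# "Low first dose is better": each arm gets its own (maximum-likelihood) rate
lik_diff = (binom_pmf(full_k, full_n, full_k / full_n)
            * binom_pmf(half_k, half_n, half_k / half_n))

likelihood_ratio = lik_diff / lik_same  # > 1 favours "low first dose is better"
```

A principled Bayes factor would integrate each arm's rate over a prior rather than plug in the best-fit value, which pulls the ratio down toward figures like the post's 8:1.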
TurnTrout (9 points, 3d)

I remarked to my brother, Josh, that when most people find themselves hopefully saying "here's how X can still happen!", it's a lost cause and they should stop grasping for straws and move on with their lives. Josh grinned, pulled out his cryonics necklace, and said "here's how I can still not die!"
ChristianKl (6 points, 3d)

Running simulations of driving situations is a key feature of how machine learning models for driverless cars get trained. Maybe a key reason why humans dream is to allow us to simulate situations and learn to act in them?
DanielFilan (3 points, 3d)

An interesting tension: it's kind of obvious from a micro-econ view that group houses should have Pigouvian taxes on uCOVIDs[*] (where I pay housemates for the chance I get them sick) rather than caps on how many uCOVIDs everyone can incur per week, and of course both of these are better than "just sort of be reasonable" or having no system. But uCOVID caps are nice in that they make it significantly easier to coordinate with other houses: it's much easier to figure out how risky interacting with somebody is when they can just tell you their cap, rather than having to guess how much they'll interact with others between now and when you'll meet them. It's not totally clear to me which system ends up being better, although I think for people who have stable habits Pigouvian taxes end up better.

[*] a uCOVID, short for microCOVID, is a one in a million chance of catching COVID-19.
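A minimal sketch of the Pigouvian version, assuming a hypothetical dollar value per housemate COVID case and a within-house attack rate; both constants and all names are illustrative, not numbers from the post.

```python
HARM_PER_CASE_USD = 30_000  # assumed cost of one housemate catching COVID
ATTACK_RATE = 0.3           # assumed chance you infect a given housemate if sick

def pigouvian_payment(ucovids_this_week, n_housemates):
    """What you pay the house for the expected harm your week's risk imposes."""
    p_infected = ucovids_this_week / 1_000_000   # definition of a uCOVID
    expected_cases = p_infected * ATTACK_RATE * n_housemates
    return expected_cases * HARM_PER_CASE_USD
```

Unlike a cap, the payment scales smoothly with risk, which is why it is the micro-econ favourite; the cap's advantage is that a single number is easy to communicate to other houses.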
Troy Macedon (2 points, 3d): How to simulate observers without risking getting simulated yourself

A common argument for the simulation hypothesis is that if we simulate too many observers relative to our unsimulated population, we'll end up in a state where most observers are simulated; therefore, by the self-indication assumption, we're already in a situation of being simulated ourselves! I'll refer to this as the simulation tragedy. I'll introduce a few methods we can use to avoid such an outcome. Why? Because I believe it's better to be unsimulated than simulated.

The goal here is to act, with regard to man-made simulations, such that the measure of unsimulated human experiences always and forever outweighs the measure of simulatee experience. And not just within the sampled situation, but in the indicated situation entirely. As in, there'd be no conceivable way for us to be simulated.

The most brute-force way to avoid the simulation tragedy is to enforce a global law that limits the measure of simulatee experiences to below, say, 1% of non-simulatee experiences. We'd also assume that every other civilization capable of simulations would be rational enough to follow a protocol of at least that extent. This would be simple to enforce, since computing power draws electricity and produces heat, which can be tracked by governments who can then audit the processing centers. A very crude solution, but it'd work and we'd retain our unsimulated status. There would also be an incentive for simulation companies to fund the creation of "human experience" in meatspace to open up more licensed measure for simulatee experience.

If capping the number of simulatees that have, do, or ever will exist becomes unfavorable or nonviable, we can instead cap the thought-space of the simulatees. Trivially, if we prevent simulatees from using anthropic reasoning, or any method of self-location, then the only thing you'll need to do to ensure your status as a non-simulatee is to just self-locate

Sunday, November 22nd 2020

Alex Ray (7 points, 4d)

Thinking more about the singleton risk / global stable totalitarian government risk from Bostrom's Superintelligence, human factors, and the theory of the firm. Human factors represent human capacities or limits that are unlikely to change in the short term: for example, the number of people one can "know" (for some definition of that term), limits to long-term and working memory, etc. Theory of the firm tries to answer "why are economies markets but businesses autocracies" and related questions. I'm interested in the subquestion of "what factors give the upper bound on coordination for a single business", related to "how big can a business be". I think this is related to "how big can an autocracy (robustly/stably) be", which is how it relates to the singleton risk.

Some thoughts this produces for me:

* Communication and coordination technology (telephones, email, etc.) that increases the upper bound on coordination for businesses ALSO increases the upper bound on coordination for autocracies/singletons.
* My belief is that the current max size (in people) of a singleton is much lower than the current global population.
* This weakly suggests that a large global population is a good preventative for a singleton.
* I don't think this means we can "war of the cradle" our way out of singleton risk, given how fast tech moves and how slow population moves.
* I think this does mean that any non-extinction event that dramatically reduces population also dramatically increases singleton risk.
* I think that it's possible to get a long-term government aligned with the values of the governed, and "singleton risk" is the risk of an unaligned global government.

So I think I'd be interested in tracking two "c
ete (2 points, 4d)

Thinking about some things I may write. If any of them sound interesting to you, let me know and I'll probably be much more motivated to create it. If you're up for reviewing drafts and/or having a video call to test ideas, that would be even better.

* Memetics mini-sequence ([] has a few good things, but no introduction to what seems like a very useful set of concepts for world-modelling)
* Book Review: The Meme Machine (focused on general principles and memetic pressures toward altruism)
* Meme-Gene and Meme-Meme Co-evolution (focused on memeplexes and memetic defences/filters; could be just a part of the first post if both end up shortish)
* The Memetic Tower of Generate and Test (a set of ideas about the specific memetic processes not present in genetic evolution, inspired by Dennett's tower of generate and test)
* (?) Moloch in the Memes (even if we have an omnibenevolent AI god looking out for sentient well-being, things may get weird in the long term as ideas become more competent replicators/persisters, if the overseer is focusing on the good and freedom of individuals rather than memes. I probably won't actually write this, because I don't have much more than some handwavey ideas about a situation that is really hard to think about.)
* Unusual Artefacts of Communication (some rare and possibly good ways of sharing ideas, e.g. Conversation Menu, CYOA, Arbital Paths, a call for ideas. Maybe best as a question with a few pre-written answers?)

Saturday, November 21st 2020

AllAmericanBreakfast (4 points, 5d): An end run around slow government

The US recommended daily amount (RDA) of vitamin D is about 600 IU per day. This was established in 2011 and hasn't been updated since. The Food and Nutrition Board (FNB) of the Institute of Medicine at the National Academy of Sciences sets US RDAs. According to a 2017 paper, "The Big Vitamin D Mistake," the right level is actually around 8,000 IU/day, and the erroneously low level is due to a statistical mistake. I haven't yet been able to find out whether there is any transparency about when the RDA will be reconsidered. But 3 years is a long time to wait, especially when vitamin D deficiency is linked to COVID mortality. And if we want to be good progressives, we can also note that vitamin D deficiency is linked to race, and may be driving the higher rates of COVID death in black communities. We could call the slowness to update the RDA an example of systemic racism!

What do we do when a regulatory board isn't doing its job? Well, we can disseminate the truth over the internet. But then you wind up with an asymmetric information problem: reading the health claims of many people promising "the truth," how do you decide whom to believe? Probably you have the most sway in tight-knit communities, such as your family, your immediate circle of friends, and online forums like this one.

What if you wanted to pressure the FNB to reconsider the RDA sooner rather than later? Probably giving them some bad press would be one way to do it. This is a symmetric weapon, but this is a situation where we don't actually have anybody who really thinks that incorrect vitamin D RDA levels are a good thing. Except maybe literal racists who are also extremely informed about health supplements? In a situation where we're not dealing with a partisan divide, but only with bureaucratic inefficiency, applying pressure tactics seems like a good strategy to me. How do you start such a pressure campaign? Probably you reach out to leaders of

Thursday, November 19th 2020

MikkW (16 points, 7d)

Religion isn't about believing false things. Religion is about building bonds between humans, by means including (but not limited to) costly signalling. It happens that a ubiquitous form of costly signalling used by many prominent modern religions is belief taxes (insisting that the ingroup professes a particular, easily disproven belief as a reliable signal of loyalty), but this is not necessary for a religion to successfully build trust and loyalty between members. In particular, costly signalling must be negative-value for an individual (before the second-order benefits from the group dynamic), but need not be negative-value for the group, or for humanity. Indeed, the best costly sacrifices can be positive-value for the group or humanity, while negative-value for the performing individual. (There are some who may argue that positive-value sacrifices have less signalling value than negative-value sacrifices, but I find their logic dubious, and my own observations of religion seem to suggest positive-value sacrifice is abundant in organized religion, albeit intermixed with neutral- and negative-value sacrifice.)

The rationalist community is averse to religion because it so often goes hand in hand with belief taxes, which are counter to the rationalist ethos and would threaten to destroy much that rationalists value. But religion is not about belief taxes. While I believe sacrifices are an important part of the functioning of religion, a religion should avoid asking its members to make sacrifices that destroy what the collective values, and instead encourage costly sacrifices that contribute to the things we collectively value.
Daniel Kokotajlo (9 points, 7d)

Maybe a tax on compute would be a good and feasible idea?

* Currently the AI community is mostly resource-poor academics struggling to compete with a minority of corporate researchers at places like DeepMind and OpenAI with huge compute budgets. So maybe the community would mostly support this tax, as it levels the playing field. The revenue from the tax could be earmarked to fund "AI for good" research projects. Perhaps we could package the tax with additional spending for such grants, so that overall money flows into the AI community whilst reducing compute usage. This will hopefully make the proposal acceptable and therefore feasible.
* The tax could be set so that it is basically 0 for everything except AI projects above a certain threshold of size, and then it's prohibitive. To some extent this happens naturally, since compute is normally measured on a log scale: a tax of 1000% of the cost of compute won't be a big deal for academic researchers spending $100 or so per experiment ("Oh no! Now I have to spend $1,000! No big deal, I'll fill out an expense form and bill it to the university"), but it would be prohibitive for a corporation trying to spend a billion dollars to make GPT-5. The tax can also have a threshold such that only big-budget training runs get taxed at all, so that academics are completely untouched, as are small businesses, and big businesses making AI without the use of massive scale.
* The AI corporations, and most of all the chip manufacturers, would probably be against this. But maybe this opposition can be overcome.
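The thresholded variant of the proposal can be written down directly. The 1000% rate is the post's own example; the threshold value and names are placeholders, since the post doesn't specify a cutoff.

```python
TAX_RATE = 10.0             # the post's 1000% of compute cost
THRESHOLD_USD = 10_000_000  # hypothetical cutoff for a "big-budget training run"

def compute_tax(compute_cost_usd):
    """Zero below the threshold, 1000% of the cost above it."""
    taxable = max(0.0, compute_cost_usd - THRESHOLD_USD)
    return taxable * TAX_RATE
```

Under these numbers a $100 academic experiment or a $1M startup run pays nothing, while a $1B training run pays $9.9B in tax, which is exactly the "basically 0 for everything except the biggest projects, then prohibitive" shape the post describes.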
Rafael Harth (9 points, 7d)

More on expectations leading to unhappiness. I think the most important instance of this in my life has been the following pattern:

* I do a thing where there is some kind of feedback mechanism.
* The reception is better than I expected, sometimes by a lot.
* I'm quite happy about this, for a day or so.
* I immediately and unconsciously update my standards upward to consider that reception the new normal.
* I do a comparable thing, and the reception is worse than the previous time.
* I brood over this failure for several days, usually with a major loss of productivity.

OTOH, I can think of three distinct major cases in three different contexts where this has happened recently, and I think there were probably many smaller ones. Of course, if something goes worse than expected, I never think "well, this is now the new expected level", but rather "this was clearly an outlier, and I can probably avoid it in the future". But outliers can happen in both directions. The counter-argument here is that one would hope to make progress in life, but even under the optimistic assumption that this is happening, it's still unreasonable to expect things to improve monotonically.
Daniel Kokotajlo (8 points, 7d)

The other day I heard this anecdote: several years ago, someone's friend was dismissive of AI risk concerns, thinking that AGI was very far in the future. When pressed about what it would take to change their mind, they said their fire alarm would be AI solving Montezuma's Revenge. Well, now it's solved. What do they say? Nothing; if they noticed, they didn't say so. Probably if they were pressed on it they would say they were wrong before to call that their fire alarm. This story fits with the worldview expressed in "There's No Fire Alarm for AGI." I expect this sort of thing to keep happening well past the point of no return.
Mati_Roy (6 points, 7d)

There's the epistemic discount rate (e.g. the probability of simulation shutdown per year) and the value discount (e.g. you do the funner things first, so life is less valuable per year as you become older). Asking "what value discount rate should be applied" is a category error: "should" statements are about actions done towards values, not about values themselves. As for "what epistemic discount rate should be applied", that depends on things like the probability of death/extinction per year.

Wednesday, November 18th 2020

Raemon (12 points, 8d)

I notice that academic papers have stupidly long, hard-to-read abstracts. My understanding is that this is because there is some kind of norm that a paper's abstract be one paragraph, while the word-count limit tends to be much longer than a paragraph (250-500 words). Can... can we just fix this? Can we either say "your abstract needs to be a goddamn paragraph, which is like 100 words", or "the abstract is a cover letter that should be about one page long, and it can have multiple line breaks and that's fine"? (My guess is that the best equilibrium is: people keep doing the thing currently called abstracts, but treat them as "has to fit on one page" with paragraph breaks, and then also write a 2-3 sentence thing that's more like the single actual paragraph you'd read if you were skimming through a list of papers.)
Alex Ray (5 points, 8d): Future City Idea: an interface for safe AI control of traffic lights

We want a traffic light that:

* Can function autonomously if there is no network connection
* Meets some minimum timing guidelines (for example, green in a particular direction for no less than 15 seconds and no more than 30 seconds, etc.)
* Has a secure interface to communicate with city-central control
* Has sensors that provide feedback for measuring traffic efficiency or throughput

This gives constraints, and I bet an AI system could be trained to optimize efficiency or throughput within the constraints. Additionally, you can narrow the constraints (for example, only allowing 15 or 16 seconds of green) and slowly widen them in order to change flows gradually. This is the sort of thing Hash would be great for, simulation-wise. There are probably dedicated traffic simulators as well. At something like a quarter million dollars per traffic light, I think there's an opportunity here for a startup. (I don't know Matt Gentzel's LW handle, but credit for inspiration to him.)
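The "narrow constraints, widen slowly" idea reduces to a hard clamp around whatever the optimiser proposes; here is a minimal sketch using the post's example 15-30 second bounds (the function and parameter names are mine).

```python
def safe_green_seconds(proposed, lo=15.0, hi=30.0):
    """Hard safety envelope: the AI may propose any green duration, but the
    controller only ever actuates a value inside [lo, hi]."""
    return min(max(proposed, lo), hi)
```

Widening the envelope over time just means raising `hi` (or lowering `lo`) a little each day, while the clamp continues to guarantee the minimum timing rules even if the AI side misbehaves or the network connection drops.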

Tuesday, November 17th 2020

Mati_Roy (3 points, 9d)

Suggestion for retroactive prizes: pay the prize to the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved (given that money is probably not worth much to most dead people). "Undervalued" means the amount the post is worth minus the amount the writer received.
