All Posts


Saturday, June 12th 2021

Shortform
12 · Alex Ray · 17h

Intersubjective Mean and Variability. (Subtitle: I wish we shared more art with each other)

This is mostly a reaction to the (10y old) LW post: Things you are supposed to like [https://www.lesswrong.com/posts/4tzEAgdbNTwB6nKyL/things-you-are-supposed-to-like]. I think there are two common stories for comparing intersubjective experiences:

* "Mismatch": Alice loves a book, and found it deeply transformative. Beth, who otherwise has very similar tastes and preferences to Alice, reads the book and finds it boring and unmoving.
* "Match": Charlie loves a piece of music. Daniel, who shares a lot of Charlie's taste in music, listens to it and also loves it.

One way to unpack this is in terms of distributions:

* "Mean" - the shared intersubjective experiences, which we see in the "Match" case
* "Variability" - the difference in intersubjective experiences, which we see in the "Mismatch" case

Another way to unpack this is by whether the factors lie within the piece or within the subject:

* "Intrinsic" - factors that are within the subject, things like past experiences and memories and even what you had for breakfast
* "Extrinsic" - factors that are within the piece itself, and shared by all observers

And one more ingredient I want to point at is question substitution [https://www.lesswrong.com/posts/LHtMNz7ua8zu4rSZr/the-substitution-principle]. In this case I think the effect is more like "felt sense query substitution" or "received answer substitution", since it doesn't have an explicit question.

* When asked about a piece (of art, music, etc.) people will respond with how they felt -- which includes both intrinsic and extrinsic factors.

Anyway, what I want is better social tools for separating these out, in ways that let people share their interest and excitement in things. I think that these mismatches/misfirings (like the LW post that set this off) and the reactions to them cause a chilling effect, where…
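A minimal toy model of this decomposition (an editorializing sketch, not from the post; all numbers are made up): treat each person's felt response as an extrinsic piece-level mean plus intrinsic subject-level noise. The "Match" story lives in the mean, the "Mismatch" story in the spread.

```python
import random

def felt_response(piece_quality: float, intrinsic_sd: float) -> float:
    """One subject's response = extrinsic factor (the piece itself)
    plus an intrinsic factor (their history, mood, breakfast...)."""
    return piece_quality + random.gauss(0, intrinsic_sd)

# Many subjects experiencing the same piece:
piece_quality = 7.0   # hypothetical extrinsic "quality" of the piece
responses = [felt_response(piece_quality, intrinsic_sd=2.0) for _ in range(10_000)]

mean = sum(responses) / len(responses)                                   # "Match"
sd = (sum((r - mean) ** 2 for r in responses) / len(responses)) ** 0.5   # "Mismatch"
print(f"intersubjective mean ≈ {mean:.2f}, variability ≈ {sd:.2f}")
```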
12 · Alex Ray · 17h

How I would do a group-buy of methylation analysis. (N.B. this is "thinking out loud" and not actually a plan I intend to execute.)

Methylation is a pretty commonly discussed epigenetic factor related to aging. However, it might be the case that this is downstream of other longevity factors [https://www.lesswrong.com/posts/ui6mDLdqXkaXiDMJ5/core-pathways-of-aging?commentId=XWTXoxpf3kviZbx8o].

I would like to measure my epigenetics -- in particular, approximate rates/locations of methylation within my genome. This can be used to provide an approximate biological age correlate [https://genomebiology.biomedcentral.com/articles/10.1186/gb-2013-14-10-r115#Sec31].

There are different ways to measure methylation [https://www.neb.com/applications/epigenetics/identifying-dna-methylation], but one I'm pretty excited about that I don't hear mentioned often enough is the Oxford Nanopore sequencer [https://nanoporetech.com/]. The mechanism of the sequencer is that it does direct reads (instead of reading amplified libraries, which destroy methylation unless specifically treated for it), and off the device comes a time series of electrical signals, which are decoded into base calls with an ML model. Unsurprisingly, community members have been building their own base-caller models, including ones that are specialized to different tasks. So the community made a bunch of methylation base callers, and they've been found to be pretty good [https://www.nature.com/articles/s41467-021-23778-6].

So anyway, the basic plan is this:

* Extract a bunch of cells (probably blood, but could be other sources)
* Extract DNA from the cells
* Prep the samples
* Sequence w/ ONT and get raw data
* Use the combined model approach [https://www.nature.com/articles/s41467-021-23778-6/figures/1] to analyze the targets from this analysis [https://genomebiology.biomedcentral.com/articles/10.1186/gb-2013-14-10-r115#Sec40]

Why do I think this is cool? Mostly because ONT makes a $1k sequencer that can…
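As a sketch of the final analysis step: epigenetic clocks of the kind the linked Genome Biology paper describes are (penalized) linear models over per-CpG-site methylation fractions. The site IDs and weights below are invented placeholders, and real clocks use hundreds of sites plus a calibration transform, but once you have aggregated per-site methylation calls the arithmetic is roughly this simple.

```python
# Hypothetical coefficients for illustration only; a real clock fits
# hundreds of CpG-site weights on training data.
INTERCEPT = 20.0
WEIGHTS = {
    "cg00000001": 35.0,   # CpG site id -> regression weight (made up)
    "cg00000002": -12.5,
    "cg00000003": 8.0,
}

def estimated_age(betas: dict) -> float:
    """betas: CpG site id -> methylation fraction in [0, 1], e.g.
    aggregated from per-read methylation calls off the nanopore."""
    return INTERCEPT + sum(w * betas[site] for site, w in WEIGHTS.items())

print(estimated_age({"cg00000001": 0.6, "cg00000002": 0.4, "cg00000003": 0.7}))
# -> 20 + 21 - 5 + 5.6 = 41.6 (with these made-up numbers)
```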
7 · adamzerner · 1d

The other day Improve your Vocabulary: Stop saying VERY! [https://www.youtube.com/watch?v=PCoyTwltu5g] popped up in my YouTube video feed. I was annoyed.

This idea that you shouldn't use the word "very" has always seemed pretentious to me. What value does it add if you say "extremely" or "incredibly" instead? I guess those words have more emphasis and a different connotation, and can be better fits. I think they're probably a good idea sometimes. But other times people just want to use different words in order to sound smart.

I remember there was a time in elementary school when I was working on a paper with a friend. My job was to write it, and his job was to "fix it up and make it sound good". I remember him going in and changing words like "very", which I had used appropriately, to overly dramatic words like "stupendously". And I remember feeling annoyed at the end result of the paper because it sounded pretentious.

Here, though, I want to argue for something similar to "stop saying very": I want to argue for "stop saying think". Consider the following: "I think the restaurant is still open past 8pm". What does that mean? Are you 20% sure? 60%? 90%? Wouldn't it be useful if this ambiguity disappeared?

I'm not saying that "I think" is always ambiguous and bad. Sometimes it's relatively clear from the context that you mean 20% sure, not 90%. E.g. "I thhhhhinkkk it's open past 8pm?" But you're not always so lucky. I find myself in situations where I'm not so lucky often enough. And so it seems like a good idea in general to move away from "I think" and closer to something more precise.

I want to follow up with some good guidelines for what words/phrases you can say in various situations to express different degrees of confidence, as well as some other relevant things, but I am struggling to come up with such guidelines. Because of this, I'm writing this as a shortform rather than a regular post. I'd love to see someone else run with this idea and/or propose such guidelines.
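For concreteness, here is one shape such guidelines could take. The phrase-to-probability bands below are an illustrative assumption, not a proposed standard:

```python
# Illustrative bands only; the point is that each phrase commits the
# speaker to an explicit probability range, unlike a bare "I think".
CONFIDENCE_PHRASES = [
    (0.95, 1.01, "I'm all but certain"),   # upper bound 1.01 so p = 1.0 matches
    (0.80, 0.95, "I'm fairly confident"),
    (0.60, 0.80, "I'd guess"),
    (0.40, 0.60, "I really don't know, but maybe"),
    (0.00, 0.40, "I doubt"),
]

def phrase_for(p: float) -> str:
    for lo, hi, phrase in CONFIDENCE_PHRASES:
        if lo <= p < hi:
            return phrase
    raise ValueError("probability must be in [0, 1]")

print(phrase_for(0.9), "the restaurant is open past 8pm.")
# -> "I'm fairly confident the restaurant is open past 8pm."
```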
Wiki/Tag Page Edits and Discussion

Friday, June 11th 2021

Shortform
3 · kithpendragon · 2d

What if we thought of the Almighty Org Chart of Bureaucracy as less of a pyramid (with executive layers stacked on top) and more of a chandelier (with executives dangling uselessly below the functional bits)?
2 · bvbvbvbvbvbvbvbvbvbvbv · 1d

A METRIC FOR COMPARING SOCIAL CIRCLES

Epistemic status: just an idea I had on a walk; doesn't seem that stupid to me.

I have been thinking a bit about this topic lately, had an idea of a solution, and figured LW would be interested in pointing out the unavoidable flaws in the reasoning.

Here's the gist: find a formula to quantify, as objectively as possible, your filter bubble (also called social bubble or even social circle). One could also see this as measuring how much your social circle differs from random. The metric I chose to focus on is income in local currency units, but I think the idea is easily generalizable. For example, we could use the total number of years of education.

But why? One could use it to compare one's own bubble to other people's. I can see it being used as a wakeup call (i.e. it's one way to find out how privileged you are), or to judge a politician, or something.

Here's a simple algorithm I came up with (a sketch in code follows below):

1. Ask the person to write down the names of the 10 most influential people they see more than once every 2 months. It has to be people they physically interact with and exchange ideas with, and so on. Superficial friends don't count, and one-way relationships (watching someone on youtube, for example) don't count either. Family members don't count. Neither do neighbours (that would skew results too much).
2. Write down their income, or, if they live at their parents' expense, the average of their parents' income.
3. Sum the total income of your circle, add your own income, and divide by 11. The difference between that value and the median income of your area of residence is your SocialCircleScore.

You can compare this number to others' to better grasp the privileges that some may have without realizing it.

What do you think? Any idea of a better formula? What is missing? How would you see this being used? Ever heard of something like that? If so, I'd love to read about it.
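A direct transcription of the proposed algorithm as code, under its stated assumptions (exactly 10 qualifying contacts; dependents use their parents' averaged income). The incomes in the usage example are placeholders:

```python
from statistics import mean

def social_circle_score(own_income: float,
                        circle_incomes: list,
                        median_local_income: float) -> float:
    """Steps 1-3 of the post: average the 10 contacts' incomes together
    with your own, then compare against the local median income."""
    if len(circle_incomes) != 10:
        raise ValueError("the post's algorithm asks for exactly 10 people")
    bubble_mean = mean(circle_incomes + [own_income])  # total / 11
    return bubble_mean - median_local_income

# E.g. a circle earning well above the local median:
print(social_circle_score(40_000, [55_000] * 10, 32_000))  # -> 21636.36...
```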
1 · SoerenMind · 2d

Favoring China in the AI race

In a many-polar AI deployment scenario, a crucial challenge is to solve coordination problems between non-state actors: ensuring that companies don't cut corners, and monitoring them, to name just a few challenges. And in many ways, China is better than western countries at solving coordination problems within its borders. For example, it can use its authority over companies, as these tend to be state-owned, or owned by some fund that is owned by a fund that is state-owned.

Could this mean that, in a many-polar scenario, we should favor China in the race to build AGI?

Of course, the benefits of China-internal coordination may be outweighed by the disadvantages of Chinese leadership in AI. But these disadvantages seem smaller in a many-polar world, because many actors, not just the Chinese government, share ownership of the future.

Thursday, June 10th 2021

Shortform
6 · MikkW · 3d

In Zvi's most recent Covid-19 post [https://www.lesswrong.com/posts/xEFfbEMFHhtgseKz3/covid-6-10-somebody-else-s-problem], he puts the probability of a variant escaping mRNA vaccines and causing trouble in the US at 10% at most. I'm not sure I'm so optimistic.

One thing that gives reason to be optimistic is that we have yet to see any variant with substantial resistance to the vaccines, which might lead one to think that resistance just isn't something that is likely to come up. On the other hand, the virus had more than a year for more virulent strains to crop up while people were actively sheltering in place, and variants first came on the radar (at least for the population at large) around 9 months after the start of worldwide lockdowns, and a year after the virus was first noticed. In contrast, the vaccine has only been rolling out for half a year, and has only been in large-scale contact with the virus for maybe half that time, let's say a quarter of a year. It's maybe not so surprising that a resistant variant hasn't appeared yet.

Right now, there's a fairly large surface area between non-resistant strains of Covid and vaccinated humans. Many vaccinated humans will be exposed to virus particles, which will for the most part be easily defended against by the immune system. However, if it's possible for the virus to change in any way to reduce the immune response it faces, we will see this happen; and particularly in areas that are roughly half vaccinated and half unvaccinated, such a variant will have at least a slight advantage over other variants, and will start to spread faster than non-resistant variants (a toy model of this dynamic is sketched below). Again, it took a while for other variants to crop up, so the fact that we haven't seen this happen yet isn't much information.

The faster we are able to get vaccines into most arms in all countries, the less likely this is to happen. If most humans worldwide are vaccinated 6 months from now, there likely won't be much opportunity for…
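The toy model of that selection advantage (all parameters are invented for illustration): in a half-vaccinated population, a variant with partial immune escape faces fewer protected hosts, so its effective reproduction number is higher and it steadily takes over.

```python
vaccinated = 0.5      # fraction of the population vaccinated (assumed)
base_r = 1.0          # effective R among the unvaccinated (assumed)
vaccine_efficacy = {"wild": 0.9, "escape": 0.6}  # assumed protection levels

def effective_r(variant: str) -> float:
    # Hosts the variant can still infect: all unvaccinated people, plus
    # the vaccinated people the vaccine fails to protect against it.
    unprotected = (1 - vaccinated) + vaccinated * (1 - vaccine_efficacy[variant])
    return base_r * unprotected

cases = {"wild": 1000.0, "escape": 1.0}  # escape variant starts rare
for generation in range(20):
    cases = {v: n * effective_r(v) for v, n in cases.items()}

share = cases["escape"] / sum(cases.values())
print(f"escape-variant share after 20 generations: {share:.1%}")
```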
2 · Taleuntum · 3d

Take a proxy P and a value V. Based on past observations, P is correlated with V. Increase P! (Either directly or by introducing a reward for the agents inside the system for increasing P, who cares.) Two cases:

1. P does not cause V
2. P causes V

Case 1: Wow, Goodhart is a genius! Even though I had a correlation, I increased one variable and the other did not increase!

Case 2: Wow, you are pedantic. Obviously if the relationship between the variables is so special that P causes V, Goodhart's law won't apply. If I increase the amount of weight lifted (proxy), then obviously I will get visibly bigger muscles (value). Booring! (Also, I'm really good at seeing causal relationships even when they don't exist (human universal), so I will basically never feel surprise when I actually find one. That will be the expected outcome, so I will look strangely at anyone trying to test Goodhart's law on any pair of variables which have even a sliver of a chance of being in a causal relationship.)
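The two cases in miniature (an illustrative sketch; the variables and noise scales are made up). In both, P and V are observationally correlated, but only in case 2 does intervening on P actually move V:

```python
import random

def case1_intervene(p: float) -> float:
    """Case 1: a confounder C drives both P and V. Forcing P severs its
    link to C, so V just follows C and ignores our intervention."""
    c = random.gauss(0, 1)
    return c + random.gauss(0, 0.1)

def case2_intervene(p: float) -> float:
    """Case 2: P causes V (lift more weight -> bigger muscles)."""
    return p + random.gauss(0, 0.1)

print("case 1, set P = 10, V ≈", round(case1_intervene(10), 2))  # ~0: Goodhart bites
print("case 2, set P = 10, V ≈", round(case2_intervene(10), 2))  # ~10: "booring"
```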
2 · ChristianKl · 3d

I'm playing around with an evolutionary model for transposons and the transposons regularly kill my whole population...
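For readers who haven't seen this failure mode, a minimal sketch of how it happens (not ChristianKl's model; all parameters are invented): transposons duplicate within genomes faster than selection removes them, so the whole population's copy load, and fitness cost, rises together until no one can reproduce.

```python
import random

POP, GENS = 200, 100
COPY_RATE, COST = 0.2, 0.03   # per-copy duplication rate / per-copy fitness cost

pop = [1] * POP                # transposon copy count per individual
for gen in range(GENS):
    # Selection: reproduction weighted by fitness, which falls with copy load.
    weights = [max(0.0, 1 - COST * n) for n in pop]
    if sum(weights) == 0:
        print(f"population extinct at generation {gen}")
        break
    pop = random.choices(pop, weights=weights, k=POP)
    # Transposition: each copy independently may duplicate this generation.
    pop = [n + sum(random.random() < COPY_RATE for _ in range(n)) for n in pop]
else:
    print("population survived; mean copies:", sum(pop) / POP)
```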
1 · benwr · 3d

I'm interested in concrete ways for humans to evaluate and verify complex facts about the world. I'm especially interested in a set of things that might be described as "bootstrapping trust".

For example: Say I want to compute some expensive function f on an input x. I have access to a computer C that can compute f; it gives me a result r. But I don't fully trust C - it might be maliciously programmed to tell me a wrong answer. In some cases, I can require that C produce a proof that f(x) = r that I can easily check. In others, I can't. Which cases are which?

A partial answer to this question is "the complexity class NP". But in practice this isn't really satisfying. I have to make some assumptions about what tools are available that I do trust. Maybe I trust simple mathematical facts (and I think I even trust that serious mathematics and theoretical computer science track truth really well). I also trust my own senses and memory, to a nontrivial extent. Reaching much beyond that is starting to feel iffy. For example, I might not (yet) have a computer of my own that I trust to help me with the verification. What kinds of proof can I accept with the limitations I've chosen? And how can I use those trustworthy proofs to bootstrap other trusted tools?

Other problems in this bucket include "How can we have trustworthy evidence - say, videos - in a world with nearly perfect generative models?" and a bunch of subquestions of "Does debate scale as an AI alignment strategy?"

This class of questions feels like an interesting lens on some things that are relevant to some sorts of AI alignment work, such as debate and interpretability. It's also obviously related to some parts of information security and cryptography. "Bootstrapping trust" is basically just a restatement of the whole problem. It's not exactly that I think this is a good way to decide how to direct AI alignment effort; I just notice that it seems somehow like a "fresh" way of viewing things.
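A toy instance of the NP pattern in the second paragraph (an editor's example, not benwr's): finding a factor of n is expensive for me, but an untrusted machine can hand me a certificate (the factors) that I can check with arithmetic I already trust.

```python
from math import isqrt

def untrusted_factor(n: int) -> tuple:
    """Stand-in for the untrusted computer C (here: naive trial division).
    We never need to trust this code; we only check its output."""
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return 1, n  # n is prime (or 1)

def verify(n: int, certificate: tuple) -> bool:
    """Cheap check we can do ourselves with trusted multiplication."""
    p, q = certificate
    return 1 < p < n and p * q == n

n = 4171  # = 43 * 97
cert = untrusted_factor(n)
print(cert, "verified:", verify(n, cert))
```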

Wednesday, June 9th 2021

Personal Blogposts
Shortform
3 · steven0461 · 4d

Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth (William D. Nordhaus) [https://www.nber.org/system/files/working_papers/w21547/w21547.pdf]

Has anyone looked at this? Nordhaus claims current trends suggest the singularity is not near, though I wouldn't expect current trends outside AI to be very informative. He does seem to acknowledge x-risk in section Xf, which I don't think I've seen from other top economists.

Tuesday, June 8th 2021

Frontpage Posts
Shortform
47 · Rob Bensinger · 5d

Shared with permission, a Google Doc exchange confirming Eliezer still finds the arguments for alignment optimism, slower takeoffs, etc. unconvincing.

Caveat: this was a private reply I saw and wanted to share (so people know EY's basic epistemic state, and therefore probably the state of other MIRI leadership). This wasn't an attempt to write an adequate public response to any of the public arguments put forward for alignment optimism or non-fast takeoff, etc., and isn't meant to be a replacement for public, detailed, object-level discussion. (Though I don't know when/if MIRI folks plan to produce a proper response, and if I expected such a response soonish I'd probably have just waited and posted that instead.)
10 · habryka · 5d

This seems like potentially a big deal: https://mobile.twitter.com/DrEricDing/status/1402062059890786311

> Troubling—the worst variant to date, the #DeltaVariant is now the new fastest growing variant in US. This is the so-called “Indian” variant #B16172 that is ravaging the UK despite high vaccinations because it has immune evasion properties. Here is why it’s trouble—Thread. #COVID19
3 · Qria · 5d

Why is longevity not the number 1 goal for most humans? Any goal you have would be better achieved with sufficient longevity. Naturally, eternal life is the first goal of my life.

But to achieve this, a global cooperative effort would be required to push the science forward. Therefore, nowadays I'm mostly thinking about why longevity seems not to be among most people's concerns. In my worldview, longevity should be up there with ESGs in decision-making processes. But in reality, no one really talks about it.

In conclusion, I have two questions: Is putting longevity over any other goal a rational decision? And if so, why isn't the general population on board with it?
2 · Viliam · 5d

There is this meme about Buddhism being based on experience, where you can verify everything firsthand, etc. I challenge the fans of Buddhism to show me how they can walk through walls, walk on water, fly, remember their past lives, teleport across a river, or cause an earthquake.
1 · Choosing Beggars · 5d

Voting patterns always seem to guide behaviors. I've been playing this game for a long time. What I've learned from this site is that I shouldn't speak about topics that I haven't spent as much time on as others here have. Others have an established history of writing on this site and their blogs, but I haven't, and my history outside of this platform is taken into account too, so I don't get the benefit of the doubt that most other new users have. As such, because of my lack of expertise and the unbalanced nature of my exposure, it is highly discouraged for me to post anything less than top-notch-quality writing, relative to my abilities. All I'm saying is that you are asking for too much.

Monday, June 7th 2021

Shortform
4 · MakoYass · 6d

Noticing I've been operating under a bias where I notice existential-risk precursors pretty easily (e.g. biotech, advances in computing hardware), but I notice no precursors of existential safety. To me it is as if technologies that tend to do more good than harm, or at least would improve our odds by their introduction, social or otherwise, do not exist. That can't be right, surely?...

When I think about what they might be... I find only cultural technologies, or political conditions: the strength of global governance, the clarity of global discourses, perhaps the existence of universities. But that can't be it. These are all low-hanging fruit, things that already exist. Differential progress is about what could be made to exist.
3 · lsusr · 6d

…because I used to work as a street magician.

Sunday, June 6th 2021

Shortform
2 · Viliam · 7d

I started a new blog on Substack. The first article is not related to rationality, just some ordinary Java programming: Using Images in Java [https://kittenlord.substack.com/p/using-images-in-java].

Outside view suggests that I start many projects but complete few. If this blog turns out to be an exception, the expected content is mostly programming and math, but potentially anything I find interesting. The math stuff will probably be crossposted to LW; the programming stuff probably not -- the reason is that the math is more general and I am kinda good at it, while the programming articles will be narrowly specialized (like this one) and I am kinda average at coding. The decision will be made per article anyway.

When I started learning programming as a kid, my dream was to make computer games. Other than a few very simple ones I made during high school, I didn't seriously follow in this direction. Maybe it's time to restart the childhood dream. Game programming is different from the back-end development I usually do, so I will have to learn a few things. But maybe I can write about them while I learn. Then the worst case is that I will never make the games I imagine, but someone else with a similar dream may find my articles useful.

The math part will probably be about random topics that provoke my curiosity at the moment, with no overarching theme. At this moment, I have a half-written introduction to nonstandard natural numbers, but don't hold your breath, because I am really slow at writing articles.

Friday, June 4th 2021

Shortform
3 · MikkW · 9d

Rule without proportional representation is rule without representation. Taxation without proportional representation is taxation without representation.
2 · Douglas_Knight · 9d

I see many people say that we should have done vaccine challenge trials, that it would have been so much quicker. But we did do challenge trials. They were "approved" in September and actually began in February.

If you want fast trials, it makes just as much sense to demand that the regulators run regular trials fast. There is much more to gain on that front. The actual efficacy trials only took about 2 months* that would have been saved by challenge trials. Most of the time was spent not studying vaccines, but waiting for approval to move on to the next step of the trial, just as a year was spent waiting for approval for challenge trials. The criterion for moving from phase 2 to phase 3 is very simple and should not have taken any time at all, nor any explicit permission. It is perfectly reasonable for regulators not to want to trust the drug companies, but they can check the data after the fact. And if there are analyses that they did not foresee, they can do those after the new trials have already begun.

* The amount of time needed to show efficacy in a non-challenge trial depends on the prevalence of the disease. The actual duration of 2 months was not predicted ahead of time. The FDA's late addition of 2 months of safety data suggests that it was surprised how fast the efficacy data came in.

Also, challenge trials don't provide safety data, only efficacy. It's good to separate safety from efficacy and make an explicit decision, a decision that the FDA tried to avoid for half of the trial. When people say that challenge trials save time, they are ignoring this, implicitly endorsing having no such medium-term safety data. That's probably the right choice, but people who make it should say it loud, not dodge responsibility like the FDA.
1 · MikkW · 8d

Currently I'm making a "logobet": a writing system that aims to be to logographies as alphabets are to syllabaries [1]. Primarily, I want to use emoji for the symbols [2], but some important concepts don't have good emoji to express them. In those cases, I'm using kanji from either Japanese or Chinese to express the concept.

One thing that I notice is that the visual styles of emoji and kanji are quite different from each other. I wouldn't actually say it looks bad, but it is jarring. The emoji are also too bold, colourful, and detailed to really fit well as text (rather than as accompaniment to the text, as they are usually used today), though the colour is actually helpful in distinguishing symbols. Ideally, I would want a font to be made for the logobet that would render emoji and kanji (at least the ones that are used in the logobet) in a similar manner, with simple contours and subdued (but existent) colours. This would require changing both ⸤kanji, to have a fuller, more colourful form, and emoji, to be less detailed and have less bold colours⸥. But this will be downstream of actually implementing and publishing the first logobet.

[1] By breaking down concepts into component parts the way an alphabet breaks down syllables into component parts, a logobet can be more easily learned, using on the order of hundreds of symbols rather than tens of thousands. The benefit of using a concept-based, rather than phonetic, alphabet is that the system can be read and written by people from any background, without their having to learn each other's languages [1a].

[1a] We see this already in China, where populations that speak different Sinitic languages can all communicate with each other through the written Chinese script (which may be the only actively used language that is primarily written, not spoken). The main reason why I think this has not spread beyond East Asia is that kanji are too hard to learn, requiring months of effort, whereas most writing…
Wiki/Tag Page Edits and Discussion
