All of johnswentworth's Comments + Replies

Computing Natural Abstractions: Linear Approximation

Note to self: the summary dimension seems basically constant as the neighborhood size increases. There's probably a generalization of Koopman-Pitman-Darmois which applies.

Computing Natural Abstractions: Linear Approximation

Well, it worked reasonably well with ER graphs (which naturally embed into quite high dimensions), so it should definitely work at least that well. (I expect it would work better.)

Computing Natural Abstractions: Linear Approximation

Oh I see. Yeah, if either X or Y is unidimensional, then any linear model is really boring. They need to be high-dimensional to do anything interesting.

tailcalled (13h): They need to be high-dimensional for the linear models themselves to do anything interesting, but I think adding a large number of low-dimensional linear models might, despite being boring, still change the dynamics of the graphs to be marginally more realistic for settings involving optimization. X turns into an estimate of Y, and tries to control this estimate towards zero; that's a pattern that I assume would be rare in your graph, but common in reality, and it could lead to real graphs exhibiting certain "conspiracies" that the model graphs might lack (especially if there are many (X, Y) pairs, or many (individually unidimensional) Xs that all try to control a single common Y). But there's probably a lot of things that can be investigated about this. I should probably be working on getting my system for this working, or something. Gonna be exciting to see what else you figure out re natural abstractions.
Computing Natural Abstractions: Linear Approximation

All ways of modifying Y are only equivalent in a dense linear system. Sparsity (in a high-dimensional system) changes that. (That's a fairly central concept behind this whole project: sparsity is one of the main ingredients necessary for the natural abstraction hypothesis.)

tailcalled (14h): I think I phrased it wrong/in a confusing way. Suppose Y is unidimensional, and you have Y = f(g(X), h(X)). Suppose there are two perturbations i and j that X can emit, where g is only sensitive to i and h is only sensitive to j, i.e. g(j) = 0, h(i) = 0. Then because the system is linear, you can extract them from the rest: Y = f(g(X+ai+bj), h(X+ai+bj)) = f(g(X), h(X)) + a·f(g(i), 0) + b·f(0, h(j)). This means that if X only cares about Y, it is free to choose whether to adjust a or to adjust b. In a nonlinear system, there might be all sorts of things like moderators, diminishing returns, etc., which would make it matter whether it tried to control Y using a or using b; but in a linear system, it can just do whatever.
Computing Natural Abstractions: Linear Approximation

The main thing I expect from optimization is that the system will be adjusted so that certain specific abstractions work - i.e. if the optimizer cares about f(X), then the system will be adjusted so that information about f(X) is available far away (i.e. wherever it needs to be used).

That's the view in which we think about abstractions in the optimized system. If we instead take our system to be the whole optimization process, then we expect many variables in many places to be adjusted to optimize the objective, which means the objective itself is likely a natural abstraction, since information about it is available all over the place. I don't have all the math worked out for that yet, though.

Computing Natural Abstractions: Linear Approximation

I've been thinking a fair bit about this question, though with a slightly different framing.

Let's start with a neighborhood X. Then we can pick a bunch of different far-away neighborhoods Y_1, ..., Y_n, and each of them will contain some info about summary(X). But then we can flip it around: if we do a PCA on (Y_1, ..., Y_n) jointly, then we should expect to see components corresponding to summary(X), since all the variables which contain that information will covary.
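
As a quick sanity check of that intuition, here's a minimal sketch; the linear-Gaussian setup and all the sizes are toy assumptions of mine, not the actual experimental setup from the post. Several far-away neighborhoods each see a low-dimensional summary through their own random linear map plus independent local noise, and a PCA on all of them jointly shows roughly summary-dimension components standing out:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, k, dim, n_nbhd = 5000, 2, 20, 5  # toy sizes, chosen arbitrarily

# summary(X): the low-dimensional information that propagates far away
s = rng.normal(size=(n_samples, k))

# Each far-away neighborhood Y_i sees summary(X) through its own random
# linear map, plus lots of independent local noise.
Ys = [s @ rng.normal(size=(k, dim)) + rng.normal(size=(n_samples, dim))
      for _ in range(n_nbhd)]

# PCA on all the neighborhoods jointly
Y = np.hstack(Ys)
Y = Y - Y.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Y.T))[::-1]
print(eigvals[:6])  # expect ~k eigenvalues well above the noise floor of ~1
```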

Switching back to your framing: if X itself is large enough to contain multiple far-apart…

tailcalled (14h): Agree, I was mainly thinking of whether it could still hold if X is small. Though it might be hard to define a cutoff threshold.

Perhaps another way to frame it is, if you perform PCA, then you would likely get variables with info about both the external summary data of X, and the internal dynamics of X (which would not be visible externally). It might be relevant to examine things like the relative dimensionality for PCA vs SVD, to investigate how well natural abstractions allow throwing away info about internal dynamics. (This might be especially interesting in a setting where it is tiled over time? As then the internal dynamics of X play a bigger role in things.)

🤔 That makes me think of another tangential thing. In a designed system, noise can often be kept low, and redundancy is often eliminated. So the PCA method might work better on "natural" (random or searched) systems than on designed systems, while the SVD method might work equally well on both.
Computing Natural Abstractions: Linear Approximation

I'd be curious to know whether something like this actually works in practice. It certainly shouldn't work all the time, since it's tackling the #P-hard part of the problem pretty directly, but if it works well in practice that would solve a lot of problems.

Computing Natural Abstractions: Linear Approximation

I ran this experiment about 2-3 weeks ago (after the chaos post but before the project intro). I'd pretty strongly expected it to work from much earlier, though - e.g. back in December I gave a talk which had all the ideas in it.

Core Pathways of Aging

Eh, yes and no. I'm definitely on board with selection pressure as a major piece in cancer. But for purposes of finding the root causes of aging, the key question is "why does cancer become more likely with aging?", and my current understanding is that selection pressures don't play a significant role there.

Once a cancer is already underway, selection pressure for drug resistance or differentiation or whatever is a big deal. But at a young age, the mutation rate is low and problematic cells are cleared before they have time to propagate anyway. Those defen…

AllAmericanBreakfast (2d): Maybe there are multiple aging clocks.

  • Some run independently.
  • In other cases, clock A 'triggers' clock B.

Cancer selection might be triggered by the 'primary aging clock' you've proposed. Once that happens, you now have two clocks running simultaneously. However, the 'cancer clock' could still be potentially triggered even in the absence of the 'primary aging clock.'

Then the empirical question becomes when in the life cycle the 'cancer clock' starts ticking. Is it always running in the background, but DNA damage from other sources makes it speed up over time? Or, in the absence of the 'primary aging clock,' would the 'cancer clock' stay off for most people, most of the time, barring exposure to some potent carcinogen?

There's also a possibility of hard-to-specify combinations of physical and mental damage that are self-perpetuating, and operate independently of DNA damage. Each year, perhaps people face a risk of trauma that triggers a cycle of progressively self-destructive behavior, a 'behavioral dysfunction clock.' That's speculation, just meant to illustrate the idea.

It still seems conceptually important to focus attention on the 'primary aging clock' as a hypothetical target. I agree with you that it's plausible the 'primary aging clock' is the bottleneck. So this is more a move from "there's one aging clock" to "there might be more than one aging clock, but one of them is a lot more responsible for diseases of aging than the others."
Specializing in Problems We Don't Understand

Oh yeah, cruising Wikipedia lists is great. Definitely stumbled on multiple unknown unknowns that way. (Hauser's Law is one interesting example, which I first encountered on the list of eponymous laws.) Related: Exercises in Comprehensive Information Gathering.

The Case for Extreme Vaccine Effectiveness

Haven't finished reading this yet, but in reaction to the opening section... there's this claim that "One meal at a crowded restaurant is enough to give even a vaccinated person hundreds of microCovids". Which, regardless of whether it's true, sounds like it's probably the wrong way to think about things.

A core part of the microcovid model is that microcovids roughly add. You do a ten-microcovid activity, then a twenty-microcovid activity, and your total risk is roughly thirty microcovids. Put on a mask, and it cuts microcovids in half.
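
For concreteness, a trivial sketch of that additive model; the activity numbers are the ones from the example above, and the one-half mask multiplier is an illustrative assumption rather than the calculator's actual figure:

```python
activities = [10, 20]      # microcovids per activity
mask_factor = 0.5          # assume a mask roughly halves each activity's risk

print(sum(activities))                            # 30: risks add
print(sum(mask_factor * a for a in activities))   # 15: a mask halves the total
```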

But with a vaccine, I'd …

Wanting to Succeed on Every Metric Presented

Oh, I mean that part of the point of the post is to talk about what relative advantages/disadvantages rationality should have, in principle, if we're doing it right - as opposed to whatever specific skills or strategies today's rationalist community happens to have stumbled on. It's about the relative advantages of the rationality practices which we hopefully converge to in the long run, not necessarily the rationality practices we have today.

elriggs (3d): Oh! That makes sense as a post on its own. Listing pros and cons of current rationalist techniques could then be compared to your ideal version of rationality to see what's lacking (or points out holes in the "ideal version"). Also, "current rationality techniques" is ill-defined in my head and the closest I can imagine is the CFAR manual [https://www.google.com/search?q=lesswrong+CFAR+manual&oq=lesswrong+CFAR+manual&aqs=chrome..69i57j69i59l3j0i271j69i60l3.4847j0j7&sourceid=chrome&ie=UTF-8], though that is not the list I would've made.
Wanting to Succeed on Every Metric Presented

Do you have specific skills of rationality in mind for that post?

No, which is part of the point. I do intend to start from the sequences-esque notion of the term (i.e. "rationality is systematized winning"), but I don't necessarily intend to point to the kinds of things LW-style "rationality" currently focuses on. Indeed, there are some things LW-style "rationality" currently focuses on which I do not think are particularly useful for systematized winning, or are at least overemphasized.

elriggs (3d): I don't know what point you're referring to here. Do you mean that listing specific skills of rationality is bad for systematized winning? I also want to wrangle more specifics from you, but I can just wait for your post :)
Wanting to Succeed on Every Metric Presented

I actually just started drafting a post in this vein. I'm framing the question as "what are the relative advantages and disadvantages of explicit rationality?". It's a natural follow-up to problems we don't understand: absent practice in rationality and being agenty (whether we call it that or not), we'll most likely end up as cultural-adaptation-executors. That works well mainly for problems where cultural/economic selection pressures have already produced good strategies. Explicit rationality is potentially useful mainly when that's not the case - either…

Yoav Ravid (4d): Nice! Looking forward to reading your post. I wrote a few notes myself under the title "Should You Become Rational"*, but it didn't turn into enough for a post. One of the things that I wanted to consider is whether it's someone's duty to become more rational, which I think is an interesting question (and it's a topic that was discussed on LW, see Your Rationality is My Business [https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business]). My current conclusion is that your obligation to become more rational is relative to how much influence you have or wish to have on the world and on other people. Of course, even if true, this point might be slightly moot, since only someone who is already interested in rationality might agree with it; others are unlikely to care.

* "Rational" pretty much for lack of a better word that still kept it short; didn't want to use "rationalist", as that's an identification as part of a specific group, which isn't the point
elriggs (4d): Regarding "problems we don't understand", you pointed out an important meta-systematic skill: figuring out when different systems apply and don't apply (by applying new systems learned to a list of 20 or so big problems). The new post you're alluding to sounds interesting, but rationality is a loaded term. Do you have specific skills of rationality in mind for that post?
Specializing in Problems We Don't Understand

If so, I'm really interested in techniques you have in mind for starting from a complex mess/intuitions and getting to a formal problem/setting.

This deserves its own separate response.

At a high level, we can split this into two parts:

  • developing intuitions
  • translating intuitions into math

We've talked about the translation step a fair bit before (the conversation which led to this post). A core point of that post is that the translation from intuition to math should be faithful, and not inject any "extra assumptions" which weren't part of the intuition. So, for …

Specializing in Problems We Don't Understand

I disagree with the claim that "identifying good subproblems of well-posed problems is a different skill from identifying good well-posed subproblems of a weird and not formalized problem", at least insofar as we're focused on problems for which current paradigms fail.

P vs NP is a good example here. How do you identify a good subproblem for P vs NP? I mean, lots of people have come up with subproblems in mathematically-straightforward ways, like the strong exponential time hypothesis or P/poly vs NP. But as far as we can tell so far, these are not very goo…

Specializing in Problems We Don't Understand

One aspect I feel is not emphasized enough here is the skill of finding/formulating good problems.

That's a good one to highlight. In general, there's a lot of different skills which I didn't highlight in the post (in many cases because I haven't even explicitly noticed them) which are still quite high-value.

The outside-view approach of having a variety of problems you don't really understand should still naturally encourage building those sorts of skills. In the case of problem formulation, working on a wide variety of problems you don't understand will na…

adamShimi (5d): Fair enough. But identifying good subproblems of well-posed problems is a different skill from identifying good well-posed subproblems of a weird and not formalized problem. An example of the first would be to simplify the problem as much as possible without making it trivial (a classic technique in algorithm analysis and design), whereas an example of the second would be defining the logical induction criterion, which creates the problem of finding a logical inductor (not sure that happened in this order; this is part of what's weird with problem formulation).

And I have the intuition that there are way more useful and generalizable techniques for the first case than the second case. Do you feel differently? If so, I'm really interested in techniques you have in mind for starting from a complex mess/intuitions and getting to a formal problem/setting.
Specializing in Problems We Don't Understand

Actually building bridges and Actually preventing infections require not only improvements in applied science, but also human coordination. In the former we've improved, in the latter we've stagnated.

+1 to this. The difficulties we have today with bridges and infections do not seem to me to have anything to do with the physical systems involved; they are all about incentives and coordination problems.

(Yes, I know lots of doctors talk about how we're overly trigger-happy with antibiotics, but I'm pretty sure that hospital infection rates have a lot more to…

ChristianKl (6d): It's the sterilization that creates the niche in which those bacteria thrive, because they face less competition than they would face in other normal buildings, which are populated by diverse bacteria. No matter how much you sterilize, you are not going to get to zero bacteria in a space occupied by humans, and when humans are a primary vector of moving bacteria around in the space, you select for bacteria that actually interact with humans.
Core Pathways of Aging

Oh wow, that's really neat. I doubt that it has any relevance to the aging mechanisms of multicellular organisms, but very cool in its own right. And definitely not transposon-mediated.

Covid 4/9: Another Vaccine Passport Objection

Chronicle of a parent who finally lost it when their child’s school closed for ten days due to two Covid cases… for a fifth time. New York City is finally changing that rule. It was even more absurd than it sounds...

I am confused about why this is bad. It seems like an all-around excellent rule: if you see cases and you know where they came from, then just shut down the places where they came from - i.e. affected classrooms and close contacts. If you see cases and you don't know where they came from, especially after explicitly attempting to trace, then th…

rockthecasbah (7d): If the school shuts down, the kids will just go back to the street. We do not send kids back into school when we observe transmission from kids being out of school. The evidence from Emily Oster suggests that there isn't much difference in transmission.

Also, I would argue that a small amount of transmission is worth educating our children, especially with 70-80% of the vulnerable vaccinated. Overall, dividing life-years lost by transmissions comes to 2 weeks per confirmed infection, so call that the base cost. Reduce it by 75% for targeted vaccination and each case is costing ~3 days of a person's life. And the student infections are the least dangerous kind. I could go either way on it if the alternative were no transmission. Since the alternative is about the same transmission rate but somewhere else, I say keep the schools open.

OTOH, the incentive argument is much stronger. Maybe the collective punishment forces the school to internalize the cost of transmission, leading to a Pareto-improving safe-school equilibrium.
MichaelLowe (8d): Agreed. In addition, the quoted article seems to be summarizing the policy incorrectly: it writes that the school will be closed even when there is no evidence of in-school transmission, but that is wrong: if contact tracers find the source to be outside the school, the school will (presumably) not be closed.
Testing The Natural Abstraction Hypothesis: Project Intro

On the role of values: values clearly do play some role in determining which abstractions we use. An alien who observes Earth but does not care about anything on Earth's surface will likely not have a concept of trees, any more than an alien which has not observed Earth at all. Indifference has a similar effect to lack of data.

However, I expect that the space of abstractions is (approximately) discrete. A mind may use the tree-concept, or not use the tree-concept, but there is no natural abstraction arbitrarily-close-to-tree-but-not-the-same-as-tree. There…

TekhneMakre (7d):

> There is no continuum of tree-like abstractions.

Some possibly related comments, on why there might be discrete clusters: https://www.lesswrong.com/posts/2J5AsHPxxLGZ78Z7s/bios-brakhus?commentId=hPfEp5r2K5BsfNe4F
Core Pathways of Aging

Bacteria do age though; even seemingly-symmetrical divisions yield one “parent” bacterium that ages and dies.

Do you have a reference on that? I'm familiar with how it works with budding yeast, but I've never heard of anything like that in a prokaryote.

Aiyen (6d): https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0030058 This is the source I found. It’s fairly old, so if you’ve found something that supersedes it I’d be interested.
Testing The Natural Abstraction Hypothesis: Project Intro

Also interested in helping on this - if there's modelling you'd want to outsource.

Here's one fairly-standalone project which I probably won't get to soon. It would be a fair bit of work, but also potentially very impressive in terms of both showing off technical skills and producing cool results.

Short somewhat-oversimplified version: take a finite-element model of some realistic objects. Backpropagate to compute the jacobian of final state variables with respect to initial state variables. Take a singular value decomposition of the jacobian. Hypothesis: th…
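
For flavor, here's a minimal sketch of the computational core, with a toy damped-diffusion lattice standing in for the finite-element model; the dynamics, sizes, and coefficients are all placeholder assumptions of mine, not part of the project spec:

```python
import jax
import jax.numpy as jnp

def step(x):
    # toy damped nearest-neighbor dynamics, standing in for a real FEM step
    return 0.9 * x + 0.05 * (jnp.roll(x, 1) + jnp.roll(x, -1) - 2 * x)

def simulate(x0, n_steps=50):
    x = x0
    for _ in range(n_steps):
        x = step(x)
    return x  # final state as a function of initial state

x0 = jnp.zeros(100)
# Jacobian of final state with respect to initial state, via autodiff
J = jax.jacobian(simulate)(x0)
# Large singular values ~ the directions of initial-state information
# which survive to the final state
singular_values = jnp.linalg.svd(J, compute_uv=False)
print(singular_values[:10])
```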

Testing The Natural Abstraction Hypothesis: Project Intro

Re: dual use, I do have some thoughts on exactly what sort of capabilities would potentially come out of this.

The really interesting possibility is that we end up able to precisely specify high-level human concepts - a real-life language of the birds. The specifications would correctly capture what-we-actually-mean, so they wouldn't be prone to goodhart. That would mean, for instance, being able to formally specify "strawberry on a plate" in non-goodhartable way, so an AI optimizing for a strawberry on a plate would actually produce a strawberry on a plate…

TekhneMakre (7d): I think there's an ambiguity in "concept" here, that's important to clarify re/ this hope. Humans use concepts in two ways: 1. as abstractions in themselves, like the idea of an ideal spring which contains its behavior within the mental object, and 2. as pointers / promissory notes towards the real objects, like "tree".

Seems likely that any agent that has to attend to trees will form the ~unique concept of "tree", in the sense of a cluster of things, and minimal sets of dimensions needed to specify the relevant behavior (height, hardness of wood, thickness, whatever). Some of this is like use (1): you can simulate some of the behavior of trees (e.g. how they'll behave when you try to cut them down and use them to build a cabin). Some of this is like use (2): if you want to know how to grow trees better, you can navigate to instances of real trees, study them to gain further relevant abstractions, and then use those new abstractions (nutrient intake, etc.) to grow trees better.

So what do we mean by "strawberry", such that it's not goodhartable? We might mean "a thing that is relevantly naturally abstracted in the same way as a strawberry is relevantly naturally abstracted". This seems less goodhartable if we use meaning (2), but that's sort of cheating by pointing to "what we'd think of these strawberries upon much more reflection in many more contexts of relevance". If we use meaning (1), that seems eminently goodhartable.
Core Pathways of Aging

Minimal cell experiments (making cells with as small a genome as possible) have already been done successfully. This presumably removes transposons, and I have not heard that such cells had abnormally long lifespans.

The minimal cell experiments were done with mycoplasma, which (as far as I know) does not age. More generally, as I understand it, most bacteria don't age, at least not in any sense similar to animals.

Also, I expect wild-type mycoplasma already had no transposons in its genome, since the organism evolved under very heavy evolutionary pressure for a small genome. (That's why it was chosen for the minimal cell experiments.)

Aiyen (8d): An initial search doesn’t confirm whether or not mycoplasma age. Bacteria do age though; even seemingly-symmetrical divisions yield one “parent” bacterium that ages and dies. If mycoplasma genuinely don’t, that would be fascinating and potentially yield valuable clues on the aging mechanism.
Core Pathways of Aging

The key idea here is the difference between "local" vs "nonlocal" changes in a multistable system - moving around within one basin vs jumping to another one. The prototypical picture: [image: a potential landscape with two basins, one stable state per basin]

For your finger example, one basin would be with-finger, one basin without-finger. For small changes (including normal cell turnover) the system returns to its with-finger equilibrium state, without any permanent changes. In order to knock it into the other state, some large external "shock" has to push it - e.g. cutting off a finger. Once in the other state, it's there perman…
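
A minimal numerical illustration of the basin picture; the dynamics dx/dt = x - x³ and the shock sizes are my own toy choices, not anything from the post. A small shock decays back to the same equilibrium, while a large shock jumps the system to the other basin, permanently:

```python
# Bistable toy system: dx/dt = x - x**3 has stable equilibria at x = +1 and
# x = -1 (e.g. "with-finger" / "without-finger"), with an unstable ridge at 0.
def simulate(x0, shock, dt=0.01, steps=2000):
    x = x0
    for i in range(steps):
        if i == 500:
            x += shock  # one-time external perturbation
        x += dt * (x - x**3)
    return x

print(simulate(1.0, shock=-0.5))  # small shock: relaxes back to ~+1
print(simulate(1.0, shock=-1.5))  # large shock: settles at ~-1, the other basin
```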

Core Pathways of Aging

My understanding is that transposon repression mechanisms (like piRNAs) are dramatically upregulated in the germ line. They are already very close to 100% effective in most cells under normal conditions, and even more so in the germ line, so that most children do not have any more transposons than their parents.

(More generally, my understanding is that germ line cells have special stuff going to make sure that the genome is passed on with minimal errors. Non-germ cells are less "paranoid" about mutations.)

Once the rate is low enough, it's handled by natural selection, same as any other mutations.

Core Pathways of Aging

A private message (from someone who may name themselves if they wish) asked about the claim that "the effects of senolytics rapidly wear off once the drug stops being administered".

This is a lower-confidence claim than most in this piece; I do not have a study on hand directly proving it. The vast majority of papers on senolytics either use regular administration (frequency ~1 per 2 weeks or faster) or do a short regimen and then measure results within ~2 weeks; the ubiquity of those practices is itself a significant piece of evidence here. If a single dos…

How do we prepare for final crunch time?

Re: picking up new tools, skills and practice designing and building user interfaces, especially to complex or not-very-transparent systems, would be very-high-leverage if the tool-adoption step is rate-limiting.

Eli Tyre (17d): I suspect that it becomes more and more rate-limiting as technological progress speeds up. Like, to a first approximation, I think there's a fixed cost to learning to use and take full advantage of a new tool. Let's say that cost is a few weeks of experimentation and tinkering. If importantly new tools are invented on a cadence of once every 3 years, that fixed cost is negligible. But if importantly new tools are dropping every week, the fixed cost becomes much more of a big deal.
How do we prepare for final crunch time?

Relevant topic of a future post: some of the ideas from Risks From Learned Optimization or the Improved Good Regulator Theorem offer insights into building effective institutions and developing flexible problem-solving capacity.

Rough intuitive idea: intelligence/agency are about generalizable problem-solving capability. How do you incentivize generalizable problem-solving capability? Ask the system to solve a wide variety of problems, or a problem general enough to encompass a wide variety.

If you want an organization to act agenty, then a useful technique …

Core Pathways of Aging

Great comments!

In dynamical system terms, I'd call the MAF scenario a single bistable feedback loop with many redundant components. ("Redundant" in the sense that many component subsets are sufficient to support the bistable feedback loop.) The senescence feedback loop is an example of this: there's multiple components, and only a subset are needed to support the state change. For instance, either mitochondrial dysfunction or transposon activation would be sufficient to trigger the state change, and either one will cause the other once the state change is …

Transparency Trichotomy

This post seems to me to be beating around the bush. There's several different classes of transparency methods evaluated by several different proxy criteria, but this is all sort of tangential to the thing which actually matters: we do not understand what "understanding a model" means, at least not in a sufficiently-robust-and-legible way to confidently put optimization pressure on it.

For transparency via inspection, the problem is that we don't know what kind of "understanding" is required to rule out bad behavior. We can notice that some low-level featur…

Mark Xu (19d): I agree it's sort of the same problem under the hood, but I think knowing how you're going to go from "understanding understanding" to producing an understandable model controls what type of understanding you're looking for. I also agree that this post makes ~0 progress on solving the "hard problem" of transparency, I just think it provides a potentially useful framing and creates a reference for me/others to link to in the future.
Core Pathways of Aging

In terms of experimental endpoints, would this mainly just be an experiment to see how long the mice live? If so, that does seem like a high-upside experiment which even someone with relatively little domain knowledge could just go do. The main investment would be time - it would take at least a couple years of mouse-care, and hopefully longer.

If the project were undertaken by someone with more domain expertise, the main value-add (relative to the bare-minimum version of the experiment) would probably be in checking more endpoints, especially as a debuggin…

AllAmericanBreakfast (18d): It does sound like this research is already planned or underway [https://www.brown.edu/news/2016-09-12/lifespan]. The wording is a little ambiguous as to whether the CRISPR approach is actually being pursued, or whether they're just floating the idea. Working with flies first makes sense, since it gives you a faster feedback loop on whether transposon elimination affects lifespan.

Stephen Helfand, the researcher quoted in the article, appears not to have updated his publication page since 2016, but you can find his later works on Google Scholar by searching his name (SL Helfand) [https://scholar.google.com/scholar?q=%22SL+Helfand%22&hl=en&as_sdt=0%2C48&as_ylo=2016&as_yhi=].

I've emailed him to ask whether this idea has been acted upon. I'll post back here if I hear from him. In the meantime, I'm going to investigate the work of the followup project and the leaders associated with it.
What is the Difference Between Cheerful Price and Shadow Price?

Suppose I have a state consisting of (n_a, n_b): number of apples, number of bananas. My utility is U = u_a(n_a) + u_b(n_b) - i.e. I have no terminal desire for money, and my utility for apples and bananas is separable (aka a sum, with each term dependent on only one of the two). I also have a budget constraint: p_a·n_a + p_b·n_b ≤ M, i.e. price of apples times number of apples plus price of bananas times number of bananas is at most M, the amount of money I start with.

A useful technique for this sort of problem is to separate it into two optimization problems…
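
To make the shadow price concrete in this setup, here's a minimal numerical sketch; the specific utility functions, prices, and budget are made-up examples of mine, just to have something to compute with:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical separable utilities and prices (illustrative only)
u_a = lambda n: np.log(1 + n)       # utility from apples
u_b = lambda n: 2 * np.log(1 + n)   # utility from bananas
p_a, p_b = 1.0, 2.0                 # prices

def max_utility(M):
    """Maximize u_a(n_a) + u_b(n_b) subject to p_a*n_a + p_b*n_b <= M."""
    res = minimize(
        lambda x: -(u_a(x[0]) + u_b(x[1])),
        x0=[1.0, 1.0],
        bounds=[(0, None), (0, None)],
        constraints=[{"type": "ineq",
                      "fun": lambda x: M - p_a * x[0] - p_b * x[1]}],
    )
    return -res.fun

# Shadow price of money = marginal utility of relaxing the budget constraint,
# estimated by finite differences; it matches the Lagrange multiplier at the
# optimum (roughly 0.23 for these toy numbers).
M, eps = 10.0, 1e-3
print((max_utility(M + eps) - max_utility(M)) / eps)
```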

Zachary Robertson (19d): Your example is interesting and clarifies exchange rates. However, there's an interpretive point I'd like to focus on. When you move a constraint, in this case with price, the underlying equilibrium of the optimization shifts. From this perspective your usage of the word 'barely' stops making sense to me. If you were to 'overshoot' you wouldn't be optimal in the new optimization problem.

At this point I understand that the cheerful price will be equivalent to or more than the shadow price. You want to be able to shift the equilibrium point and have slack left over. It just seems obvious, to me, that the shadow price isn't an exactly measurable thing in this context, and so you'd naturally be led to make a confidence interval (belief) for it. The cheerful price is just the upper estimate on that. Hence, I'm surprised why this is being treated as a new / distinct concept.
Core Pathways of Aging

I would love to see a study like that.

Core Pathways of Aging

Solid reasoning.

Transposon activity is indeed believed to be repressed in the gonads to a much greater extent than elsewhere. I've also seen a few papers talking about health problems in the children of old parents, though I don't know as much about that.

Core Pathways of Aging

Yeah, I haven't read up on the topic in depth, but there's a few toolkits specifically intended for sequencing transposons. So it's probably not something which would require major breakthroughs at this point, but it does require specialized tools/knowledge, rather than just the standard sequencing toolkit.

Core Pathways of Aging

This suggests an interesting way to test the theory. JCVI had their "minimal cell" a few years back: they took a bacterium with an already-pretty-small genome, stripped out everything they could while still maintaining viability, then synthesized a plasmid with all the genes and promoters but with the "junk" DNA between them either removed or randomized (to make sure there was no functionality hiding in there which they didn't know about), and grew the bacteria with the synthesized plasmid. More recently, they have a project to do something similar with yea…

NaiveTortoise (19d): Good point, this also suggests that Genome Project-Write [https://engineeringbiologycenter.org/] is an important project.
Core Pathways of Aging

Yup, exactly right. This would be the most direct possible test of the hypothesis.

Re: thymus, this study found that a mitochondrially-targeted antioxidant prevented thymic involution, so there is at least some evidence that thymic involution is caused by the same core pathways. Though the timing of thymic involution is pretty suspicious when compared to the other core-pathway diseases.

Core Pathways of Aging

My guess would be transposon suppression rather than evolving away all of the transposons - upregulating existing repression mechanisms would be easier than removing every single active transposon copy. Though I'd still be interested in the test - even if suppression is the main mechanism, I'd still be interested to see how the number of transposons in naked mole rats compare to other rodents. (It is a nontrivial test to run, though - DNA sequencing is particularly unreliable when it comes to transposons, because there are so many near-copies.)

Alternativel…

Core Pathways of Aging

So, DNA methylation. This is another area where the things-people-typically-say seem to be completely wrong. I had also heard that methylation was long-lived (making it a natural candidate for a root cause of aging), but at one point I looked for experimental evidence on the turnover time of epigenetic methyl groups. And it turns out that most methyl groups turn over on a timescale of ~weeks. The mechanism is enzymatic - i.e. there are enzymes constantly removing and replacing epigenetic methyl groups, so they're in equilibrium.

I'm glad this came up, in hi…

dkirmani (20d): Wow, I had no idea that methylation was that impermanent, thank you for the belief update. I guess that leaves upregulation (via acetylation?) of transposon-suppressing RNA, extending lifespan by varying expression of other genes that alter chromatin structure to be more transposon-hostile [https://pubmed.ncbi.nlm.nih.gov/27621458/], or, as this comment [https://www.lesswrong.com/posts/ui6mDLdqXkaXiDMJ5/core-pathways-of-aging?commentId=kpthr3SCKGnXjTiHC] says, using CRISPR/Cas9 to incapacitate transposons. I wonder if anyone has done/will soon do an experiment like this in mammals.
Core Pathways of Aging

Nice comment!

A couple minor expansions on this (you might know these already, but I want to make sure it's clear to everyone else):

  • siRNAs and piRNAs don't quite make babies have fewer transposons than their parents. The babies have the same number of transposons as the parents' egg/sperm. The piRNA/siRNA activity in the egg/sperm is just higher than in other (somatic) cells to make extra sure that the transposons don't copy before the genome is passed on.
  • There is a little more to it than just injecting RNAs. The RNAs would have to get into at least the cel…
awenonian (16d): I'm still confused. My biology knowledge is probably lacking, so maybe that's why, but I had a similar thought to dkirmani after reading this: "Why are children born young?"

Given that sperm cells are active cells (which should give transposons opportunity to copy), why do they not produce children with larger transposon counts? I would expect whatever sperm divide from to have the same accumulation of transposons that causes problems in the divisions of stem cells. Unless piRNA and siRNA are 100% effective at their jobs, and nothing is explicitly removing transposons in sperm/eggs better than in the rest of the body, then surely there should be at least a small amount of accumulation of transposons across generations. Is this something we see?

I vaguely remember that women are born with all the egg cells they'll have, so, if that's true, then maybe that offers a partial explanation (only half the child genome should be as infected with transposons?). I'm not sure it holds water, because since egg cells are still alive, even if they aren't dividing more, they should present opportunities for transposons to multiply. Another possible explanation I thought of was that, in order to be as close to 100% as possible, piRNA and siRNA work more than normal in the gonads, which does hurt the efficacy of sperm, but because you only need 1 to work, that's ok. Still, unless it is actually 100%, there should be that generational accumulation.

This isn't even just about transposons. It feels like any theory of aging would have to contend with why sperm and eggs aren't old when they make a child, so I'm not sure what I'm missing.
dkirmani (20d): Thanks! I changed "transposons" to "active transposons" to be more accurate. Much of my knowledge in this domain comes from a genetics course I took in the 10th grade, so it's not super comprehensive.

My understanding was that methylated DNA stayed methylated (silenced), and methyltransferases [https://en.wikipedia.org/wiki/Methyltransferase] made sure that copies of the methylated DNA sequences were also themselves methylated. If all transposons in a cell were methylated by piRNAs and siRNAs, wouldn't all descendants of the cell also have methylated transposons, making those transposons effectively removed? (Of course, that assumes that methyltransferases and transposon-suppressing RNAs have 100% success rates, which I'm sure they don't. This would explain why babies have a few active transposons, but not nearly as many as their parents.)

This paper [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4991928/] asserts that piRNAs both methylate transposons and also cleave the RNA transcripts of transposons in a cell's cytoplasm, and that doing so guards the germline against transposons. Cleaving the transcripts of transposons would repress transposon replication in the short term, but, as I understand it, methylation of transposons would silence them in the long term, including in daughter cells. Therefore, even if there's a one-time transposon-methylating event (as opposed to a permanent epigenetic upregulation in transposon-suppression mechanisms, which seems to be a promising idea as well), the number of active transposons in the genome should still be reduced, pushing the growth trajectory of transposons backward.
The Fusion Power Generator Scenario

One way in which the analogy breaks down: in the lever case, we have two levers right next to each other, and each does something we want - it's just easy to confuse the levers. A better analogy for AI might be: many levers and switches and dials have to be set to get the behavior we want, and mistakes in some of them matter while others don't, and we don't know which ones matter when. And sometimes people will figure out that a particular combination extends the flaps, so they'll say "do this to extend the flaps", except that when some other switch has th…

Another RadVac Testing Update

Yeah, that had occurred to me too. If people have other suggestions for what to snort, I'm open to ideas. Though my priors are still pretty strong here - I've accidentally snorted enough things to know that most things don't induce significant congestion.

Toward A Bayesian Theory Of Willpower

Expanding a bit on this correspondence: I think a key idea Scott is missing in the post is that a lot of things are mathematically identical to "agents", "markets", etc. These are not exclusive categories, such that e.g. the brain using an internal market means it's not using Bayes' rule. Internal markets are a way to implement things like (Bayesian) maximum a-posteriori estimates; they're a very general algorithmic technique, often found in the guise of Lagrange multipliers (historically called "shadow prices" for good reason) or intermediates in backpropagation. Similar considerations apply to "agents".

DanielFilan (22d): See also the correspondence between prediction markets of Kelly bettors and Bayesian updating [https://shlegeris.com/2018/04/11/kelly.html].
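
A minimal simulation of that correspondence (the coin, the beliefs, and the trader count are toy assumptions of mine): in a market of Kelly bettors, relative wealths end up tracking the Bayesian posterior over the traders' models, with the market price acting as the posterior predictive.

```python
import numpy as np

rng = np.random.default_rng(0)
beliefs = np.array([0.2, 0.5, 0.8])  # each trader's model of P(heads)
wealth = np.array([1.0, 1.0, 1.0])   # equal wealth = uniform prior over models
true_p = 0.8

for _ in range(100):
    heads = rng.random() < true_p
    # Market price of a "heads" share = wealth-weighted average belief.
    price = np.sum(wealth * beliefs) / np.sum(wealth)
    # A Kelly bettor with belief p puts fraction p of wealth on heads-shares
    # and 1-p on tails-shares, so wealth multiplies by belief/price on the
    # realized outcome.
    wealth = wealth * (beliefs / price if heads
                       else (1 - beliefs) / (1 - price))

# Relative wealth is proportional to prior * likelihood: exactly the
# Bayesian posterior over the three models.
print(wealth / wealth.sum())
print(np.sum(wealth * beliefs) / np.sum(wealth))  # ~posterior predictive
```
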
Making Vaccine

I've been putting off answering this, because a proper answer would require diving into a lot of disparate evidence for some background models. But developments over the past few weeks have provided more direct data on the key assumption, so I can now point directly to that evidence rather than going through all the different pieces of weaker prior information.

The key assumption underlying my belief (and, presumably, most other people's belief) in the efficacy of the commercial vaccines is that the data from the clinical trials is basically true and repres…

Another RadVac Testing Update

I would do a placebo control too, just to make sure.

My prior that snorting DI water would do nothing was pretty strong, but I had intended to test it anyway, so thanks for the reminder.

I snorted some DI water last night, in the same manner that I snorted vaccine/peptides. With the vaccine/peptides, I pretty consistently woke up congested the next morning, and blew my nose every few minutes throughout the day. None of that has happened with the DI water - it's just been a normal day so far, in terms of congestion.

jmh (21d): I'm not sure DI water would be a suitable "placebo" here. Perhaps a placebo effect is not even what is occurring. Previously you were inhaling something with small particles - a bit like what happens every spring with pollen. Perhaps a test with some other inert matter that might not be able to invade your body, much less produce some type of chemical reaction with the cells or cellular processes?
Another RadVac Testing Update

I could prep both some peptides and a control and have my girlfriend randomly pick one, although the peptides do have a detectable scent to them, so I don't think this would be enough to actually blind me.

ejacob (23d): You could add some other scented ingredient to both peptide and control solutions. Rosewater would be a pleasant option. I wouldn't expect this to interfere with any immune responses too much, but you should do some research to check if you decide to try this.