Note to self: the summary dimension seems basically constant as the neighborhood size increases. There's probably a generalization of Koopman-Pitman-Darmois which applies.
Well, it worked reasonably well with ER graphs (which naturally embed into quite high dimensions), so it should definitely work at least that well. (I expect it would work better.)
Oh I see. Yeah, if either X or Y is unidimensional, then any linear model is really boring. They need to be high-dimensional to do anything interesting.
All ways of modifying Y are equivalent only in a dense linear system. Sparsity (in a high-dimensional system) changes that. (That's a fairly central concept behind this whole project: sparsity is one of the main ingredients necessary for the natural abstraction hypothesis.)
The main thing I expect from optimization is that the system will be adjusted so that certain specific abstractions work - i.e. if the optimizer cares about f(X), then the system will be adjusted so that information about f(X) is available far away (i.e. wherever it needs to be used).
That's the view in which we think about abstractions in the optimized system. If we instead take our system to be the whole optimization process, then we expect many variables in many places to be adjusted to optimize the objective, which means the objective itself is likely a natural abstraction, since information about it is available all over the place. I don't have all the math worked out for that yet, though.
I've been thinking a fair bit about this question, though with a slightly different framing.
Let's start with a neighborhood X. Then we can pick a bunch of different far-away neighborhoods Y1,...,Yn, and each of them will contain some info about summary(X). But then we can flip it around: if we do a PCA on (X,Y1,...,Yn), then we should expect to see components corresponding to summary(X), since all the variables which contain that information will covary.
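As a quick sketch of what that flipped-around version looks like in practice (the generative setup and all the dimensions here are made up, purely to illustrate the covariance structure):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_far = 5000, 8

# Made-up generative setup: a 2-dimensional summary(X), visible both in X
# itself and (noisily) in each far-away neighborhood Y_i.
summary = rng.normal(size=(n_samples, 2))
X = np.hstack([summary @ rng.normal(size=(2, 10)),     # summary-carrying dims
               rng.normal(size=(n_samples, 10))])      # purely local noise
Ys = [summary @ rng.normal(size=(2, 5)) + rng.normal(size=(n_samples, 5))
      for _ in range(n_far)]

# PCA on the concatenation (X, Y1, ..., Yn): the summary directions should
# dominate, since they're the only thing which covaries across neighborhoods.
Z = np.hstack([X] + Ys)
Z = Z - Z.mean(axis=0)
s = np.linalg.svd(Z, compute_uv=False)
print(s**2 / np.sum(s**2))   # expect a visible gap after the first ~2 components
```

The point is just that the summary directions are the only thing shared across all the neighborhoods, so they show up as the top principal components.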
Switching back to your framing: if X itself is large enough to contain multiple far-apart... (read more)
I'd be curious to know whether something like this actually works in practice. It certainly shouldn't work all the time, since it's tackling the #P-hard part of the problem pretty directly, but if it works well in practice that would solve a lot of problems.
I ran this experiment about 2-3 weeks ago (after the chaos post but before the project intro). I pretty strongly expected this to work much earlier - e.g. back in December I gave a talk which had all the ideas in it.
Eh, yes and no. I'm definitely on board with selection pressure as a major piece in cancer. But for purposes of finding the root causes of aging, the key question is "why does cancer become more likely with aging?", and my current understanding is that selection pressures don't play a significant role there.
Once a cancer is already underway, selection pressure for drug resistance or differentiation or whatever is a big deal. But at a young age, the mutation rate is low and problematic cells are cleared before they have time to propagate anyway. Those defen... (read more)
Oh yeah, cruising Wikipedia lists is great. Definitely stumbled on multiple unknown unknowns that way. (Hauser's Law is one interesting example, which I first encountered on the list of eponymous laws.) Related: Exercises in Comprehensive Information Gathering.
Haven't finished reading this yet, but in reaction to the opening section... there's this claim that "One meal at a crowded restaurant is enough to give even a vaccinated person hundreds of microCovids". Which, regardless of whether it's true, sounds like it's probably the wrong way to think about things.
A core part of the microcovid model is that microcovids roughly add. You do a ten-microcovid activity, then a twenty-microcovid activity, and your total risk is roughly thirty microcovids. Put on a mask, and it cuts microcovids in half.
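For concreteness, here's why small risks roughly add (a minimal sketch; the ten/twenty numbers are just the ones from the example above):

```python
import math

# Small independent risks roughly add: 1 - prod(1 - p_i) ~= sum(p_i).
risks = [10e-6, 20e-6]   # a ten-microcovid activity, then a twenty-microcovid one
exact = 1 - math.prod(1 - p for p in risks)
approx = sum(risks)
print(f"exact:  {exact * 1e6:.4f} microCovids")   # ~29.9997
print(f"approx: {approx * 1e6:.4f} microCovids")  # 30.0000
```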
But with a vaccine, I'd ... (read more)
Oh, I mean that part of the point of the post is to talk about what relative advantages/disadvantages rationality should have, in principle, if we're doing it right - as opposed to whatever specific skills or strategies today's rationalist community happens to have stumbled on. It's about the relative advantages of the rationality practices which we hopefully converge to in the long run, not necessarily the rationality practices we have today.
Do you have specific skills of rationality in mind for that post?
No, which is part of the point. I do intend to start from the sequences-esque notion of the term (i.e. "rationality is systematized winning"), but I don't necessarily intend to point to the kinds of things LW-style "rationality" currently focuses on. Indeed, there are some things LW-style "rationality" currently focuses on which I do not think are particularly useful for systematized winning, or are at least overemphasized.
I actually just started drafting a post in this vein. I'm framing the question as "what are the relative advantages and disadvantages of explicit rationality?". It's a natural follow-up to problems we don't understand: absent practice in rationality and being agenty (whether we call it that or not), we'll most likely end up as cultural-adaptation-executors. That works well mainly for problems where cultural/economic selection pressures have already produced good strategies. Explicit rationality is potentially useful mainly when that's not the case - either... (read more)
If so, I'm really interested in the techniques you have in mind for starting from a complex mess/intuitions and getting to a formal problem/setting.
This deserves its own separate response.
At a high level, we can split this into two parts:
We've talked about the translation step a fair bit before (the conversation which led to this post). A core point of that post is that the translation from intuition to math should be faithful, and not inject any "extra assumptions" which weren't part of the intuition. So, for ... (read more)
I disagree with the claim that "identifying good subproblems of well-posed problems is a different skill from identifying good well-posed subproblems of a weird and not formalized problem", at least insofar as we're focused on problems for which current paradigms fail.
P vs NP is a good example here. How do you identify a good subproblem for P vs NP? I mean, lots of people have come up with subproblems in mathematically-straightforward ways, like the strong exponential time hypothesis or P/poly vs NP. But as far as we can tell so far, these are not very goo... (read more)
One aspect I feel is not emphasized enough here is the skill of finding/formulating good problems.
That's a good one to highlight. In general, there's a lot of different skills which I didn't highlight in the post (in many cases because I haven't even explicitly noticed them) which are still quite high-value.
The outside-view approach of having a variety of problems you don't really understand should still naturally encourage building those sorts of skills. In the case of problem formulation, working on a wide variety of problems you don't understand will na... (read more)
Actually building bridges and Actually preventing infections require not only improvements in applied science, but also human coordination. In the former we've improved; in the latter we've stagnated.
+1 to this. The difficulties we have today with bridges and infections do not seem to me to have anything to do with the physical systems involved; they are all about incentives and coordination problems.
(Yes, I know lots of doctors talk about how we're overly trigger-happy with antibiotics, but I'm pretty sure that hospital infection rates have a lot more to... (read more)
Oh wow, that's really neat. I doubt that it has any relevance to the aging mechanisms of multicellular organisms, but very cool in its own right. And definitely not transposon-mediated.
Chronicle of a parent who finally lost it when their child’s school closed for ten days due to two Covid cases… for a fifth time. New York City is finally changing that rule. It was even more absurd than it sounds...
I am confused about why this is bad. It seems like an all-around excellent rule: if you see cases and you know where they came from, then just shut down the places where they came from - i.e. affected classrooms and close contacts. If you see cases and you don't know where they came from, especially after explicitly attempting to trace, then th... (read more)
On the role of values: values clearly do play some role in determining which abstractions we use. An alien who observes Earth but does not care about anything on Earth's surface will likely not have a concept of trees, any more than an alien which has not observed Earth at all. Indifference has a similar effect to lack of data.
However, I expect that the space of abstractions is (approximately) discrete. A mind may use the tree-concept, or not use the tree-concept, but there is no natural abstraction arbitrarily-close-to-tree-but-not-the-same-as-tree. There... (read more)
Bacteria do age though; even seemingly-symmetrical divisions yield one “parent” bacterium that ages and dies.
Do you have a reference on that? I'm familiar with how it works with budding yeast, but I've never heard of anything like that in a prokaryote.
Also interested in helping on this - if there's modelling you'd want to outsource.
Here's one fairly-standalone project which I probably won't get to soon. It would be a fair bit of work, but also potentially very impressive in terms of both showing off technical skills and producing cool results.
Short somewhat-oversimplified version: take a finite-element model of some realistic objects. Backpropagate to compute the jacobian of final state variables with respect to initial state variables. Take a singular value decomposition of the jacobian. Hypothesis: th... (read more)
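To make that pipeline concrete, here's a minimal sketch on a toy system - a damped spring chain standing in for a real finite-element model, and finite differences standing in for backprop (all of it a made-up stand-in, not the actual project):

```python
import numpy as np

# Toy stand-in for a finite-element model: a damped spring chain (periodic
# boundary), stepped forward in time with semi-implicit Euler.
def simulate(x0, steps=200, dt=0.01, k=4.0, damp=0.1):
    n = x0.size // 2
    pos, vel = x0[:n].copy(), x0[n:].copy()
    for _ in range(steps):
        force = k * (np.roll(pos, 1) - 2 * pos + np.roll(pos, -1)) - damp * vel
        vel += dt * force
        pos += dt * vel
    return np.concatenate([pos, vel])

# Jacobian of final state w.r.t. initial state. (Backprop through the solver
# would be the scalable way to do this; finite differences suffice for a toy.)
def jacobian(f, x0, eps=1e-6):
    y0 = f(x0)
    J = np.empty((y0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        J[:, i] = (f(x0 + dx) - y0) / eps
    return J

x0 = np.random.default_rng(0).normal(size=40)   # 20 positions + 20 velocities
U, s, Vt = np.linalg.svd(jacobian(simulate, x0))
print(s[:10])   # hypothesis: a few large singular values, then a sharp drop
```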
Re: dual use, I do have some thoughts on exactly what sort of capabilities would potentially come out of this.
The really interesting possibility is that we end up able to precisely specify high-level human concepts - a real-life language of the birds. The specifications would correctly capture what-we-actually-mean, so they wouldn't be prone to goodhart. That would mean, for instance, being able to formally specify "strawberry on a plate" in a non-goodhartable way, so an AI optimizing for a strawberry on a plate would actually produce a strawberry on a plate... (read more)
Minimal cell experiments (making cells with as small a genome as possible) have already been done successfully. This presumably removes transposons, and I have not heard that such cells had abnormally long lifespans.
The minimal cell experiments were done with mycoplasma, which (as far as I know) does not age. More generally, as I understand it, most bacteria don't age, at least not in any sense similar to animals.
Also, I expect wild-type mycoplasma already had no transposons in its genome, since the organism evolved under very heavy evolutionary pressure for a small genome. (That's why it was chosen for the minimal cell experiments.)
The key idea here is the difference between "local" and "nonlocal" changes in a multistable system - moving around within one basin vs jumping to another one. The prototypical picture: a ball sitting in one well of a double-well landscape, with one well per stable state.
For your finger example, one basin would be with-finger, one basin without-finger. For small changes (including normal cell turnover) the system returns to its with-finger equilibrium state, without any permanent changes. In order to knock it into the other state, some large external "shock" has to push it - e.g. cutting off a finger. Once in the other state, it's there perman... (read more)
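A one-variable sketch of that picture (the standard double-well toy system, not a model of anything biological):

```python
# Minimal bistable system: dx/dt = x - x^3 has stable equilibria at +1
# ("with-finger") and -1 ("without-finger"), separated by an unstable one at 0.
def settle(x, shock=0.0, steps=10000, dt=0.01):
    x += shock                       # one-time external shock
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

print(settle(1.0, shock=-0.5))   # small perturbation: relaxes back to +1
print(settle(1.0, shock=-1.6))   # big shock crosses the barrier: settles at -1
```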
My understanding is that transposon repression mechanisms (like piRNAs) are dramatically upregulated in the germ line. They are already very close to 100% effective in most cells under normal conditions, and even more so in the germ line, so that most children do not have any more transposons than their parents.
(More generally, my understanding is that germ line cells have special stuff going to make sure that the genome is passed on with minimal errors. Non-germ cells are less "paranoid" about mutations.)
Once the rate is low enough, it's handled by natural selection, same as any other mutations.
A private message (from someone who may name themselves if they wish) asked about the claim that "the effects of senolytics rapidly wear off once the drug stops being administered".
This is a lower-confidence claim than most in this piece; I do not have a study on hand directly proving it. The vast majority of papers on senolytics either use regular administration (frequency ~1 per 2 weeks or faster) or do a short regimen and then measure results within ~2 weeks; the ubiquity of those practices is itself a significant piece of evidence here. If a single dos... (read more)
Re: picking up new tools, skill and practice in designing and building user interfaces, especially to complex or not-very-transparent systems, would be very high-leverage if the tool-adoption step is rate-limiting.
Relevant topic of a future post: some of the ideas from Risks From Learned Optimization or the Improved Good Regulator Theorem offer insights into building effective institutions and developing flexible problem-solving capacity.
Rough intuitive idea: intelligence/agency are about generalizable problem-solving capability. How do you incentivize generalizable problem-solving capability? Ask the system to solve a wide variety of problems, or a problem general enough to encompass a wide variety.
If you want an organization to act agenty, then a useful technique ... (read more)
In dynamical system terms, I'd call the MAF scenario a single bistable feedback loop with many redundant components. ("Redundant" in the sense that many component subsets are sufficient to support the bistable feedback loop.) The senescence feedback loop is an example of this: there's multiple components, and only a subset are needed to support the state change. For instance, either mitochondrial dysfunction or transposon activation would be sufficient to trigger the state change, and either one will cause the other once the state change is ... (read more)
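A toy sketch of that sort of redundancy (purely illustrative dynamics, not a biological model): two variables which each activate the other, so that pushing either one up flips the whole loop into the high state and drags the other along.

```python
# Bistable loop with redundant components: m and t each activate the other.
# Low fixed point at (0, 0); high fixed point near (2.62, 2.62).
def run(m, t, steps=20000, dt=0.001):
    for _ in range(steps):
        dm = -m + 3 * t**2 / (1 + t**2)
        dtt = -t + 3 * m**2 / (1 + m**2)
        m, t = m + dt * dm, t + dt * dtt
    return round(m, 2), round(t, 2)

print(run(0.1, 0.1))   # small perturbation: decays back to (0, 0)
print(run(1.5, 0.0))   # shock to m alone: both flip to the high state
print(run(0.0, 1.5))   # shock to t alone: same high state
```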
This post seems to me to be beating around the bush. There's several different classes of transparency methods evaluated by several different proxy criteria, but this is all sort of tangential to the thing which actually matters: we do not understand what "understanding a model" means, at least not in a sufficiently-robust-and-legible way to confidently put optimization pressure on it.
For transparency via inspection, the problem is that we don't know what kind of "understanding" is required to rule out bad behavior. We can notice that some low-level featur... (read more)
In terms of experimental endpoints, would this mainly just be an experiment to see how long the mice live? If so, that does seem like a high-upside experiment which even someone with relatively little domain knowledge could just go do. The main investment would be time - it would take at least a couple years of mouse-care, and hopefully longer.
If the project were undertaken by someone with more domain expertise, the main value-add (relative to the bare-minimum version of the experiment) would probably be in checking more endpoints, especially as a debuggin... (read more)
Suppose I have a state consisting of (a, b): number of apples, number of bananas. My utility is U_a(a) + U_b(b) - i.e. I have no terminal desire for money, and my utility for apples and bananas is separable (aka a sum, with each term dependent on only one of the two). I also have a budget constraint: P_a·a + P_b·b ≤ m, i.e. price of apples times number of apples plus price of bananas times number of bananas is at most m, the amount of money I start with.
A useful technique for this sort of problem is to separate it into two optimization problems... (read more)
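A minimal sketch of that decomposition (the concrete utility functions and prices here are made up for illustration): the inner problem is trivial once we fix how much money goes to each good, and the outer problem optimizes over the split.

```python
import numpy as np

# Hypothetical separable utilities (any increasing concave functions work):
Ua = lambda a: np.log(1 + a)        # utility from apples
Ub = lambda b: 2 * np.log(1 + b)    # utility from bananas
Pa, Pb, m = 2.0, 3.0, 10.0          # prices and total budget

# Inner problem: given money ma for apples, buy ma/Pa apples (utility is
# increasing, so spend it all). Outer problem: choose the split of m.
utility, ma = max(
    (Ua(ma / Pa) + Ub((m - ma) / Pb), ma)
    for ma in np.linspace(0, m, 10001)
)
print(f"spend {ma:.2f} on apples, {m - ma:.2f} on bananas; utility {utility:.4f}")
```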
I would love to see a study like that.
Transposon activity is indeed believed to be repressed in the gonads to a much greater extent than elsewhere. I've also seen a few papers talking about health problems in the children of old parents, though I don't know as much about that.
Yeah, I haven't read up on the topic in depth, but there's a few toolkits specifically intended for sequencing transposons. So it's probably not something which would require major breakthroughs at this point, but it does require specialized tools/knowledge, rather than just the standard sequencing toolkit.
This suggests an interesting way to test the theory. JCVI had their "minimal cell" a few years back: they took a bacterium with an already-pretty-small genome, stripped out everything they could while still maintaining viability, then synthesized a plasmid with all the genes and promoters but with the "junk" DNA between them either removed or randomized (to make sure there was no functionality hiding in there which they didn't know about), and grew the bacteria with the synthesized plasmid. More recently, they have a project to do something similar with yea... (read more)
Yup, exactly right. This would be the most direct possible test of the hypothesis.
Re: thymus, this study found that a mitochondrially-targeted antioxidant prevented thymic involution, so there is at least some evidence that thymic involution is caused by the same core pathways. Though the timing of thymic involution is pretty suspicious, when compared to the other core-pathway diseases.
My guess would be transposon suppression rather than evolving away all of the transposons - upregulating existing repression mechanisms would be easier than removing every single active transposon copy. Though I'd still be interested in the test - even if suppression is the main mechanism, I'd still be interested to see how the number of transposons in naked mole rats compares to that in other rodents. (It is a nontrivial test to run, though - DNA sequencing is particularly unreliable when it comes to transposons, because there are so many near-copies.)
Alternativel... (read more)
So, DNA methylation. This is another area where the things-people-typically-say seem to be completely wrong. I had also heard that methylation was long-lived (making it a natural candidate for a root cause of aging), but at one point I looked for experimental evidence on the turnover time of epigenetic methyl groups. And it turns out that most methyl groups turn over on a timescale of ~weeks. The mechanism is enzymatic - i.e. there are enzymes constantly removing and replacing epigenetic methyl groups, so they're in equilibrium.
I'm glad this came up, in hi... (read more)
A couple minor expansions on this (you might know these already, but I want to make sure it's clear to everyone else):
One way in which the analogy breaks down: in the lever case, we have two levers right next to each other, and each does something we want - it's just easy to confuse the levers. A better analogy for AI might be: many levers and switches and dials have to be set to get the behavior we want, and mistakes in some of them matter while others don't, and we don't know which ones matter when. And sometimes people will figure out that a particular combination extends the flaps, so they'll say "do this to extend the flaps", except that when some other switch has th... (read more)
Yeah, that had occurred to me too. If people have other suggestions for what to snort, I'm open to ideas. Though my priors are still pretty strong here - I've accidentally snorted enough things to know that most things don't induce significant congestion.
Expanding a bit on this correspondence: I think a key idea Scott is missing in the post is that a lot of things are mathematically identical to "agents", "markets", etc. These are not exclusive categories, such that e.g. the brain using an internal market means it's not using Bayes' rule. Internal markets are a way to implement things like (Bayesian) maximum a posteriori estimates; they're a very general algorithmic technique, often found in the guise of Lagrange multipliers (historically called "shadow prices" for good reason) or intermediates in backpropagation. Similar considerations apply to "agents".
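A concrete toy version of the shadow-price point (the specific utilities here are made up for illustration): in a constrained optimization, the Lagrange multiplier on the budget constraint is exactly the internal "price" of the constrained resource - the marginal value of relaxing the constraint.

```python
import math

# Maximize log(a) + log(b) subject to pa*a + pb*b = m. The first-order
# conditions give a = m/(2*pa), b = m/(2*pb), with multiplier lambda = 2/m.
def optimal_value(pa, pb, m):
    a, b = m / (2 * pa), m / (2 * pb)
    return math.log(a) + math.log(b)

pa, pb, m, dm = 1.0, 2.0, 10.0, 1e-6
lam_analytic = 2 / m   # the multiplier, from the first-order conditions
lam_numeric = (optimal_value(pa, pb, m + dm) - optimal_value(pa, pb, m)) / dm
print(lam_analytic, lam_numeric)   # both ~0.2: the internal price of a marginal dollar
```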
I've been putting off answering this, because a proper answer would require diving into a lot of disparate evidence for some background models. But developments over the past few weeks have provided more direct data on the key assumption, so I can now point directly to that evidence rather than going through all the different pieces of weaker prior information.
The key assumption underlying my belief (and, presumably, most other peoples' belief) in the efficacy of the commercial vaccines is that the data from the clinical trials is basically true and repres... (read more)
I would do a placebo control too, just to make sure.
My prior that snorting DI water would do nothing was pretty strong, but I had intended to test it anyway, so thanks for the reminder.
I snorted some DI water last night, in the same manner that I snorted vaccine/peptides. With the vaccine/peptides, I pretty consistently woke up congested the next morning, and blew my nose every few minutes throughout the day. None of that has happened with the DI water - it's just been a normal day so far, in terms of congestion.
I could prep both some peptides and a control and have my girlfriend randomly pick one, although the peptides do have a detectable scent to them, so I don't think this would be enough to actually blind me.