All of Quinn's Comments + Replies

Quinn's Shortform

Positive and negative longtermism

I'm not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.

In this shortform, I'm going to take a polarity approach. I'm going to bring each pole to its extreme, probably beyond positions that are actually held, because I think median longtermism, or the longtermism described in the Precipice, is a kind of average of the two.

Negative longtermism is saying "let's not let some bad stuff happen", namely extinction. It wants to preserve. If nothing gets better for the poor or the an... (read more)

Quinn's Shortform

I should be more careful not to imply that I think we have solid specimens of computational reward functions; it's more that I think they're a theoretically important region of the space of possible minds, and might factor into idealizations of agency

Quinn's Shortform

I've had a background assumption in my interpretation of and beliefs about reward functions for as long as I can remember (i.e., since first reading the Sequences), one that I suddenly realized isn't written down anywhere. Over the last two years I've gained enough experience writing Coq to inspire a convenient way of framing it.

Computational vs axiomatic reward functions

Computational vs axiomatic in proof engineering

A proof engineer calls a proposition computational if its proof can be broken down into parts.

For example, a + (b + c) = (a + b) + c i... (read more)
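As a sketch of what "computational" means here (my reconstruction in Coq, assuming the post's associativity example over the naturals; not text from the post): the lemma is proved by induction, and any closed instance of it then holds by computation alone.

```coq
(* Associativity of addition on nat: provable by induction on the
   first argument, with each case closed by computation. *)
Lemma add_assoc : forall a b c : nat, a + (b + c) = (a + b) + c.
Proof.
  intros a b c. induction a as [| a' IH].
  - reflexivity.                     (* 0 + (b + c) reduces to b + c *)
  - simpl. rewrite IH. reflexivity.  (* peel one S, apply the hypothesis *)
Qed.

(* A closed instance holds definitionally -- both sides compute to 6. *)
Example instance_holds : 1 + (2 + 3) = (1 + 2) + 3 := eq_refl.
```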

Discussion with Eliezer Yudkowsky on AGI interventions

I'm uncertain about what abstractions/models are best to use for reasoning about AI. Coq has a number of options, which might imply different things about provability of different propositions and economic viability of those proofs getting anywhere near prod. Ought a type theorist or proof engineer concern themselves with IEEE754 edge cases?

4Vanessa Kosoy24dWe are not at the stage of formal verification, we are merely at the stage of constructing a theory from which we might be able to derive verifiable specifications. Once we have such specifications we can start discussing the engineering problem of formally verifying them. But the latter is just the icing on the cake, so to speak.
In software engineering, what are the upper limits of Language-Based Security?

it looks less like "move everyone to Haskell and Coq" and more like "design a good core-library crypto API for X" and "reform common practices around npm".

+1, coq is my dayjob and I'm constantly telling people to be less messianic when they talk about formal verification.

by designing the APIs so that doing things the safe way is easy, and doing things the dangerous way requires calling functions with "dangerous" in the name

+1, want to extend it by paraphrasing the ancient folk wisdom that I think comes in flavors for both libraries and languages: "t... (read more)

Study Guide

Could you say more about the value proposition of chemistry?

The value prop of physics as I understand it, and I'm pretty behind on physics myself, is

  1. classical mechanics is the bare bones proof of concept for working with "the scientific method"
  2. included in 1. is a bare bones instance of predicting and modeling, which gives you firm ground for your feet when you're predicting and modeling things that aren't so straightforward.
  3. if you're a logic freak it trains you to be comfortable with the insufficiency of symbols, memes and jokes of mathematicians and
... (read more)
3particularuniversalbeing24dI don't think this is an accurate description of the cultural difference between physicists and mathematicians. Tiny respective minorities of die-hard instrumentalists and formalists aside, both fields agree that the symbols are just tools for talking about the relevant objects of study more clearly and concisely than natural language permits. Plenty of published math research is formally incorrect in an extremely strong sense, but no one cares as long as all the errors can be corrected trivially. In fact, an important aspect of what's often called "mathematical maturity" is the ability to make those corrections automatically and mostly unconsciously, instead of either falling into genuine sloppiness or getting hung up on every little "the the".

The real core difference is the obvious one. To zeroth order: physicists study physics, and mathematicians study math. To first order: physicists characterize phenomena which definitely exist, mathematicians characterize structures which may or may not. The universe definitely exists, it definitely has a structure, and any method which reliably makes correct predictions reflects a genuine aspect of that structure, whatever it might be. Put another way: physicists have an oracle for consistency.

Mathematicians don't have that option, because structures are the things they study. That's what makes them mathematicians, and not physicists. They can retreat to higher and higher orders, and study classes of theories of logics for ..., but the regress has to stop somewhere, and the place it stops has to stand on its own, because there's no outside model to bear witness to its consistency.

If all known calculations of the electron mass rely on some nonsensical step like "let d = 4 - epsilon where d is the dimensionality of spacetime", then this just means we haven't found the right structure yet. The electron mass is what it is, and the calculation is accurate or it isn't. But if all known "proofs" of a result rely on a nonsensic
4johnswentworth1moI roughly agree with that value prop for physics. I'd add that physics is the archetype of the sciences, and gets things right that haven't necessarily been made a legible part of "the scientific method" yet, so it's important to study physics to get an intuitive idea of science-done-right beyond what we already know how to explain well. (Gears-level models [https://www.lesswrong.com/tag/gears-level] are a good example here - physics is a good way to gain an intuition for "gears" and their importance, even if that's not explicitly brought to attention or made legible. Your point about how we use symbols and logic in physics is another good example.)

The main value proposition of 101-level chemistry is just to understand the basics of stoichiometry, reaction kinetics, and thermodynamics, especially in biological systems. Beyond that, chemistry is one of my dump stats, for good reason: more advanced chemistry (and materials science) tends to have a relatively narrow focus on particular domains, like polymers or ceramics or whatever, and doesn't offer much generalizable knowledge (as far as I can tell).
Study Guide

I agree that tooling based on Curry-Howard correspondences isn't great. We live in a somewhat primitive time. Maybe you're interested in my brief speculation about formal verification as a sort of dark horse in alignment, where I look at FV and the developer-ecosystem question, which lags behind the state-of-the-art theory question.

And, you mentioned proof and how it's disappointing that analysis is fodder for proofs, which I roughly agree with and roughly disagree with (analysis is nice because you can review and strengthen calculus while leveling up in logic. P... (read more)

Study Guide

Love this! You're really mild on programming language theory and functional programming. Any comments on the omission?

2johnswentworth1moI do mention that one should probably work with a LISP variant at some point, at a bare minimum. Being able to think in functional programming ways is definitely important, especially when you start dealing with things which blur the lines [https://www.lesswrong.com/posts/Xd9FLs4geRAWxkQPE/writing-causal-models-like-we-write-programs] between math notation and programming language. On PL theory and functional programming beyond the basics of LISP (and also the related topic of compilers), I mostly think the existing tools/theory just aren't that great and will likely look quite different in the future. That said, it's an area which goes a lot deeper than my knowledge, and my belief on the matter is weakly-held.
Quinn's Shortform

capabilities-prone research.

I come to you with a dollar I want to spend on AI. You can allocate p pennies to go to capabilities and 100-p pennies to go to alignment, but only if you know of a project that realizes that allocation. For example, we might think that GAN research sets p = 98 (providing 2 cents to alignment) while interpretability research sets p = 10 (providing 90 cents to alignment).

Is this remotely useful? This is a really rough model (you might think it's more of a venn diagram and that this model doesn't provide a way of reasoning about t... (read more)
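The toy model above can be sketched in a few lines (the project names and splits are just the examples from the post; the code itself is my illustration, not anything from the original):

```python
# Toy model: a marginal dollar spent on an AI project splits into
# p pennies to capabilities and 100 - p pennies to alignment.

def alignment_cents(p_capabilities: int) -> int:
    """Pennies of the dollar that go to alignment for a project with allocation p."""
    return 100 - p_capabilities

# Example allocations from the post.
example_projects = {
    "GAN research": 98,      # -> 2 cents to alignment
    "interpretability": 10,  # -> 90 cents to alignment
}

splits = {name: alignment_cents(p) for name, p in example_projects.items()}
print(splits)
```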

Saving Time

Is logical time at all like cost semantics? Also this -- the cost semantics of a programming language are sort of like "number of beta reductions" from lambda calculus.

Quinn's Shortform

New discord server dedicated to multi-multi delegation research

DM me for invite if you're at all interested in multipolar scenarios, cooperative AI, ARCHES, social applications & governance, computational social choice, heterogeneous takeoff, etc.

(side note I'm also working on figuring out what unipolar worlds and/or homogeneous takeoff worlds imply for MMD research).

What are some claims or opinions about multi-multi delegation you've seen in the memeplex that you think deserve scrutiny?

I think both sets of bullets (multi-multi (eco?)systems either replicating cooperation-etc-as-we-know-it or making new forms of cooperation etc) are important, I think I'll call them prosaic cooperation and nonprosaic cooperation, respectively, going forward. When I say "cooperation etc." I mean cooperation, coordination, competition, negotiation, compromise.

You've provided crisp scenarios, so thanks for that!

In some sense "most" of that work will presumably be done by AI systems, but doing the work ourselves may unlock those benefits much earlier.

But ... (read more)

What are some claims or opinions about multi-multi delegation you've seen in the memeplex that you think deserve scrutiny?

I wrote out the 2x2 grid you suggested in MS Paint: Connor's Geometry

I'm not sure I'm catching how multi-inner is game theory. Except that I think "GT is the mesa- of SCT" is an interesting, reasonable (to me) claim that is sort of blowing my mind as I contemplate it, so far.

What are some claims or opinions about multi-multi delegation you've seen in the memeplex that you think deserve scrutiny?

Thanks! Trust, compromise, and communication are all items in Dafoe et al. 2020, if you're interested in exploring. I agree that primitive forms of these issues are present in multi-single and single-multi; it's not clear to me whether we should think of solving these primitive forms and then solving some sort of extension to multi-multi, or of attacking problems that are unique to multi-multi directly. It's just not clear to me which of those better reflects the nature of what's going on.

Quinn's Shortform

No you're right. I think I'm updating toward thinking there's a region of nonprosaic short-timelines universes. Overall it still seems like that region is relatively much smaller than prosaic short-timelines and nonprosaic long-timelines, though.

Quinn's Shortform

That's totally fair, but I have a wild guess that the pipeline from google brain to google products is pretty nontrivial to traverse, and not wholly unlike the pipeline from arxiv to product.

Quinn's Shortform

I should've mentioned in OP but I was lowkey thinking upper bound on "short" would be 10 years.

I think developer ecosystems are incredibly slow (longer than ten years for a new PL to gain penetration, for instance). I guess under a singleton "one company drives TAI on its own" scenario this doesn't matter, because tooling tailored for a few teams internal to the same company is enough which can move faster than a proper developer ecosystem. But under a CAIS-like scenario there would need to be a mature developer ecosystem, so that there could be competition.

I feel like 7 years from AlexNet to the world of PyTorch, TPUs, tons of ML MOOCs, billion-parameter models, etc. is strong evidence against what you're saying, right? Or were deep neural nets already a big and hot and active ecosystem even before AlexNet, more than I realize? (I wasn't paying attention at the time.)

Moreover, even if not all the infrastructure of deep neural nets transfers to a new family of ML algorithms, much of it will. For example, the building up of people and money in ML, the building up of GPU / ASIC servers and the tools to use them... (read more)

Quinn's Shortform

nonprosaic ai will not be on short timelines

I think a property of my theory of change is that academic and commercial speed is a bottleneck. I recently realized that my mass assignment for timelines is synchronized with my mass assignment for the prosaic/nonprosaic axis. The basic idea: say a radical new paper that blows up and supplants the entire optimization literature gets pushed to the arXiv tomorrow, signaling the start of some paradigm that we would call nonprosaic. The lag time for academics and industry to figure out what's going on, fi... (read more)

2ChristianKl5moThe reasoning assumes that ideas are first generated in academia and don't arise inside of companies. With DeepMind outperforming the academic protein folding community when protein folding isn't even the main focus of DeepMind, I consider it plausible that new approaches arise within a company and only get released publicly when they are strong enough to have an effect. Even if there's a paper, most radical new papers get ignored by most people, and it might be that in the beginning only one company takes the idea seriously and doesn't talk about it publicly to keep a competitive edge.
2Steven Byrnes5moHow short is "short" for you? Like, AlexNet was 2012, DeepMind patented deep Q learning in 2014, the first TensorFlow release was 2015, the first PyTorch release was 2016, the first TPU was 2016, and by 2019 we had billion-parameter GPT-2 … So if you say "Short is ≤2 years", then yeah, I agree. If you say "Short is ≤8 years", I think I'd disagree, I think 8 years might be plenty for a non-prosaic approach. (I think there are a lot of people for whom AGI in 15-20 years still counts as "short timelines". Depends on who you're talking to, I guess.)
johnswentworth's Shortform

You might check out Donald Braben's view, it says "transformative research" (i.e. fundamental results that create new fields and industries) is critical for the survival of civilization. He does not worry that transformative results might end civilization.

Quinn's Shortform

Question your argument as your readers will - thoughts on chapter 10 of Craft of Research

Three predictable disagreements are

  • There are causes in addition to the one you claim
  • What about these counterexamples?
  • I don't define X as you do, to me X means...

There are roughly two kinds of queries readers will have about your argument

  1. intrinsic soundness - "challenging the clarity of a claim, relevance of reasons, or quality of evidence"
  2. extrinsic soundness - "different ways of framing the problem, evidence you've overlooked, or what others have written on t
... (read more)
Quinn's Shortform

Excellence and adequacy

I asked a friend whether I should TA for a codeschool called ${{codeschool}}.

You shouldn't hang around ${{codeschool}}. People at ${{codeschool}} are not pursuing excellence.

A hidden claim there that I would soak up the pursuit of non-excellence by proximity or osmosis isn't what's interesting (though I could see that turning out either way). What's interesting is the value of non-excellence, which I'll call adequacy.

${{codeschool}} in this case is effective and impactful at putting butts in seats at companies, and is thereby re... (read more)

2Viliam7moSeems to me that on the market there are very few jobs for the SICP types. The more meta something is, the less of it is needed. If you can design an interactive website, there are thousands of job opportunities for you, because thousands of companies want an interactive website, and somehow they are willing to pay for reinventing the wheel. If you can design a new programming language and write a compiler for it... well, it seems the world already has too many different programming languages, but sure there is a place for maybe a dozen more. The probability of success is very small even if you are a genius. The best opportunity for developers who think too meta is probably to design a new library for an already popular programming language, and hope it becomes popular. The question is how exactly you plan to get paid for that.

Probably another problem is that it requires intelligence to recognize intelligence, and it requires expertise to recognize expertise. The SICP type developer seems to most potential employers and most potential colleagues as... just another developer. The company does not see individual output, only team output; it does not matter that your part of the code does not contain bugs, if the project as a whole does. You cannot use solutions that are too abstract for your colleagues, or for your managers. Companies value replaceability, because it is less fragile and helps to keep developer salaries lower than they might be otherwise. (In theory, you could have a team full of SICP type developers, which would allow them to work smarter, and yet the company would feel safe. In practice, companies can't recognize this type and don't appreciate it, so this is not going to happen.)

Again, probably the best position for a SICP type developer in a company would be to develop some library that the rest of the company would use. That is, a subproject of a limited size that the developer can do alone, so they are not limited in the techniques they us
Quinn's Shortform

thoughts on chapter 9 of Craft of Research

Getting the easy things right shows respect for your readers and is the best training for dealing with the hard things.

If they don't believe the evidence, they'll reject the reasons and, with them, your claim.

We saw previously that claims ought to be supported with reasons, and reasons ought to be based on evidence. Now we will look closer at reasons and evidence.

Reasons must be in a clear, logical order. Atomically, readers need to buy each of your reasons, but compositionally they need to buy your logic. S... (read more)

Quinn's Shortform

Claims - thoughts on chapter eight of Craft of Research

Broadly, the two kinds of claims are conceptual and practical.

Conceptual claims ask readers not to ask, but to understand. The flavors of conceptual claim are as follows:

  • Claims of fact or existence
  • Claims of definition and classification
  • Claims of cause and consequence
  • Claims of evaluation or appraisal

There's essentially one flavor of practical claim

  • Claims of action or policy.

If you read between the lines, you might notice that a kind of claim of fact or cause/consequence is that a policy work... (read more)

2Viliam7moThis may be context-dependent. Different countries probably have different cultural norms. Norms may differ for higher-status and lower-status speakers. Humble speech may impress some people, but others may perceive it as a sign of weakness. Also, is your audience fellow scientists or are you writing a popular science book? (More hedging for the former, less hedging for the latter.)
Quinn's Shortform

Good arguments - notes on Craft of Research chapter 7

Arguments take place in 5 parts.

  1. Claim: What do you want me to believe?
  2. Reasons: Why should I agree?
  3. Evidence: How do you know? Can you back it up?
  4. Acknowledgment and Response: But what about ... ?
  5. Warrant: How does that follow?

This can be modeled as a conversation with readers, where the reader prompts the writer to take the next step on the list.

Claims ought to be supported with reasons. Reasons ought to be based on evidence. Arguments are recursive: a part of an argument is an acknowledgment of... (read more)

Quinn's Shortform

Sources - notes on Craft of Research chapters 5 and 6

Primary, secondary, and tertiary sources

Primary sources provide you with the "raw data" or evidence you will use to develop, test, and ultimately justify your hypothesis or claim. Secondary sources are books, articles, or reports that are based on primary sources and are intended for scholarly or professional audiences. Tertiary sources are books and articles that synthesize and report on secondary sources for general readers, such as textbooks, articles in encyclopedias, and articles in mass-circulat

... (read more)
Quinn's Shortform

Questions and Problems - thoughts on chapter 4 of Craft of Research

Last time we discussed the difference between information and a question or a problem, and I suggested that the novelty-satisfied mode of information presentation isn't as good as addressing actual questions or problems. In chapter 3, which I have not typed up thoughts about, a three-step procedure is introduced:

  1. Topic: "I am studying ..."
  2. Question: "... because I want to find out what/why/how ..."
  3. Significance: "... to help my reader understand ..." As we elaborate on the different k
... (read more)
Quinn's Shortform

The audience models of research - thoughts on Craft of Research chapter 2

Writers can't avoid creating some role for themselves and their readers, planned or not

Before considering the role you're creating for your reader, consider the role you're creating for yourself. Your broad options are the following

  1. I've found some new and interesting information - I have information for you
  2. I've found a solution to an important practical problem - I can help you fix a problem
  3. I've found an answer to an important question - I can help you understand somethi
... (read more)
2Michaël Trazzi7modone [https://www.stitcher.com/show/630264]! should be live in a few hours
[timeboxed exercise] write me your model of AI human-existential safety and the alignment problems in 15 minutes

Given that systems of software which learn can eventually bring about 'transformative' impact (defined as 'impact comparable to the industrial revolution'), the most important thing to work on is AI. Given that the open problems in learning software between now and its transformativity can be solved in a multitude of ways, some of those solutions will be more or less beneficial, less or more dangerous, meaning there's a lever that altruistic researchers can use to steer outcomes in these open problems. Given the difficulty of social dilemmas and coordinati... (read more)

Quinn's Shortform

There's a gap in my inside view of the problem: part of me thinks that capabilities progress, such as out-of-distribution robustness or the four tenets described in Open Problems in Cooperative AI, is necessary for AI to be transformative, i.e. a prereq of TAI, and another part of me thinks AI will be x-risky and unstable if it progresses along other aspects but not along the axis of those capabilities.

There's a geometry here: transformative / not transformative crossed with dangerous / not dangerous.

To have an inside view I must be able to adequately navigate between the quadrants with respect to outcomes, interventions, etc.

2Pattern7moIf something can learn fast enough, then its out-of-distribution performance won't matter as much. (OOD performance will still matter -- but it'll have less to learn where it's good, and more to learn where it's not.*) *Although generalization ability seems like the reason learning matters. So I see why it seems necessary for 'transformation'.
We need a career path for invention

You might like Scientific Freedom by Donald Braben. It's a whole book about the problem of developing incentives for basic research.

2jasoncrawford7moYup, I've read it, thanks!
Quinn's Shortform

notes (from a very jr researcher) on alignment training pipeline

Training for alignment research is one part competence (at math, cs, philosophy) and another part having an inside view / gears-level model of the actual problem. Competence can be outsourced to universities and independent study, but inside view / gears-level model of the actual problem requires community support.

A background assumption I'm working with is that training as a longtermist is not always synchronized with legible-to-academia training. It might be the case that jr researchers oug... (read more)

2ChristianKl7moI don't think Critch's saying that the best way to get his attention is through cold emails backed up by credentials. The whole post is about him not using that as a filter to decide who's worth his time but that people should create good technical writing to get attention.
Announcing the Technical AI Safety Podcast

Thanks for reaching out! Alex had passed onto me the note about transcripts, I hope to get to it (including the backlog of already released episodes) in the next few months.

Averting suffering with sentience throttlers (proposal)

Right, I feel like there's a tradeoff between interestingness of consciousness theory and the viability of computational predicates. IIT gives you a nice computer program, but isn't very interesting.

Could degoogling be a practice run for something more important?

I think the litmus test for the value of reducing dependency on a given product/technology is whether we think it's empowering or enfeebling. Consider arithmetic calculators: is it empowering to delegate boring stuff to subroutines freeing up your mind to do harder stuff, or is it enfeebling because it reduces incentive to learn to do mental arithmetic well? Dependence can be a problem in either case.

Each product needs to be assessed individually.

Open and Welcome Thread - April 2021

I'm trying to decide if i'm going to write up a thought about longtermism I had.

I think there are two schools of thought-- that the graph of a value function over time is continuous or discontinuous. The continuous school of thought suggests that you get near term evidence about long term consequences, and the discontinuous school of thought does not interpret local perturbation in this way at all.

I'm sure this is covered in one of the many posts about longtermism, and the language of continuous functions could either make it clearer or less clear depending on the audience.

1eigen8moI don't think there's enough written about long-termism. You have a reader here if you ever decide to write something. I wonder as to where between those two school of thoughts you fall.
Takeaways from the Intelligence Rising RPG

I can't post a complete ruleset, but I can add some insight -- each party had "stats" representing hard power, soft power, budget, that sort of thing. Each turn you could spend "talent" stats on research arbitrarily, and you could take two "actions", which were GM-mediated expenditures of things like soft power, budget, etc. The game board was a list of papers and products that could be unlocked; unlocking papers released new products onto the board.

Reading recommendations on social technology: looking for the third way between technocracy and populism

Isn't increasing the competence of the voter akin to increasing the competence of the official, by proxy? I'm pattern matching this to yet another push-pull compromise between the ends of the spectrum, with a strong lean toward technocracy's side.

I'm assuming I'll have to read Brennan for his response to the criticism that it was tried in the U.S. and made a lot of people very upset / is widely regarded as a bad move.

I agree with Gerald Monroe about the overall implementation problems even if you assume it wouldn't just be a proxy for race or class war (which ... (read more)

0vernamcipher9moImplementation problems are definitely a problem with Brennan's Knowledge Test To Vote idea and consist of two parts: (1) getting the present voters to agree to it (2) setting a test that is discriminatory in the right rather than the wrong ways. One would hope a good answer to (2) would help with (1), though convincing people to give up the vote would be very hard.

I have been thinking a fair bit lately about the content of a Voting Test. Presumably one would want tests of knowledge that are proxies for being what Brennan calls a Vulcan - an informed non-partisan voter who considers things like evidence - rather than a Hooligan - informed partisan - or Hobbit - uninformed and nonpartisan. Brennan's idea to test for basic knowledge about government is a good start - how does a bill become law, how do the different branches of government work, how much does your country spend on foreign aid as a percentage of government expenditures (the latter being something surveyed voters consistently and overwhelmingly get wrong).

I would add to such a test sections for basic probability, statistics, and economics as these are vital for understanding public policy issues. Anyone who thinks the difference between 2% annual GDP growth and 3% annual GDP growth is 1% has next to nothing to contribute to public discourse.
Scott and Rohin doublecrux on AI with human models

should i be subscribed to a particular youtube channel where these things get posted?

4Ben Pace9moWe occasionally post them to the LessWrong YouTube channel: https://www.youtube.com/channel/UCt7yNB7lIW89N0hvISKJMpA [https://www.youtube.com/channel/UCt7yNB7lIW89N0hvISKJMpA] But mostly the best place to check is the event post (like this one), where it will eventually be updated.
Anki decks by LW users

Quick Bayes Table, by alexvermeer. A simple deck of cards for internalizing conversions between percent, odds, and decibels of evidence.

link broken
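For reference, the conversions that deck drills can be sketched like this (my own illustration of the standard formulas, not content from the deck):

```python
import math

# Probability <-> odds <-> decibels of evidence (10 * log10 of the odds).

def prob_to_odds(p: float) -> float:
    """Probability to odds (for:against)."""
    return p / (1.0 - p)

def odds_to_db(odds: float) -> float:
    """Odds to decibels of evidence."""
    return 10.0 * math.log10(odds)

def db_to_prob(db: float) -> float:
    """Decibels of evidence back to a probability."""
    odds = 10.0 ** (db / 10.0)
    return odds / (1.0 + odds)

# 75% -> 3:1 odds -> ~4.77 dB; 0 dB is even odds, i.e. 50%.
```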

2Pablo9moThanks, but this post is no longer updated and the link is not broken on my website. (If you think that's confusing, despite the notice at the top, I may consider replacing its contents with just a link, though retaining the content may make it more discoverable.)
2TurnTrout10moThe spoiler seems to be empty?
Lessons I've Learned from Self-Teaching

Leverage the Pareto principle, get 80% of the benefit out of the key 20/30/40% of the concepts and exercises, and then move on.

This is hard to instrumentalize regarding difficulty. I find that the hardest exercises are likeliest to be skipped (after struggling with them for an hour or two), but it doesn't follow that I can expect the easier ones (which I happened to have completed) to lie in that key 20%.

2TurnTrout10moI like to randomly sample X% of the exercises, and read the rest; this lets me know later whether or not I missed something important. Simple rules like "do every fifth exercise" should suffice, with the rule tailored to the number of exercises in the book.
Quinn's Shortform

::: latex :::

Quinn's Shortform

:::what about this:::

:::hm? x :: Bool -> Int -> String :::

1Quinn1y::: latex Ax+1:={} :::
Quinn's Shortform

testing latex in spoiler tag

Testing code block in spoiler tag

Infodemics: with Jeremy Blackburn and Aviv Ovadya

7pm on Thursday the 14th for New York, 4pm in San Francisco

Announcing the Technical AI Safety Podcast

When I submitted to pocketcasts it said we were already on it :) https://pca.st/9froevor

Have general decomposers been formalized?

Thank you Abram. Yes, factored cognition is more what I had in mind. However, I think it's possible to speak of decomposition generally enough to say that PCA/SVD is a decomposer, albeit an incredibly parochial one that's not very useful to factored cognition.

Like, my read of IDA is that the distillation step is proposing a class of algorithms, and we may find that SVD was a member of that class all along.
