JenniferRM's Comments

Credibility of the CDC on SARS-CoV-2
I don't recommend the site to friends or family because I know posts like this always pop up and I don't want to expose people to this...

This is just basically correct! Good job! :-)

Arguably, most thoughts that most humans have are either original or good, but not both. People seriously attempting to have good, original, pragmatically relevant thoughts about nearly any topic normally just shoot themselves in the foot. This has been discussed ad nauseam.

This place is not good for cognitive children, and indeed it MIGHT not be good for ANYONE! It could be that "speech to persuade" is simply a cultural and biological adaptation of the brain which primarily exists to allow people to trick other people into giving them more resources, and the rest is just a spandrel at best.

It is admirable that you have restrained yourself from spreading links to this website to people you care about and you should continue this practice in the future. One experiment per family is probably more than enough.

--

HOWEVER, also, you should not try to regulate speech here so that it is safe for dumb people without the ability to calculate probabilities, detect irony, doubt things they read, or otherwise tolerate cognitive "ickiness" that may adhere to various ideas not normally explored or taught.

There is a possibility that original thinking is valuable, and it is possible that developing the capacity for such thinking through the consideration of complex topics is also valuable. This site presupposes the value of such cognitive experimentation, and then follows that impulse wherever it leads.

Regulating speech here to a level so low as to be "safe for anyone to be exposed to" would basically defeat the point of the site.

Credibility of the CDC on SARS-CoV-2

The word "cuarenta", in Spanish, means 40.

In English, if the word "quarantine" is applied to an infection-avoiding isolation period of either more or less than 40 days, that's arguably an abuse of linguistic tradition that reveals whoever says it to be in need of remedial education.

Maybe? *I* probably need remedial education, too! Very prestigious linguists have asserted here or there that linguistics is a descriptivist science, and so, from their very prestigious perspective, any use of language is as good as any other use of language...

Still, it does give one pause.

How many people in public health read or write Latin anymore? Maybe there are some things that people used to take so MUCH for granted that no one thought to spell them out? Like "40 day periods should last 40 days" is basically a tautology. Should THAT go into a medical book and become testable knowledge for doctors?

It would be scary for medical inferences based in the obvious literal meaning of words to be valid, so they are probably not valid. I'm sure everything is fine.

The LessWrong 2018 Review

I hunted your comment down here and upvoted it strongly.

I basically only write comments, and when I write "comments for the ages" that I feel proud of, I consider it a good sign if they (1) get many upvotes (especially votes that arrive after lots of competing sibling comments already exist) and (2) do not get any responses (except "Wow! Good! Thanks!" kind of stuff).

Looking at "first level comments" to worthwhile OPs according to a measure like this might provide some interesting and reasonably brief postscripts.

Applying the same basic measure to posts themselves: if an OP gets a large number of direct replies that are highly upvoted, that OP may not be dense with relatively useful and/or flawless content. (Though there are probably exceptions that could be detected by thoughtful curating... for example, if the OP is a request for ideas, then a lot of highly voted comments are kinda the point.)
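
If it helps, here is a toy sketch of the measure I have in mind. Everything in it (the function name, the weighting, the idea of counting "late" votes as an extra bonus) is a hypothetical illustration, not any actual LessWrong scoring rule:

```python
# Toy sketch of the "good comment" measure described above. All names
# and weights are hypothetical illustrations, not a real scoring rule.

def comment_quality_score(upvotes, late_upvotes, substantive_replies):
    """Score a first-level comment: reward votes (counting "late" votes,
    which arrived despite competing siblings, a second time as a bonus)
    and discount comments that attracted substantive replies, since
    those suggest the comment was incomplete or contentious."""
    base = upvotes + late_upvotes
    return base / (1 + substantive_replies)

# A highly upvoted, unanswered comment outscores a heavily debated one:
print(comment_quality_score(upvotes=30, late_upvotes=10, substantive_replies=0))  # 40.0
print(comment_quality_score(upvotes=30, late_upvotes=10, substantive_replies=4))  # 8.0
```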

The unexpected difficulty of comparing AlphaStar to humans

I think the abstract question of how to cognitively manage a "large action space" and "fog of war" is central here.

In some sense StarCraft could be seen as turn based, with each turn lasting for 1 microsecond, but this framing makes the action space of a beginning-to-end game *enormous*. Maybe not so enormous that a bigger data center couldn't fix it? In some sense, brute force can eventually solve ANY problem tractable to a known "vaguely O(N*log(N))" algorithm.
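
To make "enormous" a bit more concrete, here is a back-of-the-envelope sketch with totally made-up numbers (a 20-minute game, and a laughably generous lower bound of 2 possible actions per microsecond):

```python
import math

# Back-of-the-envelope: if StarCraft were treated as turn-based with one
# "turn" per microsecond, how big is the naive game tree? All numbers
# here are invented purely for illustration.

game_length_sec = 20 * 60             # a 20-minute game
turns = game_length_sec * 1_000_000   # one turn per microsecond
actions_per_turn = 2                  # an absurdly generous lower bound

# log10 of the number of distinct action sequences:
log10_sequences = turns * math.log10(actions_per_turn)
print(f"~10^{log10_sequences:.2e} possible games")  # ~10^(3.6e8): a number with ~361 million digits
```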

BUT facing "a limit that forces meta-cognition" is a key idea for "the reason to apply AI to an RTS next, as opposed to a turn based game."

If DeepMind solves it with "merely a bigger data center" then there is a sense in which maybe DeepMind has not yet found the kinds of algorithms that deal with "nebulosity" as an explicit part of the action space (and which are expected by numerous people (including me) to be widely useful in many domains).

(Tangent: The Portia spider is relevant here because it seems that its whole schtick is that it scans with its (limited, but far seeing) eyes, builds up a model of the world via an accumulation of glances, re-uses (limited) neurons to slowly imagine a route through that space, and then follows the route to sneak up on other (similarly limited, but less "meta-cognitive"?) spiders which are its prey.)

No matter how fast something can think or react, SOME game could hypothetically be invented that forces a finitely speedy mind to need action space compression and (maybe) even compression of compression choices. Also, the physical world itself appears to contain huge computational depths.

In some sense then, the "idea of an AI getting good *at an RTS*" is an attempt (which might have failed or might be poorly motivated) to point at issues related to cognitive compression and meta-cognition. There is an implied research strategy aimed at learning to use a pragmatically finite mind to productively work on a pragmatically infinite challenge.

The hunch is that maybe object level compression choices should always have the capacity to suggest not just a move IN THE GAME of doing certain things, but also a move IN THE MIND to re-parse the action space, compress it differently, and hope to bring a different (and more appropriate) set of "reflexes" to bear.

The idea of a game with "fog of war" helps support this research vision. Some actions are pointless for the game, but essential to ensuring the game is "being understood correctly" and game designers adding fog of war to a video game could be seen as an attempt to represent this possibly universally inevitable cognitive limitation in a concretely-ludic symbolic form.

If an AI is trained by programmers "to learn to play an RTS" but that AI doesn't seem to be learning lessons about meta-cognition or clock/calendar management, then it feels a little bit like the AI is not learning what we hoped it was supposed to learn from "an RTS".

This is why these points made by maximkazhenkov in a neighboring comment are central:

The agents on [the public game] ladder don't scout much and can't react accordingly. They don't tech switch midgame and some of them get utterly confused in ways a human wouldn't.

I think this is conceptually linked (through the idea of having strategic access to the compression strategy currently employed) to this thing you said:

...you can have a conversation with a starcraft player while he's playing. It will be clear the player is not paying you his full attention at particularly demanding moments, however... I considered using system 1 and 2 analogies, but because of certain reservations I have with the dichotomy... [that said] there is some deep strategical thinking being done at the instinctual level. This intelligence is just as real as system 2 intelligence and should not be dismissed as being merely reflexes.

In the story about metacognition, verbal powers seem to come up over and over.

I think a lot of people who think hard about this understand that "mere reflexes" are not mere (especially when deeply linked to a reasoning engine that has theories about reflexes).

Also, I think that human meta-cognitive processes might reveal themselves to some degree in the apparent fact that a verbal summary can be generated by a human *in parallel without disrupting the "reflexes" very much*... then sometimes there is a pause in the verbalization while a player concentrates on <something>, and then the verbalization resumes (possibly with a summary of the 'strategic meaning' of the actions that just occurred).

Arguably, to close the loop and make the system more like the general intelligence of a human, part of what should be happening is that any reasoning engine bolted onto the (constrained) reflex engine should be able to be queried by ML programmers to get advice about what kinds of "practice" or "training" needs to be attempted next.

The idea is that by *constraining* the "reflex engine" (to be INadequate for directly mastering the game) we might be forced to develop a reasoning engine for understanding the reflex engine and squeezing the most performance out of it in the face of constraints on what is known and how much time there is to correlate and integrate what is known.

A decent "reflexive reasoning engine" (ie a reasoning engine focused on reflexive engines) might be able to nudge the reflex engine (every 1-30 seconds or so?) to do things that allow the reflex engine to scout brand new maps or change tech trees or do whatever else "seems meta-cognitively important".

A good reasoning engine might be able to DESIGN new maps that would stress test a specific reflex repertoire that it thinks it is currently bad at.

A *great* reasoning engine might be able to predict in the first 30 seconds of a game that it is facing a "stronger player" (with a more relevant reflex engine for this game) such that it will probably lose the game for lack of "the right pre-computed way of thinking about the game".

A really FANTASTIC reflexive reasoning engine might even be able to notice a weaker opponent and then play a "teaching game" that shows that opponent a technique (a locally coherent part of the action space that is only sometimes relevant) that the opponent doesn't understand yet, in a way that might cause the opponent's own reflexive reasoning engine to understand its own weakness and be correctly motivated to practice a way to fix that weakness.

(Tangent: Recall the Portia spider tangent from above: it preys on other spiders with similar spider limits. One of the fears here is that all this metacognition, when it occurs in nature, is often deployed in service of competition, either with other members of the same species or else to catch prey. Giving these powers to software entities that ALREADY have better thinking hardware than humans in many ways... well... it certainly gives ME pause. Interesting to think about... but scary to imagine being deployed in the midst of WW3.)

It sounds, Mathias, like you understand a lot of the centrality and depth of "trained reflexes" intuitively from familiarity with BOTH StarCraft AND ML, and part of what I'm doing here is probably just restating large areas of agreement in a new way. Hopefully I am also pointing to other things that are relevant and unknown to some readers :-)

If what we really care about is proving that it can do long term thinking and planning in a game with a large action space and imperfect information, why choose StarCraft? Why not select something like Frozen Synapse where the only way to win is to fundamentally understand these concepts?

Personally, I did not know that Frozen Synapse existed before I read your comment here. I suspect a lot of people didn't... and also I suspect that part of using StarCraft was simply for its PR value as a beloved RTS classic with a thriving pro scene and deep emotional engagement by many people.

I'm going to go explore Frozen Synapse now. Thank you for calling my attention to it!

The Power to Demolish Bad Arguments
"...go ahead and tell me your causal model and I'll probably cook up an obvious example to satisfy myself in the first minute of your explanation."

I think maybe we agree... verbosely... with different emphasis? :-)

At least I think we could communicate reasonably well. I feel like the danger, if any, would arise from playing example ping pong and having the serious disagreements arise from how we "cook (instantiate?)" models into examples, and "uncook (generalize?)" examples into models.

When people just say what their model "actually is", I really like it.

When people only point to instances, I feel like the instances often under-determine the hypothetical underlying idea, and leave me still confused as to how to generate novel instances for myself that they would assent to as predictions consistent with the idea that they "meant to mean" with the instances.

Maybe: intensive theories > extensive theories?

The Power to Demolish Bad Arguments
I appreciate your high-quality comment.

I likewise appreciate your prompt and generous response :-)

I think I see how you imagine a hypothetical example of "no net health from insurance" might work as a filter that "passes" Hanson's claim.

In this case, I don't think your example works super well, and it might almost cause more problems than not?

Differences of detail in different people's examples might SUBTRACT from attention to key facts relevant to a larger claim because people might propose different examples that hint at different larger causal models.

Like, if I was going to give the strongest possible hypothetical example to illustrate the basic idea of "no net health from insurance" I'd offer something like:

EXAMPLE: Alice has some minor symptoms of something that would clear up by itself and because she has health insurance she visits a doctor. ("Doctor visits" is one of the few things that health insurance strongly and reliably causes in many people.) While there she gets a nosocomial infection that is antibiotic resistant, lowering her life expectancy. This is more common than many people think. Done.

This example is quite different from your example. In your example medical treatment is good, and the key difference is basically just "pre-pay" vs "post-pay".

(Also, neither of our examples covers the issue where many innovative medical treatments often lower mortality due to the disease they aim at while, somehow (accidentally?) RAISING all cause mortality...)

In my mind, the substantive big picture claim rests ultimately on the sum of many positive and negative factors, each of which arguably deserves "an example of its own". (One thing that raises my confidence quite a lot is hearing the person's own best argument AGAINST their own conclusion, and then hearing an adequate argument against that critique. I trust the winning mind quite a bit more when someone is of two minds.)

No example is going to JUSTIFIABLY convince me, and the LACK of an example for one or all of the important factors wouldn't prevent me from being justifiably convinced by other methods that don't route through "specific examples".

ALSO: For that matter, I DO NOT ACTUALLY KNOW if Robin Hanson is actually right about medical insurance's net results, in the past or now. I vaguely suspect that he is right, but I'm not strongly confident. Real answers might require studies that haven't been performed? In the meantime I have insurance because "what if I get sick?!" and because "don't be a weirdo".

---

I think my key crux here has something to do with the rhetorical standards and conversational norms that "should" apply to various conversations between different kinds of people.

I assumed that having examples "ready-to-hand" (or offered early in a written argument) was something that you would actually be strongly in favor of (and I'll offer a steelman in its defense below), but then you said:

I wouldn't insist that he has an example "ready to hand during debate"; it's okay if he says "if you want an example, here's where we can pull one up".

So for me it would ALSO BE OK to say "If you want an example I'm sorry. I can't think of one right now. As a rule, I don't think in terms of fictional stories. I put effort into thinking in terms of causal models and measurables and authors with axes to grind and bridging theories and studies that rule out causal models and what observations I'd expect from differently weighted ensembles of the models not yet ruled out... Maybe I can explain more of my current working causal model and tell you some authors that care about it, and you can look up their studies and try to find one from which you can invent stories if that helps you?"

If someone said that TO ME I would experience it as a sort of a rhetorical "fuck you"... but WHAT a fuck you! {/me kisses her fingers} Then I would pump them for author recommendations!

My personal goal is often just to find out how the OTHER person feels they do their best thinking, run that process under emulation if I can, and then try to ask good questions from inside their frames. If they have lots of examples there's a certain virtue to that... but I can think of other good signs of systematically productive thought.

---

If I was going to run "example based discussion" under emulation to try to help you understand my position, I would offer the example of John Hattie's "Visible Learning".

It is literally a meta-meta-analysis of education.

It spends the first two chapters just setting up the methodology and responding preemptively to quibbles that will predictably come when motivated thinkers (like classroom teachers that the theory says are teaching suboptimally) try to hear what Hattie has to say.

Chapter 3 finally lays out an abstract architecture of principles for good teaching, by talking about six relevant factors and connecting them all (very very abstractly and loosely) to: tight OODA loops (though not under that name) and Popperian epistemology (explicitly).

I'll fully grant that it can take me an hour to read 5 pages of this book, and I'm stopping a lot and trying to imagine what Hattie might be saying at each step. The key point for me is that he's not filling the book with examples, but with abstract empirically authoritative statistical claims about a complex and multi-faceted domain. It doesn't feel like bullshit, it feels like extremely condensed wisdom.

Because of academic citation norms, in some sense his claims ultimately ground out in studies that are arguably "nothing BUT examples"? He's trying to condense >800 meta-analyses that cover >50k actual studies that cover >1M observed children.

I could imagine you arguing that this proves how useful examples are, because his book is based on over a million examples, but he hasn't talked about an example ONCE so far. He talks about methods and subjectively observed tendencies in meta-analyses mostly, trying to prepare the reader with a schema in which later results can land.

Plausibly, anyone could follow Hattie's citations back to an interesting meta-analysis, look at its references, track back to a likely study, look in their methods section, and find their questionnaires, track back to the methods paper validating the questionnaire, then look in the supplementary materials to get specific questionnaire items... Then someone could create an imaginary kid in their head who answered that questionnaire some way (like in the study) and then imagine them getting the outcome (like in the study) and use that scenario as "the example"?

I'm not doing that as I read the book. I trust that I could do the above, "because scholarship" but I'm not doing it. When I ask myself why, it seems like it is because it would make reading the (valuable seeming) book EVEN SLOWER?

---

I keep looping back in my mind to the idea that a lot of this strongly depends on which people are talking and what kinds of communication norms are even relevant, and I'm trying to find a place where I think I strongly agree with "looking for examples"...

It makes sense to me that, if I were in the role of an angel investor, and someone wanted $200k from me, and offered 10% of their 2-month-old garage/hobby project, then asking for examples of various of their business claims would be a good way to move forward.

They might not be good at causal modeling, or good at stats, or good at scholarship, or super verbal, but if they have a "native faculty" for building stuff, and budgeting, and building things that are actually useful to actual people... then probably the KEY capacities would be detectable as a head full of examples to various key questions that could be strongly dispositive.

Like... a head full of enough good examples could be sufficient for a basically neurotypical person to build a valuable company, especially if (1) they were examples that addressed key tactical/strategic questions, and (2) no intervening bad examples were ALSO in their head?

(Like if they had terrible examples of startup governance running around in their heads, these might eventually interfere with important parts of being a functional founder down the road. Detecting the inability to give bad examples seems naively hard to me...)

As an investor, I'd be VERY interested in "pre-loaded ready-to-hand theories" that seem likely to actually work. Examples are kinda like "pre-loaded ready-to-hand theories"? Possession of these theories in this form would be a good sign in terms of the founder's readiness to execute very fast, which is a virtue in startups.

A LACK of ready-to-hand examples would suggest that even a good and feasible idea whose premises were "merely scientifically true" might not happen very fast if an angel funded it and the founder had to instantly start executing on it full time.

I would not be offended if you want to tap out. I feel like we haven't found a crux yet. I think examples and specificity are interesting and useful and important, but I merely have intuitions about why, roughly like "duh, of course you need data to train a model", not any high church formal theory with a fancy name that I can link to in wikipedia :-P

The Power to Demolish Bad Arguments

I have a strong appreciation for the general point that "specificity is sometimes really great", but I'm wondering if this point might miss the forest for the trees with some large portion of its actual audience?

If you buy that in some sense all debates are bravery debates, then the audience can matter a lot, and perhaps this point addresses central tendencies in "global English internet discourse" while failing to address central tendencies on LW?

There is a sense in which nearly all highly general statements are technically false, because they admit of at least some counter examples.

However, any such statement might still be useful in a structured argument of very high quality, perhaps as an illustration of a troubling central tendency, or a "lemma" in a multi-part probabilistic argument.

It might even be the case that the MEDIAN EXAMPLE of a real tendency is highly imperfect without that "demolishing" the point.

Suppose, for example, that someone has focused a lot on higher-level structural truths whose evidential basis was, say, a thorough exploration of many meta-analyses about a given subject.

"Mel the meta-meta-analyst" might be communicating summary claims that are important and generally true that "Sophia the specificity demander" might rhetorically "win against" in a way that does not structurally correspond to the central tendencies of the actual world.

Mel might know things about medical practice without ever having treated a patient or even talked to a single doctor or nurse. Mel might understand something about how classrooms work without being a teacher or ever having visited a classroom. Mel might know things about the behavior of congressional representatives without ever working as a congressional staffer. If forced to confabulate an exemplar patient, or an exemplar classroom, or an exemplar political representative, the details might be easy to challenge even as the claim about central tendencies remains correct.

Naively, I would think that for Mel to be justified in his claims (even WITHOUT having exemplars ready-to-hand during debate) Mel might need to be moderately scrupulous in his collection of meta-analytic data, and know enough about statistics to include and exclude studies or meta-analyses in appropriately weighted ways. Perhaps he would also need to be good at assessing the character of authors and scientists to be able to predict which ones are outright faking their data, or using incredibly sloppy data collection?

The core point here is that Sophia might not be led to the truth SIMPLY by demanding specificity without regard to the nature of the claims of her interlocutor.

If Sophia thinks this tactic gives her "the POWER to DEMOLISH arguments" in full generality, that might not actually be true, and it might even lower the quality of her beliefs over time, especially if she mostly converses with smart people (worth learning from, in their area(s) of expertise) rather than idiots (nearly all of whose claims might perhaps be worth demolishing on average).

It is totally possible that some people are just confused and wrong (as, indeed, many people seem to be, on many topics... which is OK because ignorance is the default and there is more information in the world now than any human can integrate within a lifetime of study). In that case, demanding specificity to demolish confused and wrong arguments might genuinely and helpfully debug many low quality abstract claims.

However, I think there's a lot to be said for first asking someone about the positive rigorous basis of any new claim, to see if the person who brought it up can articulate a constructive epistemic strategy.

If they have a constructive epistemic strategy that doesn't rely on personal knowledge of specific details, that would be reasonable, because I think such things ARE possible.

A culturally local example might be Hanson's general claim that medical insurance coverage does not appear to "cause health" on average. No single vivid patient generates this result. Vivid stories do exist here, but they don't adequately justify the broader claim. Rather, the substantiation arises from tallying many outcomes in a variety of circumstances and empirically noticing relations between circumstances and tallies.
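
To gesture at what that tallying involves, here is a toy simulation (with invented effect sizes, not Hanson's actual data) where insurance causes both real benefits and real harms that happen to cancel on average, so that no individual patient story reveals the net effect:

```python
import random

# Toy illustration: the "no net health effect" claim is grounded by
# tallying outcomes across many people, not by any single vivid case.
# All effect sizes below are invented for illustration.

random.seed(0)

def simulate_patient(insured):
    health = random.gauss(0, 1)  # baseline health, arbitrary units
    if insured:
        health += 0.2            # hypothetical benefit: more care when needed
        health -= 0.2            # hypothetical harm: nosocomial exposure, etc.
    return health

insured = [simulate_patient(True) for _ in range(100_000)]
uninsured = [simulate_patient(False) for _ in range(100_000)]

print(f"insured mean:   {sum(insured) / len(insured):+.3f}")
print(f"uninsured mean: {sum(uninsured) / len(uninsured):+.3f}")
# Both means come out near zero: no detectable net effect, even though
# individual patients were genuinely helped and genuinely harmed.
```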

If I was asked to offer a single specific positive example of "general arguments being worthwhile" I might nominate Visible Learning by John Hattie as a fascinating and extremely abstract synthesis of >1M students participating in >50k studies of K-12 learning. In this case a core claim of the book is that mindless teaching happens sometimes, nearly all mindful attempts to improve things work a bit, and very rarely a large number of things "go right" and unusually large effect sizes can be observed. I've never seen one of these ideal classrooms I think, but the arguments that they have a collection of general characteristics seem solid so far.

Maybe I'll change my mind by the end? I'm still in progress on this particular book, which makes it sort of "top of mind" for me, but the lack of specifics in the book present a readability challenge rather than an epistemic challenge ;-P

The book Made to Stick, by contrast, uses Stories that are Simple, Surprising, Emotional, Concrete, and Credible to argue that the best way to convince people of something is to tell them Stories that are Simple, Surprising, Emotional, Concrete, and Credible.

As near as I can tell, Made to Stick describes how to convince people of things whether or not the thing is true, which means that if these techniques work (and can in fact cause many false ideas to spread through speech communities with low epistemic hygiene, which the book arguably did not really "establish") then a useful epistemic heuristic might be to give a small evidential PENALTY to all claims illustrated merely via vivid example.
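
If I tried to make that penalty concrete, it might look like an odds-form Bayes update where story-only evidence gets its likelihood ratio shaded downward. The penalty factor here is invented purely for illustration:

```python
# Toy version of the heuristic: discount the evidential weight of a
# claim supported only by a vivid story, since Made-to-Stick-style
# techniques spread claims whether or not they are true. The 0.5
# penalty factor is invented for illustration.

def posterior_odds(prior_odds, likelihood_ratio, vivid_only=False, penalty=0.5):
    """Odds-form Bayes update, optionally shading story-only evidence."""
    if vivid_only:
        likelihood_ratio *= penalty  # treat the vivid story as weaker evidence
    return prior_odds * likelihood_ratio

print(posterior_odds(1.0, 3.0))                   # 3.0 (ordinary evidence)
print(posterior_odds(1.0, 3.0, vivid_only=True))  # 1.5 (story-only evidence)
```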

I guess one thing I would like to say here at the end is that I mean this comment in a positive spirit. I upvoted this article and the previous one, and if the rest of the sequence has similar quality I will upvote those as well.

I'm generally IN FAVOR of writing imperfect things and then unpacking and discussing them. This is a better than median post in my opinion, and deserved discussion, rather than deserving to be ignored :-)

Unconscious Economics

David Friedman is awesome. I came to the comments to give a different Friedman explanation for one generator of economic rationality from a different Friedman book than "strangepoop" did :-)

In "Law's Order" (which sort of explores how laws that ignore incentives or produce bad incentives tend to be predictably suboptimal) Friedman points out that much of how people decide what to do is based on people finding someone who seems to be "winning" at something and copy them.

(This take is sort of friendly to your "selectionist #3" option but explored in more detail, and applied in more contexts than to simply explain "bad things".)

Friedman doesn't use the term "mimesis", but this is an extremely long-lived academic keyword with many people who have embellished and refined related theories. For example, Peter Thiel has a mild obsession with René Girard, who was obsessed with a specific theory of mimesis and how it causes human communities to work in predictable ways. If you want the extremely pragmatic layman's version of the basic mimetic theory, it is simply "monkey see, monkey do" :-P

If you adopt mimesis as THE core process which causes human rationality (which it might well not be, but it is interesting to think of a generator of pragmatically correct beliefs in isolation, to see what its weaknesses are and then look for those weaknesses as signatures of the generator in action), it predicts that no new things in the human behavioral range become seriously optimized in a widespread way until AFTER at least one round (maybe many) of behavioral mimetic selection on less optimized random human behavioral exploration, where an audience can watch who succeeds and who fails and copy the winners over and over.
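
Here is a minimal "monkey see, monkey do" sketch of that dynamic (with invented behaviors and payoffs, and no claim that this is how Friedman or Girard would formalize it):

```python
import random

# Minimal mimetic selection: agents never reason about payoffs, they
# just copy anyone who seems to be "winning", yet behavior still gets
# optimized over generations. Behaviors and payoffs are invented.

random.seed(1)
PAYOFFS = {"farm_badly": 1.0, "farm_well": 3.0, "burgle": 5.0}

population = [random.choice(list(PAYOFFS)) for _ in range(1000)]

for generation in range(20):
    new_population = []
    for my_behavior in population:
        role_model = random.choice(population)  # observe one random person
        if PAYOFFS[role_model] > PAYOFFS[my_behavior]:
            new_population.append(role_model)   # copy the visible winner
        else:
            new_population.append(my_behavior)
    population = new_population

# The highest-payoff behavior comes to dominate, though no agent ever
# looked up a single number:
print({behavior: population.count(behavior) for behavior in PAYOFFS})
```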

The very strong form of this theory (that it is the ONLY thing) is quite bleak and probably false in general, however some locally applied "strong mimesis" theories might be accurate descriptions of how SOME humans select from among various options in SOME parts of real life where optimized behavior is seen but hard to mechanistically explain in other ways.

Friedman pretty much needed to bring up a form of "economic rationality" in his book because a common debating point in modern times is that incentives have nothing to do with, for example, criminal law, because criminals are mostly not very book smart, and often haven't even looked up (much less remembered) the number of years of punishment that any given crime might carry, and so "can't be affected by such numbers".

(Note the contrast to LW's standard inspirational theorizing about a theoretically derived life plan... around here actively encouraging people to look up numbers before making major life decisions is common.)

Friedman's larger point is that, for example, if burglary is profitable (perhaps punished by a $50 fine, even when the burglar has already sold their loot for $1500), then a child who has an uncle who has figured out this weird/rare trick and makes a living burgling homes will see an uncle who is rich and has a nice life and gives lavish presents at Christmas and donates a lot to the church and is friends with the pastor... That kid will be likely to mimic that uncle without looking up any laws or anything.

Over a long period of time (assuming no change to the laws) the same dynamic in the minds of many children could lead to perhaps 5% of the economy becoming semi-respected burglars, though it would be easy to imagine that another 30% of the private economy would end up focused on mitigating the harms caused by burglary to burglary victims?

(Friedman does not apply the mimesis model to financial crimes, or risky banking practices. However that's definitely something this theory of behavioral causation leads me to think about. Also, advertising seems to me like it might be a situation where harming random strangers in a specific way counts as technically legal, where the perpetration and harm mitigation of the act have both become huge parts of our economy.)

This theory probably under-determines the precise punishments that should be applied for a given crime, but as a heuristic it probably helps constrain punishment sizes to avoid punishments that are hilariously too small. It suggests that any punishment is too small which allows a "viable life strategy" of committing a crime over and over and then treating the punishment as a mere cost of business.
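
As a toy formalization of that "cost of business" test, using the hypothetical loot and fine numbers from the burglary example above:

```python
# Minimal expected-value check: a repeated crime is a viable "life
# strategy" when its expected payoff per attempt is positive. Numbers
# are the hypothetical ones from the burglary example above.

def crime_is_viable(loot, fine, p_caught):
    expected_payoff = loot - p_caught * fine
    return expected_payoff > 0

# $1500 of loot vs a $50 fine: viable even if caught every single time.
print(crime_is_viable(loot=1500, fine=50, p_caught=1.0))     # True
# Deterrence requires p_caught * fine to exceed the loot:
print(crime_is_viable(loot=1500, fine=20000, p_caught=0.1))  # False
```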

If you sent burglars to prison for "life without parole" on first offenses, mimesis theory predicts that it would put an end to burglary within a generation or four, but the costs of such a policy might well be higher than the benefits.

(Also, as Friedman himself pointed out over and over in various ways, incentives matter! If, hypothetically, burglary and murder are BOTH punished with "life without parole on first offense" AND murdering someone makes you less likely to be caught as a burglar, then "murder plus burglary" might become the mimetically viable pair of crimes even when burglary alone is not viable... If someone was trying to use data science to tune all the punishments to suppress anti-social mimesis, they should really be tuning ALL the punishments and keeping careful and accurate track of the social costs of every anti-social act as part of the larger model.)

In reality, it does seem to me that mimesis is a BIG source of valid and useful rationality for getting along in life, especially for humans who never enter Piaget's "Stage 4" and start applying formal operational reasoning to some things. It works "good enough" a lot of the time that I could imagine it being a core part of any organism's epistemic repertoire?

Indeed, entire cultures seem to exist where the bulk of humans lack formal operational reasoning. For example, anthropologists who study such things often find that traditional farmers (which was basically ALL farmers, prior to the enlightenment) with very clever farming practices don't actually know how or why their farming practices work. They just "do what everyone has always done", and it basically works...

One keyword that offers another path here is one Piaget himself coined: "genetic epistemology". This wasn't meant in the sense of DNA, but rather in the sense of "generative", like "where and how is knowledge generated". I think stage 4 reasoning might be one real kind of generator (see: science and technology), but I think it is not anything like the most common generator, neither among humans nor among other animals.

Transhumanists Don't Need Special Dispositions

I can see two senses for what you might be saying...

I agree with one of them (see the end of my response), but I suspect you intend the other:

First, it seems clear to me that the value of a philosophy early on is a speculative thing, highly abstract, oriented towards the future, and latent in the literal expected value of the actions and results the philosophy suggests and envisions.

However, eventually, the actual results of actual people whose hands were moved by brains that contain the philosophy can be valued directly.

Basically, the value of the results of a plan or philosophy screens off the early expected value of the plan or philosophy... not entirely (because it might have been "the right play, given the visible cards" with the deal revealing low probability outcomes). However, bad results provide at least some Bayesian evidence of bad ideas without bringing more of a model into play.

So when you say that "the actual values of transhumanism" might be distinguished from less abstract "things done in the name of transhumanism" that feels to me like it could be a sort of category error related to expected value? If the abstraction doesn't address and prevent highly plausible failure modes of someone who might attempt to implement the abstract ideas, then the abstraction was bad.

(Worth pointing out: The LW/OB subculture has plenty to say here, though mostly by Hanson, who has been pointing out for over a decade that much of medicine is actively harmful and exists as a costly signal of fitness as an alliance partner aimed at non-perspicacious third parties through ostensible proofs of "caring" that have low actual utility with respect to desirable health outcomes. Like... it is arguably PART OF OUR CULTURE that "standard non-efficacious bullshit medicine" isn't "real transhumanism". However, that part of our culture maybe deserves to be pushed forward a bit more right now?)

A second argument that seems like it could be unpacked from your statement, that I would agree with, is that well formulated abstractions might contain within them a lot of valuable latent potential, and in the press of action it could be useful to refer back to these abstractions as a sort of True North that might otherwise fall from the mind and leave one's hands doing confused things.

When the fog of war descends, and a given plan seemed good before the fog descended, and no new evidence has arisen to the contrary, and the fog itself was expected, then sticking to the plan (however abstract or philosophical it may be) has much to commend it :-)

If this latter thing is all you meant, then... cool? :-)

Transhumanists Don't Need Special Dispositions

Has someone been making bad criticisms of transhumanism lately?

In 2007, when this was first published, I think I understood which bravery debate this essay might apply to (/me throws some side-eye in the direction of Leon Kass et al), but in 2018 this sort of feels like something that (at least for a LW audience I would think?) has to be read backwards to really understand its valuable place in a larger global discourse.

If I'm trying to connect this to something in the news literally in the last week, it occurs to me to think about He Jiankui's recent attempt to use CRISPR technology to give HIV-immunity to two girls in China, which I think is very laudable in the abstract but also highly questionable as actually implemented based on current (murky and confused) reporting.

Basically, December of 2018 seems like a bad time to "go abstract" in favor of transhumanism, when the implementation details of transhumanism are finally being seriously discussed, and the real and specific challenges of getting the technical and ethical details right are the central issue.
