Spiracular

Comments

Memetic Hazards in Videogames

Some of the other F-grade feed-ins, for completeness' sake...

  • A lot of people went to a bad high school. Some have learned helplessness, and don't know how to study. Saw the occasional blatant cheating habit, too.
    • Community colleges know this, and offer some courses that are basically "How to study"
    • So much of many middle-class cultures is just hammering "academics matter" and "advice on how to study or network" into your brain. Most middle-class students still manage to miss the memo on 1-2 key study skills or resources, though. Maybe everyone should go to "how to study" class...
      • Personally? As a teen, I didn't know how to ask for help, and I couldn't stand sounding like an idiot. Might have saved myself some time, if I'd learned how to do that earlier.
  • Nobody uses office hours enough.
    • At worst, it's free tutoring. At best, it's socially motivating and now the teacher feels personally invested in your story and success.
    • "High-achievers who turned an early D into an A" are frequently office-hour junkies.
    • Someone with a big family crisis is probably still screwed even if they go to office hours. Past some threshold, people should just take a W.
  • A few people just genuinely can't do math, in an "it doesn't fit in their brain" kind of way
    • My mom thinks this exists, but that it accounts for <1% of cases
Memetic Hazards in Videogames

TL;DR: As people get older, it's common to acquire responsibilities that make it hard to focus on school (ex: kids, elderly parents). Fairly high confidence that this is a big factor in community college grades.


As someone whose parent teaches basic math at community college, and who attended community college for 2 years myself (before transferring)...

I have absolutely seen some people pick up these skills late. The work ethic & directedness of community college high-achievers is often notably better than that of people in their late teens.

They also usually have healthier attitudes around failure (relative to the high-achieving teens), which sometimes makes them better at recovering from an early bad grade. Relatedly, the UCs say their CC transfers have much lower drop-out rates.

One major "weakness" I can think of is that adults probably go in fully cognizant that school feels like an "artificial environment." Some kids manage to not notice this until grad school.


From my mom's work, I know that the grading distribution in high-school-remedial math classes is basically bimodal: "A"s and "F"s, split almost 50-50.

The #1 reason my mom cites for this split is probably a responsibilities and life-phase difference?

A lot of working-class adults are incredibly busy. Many are under more stress and strain than they can handle, at least some of the time. (The really unlucky ones are under more strain than they can handle basically all of the time, but those people are less likely to try community college in the first place.)

If someone is holding down a part-time job, doesn't have a lot in savings, is married, is taking care of a kid, and is caring for their elderly mother? That basically means a high load of ambient stress and triage, and also having 5 different avenues for random high-priority urgent crises (ex: health problems involving any of these) to bump school out of the prioritization matrix.

(Notably, "early achievers" on the child-having front usually also end up a bit crippled academically. I think that's another point in favor of the "life phase" or "ambient responsibility load" theory being a big deal here: that load competes with, or even cannibalizes, academic focus/achievement.)

My take-away: if you have a bunch in savings and don't have a kid, then my bet is that learning a lot of curricula late is unlikely to be a problem. Might actually be kinda fun?

But if you're instead juggling a dozen other life responsibilities, then God help you. If your class has tight deadlines, you may have to conduct a whole lot of triage to make it work.

Frame Control

There's actually one additional dynamic that I can't quite put my finger on, but here's my attempt.

It's shaped something like...

If you are a pretty powerful person, and you take a desperate, powerless person, and you hand them something that could indiscriminately destroy you? That is very likely to be a horrible mistake that you will one day regret. It's a bit like handing some rando a version of The One Ring that is specific to controlling you.

Unless you have really good judgement, and the person you handed it to is either Tom Bombadil or a hobbit who manages to spastically fling it into a volcano despite himself? It is likely to corrupt them, and they will probably end up doing terrible things with it.

Never jump someone from 0 to 11 units of power over you until you've seen what they're like with a 3 or a 5.

Frame Control

I think I have seen the "sanity-check"/"sanity-guillotine" thing done well. I have also seen it done poorly, in a way that mostly resembles a "finger-trap" that targets any close friends who notice problems.

For actual accountability/protection? "Asking to have it reported publicly/to an outside third party" seems to usually work better than "Report it to me privately."

(A very competent mass-crowd-controller might have a different dynamic, though; I haven't met one yet.)


For strong frame-controllers? "Encouraging their students to point out a vague category of issue in private" has a nasty tendency to speed up evaporative cooling, and it burns out the fire of some of the people who might otherwise have reported misbehavior to a more objective third party.

It can set up the frame-controller as the counter/arbiter of "how many real complaints have been leveled their way about X" (...which they will probably learn to lie about...), frames them as "being careful about X," and gives the frame-controller one last pre-reporting opportunity to re-frame-control the sender.

I think the "private reporting" variant is useful for protecting a leader from unpleasant surprises: it gives them a quick chance to update out of a bad pattern early on, and is slightly good for that reason. But as an "accountability method," I think this is simply not viable protection against even a halfway-competent re-framer.


I think the gold standard for actual accountability is closer to the "outside HR firm" model: someone outside your circle, who people report serious issues to, and who is not primarily accountable to you.

Not everyone has access to the gold standard, though.

When I single a person out for my future accountability? I pick people who I view as (high-integrity, low-jealousy) peers-or-higher, AND/OR people on a totally different status-ladder. I want things set up such that even a maximally-antagonistic me probably has no way to easily undermine them.

If I have a specific concern, I give them a very clear sense in advance of: "Here is a concrete threshold condition. If I ever trigger it, please destroy me unless I remove myself from any position of power over others. I am asking you specifically (negates bystander effect). I will thank you later."

(Probably also hand them something that would make it easier to selectively shut me down, such as a signed letter from myself. Concrete thresholds are useful, because it is hard to frame-obscure your way out of hard facts.)

I think this variant requires knowing, and trusting, someone pretty non-petty and non-jealous who has a higher bar of integrity than you do. I do kinda think most people's judgement around identifying such people is terrible, unfortunately?

But I think the drawbacks of this are at least... different. And I generally take that shape of thing, as a strong signal of real vulnerability and accountability.

Spiracular's Shortform Feed

Working out how this applies to other fields is left as an exercise for the reader, because I'm lazy and the space of places I use this metaphor is large (and, paradoxically, so overbuilt that it's probably quite warped).

Also: the minimally-warped lens isn't always the most useful lens! Getting work done requires channeling attention, and channeling it disproportionately!

And most heavily-built things are pretty warped; that's usually a safe default assumption. It doesn't make heavily-built things pointless; that is not what I'm getting at.

...but stuff that hews close to base-reality has the important distinction of surviving most cataclysms basically intact, and robustness is a virtue that works in its favor.

Spiracular's Shortform Feed

I do think some things are actually quite real and grounded? Everything is shoved through a lens as you perceive it, but not all lenses are incredibly warping.

If you're willing to work pretty close to the lower levels of perception, and be quite careful while building things up, well-grounded and deeply-grounded shit EXISTS.


To give an evocative, and quite literally illustrative, example?

I think learning how to see the world well enough to do realistic painting is an exceptionally unwarping and grounding skill.

Any other method of seeing while drawing doubles up on your attentional biases and lets you see the warped result*. When you view your drawing, you re-apply your lens to your lens's results, and see the square of any warping you were doing.

It's no coincidence that most people who try will take one look at their first attempt at realistic drawing, cringe, and go "that's obviously wrong..."

When you can finally produce an illustration that isn't "obviously wrong," it stands as concrete evidence that you've learned some ability to engage at-will with your visual perception, in a way that is relatively non-warping.

Or, to math-phrase it badly...
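(A minimal sketch, using my own made-up notation: let $s$ be the scene in front of you, and let $f$ be the warp your perceptual lens applies.)

$$\text{your drawing} \approx f(s), \qquad \text{perceived drawing} \approx f(f(s)) = f^2(s), \qquad \text{perceived scene} \approx f(s)$$

(The warp hits the drawing twice but the live scene only once, so the mismatch between $f^2(s)$ and $f(s)$ is exactly what makes the warping visible. A drawing that finally looks "right" is evidence that $f$ is close to the identity.)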

* Taking as a totally-unreasonable given that your "skill at drawing" is good enough to not get in the way.

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

Now to actually comment...

(Ugh, I think I ended up borderline-incoherent myself. I might revisit and clean it up later.)

I think it's worth keeping in mind that "common social reality" is itself sometimes one of these unstable/ungrounded, top-heavy, many-epicycles, self-reinforcing, collapses-when-reality-hits structures.

I am beyond sick of the fights about whether something is "erroneous personal reality vs social reality" or "personal reality vs erroneous social reality," so I'm going to leave simulating that as an exercise for the reader.

loud sigh

Jumping meta, and skipping to the end.

Almost every elaborate worldview is built on at least some fragile low-level components, and might also have a few robustly-grounded builds in there, if you're lucky.

"Some generalizable truth can be extracted" is more likely to occur, if there were incentives and pressure to generate robust builds.*

* (...God, I got a sudden wave of sympathy for anyone who views Capitalists and Rationalists as some form of creepy scavengers. There is a hint of truth in that lens. I hope we're more like vultures than dogs; vultures have a way better "nutrition to parasite" ratio.)


By pure evolutionary logic: whichever thing adhered more closely to common properties of base-reality, and/or was better-trained to generalize or self-update, will usually hold up better when some of its circumstances change. This tends to be part of what boils up when worldview conflicts and cataclysms play out.

I do see "better survival of a worldview across a range of circumstances" as somewhat predictive of attributes that I consider good-to-have in a worldview.

I also think surviving worldviews aren't always the ones that make people the happiest, or allow people to thrive? Sometimes that sucks.

(If anyone wants to get into "everything is all equally-ungrounded social reality?" No. That doesn't actually follow, even from the true statement that "everything you perceive goes through a lens." I threw some quick commentary on that side-branch here, but I mostly think it's off-topic.)

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

On the one hand, I think this is borderline-unintelligible as currently phrased? On the other hand, I think you have a decent point underneath it all.

Let me know if I'm following, while I try to rephrase it.


When insulated from real-world or outer-world incentives, a project can build up a lot of internal logic and inferential distance by building upon itself repeatedly.

The incentives of insulated projects can be almost artificially simple? So one can basically Goodhart, or massage data and assessment metrics, to an incredible degree. This is sometimes done unconsciously.

When such a project finally comes into contact with reality, this can topple things at the very bottom of the structure that everything else was built upon.

So for some heavily-insulated, heavily-built, and not-very-well-grounded projects, that first exposure to reality can trigger a lot of warping/worldview-collapse/fallout in the immediate term.

Zoe Curzi's Experience with Leverage Research

My impression is that Leverage's bodywork is something closer to what other people call "energy work," which probably puts it... closer to Reiki than massage?

But I never had it done to me, and I don't super understand it myself! Pretty low confidence in even this answer.

Speaking of Stag Hunts

Hm... I notice I'm maybe feeling some particular pressure to personally address this one?

Because I called out the deliberate concentration of force in the other direction that happened on an earlier copy of the BayAreaHuman thread.


I am not really recanting that? I still think something "off" happened there.

But I could stand up and give a more balanced deposition.

To be clear? I do think BAH's tone was a tad aggressive. And I think there were other people in the thread who were more aggressive than that. I think Leverage Basic Facts EA had an even more aggressive comment thread.

I also think each of the concrete factual claims BAH made did appear to check out with at least one corner of Leverage, according to my own account-collecting (although not always at the same time).

(I also think a few of LBFEA's wildest claims were probably true. Exclusion of the Leverage website from the Wayback Machine is definitely true*. The Slack channels characterizing each Pareto attendee as a potential recruit seem... probably true?)

There were a lot of corners of Leverage, though. Several of them were walled off from the corners BAH talked about, or were not very near to them.

For what it's worth, I think the positive accounts in the BAH comment thread were also basically honest? I up-voted several of them.

Side-note: As much as I don't entirely trust Larissa? I do think some part of her is at least trying to hold the fact that both good and bad things happened here. I trust her thoughts more than Geoff's.

* Delisted from Wayback: The explanation I've heard is that Geoff was sick of people dragging old things up to make fun of the initial planning document, and critiquing the old Connection Theory posts.


I am also dead-certain that nobody was going into the full story, and some of that was systematic. "BAH + commentary" put together still doesn't sum to enough of the whole truth to really make sense of things.

Anna & Geoff's initial Twitch stream included commentary about how Leverage used to be pretty friendly with EA, and ran the first EAG. Several EA founders felt pretty close after that, and then there was some pretty intense drifting apart (partially over philosophical differences?). There was also some sort of kerfuffle where a lot of people ended up with the frame that "Leverage was poaching donors," which may have been unfair to Leverage. As time went on, Geoff and other Leveragers were largely blocked from collaborations, and felt pretty shunned. That all was an important missing piece of the puzzle.

((Meta: Noticing I should add this to Timeline and Threads somewhere? Doing that now-ish.))

(I also personally just really liked Anna's thoughts on "narrative addiction" being something to watch out for? Maybe that's just me.)

The dissolution & information agreement was another important part. Thank you, Matt Falshaw, for putting some of that in a form that could be viewed by people outside of the ecosystem.

I also haven't met anybody except Zoe (and now me, I guess?) who seems to have felt able to even breathe a word about the "objects & demons" memetics thing. I think that was another important missing piece.

Some people do report feeling incapable of speaking positively about Leverage in EA circles? I personally didn't experience a lot of this, but I saw enough surprise when I said good things about Reserve that it doesn't surprise me particularly. Leverage's social network and some of its techniques were clearly quite meaningful to some people, so I can imagine how rough "needing to write that out of your personal narrative" could have been.
