
The inordinately slow spread of good AGI conversations in ML

It seems like one of the most useful features of having agreement separate from karma is that it lets you vote up the joke and vote down the meaning :)

Steam

Thanks for clarifying! And for the excellent post :)

Finally, when steam flows out to the world, and the task passes out of our attention, the consequences (the things we were trying to achieve) become background assumptions. 

To the extent that Steam-in-use is a kind of useful certainty about the future, I'd expect "background assumptions" to become an important primitive that interacts in this arena as well, given that it's a useful certainty about the present. I realize that's possibly already implicit in your writing when you say figure/ground.

Steam

I think some equivalent of Steam pops out as an important concept in enabling-agency-via-determinism (or requiredism, as Eliezer calls it), when you have in your universe both:

  • iron causal laws coming from deterministic physics and
  • almost iron "telic laws" coming from regulation by intelligent agents with something to protect.

The latter is something that can also become a very solid (full of Steam) thing to lean on for your choice-making, and that's an especially useful model to apply to your selves across time or to a community trying to self-organize. It seems very neglected, formally speaking. Economically-minded thinking tends to somewhat respect it as a static assumption, but not so much the dynamics of formation AFAIK (and so dynamic Steam is a pretty good metaphor).

However, shouldn't "things that have faded into the background" be the other kind of trivial, ie. have "maximal Steam" rather than have "no Steam"? It's like an action that will definitely take place. Something that will be in full force. Trivially common knowledge. You yourself seem to point at it with "Something with a ton of steam feels inevitable", but I suppose that's more like the converse.

(EDIT: Or at least something like that. If a post on the forum has become internalized by the community, a new comment on it won't get a lot of engagement, which fits with "losing steam" after it becomes "solid". But even if we want to distinguish where the action is currently, it makes sense to have a separate notion of what's finished and can easily re-enter attention compared to what was never started.)

Also when you say, in your sunk costs example, "no steam to spend time thinking", I'd say a better interpretation than "time thinking" would be "not enough self-trust to repledge solidity in a new direction". Time to think sounds to me more like Slack, but maybe I'm confused.

A Bayesian Aggregation Paradox

I'm unsure if open sets (or whatever generalization) are a good formal underpinning of what we call concepts, but I agree that working with a negation-of-concept calls for at least a careful reconsideration of the intuitions one takes for granted when working with a concept. And "believing in" might be one of those things that you can't really do with negation-of-concepts.

Also, I think there's a typo: you said "logical complement", but I imagine you meant "set-theoretic complement". (This seems important to point out since, in the topological semantics for intuitionistic logic, the "logical complement" is in fact defined as the interior of the set-theoretic complement, which guarantees an open set.)
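
(For concreteness, a minimal statement of that semantics, writing X for the ambient space and ⟦A⟧ for the open set interpreting a proposition A; the interior operator is exactly what restores openness:)

    \llbracket \neg A \rrbracket \;=\; \operatorname{int}\!\bigl( X \setminus \llbracket A \rrbracket \bigr),
    \quad \text{whereas } X \setminus \llbracket A \rrbracket \text{ itself need not be open.}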

MIRI announces new "Death With Dignity" strategy

I began reading this charitably (unaware of whatever inside baseball is potentially going on, and seems to be alluded to), but to be honest I struggled after "X" seemed to really want someone (Eliezer) to admit they're "not smart"? I'm not sure why that would be relevant.

I think I found these lines especially confusing, if you want to explain:

  • "I just hope that people can generalize from "alignment is hard" to "generalized AI capabilities are hard".

    Is capability supposed to be hard for similar reasons as alignment? Can you expand/link? The only argument I can think of relating the two (which I think is a bad one) is "machines will have to solve their own alignment problem to become capable." 
  • Eliezer is invalidating the second part of this but not the first.

    This would be a pretty useless machiavellian strategy, so I'm assuming you're saying it's happening for other reasons? Maybe self-deception? Can you explain?
  • Eliezer thinks that OpenAI will try to make things go faster rather than slower, but this is plainly inconsistent with things like the state of vitamin D research

    This just made me go "wha" at first but my guess now is that this and the bits above it around speech recognition seem to be pointing at some AI winter-esque (or even tech stagnation) beliefs? Is this right?

A Bayesian Aggregation Paradox

There's probably a radical constructivist argument for not really believing in open/noncompact categories like "non-chair". I don't know how to make that argument, but this post too updates me slightly towards such a Tao of conceptualization.

(To not commit this same error at the meta level: Specifically, I update away from thinking of general negations as "real" concepts, disallowing statements like "Consider a non-chair, ...").

But this is maybe a tangent, since just adopting this rule doesn't remove the need for care in aggregation even with compact categories.
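
(For concreteness, a minimal numeric sketch of the care I mean, with made-up numbers; the point is just that merging B and C into a catch-all "not-A" leaves posteriors intact only if the merged likelihood is the prior-weighted mixture of its parts, not a naive average.)

    # Purely illustrative numbers: three hypotheses and one piece of evidence.
    priors = {"A": 0.5, "B": 0.3, "C": 0.2}
    likelihoods = {"A": 0.2, "B": 0.9, "C": 0.1}   # P(evidence | hypothesis)

    # Fine-grained posterior of A.
    z = sum(priors[h] * likelihoods[h] for h in priors)
    post_fine = priors["A"] * likelihoods["A"] / z                       # ~0.256

    # Coarse-grained, done carefully: P(evidence | not-A) is the
    # prior-weighted mixture of the merged hypotheses' likelihoods.
    p_not_a = priors["B"] + priors["C"]
    lik_not_a = (priors["B"] * likelihoods["B"]
                 + priors["C"] * likelihoods["C"]) / p_not_a
    post_careful = (priors["A"] * likelihoods["A"]
                    / (priors["A"] * likelihoods["A"] + p_not_a * lik_not_a))   # ~0.256, unchanged

    # Coarse-grained, done sloppily: average the merged likelihoods without prior weights.
    lik_sloppy = (likelihoods["B"] + likelihoods["C"]) / 2
    post_sloppy = (priors["A"] * likelihoods["A"]
                   / (priors["A"] * likelihoods["A"] + p_not_a * lik_sloppy))   # ~0.286, shifted

    print(post_fine, post_careful, post_sloppy)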

February 2022 Open Thread

(A suggestion for the forum)

You know that old post on r/ShowerThoughts which went something like "People who speak somewhat broken english as their second language sound stupid, but they're actually smarter than average because they know at least one other language"?

I was thinking about this. I don't struggle with my grasp of English the language so much, but I certainly do with what might be called an American/Western cadence. I'm sure it's noticeable occasionally, inducing just the slightest bit of microcringe in the typical person that hangs around here. Things like strange sentence structure, or weird use of italics, or overuse of a word, or over/under hedging... all the writing skills you already mastered in grade school. And you probably grew up seeing that the ones who continued to struggle with it often didn't get other things quickly either. 

Maybe you notice some of these already in what you're reading right now (despite my painstaking efforts otherwise). It's likely to look "wannabe" or "amateurish", because it is: one learns language and rhythm by imitating. But this imitation game is confined to language & rhythm, and it would be a mistake to also infer from this that the ideas behind them are unoriginal or amateurish.

I'd like to think it wouldn't bother anyone on LW because people here believe that linguistic faux pas, as much as social ones, ought to be screened off by the content. 

But it probably still happens. You might believe it but not alieve it. Imagine someone saying profound things but using "u" and "ur" everywhere, even for "you're". You could actually try this (even though it would be a somewhat shallow experiment, because what I'm pointing at with "cadence" is deeper than spelling mistakes) to get a flavor for it.

A solution I can think of: make a [Non-Native Speaker] tag and allow people to self-tag. Readers could see it and shoot for a little bit more charity across anything linguistically-aesthetically displeasing. The other option is to take advantage of customizable display names here, but I wonder if that'd be distracting if mass-adopted, like twitter handles that say "[Name] ...is in New York".

I would (maybe, at some point) even generalize it to [English Writing Beginner] or some such, which you can self-assign even if you speak natively but are working on your writing skills. This one is more likely to be diluted though.

Measure's Shortform

I like this question. I imagine the deeper motivation is to think harder about credit assignment. 

I wrote about something similar a few years ago, but with the question of "who gets moral patienthood" rather than "who gets fined for violating copyright law". In the language of that comment, "you publishing random data" is just being an insignificant Seed.

Is "gears-level" just a synonym for "mechanistic"?

Yeah, this can be really difficult to bring out. The word "just" is a good noticer for this creeping in.

It's like a deliberate fallacy of compression: sure you can tilt your view so they look the same and call it "abstraction", but maybe that view is too lossy for what we're trying to do! You're not distilling, you're corrupting!

I don't think the usual corrections for fallacies of compression can help either (eg. Taboo) because we're operating at the subverbal layer here. It's much harder to taboo cleverness at that layer. Better off meditating on the virtue of The Void instead.

But it is indeed a good habit to try to unify things, for efficiency reasons. Just don't get caught up on those gains.

Biology-Inspired AGI Timelines: The Trick That Never Works

The "shut up"s and "please stop"s are jarring.

Definitely not, for example, norms to espouse in argumentation (and tbf nowhere does this post claim to be a model for argument, except maybe implicitly under some circumstances).

Yet there's something to it.

There's a game of Chicken arising out of the shared responsibility to generate (counter)arguments. If Eliezer commits to Straight, ie. refuses to instantiate the core argument over and over again (either explicitly, by saying "you need to come up with the generator" or implicitly, by refusing to engage with a "please stop."), then the other will be incentivized to Swerve, ie. put some effort into coming up with their own arguments and thereby stumble upon the generator.
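
(A toy sketch of that incentive structure; the payoff numbers and the best_response helper are purely illustrative, the only structural claim being that mutual Straight is the worst outcome, so Swerving against a committed Straight is the best response.)

    # Purely illustrative: a generic Chicken payoff matrix with arbitrary numbers;
    # only the ordering of outcomes matters. Entries are (row_payoff, column_payoff).
    payoffs = {
        ("Straight", "Straight"): (-5, -5),  # nobody generates arguments; the conversation crashes
        ("Straight", "Swerve"):   ( 2, -1),  # the other party bears the effort, finds the generator
        ("Swerve",   "Straight"): (-1,  2),  # the same argument gets re-instantiated yet again
        ("Swerve",   "Swerve"):   ( 0,  0),  # the effort is shared
    }

    def best_response(row_action):
        """Column player's best action given the row player's fixed action."""
        return max(["Straight", "Swerve"],
                   key=lambda a: payoffs[(row_action, a)][1])

    # If the row player credibly commits to Straight, the column player swerves.
    print(best_response("Straight"))  # -> "Swerve"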

This isn't my preferred way of coordinating on games of Chicken, since it is somewhat violent and not really coordination. My preferred way is to proportionately share the price of anarchy, which can be loosely estimated with some honest explicitness. But that's what (part of) this post is, a very explicit presentation of the consequences!

So I recoil less. It feels inviting instead, about a real human issue in reasoning. And bold, given all the possible ways to mischaracterize it as "Eliezer says 'shut up' to quantitative models because he has a pet theory about AGI doom".

But is this an important caveat to the fifth virtue, at least in simulated dialogue? That remains open for me.
