Shortform Content [Beta]

jp's Shortform

I found myself saying recently, "While this strategy does not in this case seem to have much causal connection to good outcomes, I feel like following the strategy in the past few months has been good for my soul."*

Humans don't have souls. I could imagine substituting, "This strategy has made me an easier agent to coordinate with and has moved me closer to the morality I was taught growing up, which has reduced my cognitive dissonance with my formerly more consequentialist actions. And it's an important part of the strategy that I don't alter it just becau... (read more)


Just wanted to add that "made me an easier agent to coordinate with" applies not only to coordination with other people, but also to coordination with your past/future selves. That is, what is "good for your soul" is good even when other people are not involved.

It may even be the more important aspect, because if you can't trust your future selves, how could other people? (Your deals with other people implicitly involve deals with your future selves.)

jp: I think you're gonna need to define soul here. Not in a way that implies you've understood everything, but in the way that you might describe fire as the red hot stuff.

mr-hire: The soul is the metaphorical red hot stuff :D.
capybaralet's Shortform

Moloch is not about coordination failures.

Moloch is about the triumph of instrumental goals.

Coordination *might* save us from that. Or not. "it is too soon to say"

Mati_Roy: Working a lot is an instrumental goal. If you start tracking your time, and optimizing that metric, you might end up working more than optimal. That seems like a triumph of instrumental goals that isn't a coordination failure. I wouldn't assign this failure to Moloch. Thoughts?

I basically agree, but I do assign it to Moloch. *shrug

Sherrinford's Shortform

I would love to see examples of contributions with actual steelmanning instead of just seeing people who pay lipservice to it.

Kaj_Sotala: ITTs and steelmanning feel like they serve different (though overlapping) purposes to me. For example, if I am talking with people who are not X (libertarians, socialists, transhumanists, car-owners...), we can try to steelman an argument in favor of X together. But we can't do an ITT of X, since that would require us to talk to someone who is X.

Yes, though I assume the best test for whether you really steelman someone would be if you can take a break and ask her whether your representation fits.

Sherrinford: What I mean is: I would like people who write articles about the supposed actions or motivations of other people - or government agencies, firms, or whatever - to actually try to present those actions and motivations in a way that at least assumes they are not completely dumb, evil, or pathetic. It seems fashionable, when people do not see the sense behind actions, not to try hard but to jump to the conclusion that it must be due to some despicable, stupid, or at least equilibrium-inefficient behavior (e.g. some claims about "signalling", with no proper analysis of whether the claim makes sense in the given situation). This may feel very insightful; after all, the writer seemingly has a deeper insight into social structures than the social agents. But supposed insights that feel too good can be dangerous. And that a model is plausible does not mean that it applies to every situation.
Hazard's Shortform Feed

The way I see "Politics is the Mind Killer" get used, it feels like the natural extension is "Trying to do anything that involves high stakes or involves interacting with the outside world or even just coordinating a lot of our own Is The Mind Killer".

From this angle, a commitment to prevent things from getting "too political" to "avoid everyone becoming angry idiots" is also a commitment to not having an impact.

I really like how jessica re-frames things in this comment. The whole comment is interesting, here's a snippet:

Basically, if the issue is adversar

... (read more)
avturchin's Shortform

"Back to the Future: Curing Past Suffering and S-Risks via Indexical Uncertainty"

I uploaded the draft of my article about curing past sufferings.

Abstract:

The long unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for some idiosyncratic reasons (s-risks), are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one pure theoret... (read more)

Sunny's Shortform

When somebody is advocating taking an action, I think it can be productive to ask "Is there a good reason to do that?" rather than "Why should we do that?" because the former phrasing explicitly allows for the possibility that there is no good reason, which I think makes it both intellectually easier to realize that and socially easier to say it.

Eli's shortform feed

Something that I've been thinking about lately is the possibility of an agent's values being partially encoded by the constraints of that agent's natural environment, or arising from the interaction between the agent and environment.

That is, an agent's environment puts constraints on the agent. From one perspective removing those constraints is always good, because it lets the agent get more of what it wants. But sometimes from a different perspective, we might feel that with those constraints removed, the agent Goodharts or wireheads, or otherwise fails... (read more)

Side note, which is not my main point: I think this also has something to do with what meditation and psychedelics do to people, which was recently up for discussion on Duncan's Facebook. I bet that meditation is actually a way to repair psych blocks and trauma and what-not. But if you do that enough, and you remove all the psych constraints... a person might sort of become so relaxed that they become less and less of an agent. I'm a lot less sure of this part.

adamzerner's Shortform

There's a concept I want to think more about: gravy.

Turkey without gravy is good. But adding the gravy... that's like the cherry on top. It takes it from good to great. It's good without the gravy, but the gravy makes it even better.

An example of gravy from my life is starting a successful startup. It's something I want to do, but it is gravy. Even if I never succeed at it, I still have a great life. Eg. by default my life is, say, a 7/10, but succeeding at a startup would be so awesome it'd make it a 10/10. But instead of this happening, my brain pulls a ... (read more)

Hazard's Shortform Feed

I started writing on LW in 2017, 64 posts ago. I've changed a lot since then, and my writing's gotten a lot better, and writing is becoming closer and closer to something I do. Because of [long detailed personal reasons I'm gonna write about at some point] I don't feel at home here, but I have a lot of warm feelings towards LW being a place where I've done a lot of growing :)

I'm glad about your growth here :)

Raemon's Shortform

I’ve noticed myself using “I’m curious” as a softening phrase without actually feeling “curious”. In the past 2 weeks I’ve been trying to purge that from my vocabulary. It often feels like I'm cheating, trying to pretend like I'm being a friend when actually I'm trying to get someone to do something. (Usually this is a person I'm working with, and it's not quite adversarial, we're on the same team, but it feels like it degrades the signal of true open curiosity.)

mr-hire: Have you tried becoming curious each time you feel the urge to say it? Seems strictly better than not being curious.

Raemon: Dunno about that. On one hand, being curious seems nice on the margin. But the whole deal here is when I have some kind of agenda I'm trying to accomplish. I do care about accomplishing the agenda in a friendly way. I don't obviously care about doing it in a curious way – the reason I generated the "I'm curious" phrase is because it was an easy hack for sounding less threatening, not because curiosity was important. I think optimizing for curiosity here is more likely to fuck up my curiosity than to help with anything.

I went through something similar with phrases like "I'm curious if you'd be willing to help me move," when I really meant "I hope that you'll help me move."

My personal experience was that shifting this hope/expectation to a real sense of curiosity ("Hmm, does this person want to help me move?") made it more pleasant for both of us. I became genuinely curious about their answer, and there was less pressure both internally and externally.

ofer's Shortform

[Online dating services related]

The incentives of online dating service companies are ridiculously misaligned with their users'. (For users who are looking for a monogamous, long-term relationship.)

A "match" between two users that results in them both leaving the platform for good is a super-negative outcome with respect to the metrics that the company is probably optimizing for. They probably use machine learning models to decide which "candidates" to show a user at any given time, and they are incentivized to train these models to avoid matches that cause users to leave their platform for good. (And these models may be way better at predicting such matches than any human).

Dagon: I think this is looking at obvious incentives, and ignoring long-term incentives. It seems likely that owners/funders of platforms have both data and models of customer lifecycles and variability, including those who are looking to hook-up and those who are looking for long-term partners (and those in-between and outside - I suspect there is a large category of "lookey-lous", who pay but never actually meet anyone), and the interactions and shifts between those. Assuming that most people eventually exit, it's FAR better if they exit via a match on the platform - that likely influences many others to take it seriously.

TurnTrout: Why is this true? Is there any word-of-mouth benefit for e.g. Tinder at this point, which plausibly outweighs the misaligned incentives ofer points out?

I don't know much about their business and customer modeling specifically.  In other subscription-based information businesses, a WHOLE LOT of weight is put on word of mouth (including reviews and commentary on social media), and it's remarkably quantifiable how valuable that is.  For the cases I know of, the leaders are VERY cognizant of the Goodhart problem that the easiest-to-measure things encourage churn, at the expense of long-term satisfaction.

Troy Macedon's Shortform

I keep seeing people say that the Self-Indication Assumption implies that given two possible theories with equal posterior probability of being true, SIA says the one that implies more observers is by default more likely to be true. But this would only be true if possible-universes were equally distributed by observer count. But they're not. Universes, even the set of possible universes, fall under either a Normal Distribution, or a Power Distribution. Either distribution implies that universes with more observers are less likely even though each one has m... (read more)

Tetraspace Grouping: The number of observers in a universe is solely a function of the physics of that universe, so the claim that a theory that implies 2Y observers is a third as likely as a theory that implies Y observers (even before the anthropic update) is just a claim that the two theories don't have an equal posterior probability of being true.

Posterior probability doesn't take SIA into account, so the theories would be equally likely before applying SIA. Then, applying SIA, the theory that predicts 2Y observers would become twice as likely. But then, applying a kind of "Universe Indication Assumption" under which universes with twice as many observers are intrinsically only a third as likely, the theory that predicts Y observers becomes more likely.
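As a toy numeric sketch of this combined update (the 1/3 figure is taken from the example in this thread, not from any canonical formulation of SIA):

```python
# Combine an intrinsic prior that penalizes observer count with the SIA
# update that rewards it. Theory "2Y" predicts twice the observers of "Y"
# but starts with a third the intrinsic prior (assumption from the thread).
prior = {"Y": 3.0, "2Y": 1.0}       # 2Y-theory is a third as likely a priori
sia_weight = {"Y": 1.0, "2Y": 2.0}  # SIA: weight proportional to observers

unnormalized = {k: prior[k] * sia_weight[k] for k in prior}
total = sum(unnormalized.values())
posterior = {k: v / total for k, v in unnormalized.items()}
# posterior: Y-observer theory 3/5, 2Y-observer theory 2/5
```

So under these assumed numbers the observer penalty slightly outweighs the SIA boost, and the Y-observer theory ends up ahead.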

Matt Goldenberg's Short Form Feed

Is society just a tool to get Kegan 3 frames to want to LARP Kegan 4 and Kegan 5 frames?

G Gordon Worley III: I mean, this is a weird way to put it, but kinda. At Kegan 3 the ground truth is taken for granted, and is heavily constructed via social reality. You can have a traditional society that isn't trying to do anything other than maintain the existing reality that works for people at this stage of development.

On the other hand, modern civilization (as in, modern industrial civilization with loose family ties and trusting strangers and impersonal organizations that function like machinery) basically demands people at least come up to Kegan 4 to really succeed, and historically put lots of systems in place to help people get there. It does end up asking people to try their best and fake it until they actually develop, with people playing at Kegan 4 without actually being there.

A classic example I can think of is the way modern society, and especially modern organizations, expect people to function in compartmentalized ways. Like, say you work at a company, and you, Alice, have beef with your coworker, Bob. The expectation is that you'll act "professionally", which is essentially the LARPing thing you're getting at, where there are rules around how you are supposed to behave in the workplace, and one of those is engaging with people in the workplace only on limited terms. The whole person doesn't come to work, only their work "mask". So your beef with Bob must be kept out of the workplace, lest you be fired yourself, and Bob can readily get himself out of trouble if you break the rules and bring the beef to work by saying "hey, Alice isn't acting professionally!"

I feel like I only wrote half that comment. Here's the rest.

That kind of compartmentalization is not something that comes naturally to people without systems in place to push them to it. In a traditional society, there's just sort of one social sphere (attempts at secret groups for ritual purposes notwithstanding) that overlaps with everything and you can bring your whole self all the time everywhere and people will expect you to do that. It's only that we ask more of people in our modern world because compartmentalization works well as a bridge to help pe... (read more)

Rafael Harth's Shortform

I was initially extremely disappointed with the reception of this post. After publishing it, I thought it was the best thing I've ever written (and I still think that), but it got < 10 karma. (It did get more a few weeks later.)

If my model of what happened is roughly correct, the main issue was that I failed to communicate the intent of the post. People seemed to think I was trying to say something about the 2020 election, only to then be disappointed because I wasn't really doing that. Actually, I was trying to do something much more ambitious: solving the ... (read more)

(Datapoint on initial perception: at the time, I had glanced at the post, but didn't vote or comment, because I thought Steven was in the right in the precipitating discussion and the "a prediction can assign less probability-mass to the actual outcome than another but still be better" position seemed either confused or confusingly phrased to me; I would say that a good model can make a bad prediction about a particular event, but the model still has to take a hit.)

Matt Goldenberg's Short Form Feed

(Taken from a comment)

One of the problems with Rao's Gervais Principle that I later realized (and that I think Zvi's sequence shares to some degree) is that it doesn't distinguish between Kegan 4.5 sociopaths and Kegan 5 leaders. This creates the impossible choice between freedom as a loser, meaning as a clueless, or influence as a sociopath: pick one.

Similarly, Zvi's sequence gives the choice of truth as Simulacra 1, belonging as Simulacra 2, and influence as Simulacra 4.

Neither framing admits that it's possible to get to a stage of l... (read more)


Yes, I agree with that.  Of course it's meaningful!  It wouldn't be a reflection of reality if it wasn't.  But meaningful isn't the same as complete or undistorted.

For example, I think it's meaningful (maybe not the most insightful thing that could possibly be said, but meaningful) to talk about the original Star Trek in terms of head, heart, and gut as reflected in the characters of Spock, McCoy, and Kirk.  I don't think this covers everything that Star Trek is, or everything that those characters are, or everything that real people ca... (read more)

Vladimir_Nesov: My takeaway was that awareness of all levels is necessary if you want to reliably remain on level 1 (make sure that you don't trigger responses for levels 2-4 by crafting statements that have no salient interpretations at levels 2-4). So both the problem and the solution involve reading statements at multiple levels. (The innovation is in how this heuristic is more principled/general than things like "don't talk about politics or religion". You might even manage to talk about politics and religion without triggering levels 2-4.)

G Gordon Worley III: Thanks, I think this helps me see what I find slightly off about both, and also Zvi's writing on "moral mazes". In all three cases, it's acting as if the frames and roles people feel themselves to be trapped in are the ground reality, rather than a way of being those people are choosing to take on. They present models that seem to claim a complete description, but fail to realize that even if they are complete descriptions it's possible to pull back and see people and statements and roles to be in multiple states at once, or for parts of the model to be under- or over-specified such that stuff gets lumped together that should be split apart.
Thomas Kwa's Shortform

Is it possible to make an hourglass that measures different amounts of time in one direction than the other? Say, 25 minutes right-side up, and 5 minutes upside down, for pomodoros. Moving parts are okay (flaps that close by gravity or something) but it should not take additional effort to flip.

mingyuan: I don't see why this wouldn't be possible? It seems pretty straightforward to me; the only hard part would be the thing that seems hard about making any hourglass, which is getting it to take the right amount of time, but that's a problem hourglass manufacturers have already solved. It's just a valve that doesn't close all the way. Unless you meant, "how can I make such an hourglass myself, out of things I have at home?" in which case, idk bro.

One question I have about both your solution and mine is how easily the time can be varied drastically by changing the size of the hole. My intuition says that very large holes behave quite differently from smaller ones, and if you want a drastic 5x difference in speed you might run into this "too large, and the sand just rushes through" behavior.

effective-egret: While I'm sure there's a mechanical solution, my preferred solution (in terms of implementation time) would be to simply buy two hourglasses - one that measures 25 minutes and one that measures 5 minutes - and alternate between them.
DanielFilan's Shortform Feed

A rough and dirty estimate of the COVID externality of visiting your family in the USA for Christmas when you don't feel ill [EDIT: this calculation low-balls the externality, see below]:

You incur some number of μCOVIDs[*] a week, let's call it x. Since the incubation time is about 5 days, let's say that your chance of having COVID is about 5x/7,000,000 when you arrive at the home of your family with n other people. In-house attack rate is about 1/3, I estimate based off hazy recollections, so in expectation you infect 5xn/21,000,000 people, which is about... (read more)
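The arithmetic above can be put into a tiny function (a sketch of the same back-of-envelope estimate, with the quantities named; the 5-day incubation window and 1/3 in-house attack rate are the rough figures from the text):

```python
def expected_family_infections(x_microcovids_per_week: float, n_family: int) -> float:
    """Expected number of family members you infect on arrival,
    per the back-of-envelope estimate above."""
    # ~5 days of a 7-day week during which an incubating infection is carried
    p_covid_on_arrival = (5 / 7) * x_microcovids_per_week / 1_000_000
    attack_rate = 1 / 3  # rough in-household secondary attack rate
    return p_covid_on_arrival * attack_rate * n_family

# e.g. at 100 microCOVIDs/week, visiting 4 relatives:
# expected_family_infections(100, 4) == 5 * 100 * 4 / 21_000_000, about 1e-4
```

This matches the 5xn/21,000,000 expression in the text; as the replies below note, it omits travel risk and the reverse direction (relatives infecting you).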

I recently realized, thanks to a FB comment by Paul Christiano, that this is thinking about things in kind of the wrong way. R is approximately 1 because society is tamping down infection rates when infections are high and 'loosening' when infections are low. So, by infecting people, you cause some chain of counterfactual infections that perhaps ends when society notices and tamps down infection, but also you cause the rest of society to do less fun interacting in order to tamp down the virus. So the cost of infecting somebody is to cause everybody else to be more conservative. I'm still not quite sure how to think about that cost tho.

DanielFilan: Note: this calculation only accounts for you infecting your relatives who then infect others, and not your relatives infecting you and you infecting others. Accounting for this should probably raise the cost by a factor of 2.

DanielFilan: Note: this calculation assumes that travelling is not risky at all. Realistically that should be bundled into x.
df fd's Shortform

[Epistemic status: conspiracy theory/raving of the mads]

We all know that GDP and standard of living track energy use, yet arguably the most convenient and widespread energy sources currently are fossil fuels, which saw mass adoption with the start of the industrial revolution. Which happened, it can be said without hyperbole, eons ago [citation needed].

For some time, nuclear fission seemed poised to replace fossil fuels, yet a series of unfortunate events permanently soured the public perception of this technology [Chernobyl, Fukushima]. Even in countries t... (read more)

alkjash's Shortform

Instrumental Rationality Mini-Retrospective

I promised several years ago to write a retrospective on Hammertime a year after it was released. I broke that promise but I wanted to take some time to do the work now, and to summarize my current beliefs about how much rationalist self-improvement affected my personal growth. I'd also like to estimate how it compares to other schools of self-improvement I've dabbled in.

First, I should mention that epistemic rationality has been directly useful in my career, although this is highly unlikely to generalize. At leas... (read more)

alkjash: Very interesting! This thread is the first time I've heard of NLP (might have seen the acronym before but I thought it was ML people referring to Natural Language Processing), I will definitely check it out. I guess I just rounded off my observations to the nearest things I recognized. I'm not surprised that Robbins stuff is embedded in a larger technique but am kind of surprised that I've been ignorant of it for so long. Is there a book or resource that you would most recommend to learn NLP?
pjeby: NLP stands for Neurolinguistic Programming -- a spur-of-the-moment name given by Richard Bandler after glancing at the titles of the books in his car when he was stopped by police for speeding, and was asked his occupation. Before that point, it was just a group of students and academics doing weird psychology experiments, after Bandler noticed some common language patterns between certain therapists whose books he was transcribing and editing (one a Gestalt therapist, the other a family therapist), and went to ask his linguistics professor about it.

Bandler later settled on a definition of NLP as, "an attitude which is an insatiable curiosity about human beings with a methodology that leaves behind it a trail of techniques." Which, one might argue, is just another way of saying "Science!"... but the more philosophically-oriented works of the NLP creators spend a lot of time talking about how so much of psychological science at the time (60's and 70's) was "how do we define how fucked-up somebody is", not "what can we do to help".

In contrast, the philosophy of NLP presupposes that people are not broken: whatever it is they're doing, they're doing perfectly according to their programming: a programming that can be understood in terms of internal processing steps (represented in sensory terms), and in terms of people's internal models, or maps of the territory. Behavior that may seem crazy or stupid can thus be understood as straightforward, even rational, when considering both a person's map and the processing steps they are using to think and respond to what they observe.

The Structure of Magic (the first book on NLP, which IIUC was also Bandler's masters thesis) was written to capture something that it appeared that more-effective therapists were doing to change people: specifically, noticing map-territory gaps and getting people to confront those gaps.
Bandler noticed the verbal patterns because he was typing the same kinds of questions and statements over an

Fascinating! Definitely plan to check this out, thanks for the recommendations and detailed introduction.

ricraz's Shortform

A well-known analogy from Yann LeCun: if machine learning is a cake, then unsupervised learning is the cake itself, supervised learning is the icing, and reinforcement learning is the cherry on top. (Unfortunately it seems like I can't embed images into a shortform).

I think this is useful for framing my core concerns about current safety research:

  • If we think that unsupervised learning will produce safe agents, then why will the comparatively small contributions of SL and RL make them unsafe?
  • If we think that unsupervised learning will produce dangerous agen
... (read more)

I wrote a few posts on self-supervised learning last year:

I'm not aware of any airtight argument that "pure" self-supervised learning systems, either generically or with any particular architecture, are safe to use, to arbitrary levels of intelligence, though it seems very much worth som... (read more)
