Dagon

Just this guy, you know?

Dagon · 21h · 20

[epistemic status: just what I've read in popular-ish press, no actual knowledge nor expertise]

Two main mechanisms that I know of:

- Some cancers are caused (or enabled, or activated, or something) by viruses, and there's been immense progress in tailoring vaccines for specific viruses.

- Some cancers seem to be susceptible to targeted immune response (tailored antibodies). Vaccines for these cancers enable one's body to reduce or eliminate the spread of the cancer.

Dagon · 1d · 30

Note that everything is relative and marginal ("compared to what, for what increment?").  I don't think "favor" is the right word for surplus from trade, since it goes in both directions and is unmeasurable.  If you buy a car for $66k and the dealer makes $11k profit, they also bear effort and employment costs, so that's not net.  And you're getting more than $66k of value from owning the car (or you wouldn't have bought it - you're not intending to do a favor, just making a trade that benefits you and happens to benefit them).  So they're doing you a favor as much as you're doing them one.

Which is to say that the "favor" framing isn't very helpful, except in motivational terms - you may purposefully take a worse trade than you otherwise could, in order to benefit some specific person (or even a group, if you're unusually altruistic).  But most economic analysis assumes this is a very small part of trade and work choices.

The key insight for figuring out work and purchase decisions is that most things have different values to different people.  A given hour of effort in an endeavor you're relatively skilled at ("work") is worth some amount to you, and some larger amount to an employer, and your pay for that hour will fall between those values.  For reasons of simplification, measurement difficulty, and preference for stability, it's usually traded in bundles - an agreement to work 40+ hours per week for multiple weeks.  That doesn't change the underlying difference in valuation as the main transactional motivation.
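That valuation gap can be sketched with made-up numbers (all figures here are hypothetical, chosen only to show where the wage can land):

```python
# Illustrative surplus split in a labor trade (all numbers are invented).
# An hour of work is worth more to the employer than to the worker,
# so any wage strictly between the two valuations leaves both sides better off.
worker_value_per_hour = 20.0    # what the hour is worth to the worker (reservation wage)
employer_value_per_hour = 50.0  # what the hour produces for the employer
wage = 35.0                     # negotiated somewhere in between

worker_surplus = wage - worker_value_per_hour      # gain to the worker
employer_surplus = employer_value_per_hour - wage  # gain to the employer

assert worker_value_per_hour < wage < employer_value_per_hour
assert worker_surplus > 0 and employer_surplus > 0  # both parties benefit
```

Any wage in the open interval between the two valuations produces the same qualitative result; where it lands in that interval is what negotiation is about.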

Dagon · 1d · 20

You probably need to be a bit more explicit in tying your title to your text.  I'd guess you're just pointing out that these labels ("materialist" and "idealist") are both ridiculous when taken to the extreme, and that all sane people use different models for different decisions.  Oh, and that all cognition is about models and abstractions, which are always wrong (but often useful).

If I'm wrong in that, please use more words :)

As to your questions about the moon, I don't think "observable" has ever meant only and exactly "directly viewable by the person doing the writing".  It means "inferable from observations and experiences that are causally linked in simple/justifiable ways".

Dagon · 1d · 20

> It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern.

Right.  That's why CDT is broken.  I suspect from the "disagree" score that people didn't realize that I do, in fact, assert that causality is upstream of agent decisions (including Omega, for that matter) and that "free will" is an illusion.

Dagon · 2d · 40

For me, these topics seem extremely contextual and variable with the situation and specifics of the tradeoff in the moment.  For many of them, I do somewhat frequently explore consciously what it might feel like (and for cheap ones, try out) to make a different tradeoff, but those experiments don't generalize well.

I suspect that for the impactful ones (heavily repeated or large), your first two bullet points don't apply - feedback is delayed from the decision, and if the decision is harmful, the harm will be significant.

Still, it's VERY GOOD to be reminded that these decisions are mostly made by System-1 thinking, out of habit or instinct (aka deep/early learning), and deserve reconsideration from time to time.

Answer by Dagon · Apr 19, 2024 · 20

If you're giving one number, that IS your all-inclusive probability.  You can't predict the direction in which new evidence will change your probability (per https://www.lesswrong.com/tag/conservation-of-expected-evidence), but you CAN predict that evidence will arrive, with the possible updates balancing out to zero in expectation.

An example is if you're flipping a coin twice.  Before any flips, you give 0.25 to each of HH, HT, TH, and TT.  But you strongly expect to get evidence (observing the flips) that will first change two of them to 0.5 and two to 0, then another update which will change one of the 0.5 to 1 and the other to 0.  
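The coin example can be checked numerically; this is just the arithmetic from the paragraph above, restated as a conservation-of-expected-evidence check:

```python
# Before any flips, p(HH) = 0.25. You expect the first flip to move that
# number, but the probability-weighted average of the possible posteriors
# is still exactly the prior.
prior_hh = 0.25

# First flip: heads (prob 0.5) -> p(HH) becomes 0.5; tails -> p(HH) becomes 0.0
expected_posterior_after_first = 0.5 * 0.5 + 0.5 * 0.0
assert expected_posterior_after_first == prior_hh

# Second flip, given the first was heads: heads -> p(HH) = 1.0; tails -> 0.0
expected_posterior_after_second = 0.5 * 1.0 + 0.5 * 0.0
assert expected_posterior_after_second == 0.5  # equals the intermediate posterior
```

You know with certainty that your estimate will move, but the expected movement is zero at every step.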

Likewise with p(doom) before 2035 - you strongly believe your probability will be 1 or 0 in 2036.  You currently believe 6%.  You may be able to identify intermediate updates, and the probability-weighted sum of those updates must be 0 now, but each will become definite once the evidence is obtained.

I don't know any shorthand for that - it's implied by the probability given.  If you want to specify your distribution of probable future probability assignments, you can certainly do so, as long as the mean remains 6%.  "There's a 25% chance I'll update to 15% and a 75% chance of updating to 3% over the next 5 years" is a consistent prediction.
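As a quick sanity check of that last example (using the 25%/75% split given above):

```python
# The probability-weighted average of the possible future estimates must
# equal the current 6% for the stated distribution to be consistent.
scenarios = [(0.25, 0.15), (0.75, 0.03)]  # (chance of that update, resulting p(doom))
mean = sum(p * v for p, v in scenarios)
assert abs(mean - 0.06) < 1e-12  # 0.25*0.15 + 0.75*0.03 = 0.0375 + 0.0225 = 0.06
```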

Answer by Dagon · Apr 19, 2024 · 30

Yes!  No!  What does "richer" actually mean to you?  For that matter, what does "we" mean to you?  The existing set of humans changes hour to hour as people are born, come of age, and die, and even within a fixed set there's extremely wide variance in what people have and in what's considered rich.

To the extent that GDP is your measure of a nation's richness, it's tautological that increasing GDP makes the nation richer.  The weaker claim - that it often correlates (not necessarily causally) with well-being, in some averages and aggregates - is more defensible, but that makes it unsuitable for answering your question.

I think my intuition is that GDP is the wrong tool for measuring how "rich" or "overall satisfied" people are, and a simple sum or average is probably the wrong aggregation function.  So I fall back on more personal and individual measures of "well-being".  For most people I know, and as far as I can tell for the majority of neurotypical people, this is about lack of worry for the near- and medium-term future, access to pleasurable experiences, and social acceptance among accessible sub-groups (family, friends, neighbors, online communities small enough to care about, etc.).

For that kind of "general current human wants", a usable and cheap shared-but-excludable VR space seems to improve things for a lot of people, regardless of what happens to GDP.  In fact, if consumption of difficult-to-manufacture-and-deliver luxuries gets partially replaced by consumption of patterns of bits, that likely reduces GDP while increasing satisfaction.

There will always be needs for non-virtual goods and experiences - it's not currently possible to virtualize food's nutrition OR pleasure, and this is true for many things.  Which means a mixed economy for a long long time.  I don't think anyone can tell you whether this makes those things cheaper or more expensive, relative to an hour spent working online or in the real world.

Dagon · 2d · 20

Thanks for the conversation and exploration!  I have to admit that this doesn't match my observations and understanding of power and negotiation in the human agents I've been able to study, and I can't see why one would expect non-humans, even (perhaps especially) rational ones, to commit to alliances in this manner.

I can't tell if you're describing what you hope will happen, what you think automatically happens, or what you want readers to strive for, but I'm not convinced.  This will likely be my last comment for a while - feel free to rebut or respond; I'll read it and consider it, but likely not post.

Dagon · 3d · 40

These are probably useful categories in many cases, but I really don't like the labels.  "Garbage" is mildly annoying, as it implies that there's no useful signal, not just difficult-to-identify signal.  It's also putting the attribute on the wrong thing - it's not garbage data; it's data that's useful for purposes other than the one at hand.  "Verbose", "unfiltered", or just "irrelevant" data might be better.

"Blessed" and "cursed" are much worse as descriptors.  In most cases there's nobody doing the blessing or cursing, and they focus the mind on the perception/sanctity of the data rather than the use of it.  "How do I bless this data" is a question that shows a misunderstanding of what is needed.  I'd call these "useful" or "relevant" data, and "misleading" or "wrongly-applied" data.

To repeat, though, the categories are useful - actively thinking about what you know, and what you could know, about data in a dataset, and how you could extract value for understanding the system, is a VERY important skill and habit.
