All of Adam Zerner's Comments + Replies

I'm remembering the following excerpt from The Scout Mindset. I think it's similar to what I say above.

My path to this book began in 2009, after I quit graduate school and threw myself into a passion project that became a new career: helping people reason out tough questions in their personal and professional lives. At first I imagined that this would involve teaching people about things like probability, logic, and cognitive biases, and showing them how those subjects applied to everyday life. But after several years of running workshops, reading studies,

... (read more)

Yeah. This matches my (limited) experience chatting with investors. They're a lot less smart than I was anticipating.

I'm reminded of something I recall Paul Graham saying (and I think others have said the same thing): you can think of investors as an iceberg. The tip above the water is the investors who provide a real value-add with their wisdom and guidance in addition to their money, while the bulk of the iceberg underwater is the investors you should treat as providing no value-add beyond the money itself.

Surprise 1: People are profoundly non-numerate. 

I wonder whether Humans are not automatically strategic is the deeper issue here.

It's one thing if you intend to be strategic about things and fail to do so in part due to lack of numeracy. It's another if you aren't even trying to be strategic in the first place. I suspect that a large majority of the time the issue is not being strategic.

Furthermore, I suspect that most people aren't strategic because they find being strategic distasteful in some way. I've experienced this a lot in my life.

  • I'll want to
... (read more)
2Raemon4d
I totally agree that's a (the?) root level cause for most people. My guess (although I'm not sure) is that in Critch's case working with CFAR participants he was filtering on people who were at least moderately strategic, and it turned out there was a third blocker.
4Adam Zerner4d
I'm remembering the following excerpt from The Scout Mindset. I think it's similar to what I say above.

If your personality type is "writing doesn't work for me", one of your biggest bottlenecks is to make writing work for you.

Thanks for the reminder here. I've thought a lot in the past about the value of writing, but for whatever reason I feel like I've drifted away from it. I think I should spend more time writing, and I'm feeling motivated to start doing so now.

Just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity, so you can look at a civilization and determine what shape its institutions will one day take by assuming people will obey incentives. 

- Scott Alexander, Meditations on Moloch

I feel like a better way to approach this would be to stand on the shoulders of others and search around for product recommendations. E.g. this one from America's Test Kitchen.

I am now incrementally more powerful at grocery shopping.

I apologize if this ruins any subtlety you were going for, but I'm thinking mostly about how these learnings can be applied more generally.

You kinda did what I think most people would do. The product is in bottles. There's no obvious way to tell how good the product is. So you use price point as a heuristic, and call it a day. But i... (read more)

1jenn24d
your points about taking the time to think through problems and how you can do this across many contexts is definitely what i was going for subtextually. so, thanks for ruining all of my delicate subtlety, adam :p standing on others' shoulders is definitely a reasonable play as well, although this is not something that works great for me as a Canadian - international shipping is expensive and domestic supply of any recommended product isn't guaranteed.

I see, that all makes a lot of sense. I take back my objection then. It seems at least plausible that Burns is correct here.

I am extremely skeptical for reasons described in Book Review: All Therapy Books.

All therapy books start with a claim that their form of therapy will change everything. Previous forms of therapy have required years or even decades to produce ambiguous results. Our form of therapy can produce total transformation in five to ten sessions! Previous forms of therapy have only helped ameliorate the stress of symptoms. Our form of therapy destroys symptoms at the root!

...

Previous forms of therapy have failed because they were ungrounded. They were ridiculous men

... (read more)
6ChristianKl25d
The general idea here is that the "form of therapy" isn't what's important but rather the skill of the therapist. David Burns claims that out of 50,000 people trained in his form of therapy, around 0.2% have the skills to achieve these kinds of results. If Scott's ten colleagues were randomly picked out of those 50,000 people, it would not be surprising if none of them were at that high end of the skill level. Then there's the other argument about deliberate practice. One main feature of David Burns's form of therapy is that it sees therapists engaging in deliberate practice as an important aspect of becoming a good therapist. Most schools of therapy don't really go for deliberate practice. I think it's plausible that the rate of people with high skill in a school of therapy that engages in deliberate practice is higher than elsewhere.

I don't like that when you disagree with someone, as in hitting the "x" for the agree/disagree voting, the "x" appears red. It makes me feel on some level like I am saying that the comment is bad when I merely intend to disagree with it.

Disagreed. I think the thing is that a lot of these things aren't actually expensive and are instead just associated with being rich. For example, I think many people could come up with $210 to pay for a professional organizer.

1Shankar Sivarajan1mo
It's not just a question of "can I buy this and still make rent and keep the lights on?" If one is making luxury purchases like that, he has bigger problems than a "little voice." That you assess these purchases (typically associated with "rich people") to provide more value to you (more comfortable sleep, the comfort of having an uncluttered apartment, whatever you imagine therapy gets you) than the money they cost or the time it'd take to do it yourself, that's usually what being rich means.  I question the premise of trying to justify them any more than you'd justify, say, buying the nicer brand of bread at the grocery. (Unless you've criticized people richer than you for spending on things you'd consider extravagant, in which case you're rightly feeling guilty of hypocrisy.) If your point is just that "some expensive things aren't just status symbols, and worth the price even to the merely middle class," fine, I agree with that in general, though perhaps not in the specifics.

I see from gears' reaction that they endorse Gerald's position here.

My feeling is that any software product I'm paid to work on will probably have a negligible impact on the world and a pretty nice benefit to me, and so it makes sense for me to pursue such work.

That said, I think there are some projects that have non-negligible negative impacts on the world, especially those that push AI forward. Other times the actual impact is negligible but is still just icky. I kinda take these things on a case-by-case basis and try to weigh the pros and cons.

2Gerald Monroe1mo
Burnout isn't uncommon. I have a coworker who quit and came back to the industry after taking a bit over a year off. What gears is saying sounds like what someone burned out might say. With that said, I kinda also feel like anyone who isn't one of the fewer than 5,000 SWEs allowed to work at elite AI labs may not matter at all.

Yeah, more visible things like suits and cars are more status-signaling than something like hiring a house cleaner. The latter could be status-signaling if it comes up in conversation, but I think that's a little limited, and so I don't think they were great examples for me to focus on in the post. They're just what came to my mind.

I don't think I understand. Would you mind elaborating or rephrasing?

2Gerald Monroe1mo
I believe gears is saying he feels any software product he contributes to has negative value to the world as a whole, at least with respect to gears's marginal contributions. So gears to ascension has quit and intends to return to the job market in 2 years if transformative AI has not arrived.

That makes sense as a consideration for some people. I suspect that it's usually a pretty small one though.

but I downvoted for the "affiliate link" rickroll. I was genuinely curious, and if it was a real product that seemed good enough to buy I figured I'd make sure they'd tracked my click to get you a payout as a favor to show appreciation for the recommendation. Is that really the kind of behavior you'd like to punish for a laugh?

I appreciate you being upfront about this. Maybe you missed it, but I included an actual link earlier on at "A nearly $3,000 mattress cover (cools/heats bed)".

I don't have an issue with affiliate links in general for the reasons you ... (read more)

Maybe this is an example.

I'm listening to Eric Normand's reading of Out of the Tar Pit. The paper Out of the Tar Pit kinda feels like it is saying, "complexity is the enemy in software projects, and here is the best way to tame it".

When I squint, I don't see software development. I see a field of engineering. A very complicated one. One that has been around for maybe 50 years. And I see someone making a claim about the best way to succeed in the field.

Looking through this lens, I feel a large amount of skepticism.

Squinting

“You should have deduced it yourself, Mr Potter,” Professor Quirrell said mildly. “You must learn to blur your vision until you can see the forest obscured by the trees. Anyone who heard the stories about you, and who did not know that you were the mysterious Boy-Who-Lived, could easily deduce your ownership of an invisibility cloak. Step back from these events, blur away their details, and what do we observe? There was a great rivalry between students, and their competition ended in a perfect tie. That sort of thing only happens in stories, Mr Potter,

... (read more)

Interesting discussion, and I'm somewhat disappointed but also somewhat relieved that you didn't discover any actual disagreement or crux, just explored some details and noted that there's far more similarity in practice than differences.

I feel very similarly actually. At first when I heard how Gordon is a big practitioner of virtue ethics it seemed likely that we'd (easily?) find some cruxes, which is something I had been wanting to do for some time.

But then when we realized how non-naive versions of these different approaches seem to mostly converge on one another, I dunno, that's kinda nice too. It kinda simplifies discussions. And makes it easier for people to work together.

As a programmer, compared to other programmers, I am extremely uninterested in improving the speed of web apps I work on. I find that (according to my judgement) it rarely has more than a trivial impact on user experience. On the other hand, I am usually way more interested than others are in things like improving code quality.

I wonder if this has to do with me being very philosophically aligned with Bayesianism. Bayesianism preaches updating your beliefs incrementally, whereas the alternative is a lot more binary. For example, the way scientific experiments ... (read more)
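The contrast can be sketched with a toy calculation (the probabilities and the threshold here are my own illustrative assumptions, not anything from the thread):

```python
# Toy contrast: incremental Bayesian updating vs. a binary accept/reject rule.
# All numbers are illustrative assumptions.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability after one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# The Bayesian view: each piece of weak evidence nudges the belief a little.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.6, 0.4)  # three mildly supportive observations

# The binary view: a single threshold flips the verdict all at once,
# discarding how much the evidence actually moved the belief.
verdict = "accept" if belief > 0.95 else "reject"

print(round(belief, 3), verdict)  # belief has risen to ~0.771, yet the binary rule says "reject"
```

The point of the sketch: the incremental view tracks that three weak observations meaningfully raised the belief, while the threshold rule throws that gradation away.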

1paragonal1mo
Ritualistic hypothesis testing with significance thresholds is mostly used in the social sciences, psychology and medicine, and not so much in the hard sciences (although arbitrary thresholds like 5 sigma are used in physics to claim the discovery of new elementary particles, they rarely show up in physics papers). Since it requires deliberate effort to get into the mindset of the null ritual, I don't think that technical and scientific-minded people just start thinking like this. I think the simple explanation, that the effect of improving code quality is harder to measure and communicate to management, is sufficient to explain your observations. To get evidence one way or another, we could also look at what people do when the incentives are changed. I think that few people are more likely to make small performance improvements than improve code quality in personal projects.
2Viliam1mo
Speed improvements are legible (measurable), although most people are probably not measuring them. Sometimes that's okay; if the app is visibly faster, I do not need to know the exact number of milliseconds. But sometimes it's just a good feeling that I "did some optimization", ignoring the fact that maybe I just improved from 500 to 470 milliseconds some routine that is only called once per day. (Or maybe I didn't improve it at all, because the compiler was already doing the thing automatically.) Code quality is... well, from the perspective of a non-programmer (such as a manager) probably an imaginary thing that costs real money. But here, too, are diminishing returns. Changing spaghetti code to a nice architecture can dramatically reduce future development time. But if a function is thoroughly tested and it is unlikely to be changed in the future (or is likely to be replaced by something else), bringing it to perfection is probably a waste of time. Also, after you fixed the obvious code smell, you move to more controversial decisions. (Is it better to use a highly abstract design pattern, or keep the things simple albeit a little repetitive?) I'd say, if the customer complains, increase the speed; if the programmers complain, refactor the code. (Though there is an obvious bias here: you are the programmer, and in many companies you won't even meet the customer.)

I'm not sure about you, but I am pretty much already maxed out on the amount of programming I can usefully do per day. It is already rather less than my nominal working hours.

Nah, for me I don't feel anywhere close to maxed out. I feel like I could do 12-14 hours a day, though granted, I have a ton of mental energy. I wouldn't expect most people to be like that.

I do agree that a lot more flexibility in working arrangements would be a good thing, but it seems difficult to arrange such a society in (let's say) the presence of misaligned agents and other detriments

... (read more)

Why not more specialization and trade?

I can probably make something like $100/hr doing freelance work as a programmer. Yet I'll spend an hour cooking dinner for myself.

Does this make any sense? Imagine if I spent that hour programming instead. I'd have $100. I can spend, say, $20 on dinner, end up with something that is probably much better than what I would cook, and have $80 left over. Isn't that a better use of my time than cooking?
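The arithmetic above, as a minimal sketch (using the post's own illustrative figures):

```python
# Illustrative opportunity-cost comparison using the figures from the paragraph above.
hourly_rate = 100   # freelance programming, $/hr
dinner_cost = 20    # buying dinner instead of cooking, $

# Option A: spend the hour cooking. You get dinner and $0 net cash.
cook_net = 0

# Option B: spend the hour freelancing and buy dinner.
# You get (probably better) dinner plus the leftover cash.
outsource_net = hourly_rate - dinner_cost

print(outsource_net - cook_net)  # outsourcing comes out $80 ahead in cash terms
```

In cash terms, option B dominates; the open question the post is raising is why we so often pick option A anyway.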

Similarly, sometimes I'll spend an hour cleaning my apartment. I could instead spend that hour making $100, and paying some... (read more)

4JBlack1mo
I'm not sure about you, but I am pretty much already maxed out on the amount of programming I can usefully do per day. It is already rather less than my nominal working hours. I do agree that a lot more flexibility in working arrangements would be a good thing, but it seems difficult to arrange such a society in (let's say) the presence of misaligned agents and other detriments to beneficial coordination.

I don't think anyone will GET strong evidence about what actions COULD be successful.

Seems plausible.

I disagree that there's a good alternative, and I'm not sure if you think people should just shut up or if there are good opinions they should weigh more heavily.  I get value from the (stupid and object-level useless) discussion and posting, in that it shows people's priors and models more clearly than otherwise.

I think that it is definitely ok for people to talk about their models and opinions. I agree that getting a sense of peoples models and prior... (read more)

The question of whether this makes NOBODY justified in having an opinion, or whether some opinion-havers (or opinions themselves) are justified, is the difficult part.  Your post seems to imply that "experts" exist, but everyone else is ignoring them.

I'm agnostic on the question of whether experts are justified in having confident opinions here. I don't know enough to be able to tell. The (attempted) focus of our conversation was on whether non-experts are justified. Relevant excerpt:

Too complex for anyone to figure out? Or just for most people to? Who

... (read more)
2Dagon1mo
Ok, I think I see.  On some level, I even agree.  I don't have high confidence in any opinion on the topic, including my own.  In a Bayesian sense, it wouldn't take much evidence to move my opinion.  But (and this is stronger than just conservation of expected evidence - it's not about direction, but about strength) I don't think anyone will GET strong evidence about what actions COULD be successful.   I agree with you that all of the casual debate (I know nothing of what military or State Department leaders are thinking; they're pretty rational in being quiet rather than public) is fairly meaningless signaling and over-fitting of topics into their preferred conflict models.  I disagree that there's a good alternative, and I'm not sure if you think people should just shut up or if there are good opinions they should weigh more heavily.  I get value from the (stupid and object-level useless) discussion and posting, in that it shows people's priors and models more clearly than otherwise.

Ah, that's a great point. I hadn't thought of that but I think it is very true, and is an important reason why it is hard to be confident.

I'm having trouble understanding what you're asking here.

I'd be interested in the justification to have strong opinions about who is allowed to have strong opinions.

(I'll interpret this as "justified in" instead of "allowed to".)

I have a hard time articulating this well. Where I'm coming from is that it just seems like a complex enough system such that it is very difficult to predict what the (not-immediate-term) consequences of a given action will be.

Why do I perceive it to have this level of complexity? This level of difficulty anticipating what the consequences will be? This is where I feel like I'm kinda bla... (read more)

2Dagon1mo
Fully agreed that it's a complex system, with both historical and current-cultural antagonism that don't seem to be really solvable in any way.  My best guess is there is no reachable acceptable (to all major participants) equilibrium.  The question of whether this makes NOBODY justified in having an opinion, or whether some opinion-havers (or opinions themselves) are justified, is the difficult part.  Your post seems to imply that "experts" exist, but everyone else is ignoring them. I'm also unsure whether "justified" matters in a lot of these cases.  Especially when it's not clear how to justify nor to whom, on topics where there is no authority to determine which opinions are correct.

The things I was trying to convey here are that when we discover a new technology, it is tempting to get excited and think about all of the cool things we could do with it. Sugar seemed like a nice example here because it is very salient and visceral that it introduces more hedons into your life.

But it's also possible that the long-term harms and n-th order effects make the new technology a (large) net-negative. Which I'd argue is the case with sugar.

Relatedly, it's possible that something that seems innocent at first -- like a little sugar to sprinkle on ... (read more)

Social media doesn't do the matchmaking stuff very much though, does it?

Many people seem to be more motivated to invest energy into pursuing romantic relationships than friendships. There are few books about making good friends and many books on dating.

Perhaps. But to the extent that people aren't motivated to invest energy into friendships, I think there is a sort of latent motivation. Friendship and conversation is in fact important, and so in taking this "live in the future" perspective, I think people will eventually realize the importance and start putting effort into it.

Omegle essentially provided an answer to that quest

... (read more)
2ChristianKl2mo
What do you think will change in the future such that people will put more effort into friendship than they do at present?

Against difficult reading

I kinda have the instinct that if I'm reading a book or a blog post or something and it's difficult, then I should buckle down, focus, and try to understand it. And that if I don't, it's a failure on my part. It's my responsibility to process and take in the material.

This is especially true for a lot of more important topics. Like, it's easy to clearly communicate what time a restaurant is open -- if you find yourself struggling to understand this, it's probably the fault of the restaurant, not you as the reader -- but for quantum ... (read more)

In How to Get Startup Ideas, Paul Graham provides the following advice:

Live in the future, then build what's missing.

Something that feels to me like it's present in the future and missing in today's world: OkCupid for friendship.

Think about it. The internet is a thing. Billions and billions of people have cheap and instant access to it. So then, logistics are rarely an obstacle for chatting with people.

The actual obstacle in today's world is matchmaking. How do you find the people to chat with? And similarly, how do you communicate that there is a strong m... (read more)

1johnvon2mo
This is actually what social media is for, but you don't have to fill out a questionnaire. You also don't have to out yourself as being so lonely and without friends that you're using a special matchmaking service to find new friends, this in itself could be unattractive to new acquaintances. 
3ChristianKl2mo
Many people seem to be more motivated to invest energy into pursuing romantic relationships than friendships. There are few books about making good friends and many books on dating. Omegle essentially provided an answer to that question that was highly used. It didn't do a lot of matchmaking but it might be a starting point. If you want to pursue this as a business, maybe buy the recently shutdown Omegle domain from Leif K-Brooks (who's a rationalist) and try to switch from chatting to random people to chatting to highly match-made connections.   
2Gunnar_Zarncke2mo
I have thought about it too, and I think something like an automated Kickstarter for interest groups is what one would need. It would work like this: You enter your interests into the system (or let it be inferred automatically from your online profiles) and the system generates recommendations for ad-hoc groups to meet in places nearby (or not so nearby if more attributes match). Bonus: Set up a ChatGPT DJ or entertainer to engage people with each other. Best if done as an open protocol where different clients can offer different interactivity or different profile extraction. I started some code for the match-making but due to many other obligations it is currently abandoned: https://github.com/GunnarZarncke/okgoto/tree/master/
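The matching step Gunnar describes could be sketched like this, assuming a simple Jaccard-overlap metric over interest sets (the profiles, names, and threshold are all hypothetical, not from his codebase):

```python
# Hypothetical sketch of interest-based group matching.
# The metric (Jaccard similarity) and all data are my own illustrative choices.

def jaccard(a: set, b: set) -> float:
    """Overlap between two interest sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b)

profiles = {
    "alice": {"chess", "hiking", "rationality"},
    "bob":   {"hiking", "rationality", "cooking"},
    "carol": {"opera", "painting"},
}

# Suggest an ad-hoc group around "alice": include everyone whose
# interests overlap hers by at least the threshold.
threshold = 0.4
group = {"alice"}
for name, interests in profiles.items():
    if name != "alice" and jaccard(profiles["alice"], interests) >= threshold:
        group.add(name)

print(sorted(group))  # alice and bob share enough interests; carol does not
```

A real system would of course need richer signals (location, availability, group size), but the core is just this kind of pairwise-similarity filter.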

I appreciate this framing a lot and I really enjoyed the post.

Thanks!

On the topic of living forever, I worry that people who aren't super smart might not be able to find nearly as much joy in random activities/concepts. If I were locked in a room with Richard Feynman I'm not sure that I would actually love physics; I might just come out very confused and a bit drained. 

I worry that my brain is simply unable to deeply understand and appreciate theoretical physics

I am of the opinion that this stuff is just generally interesting to the human mind, and th... (read more)

Why hasn't someone already done this?

I think this is a very important question to ask.

Sometimes the reasons are encouraging:

  • An inadequacy analysis reveals that no one is properly incentivized to.
  • The funding isn't there.
  • It's too schlep-y and unsexy.
  • It's too science fiction-y.
  • No one thought of it.

Other times the reasons are discouraging:

  • There are good technical reasons.
  • There are annoying roadblocks that await. Maybe legal things. Things that are actually really difficult to bypass.

These definitely aren't exhaustive lists. It's just what came to me after a fe... (read more)

2GeneSmith2mo
I think at this point we've probably spoken with about 10 people I consider to have some reasonable level of expertise in the field. And there have been a number of very high quality comments from knowledgeable people in the comments. We will continue to talk with more, and perhaps we will learn that it definitely will not work. I think even if that is the case, the polygenic editing path is still worth pursuing because it could potentially be repurposed for many other applications.

No, I'm definitely not against either of those things. I just think that it would make sense to begin by understanding what is currently known about a topic.

The post The Neglected Virtue of Scholarship elaborates on this idea. An excerpt:

The lesson I take from these and a hundred other examples is to employ the rationality virtue of scholarship. Stand on the shoulders of giants. We don't each need to cut our own path into a subject right from the point of near-total ignorance. That's silly. Just catch the bus on the road of knowledge paved by hundreds of d

... (read more)

I expected more of an update. Do you think I'm missing anything specific & significant? (Other than our likely crux about priors.)

It's hard for me to say since I don't know much about massage, but my guess is probably not too much.

Unlike the other commenters, I am not too worried about the safety risks here. People in romantic relationships massage each other all of the time without training, and that seems to be pretty safe. I suppose certain health conditions might put people at risk, but if so they've probably been informed of this by their doctor as ... (read more)

Please recommend resources. 

PainScience has lots of great content about massage. The author is a former massage therapist and has excellent epistemics IMO.

5Chipmonk2mo
Ty. rhabdomyolysis is interesting. But after poking around on that website I'm like "Yep, I only massage healthy people. Don't push anything that hurts a lot. Don't do anything obviously bad like massaging weak areas around injuries. Also the neck is sensitive (I already avoided this intuitively)".  I expected more of an update. Do you think I'm missing anything specific & significant? (Other than our likely crux about priors.)

Weird idea: a Uber Eats-like interface for EA-endorsed donations.

Imagine: You open the app. It looks just like Uber Eats. Except instead of seeing the option to spend $12 on a hamburger, you see the option to spend $12 to provide malaria medicine to a sick child.

I don't know if this is a good idea or not. I think evaluating the consequences of this sort of stuff is complicated. Like, maybe it ends up being a PR problem or something, which hurts EA as a movement, which has large negative consequences.

2Bohaska2mo
Would more people donate to charity if they could do so in one click? Maybe...

I downvoted this post because I think it is low quality. For various reasons. A big one is that the author hasn't attempted to stand on the shoulders of others by researching what is currently known about massage, and so the material in the post I see as quite unreliable.

But I wish there were a way to upvote this topic. Massage has always seemed like a really great thing for people to do for each other. It feels great. It plausibly helps with things like stress, anxiety, and pain. It probably doesn't take too long to learn. Then once you do learn it, you can spend a half hour a day or whatever massaging your romantic partner (or whoever else) and they can do the same in return. Seems pretty win-win.

3Chipmonk2mo
this sounds like you're against first-principles reasoning and (re)discovery, which seems odd to me on LW

Practicing without a license is a crime in many jurisdictions.

Good call out. I think this is worth keeping in mind. Although it sounds like the author isn't practicing or proposing anything that is for-profit, in which case I doubt there are any legal things to worry about.

It's a field susceptible to pseudoscience, so an understanding of anatomy and the medicinal risks is important to make sure you don't screw anything up.

I disagree here. People in romantic relationships massage each other all the time and that isn't considered risky.

I think the pseudoscie... (read more)

My own thoughts and reactions are somewhat illegible to me, so I'm not certain this is my true objection.

That makes sense. I feel like that happens to me sometimes as well.

But I think our disagreement is what I mentioned above: Utility functions and cost-benefit calculations are tools for decisions and predictions, where "altruism" and moral judgements are orthogonal and not really measurable using the same tools.

I see. That sounds correct. (And also probably isn't worth diving into here.)

I do agree with your (and Tim Urban's) observation that "emotional d

... (read more)

I think we're disagreeing over the concepts and models that have words like "altruism" as handles, rather than over the words themselves.

Gotcha. If so, I'm not seeing it. Do you have any thoughts on where specifically we disagree?

4Dagon2mo
My own thoughts and reactions are somewhat illegible to me, so I'm not certain this is my true objection.  But I think our disagreement is what I mentioned above: Utility functions and cost-benefit calculations are tools for decisions and predictions, where "altruism" and moral judgements are orthogonal and not really measurable using the same tools.   I do consider myself somewhat altruistic, in that I'll sacrifice a bit of my own comfort to (I hope and imagine) help near and distant strangers.  And I want to encourage others to be that way as well.  I don't think framing it as "because my utility function includes terms for strangers" is more helpful nor more true than "because virtuous people help  strangers".  And in the back of my mind I suspect there's a fair bit of self-deception in that I mostly prefer it because that belief-agreement (or at least apparent agreement) makes my life easier and maintains my status in my main communities. I do agree with your (and Tim Urban's) observation that "emotional distance" is a thing, and it varies in import among people.  I've often modeled it (for myself) as an inverse-square relationship about how much emotional investment I have based on informational distance (how often I interact with them), but that's not quite right.  I don't agree with using this observation to measure altruism or moral judgement.

I disagree that this conception of moral weights is directly related to "how altruistic" someone is.  And perhaps even that "altruistic" is sufficiently small-dimensioned to be meaningfully compared across humans or actions.

Hm. I'm trying to think about what this means exactly and where our cruxes are, but I'm not sure about either. Let me give it an initial stab.

This feels like an argument over definitions. I'm saying "here is way of defining altruism that seems promising". You're saying "I don't think that's a good way to define altruism". Furthermo... (read more)

2Dagon2mo
I think we're disagreeing over the concepts and models that have words like "altruism" as handles, rather than over the words themselves. But they're obviously related, so maybe it's "just" the words, and we have no way to identify whether they're similar concepts. There's something about the cluster of concepts that the label "altruistic" invokes that gives me (and I believe others, since it was chosen as a big part of a large movement) some warm fuzzies - I'm not sure if I'm trying to analyze those fuzzies or the concepts themselves.

I (weakly) don't think the utility model works very well in conjunction with altruism-is-good / (some) warm-fuzzies-are-suspect models. They're talking about different levels of abstraction in human motivation. Neither are true, both are useful, but for different things.

I dunno. I feel a little uncertain here about whether "altruistic" is the right word. Good point.

My current thinking: there are two distinct concepts.

  1. The moral weights you assign. What your "moral mountain" looks like.
  2. The actions you take. Do you donate 10% of your income? Volunteer at homeless shelters?

To illustrate the difference, consider someone who has a crowded moral mountain, but doesn't actually take altruistic actions. At first glance this might seem inconsistent or contradictory. But I think there could be various legitimate reasons for this.

  • One
... (read more)

Particularly missing is types of caring, and scalability of assistance provided.

I'm not sure what you mean. Would you mind elaborating?

I simply can't help very many people move, and there's almost zero individuals who I'd give cash to in amounts that I donate to larger orgs.

I'm not understanding what the implications of these things are.

Also, and incredibly important, is the idea of reciprocity - a lot of support for known, close individuals has actual return support (not legibly accountable, but still very real).

I think that would be factored in to expect... (read more)

2Dagon2mo
I think I'm mostly responding to "I disagree that this conception of moral weights is directly related to 'how altruistic' someone is. And perhaps even that 'altruistic' is sufficiently small-dimensioned to be meaningfully compared across humans or actions."

Sure, but so does everything else (including your expectations of stranger-wellness improvement).

Suppose you identify a single crux A. Now you need to convince them of A. But convincing them of A requires you to convince them of A.1, A.2, and A.3.

Ok, no problem. You get started trying to convince them of A.1. But then you realize that in order to convince them of A.1, you need to first convince them of A.1.1, A.1.2, and A.1.3.

I think this sort of thing is often the case, and is how large inferential distances are "shaped".
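The "shape" described above is a tree of cruxes, and it suggests why large inferential distances feel so daunting: the number of claims you'd need to argue for grows exponentially with depth. A toy sketch, assuming (purely for illustration) that every crux branches into the same fixed number of sub-cruxes:

```python
def total_cruxes(branching: int, depth: int) -> int:
    """Count every sub-crux below the root claim A in the tree
    described above: branching^1 + branching^2 + ... + branching^depth.
    The branching factor and depth are made-up modeling assumptions."""
    return sum(branching ** level for level in range(1, depth + 1))

# With three sub-cruxes per crux (A.1, A.2, A.3 and so on):
total_cruxes(3, 1)  # → 3
total_cruxes(3, 2)  # → 12
total_cruxes(3, 4)  # → 120
```

Even a modest branching factor makes the argument burden blow up after only a few levels, which matches the experience of discovering A.1.1, A.1.2, and A.1.3 lurking beneath A.1.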

Noooooooo! I mean this in a friendly sort of sense. Not that I'm mad or indignant or anything. Just that I'm sad to see this and suspect that it is a move in the wrong direction.

This relates to something I've been wanting to write about for a while and just never really got around to it. Now's as good a time as any to at least get started. I started a very preliminary shortform post on it here a while ago.

Basically, think about the progression of an idea. Let's use academia as an initial example.

  • At some point in the timeline, an idea is deemed good enough
... (read more)

A line of thought that I want to explore: a lot of times when people appear to be closed-minded, they aren't actually being (too) closed-minded. This line of thought is very preliminary and unrefined.

It's related to Aumann's Agreement Theorem. If you happen to have two perfectly Bayesian agents who are able to share information, then yes, they will end up agreeing. In practice people aren't 1) perfectly Bayesian or 2) able to share all of their information. I think (2) is a huge problem, and a huge reason why it's hard to convince people of things.

Well, I guess w... (read more)

1papetoast2mo
Modelling humans as Bayesian agents seems wrong. For humans, I think the problem usually isn't the number of arguments / number of angles you attacked the problem from, but whether you have hit on the few significant cruxes of that person. This is especially because humans are quite far away from perfect Bayesians.

For relatively small disagreements (i.e. not at the scale of convincing a Christian that God doesn't exist), usually people just had a few wrong assumptions or cached thoughts. If you can accurately hit those cruxes, then you can convince them. It is very very hard to know which arguments can hit those cruxes though, and it is why one of the viable strategies is to keep throwing arguments until one of them works.

(Also, unlike convincing Bayesian agents, where you can argue for W->X, X->Y, Y->Z in any order, sometimes you need to argue about things in the correct order.)

Ah, yeah. I was thinking about it as like adamTrusts(person, thing) but the more general concept of just trust, I agree about it being 3-place.

I've always felt basically the same about trust. It's nice to see that I'm not the only one.

I'd add/emphasize that trust is really a 2-place word. It doesn't really make sense to say "I trust Alice". Instead, it'd make sense to say "I trust Alice to do X" or "I trust Alice's belief about X" or "I expect Alice's prediction about X to be true". Ie. instead of trust(person), it's trust(person, thing).

(And of course, as mentioned in the post, it isn't binary either. The output of the function is a probability. A number between zero and one. Not a boolean.)
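The trust(person, thing) idea above can be sketched as an actual function. This is just an illustration of the framing, not anything from the post; the people, tasks, and probabilities are all made up:

```python
# A sketch of trust as a multi-place function: trust(person, thing)
# returns a probability in [0, 1], not a boolean. All entries below
# are hypothetical examples.
TRUST_TABLE = {
    ("Alice", "fix the server"): 0.95,
    ("Alice", "predict the market"): 0.30,
    ("Bob", "fix the server"): 0.50,
}

def trust(person, thing, default=0.5):
    """Look up how much we trust `person` to do `thing`.
    Falls back to a neutral prior when we have no information."""
    return TRUST_TABLE.get((person, thing), default)
```

The point the function signature makes: asking "do you trust Alice?" is underspecified, because trust("Alice", "fix the server") and trust("Alice", "predict the market") can come apart arbitrarily far.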

2Yoav Ravid2mo
That would make it a 3-place word ("I trust" is 1-place, "I trust Alice" is 2-place, "I trust Alice to do X" is 3-place).

I feel incredibly fond of LessWrong. I've learned so much awesome stuff. And while not perfect, there's a community of people who more or less agree on and are familiar with various, er, "epistemic things", for lack of a better phrase. Like, it's nice to at least know that the person you're conversing with knows about and agrees on things like what counts as evidence and the map-territory distinction.

That said, I do share the impression that others here have expressed of it heading "downhill". Watered down. Lower standards. Less serious. Stuff like that. ... (read more)

5niplav2mo
I don't know, looking back at older posts (especially on LW 1.0) current LW is less "schizo" and more rigorous/boring—though maybe that's because I sometimes see insanely long & detailed mechinterp posts?