Dagon

Just this guy, you know?

Posts (sorted by new)

8 · Moral realism - basic Q · Q · 3mo · 12 comments
14 · What epsilon do you subtract from "certainty" in your own probability estimates? · Q · 11mo · 6 comments
3 · Should LW suggest standard metaprompts? · Q · 1y · 6 comments
8 · What causes a decision theory to be used? · Q · 2y · 2 comments
2 · Adversarial (SEO) GPT training data? · Q · 3y · 0 comments
24 · {M|Im|Am}oral Mazes - any large-scale counterexamples? · Q · 3y · 4 comments
17 · Does a LLM have a utility function? · Q · 3y · 11 comments
8 · Is there a worked example of Georgian taxes? · Q · 3y · 12 comments
9 · Believable near-term AI disaster · 4y · 3 comments
2 · Laurie Anderson talks · 4y · 0 comments
2 · Dagon's Shortform · 6y · 92 comments

Wikitag Contributions

No wikitag contributions to display.

Comments (sorted by newest)
[Thought Experiment] If Human Extinction "Improves the World," Should We Oppose It? Species Bias and the Utilitarian Challenge
Dagon · 1h · 20

[Note: I'm not a utilitarian, but I strive to be effective, and I'm somewhat altruistic. I don't speak for any movement or group.]

Effective Altruism strives for the maximization of objective value, not emotional sentiment.

What?  There's no such thing as objective value.  EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I'd argue CAN not be) objectively chosen.

Shortform
Dagon · 4h · 20

I don't see much disagreement.  My comment was intended to generalize, not to contradict.   Other comments seem like refinements or clarifications, rather than a rejection of the underlying thesis.

One could quibble about categorization of people into "bad" and "nice", but anything more specific gets a lot less punchy. 

Shortform
Dagon · 1d · 1-5

Put another way: everyone underestimates variance.  

How To Vastly Increase Your Charitable Impact
Dagon · 3d · 72

Implicit in this argument is the assumption that the path of human culture, and the long-term impact of your philanthropy, are sub-exponential.  Why would that be so?  If there's no way to donate NOW to things that will bloom and increase in impact over time, why would you expect that to be different in 50 years?  If you prioritize much larger impact when you're dead over immediate impact during your life, you should find causes that match your profile.
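
To make the implicit arithmetic concrete, here is a minimal sketch of the compounding comparison (the amounts and rates are hypothetical assumptions for illustration, not figures from the post):

```python
# Hypothetical illustration: "invest and give later" only beats "give now"
# if market returns outpace the rate at which a donation's impact compounds.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Value of `principal` after compounding at `annual_rate` for `years`."""
    return principal * (1 + annual_rate) ** years

donation = 1_000.0
years = 50
market_rate = 0.05   # assumed investment return while you wait
impact_rate = 0.07   # assumed compounding rate of a donation's downstream impact

give_later = future_value(donation, market_rate, years)  # invest now, donate in 50y
give_now = future_value(donation, impact_rate, years)    # donate now, impact compounds

print(f"give later: {give_later:,.0f} | give now: {give_now:,.0f}")
# With these assumed rates, giving now wins whenever impact_rate > market_rate.
```

The whole comparison turns on which rate is larger, which is exactly the sub-exponential assumption in question.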

undefined's Shortform
Dagon · 9d · 20

"people who would reflectively endorse giving control over the future to AI systems right now". 

"right now" is doing a lot of work there.  I don't believe there are any (ok, modulo the lizardman constant) who reflectively want any likely singularity-level change right now.  I do believe there is a very wide range of believable far-mode preferences and acceptances of the trajectory of intelligent life beyond the immediate.

How we'll make all world leaders work together to make the world better (Expert-approved idea)
Dagon · 11d · 30

Neither upvoted nor downvoted - I'm happy that you're thinking about these topics, but I don't think this goes deep enough to be useful.  It misuses word definitions to try to prove things that aren't true.

All world leaders want to do good things. Their values is to do the most good. They just disagree on what the most good is!

Nope.  All humans (including leaders) want many conflicting things, which they then try to justify as "good". The label "good" is following, not leading, their desires.

  • All wars are because one side thinks X is good, another side thinks X is bad, and both sides are willing to fight for what they believe in to stop X or to keep X.

Perhaps, but see above.  "good" is poorly defined, and "good for me and my people" is not even theoretically compatible across different entities.

  • All cooperation is because two sides think X is good/moral, so they work together to get X! 
    Otherwise, one side wouldn’t want X, and they wouldn’t both work to get it.

Not at all.  A LOT of cooperation and trade is because two sides think they're better off, without any agreement that either result is "good".  Or maybe it's true, but only if you define "good" as "what each trader in an agreement wants".

  • And all bad decisions are because someone’s goals/values led them to think “I should do this bad thing.”

I can't tell if you're saying "all decisions are because someone's goals/values led them to think 'I should do this thing'", or if you're saying that decisions to pursue bad things (to some) are in this category.  This is either incorrect or tautological.

Linch's Shortform
Dagon · 12d · 42

There's also a saying of "don't try to teach a pig to sing - it wastes your time and annoys the pig".  It seems like you could investigate the porcine valence correlation using similar methods.

Notes on the need to lose
Dagon · 12d · 20

Ah, your inner Bruce.  I do sympathize, though I'm not sure I have great advice other than self-awareness and noticing when it happens.  "Akrasia" doesn't get discussed around here as much as it used to, and the discussion was never particularly rigorous, but it may be worth looking for some older posts and sequences: https://www.lesswrong.com/w/akrasia

Notes on the need to lose
Dagon · 12d · 20

It seems easy to deal with Bruce.  Take the seat to his left.  Do your best to help him enjoy giving you +EV opportunities.

More seriously, I didn't engage deeply with the linked post, only with the text on LW.  The link had enough blanket statements and far-mode stories that I didn't think there was much information for me.  What, specifically, is your difficulty?  If it's that "many people spew money because of emotional dysregulation or bad epistemology", you mostly have to decide whether and how you can help them, and how you can catch some of the spew when you can't.  This itself is a game that you may or may not be able to win, and deciding on your goals and how to pursue them is key.

I should admit that one of the reasons I play less poker nowadays is that I find I don't enjoy the company of the people who make it profitable.  I do enjoy those who make it unprofitable, by thinking about the game and talking intelligently about life.  The goals of winning and of optimizing my social interaction are at odds, so I do something else.

It's been said and written many times: the important skill in poker is game selection - find the softest field and exploit it.  As said long, long ago on rec.gambling.poker: "To succeed in life, surround yourself with people smarter than you.  To succeed in poker, surround yourself with people dumber than you."

Notes on the need to lose
Dagon · 12d · 42

Sorry, I didn't mean to imply that it's universal in either direction.  You're absolutely right that this urge to avoid trying because it'd make failing feel a bit worse is common as well.

It does vary a LOT among groups and situations - some will be dispassionate strategists in money matters but emotionally risk-averse in (some) relationships.  Many will play a game or games enough to learn some of the lessons about meta-outcomes, and a few will apply that to other areas.

It didn't resonate with me, though I recognize some of the behaviors in others.  I also recognize just how thick the wall of my bubble is, so I really don't want to imply universality.  Still, I do recommend my approach: losing is part of playing, and shouldn't affect one's ego in either direction.  The puzzle of how to maximize the overall outcome of the sequence of games in life remains fascinating and worth pursuing.
