Dagon

Just this guy, you know?

Comments
[Thought Experiment] If Human Extinction "Improves the World," Should We Oppose It? Species Bias and the Utilitarian Challenge
Dagon · 1d · +2/0

I don’t think EA is a trademarked or protected term (I could be wrong).  I’m definitely the wrong person to decide what qualifies.

For myself, I do give a lot of support to local (city and state, mostly), short-term (less than a decade, say) causes.  It’s entirely up to each of us how to split our efforts among all the parts of our future lightcone we try to improve.

Dollars in political giving are less fungible than you might think
Dagon · 2d · +2/-1

"…dollar in my DAF is approximately as good as a dollar in my normal bank account for making the world a better place."

Well, one of them reduced your tax burden when you deposited it.  The comparison should be $DAF ~= $(cash - taxes).
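A quick worked version of that comparison (the symbols and the 30% marginal rate are purely illustrative, not from the comment): at marginal tax rate $t$, a deductible deposit of $D$ into the DAF only cost $(1-t)D$ of after-tax cash.

$$\text{cost of } \$D_{\text{DAF}} = (1 - t)\,D; \qquad t = 0.30 \;\Rightarrow\; \$1{,}000_{\text{DAF}} \text{ cost } \$700 \text{ of spendable cash}$$

So a DAF dollar should be weighed against roughly seventy cents of ordinary money, not a full dollar.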

Also, there's some controversy about whether political spending is actually altruistic.  I tend to lean toward being restrictive in my giving - not even most registered charitable organizations make my cut for making the world better, and almost no political causes.  

I will not sign up for cryonics
Dagon · 2d · +4/+2

All-or-nothing, black-or-white thinking does not serve well for most decisions.  Integrating value per unit of time is a much better expected-value methodology.

"What difference does it make whether I die in 60 years or in 10,000? In the end, I’ll still be dead."

What difference does it make whether you die this afternoon or in 60 years?  If life has value to you, then longer life (at an acceptable quality level) is more valuable.
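A rough formalization of the integral view (notation mine: $v(t)$ is quality-of-life value per unit time, $v_{\min}$ the acceptable floor):

$$V(T) = \int_0^T v(t)\,dt, \qquad v(t) \ge v_{\min} > 0$$

On this accounting, extending $T$ from 60 years to 10,000 strictly increases total value; "dead either way at the end" only matters if you value the endpoint instead of the integral.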

[Thought Experiment] If Human Extinction "Improves the World," Should We Oppose It? Species Bias and the Utilitarian Challenge
Dagon · 6d · +2/+2

[Note: I'm not a utilitarian, but I strive to be effective, and I'm somewhat altruistic.  I don't speak for any movement or group.]

"Effective Altruism strives for the maximization of objective value, not emotional sentiment."

What?  There's no such thing as objective value.  EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I'd argue CANNOT be) objectively chosen.

Shortform
Dagon · 6d · +2/0

I don't see much disagreement.  My comment was intended to generalize, not to contradict.   Other comments seem like refinements or clarifications, rather than a rejection of the underlying thesis.

One could quibble about categorization of people into "bad" and "nice", but anything more specific gets a lot less punchy. 

Shortform
Dagon · 7d · +1/-5

Put another way: everyone underestimates variance.  

How To Vastly Increase Your Charitable Impact
Dagon · 9d · +7/+2

Implicit in this argument is that the path of human culture, and the long-term impact of your philanthropy, are sub-exponential.  Why would that be so?  If there's no way to donate NOW to things that will bloom and increase in impact over time, why would you expect that to be different in 50 years?  If you prioritize much larger impact when you're dead over immediate impact during your life, you should find causes that match that profile.
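To make that implicit assumption concrete (a sketch with invented symbols, not from the post): let invested money grow at rate $r$, and let the impact of money donated today compound at rate $g$.

$$\text{give in } T \text{ years: } (1+r)^T \qquad \text{vs.} \qquad \text{give now: } (1+g)^T$$

Waiting only increases impact if $r > g$, i.e. if deployed philanthropy compounds more slowly than investments; the comment is questioning exactly that premise.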

Shortform
Dagon · 15d · +2/0

"people who would reflectively endorse giving control over the future to AI systems right now". 

"right now" is doing a lot of work there.  I don't believe there are any (ok, modulo the lizardman constant) who reflectively want any likely singularity-level change right now.  I do believe there is a very wide range of believable far-mode preferences and acceptances of the trajectory of intelligent life beyond the immediate.

How we'll make all world leaders work together to make the world better (Expert-approved idea)
Dagon · 17d · +3/0

Neither upvoted nor downvoted - I'm happy that you're thinking about these topics, but I don't think this goes deep enough to be useful.  It misuses word definitions to try to prove things that aren't true.

"All world leaders want to do good things. Their values is to do the most good. They just disagree on what the most good is!"

Nope.  All humans (including leaders) want many conflicting things, which they then try to justify as "good".  The label "good" is following, not leading, their desires.

  • All wars are because one side thinks X is good, another side thinks X is bad, and both sides are willing to fight for what they believe in to stop X or to keep X.

Perhaps, but see above.  "good" is poorly-defined and "good for me and my people" is not even theoretically compatible among different entities.

  • All cooperation is because two sides think X is good/moral, so they work together to get X! 
    Otherwise, one side wouldn’t want X, and they wouldn’t both work to get it.

Not at all.  A LOT of cooperation and trade happens because both sides think they're better off, without any agreement that either result is "good".  Or maybe it's true, but only if you define "good" as "what each trader in an agreement wants".

  • And all bad decisions are because someone’s goals/values led them to think “I should do this bad thing.”

I can't tell if you're saying "all decisions are because someone's goals/values led them to think 'I should do this thing'", or if you're saying that only decisions to pursue bad things (bad to someone) are in this category.  This is either incorrect or tautological.

Linch's Shortform
Dagon · 18d · +4/+2

There's also a saying: "don't try to teach a pig to sing - it wastes your time and annoys the pig".  It seems like you could investigate the porcine valence correlation using similar methods.

Posts

Dagon's Shortform · 2 karma · 6y · 92 comments
Moral realism - basic Q · Question · 8 karma · 3mo · 12 comments
What epsilon do you subtract from "certainty" in your own probability estimates? · Question · 13 karma · 11mo · 6 comments
Should LW suggest standard metaprompts? · Question · 3 karma · 1y · 6 comments
What causes a decision theory to be used? · Question · 8 karma · 2y · 2 comments
Adversarial (SEO) GPT training data? · Question · 2 karma · 3y · 0 comments
{M|Im|Am}oral Mazes - any large-scale counterexamples? · Question · 23 karma · 3y · 4 comments
Does a LLM have a utility function? · Question · 17 karma · 3y · 11 comments
Is there a worked example of Georgian taxes? · Question · 8 karma · 3y · 12 comments
Believable near-term AI disaster · 8 karma · 4y · 3 comments
Laurie Anderson talks · 1 karma · 4y · 0 comments