It sometimes happens in conversations that people talk past each other: they don't notice that they both use the word X to mean two different things, and behave as if they agree on what X is but disagree on where to draw the boundary.

From my point of view, you said some things that make it clear you mean a very different thing than I do by "illegible". A proof of a theorem can't be illegible to SOMEONE. Illegibility is a property of the explanation, not of the explanation-and-person pair. I have encountered papers and posts that were above my knowledge in math and computer science; I didn't understand them despite them being legible.


You also have a different approach to concepts in general. I don't hold a concept because it makes it easier for people to debug; I try to find concepts that reflect the territory most precisely. That is the point of concepts TO ME.

I'm not sure it's worth going all the way back, and I have no intention of going over your post and adding "to you" in all the places where it should be added, to make it clearer that goals are something people have, not a property of the territory. But if you want to do half of that work, we can continue this discussion.

This is one of the posts where I wish there were three examples of the thingy being described, because I see two options:
1. This is a weakman of the position I hold, in which I seek ways to draw a map that corresponds to the territory, have my own estimations of what works and what doesn't, and disagree with someone about that - and that someone, instead of providing evidence that their method produces good predictions or insights, just says I should have more slack.

All your descriptions of why to believe things sound anti-Bayesian. It's not a boolean believe-or-disbelieve; update yourself incrementally! (See the odds-form sketch below.) If I believe something provides zero evidence, I will not update; if the evidence is dubious, I will update only a little. And then the question is how much credence you assign to which evidence, and what methods you use to find evidence.

2. It's a different-worlds situation, where the post writer encountered a problem I didn't.

And I have no way to judge between these without at least one, and preferably more, actual examples of the interaction, ideally linked to rather than described by the author.
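To make the "update incrementally" point concrete - this is just the textbook odds form of Bayes' rule, not anything taken from the post:

$$\text{posterior odds} = \text{prior odds} \times \frac{P(E \mid H)}{P(E \mid \lnot H)}$$

If the likelihood ratio is exactly 1, the evidence is zero and the odds don't move at all; if it is close to 1, the evidence is dubious and the odds move only a little.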

A list of implicit assumptions in the post that I disagree with:

 

  • that there is a significant number of people who see advice and whose cached thought is "that can't work for me".
  • that this cached thought is a bad thing.
  • that you should try to apply every piece of advice you encounter to yourself.
  • that it's hard.
  • that the fact that it's hard is evidence that it's a good and worthy thing to do.
  • that "being a certain kind of person" is a good category to think in, or a good framing to have.

 

I also have a lot of problems with the example, which is an example of advice that most people try to follow but shouldn't; they should estimate their probability of success by looking at the research, not by thinking "you can be any kind of person" - a statement whose truth value is obviously false.

This is not how the third conversation should go, in my opinion. Instead, you should query your Inner Simulator, and then say that you expect that learning GTD will make them more anxious, or will work for two weeks and then stop so the initial time investment won't pay off, or that in the past you encountered people who tried it and it made them crush down parts of themselves, or that you expect it to work too well and lead to burnout.

It is possible to compare illegible intuitions - by checking what different predictions they produce, and by comparing possible differences in how the training data was sorted.

In my experience, different illegible intuitions come from people seeing different parts of the whole picture, and it's valuable to try to understand that better. Also, making predictions, describing the differences between the world where you're right and the world where you're wrong, and having at least two different hypotheses are all ways to make illegible intuitions better.

One of the things that I searched for in EA and didn't find, but think should exist: an algorithm, or algorithms, to decide how much to donate, as a personal-negotiation thing.

There is Scott Alexander's post about 10% as a Schelling point and a way to placate anxiety, and there is the Giving What We Can calculation, but neither has anything to do with personal values.

I want an algorithm that is about introspection - about not smashing down your altruistic and utilitarian parts, but not the other parts either, about finding what number is the right number for me, by my own Utility Function.

And I just... didn't find those discussions.
 

In dath ilan, where people expect to be able to name a price for more or less everything, and have done extensive training to give the same answer to the questions 'how much would you pay to get this extra', 'how much additional payment would you forgo to get this extra', 'how much would you pay to avoid losing this', and 'how much additional payment would you demand if you were losing this', there are answers.
 

What is the EA analog? How much would I be willing to give if my parents would never learn about it? If I could press a button and pay 1% more taxes that would go to top GiveWell charities, with all second-order effects stripped away except the money, what number would I choose? What if negative numbers were allowed? What about the creation of a city with rules of its own, that takes taxes for EA causes - how much would I accept then?
 

Where are the "how to figure out how much money you want to donate in a Lawful way" exercises?
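Purely to illustrate the shape of the exercise I'm wishing for - the framing questions and the aggregation rule below are arbitrary placeholders I made up, not an existing EA method:

```python
# A toy sketch of the kind of "Lawful donation" exercise I wish existed.
# The framings and their answers are made-up placeholders; the aggregation
# rule (simple average, spread as a consistency check) is arbitrary.

framings = {
    "fraction of income you'd give if nobody could ever find out": 0.04,
    "extra tax you'd vote for, earmarked for GiveWell-style charities": 0.02,
    "fraction you'd commit to in a binding pledge for the next year": 0.05,
}

values = list(framings.values())
spread = max(values) - min(values)    # how much the framings disagree
estimate = sum(values) / len(values)  # crude starting point, not an answer

print(f"answers: {values}")
print(f"spread: {spread:.2%} (large spread = the parts of you disagree; introspect more)")
print(f"starting-point donation rate: {estimate:.2%}")
```

The point isn't the arithmetic; it's forcing the different framings to produce numbers and noticing when they disagree.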
 

Or maybe it's because far too many people prefer, and try, to have their thinking, logical part win the internal battle against the other, more egotistical ones?
 

Where are all the posts about "how to find out what you really care about in a Lawful way"? The closest I have found is Internal Double Crux and the Multi-agent Model of the soul in all its versions. But where are my numbers?
 

So, I'm at the same time happy that there is an answer, but can't be happy with the answer itself. Which is to say, I tried to go and find the points I agree with, and found one point of disagreement after another. But I also believe this post deserves a more serious answer, so I will try to write at least part of my objections.

I do believe that x-risk, and societies destroying themselves as they become more clever than wise, is a real problem. But I disagree with the framing that the ants are the ones to blame. That's running from the problem. If the grasshoppers are to grow, even if more slowly, they too may bring atomic winter.

And you just... assume it away, in the manner of the worst Utopian writing, where societies have features that present-day people hate and find bad, but somehow everyone is happy and no one has any problem with it and everything is okay. It just... feels cheap to me.

And if you assume no growth at all, then... what about all the people who value growth? There are a lot of us in the world. If it's actually a "steady-state existence", not sustainable growth but everything staying the same way... that's really, really, really bad by my utility function, and the one good thing I can say about it is that such a state doesn't look stable to me. There have always been innovators and progressors. You can't have your stable society without some Dystopian repression of them.

But you can have dath ilan. This was my main problem with the original parable: it was very black-and-white. dath ilan didn't come to the ants and ask for food; instead it offered it. But it is definitely not a steady state. And to my intuition, it looks both possible and desirable.

And it also doesn't assume that the ants throw decision theory out the window; the original parables explicitly mentioned it. I find the representation of ants who forgo cooperation in the prisoner's dilemma strawmannish.

But besides all that, there is another, meta-point. There was prediction after prediction about peak oil and its consequences, and they all proved wrong. So did other predictions from that strand of socialism. From my point of view, the algorithm generating these predictions is untrustworthy. I don't think LessWrong is the right place for all those discussions.

And I don't plan to write my own dath-ilani reply to the parables.

But I don't think some perspectives are missing; I think they were judged false and ignored afterwards. And the way the original parables felt fair to the ants, while these don't, is evidence that this is a good rule to follow.

It's not a bubble; it's trust in the possibility of fair discussion, or the absence of that trust. Because a discussion in which my opinions are assumed to be the result of a bubble and not of honest disagreement... I don't have words to describe the sense of ugliness, of wrongness, that this creates. It's the same sense that makes the original post feel honest and fair, and this one underhanded and strawmannish.

(Everything written here is neither very certain nor a precise representation of my opinions, but I have already taken way too much time to write it, and I think it's better to write it than not.)

This would be much closer to the Pareto frontier than our current social organization! Unfortunately, this is NOT how society works. If you operate like that, you will lose almost all your resources.

But it's more complicated than that - why not gate this on cooperation? Why should I give 1 dollar for 2 of someone else's dollars, when they will not do the same for me?

And this is why this whole scheme doesn't work. Every such plan needs to account for defectors, and it doesn't look like you address that anywhere.

On the issue of politics - most people who get involved in politics make things worse. Before declaring that it's people's duty to do something, it's important to verify that it is a net-positive thing to do. If I look at the people involved in politics and decide that less politics would have been better for society, then my duty is to NOT get involved in politics. Or at least, not to get involved beyond what I believe is the right level of involvement.

But... I really don't see how all this politics is even connected to the first half of the post, about the right ratio of my utility to another person's utility.

 

Regarding the first paragraph - Eliezer is not criticizing the Drowning Child story in our world, but in dath ilan: a dath ilan that is utilitarian about such questions, where more or less everyone is utilitarian when children's lives are at stake. We don't live in dath ilan. In our world, it's often the altruistic parts that hammer down the selfish parts, or the warm-fuzzies parts that hammer down the utilitarian ones as heartless and cruel.

EA sometimes does the opposite - there are a lot of stories of burnout.

And in the grand scheme of things, what I want is a way to find which actions in the world will represent my values to the fullest - but this is a problem for which I can't learn from dath ilan, which has a lot of things fungible that are not fungible on Earth.

 

So I read a lot of dath-ilani glowfic in the previous weeks, and yet I didn't guess right. I didn't stop to put numbers on it, so I can only do so in retrospect (and I'm still not sure it's actually a good idea to put numbers on everything). It was 0.9 that the story is about a kid losing trust in adults because they were told a lie, and 0.3 that, after that, it would turn out they should have trusted the adults and the distrust is bad (like teens who think no drugs are dangerous because adults exaggerate the harm of the less harmful ones). Within that, I was basically 50-50 on whether the Aesop is about the importance of bounded trust for the reader, who should see themselves as the kid, or about lying to children being bad and the reader not doing it.

I did realize it's dath ilan and an experiment some time after. And now I'm even more curious about what dath ilan would do with people like me, who see lying as Evil. Not try to change them - the utility function is not up for grabs.

I'm pretty sure typical-minding makes my attempts to do that suboptimal. I just find it hard to imagine a society where most people are actually OK with that state of affairs. But my attempts at imagining trust breaking and things going bad feel unrealistic, un-dath-ilani to my sense of how-dath-ilan-is. For example, this: https://pastebin.com/raw/fjpS2ZDP doesn't strike me as realistic. I expect dath ilan can use the fact that the Keepers Are Trustworthy, for example, to swear to a child that they will never ever pull such experiments on them, and the child will believe that. I expect dath ilan to check at a younger age how children react to the sort of thing that is standard in dath-ilani education, and to stop if they see it is bad for some kid.

And yet... the utility function is not up for grabs. And for some reason, this "fact" about dath ilan is somehow worse than, for example, the places where dath ilan allows people a lack of reflection so they can remain themselves and not go full Keeper. I disagree there and find it wrong, but it strikes me as a difference in prioritization, whereas here it looks like our utility functions are opposite in this small section.

I see lies and deception as Evil, even if they can sometimes be traded off, and a society with more slack would use that slack to lie much less, almost never. dath ilan LIKES its clever experiments and the lies children are supposed to figure out for themselves. And I would have expected the Keepers to be the type of people who HATE such lies with the fury of a thousand suns. So in the end, I remain confused, and feel like dath ilan is somewhat a world-where-people-like-me-don't-exist. Which - most Utopias just ignore uncomfortable complications, but dath ilan is much better than most. And I can't really believe dath ilan heredity-optimized itself to not have people who hate lying and being lied to.

So in the end, I'm just confused.

This also describes math - like, the more complicated math that has some prerequisites, which a person who didn't take the courses in college or some analog will not understand.

Math, by my understanding of "legibility", is VERY legible. The same goes for programming, physics, and a whole bunch of explicitly lawful but complicated things.

What is your understanding of that sort of thing?

 
