All of wanderingsoul's Comments + Replies

Ah, that clears things up a bit. I think I just didn't notice when N' switched from representing an exploitive agent to an exploitable one. Either that, or I have a different association for "exploitive agent" than what EY intended (namely, one which attempts to exploit).

I'm not getting what you're going for here. If these agents actually change their definition of fairness based on other agents' definitions, then they are trivially exploitable. Are there two separate behaviors here: you want unexploitability in a single encounter, but you still want these agents to be able to adapt their definition of "fairness" based on the population as a whole?

I'm not sure that is trivial. What is trivial is that some kinds of willingness to change one's definition of fairness make an agent exploitable. However, this doesn't hold for all kinds of willingness to change it. Some agents may shift their definition of fairness in their favour to exploit agents vulnerable to that tactic, while refusing to shift it when the change would harm them. The only 'exploit' against such agents is 'prevent them from exploiting me and force them to use their default definition of fair'.

I tried to generalize Eliezer's outcomes to functions, and realized that if both agents are unexploitable, the optimal functions to pick lead precisely to Stuart's solution. Stuart's solution allows agents to arbitrarily penalize each other, though, which is why I prefer extending Eliezer's concept. Details below. P.S. I tried to post this in a comment above, but in editing it I appear to have somehow made it invisible, at least to me. Sorry for the repost if you can indeed see all the comments I've made.

I concur; my reasoning likely overlaps in parts. I particularly like your observation about the asymptotic behaviour when choosing the functions optimally.

It seems the logical extension of your finitely many step-downs in "fairness" would be to define a function f(your_utility) which returns the greatest utility you will accept the other agent receiving, given the utility you receive. The domain of this function should run from wherever your magical fairness point is down to the Nash equilibrium. As long as it is monotonically increasing, that should ensure unexploitability for the same reasons your finite version does. The offer both agents should make is at the greatest intersection point of these ... (read more)
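A rough numeric sketch of this proposal (the concession curves and all numbers below are invented purely for illustration, not taken from either post):

```python
# Each agent publishes a monotonically increasing function f mapping
# the utility it receives to the greatest utility it will accept the
# other agent receiving.  The deal both should offer is the greatest
# intersection of the two curves.

def greatest_intersection(f_a, f_b, lo, hi, steps=10_000):
    """Scan A's utility axis for the greatest x such that, when A
    gets x and B gets y = f_a(x), B's curve in turn allows A at
    least x."""
    best = None
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        y = f_a(x)              # most B-utility A will accept at x
        if f_b(y) >= x:         # B, receiving y, allows A at least x
            best = (x, y)       # keep the greatest such point
    return best

# Hypothetical concession curves over a 10-utility pie, each running
# from the Nash point (0, 0) up to that agent's own fairness point:
f_a = lambda x: (5 * x) ** 0.5       # A's fairness point: (5, 5)
f_b = lambda y: (8 * y / 3) ** 0.5   # B's fairness point: (4, 6)

deal = greatest_intersection(f_a, f_b, lo=0.0, hi=5.0)
```

With these particular curves the agreement lands around (3.29, 4.06): because B's idea of "fair" is greedier than A's, both end up below their fairness points, which is the punishment-for-incompatible-demands behavior the step-down scheme is meant to produce.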

I agree with you a lot, but would still like to raise a counterpoint, to illustrate the problem with mathematical calculations involving truly big numbers: what would you regard as the probability that some contortion of this universe's laws allows for literally infinite computation? I don't give it a particularly high probability at all, but I couldn't in any honesty assign it one anywhere near 1/3^^^3. The naive expected number of minds FAI affects doesn't even converge in that case, which at least for me is a little problematic.
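The non-convergence point can be seen with a toy calculation (every number here is invented for illustration): any fixed nonzero probability on "unboundedly large outcome" makes the naive expectation blow up as the cap on that outcome's size does.

```python
# Toy model: a (1 - p) chance of an ordinary-sized outcome, plus a
# tiny fixed probability p of a scenario whose size we cap at `cap`.
# As the cap grows, so does the naive expectation, without bound.
p_weird = 1e-30          # tiny, yet vastly larger than 1/3^^^3
ordinary_minds = 1e10    # made-up "normal scenario" count

def naive_expected_minds(cap):
    return (1 - p_weird) * ordinary_minds + p_weird * cap

for cap in (1e40, 1e80, 1e160):
    print(naive_expected_minds(cap))
```

Since no finite cap is justified in the infinite-computation scenario, the expectation has no limit to converge to.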

Yes, if he had said "I think there is a small-but-reasonable probability that FAI could affect way way more than 3^^^3 people", I wouldn't have had a problem with that (modulo certain things about how big that probability is).

Try to put the meeting location in the title, just to save people not involved a click and to better draw in people who are actually in the area.

Please taboo "good". When talking about stories especially, "good" has more than one meaning, and I think that's part of your disagreement.

A couple of others have mentioned warnings about doing something only to become attractive (e.g. you will tire of it or become resentful). Something like general fitness, with its multiple benefits, likely isn't a problem, but there's also an alternate perspective that has worked really well for me. Instead of optimizing for attractiveness, consider optimizing for awesomeness. Being awesome will tend to make people attracted to you, but it has the added bonus of improving your self-confidence (which again increases attractiveness) and life satisfaction.

As far as ho... (read more)

Awesome for whom? I've found that things that are awesome for me may be pretty irrelevant for others. I have strongly opposing beliefs: I tend to think that people who are attracted to you rationalize this by also believing that you're awesome. But I've not seen many examples of awesome and attractive (to women) guys who weren't from fiction.
A helpful tool for becoming awesome could be to know an awesome person (real or imaginary) and ask yourself what that person would do. I guess this helps to turn off your "identity" for a moment. (While thinking about what the other person would do, you remove the "but I don't typically act this way" filter, at least partially.) The next step is optimizing your environment, to spend more time with awesome people. For example, for me it means two things: spending less time on websites with low-quality discussion (almost all of them), and visiting free lectures by awesome people (together with my girlfriend, and then we discuss them together). Essentially it means manipulating availability bias to work in your favor. If you let television or newspapers filter your inputs, you will be surrounded by misfortune, anger, and frustration. If you filter your inputs by spending more time with awesome people, you will be surrounded by awesomeness. After some time your brain will start accepting "being awesome" as a custom of your tribe.
I used to optimise for awesomeness. My guiding principle was that if handed an object, I should be able to impress someone with it.

Instead of optimizing for attractiveness, consider optimizing for awesomeness.

I wish I had said this. All other considerations are secondary. Indeed, it's likely that all other metrics (weight/physical shape, fashion/clothing, flirting/conversation) are merely indicators that people use to try to gauge your actual awesomeness. Optimizing for the source rather than the signals is a great move, I'd upvote your comment multiple times if I could.

Well then LW will be just fine; after all, we fit quite snugly into that category.

Moderately on topic:

I'll occasionally take "drugs" like Airborne to boost my immune system if I feel myself coming down with something. I know full well that they have little to no medicinal effect, but I also know the placebo effect is real and well documented. In the end, I take them because I expect them to trigger a placebo effect where I feel better, and I expect that to work because the placebo effect is real. This feels silly.

I wonder whether it is possible to switch out the physical action of taking a pill with an entirely mental event and get... (read more)

I don't really care much about the it

My friends do though, so I often wish I cared more

I'm unsure whether I want to be moved by that consideration though

I really wish I had stronger opinions about things like that

But I don't really know how much good that wish is doing me

At least I give self reflection a shot though, people always say it has good effects

Though I'm unsure whether I should believe the hype

I dislike always being uncertain

Though I admit that dislike has both unpleasant and motivating aspects

And I love just what this drive to dispel uncertainty... (read more)

Awesome. Shouldn't the last one refer to the one above it rather than the one two places above it, though? I think it should be "and I love being able to recognize the costs and benefits of this uncertainty" rather than "and I love just what this drive to dispel uncertainty can do."

It does draw attention to the fact that we're often bad at deciding which entities to award ethical weight to. It's not necessarily the clearest post doing so, and it's missing the author's own opinion, but I wouldn't be shocked if the LW community could have an interesting discussion resulting from this post.

I think I was too grumpy in the grandparent.

It seems we have a new avatar for Clippy: the automated IKEA furniture factory.

Nice game; good to see someone making it easy to just practice being well calibrated.

My calibration started off wonky (e.g. I was wrong each of the first six times I claimed 70% certainty) but quickly improved. Unfortunately, it improved suspiciously well; I suspect I may have been assigning probabilities with my primary goal being not to score points but to make that bar graph displayed every 5 or 10 questions come out even. It's a well-designed game, but unfortunately, at least for me, the score wasn't the main motivator, which is a problem because the score is the quantity that increases when you're genuinely well calibrated. Anyone else have a similar experience?

My experience is distinctly similar. I observed another curiosity: for much of my time playing the game I got a larger fraction of 50%s right than of 60%s. I think what's going on is that the 50% cases are ones where I definitely have no idea of the answer and have to fall back on heuristics (have I heard of this person? does the name sound old or recent? etc.) -- and the heuristics work better than I can bring myself to admit they do :-).
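For anyone curious, this kind of per-confidence-bucket pattern is easy to check against one's own answer log (the log below is invented for illustration; as far as I know the game doesn't export one):

```python
# Group (stated confidence, was_correct) pairs by confidence bucket
# and compare stated confidence with the fraction actually right.
from collections import defaultdict

answers = [  # hypothetical sample log: (claimed probability, correct?)
    (0.5, True), (0.5, True), (0.5, False), (0.5, True),
    (0.6, False), (0.6, True), (0.6, False), (0.6, False),
    (0.7, True), (0.7, True), (0.7, False), (0.7, True),
]

buckets = defaultdict(list)
for p, correct in answers:
    buckets[p].append(correct)

# fraction right per bucket, e.g. {0.5: 0.75, 0.6: 0.25, 0.7: 0.75}
calibration = {p: sum(v) / len(v) for p, v in sorted(buckets.items())}
```

With this made-up data the 50% bucket comes out better than the 60% one, exactly the inversion described above.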

Took the survey, plus the IQ test out of curiosity, I'd never had my IQ tested before.

Along similar lines of reasoning, do we know how well the test correlates with non-internet tests of IQ? Getting a number is cool; knowing it was generated by a process fundamentally different from rand(100,160) would be even better.

I strongly suspect that a lot of the members of LessWrong have had a non-internet IQ test and will have entered their scores on the census. Those who also took the extra-credit internet test and entered their scores for that as well could serve as a sample group for just such an analysis. Granted, we are likely a biased sample of the population (I suspect a median somewhere around 125 on both tests), but data is data.

I hadn't, but it was worth the while. I agree, thanks

Not too long ago I wanted to write a poem to express a certain emotion, defiance toward death, but it only occurred to me recently that it might be LW-appropriate. I took a somewhat different path than "do not go gentle..." but you can judge for yourselves how it went. Posted in the open thread as I feel it is relatively open to random stuff like this. (Formatting screwy because I'm not used to the format here yet.)


I am afraid

        All about me the lights blink out

        Seeing their fate I’m filled with fear

... (read more)
Are you familiar with "Today I Die"? (also available on App Store) Seemed appropriate.

I'm no Peter Norvig, but this is the discussion section after all....

One tool that may or may not have a place in online education is gamification. To make a long story short, the gaming industry has gotten plenty of practice motivating people to keep going, even at tasks that wouldn't necessarily be the most interesting. Other industries have finally noticed this and started trying out which concepts from gaming carry over well to other fields. I don't personally know of any research specific to education, but would be interested if anything ... (read more)

I might as well take a shot at explaining. Pascal's wager says I might as well take on the relatively small inconvenience of going through the motions of believing in God, because if the small-probability event occurs that he does exist, the reward is extremely large or infinite (eternal life in heaven, presumably).

Pascal's mugging instead makes this a relatively small payment ($5, as Yudkowsky phrased it) to avoid or mitigate a minuscule chance that someone may cause a huge amount of harm (putting dust specks in 3^^^^3 people's eyes, or whatever the current ... (read more)