> Well, I haven't seen even a blog post's worth of effort put into doing something like what I suggested.
I think blog posts are potentially weird measures of effort, here. I also think that this is something that people are interested in doing--I think it's a component of [MIRI's strategic sketch ...
> The healers may not appreciate being asked to work _so much_ harder, just so that the DPSers can work _a bit_ less hard, and “but this benefits the raid” may not suffice to persuade them.
I note also that healers are much less replaceable than DPS are--or at least, that was the way of things when...
Specifically, the salary is for being a teaching assistant or a research assistant, rather than for being a student, but everything is structured under the assumption that graduate students will have a relevant part-time job that covers tuition and living expenses.
> One reason I don't like your graph is that I have no idea how to suffer both X and Y at the same time, for the same action.
Imagine an audience with non-overlapping preferences. Suppose you have control over the thermostat, and someone likes the temperature above 20 degrees C, and another likes ...
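A toy numerical version of that tradeoff, with made-up numbers and a linear penalty I'm assuming purely for illustration (the `suffering` helper and the comfort ranges are hypothetical, not from the original graph):

```python
# Toy version of the non-overlapping-preferences point: one listener wants the
# room at or above 20 C, another at or below 18 C, and each suffers in
# proportion to how far the single chosen setting falls outside their comfort
# range. At intermediate settings the same one action (picking the setting)
# imposes some suffering on both at once.

def suffering(setting, lower_ok=None, upper_ok=None):
    """Linear penalty for a setting outside a one-sided comfort range."""
    if lower_ok is not None and setting < lower_ok:
        return lower_ok - setting
    if upper_ok is not None and setting > upper_ok:
        return setting - upper_ok
    return 0

for setting in (16, 18, 19, 20, 22):
    warm_fan = suffering(setting, lower_ok=20)  # wants >= 20 C
    cool_fan = suffering(setting, upper_ok=18)  # wants <= 18 C
    print(f"{setting} C: warm-preferrer suffers {warm_fan}, cool-preferrer suffers {cool_fan}")
```

At 19 C both penalties are nonzero, which is the sense in which a single action can produce both kinds of suffering at once.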
LessWrong is not the place for this sort of complaint, hence the downvotes (including mine).
Note that while the Slack channel has a similar name, it is an independent entity run by Elo, and doesn't have the same moderation team.
> The honest debater can give a whole bunch of RGB pixel values, which even if it doesn't conclusively establish a lie will make the truth telling strategy have a higher winning probability, which would be enough to make both debaters converge to telling the truth during training.
One thing that I ...
> My understanding is each debater can actually reveal many pixels to the judge. See this quote from section 3.2:
That sounds different to me--the point there is that, because you only need a single pixel to catch me in a lie, and any such demonstration of my dishonesty will result in your win, you...
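To make that asymmetry concrete, here's a toy sketch of the dynamic (my own construction, not the paper's actual protocol; the `judge` function and the pixel values are hypothetical): treat each debater's checkable claims as assertions about individual pixel values, and have the judge rule immediately against anyone whose assertion is contradicted by a revealed pixel.

```python
# Toy sketch of why one pixel can decide the game: if any revealed pixel
# contradicts a value a debater asserted earlier, the judge treats that as a
# caught lie and awards the win to the other debater on the spot.

def judge(revealed_pixels, claims_a, claims_b):
    """claims_* map pixel coordinates to the value that debater asserted;
    revealed_pixels stands in for whatever ground truth gets shown."""
    def caught(claims):
        return any(revealed_pixels.get(pos) not in (None, val)
                   for pos, val in claims.items())
    a_caught, b_caught = caught(claims_a), caught(claims_b)
    if a_caught and not b_caught:
        return "B wins: A was caught lying"
    if b_caught and not a_caught:
        return "A wins: B was caught lying"
    return "no lie exposed; judge weighs the arguments instead"

revealed = {(14, 14): 0.9}                 # a single revealed pixel
honest = {(14, 14): 0.9}                   # asserts the true value
dishonest = {(14, 14): 0.1}                # asserts a false value for that pixel
print(judge(revealed, honest, dishonest))  # -> "A wins: B was caught lying"
```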
In Aumann, you have two Bayesian reasoners who are motivated by believing true things, and who, because they're reasoning in similar ways, can use the output of the other reasoner's cognitive process to refine their own estimate, in a way that eventually converges.
Here, the reasoners are non-Bayesian, a...
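For contrast, a minimal sketch of the Bayesian side, under assumptions I'm adding purely for illustration (binary world state, common 50/50 prior, binary signals of known accuracy): once each agent learns what the other saw, which announcing a posterior reveals in this simple setup, their estimates coincide.

```python
import itertools

# Minimal Aumann-style sketch (illustrative assumptions: binary state, common
# prior, binary signals with known accuracy). Announcing a posterior here
# reveals the underlying signal, so after one exchange each agent conditions
# on both signals and they end up with the same estimate.

PRIOR = 0.5      # P(state = 1)
ACCURACY = 0.8   # P(signal = state | state)

def likelihood(signal, state):
    return ACCURACY if signal == state else 1 - ACCURACY

def posterior(signals):
    """P(state = 1 | signals), starting from the common prior."""
    p1, p0 = PRIOR, 1 - PRIOR
    for s in signals:
        p1 *= likelihood(s, 1)
        p0 *= likelihood(s, 0)
    return p1 / (p1 + p0)

for sig_a, sig_b in itertools.product([0, 1], repeat=2):
    before_a = posterior([sig_a])
    before_b = posterior([sig_b])
    after = posterior([sig_a, sig_b])  # each now conditions on both signals
    print(f"A sees {sig_a} ({before_a:.2f}), B sees {sig_b} ({before_b:.2f}) -> both converge to {after:.2f}")
```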
Arbitrary, inspired by reading left to right.
The "valley of death" in scientific contexts is research that is not profitable to do that is in between research that is profitable to do (and thus is where good ideas go to die). In this particular context, it's feasible to do preliminary studies of whether or not drugs will help with particular b...(read more)