The second thing that I find surprising is that a lie detector based on ambiguous elicitation questions works. Again, this is not something I would have predicted before doing the experiments, but it doesn’t seem outrageous, either.
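To make the idea concrete, here is a minimal sketch of what such a lie detector could look like. All of the data and question features below are made up for illustration: each transcript is reduced to yes/no answers on a handful of ambiguous follow-up questions, and a plain-Python logistic regression is fit to predict whether the model had just lied.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=500):
    """Fit logistic regression with plain gradient descent (no libraries)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify one answer-pattern as lie (1) or honest (0)."""
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5)

# Toy data (hypothetical): each row is yes(1)/no(0) answers to four
# ambiguous elicitation questions; label 1 means the model had lied.
X = [[1, 0, 1, 1], [1, 1, 1, 0], [0, 0, 1, 0],
     [0, 1, 0, 0], [1, 0, 1, 0], [0, 0, 0, 1]]
y = [1, 1, 0, 0, 1, 0]

w, b = train_logreg(X, y)
preds = [predict(w, b, xi) for xi in X]
```

The point of the sketch is just that the detector never inspects the suspected lie itself; it only looks at the pattern of answers to unrelated, ambiguous follow-ups.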

I think we can broadly put our ambiguous questions into 4 categories (although it would be easy to find more questions from more categories):


Somewhat interestingly, humans who answer nonsensical questions (rather than skipping them) generally do worse at tasks: pdf. There are other citations of nonsensical/impossible questions in there if you're interested ("A number of previous studies have utilized impossible questions...").

It seems plausible to me that this is a trend in human writing more broadly that the LLM picked up on. Specifically, giving a false answer is associated with a cluster of behaviors: one of them is deceit, another is mimicking someone who doesn't know the answer or doesn't care about the instructions they were given. Since that association exists in human writing in general, the LLM picks it up and exhibits it in its own writing.

See this comment.

You edited your parent comment significantly in such a way that my response no longer makes sense. In particular, you had said that Elizabeth summarizing this comment thread as someone else being misleading was itself misleading.

In my opinion, editing your own content in this way without indicating that you have done so is dishonest and a breach of internet etiquette. If you wanted to do this more appropriately, you might say something like "Whoops, I meant X. I'll edit the parent comment to say so.", then edit the parent comment to say X and include a disclaimer like "Edited to address Y".

Okay, onto your actual comment. That link does indicate that you have read Elizabeth's comment, although I remain confused about why your unedited parent comment expressed disbelief about Elizabeth's summary of that thread as claiming that someone else was misleading.

I took Tristan to be using "sustainability" in the sense of "lessened environmental impact", not "requiring little willpower".

The section "Frame control" does not link to the conversation you had with wilkox, but I believe you intended for there to be one (you encourage readers to read the exchange). The link is here:

In the comment thread you linked, Elizabeth stated outright what she found misleading:

Getting the paper author on EAF did seem like an unreasonable stroke of good luck.

I wrote out my full thoughts here, before I saw your response, but the above captures a lot of it. The data in the paper is very different than what you described. I think it was especially misleading to give all the caveats you did without mentioning that pescetarianism tied with veganism in men, and surpassed it for women.

I expect people to read the threads that they are linking to if they are claiming someone is misguided, and I do not think that you did that.

I don't think that's the central question here.

So far as I can tell, the central question Elizabeth has been trying to answer is "Do the people who convert to veganism because they get involved in EA have systemic health problems?" Those health problems might be easily solvable with supplementation (great!), inherent to a fully vegan diet but avoidable with some modest amount of animal products, or something more complicated. She has had several people self-report to her that they tried veganism, had health problems, and stopped. So "At what rate do vegans desist for health reasons?" seems like an important question to me. It will tell you at least some of what you are missing when surveying only current vegans.

Analogously, a survey of healing crystal buyers doesn't reliably tell us whether healing crystals improve health. Even if such a survey is useful for explaining motives, it's clearly less valuable than an RCT when it comes to the important question of whether they actually work.

I agree that if your prior probability of something being true is near 0, you need very strong evidence to update. Was your prior probability that someone would desist from the vegan diet for health reasons actually that low? If not, why is the crystal healing metaphor analogous?

I'm aware that people have written scientific papers that include the word "vegan" in the text, including the people at Cochrane. I'm confused about why you thought that would be helpful. Does a study that relates health outcomes in vegans to vegan desistance exist, such that we can actually answer the question "At what rate do vegans desist for health reasons?"

Does such a study exist?

From what I remember of Elizabeth's posts on the subject, her opinion is that the literature surrounding this topic is abysmal. To resolve the question of why some veg*ns desist, we would need a study that records objective clinical health outcomes alongside veg*n/non-veg*n diet compliance. What I recall from Elizabeth's posts is that no existing study even approaches this bar, so she used other, less reliable metrics.

I took your original comment to be saying "self-report is of limited value", so I'm surprised that you're confused by Elizabeth's response. In your second comment, you seem to be treating your initial comment as having said something closer to "self-report is so low value that it should not materially alter your beliefs." Those seem like very different statements to me.


If you're taking UI recommendations: I'd have been more decisive with my change if it had said it was a one-time change.
