One of the most frustrating things about the Blanchardian system for me is how it flattens a ton of variation into the "auto____philia" category, asserting the same "erotic target location error" cause for all of it
In my opinion, this is not necessarily a problem. Not all variation is equal; if the goal is to describe the etiology of trans women, then it is not necessary to capture variation that happens for reasons other than etiology, such as personality.
the people pushing the theory tend to brush that off by asserting that trans people are lying (or grossly mistaken) about their own experiences and sexualities
I am ambivalent about this; Blanchardians clearly push this assertion too far, but anti-Blanchardians also don't acknowledge it enough.
the original study
What study do you have in mind?
It's not just that auto____philia is a less satisfying narrative; it simply doesn't track with many trans people's actual experiences at all, and I think analyzing a larger dataset (especially if the analysis were done by a researcher less prone to motivated reasoning and questionable statistical practices) would demonstrate that.
I think Phil understands the trans position (or should I say positions, because in actuality trans people have a ton of different and conflicting ideas about gender, sexuality, sex, and so on) very poorly, or at least represents it very poorly in the quote you provided. What he's presenting there is what I tend to call the "lies to cis children" version, intended to try and get the basic idea across to people who know nothing about transness and have no framework for understanding our experiences.
Phil is well aware that trans activists are lying about trans etiology; that is sort of his schtick. It seems strange for you to acknowledge widespread deception while also dismissing accusations of deception earlier in your comment. Sure, Blanchardians probably don't guess correctly every time about when trans people are lying, but identifying lies with 100% accuracy is hard.
In fact, the HSTS subset of the initial Blanchard study sample likely consists almost entirely of patients who were seeing him in order to access medical transition and had to either have or pretend to have experiences and motivations which fit the Harry Benjamin template in order to do so.
I kind of struggle with buying this theory, at least without more explication. How am I supposed to square this with people I've seen who seem to fit the HSTS archetype?
One of the reasons there isn't a coherent position about gender, etc. among trans people, however, is that we're mostly just trying to get by; we're not nearly as concerned with theorizing. We're working from our own experiences, we're more concerned with practical things like access to the medical care which pretty demonstrably helps us even if we don't understand exactly what's going on under the hood, and we all have different experiences which inform how we think about this stuff.
I mean I can be sympathetic to this point. The standard response by Phil and other Blanchardians when trans women doubt the typology is "Well then why did you transition??" / "What model is better than Blanchardianism??", and in practice this seems to play out pretty abusively. You can't really expect someone to solve a tricky causal inference problem for you for no reason.
But on the other hand, most methods of evaluating evidence (e.g. Bayesianism) work best when there are multiple theories to contrast. If e.g. you could list some factors where you differ from cis men, which plausibly caused you to transition, then I could add them to a survey sent to the sample from my comprehensive study, and we could see whether there are any statistical patterns of interest. But otherwise it is really hard to figure out any objectively informative tests.
It's a good question. I've usually measured ideology as a side-effect when using a completely different method for running surveys.
Rather than writing all of the questions by myself, I have asked a bunch of people to give me a bunch of qualitative descriptions of themselves that might be relevant for the survey, and then I've taken those qualitative descriptions and turned them into a large number of fixed-response-set questions, each question capturing a different aspect of a qualitative response. When factor-analyzing such a set of questions, usually a political factor pops out.
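For concreteness, here is a minimal sketch of that factor-analysis step, not my actual pipeline: it assumes a hypothetical respondents-by-questions matrix of Likert-style answers and just shows how one would extract factors and look for a cluster of politically charged items loading together.

```python
# Sketch only: hypothetical data, hypothetical item count, arbitrary number of factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 500 respondents answering 12 questions on a 1-5 scale.
responses = rng.integers(1, 6, size=(500, 12)).astype(float)

# Standardize items so factors aren't driven by differing item variances.
X = StandardScaler().fit_transform(responses)

# Extract a handful of latent factors; the number is a judgment call
# (scree plots or parallel analysis are common ways to pick it).
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(X)   # respondent scores on each factor
loadings = fa.components_      # factor-by-question loading matrix

# Inspect which questions load on which factor; a cluster of politically
# charged items loading together on one factor is the "political factor".
for i, row in enumerate(loadings):
    top = np.argsort(-np.abs(row))[:4]
    print(f"Factor {i}: strongest items {top}, loadings {np.round(row[top], 2)}")
```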
An equivalent for your case might be to first qualitatively ask a bunch of parents whether there is anything special about their manner of parenting, and then turn whatever they mention into multiple-choice questions.
However, this is a lot of work, and it can also result in very long questionnaires that may be hard to get responses for. So it may not be so practical.
An option may be to just ask a few relevant questions one can think of, e.g.
... and then create an overall score from that.
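As a rough illustration of the scoring step (the item names below are hypothetical examples I made up, not the questions I would actually recommend), one could reverse-code any oppositely keyed items, standardize, and average:

```python
# Sketch only: hypothetical items and made-up responses on a 1-5 agreement scale.
import pandas as pd

df = pd.DataFrame({
    "strict_discipline": [5, 2, 4, 1],
    "kids_need_freedom": [1, 4, 2, 5],
    "traditional_roles": [4, 1, 5, 2],
})

# Reverse-code items keyed in the opposite direction (here, the "freedom" item).
df["kids_need_freedom"] = 6 - df["kids_need_freedom"]

# Standardize each item and average, so no single item dominates the scale.
z = (df - df.mean()) / df.std(ddof=0)
df["ideology_score"] = z.mean(axis=1)
print(df)
```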
Though especially once ideological questions explicitly mention childrearing, it also raises questions about the direction of causality. One could sort of handle this by asking about more distal ideological questions such as "Do you vote for Republicans or Democrats?" or "Do you think the government is too big?", but going further away also makes the measure less able to capture ideological factors specific to childrearing.
I don't know what the best option is, given your constraints, especially because I don't know your constraints.
- Then it will output actions in such a way that will make the environment more predictable, so that it gets moved around by gradient descent less when it receives the observation
Why? That's not part of standard LLM training. And if it did, wouldn't stuff like breaking its sensors be the most straightforward way to make the environment more predictable, which again is completely different from what humans do?
It's too bad you didn't have data by ideology. It seems to me that ideology is quite strongly related to attitudes towards children's capabilities.
I'd say usually bottlenecks aren't absolute, but instead quantifiable and flexible based on costs, time, etc.?
One could say that we've reached the threshold where we're bottlenecked on inference-compute, whereas previously talk of compute bottlenecks was about training-compute.
This seems to matter for some FOOM scenarios since e.g. it limits the FOOM that can be achieved by self-duplicating.
But the fact that AI companies are trying their hardest to scale up compute, and are also actively researching more compute-efficient algorithms, means IMO that the inference-compute bottleneck will be short-lived.
Here's another way of looking at it which could be said to make it more trivial:
We can transform addition into multiplication by taking the exponential, i.e. x+y=z is equivalent to 10^x * 10^y = 10^z.
But if we unfold the digits into separate axes rather than as a single number, then 10^n is just a one-hot encoding of the integer n.
Taking the Fourier transform of the digits to do convolutions is a well-known fast multiplication algorithm.
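A minimal sketch of the point, in case it helps: treating 10^x and 10^y as one-hot digit vectors, their product is the convolution of those vectors, and by the convolution theorem that convolution is an elementwise product of Fourier transforms. The position of the resulting 1 is x+y, so addition of exponents is recovered. (Values of x, y and the vector length are arbitrary for illustration.)

```python
import numpy as np

def one_hot(n, length):
    """Digit vector of 10^n (base 10): a 1 in position n, zeros elsewhere."""
    v = np.zeros(length)
    v[n] = 1.0
    return v

x, y = 3, 4
length = 16  # enough room so the circular convolution doesn't wrap around

a = one_hot(x, length)   # digits of 10^x
b = one_hot(y, length)   # digits of 10^y

# Multiplying 10^x * 10^y = convolving their digit vectors,
# which via the convolution theorem is a pointwise product of FFTs.
product_digits = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

z = int(np.round(product_digits).argmax())
print(z)  # 7, i.e. x + y: addition recovered as the position of the single 1
```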
Is this really an accurate analogy? I feel like clock arithmetic would be more like representing it as a rotation matrix, not a Fourier basis.
I think if I hover my cursor over text that has been reacted to, I should see the react.