tailcalled

Sequences

Linear Diffusion of Sparse Lognormals: Causal Inference Against Scientism

Comments

She also frequently compared herself to Glaistig Uaine and Kyubey.

Reminder not to sell your soul(s) to the devil.

What I don't get is, why do you have this impulse to sanewash the sides in this discussion?

Is this someone who has a parasocial relationship with Vassar, or a more direct relationship? I was under the impression that the idea that Michael Vassar supports this sort of thing was a malicious lie spread by rationalist leaders in order to purge the Vassarites from the community. That seems more like something someone in a parasocial relationship would mimic than like something a core Vassarite would do.

I have been very critical of cover-ups on LessWrong. I'm not going to name names, and maybe you don't trust me. But I have observed this all directly. If you let people toy with your brain while you are under the influence of psychedelics, you should expect high odds of severe consequences. And your friends' mental health might suffer as well.

I would highlight that the Vassarites' official stance is that privacy is a collusion mechanism created to protect misdoers, and so they can't consistently oppose you sharing what you know.

all FSAs are equivalent

??????

I think one underused trick for training LLMs is to explicitly "edit" them. That is, suppose they generate some text X in response to prompt Y, and it has some error or is missing something. In that case you can create a text X' that fixes this problem, and do a gradient update to increase log[P(X'|Y)/P(X|Y)], i.e. to raise the log-probability of the corrected text relative to the original.

For example, if we generate virtual comments in the style of certain LW users, one could either let those users browse the virtual comments that have been created in their style and correct them, or one could let the people who receive the virtual comments edit them to remove misunderstanding or similar.
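Concretely, a minimal sketch of that update (my illustration, not a worked-out recipe; the model name and helper functions are just placeholders, and it glosses over tokenization edge cases at the prompt/completion boundary), assuming a HuggingFace-style causal LM:

```python
# One step of the "edit" update: given prompt Y, the model's own output X, and an
# edited version X', take a gradient step on -(log P(X'|Y) - log P(X|Y)), i.e.
# increase the edited completion's log-probability relative to the original one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works the same way
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

def completion_logprob(prompt: str, completion: str) -> torch.Tensor:
    """Sum of log-probabilities of the completion tokens, conditioned on the prompt."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + completion, return_tensors="pt").input_ids
    logits = model(full_ids).logits
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)      # position i predicts token i+1
    token_logprobs = logprobs.gather(-1, full_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_logprobs[:, prompt_ids.shape[1] - 1:].sum()  # keep only completion positions

def edit_update(Y: str, X: str, X_edited: str) -> None:
    """One gradient step pushing P(X'|Y) up and P(X|Y) down."""
    loss = completion_logprob(Y, X) - completion_logprob(Y, X_edited)
    opt.zero_grad()
    loss.backward()
    opt.step()
```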

If we think of the quantified abilities as the logarithms of the true abilities, then taking the log has likely massively increased the correlations by bringing the outliers into the bulk of the distribution.
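A quick simulation of the effect (the specific numbers, e.g. a 0.5 log-scale correlation and a spread of 3, are my own assumptions for illustration):

```python
# If the "true abilities" are lognormal, the correlation measured on the log scale is
# much higher than on the raw scale, because the log pulls outliers back into the bulk.
import numpy as np

rng = np.random.default_rng(0)
n, rho, sigma = 200_000, 0.5, 3.0   # assumed log-scale correlation and spread

shared = rng.standard_normal(n)
z1 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(n)
z2 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(n)
a1, a2 = np.exp(sigma * z1), np.exp(sigma * z2)   # heavy-tailed "true abilities"

print(np.corrcoef(z1, z2)[0, 1])   # ~0.5 on the log scale
print(np.corrcoef(a1, a2)[0, 1])   # far smaller on the raw scale
```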

Your post is an excellent example of how the supposedly-reasonable middle ground tends to be so clueless as to be plausibly worse than the extremes.

Like, e.g. Blanchard doesn’t think trans men have AGP

You mean AAP here, right?

He accepts autohomoeroticism, which is close enough to AAP that the difference doesn't matter. The real problem here is Michael Bailey who has a sort of dogmatic denial of AAP.

doesn’t think trans women who are attracted to men have AGP

That's pretty common in people's second-hand versions of the theory; the real issue here is that this is sometimes wrong: some androphiles are AGP.

Oversimplification 2: Bisexuals exist. Many trans women report their sexual orientation changing when they start taking hormones. The correlation between having AGP and being attracted to women can’t be as 100% as Blanchard appears to believe it is.

Blanchard explicitly measured that some trans women identified as bisexual, and argued that they were autogynephilic and not truly bisexual. There are some problems with that assertion, but uncovering those problems really ought to engage with more of the nuances than what you imply here.

Oversimplification 4: Do heterosexual cisgender women have AGP? (Cf. comments by Aella, eigenrobot, etc.) If straight cisgender women also like being attractive in the same way as (some) trans women do, it becomes somewhat doubtful that it's a pathology.

According to qualitative studies I've done, around 15% of women are at least somewhat AGP (though I think it correlates with being bi/lesbian), but the assertion that this implies it's not a pathology for males seems like magical thinking. E.g. ~100% of women have breasts, but this does not mean that developing breasts would not be considered a pathology for males.

If you consider the "true ability" to be the exponential of the subtest scores, then the extent to which the problem I mention applies depends on the base of the exponential. In the limiting case where the base goes to infinity, only the highest ability matters, whereas in the limiting case where the base goes to 1, you end up with something basically linear.
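Spelling the two limits out (my own gloss), with $x_i$ the subtest scores, $b$ the base, and $n$ the number of subtests:

$$\sum_i b^{x_i} \;\approx\; \begin{cases} n + (\ln b)\sum_i x_i & \text{as } b \to 1 \text{ (an affine function of the plain sum),}\\ b^{\max_i x_i} & \text{as } b \to \infty \text{ (only the top score matters).}\end{cases}$$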

As for whether it's a crux, approximately nobody has thought about this deeply enough that they would recognize it, but I think it's pretty foundational for a lot of disagreements about IQ.

The analogy I'm objecting to is this: if you look at e.g. the total of a ledger or a budget, that is an index which sums the expenses together in a much more straightforward way. For instance, if there is one large expense, the total is large.

Meanwhile, IQ scores are more like the geometric mean of the entries in such a ledger. The geometric mean tells you whether the individual items tend to be large or small, which gives you broad-brush information that distinguishes e.g. people who live in high-income countries from people who live in low-income countries, or large organizations from individual people; but it won't tell you whether someone got hit by a giant medical bill or whether they managed to hack themselves into an ultra-cheap living space. These pretty much necessarily have to be low-rank mediators (like in the g model) rather than diverse aggregates (like in the sum model).

(Well, a complication in this analogy is that a ledger can vary not just in the magnitude of the transfers but also qualitatively in the kinds of transfers that are made, whereas IQ tests fix the variables, making them more analogous to a standardized budget form (e.g. for tax or loan purposes) broken down by items like "living space rent", "food", "healthcare", etc.)
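A toy version of that ledger point (numbers invented purely for illustration):

```python
# One giant medical bill multiplies the total by more than 10x, but moves the geometric
# mean by less than 2x; the geometric mean mostly tracks whether the entries are large
# or small across the board.
import numpy as np

typical = np.array([1200, 400, 300, 150, 100, 80, 60, 50, 40, 30], dtype=float)
with_big_bill = typical.copy()
with_big_bill[2] = 30_000   # the healthcare entry blows up

for budget in (typical, with_big_bill):
    total = budget.sum()
    geo_mean = np.exp(np.log(budget).mean())
    print(f"total = {total:8.0f}   geometric mean = {geo_mean:6.0f}")
```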
