Hi. I'm Gareth McCaughan. I've been a consistent reader and occasional commenter since the Overcoming Bias days. My LW username is "gjm" (not "Gjm" despite the wiki software's preference for that capitalization). Elsewhere I generally go by one of "g", "gjm", or "gjm11". The URL listed here is for my website and blog, neither of which has been substantially updated in about the last four years. I live near Cambridge (UK) and work for a small technology company in Cambridge. My business cards say "mathematician" but in practice my work is a mixture of simulation, data analysis, algorithm design, software development, problem-solving, and whatever random engineering no one else is doing. I am married and have a daughter born in mid-2006. The best way to contact me is by email: firstname dot lastname at pobox dot com. I am happy to be emailed out of the blue by interesting people. If you are an LW regular you are probably an interesting person in the relevant sense even if you think you aren't.

If you're wondering why some of my old posts and comments are at surprisingly negative scores, it's because for some time I was the favourite target of old-LW's resident neoreactionary troll, sockpuppeteer and mass-downvoter.


How do you measure conformity?

"They laughed at Galileo. They laughed at Einstein. But they also laughed at Bozo the Clown."

It's true that being shunned or attacked is a sign that you've likely done something nonconforming and transgressive. But some things are nonconforming and transgressive just because they're stupid or obnoxious.

I think what this means is that trying to measure how non-conformist you are, and hoping that "the more non-conformist the better", is a mistake. Being unlike other people isn't a good goal, in itself, because most ways of being unlike other people are not improvements. What you need to identify is ways of doing better, specifically, and measuring whether they laugh at you / shun you / stone you won't do that, because they'll do that for being worse as well as for being better.

D&D.Sci II Evaluation and Ruleset

I don't think the trap was horrible and unfair. Rule One of data science is: always look at your freakin' data rather than blindly feeding it into the sausage-making machine and hoping you'll be able to eat what comes out.

Discussion on the choice of concepts

"Since you're troubled by the other possibly unwanted associations of the word stupid, how about we just agree to say that toasters aren't highly intelligent? It doesn't really matter whether you say that that's because toasters aren't the sort of thing one can call intelligent, or that it's because you could call them intelligent if they were but they aren't; either way we can agree that toasters are not highly intelligent agents, and that's what matters."

"Oh, yeah, that works."

"Great. Let's move on."

(Of course, in many arguments about how one should define things there isn't a sufficiently convenient circumlocution, either because there isn't a good one at all or because it's super-important to have a handy short term or because the question is exactly about how one particular term should be used.)

D&D.Sci II: The Sorceror's Personal Shopper

Meta: there's one word in that comment that's kinda spoilery and you should maybe spoilerize it.

D&D.Sci II: The Sorceror's Personal Shopper

Proposed buy (no explanations but may still be spoilery; there is a lot I still don't understand so I suspect one can do better):

 WH o Ju, Pl o Pl, Ha o Ca, Pe o Ho. I expect a little under 130 mana, for a cost of 144gp.

Explanations (definitely spoilery):

 Yellow-glowing things get 18-21 mana; I haven't found patterns beyond that. Green-glowing things get 2-40 mana, always an even number; I haven't found patterns beyond that. Red-glowing things get 2^a 3^b 5^c mana; other than the fact that somehow we never get >96 even though we separately get 64, 27, 5, I haven't found patterns beyond that. Blue-glowing things get highly variable mana, also favouring small prime factors though 7 occurs; for these (and only these) the thaumometer gives plainly useful information, yielding the true mana gain ±1 except that items you wear yield a number too high by 22. So the two cheaper yellow items are pretty good value, as are the highest-thaumperature blue ones even though one of them is overrated. We should get at least 18+18 for the yellow ones and at least 34+54 for the blue ones.


I suspect there may be more going on than I yet understand with the red and green items, for which at present I don't think I know anything useful. And maybe the finer details of the yellow and blue ones are predictable too.
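In case it helps anyone check my reasoning, here's the valuation model above written out as code (a sketch only: the function name, the green-item point estimate, and the example thaumometer reading are my inventions, not ruleset facts):

```python
# Rough expected-mana model from the patterns described above (all conjectural).
def expected_mana(colour, thaum_reading=None, worn=False):
    """Estimate an item's mana yield from its glow colour and, for blue items,
    its thaumometer reading."""
    if colour == "yellow":
        return 19.5            # observed range 18-21; no finer pattern found
    if colour == "green":
        return 21.0            # even numbers from 2 to 40; midpoint as a crude guess
    if colour == "red":
        return None            # 2^a 3^b 5^c, never >96; no useful point estimate yet
    if colour == "blue":
        # Thaumometer is accurate to within 1, except that wearable items
        # read 22 too high.
        return thaum_reading - (22 if worn else 0)
    raise ValueError(f"unknown glow colour: {colour}")

print(expected_mana("blue", thaum_reading=76, worn=True))  # 54
```

(The thaumometer reading is only used for blue items, since for the other colours it doesn't seem to carry useful information.)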

D&D.Sci II: The Sorceror's Personal Shopper

Then you are missing out. I have only a partial understanding of the phenomena so far, but I already have a set of four items that I think should pretty reliably get at least 120 mana for a total price below 150gp.


Nitpick about that first paragraph: that sort of backward chaining is pretty common in chess, actually. Near the end of a game you're very often envisaging a particular state of affairs and planning how to get there. Not necessarily the exact final state, but something like "I need his king in this corner or that corner, and I need it to arrive there when my knight is here or here so that my bishop can do that and deliver checkmate". Even in the middle of the game you may have intermediate goals like "move my pawn safely from e7 to e5" or "drive the knight away from e4 without weakening my king position".

It feels as if actually there's a continuum from "do things that make the situation better in some generalized sense" to "do things that make the situation better in particular ways that seem like they're likely to be useful" to "do things that make the situation better in particular ways I can see likely uses for" to "do things that make the situation better in particular ways that I definitely have concrete uses for" to "do things that bring about a broad class of later states that I like" to "do things that bring about a very specific range of later states that I like" to "do things that bring about a single specific outcome that I need".

I think it genuinely doesn't make sense to say that $\lambda$ reflects our prior expectation of the noise; the acolyte is correct. What $\lambda$ reflects is our prior on $w$; that regularization term corresponds exactly to a prior that makes $w$ (multivariate) normally distributed with mean zero and covariance $\sigma^2/\lambda$ times the identity (i.e., components independent and each component having variance $\sigma^2/\lambda$).
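Spelling that correspondence out (a sketch, assuming Gaussian noise with variance $\sigma^2$ and the penalty written as $\lambda\lVert w\rVert^2$):

```latex
% MAP estimation with a Gaussian prior reproduces the ridge objective.
% Model: y = Xw + eps, eps ~ N(0, sigma^2 I); prior: w ~ N(0, (sigma^2/lambda) I).
\begin{align*}
-\log p(w \mid y)
  &= \frac{1}{2\sigma^2}\lVert y - Xw \rVert^2
   + \frac{\lambda}{2\sigma^2}\lVert w \rVert^2 + \text{const} \\
  &= \frac{1}{2\sigma^2}\Bigl(\lVert y - Xw \rVert^2 + \lambda \lVert w \rVert^2\Bigr) + \text{const},
\end{align*}
```

so maximizing the posterior over $w$ is exactly minimizing the penalized least-squares objective, and a larger $\lambda$ just means a tighter prior on $w$.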

You've dropped a factor of 2 about half-way through your calculation. And then you've multiplied by $X^{-1}$ between two lines separated by "="; the idea is that both sides are zero so it kinda-sorta makes sense but it's super-misleading. If you restore the factor of 2 then your last equation ends up as $2w = 2X^{-1}y$.

But even this is wrong, I'm afraid. You can't multiply by $X^{-1}$ there at all. There is no $X^{-1}$: $X$ is not (except by coincidence, and in an ML application if this coincidence happens then you don't have anything like enough data) a square matrix and in general it has no inverse.
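A quick numerical illustration of that point (made-up shapes; any "tall" data matrix behaves the same way): a non-square $X$ has no inverse, but $X^TX + \lambda I$ is invertible for any $\lambda > 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 observations, 5 features: not square

try:
    np.linalg.inv(X)            # inversion is only defined for square matrices
except np.linalg.LinAlgError:
    print("X is not square, so X^{-1} does not exist")

lam = 0.1
A = X.T @ X + lam * np.eye(5)   # symmetric positive definite for lam > 0
print(np.linalg.cond(A))        # finite and modest: comfortably invertible
```

(With $n = d$ by coincidence, `np.linalg.inv(X)` would succeed, but as noted above that situation means you have far too little data.)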

There are problems earlier in the derivation, too, which I think are encouraged by some of your nonstandard notation. E.g., you write $wX$ rather than $Xw$ or $X \cdot w$, and this has fooled you into writing down something wrong for what you write as $\frac{dL}{dw}$. That's also nonstandard notation; it's defensible but again it makes it easy to get things wrong by mixing up left and right multiplications. Let's do it with more standard and explicit notation, which will make it harder to make mistakes:

$$L(w) = (y - Xw)^T(y - Xw) + \lambda w^Tw = y^Ty - y^TXw - w^TX^Ty + w^TX^TXw + \lambda w^Tw.$$

The $y^Ty$ is constant and its derivative is zero. The terms linear in $w$ are one another's transposes and readily yield $-2X^Ty$. The second quadratic term is just $\lambda w^Tw$ whose derivative is $2\lambda w$. The first quadratic term is similarly $w^TX^TXw$ which equals $(Xw)^T(Xw)$ whose derivative is $2X^TXw$.

So what ends up being zero is the $i$th component of $2X^TXw + 2\lambda w - 2X^Ty$ and if you like you can write $\frac{dL}{dw} = 2(X^TX + \lambda I)w - 2X^Ty$. But again you need to be very clear about what you mean by that; $\frac{dL}{dw}$ means "the $r$ such that to first order $L(w + \delta) \approx L(w) + r\delta$" and so actually the Right Thing to use for the "derivative" is the transpose of what I wrote down above.

Finishing off the correct derivation, we have

 $2X^TXw + 2\lambda w - 2X^Ty = 0$ so $(X^TX + \lambda I)w = X^Ty$ so $w = (X^TX + \lambda I)^{-1}X^Ty$.
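As a sanity check on that closed form (a sketch with made-up data; $w$, $X$, $y$, $\lambda$ as in the derivation above): the gradient $2X^TXw + 2\lambda w - 2X^Ty$ should vanish at $w = (X^TX + \lambda I)^{-1}X^Ty$, and no nearby point should achieve a lower loss.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = rng.normal(size=50)
lam = 0.5

# Closed-form ridge solution: w = (X^T X + lam I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)

# The gradient 2 X^T X w + 2 lam w - 2 X^T y should be (numerically) zero here.
grad = 2 * X.T @ X @ w + 2 * lam * w - 2 * X.T @ y
print(np.allclose(grad, 0))

# And the penalized least-squares objective should be minimized at w.
def loss(v):
    return np.sum((y - X @ v) ** 2) + lam * np.sum(v ** 2)

perturbed = [loss(w + 1e-3 * rng.normal(size=4)) for _ in range(100)]
print(all(p >= loss(w) for p in perturbed))
```

(Using `np.linalg.solve` rather than forming the inverse explicitly is the standard numerically safer way to evaluate $(X^TX + \lambda I)^{-1}X^Ty$.)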
