AI can do longer and longer coding tasks.
But this is not a good category; it contains both [the type of long coding task that involves having to creatively figure out several points] and other long coding tasks. So the category does not support the inference; it just makes it easier for AI builders to run... some funny subset of "long coding tasks".
The "Boggle" section of this could almost be a transcript from the CFAR class. Should be acknowledged.
I don't necessarily disagree with what you literally wrote. But also, at a more pre-theoretic level, IMO the sequence of events here should be really disturbing (if you haven't already been disturbed by other similar sequences of events). And I don't know what to do with that disturbedness, but "just feel disturbed and don't do anything" also doesn't seem right. (Not that you said that.)
(I think I'll go with "alloprosphanize" for now... not catchy but ok. https://tsvibt.github.io/theory/pages/bl_25_10_08_12_27_48_493434.html )
Exoclarification? Alloclarification? Democlarification (dēmos - "people")?
These are good ideas, thanks!
Ostentiation:
So there's steelmanning, where you construct a view that isn't your interlocutor's but is, according to you, more true / coherent / believable than your interlocutor's. Then there's the Ideological Turing Test, where you restate your interlocutor's view in such a way that ze fully endorses your restatement.
Another dimension is how clear things are to the audience. A further criterion for restating your interlocutor's view is the extent to which your restatement makes it feasible / easy for your audience to (accurately) judge that view. You could pass an ITT without especially hitting this bar. Your interlocutor's view may have an upstream crux that ze doesn't especially feel has to be brought out (so, not necessary for the ITT), but which is the most cruxy element for most of the audience. You can pass the ITT while emphasizing that crux or while not emphasizing that crux; from your interlocutor's point of view, the crux is not necessarily that central, but is agreeable if stated.
A proposed term for this bar of exposition / restatement: ostentiate / ostentiation. Other terms:
is a fine example of thinking you get when smart people do evil things and their minds come up with smart justifications why they are the heroes
I want to just agree because fuck those guys, but actually I think they're also shit justifications. A good justification might come from exploring the various cases where we have decided not to make something, analyzing them, and then finding that there's some set of reasons that those cases don't transfer as possibilities for AI stuff.
This seems fine and good--for laying some foundations, which you can use for your own further theorizing, which will make you ready to learn from more reliable + rich expert sources over time. Then you can report that stuff. If instead you're directly reporting your immediately-post-LLM models, I currently don't think I want to read that stuff, or at least I'd want a warning. (I'm not necessarily pushing for some big policy; that seems hard. I would push for personal standards though.)
(For reference, 135 is 2.33 SDs above the mean, which works out to about 1 in 100, i.e. you're the WAISest person in the room with 100 randomly chosen adults. Cf. https://tsvibt.blogspot.com/2022/08/the-power-of-selection.html#samples-to-standard-deviations )
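A quick sanity check of that arithmetic (a sketch assuming the standard mean-100, SD-15 scale; the exact figure depends on the test's norms):

```python
# Rough check, assuming a normal distribution with mean 100 and SD 15.
from scipy.stats import norm

z = (135 - 100) / 15        # ~2.33 SDs above the mean
tail = norm.sf(z)           # upper-tail probability at that z-score
print(f"z = {z:.2f}, P(score > 135) = {tail:.4f}, i.e. about 1 in {1 / tail:.0f}")
# z = 2.33, P(score > 135) = 0.0098, i.e. about 1 in 102
```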