People comment, and quickly move on.

That's the problem, of course, and why it can't replace the mainstream sites. It's trapped in fast mode and has no endurance or cumulative effect. So it sounds like there is plenty of demand (especially allowing for how terrible Telegram is as a medium for this), it's just suppressed and fugitive - which is what we would expect from the cartel model.

At the same time, when someone gets something with meaning attached, such as a drawing they commissioned from an artist they like, or that someone gifted them, it has more weight both for themselves, as well as friends who share on their emotional attachment to it. I guess the difference is similar to that many (a few? most?) notice between a handcrafted vs an industrialized good

Ah yes, the profoundly human and irreplaceable experience of 'Paypaling some guy online $1000 for drawings of your fursona'...

How can AI ever compete with the deeply meaningful and uncommodifiable essence of the furry experience in 'commissioning from an artist you like for your friend'? Well, it could compete by 'letting you create the art for your friend instead of outsourcing it to the market'. What's more meaningful than paying an artist to make a gift for your friend? You making it yourself! That's what.

Further, I think you might've missed my point in invoking 'commoditize your complement' here. The choice is not between a world in which you experience the deep joy of handing over huge sums of money collectively to artists & middlemen, and a meaningless nihilistic wasteland of AI art where there is naught but 'atoms and the void'; it's between the commissioning world and all of the many other options, like going to conventions with your friends with the freed-up cash, or building elaborate D&D-esque campaigns or fictional worlds with your fursona character now that images of it are 'too cheap to meter', or... something. Use some imagination. I'm not a furry, I'm not going to pretend I know exactly where you will derive all of your personal meaning and what form will parasocial relationships take in the future; but I note that nowhere else in subcultures does there seem to be a similar cartel-esque community where personal relationships/identities are mediated through so much payments to artists, nor does it always seem to have been a part of furry culture. So I do not expect that to be the unchanging essence of furrydom, but a more historically contingent fact related to Internet & online payments developing faster than AI - and potentially a fast-vanishing fact, at that.


I was surprised to hear this, given how the fur flew back when we released This Pony Does Not Exist & This Fursona Does Not Exist, and how well AstraliteHeart went on to create furry imagegen with PonyDiffusion (now v6); I don't pay any attention to furry porn per se but I had assumed that it was probably going the way regular stock photos / illustrations / porn / hentai were going, as the quality of samples rapidly escalated over time & workflows developed - the bottom was falling out of the commission market with jobs cratering and AI-only 'artists' muscling in. So I asked a furry acquaintance I expected would know.

He agreed inasmuch as he said that there was a remarkable lack of AI furry porn on e621 & FurAffinity and just in general, and what I had expected hadn't happened. (Where is all the furry AI porn you'd expect to be generated with PonyDiffusion, anyway? Civitai?* That site is a nightmare to navigate, and in no way a replacement for a proper booru.) But it was not for a lack of quality.

He had a more cynical explanation, though: that despite huge demand (lots of poorer furries went absolutely nuts for TFDNE - at least, until the submissions were deleted by mods...), the major furry websites are enough of a cartel, and allied with the human artists, that they've managed to successfully censor and keep AI art out of bounds. And no AI-accepting furry website has hit any kind of critical mass, so it's broadly not a thing.

I wonder how long this can last? It doesn't seem like a sustainable cartel, as the quality continues to increase & costs crash. If nothing else, how do they intend to suppress AI video...? (There is not even a fig leaf argument that you should avoid AI video to cater to hardworking human animators, because manual animation is so colossally expensive and will never ever be realtime.)

This also makes me think that this is an unstable preference falsification setup.

And then consider the advertisers who pay the bills for e621's colossal bandwidth usage: from an advertisers' perspective, someone who sells dragon dildos or custom fursuits or convention tickets, say, propping up human artists is a waste of money that could be spent on their wares which do not have adequate AI substitutes. A smart group of furry advertisers would look at this situation and see a commoditize-your-complement play: if you can break the censorship and everyone switches to the preferred equilibrium of AI art, that frees up a ton of money. (I don't know if there are any good statistics on how much furry money gets spent on art total, but if the anecdotes about spending thousands or tens of thousands of dollars, particularly from rich techie furries, are true, then it seems like the total must be at least in the tens of millions annually.) Individual advertisers would never dare suggest this, because anyone who sticks their neck out will get it chopped off by e621 et al, pour encourager les autres, and a consortium would probably squabble and leak and not do any better.

But that means that a third party could potentially enter in and profit from breaking the equilibrium. All they need is to get big enough that they can peel off an advertiser or two, and then it's all over. So if someone like NovelAI integrated an e621-style booru and invested enough to make it a usable substitute for e621 and integrated their own furry models into it, they could make a killing in both signups & advertising on the booru.

* another datapoint: 'I have seen furry AI art on Pixiv. Where else is it? Probably on Discord, Mastodon, and Twitter. It got me curious: does furry Mastodon often allow AI art? I have checked a sample of 10 instances from Their list is in random order. (The dark mode switch isn't tristate, boo!) The result: 3 instances banned AI art outright; 4 had no rules about it; 1 required tagging it ("No untagged AI slop."); 1 explicitly allowed it "unless used for nefarious purposes"; 1 was ambiguous (requiring "bot" content to be unlisted with language that may or may not apply to AI art).'


For longtime readers of patio11 or even just his new blog, I think most of this will not be new, except for the discussion of Stockfighter: patio11 discusses how he generated realistic stock price histories for the simulated stock market (he copied real histories - I would've done something similar, but phrased it as a 'block bootstrap' or something), and also the final level of Stockfighter, which turns out to be 'detect the inside trader' and quizzes Ricki on how to do it both with and without the intended approach of 'hacking' the simulated stock market to get IDs for traders' trades & find the trader with excessive returns.

(If you are interested in Jane-Street-esque discussions of trading & adverse selection and this is too entry-level, possibly LeBron's 2019 The Laws of Trading might be a better use of reading time.)


I don't think that's clear at all. What investments have been made into GPUs specifically have been fairly minor, discussion at the state level has been general and as focused on other kinds of chips (eg. avoiding the Russian shortages) in order to gain general economic & military resilience to Taiwanese sanctions & ensure high tempo high-tech warfare, with chips being but one of many advanced technologies that Xi has designated as priorities (which means they're not really priorities) and the US GPU embargo has been as focused on sabotaging weapons development like hypersonic missiles as it is on AI (you can do other things with supercomputers, you know, and historically, that's what they have been doing).


I was surprised there was any signal here because of the "flattened logits" mode collapse effect where ChatGPT-4 loses calibration and diversity after the RLHF tuning compared to GPT-4-base, but I guess if you're going all the way up to 1.5, that restores some range and something to measure.


ChatGPT-4 just spits out a list of 'famous ML people' like 'Ilya Sutskever' or 'Daphne Koller' or 'Geoffrey Hinton' - most of whom are obviously incorrect as they write nothing like me!

To elaborate a little more on this: while the RLHF models all appear still capable of a lot of truesight, we also still appear to see "mode collapse". Besides mine, where it goes from plausible candidates besides me to me + random bigwigs, from Cyborgism Discord, Arun Jose notes another example of this mode collapse over possible authors:

ChatGPT-4's guesses for Beth's comment: Eliezer, Timnit Gebru, Sam Altman / Greg Brockman. Further guesses by ChatGPT-4: Gary Marcus, and Yann LeCun.

Claude's guesses (first try): Paul Christiano, Ajeya, Evan, Andrew Critch, Daniel Ziegler. [but] Claude managed to guess 2 people at ARC/METR. On resampling Claude: Eliezer, Paul, Gwern, or Scott Alexander. Third try, where it doesn't guess early on: Eliezer, Paul, Rohin Shah, Richard Ngo, or Daniel Ziegler.

Interestingly, Beth aside, I think Claude's guesses might have been better than 4-base's. Like, 4-base did not guess Daniel Ziegler (but did guess Daniel Kokotajlo). Also did not guess Ajeya or Paul (Paul at 0.27% and Ajeya at 0.96%) (but entirely plausible this was some galaxy-brained analysis of writing aura more than content than I'm completely missing).

Going back to my comments as a demo:

Woah, with Gwern's comment Claude's very insistent that it's Gwern. I recommended it give other examples and it did so perfunctorily, but then went back to insisting that its primary guess is Gwern.

...ChatGPT-4 guesses: Timnit Gebru, Emily Bender, Yann LeCun, Hinton, Ian Goodfellow, "people affiliated with FHI, OpenAI, or CSET". For Gwern's comment. Very funny it guessed Timnit for Beth and Gwern. It also guessed LeCun over Hinton and Ian specifically because of his "active involvement in AI ethics and research discussions". Claude confirmed SOTA.


I've replicated that on GPT-4-base as well just now, although 1 or 2 other BigBench-esque prompts I tried didn't yield it, even with high BO (they yielded GUID strings that looked like it but were not it), so you may need to replicate the exact BigBench formatting to reliably trigger it.

(Why the apparent difficulty in getting it out of ChatGPT-4 if GPT-4-base definitely has it? GPT-4o might be: a retrained model on better-filtered data or just happening to not memorize it that time; RLHFed to not yield canary strings, either from people explicitly flagging the BigBench canary or possibly the model being smart enough to realize it's not supposed to know canaries; forgetting of rare memorized strings due to heavy sparsification/quantization/distillation as we assume deployed models are generally trained using now; or just fragile in prompting and no one has tried enough yet.)


early transformatively-powerful models are pretty obviously scheming (though they aren't amazingly good at it), but their developers are deploying them anyway

So... Sydney?


Seems like the natural next step would be to try to investigate grokking, as this appears analogous: you have a model which has memorized or learned a grabbag of heuristics & regularities, but as far as you can tell, the algorithmic core is eluding the model despite what seems like ample parameterization & data, perhaps because it is a wide shallow model. So one could try to train a skinny net, and maybe aggressively subsample the training data down into a maximally diverse subset. If it groks, then one should be able to read off much more interpretable algorithmic sub-models.


Apparently. Stability claimed "numerous safeguards" starting from the data onwards, which cannot simply be disabled by 1 line of Python if nsfw(generated image) then error, as I recall, and while I can't find any specifics easily about what all of the anti-NSFW countermeasures were, I don't see anyone claiming to have trivially beaten them. If they did something like filter the training data even more heavily to eliminate all kinds of nudity & suggestive imagery & porn, then ignorance may be baked into the model's final concepts so heavily you might as well not even bother with SD3 and work on something else.

Load More