I was about to say the same (Gary Marcus' substack here).
In defense of Marcus, he often complains about AI companies refusing to give him access to their newer models. If your language/image model is really as awesome as advertised, surviving the close scrutiny of a skeptical scientist should not be a problem, but apparently it is.
You are not.
Just to engage with this idea a bit:
There's a popular order: opinion, size, physical quality or shape, age, colour, origin, material, purpose. What created this order? I don't know, but I suspect certain cognitive biases could help explain it.
How would you model the fact that non-English languages use different orders?
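The ordering rule above can be made concrete with a toy sketch. This is only an illustration, not a real NLP model: the mini-lexicon mapping each adjective to a category is hypothetical, and real usage is far messier than a strict sort.

```python
# The popular English adjective order, as given above.
ORDER = ["opinion", "size", "quality", "age", "colour", "origin", "material", "purpose"]
RANK = {cat: i for i, cat in enumerate(ORDER)}

# Hypothetical mini-lexicon, for illustration only.
CATEGORY = {
    "lovely": "opinion",
    "little": "size",
    "old": "age",
    "green": "colour",
    "French": "origin",
    "silver": "material",
}

def order_adjectives(adjectives):
    """Sort adjectives by their category's position in the popular order."""
    return sorted(adjectives, key=lambda a: RANK[CATEGORY[a]])

print(order_adjectives(["green", "lovely", "old", "little", "French"]))
# ['lovely', 'little', 'old', 'green', 'French']
```

Modeling a non-English language would then just mean swapping in a different `ORDER` list (or abandoning the strict-sort assumption entirely where the language allows freer placement).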
Would criminals form their own security agencies?
Does the very concept of "criminals" even make sense in this context? How can you distinguish criminals from ordinary people if no one has formal obligations?
Suppose someone in a gray pinstripe suit shows up at your door offering "protection" in exchange for a monthly fee. How are the anarchists supposed to handle this situation? If the answer is "pay someone else for actual protection", then this system sounds awfully similar to a world ruled by the mafia, just with "security agencies" substituted for "families"...
At least for actual Magic cards, it's not just a matter of consistency in some abstract sense. Cards from the same set need to relate to each other in very precise ways and the related constraints are much more subtle than "please keep the same style".
Here you can find some examples of real art descriptions that were used for real cards (just google "site:magic.wizards.com art descriptions" for more). I could list further constraints that are implicit in those already long descriptions. For example, consider the Cult Guildmage in the fourth image. When the art description lists "Guild: Rakdos", it is implicitly asking for the whole card to have a black-red tone, and possibly for the guild logo to appear somewhere in the picture (the guild logo looks like this; note how the guildmage wears one).
I don't want to dispute that AI-generated artwork is very cheap and can be absolutely stunning, but I still predict that AI as available today would do a terrible job if used to replace human illustrators for Magic cards (you might have a better time using AI artwork for a brand-new trading card game, however).
I've never encountered this issue with figures and drawings. Maybe I haven't read enough figure-packed textbooks, but I don't remember ever losing the reference to a figure. I do remember losing the reference to an equation when reading linear programming models or the like, but there we are talking about ~30 equations stacked one above the other. And since I mostly read these in academic papers as PDFs, which already include suitable hyperlinks, I would say the overall issue is almost nonexistent.
Asimov may not have been a professional forecaster, but he was still someone who had thought a lot about the future in the most realistic way possible (and, if I remember correctly, he was invited on TV quite often to talk about it), especially considering that he also wrote a staggering amount of scientific nonfiction. He may be more famous as a science fiction author, but he was also a very well-known futurologist, not just some random smart guy who happened to make some predictions. I would be quite surprised to hear about anyone else from the 60s with a better futurology record.
That said, I am still quite convinced that the average smart person would make terrible predictions about the long-term future. The best example I can offer is this: a rare set of illustrations printed in France in 1899 imagining what France would look like in the year 2000. Of course, the vast majority of those predictions were comically bad.
It is worth noting that we mainly know about these postcards because Asimov himself published a book about them in the 80s (this is not a coincidence, because nothing is ever a coincidence).
I would like to express serious disappointment at astrological compatibility being labeled as "worth checking" on LW. The only way I can see it having any effect is the scenario where your partner believes in astrology so strongly that the belief in incompatibility becomes a self-fulfilling prophecy.
I suppose you are talking about this quote from the Sequences (A Priori):
James R. Newman said: "The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2."
Judging from his recent post on AlphaCode, I would say that Scott Aaronson is probably more concerned about AI risk now.