Comments

iframes sound like overkill; LW2 won't pass through the image map <map> HTML element? Then the existing popups should work with the links in it.

Though like moridinamael, I’m also not clear on whether he personally believed in things like genetic memory, though I would be interested to see sources if you have them. I assumed that it was an element he included for fictional/allegorical purposes.

Yes, we shouldn't assume an SF author endorsed any speculative proto/pseudo-science he includes. But in the case of genetic memory, we can be fairly sure that he 'believed in it' in the sense that he took it far more seriously than you or I would, and considered it a live hypothesis, because he says so explicitly in an interview I quote in the essay: he thinks genetic memory and pheromones, or something much like them, are necessary to explain things like the cohesion of mobs & social groups like aristocracies without explicit obvious status markers, or the supposed generational patterns of warfare 'spasms'. (The latter is a reference to the obscure crankery of The Sexual Cycle of Human Warfare†, which apparently deeply influenced Herbert; you won't understand all the references/influences unless you at least look at an overview of it, because it's so lulzy.)


Reading back, I see I got sidetracked and didn't resolve your main point about why the Butlerian Jihad targeted all software. The one-line explanation is: permitting any software is an existential risk because it is a crutch which will cripple humanity's long-term growth throughout the universe, leaving us vulnerable to the inevitable black swans (not necessarily AI).

First, you should read my essay, especially that Herbert interview and the Spinrad democracy footnote. If you have the time, Herbert's attitude towards computers & software is most revealed in Without Me You're Nothing, which is a very strange artifact: his 1980 technical guide/book on programming the PCs of that era. Leaving aside the wildly outdated information (which you can skip over), the interesting parts are his essays or commentaries on PCs in general, which convey his irascible humanist-libertarian attitude towards PCs as a democratizing and empowering force for independent human growth. Herbert was quite a PC enthusiast: beyond writing a whole book about how to use them, his farmstead apparently was rigged up with all sorts of gadgets and 'home automation' he had made as a hobby to help him farm and, at least in theory, be more independent & capable & a Renaissance man. (Touponce is also well worth reading.) There's a lot of supporting information in those which I won't try to get into here, but which I think supports my generalizations below.

So, your basic error is your claim that the BJ is not about AI or existential risk per se. The BJ is in fact about existential risk from Herbert's POV; it's just that the risk is much more indirect than you are thinking. It has nothing to do with signaling or arms races. Herbert's basic position is that machines (like PCs), 'without me [the living creative human user], are nothing': they are dead, uncreative, unable to improvise or grow, and constraining. (At least without a level of strong AI he considered centuries or millennia away & requiring countless fundamental breakthroughs.) They lock humans into fixed patterns. And to Herbert, this fixedness is death. It is death, sooner or later, perhaps many millennia later, but death nevertheless; and [human] life is jazz:

In all of my universe I have seen no law of nature, unchanging and inexorable. This universe presents only changing relationships which are sometimes seen as laws by short-lived awareness. These fleshly sensoria which we call self are ephemera withering in the blaze of infinity, fleetingly aware of temporary conditions which confine our activities and change as our activities change. If you must label the absolute, use its proper name: "Temporary".

Or

The person who takes the banal and ordinary and illuminates it in a new way can terrify. We do not want our ideas changed. We feel threatened by such demands. 'I already know the important things!' we say. Then Changer comes and throws our old ideas away.

And

Odrade pushed such thoughts aside. There were things to do on the crossing. None of them more important than gathering her energies. Honored Matres could be analyzed almost out of reality, but the actual confrontation would be played as it came -- a jazz performance. She liked the idea of jazz although the music distracted her with its antique flavors and the dips into wildness. Jazz spoke about life, though. No two performances ever identical. Players reacted to what was received from the others: jazz. Feed us with jazz.

('Muad'dib's first lesson was how to learn'/'the wise man shapes himself, the fool lives only to die' etc etc)

Whether it's some space plague or space aliens or sterility or decadence or civil war or the spice running out or thinking machines far in the future, it doesn't matter, because the universe will keep changing, and humans mentally enslaved to, and dependent on, their thinking machines would not. Their abilities will be stunted and wither away; they will fail to adapt and evolve and grow and gain capabilities like prescience. (Even if the thinking machines survive whatever doomsday inevitably comes, who cares? They aren't humans. Certainly Herbert doesn't care about AIs; he's all about humanity.) And sooner or later - gambler's ruin - there will be something, and humanity will go extinct. Unless they strengthen themselves and enter into the infinite open universe, abandoning delusions about certainty or immortality or reducing everything to simple rules.

That is why the BJ places the emphasis on banning anything that serves as a crutch for humans, mechanizing their higher life.* It's fine to use a forklift or a spaceship: humans were never going to hoist a 2-ton pallet or flap their wings to fly the galaxy, and those tools extend their abilities; it's not fine to ask a computer for an optimal Five-Year Plan for the economy or to pilot the spaceship, because now it's replacing the human role. The strictures force the development of mentats, Reverend Mothers, Navigators, Face Dancers, sword-masters, and so on and so forth, all of which eventually merge in the later books, evolving super-capable humans who can Scatter across the universe, evading ever new and more dangerous enemies, ensuring that humanity never goes extinct, never gets lazy, and someday will become, as the Bene Gesserit put it, 'adults', who presumably can discard all the feudal frippery and stand as mature independent equals in fully democratic societies.

As you can see, this has little to do with Confucianism, or stasis being intrinsically desirable, or it being a good thing to remove all bureaucracy (bureaucracy is just a tool like any other, to be used skillfully), or indeed all automation, etc.

* I suspect that there's a similar idea behind 'BuSab' in his ConSentiency universe, but TBH, I find those novels/stories too boring to read carefully.
† 183MB color scan: https://www.gwern.net/docs/sociology/1950-walter-thesexualcycleofhumanwarfare.pdf

It's hard to say, because no one has tracked down whether the rat story happened; although we did find some instances of very similar stories which looked real.

Selling a card game as 'non-mana' is like selling non-apples, sounds like.

I would observe that any HEC computronium planet could be destroyed and replaced with a similar amount of computronium running more efficient non-HEC computations, supporting a much greater amount of flourishing and well-being. So the real question is, why suffer a huge utility hit to preserve a blackbox, which at its best is still much worse than your best, and at its worst is possibly truly astronomically dreadful?

There's a game-theoretic component here as well: the choice to hide both encryption/decryption keys is not a neutral one. Any such civilization could choose to preserve at least limited access, and could also possibly provide verifiable proofs of what is going on inside (/gestures vaguely towards witness/functional encryption, PCP, and similar concepts). Since this is possible to some degree, choosing not to do so conveys information.

So, this suggests to me an unraveling argument: any such civilization which thinks its existence is ethically acceptable to all other civs will provide such proofs; any blackbox civ is then inferred to be one of the rest, with low average acceptability and so may be destroyed/replaced, so the civs which are ethically acceptable to almost all other civs will be better off providing the proof too; now the average blackbox civ is going to be even worse, so now the next most acceptable civ will want to be transparent and provide proof... And so on down to the point of civs so universally abhorrent that they are better off taking their chances as a blackbox rather than provide proof they should be burnt with fire. So you would then have good reason to expect any blackbox HEC civs you encounter to probably be one of the truly abominable ones.
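For concreteness, here is a toy simulation of that unraveling dynamic (a minimal sketch under assumptions I've made up for illustration: uniformly distributed 'acceptability' scores, a fixed destruction threshold, and pooled judgment of opaque civs; none of these numbers come from the argument itself):

```python
import random

random.seed(0)

# Toy model of the unraveling argument. Assumptions (all illustrative):
# each civ has an 'acceptability' score in [0,1]; disclosure reveals the
# true score; an opaque civ is judged by the opaque pool's average; any
# civ revealed or inferred to be below DESTROY_BELOW gets destroyed.
DESTROY_BELOW = 0.2
scores = [random.random() for _ in range(100_000)]
opaque = set(range(len(scores)))

while True:
    pool_avg = sum(scores[i] for i in opaque) / len(opaque)
    # A civ discloses iff its true score beats the pooled judgment it would
    # otherwise receive -- unless disclosure would guarantee destruction,
    # in which case it takes its chances as a blackbox.
    leavers = {i for i in opaque
               if scores[i] > pool_avg and scores[i] >= DESTROY_BELOW}
    if not leavers:
        break
    opaque -= leavers

print(f"opaque pool average acceptability: {pool_avg:.3f}")
print(f"best civ still hiding: {max(scores[i] for i in opaque):.3f}")
# The pool unravels until essentially only civs below the destruction
# threshold remain opaque: any blackbox you meet is probably abhorrent.
```

Running this, the opaque pool's average collapses well below the destruction threshold, which is the point: opacity itself ends up screening for the worst civs.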

See my earlier comment on this question, where I argue no, precisely for the same reasons (ie. if the generated samples are indistinguishable from human samples and 'pollute' the dataset, then mission accomplished).

An interesting example of what might be a 'name-less style' in a generative image model, Stable Diffusion in this case (DALL-E 2 doesn't give users the necessary access to experiment with this sort of thing): what the discoverer calls the "Loab" (mirror) image, for lack of a better name - what text prompt, if any, this image corresponds to is unknown, since it was found by negating a text prompt & searching.
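For anyone who wants to poke at prompt negation themselves, here is a minimal sketch using the public diffusers Stable Diffusion pipeline (my assumption about tooling; the discoverer's exact model, prompt, and weighting scheme haven't been published, so the model ID and the negated prompt below are placeholders):

```python
# Minimal negative-prompt sketch with HuggingFace diffusers. The model id
# and the negated prompt are illustrative assumptions only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# An empty positive prompt plus a negative_prompt steers sampling *away*
# from the negated text's region of image-space; repeating this over many
# seeds is the 'negation of a text prompt & search' described above.
for seed in range(8):
    g = torch.Generator("cuda").manual_seed(seed)
    out = pipe(prompt="",
               negative_prompt="some text prompt to negate",  # placeholder
               generator=g)
    out.images[0].save(f"negated-{seed}.png")
```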

'Loab' is an image of a creepy, desaturated old woman with ruddy cheeks in a wide face, which, when hybridized with other images, reliably induces more images of her, or images recognizably in the 'Loab style' (extreme levels of horror, gore, and old women). This is a little reminiscent of the discovered 'Crungus' monster, but 'Loab style' can happen, they say, even several generations of image-breeding later, when any obvious part of Loab is gone - which suggests to me there may be some subtle global property of descendant images which pulls them back into Loab-space and makes it 'viral', if you will. (Some sort of high-frequency non-robust or adversarial or steganographic phenomenon?) Very SCP.

Apropos of my other comments on weird self-fulfilling prophecies and QAnon and stand-alone complexes, it's also worth noting that since Loab is going viral right now, Loab may be a name-less style now, but in future image-generation models feeding on the updating corpus, it (like Crungus) may come to have a name - 'Loab' - because of all the discussion & sharing.

Considering what a country can do in a decade does make sense. But it is still relatively short compared to multi-millennia evolutionary timescales.

I'm not sure what you mean here. If you want to incorporate all of the evolution before that into that '1.4 billion' multiplier, making it thousands of times larger, that doesn't make human brains look any more efficient.

Humans produce Go professionals as a side product, or as one mode of answering the question of life. Even quite strict Go professionals do stuff like prepare meals, file taxes, and watch television.

All of those are costs and disadvantages to the debit of human Go FLOPS budgets; not credits or advantages.

On that "country level" we should also consider the model's hyperparameter tuning and such.

Sure, but that is a fixed cost which is now in the past and need never be paid again. The MuZero code is written and the hyperparameters are tuned; they are amortized over every year that the trained MuZero model exists, so as humans turn over at the same cost every era, the DL R&D cost approaches zero and becomes irrelevant. (Not that it was ever all that large, since the total compute budget for such research tends to be more like 10-100x the final training run, and can be <1x in scaling research, where one pilots tiny models before the final training run: T5 and GPT-3 did that. So it is irrelevant compared to the factors we are talking about, like >>10,000x.)
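To make the amortization arithmetic concrete, a toy calculation (all numbers are made-up round placeholders, just restating the logic above):

```python
# Toy amortization arithmetic; all numbers are illustrative placeholders.
train_cost = 1.0      # final training run, in arbitrary compute units
rnd_overhead = 10     # research overhead: ~10-100x the final run (per above)
one_off = train_cost * (1 + rnd_overhead)

# The trained model is reused indefinitely, so its effective cost/year -> 0;
# humans re-pay their full training cost every generation, so theirs doesn't.
for years in (1, 10, 100, 1_000):
    print(f"after {years:>5} years of reuse: amortized cost/year = {one_off / years:.4f}")
```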

"Do go really well and a passable job at stereoscopic 3d vision" is a different task than just "Do go really well".

But that is not a task anyone has set, or paid for; no one cares in the slightest whether Lee Sedol can see stereoscopic 3D images.

Humans being able to do ImageNet classifications without knowing to prepare for that specific task is quite a lot more than just having the capability.

I think you are greatly overrating human knowledge of the 117 dog breeds in ImageNet, and in any case, zero-shot ImageNet is pretty good these days.
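('Pretty good' meaning eg. CLIP-style models. As a concrete illustration of zero-shot classification, here is a minimal sketch using a public CLIP checkpoint via HuggingFace transformers; the model ID, the three breed labels, and the image path are stand-ins I picked for illustration:)

```python
# Zero-shot breed classification with CLIP; model id, labels, and image
# path are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["Siberian husky", "Alaskan malamute", "Samoyed"]  # 3 of ImageNet's many breeds
image = Image.open("dog.jpg")  # placeholder input

inputs = processor(text=[f"a photo of a {l}" for l in labels],
                   images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
print({l: round(float(p), 3) for l, p in zip(labels, probs)})
# No breed-specific training: the labels are matched purely zero-shot.
```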

In contrast, most models get an environment or data that is very pointedly shaped to be helpful for their target task.

Again, a machine advantage and a human disadvantage.

Human filtering is also pretty much calibrated to human ability levels, ie. a good painter is a good human painter. Thus the "miss rate" from trying to gather the cream of the crop doesn't really show that it would be a generally unreliable method.

I don't know what you mean by this. The machines either do or do not pass the thresholds that varying numbers of humans fail to pass; of course, you can have floor effects where the tasks are so easy that every human and machine can do them, and so there is no human penalty multiplier, but there are many tasks of considerable interest where that is obviously not the case and the human inefficiency is truly exorbitant and left out of your analysis. Chess, Go, Shogi, poetry, painting - these are all tasks that exist, and there are more, and will be more.
