I agree with your worry about reducing every artistic input from a human to a natural language prompt. I think others share this worry and will build generative AI tools to address it. Some image generation software already allows artistic input (sketching something for the software to detail). I don't think it exists yet, but I know it's a goal, and I'm looking forward to music generation AI that takes humming or singing as input. These tools will be further refined to allow editing portions of the resulting art, rather than producing a whole new work with each prompt.
Generative AI can also be used at the detailed level, to aid existing experts. Using AI to generate tones for music, sketches for visual art, and so on may preserve our interest in the details, and seems likely to preserve artists who at least intuit tone and color theory.
The loop of learning and engaging by perturbing may be enhanced by doing that perturbation at a broad scale, at least initially. Changing a prompt and getting a whole new piece of art is quite engaging. I see no reason why interest in the details might not be driven by an ability to work with the whole, perhaps better than interest in producing the whole is driven by working to produce the components. Learning to sketch before producing any satisfying visual art is quite frustrating, as is learning to play an instrument. The idea that we won't get real experts who learn the details just because they started at the level of the whole seems possible; as you put it, it's not an unreasonable worry. But my best guess would be that we get the opposite: a world in which many more people at least intuit the detailed mechanics of art because they've tried to make art themselves.
Somewhat off of your point: I expect this question to be less relevant than the broader question "what will humans do once AGI can do everything better?". The idea that we might have many years, let alone generations, with access to generative AI but not AGI strikes me as quite odd. While it's possible that the last 1% of cognitive ability (agency, reflection, and planning) will remain the domain of humans, it seems much more likely that such predictions are driven by wishful thinking (technically, motivated reasoning).
If you want to see a good example of human-in-the-loop visual augmentation, check out this tool that I worked on at one point:
https://www.vizcom.ai/
They are now capable of taking a sketch, making a 2D image with prompt guidance, then generating 3D objects from that, and creating 2D images from composable 3D objects. I think it's really about the workflow and the control we wish for. I'm an artist myself, and I find this an excellent tool. We can choose which parts we want to do ourselves, and which parts we want machines to do.
So long as generative AI is just a cognitive prosthesis for humans, I think the situation is similar to social media, or television, or print, or writing: something is lost, something is found. The new medium has its affordances, its limitations, its technicalities, and it does create a new layer of idiocracy; but people who want to learn can learn, and people who master the novelty and become power users of the new medium can do things that no one in history was previously able to do. In my opinion, humanity's biggest AI problem is still the risk of being completely replaced, not of being dumbed down.
Whenever I learn about a thing from an LLM, my understanding is so shallow. The LLM is bad enough that I will still go use better resources / direct interaction, but I worry I might soon land in an unhappy medium where my default mode is just pretty low depth :(
What's the fix?
For what it's worth, I think even current, primitive-compared-to-what-will-come LLMs sometimes do a good job of (choosing words carefully here) compiling information packages that a human might find useful in increasing their understanding. It's very scattershot and always at risk of unsolicited hallucination, but in certain domains that are well and diversely represented in the training set, and for questions that have more or less objective answers, AI can genuinely aid insight.
The problem is the gulf between can and does. For reasons elaborated in the post, most people are disinclined to invest in deep understanding if a shallower version will get the near-term job done. (This is in no way unique to a post-AI world, but in a post-AI world the shallower versions are falling from the sky and growing on trees.)
My intuition is that the fix would have to be something pretty radical involving incentives. Non-paternalistically, we'd need to incentivise real understanding and/or disincentivise the shortcuts. Carrots usually being better than sticks, perhaps a system of micro-rewards for those who use GenAI in provably 'deep' ways? [awaits a beatdown in the comments]
All I can think of is how, with current models plus a little more Dakka, GenAI could deeply research a topic.
It wouldn't be free. You might have to pay a fee, with varying package prices. The model then buys a one-time task license for, say, 10 reference books on the topic. It reads each one, translating it from "the original text and images" to a distilled version that focuses on details relevant to your prompt.
It assembles a set of "notes", where each note directly cites the text it was drawn from. (Another model session validates each citation.)
It constructs a summary or essay or whatever form the output needs to take from the notes.
Master's-thesis grade in 10 minutes and under $100...
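For concreteness, here is a minimal sketch of that workflow in Python. Everything in it is hypothetical: call_llm stands in for whatever model API is used, the sources are plain strings, and the parsing and validation are deliberately crude.

```python
# Hypothetical sketch of the pay-per-source deep-research workflow described above.
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (wire up an actual API here)."""
    raise NotImplementedError

@dataclass
class Note:
    source_title: str
    quote: str      # verbatim passage the note cites
    distilled: str  # detail relevant to the user's topic

def distill_source(topic: str, title: str, text: str) -> list[Note]:
    """Ask the model for topic-relevant notes, each tied to a direct quote."""
    raw = call_llm(
        f"Topic: {topic}\nSource: {title}\n{text}\n"
        "Extract the relevant details, each with the exact passage it comes from."
    )
    # Parsing of the model's output is elided; assume one Note per extracted detail.
    return [Note(source_title=title, quote=raw, distilled=raw)]

def validate(note: Note, text: str) -> bool:
    """A second pass checks that the cited passage really appears in the source."""
    return note.quote in text  # a literal check; a separate model session could judge paraphrases

def deep_research(topic: str, sources: dict[str, str]) -> str:
    """Distill the licensed reference texts into validated notes, then write from them."""
    notes = [
        note
        for title, text in sources.items()
        for note in distill_source(topic, title, text)
        if validate(note, text)
    ]
    bibliography = "\n".join(f"- {n.distilled} ({n.source_title})" for n in notes)
    return call_llm(f"Write an essay on {topic} using only these notes:\n{bibliography}")
```

The fee, the licensing step and the "under $100" economics live outside the code; the point is just that each claim in the final essay traces back to a passage a second pass has checked.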
The post seems to assume a future version of generative AI that no longer has the limitations of the current paradigm which obligate humans to check, understand, and often in some way finely control and intervene in the output, but where that tech is somehow not reliable and independent enough to be applied to ending the world, and somehow we get this long period where we get to feel the cultural/pedagogical impacts of this offloading of understanding, where it's worth worrying about, where it's still our problem. That seems contradictory. I really don't buy it.
I don't think I'm selling what you're not buying, but correct me if I misrepresent your argument:
The post seems to assume a future version of generative AI that no longer has the limitations of the current paradigm which obligate humans to check, understand, and often in some way finely control and intervene in the output...
Depending on your quality expectations, even existing GenAI can make good-enough content that would otherwise have required nontrivial amounts of human cognitive effort.
but where that tech is somehow not reliable and independent enough to be applied to ending the world...
Ending the world? Where does that come in?
and somehow we get this long period where we get to feel the cultural/pedagogical impacts of this offloading of understanding, where it's worth worrying about, where it's still our problem. That seems contradictory.
If your main thrust is that by the time GenAI's outputs are reliable enough to trust implicitly we won't need to maintain any endogenous understanding because the machine will cook up anything we can imagine, I disagree. The space of 'anything we can imagine' will shrink as our endogenous understanding of concepts shrinks. It will never not be 'our problem'.
even existing GenAI can make good-enough content that would otherwise have required nontrivial amounts of human cognitive effort
This doesn't seem to be true to me. Good enough for what? We're still in the "wow, an AI made this" stage. We find that people don't value AI art, and I don't think that's because of its unscarcity or whatever; I think it's because it isn't saying anything. It either needs to be very tightly controlled by an AI-using human artist, or the machine needs to understand the needs of the world and the audience, and as soon as machines have that...
Ending the world? Where does that come in?
Every communication assumes that the point it's making is important and worth reading in some way (the cooperative maxim of quantity). I'm contending that that assumption doesn't hold here, in light of what seems likely to happen immediately or shortly after the point starts to become applicable to the technology. I've explained why, but I can understand if it's still confusing, because:
The space of 'anything we can imagine' will shrink as our endogenous understanding of concepts shrinks. It will never not be 'our problem'
is true, but that doesn't mean we need to worry about this today. By the time we have to worry about preserving our understanding of the creative process against automation of it, we'll be on the verge of receiving post-linguistic knowledge transfer technologies and everything else, quicker than the automation can wreak its atrophying effects. Eventually it'll be a problem that we each have to tackle, but we'll have a new kind of support; paradoxically, learning the solutions to the problem will not be our problem.
(Your response and arguments are good, so take the below in a friendly and non-dogmatic spirit)
Good enough for what?
Good enough for time-pressed people (and lazy and corrupt people, but they're in a different category) to have a black-box system do things for them that they might, in the absence of the black-box system, have invested effort to do themselves, and in doing so (intentionally or not) increased their understanding, opening up new avenues of doing and understanding.
We're still in the "wow, an AI made this" stage.
I'm pretty sure we're currently exiting the "wow, an AI made this" stage, in the sense that 'fakes' in some domains are approaching undetectability.
We find that people don't value AI art, and I don't think that's because of its unscarcity or whatever; I think it's because it isn't saying anything
I strongly agree, but it's a slightly different point: that art is arguably not art if it was not made by an experiencing artist, whether flesh or artificial.
My worry about the evaporation of (human) understanding covers science (and every other iterably-improvable, abstractable domain) as well as art, and the impoverishment of creative search space that might result from abstraction-offloading will not be strongly affected by cultural attitudes toward proximately AI-created art — even less so when there's no other kind left.
the machine needs to understand the needs of the world and the audience, and as soon as machines have that...
It's probably not what you were implying, but I'd complete your sentence like so: "as soon as machines have that, we will have missed the slim chance we have now to protect human understanding from being fully or mostly offloaded."
but where that tech is somehow not reliable and independent enough to be applied to ending the world...
I'm afraid I still can't parse this, sorry! Are you referring to AI Doom, or the point at which A(G)I becomes omnipotent enough to be able to end the world, even if it doesn't?
By the time we have to worry about preserving our understanding of the creative process against automation of it, we'll be on the verge of receiving post-linguistic knowledge transfer technologies and everything else, quicker than the automation can wreak its atrophying effects.
I don't have any more expertise or soothsaying power than you, so in the absence of priors to weight the options, I guess your opposing prediction is as likely as mine.
I'd just argue that the consequences of my prediction are bad enough to motivate us to try to stop it coming true, even if it isn't guaranteed to.
In which a case is made for worrying about the AI Prompt Box.
Preamble
Technology serves to abstract away nonessential aspects of creative activities, giving us more direct access to their conceptual cores. Few audio engineers pine for the days of flaky reel-to-reel tape machines that unspool at the worst moments; few graphic designers long to swap their MacBooks for bulky old photostat rigs; few mathematicians grieve for the slide rule or the log table.
Yet domain understanding survived those leaps to digital abstraction. Music producers working entirely 'in the box' still know and/or intuit dynamics, frequency equalisation, melody and harmony. Photoshop natives still know and/or intuit colour theory, visual communication, the rules of composition. Recent mathematics and physics graduates experience the beauty of Euler's identity, its vast arms linking trigonometry to arithmetic to analysis to the complex plane, just as vividly as their predecessors did a century ago. Indeed, with the time these modern creatives save by not having to re-ravel 1/4" tape, wrestle with Zipatone and fixative or pore over columns of logarithms (to say nothing of their access to new tools), they can elevate their understanding of their fields' fundamentals[1].
The GenAI Prompt Box declares itself the asymptote of this march to abstraction: a starry empyrean of 'pure', unfettered creative actualisation, in every medium, on the instant, where all pesky concrete details are swept away and the yellow brick road to Perfect Self-Expression is illuminated like a kaleidoscopic runway.
Here are some problems with that dream.
The Worse Angels of Our Nature
Consider the normal distribution of intellectual curiosity in the human population.
The long tail on the right is the minority whose genius and drivenness guarantee that they will seek and find whatever high-hanging epistemic fruit it was their destiny to pluck, no matter how alluring the paths of less resistance on offer.
On the left is the opposing minority whom no amount of goading could make curious. They want simple, earthly pleasures; they are not prepared to invest near-term time towards a long-term goal. They don't care how things work. (And that's fine.)
The majority of us are in the middle, on the bell: we who are naturally curious and eager to understand the world, but also fallible and time-poor and often money-poor. On a bad or short or busy day, we just don't have the mental wherewithal to engage with the universe's underpinnings. But if we happen to read a good pop-sci/cultural history book on a good day, and the author blesses us with a (temporary, imperfect) understanding of some heady concept like Special Relativity or convolution or cubism, our brains sparkle with delight. We are rewarded for our effort, and might make one again.
For us willing-of-spirit but weak-of-flesh, the multimodal Prompt Box is a seductive, decadent thing. Silver-tongued, it croons an invitation to use it, abuse it, at no cognitive cost, with no strings. It will 'write' and 'record' a song for us (which we can then claim as ours), making zero demands on our lyrical or music-theoretical acumen. It will 'paint' a picture with content and style of our choosing (but its doing), letting us ignore such trifles as vanishing point, brush technique, tonal balance. It will write copyright-free code for a fast Fourier transform in n dimensions, shielding us from the tediums of discretisation, debugging, complex mathematics and linear algebra.
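For a sense of how complete that last kind of shielding already is, here is a sketch of the offloaded version, using numpy's built-in fftn rather than anything a Prompt Box in particular would emit:

```python
# An n-dimensional FFT with zero engagement with discretisation,
# complex analysis or linear algebra: the library understands so we don't have to.
import numpy as np

signal = np.random.rand(8, 8, 8)         # any n-dimensional array will do
spectrum = np.fft.fftn(signal)           # forward transform, one call
reconstructed = np.fft.ifftn(spectrum)   # inverse transform, one call

assert np.allclose(signal, reconstructed)  # round-trips to within numerical error
```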
This will feel liberating for a while, giving us bell-dwellers what seem to be new, godlike powers: to bypass the nuisance of technical mastery, to wire the inchoate creative urge directly to its fulfilment. But each time we reach for the Prompt Box — where in the AI-less counterfactual we stood a nonzero chance of getting our thumbs out and at least trying to understand the concepts behind what we want — we will have a) lost another opportunity to understand, b) received a superficially positive, encouraging reward for our decision to let the machine do the understanding for us, increasing the probability of our doing so again.
The curiosity distribution will drift — maybe only slightly, but inevitably — toward the incurious.
After years or generations of this, our endogenous ability to understand will atrophy.
Lingua Franca
Language, for all its capacity for beauty and meaning, is a spectacularly low-bandwidth channel[2].
Even we, who invented it and use it to communicate intersubjectively, regularly feel language's limitations. When we pause or stammer because we can't quite find the words to express an idea shining clearly in our minds; when we struggle to express verbally our love (or hate) for a work of art or a person because no words are sharp or big or hot or fast or loud enough to do the job; when misunderstandings lead to violence or death.
We even use language to talk to ourselves, most of the time. But the famous flow state, coveted by artists[3] and scientists and meditators of every stripe, lets us dispense with the bottleneck of translation between verbal language and the brain's native encoding. In the flow, time feels irrelevant. Thinking and acting feel frictionless; superconductive; efficient[4].
It's not crazy to suspect that deep understanding and deep creativity mostly or only occur in this 'superconducting' phase; the non-flow, language-first state might be prohibitively slow.
It's therefore not crazy to worry that a human/AI collaboration in which all inter-system communication is by natural language might fundamentally lack the access to flow that a purely human system has[5].
Baby With Bathwater
It is astonishing that current AI, which in its descendants' shadow will seem laughably primitive, can already shoulder cognitive burdens that until recently required significant human effort.
From afar, this looks like a clear net good. But up close, its trajectory is crashing through a checkpoint beyond which something essential will be leached from us.
Abstracting is understanding. To see how a particular thing generalises to a broader concept is to understand at least some of that concept. And this understanding is cumulative, iterative. It begets more of itself.
Constraints are a kind of paradox here. On one hand, to understand a conceptual model — its dynamics, its rules, its quirks, its character — is to be constrained by the model being thus and not otherwise. But one who has real insight into a system, constraints and all, commands a far bigger, richer search space than one who doesn't. (The list of scientific and cultural breakthroughs from people with no relevant training or practice is short.)
Via the 'unconstrained' Prompt Box, we are offloading not just the tedious concrete instantiation of interesting concepts, but also the instructive process of abstracting those instantiations into those concepts, and therefore understanding itself, to machines whose methods of abstraction are almost totally opaque to us.
It's too much delegation. We're going from hunger to sausage without touring the factory, and the sausage courier compliments us on our excellent butchery. Is that a system for improving sausages?
Enter the Steel Men
"Humans have been outsourcing understanding to other humans since the dawn of civilisation, and expressing in natural language what they want from the rented expertise. A Madison Avenue executive with a great campaign idea farms its execution out to a design team, and they communicate in English. Is GenAI any different?"
It is. Humans have been outsourcing their understanding to other humans who understand, and who iterate upon that understanding. The amount of understanding in the world[6] doesn't fall in this scenario, as it would if the advertising exec enlisted a GenAI to create their campaign; it is merely distributed. And humans regularly augment natural language with other communication channels: architectural plans, equations, body language, chemical formulae[2].
"How do you know that a world of fully machine-offloaded understanding is bad?"
I don't, but there's plenty of circumstantial evidence in recent history. Strong correlations between obscurantism and misery (Mao's Cultural Revolution, Stalin's purges, the Cambodian genocide, McCarthyism, Q-Anon, ISIS, the Taliban, the Inquisition, the Third Reich, etc.) suggest that a drop in understanding has catastrophic consequences.
Even if GenAI really understands the concepts whose concrete instantiations it abstracts away from us (and many experts doubt this), it cannot share back this understanding with us. The iterative chain is broken.
"Maybe, freed from the shackles of having to learn how what we know about now works, we'll be able to learn how wild new things work. Things you and I can't even imagine."
And then slothful/distractible human nature will kick in (see the "Worse Angels" section above), and we'll get the machines to abstract away understanding of the wild new things as well. Also, in the absence of human understanding and with the wild new things being unimaginable, who will discover them?
"In the preamble, you give examples of insight surviving the radical abstracting-away of some of its concrete prerequisites by technology, and even of this abstracting-away freeing up more resources for further understanding. Why are you now saying that in the limit, abstracting-away goes in the opposite direction?"
The Prompt Box has already ferried us past the checkpoint that reverses these dynamics, beyond which offloaded abstraction no longer facilitates new understanding. Typing "a beautiful pastoral nighttime scene, in the style of Cézanne, with robot farmers tending to giant cow/spider hybrids, three neon moons and a wooden space station in the sky, ultra detailed", and getting a convincing depiction back, gives zero insight into art or art history or painting or arachnology or mammalogy or agriculture or robotics or hybridisation or fluorescence or astronomy or orbital mechanics or carpentry or computer graphics or generative artificial intelligence.
Conclusion
A curious human begins the journey to understanding by randomly perturbing an existing concrete system. The relationship between the perturbations and their consequences reveals regularities: the human has discovered an abstraction. Informed by this abstraction, the human builds a new concrete system and perturbs it, this time a little less randomly. The new system performs a little better.
There is constant interaction and feedback between the concrete and abstract levels. That is what unites and defines understanding, insight, play, imagination, iterative improvement, wonder, science, art, and basically all the Good Stuff humans can offer the world, sparsely distributed among the other, mostly horrible stuff we're currently doing to it.
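For the sake of concreteness, here is a toy rendering of that loop in Python: a black-box 'concrete system', random pokes, a crude regularity extracted from the results, and a next round of pokes steered by it. It is a sketch of the loop's shape, not a model of cognition.

```python
# Toy version of the perturb -> abstract -> perturb-less-randomly loop described above.
import random

def system(x: float) -> float:
    """The concrete system being played with (the 'human' never sees this formula)."""
    return -(x - 3.0) ** 2

x, spread = 0.0, 2.0
for _ in range(20):
    # Perturb: poke the system at random around the current guess.
    outcomes = [(p, system(p)) for p in (x + random.uniform(-spread, spread) for _ in range(10))]
    # Abstract: notice the regularity in which pokes did better.
    best, _ = max(outcomes, key=lambda pair: pair[1])
    # Rebuild and perturb again, a little less randomly this time.
    x, spread = best, spread * 0.8

print(f"settled near x = {x:.2f}")  # converges toward the system's sweet spot at x = 3
```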
By indiscriminately abstracting everything away, the Prompt Box will push us away from this tightly-coupled, looping system that is the source and hallmark of ingenuity.
We might want to push back.
[1] While leaving plenty of nonjudgemental space for purists and nostalgics to play around with vinyl records, film photography, sextants and Letraset if they please.
[2] This is a problem specific to models with textual front-ends; it could be resolved by a multimodal front-end and/or grim, neurally invasive means.
[3] Even writers, strangely.
[4] It is speculative and not universally agreed that flow state is less verbal than normal brain states, but the distinction in Gold and Ciorciari's paper between the explicit/linguistic and implicit/flow processes strongly suggests it.
[5] The topology of an LLM front-end collaboration is flat, its time steps discrete:
Human types text; machine renders media.
Human types text; machine refines media.
Human types text; machine refines media.
We know little about how the brain does ideation, but it's probably continuous in time and it probably mobilises feedback loops that make it an altogether less linear affair than the stilted conversation between carbon Promptsmith and silicon magus.
[6] If you think LLMs (or their successors) have or will develop a real, recursive, symbolic world-model, preface 'world' with 'human'.