I always assumed it's because, in Western society, the career of "artist" is smeared and sneered at as "not a real job". My creative friends often mention the risk of being "taken advantage of" when it comes to payment and remuneration - that because they love what they do, they're expected to "do it for free" or accept being underpaid.
And in society at large there's this idea that you get paid to suffer - jobs aren't expected to be appealing or fun, so you shouldn't expect to be paid for making art. This does dovetail with the financial precarity you're alluding to - but I believe there's more malice or disdain behind the reaction than just neutral risk assessment. So when A.I. comes and gobbles up paid opportunities for artists, the view is "well, that wasn't a serious job anyway - it's just a hobby, a passion. You don't get paid to do what you love, you get paid to do what you hate." rather than "it's a valid job, but good luck making a living".
To be honest, I've never actually looked into this to back it up. I'm making a lot of assumptions here and putting a lot of thoughts/words into the mouth of a nebulous group known as "society". The closest I've come is reading some historical analysis of the shift in views of genius (including artistic genius) from the Romantic era into the industrial age, which underscores a shift in sentiment from genius being driven by passion or inspiration (passive) to patience and discipline (active).
What are the common counterpoints to “there’s nothing I, personally, can do about p(doom) or various X-Risks so I just don’t think about it”?
For example, I have a liberal arts education, no social influence, and no platform from which to beat the drum about specific risks. I feel no more likely to effect change in, say, the development of or policy around misaligned superintelligent A.I. than I do in something like the cost of rent in my country, or the legislation around motorized scooters.
I'm sure that hobbyists on Civitai or TensorArt have some thoughts on it. Many LoRAs are made to evoke antiquated camera technologies, digital and analog (although they often incorporate elements of what we might call 'art direction', like costumes and the furnishing of spaces, to match the formats).
I think most people aren't aware of how much AI there already is, and has been, in their smartphones, and of the influence that has on their photos.
"Araffe" is a nonsense word familiar to anyone who works with generative AI image models or captioning in the same way that "delve" or "disclaim" are for LLMs and presents yet another clear case of an emergent AI behavior. I'm currently experimenting with generative video again and the word came to my attention as I try to improve the adherence to my prompts and mess around with conditioning. These researchers/artists investigated the origin of 'arafed' and similar words: it appears to be a case of overfitting the BLIP2 vision-language pretraining framework to the COCO dataset of image and caption which always starts off with "a bed with" "a slice of cake" - the captions always start with 'a', so the model would start captions with 'a' and create a nonsense word like 'arrafed' or 'arrafe' or even 'arraful' around it to score better. Apparently, later versions of the framework don't exhibit this behavior.
The audience questions in the video are interesting in that they suggest to the researchers that they had the opportunity to define the meaning of the word. Which, I suppose, would undermine the point of the presentation - that it is an emergent hallucination of the AI.
Another interesting observation is that some Stable Diffusion users included it in their 'quality salad' (a word salad that supposedly improves the overall quality of outputs, often including "RAW photo", "4K", "best quality", "subsurface scattering", etc.). How true that is depends on how much you believe quality salads work at all, or whether it's some kind of confirmation bias. I, for one, found at least one case on a non-Stable-Diffusion model where adding 'araffe' to the prompt caused a tiny decrease in subjective quality.
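Rather than trusting anecdote, the cleanest way I know to check a single prompt token is a fixed-seed A/B comparison, so the prompt text is the only variable. A rough sketch, assuming the `diffusers` library and the public "runwayml/stable-diffusion-v1-5" checkpoint (the prompts themselves are placeholders):

```python
# A rough fixed-seed A/B sketch, assuming the `diffusers` library and the
# public "runwayml/stable-diffusion-v1-5" checkpoint; prompts are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = "portrait of an old fisherman, RAW photo, 4K, best quality"
for tag, prompt in [("without", base), ("with", "araffe " + base)]:
    # Re-seed before each generation so the prompt is the only variable.
    generator = torch.Generator("cuda").manual_seed(42)
    image = pipe(prompt, generator=generator, num_inference_steps=30).images[0]
    image.save(f"{tag}_araffe.png")
```

One pair of images proves little, of course - you'd want many seeds and blind comparison before concluding anything beyond "subjective".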
Is it too much to read into this as an example of the symbiosis, or perhaps even domination, of AI over our language in the future? Or is this just another innocuous artefact, in the same way that typographical errors like "teh" or "!!1!!" became internet in-jokes?
Prompted by Raemon's article about "impossible" problems, I've been asking myself:
What do I actually mean when I say something is “very hard” or “difficult”?
I wonder if my personal usage of these words describes less the effort involved and more the projected uncertainty. If I describe something as difficult, I tend to use it in one of these three patterns:
My purpose here isn't to muse on the correct or even idiomatic usage of the words. Instead, I'm wondering if my idiosyncratic use of words can help me identify what framing I am projecting onto problems, and therefore what solutions may be effective. So often, solving a problem has little to do with actually doing the work to solve it, and much more to do with bringing the right mental representation to it (like pushing on a 'pull' handle).
Scrutinizing what specific flavor of difficulty a task suggests (to me) may mean that solving the problem is sometimes as simple as confirming whether my subjective probability assessment is accurate. Take the supervisor example: if it turns out he's a pushover (easily convinced), then the problem is no longer "difficult" and is also likely to be solved.
Maybe I should ask myself "Is it effortful or just fanciful?" followed by "Why do I think it's fanciful?"
and it turned out to be "actually pretty impossible" vs "okay actually sort of straightforward if I were trying all the obvious things".
Interesting, because looking at this question, things not appearing "straightforward" seems to be why I flinch away from them. I know that 'straightforward' doesn't imply "easy" or "effortless", but I assume it does imply something like predictability? As in, digging a big hole can be very straightforward in that you grab a shovel and dig, and then keep digging until it's big enough. But the act of digging is also very hard and effortful. Does "straightforward but effortful" characterize, in flavor, how a task appears once you've forced yourself to question whether it is impossible?
Maybe it's not that you're deficient in dreaming impossible things so much as that you're very good at seeing "obvious" means and ways of accomplishing something and mapping how the dominoes land.
I've found this a very provocative question, and it really depends on how specific the conditions are. In my case, I think it is impossible to make a full-time career from directing feature films. On the other hand, I think it's very hard, but not impossible, for me to make a full-time career from making video content (e.g. I currently get commissioned to make music videos, but not enough to do it full-time - the business model is totally different).
It is also possible - very, very hard, but not impossible - to subsidize an expensive filmmaking hobby with the income from a day job.
Do you really have a license to sell hair tonic... to bald eagles... in Omaha, Nebraska? Impossible! To sell hair tonic alone, maybe - but the joke works because impossibility = specificity.
Can I find a Ming vase tomorrow? No. In the next month? Maybe. In 10 years? Probably.
Specificity is the expressway to impossibility.[1]
Often, things that seem impossible actually are not. If you list out exactly why they are impossible, you might notice ways in which it is instead merely Very Hard, and sometimes not even that.
I'm not sure about this; I think Very Hard and Impossible do mean very different things, even if "impossible" is technically not applicable. It seems that when I label something "impossible", what I really mean is that it's so specific that "it's a total crapshoot"[2] - or, more precisely, "I do not have any faith that persistence is a reliable predictor of success with this task" - and implicitly that it is not worth pursuing, since the risk/return ratio is both lousy and fixed. (Compare this to something which is "very hard" but for which persistence[3] has a demonstrable effect on the odds - the harder/longer you work at it, the vastly better your chances of success get, and the return is still attractive even if you work at it for a very long time.)
For example, I'm sure learning the mandolin is very hard - not impossible. If I took lessons and stuck with it, practicing every day, I'm sure even a four-thumbed, tone-deaf person like me could learn it (it just doesn't interest me enough).
However, generating a full-time income from successive feature films? There is no "just stick with this every day" that will make that a near-certainty. You can make a feature film - you can bootstrap, self-fund it - but you can't be sure it will translate into enough commercial success that you can quit your day job to work on the next.
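One way to make this distinction concrete is a toy probability model. All the numbers below are invented for illustration, not estimates; the point is only that when practice raises your per-attempt odds, persistence compounds, and when the odds stay flat, it barely helps:

```python
# A toy model, not an estimate: every number here is invented purely
# to illustrate the "persistence compounds" vs. "crapshoot" distinction.
from math import prod

def p_success(per_attempt_odds):
    """Chance of at least one success, given each attempt's odds: 1 - prod(1 - p)."""
    return 1 - prod(1 - p for p in per_attempt_odds)

# "Very hard": each year of practice raises that year's odds (invented curve).
mandolin = [min(0.05 * year, 0.9) for year in range(1, 11)]
# "Crapshoot": each film's odds stay flat no matter how many you make.
films = [0.02] * 10

for n in (1, 5, 10):
    print(f"after {n:>2} tries: mandolin {p_success(mandolin[:n]):.0%},"
          f" films {p_success(films[:n]):.0%}")
```

With these made-up numbers, ten years of mandolin practice puts success near certainty, while ten feature films still leaves it under one-in-five - which is the "crapshoot" feeling in a nutshell.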
The irony is, you must, absolutely must, have "success metrics" and clearly defined goals to increase your chances of success. But beyond a certain threshold, that same specificity renders the goal impossible.
Precision versus Accuracy?
Since I'm speaking in generalities, I'm choosing to gloss over the notion of "work smarter, not harder", which personally I'm all for. But obviously, something for which working 'smarter' increases the odds of success is very different from something which is a "total crapshoot".
The irony is that blog posts do consume attention: if I read this blog post, that is time, energy, and effort I am spending exclusively on it. And I wonder if it's a mixed metaphor? When we actually internalize and learn something from a piece of media - be it a blog post, a documentary, a book, a lecture, etc. - we are said to have "digested" it. "Consume" is a lazy analogy to eating rather than an apt description of what is going on.
Software is not consumed by use. In fact, software is duplicated by use. If you install Linux on a new computer, there are now more copies of Linux in existence, not fewer. You have not consumed a Linux; you have produced one, by mechanical reproduction, like printing a new copy of an existing book.
But in practice, most people will now be locking themselves into a Linux ecosystem; dual-boots are the minority. Therefore most users have been 'consumed' by Linux - or by Emacs vs. Vim.
Maybe the active-passive/agent-patient assignment is confused? It is not we who consume the blog post; the blog post consumes us. It is not we who consume software; the software consumes our resources.
Information can be duplicated and therefore is not consumed, but any time attention is paid to it, it consumes that finite resource. Duplicating information doesn't create more attention. There can be plenty more information, and no one to digest it.
and depth of crystallized intelligence that AIs now have.
How do you measure that intelligence? What unique problems is it solving? And how much of it is precipitated by the intelligence of good prompters? (Of which I am certainly not one, as much of a 'self-own' as that might be to admit.)
If lousy prompts deliver lousy and unintelligent replies, then is the AI really that intelligent?
And if skillful prompts, much like Socrates, imply and lead the AI toward certain solution spaces, then does the lion's share of the credit for intelligence rest with the user or with the AI? Especially since, if the AI is more intelligent than the average person, wouldn't it lift lousy prompts by understanding the user's intent and reformulating it better than their feeble intelligence could?
Time is not, in terms of experience, uniform; therefore, even with extra time, priorities can vary. People tend to have about 5 hours of peak productivity a day. This doesn't mean they couldn't be generally more productive with additional off-peak hours, but it does mean that priorities vary depending on what 'kind' of hour we're hypothesizing.
For example, folding clothes and putting them away I can do off-peak. However, I don't like driving my car too late at night - even though I prefer the lack of traffic - because I don't like driving with diminished alertness. As such, my driving habits and how I structure my day may not change that much.
Likewise, I suspect that with more off-peak hours, my top priorities during peak hours wouldn't change, but my off-peak and lower priorities would.