Prompted by Raemon's article about "impossible" problems, I've been asking myself:
What do I actually mean when I say something is “very hard” or “difficult”?
I wonder if my personal usage of these words describes less the effort involved and more the projected uncertainty. If I describe something as difficult, I tend to use it in one of these three patterns:
My purpose here isn’t to muse on the correct or even idiomatic usage of the words. Instead I'm wondering if my idiosyncratic use of words can help me identify what framing I am projecting onto problems, and therefore, what solutions may be effective. So often, solving a problem has little to do with actually doing the work and much more to do with bringing the right mental representation to it (like noticing you've been pushing on a 'pull' handle).
Scrutinizing what specific flavor of difficulty a task suggests (to me) may mean that solving the problem is sometimes as simple as checking whether my subjective probability assessment is accurate. Take the supervisor example: if it turns out he's a pushover (easily convinced), then the problem is no longer "difficult" and is also likely to be solved.
Maybe I should ask myself "Is it effortful or just fanciful?" followed by "why do I think it's fanciful?"
and it turned out to be "actually pretty impossible" vs "okay actually sort of straightforward if I were trying all the obvious things".
Interesting, because looking at this question, things not appearing "straightforward" appears to be why I flinch away from them. I know that 'straightforward' doesn't imply "easy" or "effortless", but I assume it does imply something like predictability? As in, digging a big hole can be very straightforward in that you grab a shovel and dig, and then keep digging until it's big enough. But the act of digging is also very hard and effortful. Does "straightforward but effortful" seem to characterize, in flavor, how a task appears once you've forced yourself to question whether it is impossible?
Maybe it's not that you're deficient in dreaming impossible things so much as that you're very good at seeing "obvious" means and ways of accomplishing something and mapping how the dominoes land.
I've found this a very provocative question. And it really depends on how specific the conditions are. In my case, I think it is impossible to make a full-time career from directing feature films. On the other hand I think it's very hard but not impossible for me to make a full-time career from making video content (i.e. I currently get commissioned to make music videos, but not enough to make it full-time - the business model is totally different).
It is also possible (very, very, very hard, but not impossible) to subsidize an expensive filmmaking hobby with the income from a day job.
Do you really have a license to sell hair tonic... to bald eagles... in Omaha, Nebraska? Impossible! To sell hair tonic, maybe, but the joke works because impossibility = specificity.
Can I find a Ming vase tomorrow? No. In the next month? Maybe. In 10 years? Probably.
Specificity is the expressway to impossibility.[1]
Often, things that seem impossible are not, actually. If you list out exactly why they are impossible, you might notice ways in which it is instead merely Very Hard, and sometimes not even that.
I'm not sure about this; I think Very Hard and Impossible do mean very different things even if "impossible" is technically not applicable. It seems like when I label something "impossible", what I really mean is that it's so specific that "it's a total crapshoot"[2], or, more precisely, "I do not have any faith that persistence is a reliable predictor of success with this task", and implicitly that it is not worth pursuing, since the risk-return ratio is both lousy and fixed. (Compare this to something which is "very hard" but for which persistence[3] has a demonstrable effect on the odds: the harder/longer you work at it, the vastly better your chances of success get, and the return is still attractive even if you work at it for a very long time.)
For example, I'm sure learning the mandolin is very hard, not impossible: if I took lessons and stuck with it, practicing every day, I'm sure even a four-thumbed, tone-deaf person like me could learn it. (It just doesn't interest me enough.)
However, generating a full-time income from successive feature films? There is no "just stick with this every day" that will make that a near-certainty. You can make a feature film, you can bootstrap and self-fund it, but you can't be sure that it will translate into enough commercial success that you can quit your day job to work on the next.
The irony is, you must, absolutely must, have "success metrics" and clearly defined goals to increase your chances of success. But beyond a certain threshold that same specificity renders the goal impossible.
Precision versus Accuracy?
Since I'm speaking in generalities I'm choosing to gloss over the notion of "work smarter not harder", which personally I'm all for. But obviously something for which working 'smarter' increases the odds of success is very different to something which is a "total crapshoot".
The irony is that blog posts do consume attention: if I read this blog post, that is time, energy, and effort I am using exclusively on that. And I wonder if it's a mixed metaphor? If we actually internalize and learn something from a piece of media, be it a blog post, a documentary, a book, a lecture, etc., we are said to have "digested" it. And "consume" is a lazy analogy to eating rather than an apt description of what is going on.
Software is not consumed by use. In fact, software is duplicated by use. If you install Linux on a new computer, there are now more copies of Linux in existence, not fewer. You have not consumed a Linux; you have produced one, by mechanical reproduction, like printing a new copy of an existing book.
But in practice, most people will now be locking themselves into a Linux ecosystem; dual-booters are the minority. Therefore most users have been 'consumed' by Linux, or by Emacs vs. Vim.
Maybe the active-passive/agent-patient assignment is confused? It is not we who consume the blog post; the blog post consumes us. It is not we who consume software; the software consumes our resources.
Information can be duplicated and therefore not consumed, but any time attention is paid to it, it is consuming that finite resource. Information duplication doesn't create more attention. There can be plenty more information, and no one to digest it.
and depth of crystallized intelligence that AIs now have.
How do you measure the intelligence? What unique problems is it solving? And how much of it is precipitated by the intelligence of good prompters? (Of which I am certainly not one, as much of a 'self-own' as that might be to admit.)
If lousy prompts deliver lousy and unintelligent replies - then is the AI really that intelligent?
If skillful prompts, much like Socrates, imply and lead the AI toward certain solution spaces, then does the lion's share of the credit for being intelligent rest with the user or with the AI? Especially since, if the AI is more intelligent than the average person, wouldn't it lift lousy prompts by understanding the user's intent and reformulating it better than their feeble intelligence could?
I think both those CS software manuals and tutorials would be an incredible and helpful resource if you were able to find the time.
Trying to do any of this in one day (especially with a penalty for failure to meet the deadline) would feel like an unbearable compromise on quality. I understand that in some sense this is intentional -- the purpose of the blogging marathon is not to write at the highest possible quality; it is specifically to produce quantity. Because if you have the internal drive for quality, this exercise can help you overcome some mental blocks, and then you will find your own way which includes both high quality and a greater quantity than you had before.
I suppose I had a different intention with this exercise. My problem wasn't quantity: I can vomit out words easily and never understood the fear of the blank page. I was hoping that, through brute-force writing for the public, I could somehow become a "better writer".
Perhaps what I really need is an "edit-haven": 30 days of editing, redrafting, critiquing, and analyzing my own and others' writing, with the intent of learning how to better edit myself?
Different courses for different horses, strokes for folks, as they say
I hope you don't mind if I post here my own attempt back in August, I think I only managed 27 of my intended 30 posts before my self-imposed deadline in early September.
My main memory of this time is - "geez coming up with post ideas was a slog when I was constrained by only 24 hours for research and multiple drafts!"
Closed Mouth, Open Opportunities
Why is it interesting?
Reading Horoscopes and Sun Tzu
What is useful?
Success Stories Teach Less than Failure
Why did the Simpsons and Mercedes finally stop winning all the time?
Why was Technicolor IB so vibrant?
Misremembering things on purpose
Answer a question with a better question
A Good Communicator Gives and Takes
Althusser's Interpellation with the boring stuff cut out
Transcode your videos to keep the Lucille Ball that lives in your computer Happy
A Cover Letter from Waylon Smithers
Reflections on 15 days of writing Blog posts
Great Artists aren't the greatest salesmen but the most self-critical
"We're Not a Cult" (hint, they are)
No, I won't watch the Sopranos just because I'm supposed to
"All Laws were followed" but it's still not okay
Aristotle talks keeping fit, royal friendships, and not missing Athens
What if a Baptism of Flame can't change you?
What if I'm wrong? Negotiate with yourself to avoid making mistakes
I'm really encouraged by research that attempts interventions like this rather than the ridiculous "This LLM introspects, because when I repeatedly prompted it about introspection it told me it does" tests.
I do wonder how that only-20% success rate would compare to humans. (I do like the failed ocean-vector example: “I don’t detect an injected thought. The ocean remains calm and undisturbed.”)
I'm not sure if one could find a comparable metric to observe in human awareness of influences on their cognition... i.e. "I am feeling this way because of [specific exogenous variable]"?
Isn't that the entire point of using activities like Focusing, to hone and teach us to notice thoughts, feelings, and affect which otherwise go unnoticed? Particularly in light of the complexity of human thought and the huge number of processes which are constantly going on unnoticed. For example, nervous tics which I've only become aware of when someone has pointed them out to me. Others might be saccades: we don't notice each individual saccade, only the 'gestalt' of where our gaze goes, and even then involuntary interventions that operate faster than we can notice can shift our gaze, like when someone yells out for help or calls your name. Not to mention Nudge Theory and priming.
I've been reflecting on the suggestion to think about "what kind of answer you're looking for" quite a bit recently, not in terms of conversation with others (although it is relevant to my difficulties with prompting LLMs) but in terms of framing problems and self-directed questions.
"Araffe" is a nonsense word familiar to anyone who works with generative AI image models or captioning in the same way that "delve" or "disclaim" are for LLMs and presents yet another clear case of an emergent AI behavior. I'm currently experimenting with generative video again and the word came to my attention as I try to improve the adherence to my prompts and mess around with conditioning. These researchers/artists investigated the origin of 'arafed' and similar words: it appears to be a case of overfitting the BLIP2 vision-language pretraining framework to the COCO dataset of image and caption which always starts off with "a bed with" "a slice of cake" - the captions always start with 'a', so the model would start captions with 'a' and create a nonsense word like 'arrafed' or 'arrafe' or even 'arraful' around it to score better. Apparently, later versions of the framework don't exhibit this behavior.
The audience questions in the video are interesting in that they suggest to the researchers that they had the opportunity to define the meaning of the word. Which, I suppose, would undermine the point of the presentation, namely that it is an emergent hallucination of the AI.
Another interesting observation is that some Stable Diffusion users included it in their 'quality salad' (a word salad of terms that they claim improves the overall quality of outputs, often including "RAW photo", "4K", "best quality", "subsurface scattering", etc.). How true that is depends on how much you believe quality salads work at all, or whether it's some kind of confirmation bias. I, for one, found that in at least one case, on a non-Stable-Diffusion model, adding 'araffe' to the prompt caused a tiny decrease in subjective quality.
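If someone wanted to check the quality-salad claim a little less anecdotally than I did, a fixed-seed A/B comparison is one way. A rough sketch with diffusers follows; the model ID and prompt are placeholders of mine, not anything from the presentation:

```python
# Rough sketch of a fixed-seed A/B test for a single "quality salad" token.
# The model ID below is a placeholder; substitute whichever Stable Diffusion
# checkpoint you actually use.
import torch
from diffusers import StableDiffusionPipeline

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"  # placeholder checkpoint
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

base_prompt = "portrait photo of a lighthouse keeper, RAW photo, 4K, best quality"
seed = 1234

for suffix, name in [("", "baseline"), (", araffe", "araffe")]:
    # Same seed for both runs, so only the prompt differs.
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(base_prompt + suffix, generator=generator, num_inference_steps=30).images[0]
    image.save(f"lighthouse_{name}.png")

# Then compare the pairs blind (ideally over many seeds) before deciding
# whether 'araffe' actually moves the needle on quality.
```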
Is it too much to read into this as an example of the symbiosis, or perhaps even domination, of AI over our language in the future? Or is this just another innocuous artefact, in the same way that typographical errors like "teh" or "!!1!!" became internet in-jokes?