" (...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties."
- Saharon Shelah
"A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring" - Alexander Pope
As a true-born Dutchman I endorse Crocker's rules.
For most of my writing see my short-forms (new shortform, old shortform)
Twitter: @FellowHominid
Personal website: https://sites.google.com/view/afdago/home
Happy to hear ILIAD felt productive!
When we started ILIAD, it was from a felt sense that there is in fact much more commonality and convergence in research directions than is commonly assumed.
Feels like something has gone wrong way before when one cares more about money than the survival of the human race.
If a man's judgement is really swayable by equity, one can't help but wonder whether he is the right man for the job in the first place.
Did you see the movie before?
I mean paperclip maximization is of course much more memetic than 'tiny molecular squiggles'.
So indeed, if you define coherence as the negative of arbitrage value.
There is a pretty close relation between thermodynamic free energy, arbitrageable value, and the degree to which an entity can be money-pumped.
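To make the money-pump side of this concrete, here is a minimal sketch (all names, the fee parameter, and the trade protocol are illustrative assumptions, not from the comment): an agent with cyclic preferences A ≻ B ≻ C ≻ A accepts every "upgrade" trade for a small fee, ends each cycle holding exactly what it started with, and is strictly poorer. The total fee extracted is the arbitrageable value of its incoherence.

```python
# Hypothetical sketch: money-pumping an agent with cyclic (incoherent) preferences.

def money_pump(prefers, holding, offers, fee=1.0, rounds=3):
    """Repeatedly offer trades the agent accepts, charging a fee per trade.

    prefers(x, y) -> True if the agent strictly prefers x over y.
    Returns the total value extracted from the agent.
    """
    extracted = 0.0
    for _ in range(rounds):
        for offer in offers:
            if offer != holding and prefers(offer, holding):
                holding = offer          # agent trades "up" ...
                extracted += fee         # ... and pays for the privilege
    return extracted

# Cyclic preferences: A > B, B > C, C > A (violates transitivity).
cyclic = {("A", "B"), ("B", "C"), ("C", "A")}
prefers = lambda x, y: (x, y) in cyclic

# Each round the agent accepts C-for-A, B-for-C, A-for-B, ending back at A.
loss = money_pump(prefers, "A", ["C", "B", "A"], fee=1.0, rounds=3)
print(loss)  # 9.0 extracted; the agent holds "A" again, as at the start
```

A coherent (transitive) `prefers` would refuse at least one leg of the cycle, and `money_pump` would extract a bounded amount instead of growing linearly in `rounds`.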
You might also be interested in:
https://www.lesswrong.com/posts/3xF66BNSC5caZuKyC/why-subagents
Looking over the comments, some of the most upvoted ones express the sentiment that Yudkowsky is not the best communicator. This is what the people say.
Out of interest: for these real-life latents, where many different pieces each contribute a small amount of information, do you reckon Eisenstat's Condensation / some unpublished work you mentioned at ODYSSEY would be the right framework here?
100 percent this. There is this perpetual miscommunication about the word "AGI".
"When I say AGI, I really mean a general intelligence, not just a new app or tool."
Claude is smarter than you. Deal with it.
There is an incredible amount of cope about the current abilities of AI.
AI isn't infallible. Of course. And yet...
For 90% of queries a well-prompted AI gives better responses than 99% of people. For some queries, the number of people who could match the kind of deep, broad knowledge the AI has can be counted on two hands. Finally, and obviously, there is no man alive on the face of the earth who comes even close to the breadth and depth of crystallized intelligence that AIs now have.
People have developed a keen apprehension of, and aversion to, "AI slop". The truth of the matter is that LLMs are incredible writers: if you had presented AI slop as human writing to somebody six years ago, they would have rated it anywhere from good, if somewhat corporate, writing all the way to inspired, eloquent, and witty.
Does AI sometimes make mistakes? Of course. So do humans. To be human is to err.
There is an incredible amount of cope about the current abilities of AI. Frankly, I find it embarrassing. Witness the absurd calls to flag AI-assisted writing. The widespread disdain for "@grok is this true?". Witness how LLM psychosis has gone from perhaps a real phenomenon to a generic slur for slightly kooky people. The endless moving of goalposts. The hysteria about the slopapocalypse. The almost complete lack of interest in integrating AI into everyday conversations. The widespread shame that people evidently still feel when they use AI. Worst of all, it's not just academia in denial, or the unwashed masses. The most incredible thing to me is how much denial, cope, and refusal there is in the AI safety space itself.
I cannot escape the conclusion that inside each and every one of us is an insecure ape that cannot bear to see itself usurped from the throne of creation.