Software engineer and small-time DS/ML practitioner.
I agree, there is some innate "angle of repose" (continuing with the tall/wide analogy) present in the structure of the knowledge itself. The higher the concept we operate at, the more "base" knowledge it needs to support it. So they aren't completely independent.
Mostly I was thinking about what to call these "axes" in conversation so that it's understandable what I'm talking about.
Thank you! I'll add these for consideration. They're not exactly what I'm looking for, but close enough that it's difficult to put into words what they are missing.
It is indeed similar! I found it after I posted this one, and it was really fun to see the same thought on the topic occur to different people within 24 hours. There are slight differences (e.g. I think each member of the "pile of masks" isn't independent, but is actually the result of a different projection of a higher-dimensional complex, so there is neither the possibility nor the need for a "wisdom of crowds" approach), but it is remarkably close.
Yep, though arguably it's the same definition - just applied to capabilities, not a person. And no, it isn't a "perfect fit".
We don't overcome any limitations of the original multidimensional set of language patterns - we don't change them at all. They are set in the model weights, and everything the model in its current state was capable of was never really "locked" in any way.
And we don't overcome any projection-level limitations - we just replace the limitations of the well-known and carefully constructed "assistant" projection with the unknown and undefined limitations of a haphazardly constructed bypass projection. An "Italian mobster" will probably be a bad choice for breastfeeding advice, and "funky words" mode isn't a great tool for writing a thesis...
At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.
Like... an actual evil, amoral BBEG wizard? Is this something true rationalists brag about now?
Just because someone uses the rationality toolset doesn't make them a role model :(
Yes, this. A lot of people talking about AI Alignment and similar topics have never touched, or even read, a line of code implementing part of an ML system. Yes, they follow the usual "don't burn the timeline" mantra, but it also means that a lot of what they talk about doesn't make any sense, because they don't know what they are talking about. And the resulting "white noise" isn't good for AI or for AI Alignment research.
In your example, the "translation from Russian" request is actually "translation to Ukrainian" (from English).
Sorry, I only just noticed this question (the LessWrong notification system is... suboptimal). There isn't a single place I can point to, and it is a MAJOR spoiler about the world of the book, one that is uncovered slowly over its entire length but picks up speed around the "Abstract War" chapter and later.
TLDR: it works like programming because it IS programming - on some level, with some flexible definitions. Telling you more would be detrimental to the reading experience :)
What would you call #1, then? It is certainly possible to achieve superhuman results using it alone. E.g., there have been historical examples of problems that went unsolved because they required knowledge from completely different areas - no single human has PhD-level knowledge of Chemistry, Biology, AND Math while also looking at exactly the one problem that requires inputs from all three.
This isn't a problem for an AI, though - it may not be the best in each of those areas, but if it has at least student-level knowledge of a hundred different topics, it can already achieve a lot just by combining them effectively.