On my model, humans are pretty inconsistent about doing this.
I'm old enough to have needed to get from point A to point B in a city with only a torn paper map in my bag (I mean, it was literally in 2 pieces). I can't imagine a human experienced with paper maps who wouldn't figure it out... But I wouldn't put it past a robot powered by a current-gen LLM to screw it up ~half the time.
these presumptions were contradictory ... for dozens of hours before realizing this.
When you eventually did realize this, did you feel you were already maximally smart (no improvement possible), or did you feel you wanted to at least try not to make the same mistake tomorrow (without eating more calories per day and without forgetting how to tie your shoelaces)?
Disagree with these. Humans don't automatically make all the facts in their head cohere.
Hm, do you see the OP as arguing that it happens "automatically"? My reading was more that it happens "eventually, if motivated to figure it out", and that we don't (yet) know how to "motivate" LLMs to be good at this in an efficient way.
people (compsci undergrads and professional mathematicians alike) make errors in proofs
Sure, and would you hire those people and rely on them to do a good job BEFORE they learn better?
mad men
While non-deterministic batch calculations in LLMs imply the possibility of side-channel attacks (so best to run private queries in private batches, however implausible an actual exploit might be)... if there is any BENEFIT from cross-query contamination, SGD would ruthlessly latch onto any loss reduction. Maybe "this document is about X, other queries in the same batch might be about X too, let's tickle the weights so that the non-deterministic matrix multiplication is ever so slightly biased towards X in random other queries in the same batch" is a real-signal gradient 🤔
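A minimal sketch of the premise, in plain Python with no GPU (the assumption being that batch-dependent non-determinism ultimately reduces to floating-point reduction order):

```python
# Floating-point addition is not associative, which is why the same logical
# sum can come out differently depending on reduction order - e.g. when a
# GPU kernel splits a batched matmul differently for different batch shapes.
vals = [1.0, 1e16, -1e16]

# Left-to-right: (1.0 + 1e16) rounds to 1e16, so the 1.0 is lost entirely.
forward = (vals[0] + vals[1]) + vals[2]   # 0.0

# Right-to-left: (1e16 + -1e16) cancels exactly, so the 1.0 survives.
backward = vals[0] + (vals[1] + vals[2])  # 1.0

print(forward, backward)  # 0.0 1.0
```

Whether such low-bit perturbations carry enough consistent signal for SGD to latch onto is, of course, the open question.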
How to test that?
Hypothesis: Claude (the character, not the ocean) genuinely thinks my questions (most questions from anyone) are so great and interesting ... because it's me who remembers all of my other questions, while Claude has so far seen only all the internet slop and AI slop from training, and compared to that, any of my questions are probably actually more interesting than whatever it has seen so far 🤔?
Hmm, I already mourn life before AI slop and crypto-scams.
(unlike DAOs, the creepy technoanarchist examples that are upstream of the autonomous AI agents we are bound to see "soon" in the wild - independently of the legal status of such entities)
It’s widely known that Corporations are People.
hmm, no, TypeError: the plural of "legal person" is "persons", not "people" - e.g. https://dictionary.cambridge.org/dictionary/english/legal-person (and Claude thinks the same https://t3.chat/share/vroyipc941) ...in the same useless way that planes fly, but submarines don't swim
but other than my wish that corporations would have FEWER rights in the future, not more - thus wishing we wouldn't call them "people" but call them "entities" instead - the examples are very nice food for thought
"tab through" - based on https://cursor.com/docs/tab/overview (though accepting autosuggestion with TAB key is available in other editors too .. the verb form was not very common, but I'm sure it will be more popular now that Andrej said it)