But your general point is probably valid.
The LLVM compiler has some extremely exciting code that detects when it is compiling an implementation of popcount() and, if so, substitutes in the LLVM IR popcount intrinsic, which will get compiled down to a popcount instruction if the target has one.
As I said, this code is very entertaining.
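For concreteness, the kind of shape it looks for is the classic bit-clearing loop, roughly as sketched below (exactly which variants get matched depends on the LLVM version):

```c
/* Classic bit-clearing popcount loop.  LLVM's loop-idiom recognition
 * can replace the whole loop with its popcount intrinsic, which then
 * lowers to a single instruction on targets that have one. */
#include <stdint.h>

int popcount_loop(uint64_t x)
{
    int count = 0;
    while (x) {
        x &= x - 1;   /* clear the lowest set bit */
        count++;
    }
    return count;
}
```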
Really, I ought to extend it so it also recognises a different common way of implementing popcount, to get better scores on some commonly used benchmarks. (Changing the benchmark? Clearly cheating. Extending the compiler so it recognises a code sequence in a common benchmark? Slightly sketchy.) But really, I can’t be bothered to write a PR against that horrific piece of compiler code.
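One such common alternative is the SWAR bit-twiddling version, sketched here purely to illustrate the kind of straight-line code sequence involved (not a claim about what any particular benchmark or LLVM release contains):

```c
/* SWAR bit-twiddling popcount: counts bits in parallel within the word,
 * with no loop and no data-dependent branches. */
#include <stdint.h>

int popcount_swar(uint64_t x)
{
    x = x - ((x >> 1) & 0x5555555555555555ULL);
    x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
    return (int)((x * 0x0101010101010101ULL) >> 56);
}
```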
I disagree here. It is reasonably easy to mix assembler and C if there’s a clear reason for doing it.
Examples:
Software-defined radio doing vector operations for the performance-critical digital filters. Now, gnuradio has to do an excitingly difficult version of this because:
a. The software has to work on Intel, ARM, MIPS, and RISC-V.
b. Which vector operations the CPU supports is only known at run time.
So here the performance-critical routines have to exist not just in a version for each target architecture, but in multiple versions for each architecture, depending on the level of vector support. (Does the CPU support RISC-V vectorization of floating point? Does it support RISC-V vectorization of bit manipulation?)
And so at run time you need to substitute in the appropriate implementation depending on which CPU features the kernel reports. But it’s all doable.
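A minimal sketch of the run-time substitution idea, assuming x86 and GCC/Clang's __builtin_cpu_supports(); the function names are made up for illustration, and gnuradio's real machinery (VOLK) is far more elaborate and also covers the non-x86 architectures:

```c
#include <stddef.h>

/* Portable fallback kernel. */
static float dot_generic(const float *a, const float *b, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}

/* Stand-in for a hand-vectorised kernel; in real code this would use
 * AVX2 intrinsics or assembler and live in a file built with -mavx2. */
static float dot_avx2(const float *a, const float *b, size_t n)
{
    return dot_generic(a, b, n);
}

static float (*dot_impl)(const float *, const float *, size_t) = dot_generic;

/* Run once at start-up, before the signal path starts processing. */
void select_dot_impl(void)
{
    if (__builtin_cpu_supports("avx2"))
        dot_impl = dot_avx2;
}

/* The hot path always calls through the pointer. */
float dot(const float *a, const float *b, size_t n)
{
    return dot_impl(a, b, n);
}
```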
When I read the title, I thought you were going to talk about how LLMs sometimes claim bodily sensations such as muscle memory. I think these are probably confabulated. Or at least, the LLM state corresponding to those words is nothing like the human state corresponding to those words.
Expressions of emotions such as joy? I guess these are functional equivalents of human states. A lack of enthusiasm (the opposite of joy) can be reflected in the output tokens.
In most of these examples, LLMs have a state that is functionally like a human state, e.g. deciding that they’re going to refuse to answer, or “wait…” backtracking in chain of thought. I say functionally because these states have externally visible effects on the subsequent output (e.g. it doesn’t answer the question). It seems that LLMs have learned the words that humans use for functionally similar states (e.g. “Wait”).
The underlying states might not be exactly human-identical. “Wait” backtracking might have functional differences from human reasoning that are visible in the tokens generated.
In other groups I’m familiar with, you would kick out people you think are actually a danger (e.g. you discover the guy is a convicted child molester, and you have some intelligence to the effect that they are not a reformed character), or people you think might do something that brings your group into disrepute. (I can think of one example where the counterintelligence investigation of a group member suggested that they were setting up a financial scam and were planning to abscond with people’s money.)
But otherwise, I think it’s a sign of being a cult if you kick people out for not going along with the group dogma.
Go ahead, delete it if you don’t think it was a good comment.
It was an honest attempt to think of instances of paranoid uncertainty (protestors don’t know if other protestors are acting in good faith), but sure, delete it if you think it wasn’t up to standard.
I was asking DeepSeek R1 about which things LLMs say are actually lies, as opposed to just being mistaken about something, and one of the types of lie it listed was claims to have looked something up. R1 says it knows how LLMs work, it knows they don’t have external database access by default, and therefore claims to that effect are lies.
Some (not all) of the instances of this are the LLM trying to disclaim responsibility for something it knows is controversial. If it’s controversial, suddenly the LLM doesn’t have opinions; everything is data it has looked up from somewhere. If it’s very controversial, the lookup will be claimed to have failed.
----
So that’s one class of surprising LLM claims to experience that we have strong reason to believe are just lies, and the motive for the lie, usually, is avoiding taking a position on something controversial.