It's sad because the AI partners in the story seem to be fake. Not fake because they're AI, but fake because they're fiction. For example, it's sad to fall in love with a character on character.ai because the LLM is simply roleplaying; it's not really summoning the soul of Hatsune Miku or whoever. I assume the world models are the same: they're basically experience machines.
This tells me that people might step into experience machines not because they don't care about reality, but because they convince themselves the world inside is reality.
Yes, their goal is to make extremely parameter-efficient tiny models, which is quite different from the goal of making scalable large models. Tiny LMs and LLMs have evolved to have their own sets of techniques. Parameter sharing and recurrence work well for tiny models but increase compute costs a lot for large ones, for example.
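To make the tradeoff concrete, here's a minimal PyTorch sketch (my own illustration, not taken from any particular tiny-LM paper) of ALBERT-style depth-wise parameter sharing: one block's weights are reused at every layer, so the parameter count stays tiny while compute still scales with depth.

```python
import torch
import torch.nn as nn

class SharedDepthEncoder(nn.Module):
    """One transformer block reused at every depth step. Parameters
    stay at a single block's count, but the forward pass still costs
    n_steps times one block's compute."""

    def __init__(self, d_model=128, n_heads=4, n_steps=12):
        super().__init__()
        # A single block's weights, shared across all "layers".
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True,
        )
        self.n_steps = n_steps

    def forward(self, x):
        # Recur through the same block; extra depth adds no parameters.
        for _ in range(self.n_steps):
            x = self.block(x)
        return x

enc = SharedDepthEncoder()
tokens = torch.randn(2, 16, 128)  # (batch, seq, d_model)
out = enc(tokens)
print(sum(p.numel() for p in enc.parameters()))  # one block's worth
```

That compute multiplier is the catch at scale: you pay full-depth FLOPs for single-block memory, which is a great deal when parameters are the bottleneck and a bad one when compute is.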
There was that RCT showing that creatine supplementation boosted IQ only in vegetarians.
While looking for the RCT you're referencing, I instead found this one from 2023 which claims to be the largest to date and which states "Vegetarians did not benefit more from creatine than omnivores." (They tested 123 people altogether over 6 weeks; these RCTs tend to be small.)
A systematic review from 2024 states:
To summarize, we can say that the evidence from research into the effects of creatine supplementation on brain creatine content of vegetarians and omnivores suggests that vegetarianism does not affect brain creatine content very much, if at all, when compared to omnivores. However, there seems to be little doubt that vegans do not intake sufficient (if any) exogenous creatine to ensure the levels necessary for maintaining optimal cognitive output.
I tried googling to find the answer. First I tried "melting chocolate in microwave" and "melting chocolate bar in microwave", but those just brought up recipes. Then I tried "melting chocolate bar in microwave test", and the experiment came up. So I had to guess it involved testing something, but from there it was easy to solve. (Of course, I might've tried other things first if I didn't know the answer already.)
This is a neat question, but it's also a pretty straightforward recall test because descriptions of the experiment for teachers are available online.
I think alcohol's effects are at least somewhat psychosomatic, but that doesn't mean you can easily get the same effect without it. Once nobody's actually drinking and everyone knows it, then the context where you're expected to let loose is broken. You'd have to construct a new ritual that encourages the same behavior without drugs, which is probably pretty hard.
I agree that the vocals have gotten a lot better. They're not free of distortion, but it's almost imperceptible on some songs, especially without headphones.
The biggest tell for me that these songs are AI is the generic and cringey lyrics, like what you'd get if you asked ChatGPT to write them without much prompting. They often work the name of the genre into the lyrics. Plus, the way they're performed doesn't always fit the meaning of the words. You can provide your own lyrics, though, so it's probably easy to get your AI songs to fly under the radar if you're a good writer.
Also, while some of the songs on that page sound novel to me, they're usually more conventional than the prompt suggests. Like, tell me what part of the last song I linked to is afropiano.
This is what I think he means:
The object-level facts are not written by or comprehensible to humans, no. What's comprehensible is the algorithm the AI agent uses to form beliefs and make decisions based on those beliefs. Yudkowsky often compares gradient descent optimizing a model to evolution optimizing brains, so he seems to think that understanding the outer optimization algorithm is separate from understanding the inner algorithms of the neural network's "mind".
I think what he imagines as a non-inscrutable AI design is something vaguely like "This module takes in sense data and uses it to generate beliefs about the world which are represented as X and updated with algorithm Y, and algorithm Z generates actions, and they're graded with a utility function represented as W, and we can prove theorems and do experiments with all these things in order to make confident claims about what the whole system will do." (The true design would be way more complicated, but still comprehensible.)
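To gesture at the shape of that modular picture, here's a toy sketch. Everything in it (the names, the trivial update rule) is hypothetical; the point is only that each piece has an explicit contract you could analyze separately.

```python
from dataclasses import dataclass, field

@dataclass
class Beliefs:                      # representation X
    world: dict = field(default_factory=dict)

def update(beliefs: Beliefs, observation: dict) -> Beliefs:
    # Algorithm Y: a trivial overwrite here, standing in for something
    # like a Bayesian filter whose properties you could prove.
    beliefs.world.update(observation)
    return beliefs

def utility(outcome: dict) -> float:
    # Utility function W, written down explicitly rather than implicit
    # in a billion weights.
    return float(outcome.get("goal_progress", 0.0))

def act(beliefs: Beliefs, actions: list[str]) -> str:
    # Algorithm Z: score each action's predicted outcome under the
    # current beliefs and pick the best one.
    predict = lambda a: {"goal_progress": beliefs.world.get(a, 0.0)}
    return max(actions, key=lambda a: utility(predict(a)))

b = update(Beliefs(), {"open_door": 0.9, "wait": 0.1})
print(act(b, ["open_door", "wait"]))  # -> open_door
```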
Putting GPT back in the name but making it lowercase is a fun new installment in the "OpenAI can't name things consistently" saga.
I don't think this means much, because dense models with 100% active parameters are still common, and some MoEs have high percentages, such as the largest version of DeepSeekMoE with 15% active.
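For scale, a quick back-of-the-envelope comparison. The DeepSeekMoE figures are the approximate numbers I remember from its paper (144.6B total, 22.2B activated per token), so treat them as illustrative.

```python
# (total parameters, parameters activated per token)
models = {
    "dense 7B":         (7.0e9,   7.0e9),   # everything is active
    "DeepSeekMoE 145B": (144.6e9, 22.2e9),  # figures from memory
}
for name, (total, active) in models.items():
    print(f"{name}: {active / total:.0%} of parameters active")
# dense 7B: 100% of parameters active
# DeepSeekMoE 145B: 15% of parameters active
```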