Given that the basic case for x-risks is so simple/obvious[1], I think most people arguing against any risk are probably doing so due to some kind of myopic/irrational subconscious motive.
It isn't simple or obvious to many people. I've discussed it with an open-minded philosophy professor and he had many doubts, like:
So far I've had answers to these things, but they required their own long discussions, and the thornier ones (like moral realism) didn't get resolved. Overall, he seems to take it somewhat seriously, but he also has lots of experience with philosophers, students, coworkers, etc. trying to convince him of weird things, so it's unfortunately understandable that he isn't that concerned about this thing in particular yet.
I suppose you could argue that all of his objections are trivial and he's obviously biased, but I don't think that tackling his emotions instead of his arguments would help much.
Wanting competent people to lead our government and wanting a god to solve every possible problem for us are different things. This post doesn't say anything about the former.
I believe the vast majority of people who vote in presidential elections do so because they genuinely anticipate that their candidate will make things better, and I think your view that most people are moral monsters demonstrates a lack of empathy and understanding of how others think. It's hard to figure out who's right in politics!
Some people can be too dismissive of the differences between humans and LLMs.
On one hand, it's true that some people cherry-pick the mistakes that LLMs make and use them to denounce their intelligence, even though they're mistakes that many humans make. For example, some have said LLMs can't be intelligent because they can't multiply big numbers accurately without a calculator or a scratchpad; but humans can't do that, either.
On the other hand, I see people hand-wave away some important things. Someone will point out how strange it is that LLMs still hallucinate, and someone else will say "nah, humans make things up all the time!" But like, if you ask an LLM for someone's biographical information, it will sometimes give highly specific fake details mixed in with real ones, without having been misled by unreliable sources and without any agenda to persuade you. Even an overconfident and dishonest human wouldn't do that. The LLM is clearly doing something different in kind from what we humans do.
I don't think this means much, because dense models with 100% active parameters are still common, and some MoEs have high percentages of active parameters, such as the largest version of DeepSeekMoE at 15%.
It's sad because the AI partners in the story seem to be fake. Not fake because they're AI, fake because they're fiction. For example, it's sad to fall in love with a character on character.ai because the LLM is simply roleplaying, it's not really summoning the soul of Hatsune Miku or whoever. I assume the world models are the same; they're basically experience machines.
This tells me that people might step into experience machines not because they don't care about reality, but because they convince themselves the world inside is reality.
Yes, their goal is to make extremely parameter-efficient tiny models, which is quite different from the goal of making scalable large models. Tiny LMs and LLMs have evolved to have their own sets of techniques. Parameter sharing and recurrence work well for tiny models but increase compute costs a lot for large ones, for example.
There was that RCT showing that creatine supplementation boosted IQ only in vegetarians.
While looking for the RCT you're referencing, I instead found this one from 2023 which claims to be the largest to date and which states "Vegetarians did not benefit more from creatine than omnivores." (They tested 123 people altogether over 6 weeks; these RCTs tend to be small.)
A systematic review from 2024 states:
To summarize, we can say that the evidence from research into the effects of creatine supplementation on brain creatine content of vegetarians and omnivores suggests that vegetarianism does not affect brain creatine content very much, if at all, when compared to omnivores. However, there seems to be little doubt that vegans do not intake sufficient (if any) exogenous creatine to ensure the levels necessary for maintaining optimal cognitive output.
I tried googling to find the answer. First I tried "melting chocolate in microwave" and "melting chocolate bar in microwave", but those just brought up recipes. Then I tried "melting chocolate bar in microwave test", and the experiment came up. So I had to guess it involved testing something, but from there it was easy to solve. (Of course, I might've tried other things first if I didn't know the answer already.)
This is a neat question, but it's also a pretty straightforward recall test because descriptions of the experiment for teachers are available online.
Philosophers have come up with a bunch of elaborate, if flawed, arguments for moral realism over the years. This professor gave me the book The Moral Universe which is a recent instance of this. To be fair, people who haven't already gotten got by modern philosophy or religion can be sold a form of anti-realism with simple thought experiments, like the aliens who desire nests with prime-numbered stones from IABIED.
I think moral realism is something many people believe for emotional reasons ("How DARE you suggest otherwise?"), but it's also a conclusion that can be gotten to with subtly flawed abstract reasoning.
You could probably sidestep the moral realism debate when talking about x-risk, because it seems plausible that AI could be wrong about morality, or that it could simply be an unfeeling force of nature to which moral reasoning doesn't apply. I'm realizing now that if I hadn't been so eager to debate morality, I could've avoided the topic altogether.