Rank: #10 out of 4859 in peer accuracy on Metaculus for 2016-2020.
I do think that using "It's" or "It is" is part of the pattern.
I made the change.
When it comes to medical questions, a patient might ask a chatbot a medical question with the intent of solving their medical issue. On the other hand, someone might ask the chatbot a medical question to understand how the chatbot answers it, i.e., to evaluate the chatbot rather than to get help with a medical problem.
If I ask my friend "Do you think I should vaccinate my child?", I could be asking because I want to make a decision about vaccination. I could also be asking because I want to evaluate whether or not my friend is an antivaxxer.
Most humans understand that many questions asked of them are intended to evaluate whether they belong to the right tribe, and they act accordingly. That's a pattern the AI is going to learn from training on a large corpus of human data.
We already have the concept of simulacrum levels; evaluation awareness is essentially a guess that the simulacrum level of a question isn't 1.
If you look at your last post on LessWrong, it starts with:
"We are on the brink of the unimaginable. Humanity is about to cross a threshold that will redefine life as we know it: the creation of intelligence surpassing our own. This is not science fiction—it’s unfolding right now, within our lifetimes. The ripple effects of this seismic event will alter every aspect of society, culture, and existence itself, faster than most can comprehend."
The use of bold is more typical of AI writing. The ':' appears much more often in AI writing. The em-dash appears much more often in AI writing, especially in "this is not an X, it's a Y" constructions.
Em-dashes used to be a sign of high-quality writing, marking a writer thoughtful enough to know how to use one. Today, they are a sign of low-quality LLM writing.
It's also much more narrative-driven than the usual opening paragraph of a LessWrong post.
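As a minimal sketch of how one might count these markers programmatically (my own illustration; the marker list, regexes, and function name are assumptions, not a calibrated detector):

```python
import re

def ai_style_markers(text: str) -> dict:
    """Count stylistic markers that, anecdotally, show up more often
    in LLM output. A rough heuristic sketch, not a calibrated detector."""
    return {
        # markdown **bold** spans
        "bold_spans": len(re.findall(r"\*\*[^*]+\*\*", text)),
        # colons
        "colons": text.count(":"),
        # em-dashes (U+2014)
        "em_dashes": text.count("\u2014"),
        # "is not an X ... it's a Y" constructions joined by an em-dash
        "not_x_its_y": len(re.findall(
            r"\bnot\b[^.\u2014]{0,60}\u2014\s*it[\u2019']?s\b",
            text, re.IGNORECASE)),
    }

print(ai_style_markers(
    "This is not science fiction\u2014it\u2019s unfolding right now."))
# {'bold_spans': 0, 'colons': 0, 'em_dashes': 1, 'not_x_its_y': 1}
```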
The US system is built so that taking campaign donations is important for getting elected. It's not like, for example, the German system, where a politician can rely on the support of his party to get into parliament.
If you want to affect the US political system, it's likely necessary to cooperate with politicians who do take donations to their campaign funds.
It's also worth noting that donations to PACs are probably worse than direct campaign donations when it comes to lobbyist power.
A serious political engine could have the money to defend against lawsuits, but the more money you have, the more it's worth suing you. (I think that, at the very least, having someone who specializes in handling all the hassle of getting sued would be worth it.)
This suggests that the organization with the money to defend against lawsuits should not be the same organization as the one making the potentially libelous claims.
There are broad organizations like Foundation for Individual Rights and Expression (FIRE) that can do that. You could also fund an organization that's more specific about defending people within your coalition.
An over-broad skepticism of experts risks turning people into the kind of credulous fools who try to heal themselves with the powers of quartz crystals.
There's a reason why I spoke about generally being skeptical. The person who easily accepts claims about the healing powers of quartz crystals is not broadly skeptical. They are not the person who often says "I don't know".
Especially if you apply some common sense, and if you keep track of which experts appear to be full of it (e.g., the replication crisis).
The replication crisis is about the psychology community getting much better at getting rid of bullshit. Before the crisis, you could have listened to Feynman's cargo cult science speech, heard him explain why rat psychology is cargo cult science, and observed that the same criticisms apply to most of psychology.
Fields of science that behave like what Feynman describes as cargo cult science but that haven't had their replication crisis are less trustworthy than post-replication-crisis psychology. Post-replication-crisis psychology still isn't perfect, but it's a step up.
There are many cases where systematically increased transparency that reveals problems in an expert community should get you to trust that community more, because it has found ways to reduce those problems.
If you ask "What do I do if I don't know?", the answer is to make sure you have decent feedback systems that allow you to change course if what you are doing isn't working.
There's the policy of generally being more skeptical, both of claims that something is true and of claims that something is false, and of more often saying "I don't know".
I think the key difference between a normal guy who believes in God and someone in a psych ward is often that the person who believes in God does so because it's what people in authority told them, while the person in the psych ward thought for themselves and came up with their delusion on their own. This often means their beliefs are self-referential in a way that prevents them from updating due to external feedback.
If you believe that the word "rational" is supposed to mean something along the lines of "taking actions that systematically optimize for winning", believing something just because an authority told you can sometimes be irrational and sometimes rational.
If you want to talk well about behavior you don't like, it makes sense to use words that refer to the reason why the behavior is bad. Credulous or gullible might be words that better describe a lot of normie behavior. The person who believes in God just because his parents and teachers told him so is credulous.
Given that most of the models value Kenyan lives more than other lives, the thesis that Kenyan language use drives LLM behavior here is quite interesting.