Random Attempts at Applied Rationality
Using Credence Calibration for Everything
NLP and other Self-Improvement
The Grueling Subject
Medical Paradigms


That setup doesn't give you a randomized controlled trial, which is what's usually meant by the term clinical trial.

The system has a lot of incentives against doctors cooperating with illegal clinical trials. I don't think there's a notable example of anyone who has pulled off a comparable trial, which suggests that it's hard.

Clinical trials are highly regulated. The median cost of a clinical trial is on the order of US$19 million. Do you have that kind of money available to run a clinical trial?

Whether someone has epistemic virtue depends on whether they use the epistemic tools available to them. We have made a lot of progress in epistemics over the last hundred years.

This post looks to me like it's not living up to any epistemic virtues championed by the rationality community.

When we talk about predictions in rationality, we mean statements that come with a probability of whether or not a future event will happen.

You lay out a thesis, but you don't make an argument for why I should believe it. You are just saying what you believe to be true, not why you believe it.

The fact that you believe someone would run a clinical trial because you wrote the post also suggests that you are a bit delusional about how things work.

There's a lot of material to read. Part of being good at reading is spending one's attention in the most effective way and not wasting it with low-value content. 

If you want your proposed solution attributed to you, writing it in a style that people actually want to engage with, rather than in "your personal voice", would be the straightforward choice.

Larry McEnerney is great at explaining what writing is about. 

What do you care more about? Getting to write in "your personal voice" or getting your ideas well received?

I'm definitely a crank, but I personally feel like I'm onto something?

That's quite common for cranks ;)

If the ideas you want to propose are unorthodox, try to write in the most orthodox style in the venue you are addressing. 

Look at how posts that have high karma are written and try to write your own post in the same style. 

Secondly, you can take your post and tell ChatGPT that you want to post it on LessWrong and ask it what problems people are likely to have with the post. 

However, it's as if the higher echelons are trapped in office politics and don't really seem to realize what sort of implications will follow if they let themselves be gamed by malicious actors.

It's quite ironic that you say that while at the same time speaking against actions that are meant to make it harder for malicious actors to game Wikipedia.

Wikipedia isn't perfect, but all decisions have tradeoffs, and when you don't think about those, you don't really have a basis for improving anything.

It's amplified to a large magnitude on Wikipedia, which acts like a monopoly on the knowledge market.

There are plenty of different ways knowledge is published on the web, and Wikipedia does not have a monopoly on knowledge. What it has is a community that, for all its flaws, has a decent process that produces valuable outcomes.

Nobody has found a way to build a community around this task that works better than Wikipedia's.

When negotiating, it can be useful to be open to outcomes that are a net destruction of value, even if such an outcome is not what you ideally want.
