drgunn · 5mo

Fanelli is a good, if dated, reference for this. Another important point is that there are levels of misconduct in research, ranging from bad authorship practices to outright fabrication of results, with the less severe practices being relatively more common: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4269469/

Aside from all that, there's irreproducibility, which doesn't arise from any kind of deliberate misconduct, but still pollutes the epistemic commons: https://www.cos.io/rpcb

drgunn · 6mo

As someone with experience in BSL-3 labs, I find BSL a good metaphor. The big issue with the RSP proposal is that it's still just a set of voluntary commitments, which could undermine progress on real risk management by giving policymakers a way to make it look like they've done something without really doing anything. It would be much better with input from risk management professionals.

drgunn · 6mo

I'm confused. The protest is now listed at 4. Have you coordinated with them?

drgunn · 7mo

I like it, but it feels like you could have worked snakes in there somehow: https://www.vectorsofmind.com/p/the-snake-cult-of-consciousness

drgunn · 7mo

X-risk discussions aren't immune from the "grab the mic" dynamics that affect every other cause advocacy community.

There will continue to be tactics such as "X distracts from Y" and "if you really cared about X you would ..." unless and until people who care about the cause for the cause's sake can identify and exclude those who care about the cause for the sake of the cultural and social capital that can be extracted from it. Inclusivity has such a positive-affect halo around it that this is hard to do, but it's really the only way.

A longer form of the argument: https://meaningness.com/geeks-mops-sociopaths

Answer by drgunn · Sep 24, 2023

I'm not available (I'll be clear about that up front), but I am in my late 40s, in case that helps anyone update their priors about the demographics. YMMV as to whether I'm intellectually interesting ;-)

drgunn · 7mo

When I hear someone saying,

"Because it is asking to me to believe it completely if I believe it at all, I feel more comfortable choosing to consider it “false” on my terms, which is merely that it gave me no other choice because it defined itself to be false when it is only believed weakly."

what I think is "of course there are strong and weak beliefs!" But "true" and "false" are only defined relative to who is asking and why (in some cases), so you need to consider the context in which you're applying LoEM.

In other words, LoEM applies to "Does 2+2=4?" but it does not apply to "Is there water in the fridge?", unless the context is specified more carefully.

It's obviously an error to allow only 100% or 0% as truth values for all propositions, and it's perhaps less obviously an error to assign a proposition the same probability of being true across all possible contexts in which it might be evaluated.

More here: https://metarationality.com/refrigerator
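As a toy illustration of the fridge example (my own sketch, not from the linked essay; the contents and purposes are made up), the same proposition can come out true or false once the asker's purpose is part of the evaluation:

```python
# Toy sketch: "Is there water in the fridge?" has no context-free truth value;
# it depends on why the asker cares.

def water_in_fridge(contents, purpose):
    """Evaluate the proposition relative to the asker's purpose."""
    if purpose == "thirsty":
        # Only something you could actually drink counts as "water".
        return any(item in ("water bottle", "pitcher of water") for item in contents)
    if purpose == "storing moisture-sensitive film":
        # Watery foods and condensation count too, so almost any real fridge says yes.
        return any(item in ("water bottle", "pitcher of water", "lettuce", "condensation")
                   for item in contents)
    raise ValueError("truth value is underspecified without a context")

fridge = ["lettuce", "mustard"]
print(water_in_fridge(fridge, "thirsty"))                          # False
print(water_in_fridge(fridge, "storing moisture-sensitive film"))  # True
```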

drgunn · 7mo

You seem to be talking about "combinatorial explosion". It's a classic problem in AI, and I like John Vervaeke's approach to explaining how humans solve the problem for themselves. See: http://sites.utoronto.ca/jvcourses/jolc.pdf

No one has solved it for AI yet.
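For a concrete sense of the scale (an illustrative sketch of the general phenomenon, not anything from the linked paper): with b possible actions per step, the number of length-d action sequences is b^d, which quickly becomes intractable to enumerate, let alone evaluate.

```python
from itertools import product

# Illustrative sketch of combinatorial explosion: 4 options per step,
# so the number of plans of depth d is 4**d.

actions = ["a", "b", "c", "d"]

for depth in (5, 10, 15):
    print(depth, len(actions) ** depth)   # 1024, 1048576, 1073741824

# Even the shallow case already yields over a thousand branches to consider:
sequences = list(product(actions, repeat=5))
print(len(sequences))                     # 1024
```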

drgunn · 9mo

It's a very interesting model of the world that tastes 18 different brands of store-bought sauce and doesn't compare them to just making your own. Add "seriouseats" to any recipe-related query and you'll get recipes that both work and taste like they're supposed to. They may eventually fire their writers and replace them with AI trained on blogspam recipes, so exploit this knowledge while you can. Surprisingly few people know about it, given how much utility basic culinary knowledge can add to your life.

drgunn · 1y

What if AI safety and governance people published their papers on arXiv in addition to NBER or wherever? I know it's not the kind of stuff that arXiv accepts, but if I were looking for a near-term policy win, that might be one.
