Senior Research Scientist at NTT Research, Physics & Informatics Lab. jessriedel.com, jessriedel[at]gmail[dot]com
Does excalidraw have an advantage over a slides editor like PowerPoint or Keynote?
Let me also endorse the usefulness of AlternativeTo.net. Highly recommended.
You've given some toy numbers as a demonstration that the claim needn't be undermined, but the question is whether it's undermined by the actual numbers.
> Of course, the outcomes we’re interested in are hospitalization, severe Covid, and death. I’d expect the false positives on these to be lower than for having Covid at all, but across tens of thousands of people (the Israel study did still have thousands even in later periods), it’s not crazy that some people would be very ill with pneumonia and also get a false positive on Covid.
Does this observation undermine the claim of a general trend in effectiveness with increasing severity of disease? That is, if false positives bias the measured effectiveness downward, and if false positives are more frequent for less severe disease, then the upward trend is less robust, and our use of it to extrapolate into regimes where the error bars are naively large is less convincing.
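To make the direction of the bias concrete, here's a toy calculation (all numbers made up, not taken from the Israel study) showing how a false-positive rate applied to both arms attenuates measured effectiveness:

```python
# Toy illustration: false positives push measured vaccine
# effectiveness (VE) toward zero. Numbers are purely illustrative.

def measured_ve(rate_vax, rate_unvax, false_positive_rate):
    """VE = 1 - (attack rate in vaccinated / attack rate in unvaccinated),
    with the same false-positive rate added to both arms."""
    return 1 - (rate_vax + false_positive_rate) / (rate_unvax + false_positive_rate)

true_rate_unvax = 0.010   # 1% of unvaccinated have the outcome
true_rate_vax   = 0.001   # so the true VE is 90%

for fp in [0.0, 0.001, 0.005]:
    ve = measured_ve(true_rate_vax, true_rate_unvax, fp)
    print(f"false-positive rate {fp:.3f}: measured VE = {ve:.0%}")
# 0.000 -> 90%, 0.001 -> ~82%, 0.005 -> 60%
```

The point is just directional: a fixed false-positive rate pulls measured VE toward zero, and more so where that rate is large relative to the true outcome rates, which is exactly the worry for the less-severe endpoints.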
The automated tools in Zotero are good enough now that getting the complete BibTeX information doesn't really make it much easier. I can convert a DOI or arXiv number into a complete listing with one click, and I can do the same with a paper title in 2-3 clicks. The laborious part is (1) interacting with each author and (2) classifying/categorizing the paper.
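For anyone without Zotero handy, the same one-step DOI-to-BibTeX conversion can be scripted against the doi.org content-negotiation service (a minimal sketch, assuming network access and a Crossref- or DataCite-registered DOI; the example DOI is just for illustration):

```python
# Minimal sketch: turn a DOI into a BibTeX entry via doi.org
# content negotiation.
import urllib.request

def doi_to_bibtex(doi: str) -> str:
    req = urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/x-bibtex"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

print(doi_to_bibtex("10.1103/RevModPhys.75.715"))  # example DOI
```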
Looks fine, thanks.
Does the org have an official stance? I've seen people write it both ways. Happy to defer to you on this, so I've edited.
If we decide to expand the database in 2021 to attempt comprehensive coverage of blog posts, then a machine-readable citation system would be extremely helpful. However, to do that we would need to decide on some method for sorting/filtering the posts, which is going to depend on what the community finds most interesting. E.g., do we want to compare blog posts to journal articles, or should the analyses remain mostly separate? Are we going to crowd-source the filtering by category and organization, or use some sort of automated guessing based on authorship tags on the post? How expansive should the database be regarding topic?
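For concreteness, a machine-readable record for a single blog post might look something like the sketch below; every field name is hypothetical, and each comment flags one of the decisions above rather than a settled spec.

```python
# Hypothetical record schema for one blog post in the database.
# Field names and structure are illustrative only.
post_record = {
    "title": "...",                 # placeholder
    "url": "...",
    "authors": ["..."],             # from authorship tags, or crowd-sourced?
    "venue": "AI Alignment Forum",  # compare to journals, or analyze separately?
    "date": "2021-01-01",
    "categories": ["..."],          # crowd-sourced vs. automated guessing
    "organization": None,           # affiliation, if any
    "upvotes": 0,                   # forum-specific notability signals
    "num_comments": 0,
}
```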
Currently, of the 358 web items in our database, almost half (161) are blog posts from the AI Alignment Forum (106), LessWrong (38), or the Effective Altruism Forum (17). (I emphasize that, as mentioned in the post, our inclusion procedure for web content was pretty random.) Since these don't collect citations on Google Scholar, some sort of data on them (number of comments and upvotes) would be very useful for surfacing the most notable posts.
Somewhat contra Alex's example of a tree, I am struck by the comprehensibility of biological organisms. If, before I knew any biology, you had told me only that (1) animals are mechanistic, (2) are in fact composed of trillions of microscopic machines, and (3) were the result of a search process like evolution, then the first time I looked at the inside of an animal I think I would have expected absolutely *nothing* that could be macroscopically understood. I would have expected a crazy mesh of magic material that operated at a level way outside my ability to understand without (impossibly) constructing a mental model of the entire thing from the bottom up. And indeed, if animals had been designed through a one-shot unstructured search, I think this is what they would be like.
In reality, of course, animals have macroscopic parts that can be partially understood. There's a tube food passes through, with multiple food-processing organs attached. There are bones for structure, muscles to pull, and tendons to transmit that force. And the main computation for directing the animal on large scales takes place in a central location (the brain).
We can tell and understand a post-hoc story about why the animal works, as a machine, and it's sorta right. That animals have a strong amount of design, making this possible, seems to be related to the iterated search-and-evaluate process that evolution used; it was not a one-shot unstructured search.
At the least, this suggests that if search vs. design identifies a good dichotomy or a good axis, it is an axis/dichotomy that is fundamental, and it arises way before human-level intelligence.
You're drawing a philosophical distinction based on a particular ontology of the wavefunction. A simpler version arises in classical electromagnetism: we can integrate out the charges and describe the world entirely as an evolving state of the E&M field, with the charges appearing as weird source terms, or we can do the opposite and integrate out the E&M field to get a theory of charges moving under weird force laws. These are equivalent descriptions in that they are observationally indistinguishable.
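Schematically, the "integrate out the field" direction is a standard textbook manipulation (written here up to sign and gauge-fixing conventions):

```latex
% Field coupled to prescribed currents:
S[A, J] = \int d^4x \left( -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + A_\mu J^\mu \right)

% A enters quadratically, so solving its equation of motion and substituting
% back leaves a direct current-current interaction mediated by the Green's
% function G of the field equations:
S_{\mathrm{int}}[J] = \tfrac{1}{2} \int d^4x \, d^4y \; J^\mu(x) \, G_{\mu\nu}(x-y) \, J^\nu(y)
```

Either description reproduces the same motions of the charges; the difference is only in which degrees of freedom we choose to carry in the ontology.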