Here's what I usually try when I want to get the full text of an academic paper:

1. Give it a DOI link (e.g. https://doi.org/...) and then, if that doesn't work, give it a link to the paper's page at an academic journal (e.g. https://www.sciencedirect.com/science...).
2. Search for "name of paper in quotes" filetype:pdf.
3. If that fails, search for "name of paper in quotes" and look at a few of the results if they seem promising. (Again, I may find a different version of the paper than the one I was looking for, which is usually but not always fine.)

I would add Semantic Scholar to the list. It gives consistently better search results than Google Scholar and has a better interface. I've also found a really difficult-to-find paper on pre-print websites once or twice.
Thanks for the suggestion! I'll be trying it out and adding it to the list if I find it useful.
I'm really confused by this passage from The Six Mistakes Executives Make in Risk Management (Taleb, Goldstein, Spitznagel):
We asked participants in an experiment: “You are on vacation in a foreign country and are considering flying a local airline to see a special island. Safety statistics show that, on average, there has been one crash every 1,000 years on this airline. It is unlikely you’ll visit this part of the world again. Would you take the flight?” All the respondents said they would.
We then changed the second sentence so it read: “Safety statistics show that, on average, one in 1,000 flights on this airline has crashed.” Only 70% of the sample said they would take the flight. In both cases, the chance of a crash is 1 in 1,000; the latter formulation simply sounds more risky.
One crash every 1,000 years is only the same as one crash in 1,000 flights if the airline averages exactly one flight per year. I guess they must have stipulated that in the experiment (for which no citation is given), because otherwise it's perfectly rational to conclude that the first formulation describes a safer airline, since an airline generally operates far more than one flight per year.
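To make the arithmetic concrete, here is a minimal sketch of the two readings, assuming a hypothetical airline that flies 10,000 flights per year (the flight count is my own illustrative number, not from the experiment):

```python
# Reading 1: "one crash every 1,000 years", for an airline
# flying 10,000 flights per year (hypothetical figure).
flights_per_year = 10_000
crashes_per_year = 1 / 1_000           # one crash per 1,000 years
risk_reading_1 = crashes_per_year / flights_per_year

# Reading 2: "one in 1,000 flights has crashed" -- per-flight risk.
risk_reading_2 = 1 / 1_000

print(risk_reading_1)  # 1e-07, i.e. 1 in 10,000,000 flights
print(risk_reading_2)  # 0.001, i.e. 1 in 1,000 flights
```

Under this assumption the first formulation implies a per-flight risk 10,000 times lower than the second, so the two are only equivalent in the degenerate one-flight-per-year case.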
A few months ago I wrote a post about Game B. The summary:
I describe Game B, a worldview and community that aims to forge a new and better kind of society. It calls the status quo Game A and what comes after it Game B. Game A is the activity we've been engaged in at least since the dawn of civilisation: a Molochian competition over resources. Game B is a new equilibrium, a new kind of society that's not plagued by collective action problems.
While I agree that collective action problems (broadly construed) are crucial in any model of catastrophic risk, I think that
- civilisations like our current one are not inherently self-terminating (75% confidence);
- there are already many resources allocated to solving collective action problems (85% confidence); and
- Game B is unnecessarily vague (90% confidence) and suffers from a lack of tangible feedback loops (85% confidence).
I think it may be of interest to some LW users, though it didn't feel on-topic enough to post in full here.