Somewhat related, though different in various ways, is this post by Bryan Caplan: https://www.econlib.org/the-cause-of-what-i-feel-is-what-i-do-how-i-eliminate-pain/
Good point! It could be that both kinds of mental exercise (excess stimulation and lack of stimulation) are important for building mental strength; modern society provides the former in abundance (and particularly so for LessWrong readers!), so the form of exercise we're constrained on is the lack-of-stimulation kind (and that's where meditation helps). How far-fetched does that sound?
An anonymous friend to whom I sent this post writes:
...He has a good point that most people just want to do the universal “safety” precautions. I think a big reason that he doesn’t mention is that reasonable precautions are how all businesses defend themselves from lawsuits (e.g., sexual harassment and DEI training); as long as they take the reasonable precautions, they are immune from lawsuits. But I don’t buy “safety” as an explanation for what policies are possible. It sounds like a just-so story for why we are in the mess that we are in. Vaccines are...
I just got my booster dose today (December 24) and intend to monitor closely. I'll be regularly updating https://github.com/vipulnaik/diet-exercise-health/blob/master/notes/2021-12-24-pfizer-covid-vaccine-booster-dose.md with temperature readings and subjective details of my experience.
I did similar logging after the second dose, which you can see here: https://github.com/vipulnaik/diet-exercise-health/blob/master/notes/2021-06-25-pfizer-covid-vaccine-dose-2.md
Thanks for the feedback! It seems like you're saying I should first have done "negotiation techniques" and then "do these negotiation techniques have a place in rational discourse?" as separate sections. So if we make a table with rows as techniques and columns as lenses, then I should have traversed it column-major instead of row-major (toy sketch below).
Did I misunderstand or miss an angle to what you're saying?
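To make sure I'm reading you right, here's a toy sketch of the two traversal orders (the technique and lens names are placeholders, not the post's actual structure):

```python
def discuss(technique, lens):
    print(f"{technique}, examined under the lens '{lens}'")

# Placeholder names, purely for illustration.
techniques = ["mirroring", "labeling", "calibrated questions"]
lenses = ["what the technique is", "its place in rational discourse"]

# Row-major (what I did): one technique at a time, through every lens.
for technique in techniques:
    for lens in lenses:
        discuss(technique, lens)

# Column-major (your suggestion, as I understand it): one lens at a time,
# across all techniques.
for lens in lenses:
    for technique in techniques:
        discuss(technique, lens)
```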
Good point! Voss talks a bit about how many of these techniques feel odd. Two points he makes:
I would also add...
We cover a larger period in the overall summary and full timeline. The summary by year starts in 2013 because (it appears that) that's around when enough started happening each year to merit yearly summaries. We might expand it a little further into the past as we continue to expand the timeline.
<describe lockdowns as social engineering>
Did you intend to expand this?
<Michael Mina stuff here>
Did you intend to expand this?
I did some rewording of the post that made it a little wordier; fingers crossed that that part has now become less confusing.
Thank you for the feedback (and also for discussing this at length, which gave me a better understanding of the nuances). I've changed it to a clumsier but hopefully more what-you-see-is-what-I-mean term: https://www.lesswrong.com/posts/mRkWTpH9mb8Wdpcn5/coronavirus-california-case-growth?commentId=GHSEwZwR2TSkyzpdm
Thank you for the feedback. I agree with Lukas Gloor's reply below that the choice of term is confusing as it differs from what people may intuitively think "true cases" means. I also agree with his remark that setting terminology that is consistent with reality isn't bad in and of itself.
I have therefore changed "true cases" to "true currently-or-eventually-symptomatic cases". I think that provides the level of precision needed for our purposes. I haven't found a better term after some searching (though not a lot); however, I'm happy to change to a more concise and medically accepted term if I get to learn of one.
What I wrote there was assuming that the number of new true cases drops to a fairly low level. Whether that happens now or a week or two or three later is unclear; if the 2 -> 3 backlog is growing, then resolving that backlog will add more delay.
I posited us already being at this point as the "optimistic" scenario.
I'll reword the post to clarify this.
Directly visiting http://foretold.io gives an ERR_NAME_NOT_RESOLVED. Can you make it so that foretold.io redirects to www.foretold.io?
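For what it's worth, a minimal Python sketch of the check (Chrome's ERR_NAME_NOT_RESOLVED corresponds to the DNS lookup failing):

```python
import socket

# Compare DNS resolution for the bare domain and the www subdomain.
for host in ["foretold.io", "www.foretold.io"]:
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(host, 443)}
        print(f"{host} resolves to {sorted(addrs)}")
    except socket.gaierror as exc:
        print(f"{host} fails to resolve: {exc}")
```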
That's a normal part of life :). Anything that I decide to do on a future day, I'll copy/paste over there, but I usually won't delete the items from the checklist for the day where I didn't complete them (thereby creating a record of things I expected or hoped to do, but didn't).
For instance, at https://github.com/vipulnaik/daily-updates/issues/54 I have two undone items.
There is some related stuff by Carl Shulman here: https://www.greaterwrong.com/posts/QSHwKqyY4GAXKi9tX/a-personal-history-of-involvement-with-effective-altruism#comment-h9YpvcjaLxpr4hd22 that largely agrees with what I said.
My understanding is that the Against Malaria Foundation is a relatively small player in the space of ending malaria, and it's not clear that funders who wish to make a significant dent in malaria would choose to donate to AMF.
One of the reasons GiveWell chose AMF is that there's a clear marginal value of small donation amounts in AMF's operational model -- with a few extra million dollars they can finance bednet distribution in another region. It's not necessarily that AMF itself is the most effective charity to donate to to end malaria -- i...
I tried looking in the IRS Form 990 dataset on Amazon S3, specifically searching the text files for forms published in 2017 and 2016.
I found no match for (case-insensitive) openai, other than one organization that was clearly different (its name had "openair" in it). Searching (case-insensitive) "open ai" gave matches that all had "open air" or "open aid" in them. So it seems like either they have a really weird legal name or their Form 990 has not yet been released. Googling didn't reveal any articles of incorporation or legal...
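For reference, a rough sketch of the kind of search I ran (the form990/ directory of extracted text files reflects my local setup, not the dataset's own layout):

```python
import re
from pathlib import Path

# Case-insensitive match for "openai" and "open ai"; note that "open air"
# and "open aid" begin with "open ai", which explains the false positives.
pattern = re.compile(r"open\s?ai", re.IGNORECASE)

for path in Path("form990").glob("*.txt"):
    for match in pattern.finditer(path.read_text(errors="ignore")):
        print(path.name, repr(match.group(0)))
```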
In my experience, writing full-fledged, thoroughly researched material is pretty time-consuming, and if you push that out to the audience immediately, (1) you've sunk a lot of time and effort that the audience may not appreciate or care about, and (2) you might have too large an inferential gap with the audience for them to meaningfully engage.
The alternative I've been toying with is something like this: when I'm roughly halfway through an investigation, I publish a short post that describes my tentative conclusions, without fully rigorous backing...
FWIW, my impression is that data on Wikipedia has gotten somewhat more accurate over time, due to the push for more citations, though I think much of this effect occurred before the decline started. I think the push for accuracy has traded off a lot against growth of content (both growth in the number of pages and growth in the amount of data on each page). These are crude impressions (I've read some relevant research, but I don't have strong reason to believe it should be decisive in this evaluation), but I'm curious to hear what specific impressions you have that are contrary to this.
If you have more fine-grained data at your disposal on different topics and how much each has grown or shrunk in terms of number of pages, data available on each page, and accuracy, please share :).
In the case of Wikipedia, I think the aspects of quality that correlate most with explaining pageviews are readily proxied by quantity. Specifically, the main quality factors in people reading a Wikipedia page are (a) the existence of the page (!), and (b) whether the page has the stuff they were looking for. I proxied the first by number of pages, and the second by length of the pages that already existed. Admittedly, there are a lot more subtleties to quality measurement (which I can go into in depth at some other point), some of which can have indirect, long...
Great point. As somebody who has been in the crosshairs of Wikipedia mods (see ANI), my bias would push me to agree :). However, despite what I see as problems with Wikipedia mod culture, it remains true that Wikipedia has grown quite a bit, both in number of articles and in length of already existing articles, over the time period when pageviews declined. I suspect the culture is probably a factor in that it represents an opportunity cost: a better culture might have led to an (even) better Wikipedia that would not have declined in pageviews so much, but I do...
Great points. As I noted in the post, search and social media are the two most likely proximal mechanisms of causation for the part of the decline that's real. But neither may represent the "ultimate" cause: the growth of alternate content sources, or better marketing by them, or changes in user habits, might be what's driving the changes in social media and search traffic patterns (in the sense that the reason Google's showing different results, or Facebook is making some content easier to share, is itself driven by some combination...
The Wikimedia Foundation has not ignored the decline. For instance, they discuss the overall trends in detail in their quarterly readership metrics reports, the latest of which is at https://commons.wikimedia.org/wiki/File:Wikimedia_Foundation_Readers_metrics_Q4_2016-17_(Apr-Jun_2017).pdf. The main differences between what they cover and what I intend to cover are (a) they only cover overall rather than per-page pageviews, (b) they focus more on year-over-year comparisons than long-run trends, and (c) related to (b), they don't discuss the long-run causes.
The 2017 SSC Survey had 5500 respondents. Presumably this survey was more widely visible and available than mine (which was one link in the middle of a long link list).
https://slatestarcodex.com/2017/03/17/ssc-survey-2017-results/
Varies heavily by context. Typical alternatives:
(a) Google's own answers for simple questions.
(b) Transactional websites for search terms that denote possible purchase intent, or other websites that are action-oriented (e.g., Yelp reviews).
(c) More "user-friendly" explanation sites (e.g., for medical terminology, a website that explains it in a more friendly style, or WikiHow)
(d) Subject-specific references (some overlap with (c), but could also include domain Wikias, or other wikis)
(e) When the search term is trending because of a recent news item, then links to the news item (even if the search query itself does not specify the associated news)
Interesting. I suspect that even among verbal elites, there are further splits in the type of consumption. Some people are heavy on reading books since they want a full, cohesive story of what's happening, whereas others consume information in smaller bits, building pieces of knowledge across different domains. The latter would probably use Wikipedia more.
Similarly, some people like opinion-rich material whereas others want factual summaries more. The factual summary camp probably uses Wikipedia more.
However, I don't know if there are easy ways of segmenting...
Good idea, but I don't think he does the census that frequently. The most recent one I can find is from 2014: http://slatestarcodex.com/2015/11/04/2014-ssc-survey-results/
The annual LessWrong survey might be another place to consider putting it. I don't know who's responsible for doing it in 2017, but when I find out I'll ask them.
It's not too late, if I do so decide :). In other words, it's always possible to spend later for larger samples, if that actually turns out to be something I want to do.
Right now, I think that:
It'll be pretty expensive: I'd probably want to run it through several different survey tools, since each has its strengths and weaknesses (so SurveyMonkey, Google Surveys, and maybe Survata and Mechanical Turk as well). Then, with each, I'd need 1000+ responses to be able to regress against all variables and variable pairs. The costs do add up quickly to over a thousand dollars.
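As a back-of-the-envelope sketch (the per-response prices are my rough guesses, not quotes from the platforms):

```python
# Assumed per-response prices in USD; actual pricing varies with targeting
# and survey length.
price_per_response = {
    "SurveyMonkey Audience": 1.00,
    "Google Surveys": 0.10,
    "Survata": 1.00,
    "Mechanical Turk": 0.50,
}
responses_per_platform = 1000

total = sum(p * responses_per_platform for p in price_per_response.values())
print(f"total ≈ ${total:,.0f}")  # ≈ $2,600 under these assumed prices
```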
I've published a new version of this post where the takeaways are more clearly highlighted (I think!). The post is longer but the takeaways (which are summarized on top) should be quick to browse if you're interested.
It's at http://lesswrong.com/r/discussion/lw/odb/wikipedia_usage_survey_results/
Good point! Something I thought a bit about but didn't get around to discussing in this post. The Slate Star Codex audience returned a total of 618 responses. I don't have a very good idea of how many people read the SSC blog carefully enough to go through all the links, but my best guess is that that number is in the low thousands. If that's the case, the response rate is 15% or higher. This is still low but not that low.
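For concreteness (the reader count is my guess; only the response count is from the survey):

```python
responses = 618           # SSC responses received
careful_readers = 4000    # assumption: "low thousands" who read through the link list
print(f"response rate ≈ {responses / careful_readers:.0%}")  # ≈ 15%
```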
Another way of framing this: how low would the response rate have to be for the true SSC readership to be like the SurveyMonkey Audience ...
Per the suggestion at Improve comments by tagging claims, here is a comment to collect discussion of the third takeaway:
The gap between elite samples of Wikipedia users and general United States Internet users is significantly greater than the gap between the different demographics within the United States that we measured. It is comparable to the gap between United States Internet users and Internet users in low-income countries.
I'm still a little surprised at the low effect sizes of demographic differences within the United States. Still, a lot of q...
Per the suggestion at Improve comments by tagging claims, here is a comment to collect discussion of the second takeaway:
we’ve revised upward our estimate of the impact per pageview, and revised downward our estimate of the broad appeal and reach of Wikipedia.
A lot of this comes down to whether the indicators we've identified for heavy Wikipedia use actually are things to be optimistic about. Is the typical SSC or LessWrong reader better able to use information gleaned from Wikipedia?
And what about the alleged downside that Wikipedia is being read by f...
On a related note, one of the research suggestions from famous LessWronger Carl Shulman mentions Wikipedia:
Try to get datasets (Wikipedia lists, World Bank info, USDA, etc.) as a primary step in thinking about a question.
From his research advice document
Per the suggestion at Improve comments by tagging claims, here is a comment to collect discussion of the first takeaway:
Wikipedia consumption is heavily skewed toward a profile of “elite” people, and these people use the site in qualitatively different ways.
I didn't talk about it much in the post since it would be too speculative, but I'm interested in more concrete thoughts on predicting what websites or online communities would have a high degree of Wikipedia use. The SurveyMonkey Audience and Google Surveys results plausibly show that crude demograp...
I like the spirit of the suggestion here, but have at least two major differences of opinion regarding:
It could also be a good way for the Internets to give up on trying to talk in a forum where you are around.
The impression I form based on this is that the main blocker to LessWrong revitalization is people writing sufficiently attractive posts. This seems to mostly agree with the emerging consensus in the comments, but the empirical backing from the survey is nice. Also, it's good to know that software or interface improvements aren't a big blocker.
As for what's blocking content creators from contributing to LessWrong, here are a few hypotheses that don't seem to have been given as much attention as I'd like:
I compiled some previous discussion here, but the troll downvoted it below visibility (he's been very active in this thread).
Crazy idea to address point #2: What if posts were made anonymously by default, and only became nonymous once they were upvoted past a certain threshold? This lets you take credit if your post is well-received while lessening the punishment if your post is poorly received.
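A minimal sketch of that rule (the threshold and names are made up):

```python
REVEAL_THRESHOLD = 10  # assumed karma needed before authorship is revealed

def display_author(post_karma: int, author_name: str) -> str:
    """Show the real author only once the post clears the karma threshold."""
    return author_name if post_karma >= REVEAL_THRESHOLD else "anonymous"

print(display_author(3, "alice"))   # -> anonymous
print(display_author(25, "alice"))  # -> alice
```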
I added logs of two further ChatGPT sessions, one of which repeated many of the prompts I used here, tested against the 2023-01-09 version of ChatGPT: https://github.com/vipulnaik/working-drafts/commit/427f5997d48d78c69e3e16eeca99f0b22dc3ffd3
I had originally been thinking of formatting these into a blog post or posts, and I might still do so, but probably not for the next two months, so just sharing the raw logs for now so that people reading the comments on this post see my update.