It's a funny time in Slovakia now; as if someone had declared a call: "Irrational people of all beliefs, unite!"
It started two years ago with the so-called "Gorilla scandal". (TL;DR: Not a real gorilla, just the nickname of a criminal who was investigated by the secret service. Wiretaps on his house revealed that almost all of our political parties, both left and right, participated in economic crime, cooperating with the same small group of people. The transcripts of the investigation were leaked to the internet.) It was followed by a few demonstrations, after which pretty much nothing happened. Realizing that most media in our country actually belong to people involved in the scandal, and so have no incentive to investigate and report on it, an internet radio station called "the free broadcast" was created. From that point, it gradually went downhill.
By deciding to focus on 'news that have no place in the official media', the radio was gradually selecting for hoaxes, conspiracy theories, etc. This probably led to the saner people leaving the station, concentrating the irrationality of those who remained. One year later, it was mos... (read more)
I used to wish that people were more interested in how society works, would go outside their homes and try to improve things. After seeing this, I just wish they all lost interest, returned home, and started watching some sitcoms.
I wasn't sure whether this largely political comment was okay to write on LW, but then I realized LW is pretty much the only place I know where I could write such a comment without receiving verbal abuse, racist comments, explanations that homosexuality really is the greatest danger of our civilization, or offended complaints about how I am insensitive towards religion. Recently, LW feels like an island of sanity in the vast ocean of madness.
Perhaps this will give me more energy to promote rationality in my country. I already arranged another LW meetup after a few months' pause.
Martin Odersky, the inventor of the Scala programming language, writes regarding a recent rant against Scala publicized on Hacker News:
Seems hardly a weekend goes by these days without another Scala rant that makes the Hacker news frontpage. [...]
There certainly seems to be a grand coalition of people who want to attack Scala. Since this has been going on for a while, and the points of critique are usually somewhere between unbalanced and ridiculous, I have been curious why this is. I mean you can find things that suck (by some definition of "suck") in any language, why is everybody attacking Scala? Why do you not see articles of Rubyists attacking Python or of Haskellers attacking Clojure?
The quotation is remarkable for its absolute lack of awareness of selection bias. Odersky doesn't appear to even consider the possibility that he might be noticing the anti-Scala rants more readily than rants against other programming languages. Not having considered the possibility of the bias, he has no chance to try and correct for it. The wildly distorted impression he's formed leads him to language bordering on conspiracy theories ("grand coalition of people who want to a... (read more)
I think if you read what he wrote less ungenerously (e.g. as if you were reading a mailing list post rather than something intended as a bulletproof philosophical argument), you'll see that his implicit point - that he's just talking about the reaction to Scala in particular - is clear enough, and - and this is the important point - the eventual discussion is productive in terms of bringing up ideas for making Scala more suitable for its intended audience. Given that his post inspired just the sort of discussion he was after, I do think you're being a bit harsh on him.
I don't know that we disagree. I will cheerfully agree that Martin's email was
relatively measured, the discussion it kicked off was productive, and that his
tone was neither bitter nor toxic. That doesn't detract from my point - that as
far as I can make out, his perception of relative attack frequency is heavily
selection-biased, and he's unaware of this danger. It is true that in this case
the bias did not lead to toxic consequences, but I never said it did. The bias
itself here is remarkable.
If my being a bit harsh on him basically consists of my not saying the above in
the original comment, I'll accept that; I could've noted in passing that the
discussion that resulted was at the end largely a friendly and productive one.
Yesterday I received the following message from user "admin" in my Less Wrong inbox:
We were unable to determine if there is a Less Wrong wiki account registered to your account. If you do not have an account and would like one, please go to your preferences page.
I got this, too. I was concerned that it might not be what it claimed to be,
and avoided clicking the link. I view with suspicion anything unexpected that
points me anywhere I might reasonably input login details.
2[anonymous]9y
I got it too. I think it was a typo in the URL which should instead appear as
your preferences page [http://lesswrong.com/prefs/wikiaccount/]
0Pfft9y
Does that link actually work for you? If I enter my password, it briefly says
"submitting" and the button moves to a different spot, but it doesn't seem to
create a wiki account.
0Metus9y
This seems like the answer. Can one of the admins validate that this was the
intended link?
1jackk9y
That private message was part of a new feature to encourage wiki participation,
by helping existing Less Wrong users onto wiki accounts. Unfortunately the link
to create an account didn't point to the right place.
If you tried to create a wiki account and had the brief flash of "submitting"
(like Pfft
[http://lesswrong.com/r/discussion/lw/j9e/open_thread_december_28_2013/a5iw]),
make sure you've got a validated email address associated with your account.
1hyporational9y
Got the message too, and that 404, then created an account with the form below
my username.
0Antiochus9y
Also did the same.
1ChrisHallquist9y
I got a similar / identical message. Anyone know what was up with that?
1Alejandro19y
Ditto.
0EvelynM9y
I got one as well. Link http://wiki.lesswrong.com/prefs/wikiaccount
[http://wiki.lesswrong.com/prefs/wikiaccount]
I am an Ashkenazi Jew. We are a population with many well-documented diseases tied to recessive alleles. It is unfair to force a minority population to pay massive sums of money just to find out our own genetic situation. This applies to genes such as BRCA1, mutations of which cause cancer, or the alleles which cause Tay-Sachs and autonomic neuropathy type III, all cases where the documentation is strong. Ashkenazic Jews are not the only group in this situation, and there are also bad alleles which are not more common in specific ethnic or racial groups. The individuals with those genes deserve the same benefits.
The FDA's move is a step in the wrong direction which interferes with the fundamental right to know about one's own body.
I added the last line in part to appeal to current left-wing attitudes about personal bodily integrity. I stole the less well-known disease from Yvain's excellent letter here, where I found out about yet one more fun disease potentially in my gene pool. I strongly recommend people read Yvain's letter.
One piece of common wisdom on LW is that if you expect that receiving a piece of information will make you update your beliefs in a certain direction, you might as well update already instead of waiting. I happened to think of one exception: if you expect that something will cause a change in your beliefs when it shouldn't, because it uses strong rhetorical techniques (e.g. highlighting highly unrepresentative examples) whose effect you can't fully eliminate even when you know that they're there.
(I have a feeling that this might have been discussed before, but I don't remember where in that case.)
One piece of common wisdom on LW is that if you expect that receiving a piece of information will make you update your beliefs in a certain direction, you might as well update already instead of waiting.
It's more like, if you expect (in the statistical sense) that you will rationally update your beliefs in some direction upon receiving some piece of evidence, then your current probability assignments are incoherent, and you should update on pain of irrationality. It's not just that you might as well update now instead of waiting. But this only applies if your expected future update is one that you rationally endorse. If you know that your future update will be irrational, that it is not going to be the appropriate response to the evidence presented, then your failure to update right now is not necessarily irrational. The proof of incoherence does not go through in this case.
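As a minimal formal sketch of the coherence point above (this is just conservation of expected evidence; the notation is added here for illustration):

\[
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)
\]

The prior is already the probability-weighted average of the possible posteriors, i.e. \(\mathbb{E}[P(H \mid E)] = P(H)\), so you cannot coherently expect the posterior to land above (or below) the prior no matter which way the evidence comes out.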
This seems like a breakdown in reflective consistency. Shouldn't you try to
actively counter/avoid the expected irrationality pressure, instead of
(irrationally and meekly) waiting for it to nudge your mind in a wrong
direction? Is there a specific example that prompted your comment? I can think
of some cases offhand. Say, you work at a failing company and you are required
to attend an all-hands pep talk by the CEO, who wants to keep the employee
morale up. There are multiple ways to avoid being swayed by rhetoric: not
listening, writing down possible arguments and counter arguments in advance,
listing the likely biases and fallacies the speaker will play on and making a
point of identifying and writing them down in real time, etc.
4Kaj_Sotala9y
No specific examples originally, but Yvain had a nice discussion about
persuasive crackpot theories in his old blog (now friends-locked, but I think
that sharing the below excerpt is okay), which seems like a good example:
As for trying to actively counter the effect of the misleading rhetoric, one can
certainly try, but they should also keep in mind that we're generally quite bad
at this. E.g. while not exactly the same thing, this bit from Misinformation and
its Correction [http://psi.sagepub.com/content/13/3/106.full] seems relevant:
0lmm9y
Sure, you should try to counter. But sometimes the costs of doing that are
higher than the losses that will result from an incorrect belief.
2Alejandro19y
This seems related [http://lesswrong.com/r/discussion/lw/7xp/on_selfdeception/],
though not exactly what you are asking for.
0ThrustVectoring9y
There's an intermediate step of believing things because you expect them to be
true (rather than merely convincing). It's fully corrected if you use
correlates-to-truth over convincingness for the update.
In other words, if you expect the fifth column more if you see sabotage, and
more if you don't see sabotage, then you can reduce that into just expecting the
fifth column more.
That's because the stylesheet link in the homepage is:
and that link should be to:
http://wiki.lesswrong.com/wiki/Lesswrong:Stylesheet
[http://wiki.lesswrong.com/wiki/Lesswrong:Stylesheet]
I've been teaching myself the basics of probability theory (I'm sixteen) but I'm having trouble on the first step. My basic definitions of probabilities are all frequentist, and I don't know a good Bayesian source appropriate for a secondary school student. Is Jaynes' PT:LOS able to be read by moi, given that I know basic set theory? If not, can anyone recommend a different textbook?
Jaynes's book probably requires a university undergraduate-level familiarity with probability theory to fully appreciate.
I'd say that for the time being you don't need to worry about bayesianism vs. frequentism. Just learn the basics of probability theory and learn how to solve problems.
Thanks for being the one commenter who told me how tough the book is - I'm
leaving it for now, and the below recommendation of 'Understanding Uncertainty'
was very useful for understanding what a probability is. After that, I've got
some basic probability textbooks waiting to go. Cheers.
8Douglas_Knight9y
It's worth knowing that what Jaynes calls "probability" everyone else calls
"statistics."
Generally, "probability theory" means studying well-specified random models. In
some sense this is frequentist, but in another sense the distinction does not
apply. Whereas "statistics" is about subjective ignorance.
0Lumifer9y
That terminology sounds strange to me.
I define statistics as a toolbox of methods to deal with uncertainty.
0Gvaerg9y
And simulation theory is kinda the opposite of statistics - whereas in
statistics you deduce the distribution from sample data, in simulation you
compute plausible sample data from a given distribution.
8pragmatist9y
If you're looking for an elementary introduction to Bayesian probability theory,
I recommend Dennis Lindley's Understanding Uncertainty
[http://www.amazon.com/Understanding-Uncertainty-Dennis-V-Lindley-ebook/dp/B001CBCPS6/ref=sr_1_1?ie=UTF8&qid=1386107488&sr=8-1&keywords=dennis+lindley].
A lot more accessible than Jaynes, but not dumbed down. It's informal, but it
covers a number of quite sophisticated topics.
Lindley [http://en.wikipedia.org/wiki/Dennis_Lindley] is one of the architects
of the Bayesian conspiracy.
5Ben Pace9y
This recommendation has helped me out a lot, I might do a write-up of the book
as a LW post at some point in the future. Thanks.
7Oscar_Cunningham9y
Given that PT:LOS is free online you can just try reading it. Even if you don't
understand all the maths (do you know some calculus?) you'll still be able to
read his verbal explanations of things, which will give you a good idea of the
distinction between frequentist statistics and Bayes.
7gjm9y
IIRC the version that's online is not the same as the dead-tree version you can
buy; the latter has extra material and bugfixes. (I do, none the less, think
reading the online version is a good way for Benito to determine whether he
finds it approachable.)
1Oscar_Cunningham9y
Indeed. (Although the dead-tree version doesn't have that much extra material.
It mostly just has the "--Much more here!!!--" notices deleted.)
1JGWeissman9y
A good way to find out would be to try reading it.
With math, it's useful to be able to distinguish books you can't understand because you're missing prerequisite knowledge from books you can't understand because you just aren't reading them carefully enough. The prevailing wisdom seems to be that you can't really expect to be able to follow Jaynes through if you pick it up as your first serious textbook on probability.
An interesting factoid. Drawing implications is left as an exercise for the reader.
"...for two decades, all the Minuteman nuclear missiles in the US used the same eight-digit numeric passcode: 00000000. ... And while Secretary of Defense Robert McNamara directly oversaw the installation of PALs on the US-based ICBM arsenal, US Strategic Command generals almost immediately had the PAL codes all reset to 00000000 to ensure that the missiles were ready for use regardless of whether the president was available to give authorization." (source)
duplicate
[http://lesswrong.com/lw/12r/how_likely_is_a_failure_of_nuclear_deterrence/].
I'm surprised I can only find this one.
The original source is Bruce Blair, 2004
[https://web.archive.org/web/20120511191600/http://www.cdi.org/blair/permissive-action-links.cfm],
who has made related complaints since 1977
[https://web.archive.org/web/20120419032140/http://www.cdi.org/blair/terrorist-threat.cfm].
Supposedly Eric Schlosser's book (2013) is an independent source. Luke quotes it
at length here
[http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/9zkt],
but not about the zeros. The most common source is Steven Bellovin
[https://www.cs.columbia.edu/~smb/nsam-160/pal.html], who makes some historical
remarks here [http://nielsenhayden.com/makinglight/archives/015629.html#1617500]
more candidly than most accounts.
Looking for people older than me (I'm 26) to tell me their memories of what kind of nutrition messages they remember getting from Nutrition Authority Type People (USDA or whatever).
I ask because I read a bunch of Gary Taubes over the weekend, and at first glance his claims about what mainstream nutritionists have been saying strike me as... not what I've experienced, to put it mildly. In particular, the nutritiony stuff I learned as a kid was always pretty clear on sugary soda and snacks being bad for you. Charitable hypothesis: maybe mainstream nutrition messaging was much crazier in the 80s? I don't actually think this is likely but I thought I'd ask.
I may be a bit older than you're looking for (44, grew up in small town Indiana)
but it just so happens I was back in the US for Thanksgiving and happened to
discuss nutrition education with other members of my family.
All of the nutrition education I remember was structured in terms of the four
main food groups: meat, dairy, grain, fruit & vegetables - focusing on the idea
that these should all be represented in a balanced meal. We also were taught
about nutritional content, mainly which vitamins are represented in which food
groups (and which specific foods), but almost entirely separately from "meal
planning". This was hardly changed from the nutrition education my parents
received some 20 years previously.... although not surprising as a few of the
teachers were the same!
My younger siblings (38, 40) saw the introduction of the fifth food group, fats
& sugars as I recall, presented as bad things that should be avoided. Also the
presentation of the four food groups was somewhat altered, bringing nutritional
balance (and the "recommended daily allowance") a bit more to the forefront in
meal design.
(All of the above is based on our memories of nutrition education which may be
highly flawed!)
6A1987dM9y
What does Taubes say mainstream nutritionists said?
5ChrisHallquist9y
That they recommended that people reduce their fat intake (which is definitely
true) but then he tries to pin increased consumption of sugary crap on them
(which is much less credible). For example
[http://www.nytimes.com/2002/07/07/magazine/what-if-it-s-all-been-a-big-fat-lie.html]:
-2passive_fist9y
It doesn't sound like you're being neutral on this.
6ChrisHallquist9y
"Sugary crap" is just shorthand for "the sugary stuff everyone agrees is bad for
you." The badness of e.g. sugary soda is pretty uncontroversial among
nutritionists, "low-carb" or otherwise.
0passive_fist9y
It was my impression that dieticians recommend avoiding processed sugar because
of the lack of nutrients, thus making it easy for a diet high in processed sugar
to have too many calories and not enough nutrients. Also, that in people with a
genetic predisposition to insulin resistance, diets high in sugar have been
shown to be correlated with developing insulin resistance and diabetes.
I have never seen a professional dietician refer to 'sugary stuff' as 'bad for
you'.
0Lumifer9y
That terminology has always confused me. What, sucrose is not a nutrient? Why
not?
Not to mention that this is talking apples and oranges -- calories are a term
from the physics-level description and nutrients are a term from the
biochemistry-level description.
5hyporational9y
The correct word is micronutrients. Perhaps some people mistakenly interchange
the words.
I doubt anyone's confusing physics with biochemistry when they talk about these
things.
1Lumifer9y
Mass media uses "nutrients" in the sense of "a magical substance, akin to aether
or phlogiston, that makes you thin and healthy". It is mostly generated by
certificates of organic farming and is converted into its evil twin named
"calories" by a variety of substances, e.g. anything connected to GMOs.
2hyporational9y
Ok. You clearly have a different kind of mass media there.
"It's got electrolytes."
3passive_fist9y
You're right that sucrose can indeed be considered a nutrient, but I'm just
using the word to refer to essential nutrients, i.e. molecular groups that you
need to consume in your diet for the proper functioning of human biochemistry
and for which nothing else can substitute. As Nornagest says, these are
vitamins, minerals, essential amino acids and essential fatty acids. Sucrose is
not any of these, so it is not an essential nutrient.
I don't see why 'comparing apples and oranges' invalidates the argument, though.
What difference does it make if they refer to different processes?
I also agree that nutrition is extremely contentious and politically charged.
-1Lumifer9y
Well, essential nutrients are a somewhat different thing, but even that doesn't
really help. The issue here is that there is an unstated underlying assumption
that everyone needs all the essential nutrients and the more the better.
To give an example, iron is an essential nutrient. Without it you get anemia and
eventually die. So, should I consume more of this essential nutrient? In my
particular case, the answer happens to be no -- I have a bit too much iron in my
blood already.
Unsurprisingly, for many essential nutrients you can have too much as well as
too little. And yet the conventional wisdom is that the more nutrients the
better.
Human biochemistry is very complicated and all the public discourse about the
diet can manage is Less calories! More nutrients! Ugh.
(yes, I know, I'm overstating things for dramatic effect :-P)
2passive_fist9y
I agree with you that 'more nutrients!' is not sound advice, but again, I never
said anything like that, not even implicitly.
Human biochemistry is indeed very complicated. That's exactly why I responded to
ChrisHallquist's remark about 'sugar being bad', because I feel that that is
vastly oversimplifying the issues at hand. For instance, simple sugars like
fructose exist in fruit, and not necessarily in small amounts either. Yet I
don't think he would argue that you should avoid all fruit.
3Lumifer9y
I am not arguing against you...
Well, ChrisHallquist is reading Taubes and for Taubes insulin is the devil,
along with the carbs leading to it :-/
0A1987dM9y
What do you mean by small amounts? In the context of Taubes claiming that people
are drinking soda because they don't realize it's unhealthy, this
[http://xkcd.com/1035/] is the amount you're comparing it with. (For comparison,
that [http://www.sugarstacks.com/fruits.htm]'s the amount in fruits.)
1hyporational9y
I once tried to plan a very simple diet consisting of as few foodstuffs as
possible. Calculating the essential nutrient contents, I quickly realized
that's not possible, and that it's better to eat a little bit of everything to
get what you need – unless, of course, you take supplements.
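A sketch of why this is hard: "fewest foodstuffs meeting every requirement" is the classic diet problem from linear programming. The foods, nutrient rows, and requirement numbers below are made up for illustration:

    # Diet problem: minimize servings subject to nutrient requirements.
    # All foods and numbers are illustrative, not real nutrition data.
    import numpy as np
    from scipy.optimize import linprog

    # Rows: nutrients (vitamin C mg, iron mg, omega-3 g).
    # Columns: three hypothetical foods, content per serving.
    A = np.array([
        [95.0, 2.0, 0.0],   # vitamin C
        [ 0.5, 6.6, 2.9],   # iron
        [ 0.0, 0.0, 1.5],   # omega-3
    ])
    need = np.array([90.0, 8.0, 1.6])   # daily requirements

    # linprog minimizes c @ x subject to A_ub @ x <= b_ub;
    # negating turns "meet every requirement" into that form.
    res = linprog(c=np.ones(3), A_ub=-A, b_ub=-need, bounds=(0, None))
    print(res.success, res.x)   # too few foods -> res.success is False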
2Lumifer9y
Yes, that's the idea behind Soylent
[https://campaign.soylent.me/soylent-free-your-body] but I'm rather sceptical of
that concept.
0hyporational9y
Anyone else notice at least three of the soylent guys
[https://campaign.soylent.me/soylent-free-your-body] seem to have this unusual
flush on their cheeks? Is this just sheer vitality glowing from them or could
there be something
[https://www.google.com/search?q=malar+erythema&channel=new&source=lnms&tbm=isch&sa=X&ei=kRCfUp6-I8m8ygPDsILwCQ&ved=0CAkQ_AUoAQ&biw=980&bih=397]
else [https://en.wikipedia.org/wiki/Malar_rash] going on? :)
I've seen several pictures of Rob and his face seems to be constantly red.
3gwern9y
Do you know if their Soylent recipe uses carrots or other pigmented vegetables?
It could be an accumulation of the coloring. (This apparently happened to me as
an infant with carrots. Made my face red/orangish.)
2hyporational9y
The early version [http://robrhinehart.com/?p=424] contains carotenoids found in
pigmented vegetables, at least lycopene found in tomatoes, and alpha-carotene
found in carrots. It seems you'd get far fewer carotenoids from Soylent than
from just eating one tomato and one carrot per day.
He mentions "not very scientific, but the males in my family have always loved
tomatoes." Perhaps that's the explanation and not Soylent, although tomatoes
have only about a third the carotenoids of carrots, so you'd probably have to
eat ridiculous amounts of them to become red. Perhaps they love carrots too.
0gwern9y
Early recipe, and practically speaking, I don't know what the effects of one
tomato & carrot a day would be! Rhinehart and the others have been on Soylent
for, what, a year now? That's a long time for stuff to slowly accumulate. Most
people don't eat a single vegetable that routinely. During the summer I eat 1
tomato a day (we grow ours) without glowing, but then I don't eat any tomatoes
during spring/winter, which is disanalogous.
0[anonymous]9y
I didn't know that. Seems a likely explanation.
0A1987dM9y
Does anyone actually think that the optimal amount of calories is zero and the
optimal amount of nutrients is infinity? I haven't seen many people taking a
dozen multivitamins a day but otherwise fasting, so...
(If what they actually mean is that more people in the First World are eating
more calories than optimal than fewer, and vice versa for certain essential
nutrients, I'd guess they're probably right.)
Then again, it's hard for most people to think quantitatively rather than
qualitatively, but that doesn't seem to be a problem specific to nutrition.
2Lumifer9y
It's common for people to think that they (or others) should consume fewer
calories and more nutrients. They generally stop thinking before the question
of "how much more or less?" comes up.
0A1987dM9y
And sometimes they are right.
True that, but that doesn't seem to be specific to nutrition.
(That said, I am peeved by advice that assumes which way the listener is doing
wrong, e.g. “watch less TV and read more books” rather than “don't watch too
much TV and read enough books”.)
2kalium9y
Breatharians come close, but I guess the only nutrient they acknowledge is
sunlight/vitamin D.
0[anonymous]9y
Um, no. Nutrients are things your body needs to function. Some, but not all, of
them can be burned for calories. They can also be used for other things.
0Nornagest9y
In this context, I'd take "nutrients" to refer loosely to the set of things
other than food energy that we need to consider in diet: vitamins, dietary
minerals (other than sodium, usually), certain amino acids and types of fat, and
so forth. That doesn't map all that closely to the biochemical definition of a
nutrient, but I don't expect too much from pop science, especially not in a
field as contentious and politically charged as nutrition.
0Lumifer9y
Oh, I don't expect much from it at all, but unfortunately this terminology is
pervasive and, IMHO, serves to confuse and confound thinking on this topic.
-3Laoch9y
I'm sorry to say I expected more from you, Chris. How do you figure that
reducing fat intake is definitely true? Especially when you've read Gary Taubes.
3jaime20009y
He means that it is definitely true that they were advocating reducing fat
intake.
3Laoch9y
Wires crossed moment. Yes they were indeed; pity they were sooo wrong, and that
the word "fat" conflates a dietary meaning and a physiological energy-storage
meaning. In other words, people hear "make me fat" when you mention fat and how
one (me specifically) eats so much of it.
Peter Attia and Gary Taubes have set up NUSI [http://nusi.org/] to get some much
needed science behind optimal diet.
Peter's site http://eatingacademy.com/ [http://eatingacademy.com/] has loads of
cool data on his experience with keto, fyi.
2Lumifer9y
It's been up for quite a while but I haven't noticed any progress. Is anything
happening with it, or has it stalled?
2Laoch9y
Hmm I haven't seen anything either, my bad. Tis a shame, I'm not aware of any
other optimal diet science. Is there any?
2Lumifer9y
Well, there are lot of claims for that :-/
4ephion9y
That they advocated reducing fat intake and especially saturated fats, and
encouraged grain and carbohydrate intake.
9stoat9y
This sounds familiar to me. I'm 32 and I definitely remember hearing stuff like
this. I remember in elementary school (so, late 80s early 90s) seeing the Canada
food guide recommend a male adult eat something like up to 10 servings of grains
a day, which could be bread or pasta or cereal. You were supposed to have some
dairy products each day, maybe 2-4. And maybe 1-3 servings from Meat &
Alternates.
I remember that pretty much all fat was viewed (popularly) with caution, at
least until Udo Erasmus came out with his book Good Fat, Bad Fat.
But I do recall a clear message that soda and snacks were unhealthy. It wasn't
as though soda was thought ok just because it was low fat / high carb.
3Lumifer9y
This
[http://www.cnpp.usda.gov/Publications/DietaryGuidelines/2010/DGAC/Report/E-Appendix-E-4-History.pdf]
may help... And that [http://www.nal.usda.gov/fnic/pubs/bibs/gen/DGA.pdf], too.
5hyporational9y
If someone decides to read these, a tiny summary would be nice.
5ChrisHallquist9y
I will likely end up doing so in an upcoming post, but I may not find time to
write it for a few weeks.
0hyporational9y
Does he argue there was a change of opinion in the 80s or before that? If I
recall correctly, he argues that the guidelines have remained roughly the same
for decades, or even changed for worse.
2ChrisHallquist9y
Don't have the quotes readily on-hand but basically, yes, he claims official
low-fat recommendations of the 70s/80s were important.
I would like some feedback on a change I am considering in my use of some phrases.
I propose that journal articles be called "privately circulated manuscripts" and that "published articles" should be reserved for ones that can be downloaded from the internet without a subscription. A milder version would be to adopt the term "public article" and just stop using "published article."
I think that if you do this and few others do, the main result will be to confuse your readers or hearers -- and of those who are confused, when you've explained I fear that a good fraction of those who didn't already agree with you will pigeonhole you as a crank.
Which is a pity, because it would be good for far more published work to be universally accessible than presently is.
A possibly-better approach along similar lines would be to find some term that accurately but unflatteringly describes journals that are only accessible for pay (e.g., "restricted-access") and use that when describing things published on such terms. That way you aren't redefining anything, you aren't saying anything incorrect, you're just drawing attention to a real thing you find regrettable. You might or might not want a corresponding flattering term for the other side (e.g. "publicly accessible" or something). "There are three things worth reading on this topic. There's a book by Smith, a restricted-access journal article by Jones, and a publicly-accessible paper by Black."
You don't think "privately circulated manuscript" is 100% accurate?
I think it's pretty clear to say "a privately circulated article by Jones and a
published paper by Black," at least as long as I provide links. The ambiguity
I'm concerned about is where my comment is very short; the typical situation is
providing the public version to someone who cited the private version.
"Privately circulated" implies something that's only available to a very small group and not widely available. This might be a fair characterization in the case of some very obscure journals, but we might reasonably expect that most of the universities in the world would have subscriptions to journals such as Nature. According to Wolfram Alpha, there are 160 million students in post-secondary education in the world, not including faculty or people at other places that might have an institutional subscription.
Even taking into account the fact that not all of "post-secondary education" includes universities but probably also includes more vocational institutions that likely don't subscribe to scientific journals, we can probably expect the amount of people who have access to reasonably non-niche journals to be in the millions. That doesn't really fit my understanding of "privately circulated".
Would you consider Harry Potter not to have been published because it is not being given away for free? Why should "published articles" be defined differently from "published books"?
Everyone applies "published" differently to books and articles. In fact, most
people use "published article" to mean "peer-reviewed article," but even
ignoring that there are pretty big differences.
Why did you choose to make this comment here, rather than in response to my
original comment?
2A1987dM9y
Like what?
2Lumifer9y
No, I read "privately circulated" as distributed to a limited and mostly closed
circle. If anyone with a few bucks can buy the paper, I wouldn't call it
"privately circulated".
0kalium9y
Exactly.
0lmm9y
A word is just a label for an empirical cluster. It's misleading to talk about
"accurate" as though there were a binary definition.
0komponisto9y
The "manuscript" part certainly isn't, since these things are generally typeset.
0Douglas_Knight9y
I choose libel.
0drethelin9y
as always a phrase being technically 100 percent correct has a lot less to do
with whether it's understood as intended than you might think. a privately
circulated manuscript implies the Protocols of the Elders of Zion to me.
Wouldn't it be more practical to simply adopt a personal rule of jailbreaking (if necessary) any paper that you cite? I know this can be a lot of work since I do just this, but it does get easier as you develop the search skills and is much more useful to other people than an idiosyncratic personal vocabulary.
Any how-to-advice on jailbreaking? Do you just mean using subscriptions at your
disposal?
I wonder if "pirating" papers has any real chance of adverse repercussions.
I think there have been past threads on this. The short story is Google Scholar, Google, your local university library, LW's research help page, /r/Scholar, and the Wikipedia Resource Request page.
I wonder if "pirating" papers has any real chance of adverse repercussions.
I have 678 PDFs on gwern.net alone, almost all pirated, and perhaps another 200 scattered among my various Dropboxes. These have been building up since 2009. Assuming linear growth, that's something like 1,317 paper-years (((678+200)/2)*3) without any warning or legal trouble so far. By Laplace, that suggests a risk of trouble per paper-year of 0.076% (((1+0)/(1317+2)) * 100). So, pretty small.
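(A quick sketch re-running the arithmetic above, with the same linear-growth assumption and rule-of-succession estimate:)

    # Laplace's rule of succession applied to "paper-years without trouble",
    # using the figures from the comment above.
    pdfs = 678 + 200                  # PDFs hosted today
    paper_years = pdfs / 2 * 3        # linear growth over 3 years -> 1317.0

    incidents = 0                     # no legal trouble observed so far
    risk = (incidents + 1) / (paper_years + 2)   # (s + 1) / (n + 2)
    print(f"{risk:.3%} risk of trouble per paper-year")   # -> 0.076%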
There is no dichotomy. Word choice is largely independent of action. You set a
good example, but you cite very few papers compared to your readers. Word choice
to nudge your readers might have a larger effect. Do your readers even notice
your example?
My question is how to get people to link to public versions, not how to get them
to jailbreak. I think that when I offer them a public link it is a good
opportunity to shame them. If I call it an "ungated" link, that makes it sound
abnormal, a nice extra, but not the default. An issue not addressed by my
proposal is how to tell people that google scholar exists. Maybe I should not
provide direct links, but google scholar links. Not search links, but cluster
links ("all 17 versions"), which might also be more stable than direct links.
0gwern9y
I don't know. I know they often praise my articles for being well-cited, but I
don't know if they would say the same thing were every citation a mere link to
Pubmed.
If you just want to shame them, then there's a much more comprehensible choice
of terms. For example, 'useful' or 'usable'. "Here is a usable copy" - implying
their default was useless.
8hyporational9y
Universities have a lot of subscriptions so that their students can access
journal articles for free, so "privately circulated" perhaps isn't as accurate
as you'd like to think. Journals can also be accessed from libraries.
1shminux9y
Feel free to elaborate on your reasons and goals for this (beyond the obvious
signaling).
1Douglas_Knight9y
What is the obvious signaling?
1shminux9y
That you are the type of person who thinks that all research should be freely
available and charging for access to scientific journals is morally wrong. (You
likely also prefer Linux over Windows because MS is evil, but put up with Apple
because it is cool.)
0[anonymous]9y
I had to double-check that you weren't secretly RMS.
0Douglas_Knight9y
RMS? Try Nina Paley [http://copyheart.org/] cf
[http://blog.ninapaley.com/2010/10/20/creative-commons-branding-confusion/]
But today I am not encouraging people to violate copyright, just to prefer links
that work.
Is there a better expression for the "my enemy must be the friend of my other enemy" fallacy, or insistence on categorizing all your (political or ideological) opponents as facets of the same category?
Semi related article
[http://onlinelibrary.wiley.com/doi/10.1111/j.1468-2508.2007.00497.x/abstract]
(pdf link [http://soc.haifa.ac.il/~talmud/The%20Enemy%20of%20My%20Enemy4.pdf]):
What Is the Enemy of My Enemy? Causes and Consequences of Imbalanced
International Relations, 1816–2001
Abstract:
Recently found this paper, entitled "On the Cruelty of Really Teaching Computer Science" by Dijkstra (plaintext transcription here). It outlines ways in which computer programming has failed (and still fails) to actually jump across the transformative-insight gap that led to the creation of the programmable computer. Probably relevant to many of this crowd, and very reminiscent of some common thoughts I've seen here related to AI design.
In the same place I found this paper discussed, there was mention of this site, which was recommended as teaching computer science in a way implementing Dijkstra's suggestions and this textbook, similarly. I can't vouch for them personally yet, but this might be an appropriate addition to the big list of textbooks.
Dijkstra's ideas may be relevant to safety-critical domains (at least to some
extent) but the article is flagrantly ignoring cost-benefit tradeoffs.
Empirically we see that (manual) proof-oriented programming remains a small
niche while test-driven programming has been very successful.
0VAuroch9y
He's certainly not ignoring cost-benefit tradeoffs. He acknowledges this as a
perceived weak point, and claims that, when practiced properly, the tradeoff is
illusory. (I rate this unlikely but possible, around 2% that it's purely true
and another ~20% that the cost increase is greatly exaggerated.)
I'm pretty sure Dijkstra would argue (and I'm inclined to agree) that
proof-oriented programming hasn't gotten a fair field test, since the field is
taught in the test-driven paradigm and his proof-oriented teaching methods were
never widely tried. There's definitely some status quo bias at work; the
critical question is whether Dijkstra's methods would pass the reversal test,
and if so how broadly. My intuition suggests "Yes, narrowly with positive
outlook"; as we move toward having more and more information on cloud-computing
servers and services and social networks, provably-secure computing seems likely
to be appealing in increasingly broad applications, particularly when you look
at large businesses wanting to reap the benefits of new technologies but very
leery of the negative consequences of bugs.
And of course, even in the status quo, these methods still have relevance to
anyone looking to make high-risk things like AI.
9Kaj_Sotala9y
I would be skeptical of this claim, given how diverse the field of software
engineering is, and many programmers are both self-taught and mathematically
talented, so they would be prone to trying out neat things like proof-oriented
programming even if mainstream schools only taught the test-driven paradigm. At
the same time, many schools actually focus on teaching computer science instead
of software engineering, taking a much more theoretical and mathematical
approach than what most programmers will ever actually need. People coming from
these backgrounds would also seem to be inclined to try out neat formal methods.
(If they pursued an academic career, they could even do so without profitability
concerns.)
2passive_fist9y
Dijkstra's general sentiment seems to be that applying existing engineering
practices from civil, mechanical, electrical, etc. engineering disciplines to
computer science is woefully inadequate. With this, I agree. I also agree that
there seems to be some weird set of beliefs in mathematical culture that the
human brain is superior to a computer and that no computer could ever do
mathematics like a human could (I've seen even prominent mathematicians use
Godel's theorem as bogus 'evidence' of this).
But the problem is that there doesn't seem to be a viable alternative to the
status quo of software engineering, not at the moment. The only type of radical
new thinking that I am aware of is the functional programming approach to things
taken by e.g. Haskell. But there are a lot of issues there as well. So far,
productivity has been far higher using the more traditional way of doing things.
2Gvaerg9y
I did some Googling after reading the article and found this book by Dijkstra
and Scholten [http://www.amazon.com/dp/1461279240] actually showing how a
first-order language could be adapted to yield easy and teachable correctness
proofs. That is actually amazing! I have a degree in CS and unfortunately I've
never seen a formal specification system that could actually be implemented and
not be just some almost-tautological mathematical logic, like so many systems
that exist in academia. Thanks very much for the link.
4Pfft9y
If you are interested in this kind of thing, you should check out Dafny
[http://research.microsoft.com/en-us/projects/dafny/]. It's a programming
language with Hoare-logic style pre- and postconditions (and the underlying
implementation computes weakest preconditions, Dijkstra-style). But what sets it
apart is that it is backed by an automatic theorem prover (Z3) which is
sufficiently powerful to handle most things that seem trivial to a human. To me
Dafny feels like the promise of programming verification research in the 1970s
finally came through: you can carry out program verification like you would with
pen and paper, without being overwhelmed by finicky algebraic manipulations.
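(Not from the comment: a rough analogy in Python, with the pre- and postcondition written as runtime assertions. The point of Dafny is that the equivalent requires/ensures clauses are discharged statically by Z3, so no test run is needed.)

    # Hoare-style contract checked at runtime; Dafny verifies the
    # corresponding `requires`/`ensures` clauses at compile time.
    def integer_sqrt(n: int) -> int:
        assert n >= 0                        # precondition
        r = 0
        while (r + 1) * (r + 1) <= n:        # invariant: r * r <= n
            r += 1
        assert r * r <= n < (r + 1) ** 2     # postcondition
        return r

    print(integer_sqrt(10))   # -> 3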
2[anonymous]9y
Mathematicians (and Dijkstra qualifies as one) have been bemoaning the lack of
rigour in undergraduate education for some time now. (Aye, even as early as the
French vs. English trigonometry textbook debates of the 1800s.) The United
States has a peculiar cultural mismatch between the relative quality of
secondary and undergraduate education, which in my mind causes most of the
drama. In particular, EWD1036 was written during Dijkstra's career at UT Austin.
I'd like to know if this phenomenon is global, though.
If the human race is down to 1000 people, what are the odds that it will
continue and do well? I realize this is a nitpick-- the argument would be the
same if the human race were reduced to a million or ten million.
6knb9y
It's an interesting question. The Toba Catastrophe Theory suggests that human
population reached as low as 10,000 individuals during a climate change period
linked to supervolcano eruption. Another theory suggests that human population
reached as low as 2000 individuals. Overall I think 1000 individuals is enough
genetic diversity that humans could recover reasonably well.
The real problem seems to me to be whether humans could ever catch up to where
we are after being knocked down so low. Some people have suggested that if
civilization collapses humanity won't be able to start a new industrial
revolution due to depleted deposits of oil and surface minerals.
6NancyLebovitz9y
Garbage dumps would have metal that's more concentrated than you'd find it in
ore. I'm not sure how much energy would be needed to refine it.
If I were writing science fiction, I think I'd have modest tech-level efforts at
mining garbage dumps in coastal waters.
The History of the Next Ten Billion Years
[http://thearchdruidreport.blogspot.co.uk/2013/09/the-next-ten-billion-years.html]--
a Stapledonian handling of the human future. Entertaining, though I think it
underestimates human inventiveness.
3taelor9y
Aluminum, in particular, is known for being very difficult to extract from ore,
but once extracted, very easy to recycle into new products.
3Nornagest9y
Oil (and coal, which is less topically sexy but historically more significant to
industrialization) is the big problem, though rare earths and other materials
that see use more in trace than in concentration could also be an issue. If
you're a medieval-level smith, you probably wouldn't care too much whether
you're getting your Fe from bog iron nodules or from the melted skeletons of
god-towers in the ruins of Ellae-that-Was, although certain types of bottleneck
event could make the latter problematic for a time.
Still, I'd be willing to bet at even odds that that wouldn't be a showstopper if
it came to it.
0Adele_L9y
On the other hand, these future humans would probably be able to learn things
like science much more quickly because of all the information we have lying
around everywhere.
2Nornagest9y
Our information storage media has a surprisingly short shelf life. Optical disks
of most types degrade within decades; magnetic media is more variable but even
more fragile on average (see here
[http://en.wikipedia.org/wiki/Media_preservation] and the linked pages). There
are such things as archival disks, and a few really hardcore projects like
HD-Rosetta, but they're rare. And then there's encryption and protocol confusion
to take into account.
A couple centuries after a civilization-ending event, I'd estimate that most of
the accessible information left would be on paper, and not a lot of that.
0lmm9y
I don't know. Those objects make certain kinds of superstitions seem much more
plausible.
1knb9y
The audio cuts out partway through the talk.
-1Richard_Kennaway9y
I haven't watched it all the way through, but I can jump into it at any point
and it plays fine.
5knb9y
Audio cuts out at around 38 minutes, after that there is no sound from Eliezer's
mic, so it's apparently relying on the camera mic which makes the recording
noisy and hard to hear.
-4NoSuchPlace9y
The audio is only gone for a minute or so, so while this is annoying it's not
major.
4knb9y
No, it cuts out completely for a minute, and then it apparently switches to the
camera mic, which makes Eliezer very hard to hear over noise.
LW meta (reposted, because a current open thread did not exist then): I have received a message from “admin”:
We were unable to determine if there is a Less Wrong wiki account registered to your account. If you do not have an account and would like one, please go to your preferences page.
I have seen, indeed, options to create a wiki account. But I already have one; how do I associate the existing accounts?
A related question: I clicked the (modified) URL that "admin" sent me, and the
page contained a form where I could fill in my LW password in order to create a
wiki account. I submitted it but I cannot login on the wiki with my LW
credentials. What's going on?
Today I skim-read Special Branch (1972), the first book-length examination of Good's "ultra-intelligent machine."
It is presented in the form of a 94-page dialogue, and the author (Stefan Themerson) is clearly not a computer scientist nor an analytic philosopher. So the book is largely a waste of attempted "analysis." But because I'm interested in how ideas develop over time and across minds, I'll share some pieces of the dialogue here.
A detective superintendent from "special branch," named Watson, meets up with the author (the... (read more)
Here are two (correct) arguments that are highly analogous.
Brownian motion, the fact that a particle in water or air does not come to rest, but dances at a minimal rate is an important piece of evidence for the atomic hypothesis. Indeed, Leucippus and Democritus are said to have derived the atomic hypothesis from Brownian motion; certainly Lucretius provided it as evidence.
Similarly, Darwin worried that "blending" inheritance would destroy variation in quantitative traits. He failed to reach the conclusion that heredity should be discrete, though.
I'm planning to run a rationality-friendly table-top roleplaying game over IRC and am soliciting players.
The system is Unknown Armies, a game of postmodern magic set in a creepier, weirder version of our own world. Expect to investigate crimes, decipher the methods behind occult rituals, interpret symbols, and slowly go mad. This particular game will follow the misadventures of a group of fast food employees working for an occult cabal (well, more like a mailing list) that wants to make the world a better place.
Sessions will be 3-4 hours once a week over I... (read more)
EDIT: I am specifically referring to Debit Card Overdraft p̶r̶o̶t̶e̶c̶t̶i̶o̶n̶ service
EDIT 2: I have been made aware that I am using the wrong term, overdraft service is the term most commonly used by major banks to refer to the "service" they offer on debit card overdrafts. If you see me refer to somethin... (read more)
I bank with Chase, and unless the written information I've received from them is
a straight-up lie (which would put them at risk for a lawsuit...) this
information is factually inaccurate. What you describe as "overdraft protection"
is actually the policies you'll be subjected to without overdraft protection.
Overdraft protection does come with fees, but they're much less, no more than
$10 a day.
(The moral of the story: don't be overdrawn. It will cost you money in fees with
or without overdraft protection.)
4niceguyanon9y
The confusion stems from the fact that there are two different services for
Chase, one for check writing and one for debit cards. I am specifically talking
about debit card usage and will edit my post to make it more clear.
Chase will charge you $10 per day for check writing overdraft protection on
accounts that are linked; this is true. However, for debit card use, you would
be charged $0 if you opt out and indeed pay $34 per transaction if you opt in.
The problem is that many banks combine checking and debit card usage into one
plan, and others like Chase split it up. My main point is that check writing is
becoming very rare, and most people get dinged with fees using their debit
cards. So if they are combined and you really don't write checks, then you
definitely should opt out.
There is a $34 fee for debit card overdraft protection and a $0 fee for opting
out (here
[https://www.chase.com/index.jsp?pg_name=ccpmapp/shared/marketing/page/od-nsf-triple-your-protection-sec]
and here
[https://www.chase.com/ccpmweb/shared/document/Web_DCOC_and_A9_V5_ada.pdf]).
Does this resolve your disagreement?
If you opt-out of debit card overdraft protection it will not cost you any
money! If you opt-in for debit card overdraft protection it will cost you money.
I know it sounds ridiculous, because it is.
0ChrisHallquist9y
Based on the links, Chase doesn't even call their service for debit cards
"overdraft protection," so this doesn't support the original point about words
misleading people. Also, it seems that if you have debit card coverage and
overdraft protection, you'll at most be charged $10/day for overdrawing with
your debit card. Still better to use a credit card when you don't have money in
your checking account, obviously.
(Also, as Louie Helm recently pointed out, as long as you pay your balance in
full every month, you're better off using your credit card for everything
because the rewards program will reduce the cost of everything you buy by 1% or
more.)
-1niceguyanon9y
In the spirit of being helpful and trying to be as factually accurate as
possible, I have edited my original post, as you are absolutely correct
about the terminology. I would only argue that I consider my original point to
be merely a segue to introduce my main argument that debit card overdraft
services are typically poor decisions.
I do not believe this is accurate.
However, assuming it is accurate: if you weigh the cost/benefit (again talking
about debit card use) it IMO is still a terrible investment. My bank happens to
be Wells Fargo and they charge $12, for debit OD service, better, but still
pretty bad. But ultimately you must decide what is an acceptable fee. The vast
majority of people getting dinged for debit card overdrafts, are not buying life
saving medication, its more likely to be a cup of coffee or a hot dog. So if you
asked them what they would have done if they knew they had insufficient funds,
they would likely reject the $10 or $34 fee. This isn't even considering that
most banks are not obligated to tell you that you are overdrawn, so you could
get dinged $10 a day until you finally realize it, as opposed to being notified
right away from being declined. BTW since you're a Chase customer Chase happens
to waive the fee if you can fund your account by day end, but they aren't
obligated to inform you that you are negative.
You're better off using your credit card and saying no to debit card overdraft
service – for the most part. Unless you frequently find yourself in the
position of needing your purchases to go through no matter what.
0fubarobfusco9y
Also, use financial institutions whose incentives are better-aligned with the
interests of their depositors; notably, credit unions.
3palladias9y
Oh wheee, this is what I worked on in DC. There are a few different things that
can happen when you try to make a purchase on a debit card with insufficient
funds:
* the merchant sees you don't have the money, the card is declined, and you pay
the bank nothing
* the bank transfers money from a linked account (usually a savings account or
line of credit) and charges a fee for this service (median $10, at least back
in 2012)
* the bank covers the cost of the purchase, which you now need to pay back,
along with a fee of (at median) $35
Both the second and third option are sometimes called Overdraft Protection.
There is no industry standard term, so it can be very hard to contrast between
banks and disambiguate overdrafts covered by a transfer and regular overdrafts.
(You can see the 14 different terms we found across 24 banks and credit unions
here [http://www.pewtrusts.org/our_work_report_detail.aspx?id=85899396977]).
The law changed recently (in the last 5 years) so that banks have to ask you to
opt-in to overdraft. If you take no action, when you try to buy something with
your debit card you don't have the money to cover, you just can't do it, and you
incur no fee. So, banks have done a big push to get people to opt-in, including
using the "Overdraft Protection" language, but, for most people, it's a bad
choice.
And, fun fact, some banks reorder your purchases, when they're processed, in
order to maximize the number of overdrafts you incur. (i.e. if you had $20 in
your account and bought, in order, items costing $5, $5, $5, $20, some banks
reorder your purchases high-to-low so they can charge three overdraft fees
instead of one). You can see a graphic with data from a real world case here
[http://www.pewtrusts.org/our_work_report_detail.aspx?id=85899364999].
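(A minimal sketch of the reordering effect, assuming one $35 fee per purchase that overdraws the account:)

    # High-to-low reordering of the same purchases triples the fees.
    def overdraft_fees(balance, purchases, fee=35):
        count = 0
        for amount in purchases:
            balance -= amount
            if balance < 0:        # this purchase overdraws the account
                count += 1
        return count * fee

    purchases = [5, 5, 5, 20]      # chronological order
    print(overdraft_fees(20, purchases))                        # -> 35
    print(overdraft_fees(20, sorted(purchases, reverse=True)))  # -> 105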
0niceguyanon9y
Fun fact: if you overdraw and are protected by a bank transfer from a linked
account, but that linked account is also insufficient, you get charged both
fees – one fee for the transfer, and another for not having enough after the
transfer! How can they justify this? Easy: the fee is just for the transfer; it
doesn't guarantee that the transfer will be adequate.
2gwern9y
Last month I signed up for a bank account at my local credit union, and they do
offer overdraft protection of various sorts. One of the things that impressed me
was that the woman who was setting up my account explained to me why I did not
want overdraft protection, using a very similar example.
2Ander9y
I cannot speak for all banks' policies, but that isn't how the 'overdraft
protection' on my account works. How mine (actually a credit union, maybe
that's a difference) works is:
Without it, if I was to write a check with insufficient funds, I would get
charged some large fee. But with the Overdraft Protection, it will transfer
money from my savings account to checking to cover it, for free, helping me
avoid the fee. Essentially it lets me use the savings accounts as a safety net
to avoid the charges.
This 'protection' has in fact saved me in a couple of instances.
0xnn9y
UK banks lost a test case a few years ago that led to a lot of people getting
back however many years of overdraft charges, plus interest. The same thing
happened a bit later with "payment protection insurance", intended to cover loan
repayments if you lost your job, but with so many exclusions as to be almost
worthless.
The end result was something like a forced savings policy. Cue people who
avoided the initial trap wondering where their free money is.
You have to wonder sometimes.
I'm curious about this, and specifically what's meant by this "decoupling".
Anyone have a link to research about that?
It sounds somewhat like "financial AIs are paperclipping the economy" or
possibly "financial AIs are wireheading themselves", or both. If either is true,
that means my previous worries about unfriendly profit-optimizers
[http://lesswrong.com/lw/j0e/open_thread_november_8_14_2013/a1ea] were crediting
the financial AIs with too much concern for their owners' interests.
Louie on G+ links an interesting pair of philosophy papers: http://plus.google.com/104557909419304580033/posts/jNdsspkqGH8 - An attempt to examine the argument from disagreement ('no two people seem able to agree on anything in ethics') by using computer simulations of belief convergence. Might be interesting reading.
There are a couple of commercially available home EEG sets now; has anyone tried them? Are they useful tools for self-monitoring mental states?
I was diagnosed with avoidant personality disorder and obsessive-compulsive personality disorder, as well as major depression, about 4 months ago, and even though my depression has been drastically reduced by medication, I still often have suicidal thoughts. Does anyone have advice on dealing with this? It's just hard to cope with feeling like I'm someone that it isn't good or healthy to be around.
Lots of people enjoy hanging out with me despite my occasional suicidal
ideation! Most people can't read your mind!
4witzvo9y
naive question (if you don't mind): What sort of things trigger your
self-deprecating feelings, or are they spontaneous? E.g. can you avoid them or
change circumstances a bit to mitigate them?
2pdsufferer9y
The prospect of social interaction, whether it actually happens or not, can
trigger it. Any time I start a project (including assignments at university), go
back to edit something, and it doesn't meet my standards, I get quite severe
self-deprecating feelings.
For the second one I managed to mitigate it by changing my working process to
something more iterative and focused on meeting the minimum requirements before
optimizing. I still have not found a remotely serviceable solution for the
social interaction problems, and the feedback loops there are more destructive
too. At least with the perfectionism problem I can move to another project to
help restore some of my self-esteem.
1witzvo9y
It's easy to be sympathetic with these two scenarios -- I get frustrated with
myself, often enough. Would it be helpful to discuss an example of what your
thoughts are before a social interaction or in one of the feedback loops? I'm
not really sure how I'd be able to help, though... Maybe your thoughts are
thoughts like anyone would have: "shoot! I shouldn't have said it that way, now
they'll think..." but with more extreme emotions. If so, my (naive) suggestion
would be something like meditation toward the goal of being able to observe that
you are having a certain thought/reaction but not identify with it.
Evolution in humans does not work to produce an integrated intellectual system; it produces a set of hacks better suited to the ancestral environment than those of any other human. Thus we should expect the average human brain to have quite insular but malleable capabilities. Indeed, I have the impression that old arts like music try to repurpose those specific pathways in novel ways. Are there parts of our brains we can easily repurpose to aid in our quest for rationality?
I have a notion we aren't just adapted to the ancestral environment -- we've also
got adaptations for low-tech agriculture (diligence, respect for authority) and
cities (tolerance for noise, crowding, and strangers). Neither list is intended
to be complete.
I've wondered whether people in separatist/supremacist movements have fewer city
genes than average.
0Metus9y
Great point; we have adaptations for a multitude of environments.
Still, the question is: can we repurpose some of those little adaptations for
rationality, or fine-tune them by some technique?
0lmm9y
You mean like imagining you're going to present an issue to an authority figure
when thinking about it? Or something more wacky like converting reasoning
problems into visual problems?
0Metus9y
Either. The first is lower-hanging fruit; the second is much less obvious.
My point, I think, is that most of the material on Less Wrong is on the
theoretical side of things and quite impractical.
I am trying to find a post here and am unable to find it because I do not seem to have the right keywords.
It was about how the rational debate tradition, reason, universities, etc. arose in some sort of limited context, and how the vast majority of people are not trained in that tradition and tend to have emotional and irrational ways of arguing/discussing and that it seems to be the human norm. It was not specifically in a post about females, although some of the comments probably addressed gender distributions.
I read this post definitely at least six months and probably over a year ago. Can anyone help me?
Probably not what you're after, but there's Making Rationality General-Interest
[http://lesswrong.com/lw/i1k/making_rationality_generalinterest/] by Swimmer963.
Further out but with a little overlap with what you describe: Of Gender and
Rationality [http://lesswrong.com/lw/ap/] by Eliezer. Or No Safe Defense, Not
Even Science [http://lesswrong.com/lw/qf/no_safe_defense_not_even_science/] by
Eliezer.
I think it's less than 25% probable that any of these is what you're after, but
(1) looking at them might sharpen your recollection of what you are after, (2)
one or more might be a usable substitute for whatever your purpose is, and (3)
others reading your comment and wanting to help now needn't check those :-).
0hesperidia9y
"No Safe Defense, Not Even Science" is close enough for the purpose I was using
it for. Thank you!
Someone led me to Emotional Baggage Check. The idea appears to be that people can leave an explanation of what's troubling them, or respond to other people's issues with music or words of encouragement. It sounds like a good idea (the current popular strategy of whining on a public forum seems to be more trouble than it's worth). It doesn't look particularly troll-proof, though.
If nothing else, I'd like to look at them in a year or so and see how it's turned out.
Hey dude! I am the creator of that site. Hm yeah we are working on the whole
troll-proof thing. Actually have a whole alert system set up but we still have
more to work on. And yeah I'm pretty interested to see where we will be in a
year too. Stay tuned.
Can someone change the front page so it doesn't say "Lesswrong:Homepage"? This sounds like it is a website from 1995. Almost any other plausible wording would be better.
It's funny time now in Slovakia; as if someone declared a call: "Irrational people of all beliefs, unite!"
It started two years ago with the so-called "Gorilla scandal". (TL;DR: Not a real gorilla, just a nickname of some criminal who was investigated by the secret service. By wiretapping his house the investigation revealed that almost all of our political parties, both left and right, participated in economical crime, cooperating with the same small group of people. The transcripts of the investigation were leaked to internet.) It was followed by a few demonstrations, after which pretty much nothing happened. Realizing that most media in our country actually belong to people involved in the scandal, so they don't have an incentive to investigate and report on the scandal, an internet radio called "the free broadcast" was created. From that point, it gradually went downhill.
By deciding to focus on 'news that don't have place in the official media', the radio was gradually selecting for hoaxes, conspiracy theories, etc. Which probably led to saner people leaving the radio, concentrating the irrationality of the remaining ones. One year later, it was mos... (read more)
Sounds like an interesting real-world example of http://lesswrong.com/lw/lr/evaporative_cooling_of_group_beliefs/
On the plus side, now you have all the material you need to write a satirical novel.
Yesterday I received the following message from user "admin" in my Less Wrong inbox:
But the link goes to a 404.
Petition to the FDA not to ban home genomic kits like 23andMe. I recommend that people here who are interested in personalized medicine or transhumanism, or who have any libertarian bent, consider reading and signing it.
I added my own comment
I added the last line in part to appeal to current left-wing attitudes about personal bodily integrity. I stole the less well-known disease from Yvain's excellent letter here, where I got to find out about yet one more fun disease potentially in my gene pool. I strongly recommend people read Yvain's letter.
One piece of common wisdom on LW is that if you expect that receiving a piece of information will make you update your beliefs in a certain direction, you might as well update already instead of waiting. I happened to think of one exception: if you expect that something will cause a change in your beliefs when it shouldn't, because it uses strong rhetorical techniques (e.g. highlighting highly unrepresentative examples) whose effect you can't fully eliminate even when you know that they're there.
(I have a feeling that this might have been discussed before, but I don't remember where in that case.)
It's more like, if you expect (in the statistical sense) that you will rationally update your beliefs in some direction upon receiving some piece of evidence, then your current probability assignments are incoherent, and you should update on pain of irrationality. It's not just that you might as well update now instead of waiting. But this only applies if your expected future update is one that you rationally endorse. If you know that your future update will be irrational, that it is not going to be the appropriate response to the evidence presented, then your failure to update right now is not necessarily irrational. The proof of incoherence does not go through in this case.
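A minimal numerical illustration of that coherence constraint, with arbitrary numbers: if your current assignments are coherent, the probability-weighted average of your possible posteriors (over what the evidence might turn out to say) is exactly your prior.

    # Conservation of expected evidence, with made-up numbers.
    p_h = 0.3               # prior P(H)
    p_e_given_h = 0.8       # P(E | H)
    p_e_given_not_h = 0.4   # P(E | not-H)

    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)        # P(E) = 0.52
    posterior_if_e = p_e_given_h * p_h / p_e                     # P(H | E) ~ 0.462
    posterior_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)     # P(H | not-E) = 0.125

    # Expected posterior equals the prior (up to floating-point rounding):
    print(p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e)  # 0.3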
The phrenology guy isn't showing up on the homepage for me. Did LW take him off?
I've been teaching myself the basics of probability theory (I'm sixteen) but I'm having trouble on the first step. My basic definitions of probabilities are all frequentist, and I don't know a good Bayesian source appropriate for a secondary school student. Is Jaynes' PT:LOS able to be read by moi, given that I know basic set theory? If not, can anyone recommend a different textbook?
Jaynes's book probably requires a university undergraduate-level familiarity with probability theory to fully appreciate.
I'd say that for the time being you don't need to worry about bayesianism vs. frequentism. Just learn the basics of probability theory and learn how to solve problems.
With math, it's useful to be able to distinguish books you can't understand because you're missing prerequisite knowledge from books you can't understand because you just aren't reading them carefully enough. The prevailing wisdom seems to be that you can't really expect to be able to follow Jaynes through if you pick it up as your first serious textbook on probability.
An interesting factoid. Drawing implications is left as an exercise for the reader.
"...for two decades, all the Minuteman nuclear missiles in the US used the same eight-digit numeric passcode: 00000000. ... And while Secretary of Defense Robert McNamara directly oversaw the installation of PALs on the US-based ICBM arsenal, US Strategic Command generals almost immediately had the PAL codes all reset to 00000000 to ensure that the missiles were ready for use regardless of whether the president was available to give authorization." (source)
Looking for people older than me (I'm 26) to tell me their memories of what kind of nutrition messages they remember getting from Nutrition Authority Type People (USDA or whatever).
The reason I ask is that I read a bunch of Gary Taubes over the weekend, and at first glance his claims about what mainstream nutritionists have been saying strike me as... not what I've experienced, to put it mildly. In particular, the nutritiony stuff I learned as a kid was always pretty clear on sugary soda and snacks being bad for you. Charitable hypothesis: maybe mainstream nutrition messaging was much crazier in the 80s? I don't actually think this is likely, but I thought I'd ask.
I would like some feedback on a change I am considering in my use of some phrases.
I propose that journal articles be called "privately circulated manuscripts" and that "published articles" be reserved for ones that can be downloaded from the internet without a subscription. A milder version would be to adopt the term "public article" and just stop using "published article."
I think that if you do this and few others do, the main result will be to confuse your readers or hearers -- and of those who are confused, when you've explained I fear that a good fraction of those who didn't already agree with you will pigeonhole you as a crank.
Which is a pity, because it would be good for far more published work to be universally accessible than presently is.
A possibly-better approach along similar lines would be to find some term that accurately but unflatteringly describes journals that are only accessible for pay (e.g., "restricted-access") and use that when describing things published on such terms. That way you aren't redefining anything, you aren't saying anything incorrect, you're just drawing attention to a real thing you find regrettable. You might or might not want a corresponding flattering term for the other side (e.g. "publicly accessible" or something). "There are three things worth reading on this topic. There's a book by Smith, a restricted-access journal article by Jones, and a publicly-accessible paper by Black."
"Privately circulated" implies something that's only available to a very small group and not widely available. This might be a fair characterization in the case of some very obscure journals, but we might reasonably expect that most of the universities in the world would have subscriptions to journals such as Nature. According to Wolfram Alpha, there are 160 million students in post-secondary education in the world, not including faculty or people at other places that might have an institutional subscription.
Even taking into account the fact that not all of "post-secondary education" includes universities but probably also includes more vocational institutions that likely don't subscribe to scientific journals, we can probably expect the amount of people who have access to reasonably non-niche journals to be in the millions. That doesn't really fit my understanding of "privately circulated".
Would you consider Harry Potter not to have been published because it is not being given away for free? Why should "published articles" be defined differently from "published books"?
Wouldn't it be more practical to simply adopt a personal rule of jailbreaking (if necessary) any paper that you cite? I know this can be a lot of work since I do just this, but it does get easier as you develop the search skills and is much more useful to other people than an idiosyncratic personal vocabulary.
I think there have been past threads on this. The short story is Google Scholar, Google, your local university library, LW's research help page, /r/Scholar, and the Wikipedia Resource Request page.
I have 678 PDFs on gwern.net alone, almost all pirated, and perhaps another 200 scattered among my various Dropboxes. These have been building up since 2009. Assuming linear growth, that's something like 1,317 paper-years (((678+200)/2)*3) without any warning or legal trouble so far. By Laplace, that suggests a risk of trouble per paper-year of 0.076% (((1+0)/(1317+2)) * 100). So, pretty small.
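For concreteness, the same estimate as a short Python sketch (the helper function and its name are mine):

    # Laplace's rule of succession: after s successes in n trials, estimate
    # the chance of success on the next trial as (s + 1) / (n + 2).
    def laplace(successes, trials):
        return (successes + 1) / (trials + 2)

    paper_years = ((678 + 200) / 2) * 3   # ~1,317 paper-years, assuming linear growth
    risk = laplace(0, paper_years)        # zero instances of legal trouble observed
    print(f"{risk:.3%}")                  # 0.076% per paper-year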
Is there a better expression for the "my enemy must be the friend of my other enemy" fallacy, or insistence on categorizing all your (political or ideological) opponents as facets of the same category?
Out-group homogeneity seems closely related, at least.
Recently found this paper, entitled "On the Cruelty of Really Teaching Computer Science" by Dijkstra (plaintext transcription here). It outlines ways in which computer programming had failed (and still fails) to actually jump across the transformative-insight gap that led to the creation of the programmable computer. Probably relevant to many of this crowd, and very reminiscent of some common thoughts I've seen here related to AI design.
In the same place I found this paper discussed, there was mention of this site, which was recommended as teaching computer science in a way implementing Dijkstra's suggestions and this textbook, similarly. I can't vouch for them personally yet, but this might be an appropriate addition to the big list of textbooks.
Is it just me, or is solipsism wrong?
The talk Eliezer Yudkowsky held at Oxford (and the resulting discussion) are now online.
LW meta (reposted, because a current open thread did not exist then): I have received a message from “admin”:
I have seen, indeed, options to create a wiki account. But I already have one; how do I associate the existing accounts?
It looks like the Sheep Marketplace is done, after a major heist of its bitcoins took place. At least one part of this prediction worked out.
Today I skim-read Special Branch (1972), the first book-length examination of Good's "ultra-intelligent machine."
It is presented in the form of a 94-page dialogue, and the author (Stefan Themerson) is clearly neither a computer scientist nor an analytic philosopher. So the book is largely a waste of attempted "analysis." But because I'm interested in how ideas develop over time and across minds, I'll share some pieces of the dialogue here.
A detective superintendent from "special branch," named Watson, meets up with the author (the... (read more)
Make sure you use the tag "open_thread" so that it will show up in the latest open thread on the sidebar.
Here are two (correct) arguments that are highly analogous.
Brownian motion, the fact that a particle in water or air does not come to rest but keeps dancing about at a minimal rate, is an important piece of evidence for the atomic hypothesis. Indeed, Leucippus and Democritus are said to have derived the atomic hypothesis from Brownian motion; certainly Lucretius offered it as evidence.
Similarly, Darwin worried that "blending" inheritance would destroy variation in quantitative traits. He failed to reach the conclusion that heredity should be discrete, though.
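To see the force of Darwin's worry, here is a toy simulation (purely illustrative; parameters are arbitrary) in which each child's trait is the simple average of two randomly chosen parents' traits:

    # Under blending inheritance a child's trait is the average of its parents'
    # traits, so population variance halves each generation -- Darwin's worry.
    # Discrete (Mendelian) inheritance avoids this decay.
    import random

    pop = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # initial trait values
    for gen in range(5):
        mean = sum(pop) / len(pop)
        var = sum((x - mean) ** 2 for x in pop) / len(pop)
        print(f"generation {gen}: variance ~ {var:.3f}")  # ~1.0, 0.5, 0.25, ...
        pop = [(random.choice(pop) + random.choice(pop)) / 2 for _ in range(len(pop))]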
I'm planning to run a rationality-friendly table-top roleplaying game over IRC and am soliciting players.
The system is Unknown Armies, a game of postmodern magic set in a creepier, weirder version of our own world. Expect to investigate crimes, decipher the methods behind occult rituals, interpret symbols, and slowly go mad. This particular game will follow the misadventures of a group of fast food employees working for an occult cabal (well, more like a mailing list) that wants to make the world a better place.
Sessions will be 3-4 hours once a week over I... (read more)
Is there a name for the halo effect of words? There should be, because one example of this is "Overdraft Protection".
EDIT: I am specifically referring to the Debit Card Overdraft service.
EDIT 2: I have been made aware that I am using the wrong term: "overdraft service" is the term most commonly used by major banks to refer to the "service" they offer on debit card overdrafts. If you see me refer to somethin... (read more)
Computer programs which maximize entropy show intelligent behavior.
Kevin Kelly linked to it, which means it might make sense, but I'm not sure.
It sounds like Prigogine (energy moving through a system causes local organization), but I'm not sure about Prigogine, either.
Directed Technological Change and Resources
http://whynationsfail.com/blog/2013/11/26/directed-technological-change-and-resources.html
"Temporary interventions are sufficient to redirect technological change..."