Related to: List of public drafts on LessWrong
Article based on this draft: Conspiracy Theories as Agency Fictions
I was recently thinking about a failure mode that classical rationality often recognizes and even challenges reasonably competently, yet nearly all the heuristics it uses to detect it seem remarkably easy to misuse. Not only that, they seem easily hackable to win a debate. How much has the topic been discussed on LW? Wondering about this, I sketched out my thoughts in the following paragraphs.
On conspiracy theories
What does the phrase even mean? Conspiracy theories are generally used to explain events or trends as the results of plots orchestrated by covert groups. Sometimes people use the term for theories that important events are the products of secret plots largely unknown to the general public. Conspiracy in a somewhat more legal sense describes an agreement between persons to deceive, mislead, or defraud others of their legal rights, or to gain an unfair advantage in some endeavour. And finally it is a convenient tool for painting something, clearly and in vivid colours, as low status: a boo light applied to any explanation that has people ac...
At the Reason Rally a couple of months ago, we noticed that a lot of atheists seemed to be there for mutual support - because their own communities rejected atheists, because they felt outnumbered and threatened by their peers, and the rally was a way for them to feel part of an in-group.
There seem to be differing concentrations of people who have had this sort of experience on LessWrong. Some of us felt ostracized by our local communities while growing up, others have felt pretty much free to express atheist or utilitarian views for their whole lives. Does anyone else think this would be worth doing a poll on / have experiences they want to share?
I get a ridiculous amount of benefit by abusing store return deadlines. I've tested and returned an iPhone, $400 Cole Haan bag, multiple coats, jeans, software, video games, and much more. It's surprising how long many return periods are, and it's a fantastic way to try new stuff and make sure you like it.
Because there is an unspoken understanding, that michaelcurzi is clearly aware of, that a no-questions-asked returns policy is intended for cases where the buyer found the item unsuitable in some way, rather than to provide free temporary use of their stuff.
Re-reading my own post on the 10,000 year explosion, a thought struck me. There's evidence that human populations in various regions have adapted to their local environment and diet, with e.g. lactose tolerance being more common in people of central and northern European descent. At the same time, there are studies that try to look at the diet, living habits etc. of various exceptionally long-lived populations, and occasionally people suggest that we should try to mimic the diet of such populations in order to be healthier (e.g. the Okinawa diet).
That made me wonder. How generalizable can we consider any findings from such studies? What odds should one assign to the hypothesis that any health benefits such long-lived populations get from their diet are mostly due to local adaptation for that diet, and would not benefit people with different ancestry?
I feel as though this cobbled-together essay from '03 has a lot of untapped potential.
An economics question:
Which economic school of thought most resembles "the standard picture" of cogsci rationality? In other words, which economists understand probability theory, heuristics & biases, reductionism, evolutionary psychology, etc. and properly incorporate it into their work? If these economists aren't of the neo-classical school, how closely does neo-classical economics resemble the standard picture, if at all?
Unnecessary Background Information:
Feel free to not read this. It's just an explanation of why I'm asking these questions.
I'm somewhat at a loss when it comes to economics. When I was younger (maybe 15 or so?) I began reading Austrian economics. The works of Murray Rothbard, Ludwig von Mises, etc., served as my first rigorous introduction to economics. I self-identified as an Austrian for several years, up until a few months ago.
For the past year, I have learned a lot about cogsci rationality through LW sequences and related works. I think I have a decent grasp of what cogsci rationality is, why it is correct, and how it conflicts with the method of the Austrian school. (For those who aren't aware, Austrians use an a priori method and claim absolu...
Econ grad student here (and someone else converted away from Austrian econ in part from Caplan's article + debate with Block). Most of economics just chugs right along with the standard rationality (instrumental rationality, not epistemic) assumptions. Not because economists actually believe humans are rational - well some do, but I digress - but largely because we can actually get answers to real world problems out of the rationality assumptions, and sometimes (though not always) these answers correspond to reality. In short, rationality is a model and economists treat it as such - it's false, but it's an often useful approximation of reality. The same goes for always assuming we're in equilibrium. The trick is finding when and where the approximation isn't good enough and what your criterion for "good enough" is.
Now, this doesn't mean mainstream economists aren't interested in cogsci rationality. An entire subfield of economics - Behavioral Economics - rose up in tandem with the rise of the cogsci approach to studying human decision making. In fact, Kahneman won the Nobel Prize in economics. AFAICT there's a large market for economic research that applies behavioral econ...
Far from being batshit crazy, Mises was an eminently reasonable thinker. It's just that he didn't do a very good job communicating his epistemological insights (which was understandable, given the insanely difficult nature of explaining what he was trying to get at), but did fine with enough of the economic theory, and thus ended up with a couple generations of followers who extended his economics rather well in plenty of ways, but systematically butchered their interpretation of his epistemological insights.
People compartmentalize, they operate under obstructive identity issues, their beliefs in one area don't propagate to all others, much of what they say or write is signaling that's incompatible with epistemic rationality, etc. Many of these are tangled together. Yeah, it's more than possible for people to say batshit insane things and then turn around and make a bunch of useful insights. The epistemological commentary could almost be seen as signaling team affiliation before actually getting to the useful stuff.
Just consider the kind of people who are bound to become Austrian economists. Anti-authority etc. They have no qualms about breaking from the mainstream in any way whatso...
Your proposed synthesis of Mises and Yudkowsky(?) is moderately interesting, although your claims for the power and importance of such a synthesis suggest naivete. You say that "what's going so wrong in society" can be understood given two ingredients, one of which can be obtained by distilling the essence of the Austrian school, the other of which can be found here on LW but you don't say what it is. As usual, the idea that the meaning of life or the solution to the world-problem or even just the explanation of the contemporary world can be found in a simple juxtaposition of ideas will sound naive and unbelievable to anyone with some breadth of life experience (or just a little historical awareness). I give friendly AI an exemption from such a judgement because by definition it's about superhuman AI and the decoding of the human utility function, apocalyptic developments that would be, not just a line drawn in history, but an evolutionary transition; and an evolutionary transition is a change big enough to genuinely transform or replace the "human condition". But just running together a few cool ideas is not a big enough development to do that. The human condition would continue to contain phenomena which are unbearable and yet inevitable, and that in turn guarantees that whatever intellectual and cultural permutations occur, there will always be enough dissatisfaction to cause social dysfunction. Nonetheless, I do urge you to go into more detail regarding what you're talking about and what the two magic insights are.
Sure. He wrote about it a lot. Here is a concise quote:
The concepts of chance and contingency, if properly analyzed, do not refer ultimately to the course of events in the universe. They refer to human knowledge, prevision, and action. They have a praxeological [relating to human knowledge and action], not an ontological connotation.
Also:
...Calling an event contingent is not to deny that it is the necessary outcome of the preceding state of affairs. It means that we mortal men do not know whether or not it will happen. The present epistemological situation in the field of quantum mechanics would be correctly described by the statement: We know the various patterns according to which atoms behave and we know the proportion in which each of these patterns becomes actual. This would describe the state of our knowledge as an instance of class probability: We know all about the behavior of the whole class; about the behavior of the individual members of the class we know only that they are members. A statement is probable if our knowledge concerning its content is deficient. We do not know everything which would be required for a definite decision between true and not true. But, on t
Claiming Ludwig for the Bayesian camp is really strange and wrong. His mathematician brother Richard, from whom he takes his philosophy of probability, is literally the arch-frequentist of the 20th century.
And Ludwig and Richard themselves were arch enemies. Well only sort of, but they certainly didn't agree on everything, and the idea that Ludwig simply took his philosophy of probability from his brother couldn't be further from the truth. Ludwig devoted an entire chapter in his Magnum Opus to uncertainty and probability theory, and I've seen it mentioned many times that this chapter could be seen as his response to his brother's philosophy of probability.
I see what you're saying in your post, but the confusion stems from the fact that Ludwig did in fact believe that frequency probability, logical positivism, etc., were useful epistemologies in the natural sciences, and led to plenty of advancements etc., but that they were strictly incorrect when extended to "the sciences of human action" (economics and others). "Class probability" is what he called the instances where frequency worked, and "case probability" where it didn't.
The most concise quote ...
In many artificial rule systems used in games, there often turn out to be severe loopholes that allow an appropriately built character to drastically increase their abilities and power. Examples include how in Morrowind you can use a series of intelligence potions to drastically increase your intelligence and make yourself effectively invincible, or how in Dungeons and Dragons 3.5 a low-level character can, using the right tricks, ascend to effective godhood in minutes.
So, two questions which sort of go against each other. First: is this evidence that randomized rule systems that are complicated enough to be interesting are also likely to allow some sort of drastic increase in effective abilities through some sort of loophole (essentially going FOOM in a general sense)? Second, and in almost the exact opposite direction: such aspects are common in games, and there are quite a few science fiction and fantasy novels where a character (generally evil) tries to do something similar. Less Wrong does have a large cadre of people involved in nerd-literature and the like. Is this aspect of such literature and games acting as fictional evidence in our backgrounds, improperly making such scenarios seem likely or plausible?
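Not the actual game mechanics, but a minimal sketch of the kind of positive-feedback loophole described above, with assumed numbers: each potion's potency scales with the brewer's current stat, and drinking it raises that stat, so the exploit compounds geometrically.

```python
# Toy sketch of a Morrowind-style positive-feedback loophole.
# The numbers and the scaling rule are assumptions for illustration,
# not the real game formulas.

def exploit_loop(intelligence: float, rounds: int, scale: float = 0.5) -> float:
    """Repeatedly brew and drink potions whose potency scales with the stat they boost."""
    for _ in range(rounds):
        potion_bonus = scale * intelligence  # potency depends on the current stat
        intelligence += potion_bonus         # drinking the potion raises the stat
    return intelligence

if __name__ == "__main__":
    for rounds in (1, 5, 10, 20):
        print(rounds, round(exploit_loop(100.0, rounds)))
    # With these assumed parameters the stat grows as 100 * 1.5**rounds,
    # i.e. a small loophole compounds toward effective godhood very quickly.
```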
I found this person's anecdotes and analogies helpful for thinking about self-optimization in more concrete terms than I had been previously.
...A common mental model for performance is what I'll call the "error model." In the error model, a person's performance of a musical piece (or performance on a test) is a perfect performance plus some random error. You can literally think of each note, or each answer, as x + c*epsilon_i, where x is the correct note/answer, and epsilon_i is a random variable, iid Gaussian or something. Better performers have a lower error rate c. Improvement is a matter of lowering your error rate. This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct. Your performance is defined by a single continuous parameter, your accuracy.
But we could also consider the "bug model" of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you'll get a whole class of problems wrong, consistently. Bugs, unlike error rates, can't be quantified along a single axis as less or more s
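A minimal simulation sketch of the two models in the excerpt, with assumed toy parameters: under the error model each answer is missed independently at some rate, while under the bug model a deterministic procedure misses an entire class of questions every time.

```python
import random

def error_model_score(n_questions, error_rate):
    """Error model: each question is missed independently with probability error_rate."""
    return sum(1 for _ in range(n_questions) if random.random() > error_rate)

def bug_model_score(question_types, buggy_types):
    """Bug model: every question whose type hits a bug is missed, consistently."""
    return sum(1 for q in question_types if q not in buggy_types)

if __name__ == "__main__":
    random.seed(0)
    print("error model:", error_model_score(100, error_rate=0.1))  # around 90, varies run to run
    questions = ["arithmetic"] * 80 + ["negative numbers"] * 20
    print("bug model:  ", bug_model_score(questions, buggy_types={"negative numbers"}))  # exactly 80, every time
```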
I've just uploaded an updated version of my comment scroller. Download here. This update makes the script work correctly when hidden comments are loaded (e.g. via the "load all comments" link). Thanks to Oscar Cunningham for prompting me to finally fix it!
Note: Upgrading on Chrome is likely to cause a "Downgrading extension error" (I'd made a mistake with the version numbers previously), the fix is to uninstall and then reinstall the new version. (Uninstall via Tools > Extensions)
For others who aren't using it: I wrote a small user...
Yesterday I was lying in bed thinking about the LW community and had a little epiphany: a guess at why discussions around here on gender relations and on the traditional and new practices of inter-gender choice and manipulation (or "seduction", more narrowly) consistently "fail", as people say - that is, produce genuine disquiet and anger on all sides of the discussion.
The reason is that both opponents and proponents of controversial things in this sphere - be it a technical approach to romantic relations ("PUA") o...
Just posted today: a small rant about hagiographic biographers who switch off their critical thinking in the presence of literary effect and a cool story. A case study in smart people being stupid.
Has anybody actually followed through on Louie's post about optimal employment (i.e. worked a hospitality job in Australia on a work visa)? How did you go about it? Did you just go there without a job lined up like he suggests? That seems risky. And even if you get a job, what if you get fired after a couple of weeks?
I really like the idea, but I'd also like a few more data points.
Argument for Friendly Universe:
Pleasure/pain is one of the simplest control mechanisms, thus it seems probable that it would be discovered by any sufficiently advanced evolutionary process anywhere.
Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away.
Generally, it will succeed. (General intelligence = power of general-purpose optimization.)
Although in a big universe there would exist worlds where unnecessary suffering does not decrease to zero, it would only happen via a lo...
I'm trying to put together an aesthetically pleasing thought experiment / narrative, and am struggling to come up with a way of framing it that won't attract nitpickers.
In a nutshell, the premise is "what similarities and differences are there between real-world human history and culture, and those of a different human history and culture that diverged from ours at some prehistoric point but developed to a similar level of cultural and technological sophistication?"
As such, I need some semi-plausible way for the human population to be physically ...
It's not clear to me why you don't just appeal to Many Worlds, or more generally to alternate histories. These are fairly well-understood concepts among the sort of people who'd be interested in such a thought experiment. Why not simply say "Imagine Carthage had won the Punic Wars" and go from there?
What costs/benefits are there to pursuing a research career in psychology, both from a personal perspective and in terms of societal benefit?
When assessing societal benefit, consider: are you likely to increase the total number of research psychologists, or just increase the size of the pool from which they are drawn? See Just what is ‘making a difference’? - Counterfactuals and career choice on 80000hours.org.
The decision of what career to pursue is one of the largest you will ever take. The value of information here is very high, and I recommend putting a very large amount of work and thought into it - much more than most people do. In particular there is a great deal of valuable stuff on 80000hours.org - it's worth reading it all!
John Derbyshire on Ridding Myself of the Day
...I used to console myself with the thought that at least I’d been reading masses of news and informed opinion, making myself wiser and better equipped to add my own few cents to the pile. This is getting harder and harder to believe. There’s something fleeting, something trivializing about the Internet. I think what I have actually done is wasted five or six perfectly good hours when I could have been working up a book proposal, fixing a side door in the garage, doing bench presses, or…or…reading a novel.
...
I sh
The idea that a stone falls because it is 'going home' brings it no nearer to us than a homing pigeon, but the notion that 'the stone falls because it is obeying a law' makes it like a man, and even a citizen.
--C. S. Lewis
Is it a problem to think of matter/energy as obeying laws which are outside of itself? Is it a problem to think of it as obeying more than one law? Is mathematics a way of getting away from the idea of laws of nature? Is there a way of expressing behaviors as intrinsic to matter/energy in English? Is there anything in the Sequences or elsewhere on the subject?
A discussion on the LessWrong IRC channel about how to provide an incentive for the mathphobic aspiring rationalists on LW to learn the basic mathematics of cool stuff (here is the link to a discussion of that idea) gave us another one.
The Sequences are long
Longer than The Lord of the Rings. There is a reason RationalWiki translates our common phrase of "read the sequences" as "f##k you". I have been here for nearly 2 years and I still haven't read all of them systematically. And even among people who have read them, how much of them will they rec...
......the outstanding feature of any famous and accomplished person, especially a reputed genius, such as Feynman, is never their level of g (or their IQ), but some special talent and some other traits (e.g., zeal, persistence). Outstanding achievement(s) depend on these other qualities besides high intelligence. The special talents, such as mathematical, musical, artistic, literary, or any other of the various "multiple intelligences" that have been mentioned by Howard Gardner and others are more salient in the achievements of geniuses than is their typical
To what extent is this statistically correct? I agree that most high-IQ people are not outstanding geniuses, but neither are most non-high-IQ people. This only proves that high IQ alone is not a guarantee of great achievement.
I suspect a statistical error: ignoring the low prior probability that a human has a very high IQ. Let me explain it by analogy -- you have 1000 white boxes and 10 black boxes. The probability that a white box contains a diamond is 1%. The probability that a black box contains a diamond is 10%. Is it better to choose a black box? Well, let's look at the results: there are 10 white boxes with a diamond and only 1 black box with a diamond... so perhaps choosing a black box is not such a great idea; perhaps there is some other mysterious factor that explains why most diamonds end up in the white boxes? No, the important factor is that a random box has only about a 0.01 prior probability of being black, so even the 1:10 ratio is not enough to make the black boxes contain the majority of diamonds.
The higher the IQ, the fewer people have it, especially for very high values. So even if these people were on average more successful, we would still see more total success achieved by people with not-so-high IQ.
(Disclaimer: I am not saying that IQ has a monotonic impact on success. I'm just saying that seeing most success achieved by people with not-so-high IQ does not disprove this hypothesis.)
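A quick simulation sketch of this base-rate point, with assumed success rates rather than real data: even if people above an IQ threshold succeed at ten times the rate of everyone else, the far larger below-threshold group can still account for most of the successes.

```python
import random

# Assumed numbers for illustration only: IQ ~ N(100, 15), and people at
# or above 145 succeed at 10x the rate of everyone else.
random.seed(0)
population = 1_000_000
successes_high = successes_rest = 0

for _ in range(population):
    iq = random.gauss(100, 15)
    p_success = 0.10 if iq >= 145 else 0.01
    if random.random() < p_success:
        if iq >= 145:
            successes_high += 1
        else:
            successes_rest += 1

print("successes with IQ >= 145:", successes_high)
print("successes with IQ <  145:", successes_rest)
# IQ >= 145 is roughly the top 0.13% of the distribution, so despite the
# 10x higher success rate, most successes still come from below 145.
```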
I think the Ship of Theseus problem is good reductionism practice. Anyone else think similarly?
Sure. Relatedly, the Mona Lisa currently hanging in the Louvre isn't the original... that only existed in the early 1500s. All we have now is the 500-year-old descendant of the original Mona Lisa, which is not the same, it is merely a descendant.
Fortunately for art collectors, human biases are such that the 500-year-old descendant is more valuable in most people's minds than the original would be.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.