If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Related to: List of public drafts on LessWrong

Article based on this draft: Conspiracy Theories as Agency Fictions

I was recently thinking about a failure mode that classical rationality often recognizes and even challenges reasonably competently, yet nearly all the heuristics it uses to detect it seem remarkably easy to misuse. Worse, they seem easily hackable for winning debates. How much has this topic been discussed on LW? Wondering about that, I sketched out my thoughts in the following paragraphs.

On conspiracy theories

What does the phrase even mean? It is generally used to explain events or trends as the results of plots orchestrated by covert groups. Sometimes people use the term for theories that important events are the products of secret plots largely unknown to the general public. Conspiracy in a somewhat more legal sense also describes an agreement between persons to deceive, mislead, or defraud others of their legal rights, or to gain an unfair advantage in some endeavour. And finally it is a convenient tool for clearly and in vivid colours painting something as low status: a boo light applied to any explanation that has people ac...

Polish this and it will make a decent discussion post.

I have since expanded and polished it into an article. I hope it isn't unworthy!
Related: Reversed stupidity is not intelligence; Knowing About Biases Can Hurt People.

An argument against a conspiracy theory is probabilistic: we don't deny that conspiracies exist, only that in this specific case a non-conspiracy explanation is more probable than a conspiracy explanation, so focusing on the conspiracy explanation is privileging a hypothesis. People are not very good at probabilistic reasoning. Some of them prefer an interesting story, and others try to reverse stupidity by making fully general counterarguments against conspiracies.

The situation is further complicated by the lack of a precise definition of what a conspiracy is. Does it require mutual verbal agreement, or does silent cooperation on a Prisoner's Dilemma also count as a conspiracy? (Two duopolistic producers decide not to lower their product prices, without ever speaking with each other.) Somewhere in between is cooperation organized by people who avoid speaking about the topic directly. (Each of the duopolistic producers publishes a press article: "We try to provide the best quality, because making cheap junk would be bad for our customers.") The players can even deceive themselves that they are really pursuing a different goal, and the resulting cooperation is just a side effect.
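The probabilistic point above can be made concrete with a toy Bayes calculation. All the priors and likelihoods below are invented purely for illustration, not estimates of any real case:

```python
# Toy sketch: an argument against a conspiracy theory is probabilistic,
# not a denial that conspiracies exist. Numbers invented for illustration.
p_conspiracy = 0.01          # prior: competently hidden plots are rare
p_mundane = 0.99             # prior: mundane explanations are common
p_e_given_conspiracy = 0.90  # a conspiracy would probably produce event E
p_e_given_mundane = 0.30     # but E also happens without one

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_conspiracy * p_conspiracy + p_e_given_mundane * p_mundane
posterior = p_e_given_conspiracy * p_conspiracy / p_e

# The evidence favors the conspiracy 3:1 as a likelihood ratio, yet the
# mundane explanation remains about 33 times more probable. Focusing on
# the conspiracy anyway is privileging the hypothesis.
print(round(posterior, 3))  # → 0.029
```

The point of the sketch is that "the evidence fits the conspiracy" and "the conspiracy is probable" come apart once priors are taken seriously.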

At Reason Rally a couple of months ago, we noticed that a lot of the atheists seemed to be there for mutual support - because their own communities rejected atheists, because they felt outnumbered and threatened by their peers - and the rally was a way for them to feel part of an in-group.

There seem to be differing concentrations of people who have had this sort of experience on LessWrong. Some of us felt ostracized by our local communities while growing up, others have felt pretty much free to express atheist or utilitarian views for their whole lives. Does anyone else think this would be worth doing a poll on / have experiences they want to share?

Since this got upvoted, I drafted a rough version of a form to use for this poll. Feedback on the survey design is more than welcome.
Are we meant/allowed to fill it out yet?
Since it's been up for a few hours and I haven't gotten any criticisms yet, I'm going to post it as a discussion post. So, go ahead :)

Whatever happened to the second Quantified Health Prize?


I get a ridiculous amount of benefit by abusing store return deadlines. I've tested and returned an iPhone, $400 Cole Haan bag, multiple coats, jeans, software, video games, and much more. It's surprising how long many return periods are, and it's a fantastic way to try new stuff and make sure you like it.

How often do store personnel give you a hard time about returning these objects?
Never. Usually they ask why I'm returning it, and I just decide how literally true I want my answer to be, and that's that. I try to return it before the deadline, though, and I ask what the terms are at the time of purchase. Sometimes stores will let you return stuff later than the deadline, anyway.
In that case how is it "abuse"? Are you speaking of your intent?

Because there is an unspoken understanding - one that michaelcurzi is clearly aware of - that a no-questions-asked returns policy is intended for cases where the buyer found the item unsuitable in some way, rather than to provide free temporary use of the store's stuff.

Re-reading my own post on the 10,000 year explosion, a thought struck me. There's evidence that the human populations in various regions have adapted to their local environment and diet, with e.g. lactose tolerance being more common in people of central and northern European descent. At the same time, there are studies that try to look at the diet, living habits etc. of various exceptionally long-lived populations, and occasionally people suggest that we should try to mimic the diet of such populations in order to be healthier (e.g. the Okinawa diet).

That made me wonder. How generalizable can we consider any findings from such studies? What odds should one assign to the hypothesis that any health benefits such long-lived populations get from their diet are mostly due to local adaptation for that diet, and would not benefit people with different ancestry?

The diets I've seen described all sound like fairly old-fashioned diets. None of them seem to suggest foods that would be novel in any areas - what, fish are novel? Fruits and vegetables? Smaller portions and regular exercise?
Your categories are rather broad. "Fish" are not going to be novel anywhere, but "Atlantic cod" might be. Likewise, by "fruit" do you mean apples, oranges, or some African fruit that's completely obscure in the West because nobody's figured out how to commercialise it yet?
It's a rather broad topic. The main non-disputed example for the 10,000 Year Explosion is lactose-intolerance; lactose is present in most milks, so you could with justice say that this example is an example of an entire unadapted food group. The recommended foods in things like the Mediterranean or Okinawan diets all use food groups consumed by pretty much all ethnicities. No ethnicity is 'fruit intolerant' or 'fish intolerant', that I've heard of. Milk seems to pretty much be the special-case exception that proves the rule.
The risks and benefits of alcohol consumption for different ethnic groups seems like another example.
Milk is two mutations (one in Europe and one in Kenya) and we've worked out when and where. It's a very special case.
This was my rationale for sticking with my mostly wheat-based diet, but I think my belief in this position is slipping. It does appear that there are strong biochemical reasons to favor rice over wheat, for example. I think there's reason to be skeptical of "Okinawans eat this way, so you should too" but I think "Okinawans eat this way" is at least weak evidence for any particular diet change, like "you should eat more rice" or "you should eat more seaweed," but that those changes need other evidence ("you're gluten intolerant" or "iodine is good for you").
There's a lot to be said for self-experimentation. One of my friends has found that his digestive system "shuts down" (constipation, lack of satiation, possibly additional symptoms I've forgotten) if he doesn't eat wheat. This trait runs in his family. I haven't heard of anyone else having it. One's ancestry and long-lived populations give clues about what to experiment with, though.
That's fascinating. Do you know what he tried to replace it with?
I'm pretty sure it was rice. He hasn't experimented to find out whether he needs gluten or if it's something more specific to wheat.

I feel as though this cobbled-together essay from '03 has a lot of untapped potential.

Here's the PDF, and I'll also link to my comment about this essay in another thread.
I think most thoughts could probably not be represented externally. It definitely seems useful but "How to Make a Complete Map of Every Thought You Think" sounds like an exaggeration.
You don't say?
Statements of the "obvious" contribute plenty to the conversation. Putting the silent consensus into words is useful. Condescending snark is not.
Actually I intended to state precisely what I did.

An economics question:

Which economic school of thought most resembles "the standard picture" of cogsci rationality? In other words, which economists understand probability theory, heuristics & biases, reductionism, evolutionary psychology, etc. and properly incorporate it into their work? If these economists aren't of the neo-classical school, how closely does neo-classical economics resemble the standard picture, if at all?

Unnecessary Background Information:

Feel free to not read this. It's just an explanation of why I'm asking these questions.

I'm somewhat at a loss when it comes to economics. When I was younger (maybe 15 or so?) I began reading Austrian economics. The works of Murray Rothbard, Ludwig von Mises, etc., served as my first rigorous introduction to economics. I self-identified as an Austrian for several years, up until a few months ago.

For the past year, I have learned a lot about cogsci rationality through the LW Sequences and related works. I think I have a decent grasp of what cogsci rationality is, why it is correct, and how it conflicts with the method of the Austrian school. (For those who aren't aware, Austrians use an a priori method and claim absolu...

Econ grad student here (and someone else converted away from Austrian econ in part from Caplan's article + debate with Block). Most of economics just chugs right along with the standard rationality (instrumental rationality, not epistemic) assumptions. Not because economists actually believe humans are rational - well some do, but I digress - but largely because we can actually get answers to real world problems out of the rationality assumptions, and sometimes (though not always) these answers correspond to reality. In short, rationality is a model and economists treat it as such - it's false, but it's an often useful approximation of reality. The same goes for always assuming we're in equilibrium. The trick is finding when and where the approximation isn't good enough and what your criteria for "good enough" are.

Now, this doesn't mean mainstream economists aren't interested in cogsci rationality. An entire subfield of economics - Behavioral Economics - rose up in tandem with the rise of the cogsci approach to studying human decision making. In fact, Kahneman won the Nobel Prize in economics. AFAICT there's a large market for economic research that applies behavioral econ...

Wow! That was extraordinarily helpful. My only regret is that I have but one upvote to give.

You're right about the unintentional self-mindkilling from focusing on schools of thought. It's obvious to me in hindsight. It might just be a leftover from my Austrian days, but I am thoroughly skeptical of any macroeconomic model. A red flag for me is when I read that macro models aren't generally reducible to micro models. The only reason I'm reading a macro textbook is that my school requires intro to macro as a prerequisite to intro to micro, and I was thinking of studying introductory macro so I'd have a decent handle on it when I have to take it in school. Reading the first 100 pages of Mankiw's Principles of Macroeconomics hasn't been too terrible, though so far I think it has basically been micro disguised as macro. But based on what you're saying, I think it might be better to stop reading it for now and just learn it when I take it in school.

My math background is okay, but not fantastic. I took some calculus my senior year of high school and got up to integration. For my freshman year of university, I'm taking Calc 1 in the fall and Calc 2 in the spring. Mathematics is likely my primary major, so I think I'll read Mathematics for Economists and then move on to Varian. Thank you very much for the suggested books, advice, and insight.
No problem! If you're going to attack Varian, I'd suggest not focusing on Mathematics for Economists too much. Make sure you understand basic constrained maximization using the Lagrangian and then you're ready for Varian; anything else he does that seems weird you can pick up as needed. Constrained maximization is usually taught in Calc 3 AFAIK, but I don't think it's too difficult if you can handle Calc 1.

As for macro not reducing to micro: this shouldn't be as much of a red flag as it is to most people. Is it a red flag when micro models don't reduce to plausible theories of psychology? Not if it isn't worth the effort of doing micro with said theories. Similarly, there's a trade-off between microeconomic foundations in macro models and actually getting answers out of the models. Often the microeconomic foundations themselves aren't even plausible to begin with. It still might be a red flag based on the details of the tradeoff at the margin, but I'm not sure it's that clear.
I was just reviewing Mathematics for Economists. While a lot of it sounds fascinating, it's probably not what I need at the moment. Too much of it is over my head. So on second thought, I'll probably just review the first half of Calc 1, learn the second half, and tackle Varian. On the topic of macro reducing to micro, point taken. I appreciate the clarification.
Good idea. I wouldn't worry about complicated integrals if you're just preparing for Varian. You'll need integration, but I don't recall anything too complicated. It's mainly the differential calculus that you'll need.
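As a concrete sketch of the constrained maximization discussed above: the textbook Cobb-Douglas case, solved via the Lagrangian first-order conditions. The utility function, prices, and income here are invented for illustration and are not specific to Varian's treatment:

```python
# Maximize U(x, y) = x**a * y**(1-a) subject to the budget px*x + py*y = m.
# The Lagrangian is L = U(x, y) + lam * (m - px*x - py*y); solving the
# first-order conditions gives the familiar closed-form demands below.

def cobb_douglas_demand(a, px, py, m):
    """Demands x* = a*m/px, y* = (1-a)*m/py from the Lagrangian FOCs."""
    return a * m / px, (1 - a) * m / py

def utility(x, y, a):
    return x ** a * y ** (1 - a)

# Spot-check the analytic optimum with a coarse search along the budget line.
a, px, py, m = 0.3, 2.0, 5.0, 100.0
x_star, y_star = cobb_douglas_demand(a, px, py, m)      # (15.0, 14.0)
grid = [i * m / (px * 1000) for i in range(1, 1000)]    # feasible x values
best_x = max(grid, key=lambda x: utility(x, (m - px * x) / py, a))
assert abs(best_x - x_star) < m / (px * 100)            # search agrees
```

Varian derives this sort of problem analytically; the grid search here is only a sanity check on the closed form.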
Debating with Block would turn any rationalist off of Austrian econ. No one got it completely right except Mises himself. Actually not even him, but he was usually extremely rational and rigorous in his approach - more than any other economist I know of - albeit often poorly communicated. In any case, any non-ideologically motivated rationalist worth their salt ought to be able to piece together a decent understanding of the epistemological issues by reading the first 200 pages of Human Action.
Um... and you - alone among the ignorant masses - realize and know all this because...? Sorry, but I don't have a high prior on your authority in the discipline.
I would have prefaced that with "in my opinion," but I thought that was obvious. (What else would it be?)
Interestingly, this is pretty much what I used to say about Marx when I was a Marxist.
My point was to indicate that not all people who put stock in the "Austrian school" accept post-Misesians as competent interpreters. I meant, essentially: Mises had it right, but read his original work (not later Austrians) and you'll be able to tell whether I'm right.
Is your nick from those times? Or a memory of them?
Slightly. Of course, the word has been used by many.
Economics is much bigger than it looks from the outside. People sometimes ask me why I'm studying economics, and my honest answer is "I want to be able to build machines that know how to trust one another".
Experimental economists use cogsci sometimes. Many economists incorporate those findings into models. And you can find Bayesian models in game theory, as alternate equilibrium concepts. But if you're looking for a school of universally Bayesian economists who employ research from cognitive science to make predictions, you won't find them. And I don't really know why it would matter. You won't find many biologists using cogsci rationality either, but that doesn't mean their research findings are false.

Ignore schools of thought entirely and focus on independent empirical/theoretical questions. Use your cogsci rationality skills to differentiate between good and bad arguments and to properly weigh empirical papers. The historical disciplines are largely about politics anyway. The biggest tips for assessing econ are: 1) Most empirical papers are (sometimes necessarily) bad and should only change your priors by a small amount; you should look for overwhelming empirical findings if an argument goes against your (reasonable) priors, and 2) High degrees of consensus are a very good sign. On that second point, most textbooks will be stuff that most economists agree on.
I found your comment very helpful. Thanks! But your following point trips me up: Sure, I don't think a biologist studying mitochondria needs to be an expert on cogsci. Not being an expert on cogsci doesn't make the biologist's findings false. Similarly, it doesn't necessarily make the economist's findings false if she isn't well versed in cogsci. But the reason I'm interested in economists who know cogsci (as opposed to biologists, chemists, or physicists) is that their work directly involves human judgments and decision making under uncertainty. And isn't that precisely what cogsci discusses? Working from a better model of how human beings reason might lead to a better model of how the economy operates. Maybe I'm wrong about that? Either way, I think your later point suffices to address this. Even if economists aren't well versed in cogsci, if they're making any relevant mistakes, then I'll hopefully catch them when reading.
As another econ grad student and former self-professed Austrian, I'll concur with Matt Simpson. Some economists have a good handle on these topics and others don't, but there aren't clear demarcating lines. Except for in macro, there aren't clearly identifiable schools, which is a good sign. By field of study, micro theorists are more likely to use rationality jargon, be familiar with probabilistic logic, and know a few H&B classics like the Allais or Ellsberg paradoxes. Whether micro theorists actually apply this knowledge better than other economists is another question. If you are interested in macro, check out Snowdon and Vane's Modern Macroeconomics. It presents the full gamut of perspectives, steel-manning mainstream and heterodox schools chapter by chapter.
Out of personal interest: does Modern Macroeconomics discuss the "Monetary Disequilibrium" approach to macro?
Assuming you are referring to Austrian-style business cycle theory, the book has a chapter written by Roger Garrison on the subject. While the theory might not be applicable in general, he makes a good case that a boom/bust cycle could be generated by credit expansion.
Oops, I wasn't clear. Monetary Disequilibrium is "Austrian" but is not the same thing as "Austrian Business Cycle Theory" (I think it's mostly orthogonal and I think some Austrians discuss both as important). Monetary Disequilibrium theory might more accurately be called a monetary economic theory rather than a macro economic theory.
Hey badger, thanks for the information. All of that is good to hear, especially since I'm mostly interested in micro. Down the line I may study finance, possibly get a CFA. But if/when there comes to time for me to learn advanced macro, I'll be sure to check out Modern Macroeconomics. Steel-manning all the perspectives sounds like it would be very useful to me. Thanks for the suggestion!
I don't have anywhere near enough time to elaborate on this, but I always feel compelled to respond when anyone mentions Austrian economics. I just want to say--for what it's worth--that even though I'm well-versed in LW-style rationality and epistemology, I consider the work of Ludwig von Mises, and everything that's been an extension thereof, to be in good epistemological standing.

But beware. Mises himself was extremely terrible at explaining the epistemological foundation of his work in a way that avoided being as impenetrable as it was reminiscent of the sort of philosophy most looked down upon on this website, and those who have more than a mere glimmer of understanding of where he was coming from are few and far between; none of them are popular Austrian economists one would normally run into. I implore you, and anyone else reading this who's interested, to investigate and scrutinize the epistemological status of the Austrian School not by reading the incompetent, confused regurgitations of the work of a man who himself could hardly do justice to his method, but by analyzing Austrian economic theory itself, and letting it stand or fall by its own strength.

I know, I know: the epistemological commentary makes it sound like religion. It does! But this is merely an epic failure of communication--something (I consider) monumentally unfortunate given the massive importance of what (I believe) this school has to offer to the world.
That comment was at -2 for several hours, but just now went back to 0. Judging from those two downvotes, some clarification may be in order. I think I may have sounded too confident about my unsubstantiated assertions while not being clear enough about the core issue I was attempting to raise.

What I was trying to bring up is that a school's epistemological commentary and their actual epistemological practice need not be aligned. Nothing says that one must know exactly what one is doing, and furthermore be able to communicate it effectively, to be competent at the task itself. This, I believe, is the story of the Austrian school. Their actual epistemological practice is in many ways solid, but their epistemological commentary is not. All too many intelligent, scientifically-minded people reject the economic theory because the epistemological theory sounds so ridiculous or pseudoscientific. But what I'm saying is that these people are correct about the latter, but not about extending that backward to the former.

What basis does one have for rejecting the epistemological basis of the actual economic theory on the grounds that their epistemological commentary is bad? In what way does one's commentary about what one is doing have that strong a causal connection with the success of the endeavor itself? Instead, one must let the theory itself stand or fall upon its own strength. Rather than looking at the economic theory itself, figuring out the epistemological basis (or lack thereof), and then deciding whether it stands on firm epistemological ground, they look to the Austrians to do their research for them. This, I believe, is a mistake. Mises was bad at communicating his epistemology (though I consider it in many ways solid), and others were just plain bad on epistemology. This does not mean the economic theory is (necessarily) on shaky ground.

How did this happen? Isn't studying epistemology a tool for coming up with sound theory? Wouldn
How do you know when it's epistemology and when it's just epistemological commentary? ("If you're still alive afterwards, it was just epistemological commentary" -- not quite from The Ballad of Halo Jones)
Although I don't fully understand the reference, I think I sort of see where it's going. Either way though, epistemological practice is what one does in coming up with a way of modeling economic activity or anything else, and epistemological commentary is one's attempt to explain the fundamentals of what exactly is going on when one does the former. In this case, you know it's the result of epistemological practice when it's an actual economic model or whatever (e.g., the Austrian Business Cycle Theory), and you know it's epistemological commentary when they start talking about a priori statements, or logical positivism, or something like that.
In other words, they're batshit crazy, but somehow manage to say some sensible things anyway? I'd be uneasy about assuming that getting the right answers implies that they must be doing something rationally right underneath, and only believe they believe that stuff about economics being an a priori science. Re the Halo Jones reference: At one point, Halo Jones has joined the army fighting an interstellar war, and in a rare moment of leisure is talking with a hard-bitten old soldier. The army is desperate to get new recruits into the field as fast as possible, and the distinction between training exercises and actual combat is rather blurred. Halo asks her (it's an all-female army), "How do you know if it was combat, or just combat experience?". She replies, "If you're still alive afterwards, it was just combat experience."

Far from being batshit crazy, Mises was an eminently reasonable thinker. It's just that he didn't do a very good job communicating his epistemological insights (which was understandable, given the insanely difficult nature of explaining what he was trying to get at), but did fine with enough of the economic theory, and thus ended up with a couple generations of followers who extended his economics rather well in plenty of ways, but systematically butchered their interpretation of his epistemological insights.

People compartmentalize, they operate under obstructive identity issues, their beliefs in one area don't propagate to all others, much of what they say or write is signaling that's incompatible with epistemic rationality, etc. Many of these are tangled together. Yeah, it's more than possible for people to say batshit insane things and then turn around and make a bunch of useful insights. The epistemological commentary could almost be seen as signaling team affiliation before actually getting to the useful stuff.

Just consider the kind of people who are bound to become Austrian economists. Anti-authority etc. They have no qualms with breaking from the mainstream in any way whatso...

Your proposed synthesis of Mises and Yudkowsky(?) is moderately interesting, although your claims for the power and importance of such a synthesis suggest naivete. You say that "what's going so wrong in society" can be understood given two ingredients, one of which can be obtained by distilling the essence of the Austrian school, the other of which can be found here on LW but you don't say what it is. As usual, the idea that the meaning of life or the solution to the world-problem or even just the explanation of the contemporary world can be found in a simple juxtaposition of ideas will sound naive and unbelievable to anyone with some breadth of life experience (or just a little historical awareness). I give friendly AI an exemption from such a judgement because by definition it's about superhuman AI and the decoding of the human utility function, apocalyptic developments that would be, not just a line drawn in history, but an evolutionary transition; and an evolutionary transition is a change big enough to genuinely transform or replace the "human condition". But just running together a few cool ideas is not a big enough development to do that. The human condition would continue to contain phenomena which are unbearable and yet inevitable, and that in turn guarantees that whatever intellectual and cultural permutations occur, there will always be enough dissatisfaction to cause social dysfunction. Nonetheless, I do urge you to go into more detail regarding what you're talking about and what the two magic insights are.

Oh sorry. I didn't mean that "what's going so wrong in society" is a single piece that can be understood given those two ingredients but is otherwise destined to remain confusing. I meant that what one finds on Less Wrong explains part of what's going so wrong, and Austrian economics (if properly distilled) elucidates the other. I should clarify though that Less Wrong certainly provides the bigger-picture understanding of the situation, with the whole outdated-hardware analysis etc., and thus it would be less like two symmetrical pieces being fit together, and more like a certain distilled form of Austrian economics being slotted into a missing section in the Less Wrong worldview.

I also didn't mean to suggest that adding some insight from Less Wrong to some insight from the Austrian school would suddenly reveal the solution to civilization's problems. Rather, what I'm suggesting would just be another step in the process of understanding the issues we face--perhaps even a very large step--and thus would simply put us in a better position to figure out what to do to make it significantly more likely that the future will go well. Not two magic insights, but two very large collections of knowledge and information that would be very useful to synthesize and add together.

Less Wrong has a lot of insights about outdated hardware, cognitive biases, how our minds work and where they're likely to go systematically wrong, certain existential risks, AI, etc., and Austrian economics elucidates something much more controversial: the joke that is the current economic, political, and perhaps even social organization of every single nation on Earth. As people from Less Wrong, what else should we expect but complete disaster? The current societal structure is the result of tribal political instincts gone awry in this new, evolutionarily discordant situation of having massive tribes of millions of people. Our hardware and factory presets were optimized for hunter-gatherer situati
I have also found claims that one or a few simple ideas can solve huge swaths of the world's problems to be a sign of naivety, but another exception is when there is mass delusion or confusion due to systematic errors. Provided such pervasive and damaging errors do exist, merely clearing up those errors would be a major service to humanity. In this sense, Less Wrong and Misesian epistemology share a goal: to eliminate flawed reasoning. I am not sure why Mises chose to put forth this LW-style message as a positive theory (praxeology), but the content seems to me entirely negative in that it formalizes and systematizes many of the corrections economists (even mainstream ones) must have been tired of making. Perhaps he found that people were more receptive to hearing a "competing theory" than to having their own theories covered in red ink.
Considering we already had a post on the epistemic problems of the school, would you be willing to write a post or sequence on what you consider particularly interesting or worthwhile in Austrian economics?
Yes. May be a while though.
A possible analogy for how Crux views the Austrian economics might be how most of us view the Copenhagen quantum mechanics of Bohr, Heisenberg et al: excellent science done by top-notch scientists, unfortunately intertwined with a confused epistemology which they thought was essential to the science, but actually wasn't. (I don't know enough about Austrian economics to say if the analogy is at any level correct, but it seems a sensible interpretation of what Crux says).
Block and Rothbard do not understand Austrian economics and are incapable of defending it against serious rationalist criticism. Ludwig von Mises is the only rigorous rationalist in the "school". His works make mincemeat of Caplan's arguments decades before Caplan even makes them. But don't take my word for it - go back and reread Mises directly. You will see that the "rationalist" objections Caplan raises are not new. They are simply born out of a misunderstanding of a complex topic. Rothbard, Block, and most of the other "Austrian" economists that followed merely added another layer of confusion because they weren't careful enough thinkers to understand Mises. ETA: Speaking of Bayesianism, it was also rejected for centuries as being unscientific, for many of the same reasons that Mises's observations have been. In fact, Mises explains exactly why probability is in the mind in his works almost a century ago, and he's not even a mathematician. It is a straightforward application of his Austrian epistemology. I hope that doesn't cause anyone's head to explode.
It's been a while since I read Man, Economy, and State, but it seemed to me that Rothbard (and therefore possibly von Mises) anticipated chaos theory. There was a description of economies chasing perfectly stable supply and demand, but never getting there because circumstances keep changing.
This intrigues me, could you elaborate?

Sure. He wrote about it a lot. Here is a concise quote:

The concepts of chance and contingency, if properly analyzed, do not refer ultimately to the course of events in the universe. They refer to human knowledge, prevision, and action. They have a praxeological [relating to human knowledge and action], not an ontological connotation.


Calling an event contingent is not to deny that it is the necessary outcome of the preceding state of affairs. It means that we mortal men do not know whether or not it will happen. The present epistemological situation in the field of quantum mechanics would be correctly described by the statement: We know the various patterns according to which atoms behave and we know the proportion in which each of these patterns becomes actual. This would describe the state of our knowledge as an instance of class probability: We know all about the behavior of the whole class; about the behavior of the individual members of the class we know only that they are members. A statement is probable if our knowledge concerning its content is deficient. We do not know everything which would be required for a definite decision between true and not true. But, on t

... (read more)
Claiming Ludwig in the Bayesian camp is really strange and wrong. His mathematician brother Richard, from whom he takes his philosophy of probability, is literally the arch-frequentist of the 20th century. And your quote has him taking Richard's exact position: when he says "class probability" he is specifically talking about this. ... Which is the precise opposite of the position of the subjectivist.

Claiming Ludwig in the Bayesian camp is really strange and wrong. His mathematician brother Richard, from whom he takes his philosophy of probability, is literally the arch-frequentist of the 20th century.

And Ludwig and Richard themselves were arch enemies. Well only sort of, but they certainly didn't agree on everything, and the idea that Ludwig simply took his philosophy of probability from his brother couldn't be further from the truth. Ludwig devoted an entire chapter in his Magnum Opus to uncertainty and probability theory, and I've seen it mentioned many times that this chapter could be seen as his response to his brother's philosophy of probability.

I see what you're saying in your post, but the confusion stems from the fact that Ludwig did in fact believe that frequency probability, logical positivism, etc., were useful epistemologies in the natural sciences, and led to plenty of advancements etc., but that they were strictly incorrect when extended to "the sciences of human action" (economics and others). "Class probability" is what he called the instances where frequency worked, and "case probability" where it didn't.

The most concise quote ... (read more)

I didn't say he was in the Bayesian camp, I said he had the Bayesian insight that probability is in the mind. In the final quote he is simply saying that mathematical statements of probability merely summarize our state of knowledge; they do not add anything to it other than putting it in a more useful form. I don't see how this would be interpreted as going against subjectivism, especially when he clearly refers to probabilities being expressions of our ignorance.
Double post

In many artificial rule systems used in games there often turn out to be severe loopholes that allow an appropriate character to drastically increase their abilities and power. Examples include how in Morrowind you can use a series of intelligence potions to drastically increase your intelligence and make yourself effectively invincible, or how in Dungeons and Dragons 3.5 a low-level character can, using the right tricks, ascend to effective godhood in minutes.
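The Morrowind case is a simple positive feedback loop. A minimal sketch (the numbers and the `scale` factor are made up for illustration, not the game's actual alchemy formula) shows how quickly it compounds:

```python
# Sketch of a Morrowind-style fortify-intelligence loop: potion strength
# scales with the brewer's current intelligence, and drinking the potion
# raises intelligence, so alternating brew/drink grows exponentially.
def brew_and_drink(intelligence, rounds, scale=0.5):
    for _ in range(rounds):
        potion_strength = intelligence * scale  # stronger brewer -> stronger potion
        intelligence += potion_strength         # drinking raises the stat further
    return intelligence

print(brew_and_drink(50, 10))  # grows as 50 * 1.5**10, roughly 2883
```

Each pass multiplies the stat by (1 + scale), so a handful of iterations is enough to leave the intended power curve far behind.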

So, two questions which sort of go against each other. First: is this evidence that randomized rule systems that are complicated enough to be interesting are also likely to allow some sort of drastic increase in effective abilities using some sort of loopholes (essentially going FOOM in a general sense)? Second, and in almost the exact opposite direction: such aspects are common in games, and quite a few science fiction and fantasy novels feature a character (generally evil) who tries to do something similar. Less Wrong does have a large cadre of people involved in nerd literature and the like. Is this aspect of such literature and games acting as fictional evidence in our backgrounds, improperly making such scenarios seem likely or plausible?

Can you analogize this to being Turing-complete? One thing esoteric languages - and security research! - teach is that the damndest things can be Turing-complete. (For example, return-into-libc attacks or Wang tiles.)
Yep. Which is why letting a domain-specific language reach Turing-completeness is a danger, because when you can do something you will soon have to do it. I've ranted on this before. Idle speculation: I wonder if this is analogous to the intelligence increase from chimps to humans. Not Turing-completeness precisely, but some similar opening into a new world of possibility, an open door to a previously unreachable area of conceptspace.
That is theoretically possible, but ignores Rule Zero. No GM would allow it. Also, I'm not sure what you mean by "randomized rule systems"; these games are highly designed and highly artificial, not random.
Not necessarily. I've allowed things like that. There isn't anything WRONG with your adventurers ascending to godhood, if that's what they find fun. I had it happen in the one meta-campaign world to the point where there was a pantheon made up of nothing but ascended characters (either from the PCs, or from NPCs who ascended using other methods). It made a good way of keeping track of things that had been done and so couldn't be done in future games ("Ah, you can't use Celerity, Timestop, and Bloodcasting to get infinite turns; Celerity was turned into a divine power by your earlier character Neo.").

However, the game sort of runs out of non-hand-wavy content at that point, so you just have to make up things like Carnage Endelphia Over-Deities, Mass Produced Corrupted Paragon Dragons, etc. I even had an official metric: if you can use your powers to beat a single character with an EL of 8 higher (the point at which the chart just flat out says "We aren't giving EXP for this, they shouldn't have been able to do that"), you are ascension-worthy. It seemed more fun than saying "No, you can't!" And eventually I just stopped planning things out far in advance because I expected a certain amount of gamebreaking from my players.

It's like the mental equivalent of eating cake with a cup of confectioners' sugar on top, though. Even the players eventually sort of get sick of the sweetness and move on to something else. Once they played around with godly power for a bit, they usually got tired of it and we moved on to a new campaign in the meta-campaign world. But it does still allow you to say "Remember that time we made our own pantheon of gods who clawed their way up from the bottom using a variety of methods?" Which, as memories go, is a neat one to have.
Fair enough, though I think that's a special case and most GMs wouldn't be willing to go within a mile of that kind of game play. It sounds amazingly fun though! Kudos!
Ok. But Rule Zero is, in this context, essentially a stop-gap on what the actual rules allow. The universe, as far as we can tell, isn't intelligently designed and thus doesn't have a stop-gap feature added in. The idea here is that even rule systems which are designed to make ascension difficult often seem to still allow it. Still, you are correct that this isn't really at all a sample of randomized rule systems. In that regard, your point is pretty similar to that made by sixes_and_sevens.
The notion of "loopholes" rests on the idea that rules have a "spirit" (what they were ostensibly created to do) and a "letter" (how they are practically implemented). Finding a loophole is generally considered to be adhering to the letter of the law while breaking the spirit of the law. In the examples you cite, the spirit of the rules is to promote a fun, balanced game. Making oneself invincible is considered a loophole because it results in an un-fun, unbalanced game. It's therefore against the spirit of the rules, even though it adheres to the letter. What "spirit" would you be breaking if you suddenly discovered a way to drastically increase your own abilities?
Loophole may have been a bad term to use, given the connotation of rules having a spirit. It might make more sense in context to use something like "a surprisingly easy way to make oneself extremely powerful if one knows the right few small things."
I think you're missing my point, though I didn't really emphasise it. Rule systems are artificial constructs designed for a purpose. Game rules in particular are designed with strong consideration towards balance. Both the examples you gave would be considered design failures in their respective games. The reason they are noteworthy is because the designers have done a good job of eliminating most other avenues of allowing a player character to become game-breakingly overpowered. You ask "is this evidence that randomized rule systems that are complicated enough to be interesting are also likely to allow some sort of drastic increase in effective abilities using some sort of loopholes?" Most rule systems aren't randomised; if they were they probably wouldn't do anything useful. They're also not interesting on the basis of how complicated they are, but because they've been explicitly designed to engage humans.
Ah, I see. I didn't understand you correctly the first time. Yes, that seems like a very valid set of points.
My D&D heyday was 2nd ed, where pretty much any three random innocuous magic items could be combined to make an unstoppable death machine. They've gotten better since then.
That of envy avoidance: rising too high too quickly can also raise ire.

I found this person's anecdotes and analogies helpful for thinking about self-optimization in more concrete terms than I had been previously.

A common mental model for performance is what I'll call the "error model." In the error model, a person's performance of a musical piece (or performance on a test) is a perfect performance plus some random error. You can literally think of each note, or each answer, as x + c*epsilon_i, where x is the correct note/answer, and epsilon_i is a random variable, iid Gaussian or something. Better performers have a lower error rate c. Improvement is a matter of lowering your error rate. This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct. Your performance is defined by a single continuous parameter, your accuracy.

But we could also consider the "bug model" of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you'll get a whole class of problems wrong, consistently. Bugs, unlike error rates, can't be quantified along a single axis as less or more s

... (read more)
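The contrast between the two quoted models can be made concrete with a toy simulation (a sketch under the models' stated assumptions, not anyone's actual grading code; the topic names and counts are invented). Under the error model, misses scatter at random across all topics; under the bug model, the misses are exactly one class of problems:

```python
import random

random.seed(0)

def error_model_misses(questions, error_rate):
    """Error model: each answer is independently wrong with probability error_rate."""
    return [q for q in questions if random.random() < error_rate]

def bug_model_misses(questions, buggy_types):
    """Bug model: a deterministic procedure gets every question of a buggy type wrong."""
    return [q for q in questions if q in buggy_types]

questions = ["arithmetic"] * 40 + ["fractions"] * 30 + ["negatives"] * 30

# Two students can earn a similar score, but the structure of their
# mistakes differs completely: scattered versus concentrated.
print(set(error_model_misses(questions, 0.3)))        # misses span the topics
print(set(bug_model_misses(questions, {"fractions"})))  # {'fractions'}
```

Under the bug model, lowering a single "error rate" parameter is the wrong intervention; finding and fixing the one faulty procedure wipes out the whole class of mistakes at once.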

I've just uploaded an updated version of my comment scroller. Download here. This update makes the script work correctly when hidden comments are loaded (e.g. via the "load all comments" link). Thanks to Oscar Cunningham for prompting me to finally fix it!

Note: Upgrading on Chrome is likely to cause a "Downgrading extension error" (I'd made a mistake with the version numbers previously), the fix is to uninstall and then reinstall the new version. (Uninstall via Tools > Extensions)

For others who aren't using it: I wrote a small user... (read more)

Yay! Thanks!
Very much a 101 question... how do I download your program?
The program is a short snippet of code in your web browser that runs whenever you visit lesswrong.com. The precise method of installation depends on your web browser:

* Firefox: you need to install the Greasemonkey extension, and then just click on the "download here" link above.
* Google Chrome: just click the "download here" link above.
* Opera: I think you can just click the "download here" link. (I'm not 100% sure.)
* Internet Explorer and Safari: this page has links to some help on getting user script support; once you've done that, just click on the "download here" link above.

Once you have got to this stage and clicked the link, a pop-up should appear asking if you want to install this script. It will probably have a warning like "this script can collect your data on lesswrong.com"; this particular script is safe to install: it doesn't send any information anywhere (or even store anything for longer than you are viewing a specific page).

(I haven't been able to test it in Opera, Safari or Internet Explorer, so there is no guarantee that it will work correctly for them.)

Yesterday I was lying in bed thinking about the LW community and had a little epiphany: a guess as to why discussions on gender relations and the traditional and new practices of inter-gender choice and manipulation (or "seduction", more narrowly) around here consistently "fail", as people say - that is, produce genuine disquiet and anger on all sides of the discussion.

The reason is that both opponents and proponents of controversial things in this sphere - be it a technical approach to romantic relations ("PUA") o... (read more)

Before I comment, a nitpick: while I do agree we are worse off in this regard because of the strangeness of the modern world, there is no reason to think nature wouldn't produce some, or perhaps quite a bit of, social and psychological suffering even with us being perfectly well adapted to our environment. After all, we don't expect it to spare us physical pain.
Yes, yes, I agree. By the local standards I might be a bit of a hippie, but the last thing I want to do is demonize the modern life and compare it negatively with the "natural" (mindless & chaos-spawned) alternative. I was merely focusing on the current problem.
Well, I certainly agree that the controversial topics you list have the property you describe -- that is, no popular position on them is unflawed. I don't believe this significantly explains the low light:heat ratio of discussions about those topics, though. There are lots of topics where no popular position on them is unflawed that nevertheless get discussed without the level of emotional investment we see when gender relations or tribal affiliations (or, to a lesser extent, morality) get involved. That said, it's not especially mysterious that gender relations and tribal affiliations reliably elicit more emotional involvement than, say, decision theory.
The problem is that the positions on this topic (not just the popular ones, but all the conceivable non-transhumanist ones) are not just "flawed", they're pretty damn horrible, absolutely speaking. Consider everyone (who's smart enough for it and cares to) unabashedly using "PUA"-style psychological manipulation (not the self-improvement bits there, what they call "inner game" and what's found in all other self-help manuals, but specifically "outer game", internalizing the "marketplace" logic and applying it to their love life) versus things staying as they are, with the sexual status race accelerating and getting more crazy. Clearly, both situations are not just "flawed" but fucking horrible, full of suffering and adversity and shit. That's very easy to imagine, and that's where the tension comes from. (BTW, privately I'm so disgusted at those "seduction" tricks that it took some willpower not to heap abuse at such practices throughout this comment. Don't talk to me about it.)
To make sure I understand... do you predict that for any question, if a group of people G has a set of possible answers A, and G is attempting to come to consensus on one of those answers, G's ability to cooperate in that effort will anticorrelate (p > .95) with how unpleasant G's expected results of implementing any of A are? That would surprise me, if so, but it wouldn't vastly shock me. Call it ~.6 confidence that the above is false. I'm ~.7 confident that G's ability to cooperate in that effort would anticorrelate more strongly with the standard deviation within G of pre-existing individual identifications with political or social entities associated with a particular member of A.
It's partly so in my opinion. I expect a modest effect like that for most issues, but a much more dramatic one on the most painful problems, where our instincts are highly involved and can easily tell us that all the answers are going to hurt - like sex. Why else would you think that most of European classical tragic/dramatic literature touches on intimate dissatisfaction/suffering, and irrational behavior in regards to it?
Because intimate relations are really important to us, so we tell lots of stories about it. It's also why so many popular stories are about couples getting together and living happily ever after.
You're saying that technology - tinkering with human biology and human psychology - can supply a technical fix for problems with sex and death. But the imperfection and dysfunction of social and cultural solutions will also extend to technological solutions. Some methods of life extension will be lethal. Some hopes will be deluded. Some scientific analyses of psychology will be wrong, but they will supply the basis of a subculture or a technological intervention anyway. Rather than discuss it on a meta level first - whatever that means - it would be better if you supplied one or two concrete examples of what you have in mind.
It means that we should not just start discussing whether e.g. polyamory is good, but instead discuss how we, in practice, think and make value judgments about such things - without dwelling too much on concrete examples. I hope that it will, but it might well not, or the cure might be as bad as the disease. That's a useful thought in our current discussions because it puts things in perspective and by contrast illuminates the hard-wired, "inevitable" aspects of baseline humanity, that's what I mean. Absolutely, but my main point is not that we should wait for 50 years/100 years/the Singularity and it'll all be great, but that we should imagine a "good" condition of people and society that's unachievable by "ordinary" means (e.g. hacking ourselves to negate men's attraction to body shape and women's attraction to tribal chieftains) and use it as an example of a desirable outcome when we're talking policy - because this should allow us to notice the imperfection of all those "ordinary" means we're considering. We should allow ourselves a ray of hope to notice the darkness that we're in.

Just posted today: a small rant about hagiographic biographers who switch off their critical thinking in the presence of literary effect and a cool story. A case study in smart people being stupid.

Has anybody actually followed through on Louie's post about optimal employment (i.e. worked a hospitality job in Australia on a work visa)? How did you go about it? Did you just go there without a job lined up like he suggests? That seems risky. And even if you get a job, what if you get fired after a couple of weeks?

I really like the idea, but I'd also like a few more data points.

Well, following through in the sense that my flight is this coming Wednesday (as in, in a few days), actually. :) I'm going without a job lined up. And I'll find out how it works out. I don't have data points for you so much as "about to perform the experiment".

Argument for Friendly Universe:

Pleasure/pain is one of the simplest control mechanisms, thus it seems probable that it would be discovered by any sufficiently advanced evolutionary process anywhere.

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away.

Generally, it will succeed. (General intelligence = power of general-purpose optimization.)

Although in a big universe there would exist worlds where unnecessary suffering does not decrease to zero, it would only happen via a lo... (read more)

Its own pain, probably. Why do you believe it will care about the pain of other beings?
Cooperation with other intelligent beings is instrumentally useful, unless the pain of others is one's terminal value.
If one being is a thousand times more intelligent than another, such cooperation may be a waste of time.
Why do you think so? By default, I think their interaction would run like this: the much more intelligent being will easily persuade/trick the other one to do whatever the first one wants, so they'll cooperate.
Imagine yourself and a bug. A bug that understands numbers up to one hundred, and is even able to do basic mathematical operations, though in 50% of cases it gets the answer wrong. That's pretty impressive for a bug... but how much value would cooperation with this bug provide to you? To compare, how much value would you get by removing such bugs from your house, or by driving your car without caring how many bugs you kill by doing so? You don't have to want to make the bugs suffer. It's enough that they have zero value for you, and that you can gain some value by ignoring their pain. (You could also tell them to leave your house, but maybe they have nowhere else to go, or are just too stupid to find a way out, or they always forget and return.) Now imagine a being with a similar attitude towards humans. Any kind of human thought or work it can do better, and at a lesser cost than communicating with us. It does not hate us; it just can derive some important value by replacing our cities with something else, or by increasing radiation, etc. (And that's still assuming a rather benevolent being with values similar to ours. More friendly than a hypothetical Mother-Theresa-bot convinced that the most beautiful gift for a human is that they can participate in suffering.)
Such a scenario is certainly conceivable. On the other hand, bugs do not have general intelligence. So we can only speculate about how interaction between us and much more intelligent aliens would go. By default, I'd say they'd leave us alone. Unless, of course, there's a hyperspace bypass that needs to be built.
The conclusion doesn't follow. Ripping apart your body to use the atoms to construct something terminally useful is also instrumentally useful.
Only if there's general lack of atoms around. When atoms are in abundance, it's more instrumentally useful to ask me for help constructing whatever you find terminally useful.
Right, but your conclusion still doesn't follow - my example was just to show the flaw in your logic. Generally, you have to consider the trade-offs between cooperating and doing anything else instead.
Well, of course. But which my conclusion you mean that doesn't follow?
But the "[of others]" part is unnecessary. If every intelligent agent optimizes away its own unnecessary pain, that is sufficient for the conclusion. Unless, of course, there exists a significant number of intelligent agents that have the pain of others as a terminal goal, or there's a serious lack of atoms for all agents to achieve their otherwise non-contradicting goals.
This is highly dependent on the strategic structure of the situation.
Since I would care, I think other intelligences could care also. One who cares might be enough to free us all from the pain. A billion who don't care are not enough to preserve the pain.
I'd be interested in seeing you playing a Devil's advocate to your own position and try your best to counter each of the arguments.
Fair enough :) Counterarguments: The rate of appearance of new suffering intelligent agents may be higher than the rate of disappearance of suffering due to optimization efforts. A significant number of evolved intelligent agents may have directly opposing values. The power of general intelligence may be greatly exaggerated.
I rather think that the power of general intelligence is greatly underestimated. Don't misunderestimate!
The probability of a general intelligence destroying itself because of errors of judgement may be large. This would mean that "the power of general intelligence is greatly exaggerated" - nonexistent intelligence is unable to optimize anything anymore.
Which side do you find more compelling and why?
What's your opinion?
What other mechanisms have you compared it to? How do you define "pain" in the general case? How does one define unnecessary pain? Does boredom count as a necessary pain? How far in the future do you have to trace the consequences before deciding that a certain discomfort is unnecessary?
To a lack of any. Sharp negative reinforcement in a behavioristic learning process. Useless/inefficient for the necessary learning purposes. Depends on the circumstances. When boredom is inevitable and there's nothing I can do about it, I would prefer to be without it. Same time range in which my utility function operates. (EDIT: I'm sorry, I should have asked you for your own answers to your questions first. Stupid me.)
Do you actually buy this? I don't have the spoons or the time to refute it point-by-point, but I think it's completely, maybe even obviously and overdetermined-ly wrong, if a somewhat interesting idea.
I wrote it for novelty value, although it seems to be a defensible position. I can think of counterarguments, and counter-counterarguments, etc. Of course, if you are not interested and/or don't have time, you shouldn't argue about it. Thanks for the "spoons" link, a great metaphor there.

I'm trying to put together an aesthetically pleasing thought experiment / narrative, and am struggling to come up with a way of framing it that won't attract nitpickers.

In a nutshell, the premise is "what similarities and differences are there between real-world human history and culture, and those of a different human history and culture that diverged from ours at some prehistoric point but developed to a similar level of cultural and technological sophistication?"

As such, I need some semi-plausible way for the human population to be physically ... (read more)

It's not clear to me why you don't just appeal to Many Worlds, or more generally to alternate histories. These are fairly well-understood concepts among the sort of people who'd be interested in such a thought experiment. Why not simply say "Imagine Carthage had won the Punic Wars" and go from there?

I'm beginning to doubt my motives for this line of thinking, but I'm not abandoning it altogether. The trouble with alternate histories is as soon as you say "imagine so-and-so won such-a-war", people start coming up with stories that lead them to a very specific idea about what such a world would be like. I imagine your appeal to imagine Carthage winning the Punic Wars would involve someone picturing a world practically identical to ours, only retro-fitted with Carthaginian influences instead of Roman ones. I also feel (and it is a feeling I have trouble substantiating) that when posed with a question like "there's another society of humans over there; do they have [x]?", it's a much more straightforward pragmatic question to address than "in an alternate history where such-a-thing happened, do they have [x]?"
I see your point. Perhaps you could try to appeal to non-specific alternate histories? Not "imagine Carthage wins" but "imagine a butterfly zigged instead of zagging on August 3rd, 5823 BCE".
Does that not sound like a super-abstract question to you? I recognise it as asking pretty much exactly the same question as "an alternate several-thousand years of human history has taken place concurrent to, but separate from, our own; what's it like?", but the Many Worlds appeal is like saying "here is a blank canvas where anything can happen", while the equatorial wall or counter-earth scenario is like saying "here is a situation: how do you deal with it?" I think that's what I meant by Many Worlds being too open-ended in my response to drethelin.
So, this actually happened, right? At least, 95% of it. You could give the New World a few advantages (like more animals that are easy and useful to domesticate) and speculate other ways for them to develop. Keeping parts of the world separated after you have ocean-faring ships and air travel seems hard / implausible / you can make a similarly interesting worlds collide experience without needing the first contact to be now-ish.
It's not my intention to write a piece of fiction. It's a thought experiment I am trying to prettify. I want to ask questions like "would they have something like women's lib on the other side?" or "would they have public key cryptography?" or "what would their art have in common with our art?" I am quite surprised to find "prettify" is already in Chrome's spell check dictionary.
Are you interested in what the cultures / economics / politics look like, or are you interested in what the technologies look like? It seems to me that stuff like public key cryptography is in some sense the optimal answer to an engineering problem- and so if you have the problem and the engineering skill, then you will find that answer eventually. For the cultures / economics / politics, then it depends on your view of history. Would the idea of liberty have happened the same way without a New World to expand into? It's really not clear. Could you have an Enlightenment that is politically traditionalist while being culturally and economically radical? If you're interested in those sorts of questions, it seems like you're better off directly trying to build good models of the cultural / economic / political shifts and memes than you are trying to imagine the outcomes of a general thought experiment. [Edit] You may be interested in phrasing things as "What would have to change to result in an Enlightenment that is politically traditionalist while being culturally and economically radical?" to build those models and constrain the deviation from reality.
The broader point of the thought experiment is "is [artefact X] an accident of history or is it somehow inevitable that humans will end up with [artefact X]?" More pointedly, when looking at various academic works and disciplines, I've been using it as an intuition pump for the question "are you describing something present in all human environments, describing aspects of our history, or just making stuff up?" I have privately been using it to my own satisfaction for about six months. I'm trying to come up with a way of aesthetically presenting it to other people in such a way that they won't get bogged down in how a separate 10,000 years of human history, with different humans, has happened somewhere.
Right, and I think the question (that I put in an edit) of "what would have to change for X to (not) have happened?" is relatively good at answering that question for X. It seems to me like to not get public key cryptography you would need math to be different, but to not get women's lib you would need either biology to be different or the idea of personal autonomy to not have become a cultural hegemon, both of which could have been the case (and point to where to look for why they weren't).
Just because the equations would have to be the same, it does not mean the other society would know them and use them like we do. Maybe they don't have Internet yet. Maybe their version of Internet has some (weaker) form of cryptography in the lower layers, so inventing cryptography for higher layers did not feel so necessary. Maybe they researched quantum physics before Internet, so they use quantum cryptography. Or at least they can use different kinds of functions for private/public key pairs.
This is the sort of reasoning I'm looking to generate.
I think that question is better for more thorough analysis but less good as an intuition pump. I'm now trying to figure out whether I find the does-the-alternate-human-society-have-it more tractable as a way of thinking about it, or whether I'm simply attached to it. The question "there's another society of humans over there: do they have [x]?" certainly seems a lot easier to me than "what needs to have happened for this counterfactual to be true?"
I recently ran into the question of whether photography would inevitably lead to loss of interest in representational art.
Depends on what you mean by "interest", presumably. I don't think people have necessarily lost interest in live music since the inception of recorded music; they just have a cheaper substitute for it.
The universe glitched, and an exact duplicate of the entire solar system appeared two light-years to the right? Contact happens when radio telescopy is invented. Divergence comes from a new star appearing in opposite places in each one's sky.
This is the premise of Hominids.
I was ignorant of this novel until about five minutes ago. As a result, I'm still pretty ignorant about it. That seems to be an implementation of something like this scenario using an alternate reality sci-fi trope. I really want to avoid Sliders-style alternate realities because they're (a) too open-ended, and (b) too heavily influenced by existing fiction on the subject.
In what way is an alternate separate earth population functionally different from an alternate universe? You say you're trying to avoid a scifi scenario but your two proposals are already pretty silly scifi. If open-endedness is a problem, simply limit your universes to 2, like in Hominids. Also, it would be easier to give recommendations if I knew what argument you were trying to win with this thought experiment.
I'm not trying to win any arguments. I'm trying to reason about artefacts of human culture that are parochial (accidents of history) or human-universal (practically inevitable products of human history). More to the point, I'm trying to equip other people with tools to reason in a similar fashion. I'm also not trying to avoid sci-fi scenarios, but I am trying to avoid scenarios which have such a long history as a sci-fi trope that they will inevitably influence people's intuitions. I'm not writing a story (although I do want to frame the thought experiment as a fictional narrative). I'm not writing specific details about what's on the other side of the wall / solar system / interdimensional gateway. The whole point of the thought experiment is that we don't know what's on the other side, apart from the fact that it contains a bunch of humans with as much chronological history as us. Based on that knowledge, what can we reason about them?

What costs/benefits are there to pursuing a research career in psychology, both from a personal perspective and in terms of societal benefit?

When assessing societal benefit, consider: are you likely to increase the total number of research psychologists, or just increase the size of the pool from which they are drawn? See Just what is ‘making a difference’? - Counterfactuals and career choice on 80000hours.org.

The decision of what career to pursue is one of the largest you will ever take. The value of information here is very high, and I recommend putting a very large amount of work and thought into it - much more than most people do. In particular there is a great deal of valuable stuff on 80000hours.org - it's worth reading it all!


John Derbyshire on Ridding Myself of the Day

I used to console myself with the thought that at least I’d been reading masses of news and informed opinion, making myself wiser and better equipped to add my own few cents to the pile. This is getting harder and harder to believe. There’s something fleeting, something trivializing about the Internet. I think what I have actually done is wasted five or six perfectly good hours when I could have been working up a book proposal, fixing a side door in the garage, doing bench presses, or…or…reading a novel.


I sh

... (read more)
By the way, here's my enlightened opinion on the recent... controversy (what a contrast between the word's neutral blandness and its meaning) featuring that guy: He's not really a "racist" at all. He does not have any hatred, irrational or otherwise, of other ethnicities. He's just a bit of an asshole - or more than a bit. He's very protective of his in-group and very insensitive to everyone outside it - and flaunts it, at an age when he should really know better. He appears to be not the type of person that we want to encourage in civilized society. I certainly wouldn't care to meet him. But firing him just for being an asshole was stupid.
Your implicit assertion that hating other ethnicities is a necessary condition for meriting the label "racist" is not universally accepted.
Considering what a horrible can of worms the definition of that word is and that "racist" represents a strong political and debating weapon against any enemy, I think society would be much helped to adopt a rationalist taboo on it. Even LessWrong discussions would be improved by this I think.
Yeah, I'm inclined to agree.
Yeah, sure. I was going from the minimal (= most right-wing) definition still accepted in polite society today, because I don't want to hear someone complaining that leftists and postmodernists and Jews are overcomplicating things, or whatever.
Nonsense! No one here would say such a thing. There are no Jews on LessWrong.

The idea that a stone falls because it is 'going home' brings it no nearer to us than a homing pigeon, but the notion that 'the stone falls because it is obeying a law' makes it like a man, and even a citizen.

--C. S. Lewis

Is it a problem to think of matter/energy as obeying laws which are outside of itself? Is it a problem to think of it as obeying more than one law? Is mathematics a way of getting away from the idea of laws of nature? Is there a way of expressing behaviors as intrinsic to matter/energy in English? Is there anything in the Sequences or elsewhere on the subject?

I'm not sure what Lewis is trying to say here, but the physical science meaning and the legal meaning of "law" are different enough that I think it's better to consider them different words that are spelled the same (and etymologically related of course). Which means he's making a pun.
I think it does make sense to consider them as particular cases of a more general concept, after all. Grammatical rules and the rules of chess would be other instances, somewhere in between.
They are all regularities, but laws of physics are regularities that people notice (or try to notice), while legal laws and chess rules are regularities that people impose. (Grammar rules as linguists study them are more like physics; grammar rules as language teachers teach them are more like chess rules.)
OK... let's add one more intermediate point and consider the laws of a cellular automaton. I can see analogies both between them and the laws of our universe¹ and the analogies between them and the rules of chess. 1. And mathematical realists à la Tegmark would see them even more easily than me.

A discussion on the LessWrong IRC channel about how to provide an incentive for the mathphobic aspiring rationalists on LW to learn the basic mathematics of cool stuff (here is the link to a discussion of that idea) gave us another one.

The Sequences are long

Longer than The Lord of the Rings. There is a reason RationalWiki translates our common phrase "read the Sequences" as "f##k you". I have been here for nearly 2 years and I still haven't read all of them systematically. And even among people who have read them, how much of them will they rec... (read more)

Of course, the flashcards are not the only way to test the student's knowledge. If the volunteer puts in some effort, he should be able to come up with his own questions, and if the volunteer knows the sequence well enough, he can ask questions and discuss the matter with the student freely. This would ensure that the student has actually understood the sequence posts he was trying to learn, and did not simply memorize them. In the end, the instructor simply has to judge whether the student read and understood the sequence he wanted to learn, and how he does this doesn't matter that much, IMO. A good and reliable method could be worked out in detail while the first trials are running. If this idea gets approval, the next thing to do would be to try it out!
Yes, it needs to be emphasised that the Anki idea was floated just as a ready-made question set, or notes for the person doing the testing.

...the outstanding feature of any famous and accomplished person, especially a reputed genius, such as Feynman, is never their level of g (or their IQ), but some special talent and some other traits (e.g., zeal, persistence). Outstanding achievement(s) depend on these other qualities besides high intelligence. The special talents, such as mathematical, musical, artistic, literary, or any other of the various “multiple intelligences” that have been mentioned by Howard Gardner and others are more salient in the achievements of geniuses than is their typical

... (read more)

How statistically sound is this? I agree that most high-IQ people are not outstanding geniuses, but neither are most non-high-IQ people. This only proves that high IQ alone is not a guarantee of great achievement.

I suspect a statistical error: ignoring the low prior probability that a human has very high IQ. Let me explain it by analogy -- you have 1000 white boxes and 10 black boxes. The probability that a white box contains a diamond is 1%. The probability that a black box contains a diamond is 10%. Is it better to choose a black box? Well, let's look at the results: there are 10 white boxes with a diamond and only 1 black box with a diamond... so perhaps choosing a black box is not so great an idea; perhaps there is some other mysterious factor that explains why most diamonds end up in the white boxes? No, the important factor is that a random box has only a 0.01 prior probability of being black, so even the 1:10 ratio is not enough to make the black boxes contain the majority of diamonds.

The higher the IQ, the less people have it, especially for very high values. So even if these people were on average more successful, we would still see more total success achieved by people with not so high IQ.

(Disclaimer: I am not saying that IQ has a monotonic impact on success. I'm just saying that seeing most success achieved by people with not so high IQ does not disprove this hypothesis.)
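The base-rate arithmetic in the box analogy above can be checked in a few lines of Python (box counts and probabilities are taken from the example; the variable names are my own):

```python
# Expected diamond counts: many white boxes with a low hit rate
# versus few black boxes with a high hit rate.
n_white, p_white = 1000, 0.01  # 1000 white boxes, 1% chance of a diamond each
n_black, p_black = 10, 0.10    # 10 black boxes, 10% chance of a diamond each

expected_white = n_white * p_white  # expected diamonds in white boxes: 10
expected_black = n_black * p_black  # expected diamonds in black boxes: 1

print(expected_white, expected_black)  # -> 10.0 1.0
```

Even though each black box is ten times as likely to hold a diamond, the white boxes end up with ten times as many diamonds in total, because there are a hundred times as many of them: the prior dominates the likelihood ratio.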

It's interesting how one can read the excerpt in two different ways: 1. "wow, IQ isn't all it's cracked up to be, look at how none of the sample won Nobels but two rejected did" 2. "wow, in this tiny sample of a few hundred kids, they used a test which was so accurate in predicting future accomplishment that if the sample had been just a little bit bigger, it would have picked up two future Nobels - people whose level of accomplishment is literally one in millions - and it does this by only posing some boring puzzles without once looking at SES, personality, location, parents, interests, etc!"
Good point. Also, on a typical test I'd expect a well-educated, moderately high-IQ person to have a 100% success rate on everything that's strongly related to intelligence. So at the top of the range the differences are driven by the parts that have a much less direct relation (e.g. verbal, guess-the-next-in-sequence, etc.). Correlation is a line, but the real relation we should expect would be more like a sigmoid, as the relevant parts of the test saturate. Furthermore, an IQ test doesn't test the capacity to develop competence in a complex field.

I think the Ship of Theseus problem is good reductionism practice. Anyone else think similarly?

If I was to use an advanced molecular assembler to create a perfect copy of the Mona Lisa and destroy the old one in the process, it would still lose a lot of value. That is because many people not only value the molecular setup of things but also their causal history, the transformations things underwent. Personally I wouldn't care if I was disassembled and reassembled somewhere else. If that was a safe and efficient way of travel then I would do it. But I would care if that happened to some sort of artifact I value. Not only because it might lose some of its value in the eyes of other people but also because I personally value its causal history being unaffected by certain transformations. So in what sense would a perfect copy of the Mona Lisa be the same? In every sense except that it was copied. And if you care about that quality then a perfect copy is not the same, it is merely a perfect copy.

Sure. Relatedly, the Mona Lisa currently hanging in the Louvre isn't the original... that only existed in the early 1500s. All we have now is the 500-year-old descendent of the original Mona Lisa, which is not the same, it is merely a descendent.

Fortunately for art collectors, human biases are such that the 500-year-old descendent is more valuable in most people's minds than the original would be.

This has nothing to do with biases, although some people might be confused about what they actually value.
(shrug) Fortunately for art collectors, human minds are such that they reliably ascribe more value to the 500-year-old descendent than to the original.
I'd rather have the early original-- I'd like to see the colors Leonardo intended, though I suppose he was such a geek that he might have tweaked them to allow for some fading. Paint or Pixel: The Digital Divide in Illustration Art has more than a little (and more than I read) about what collecting means when some art is wholly or partially digital. Some artists sell a copy of the process by which the art was created, and some make a copy in paint of the digital original. Strange but true: making digital art is more physically wearing than using paintbrushes and pens and such. Note: the book isn't about illustration in general, it's about fantasy and science fiction illustration in particular.
Surely the original, if discovered to be still extant after all (and proved to really be the original), would be even more highly valued if we had it?
Can you expand a little on how you imagine this happening? I suspect we may be talking past one another.
Ah, I have completely misunderstood you! Thanks for suspecting that we were talking past one another, because it made me reread your comment. I thought that you were taking as factual certain theories that the Mona Lisa in the Louvre is a copy (not descendant) of a painting that has since been lost. Rather than directly engage that claim (which I think is pretty thoroughly disbelieved), I just responded to the idea that the true original would be less valuable, which I find even weirder. But you were not talking about that at all. My only defence is that "the original would be" doesn't really make sense either; perhaps you should write "the original was"?
Heh. I wasn't even aware of any such theories existing. You don't really need defense here, my point was decidedly obscure, as I realized when I tried to answer your question. I got about two paragraphs into a response before I foundered in the inferential gulf. I suspect that any way of talking about "the original" as distinct from its "descendent" is going to lose comprehensibility as it runs into the human predisposition to treat identity as preserved over time.
https://en.wikipedia.org/wiki/Speculation_about_Mona_Lisa#Other_versions (which is more than just speculation about the original)
Requoting: "Look at any photograph or work of art. If you could duplicate exactly the first tiny dot of color, and then the next and the next, you would end with a perfect copy of the whole, indistinguishable from the original in every way, including the so-called 'moral value' of the art itself. Nothing can transcend its smallest elements" - CEO Nwabudike Morgan, "The Ethics of Greed", Sid Meier's Alpha Centauri

In that case, its history would be that it started off as atoms, was transformed into bits, and then was transformed back into atoms again. If the transformation were truly lossless, people familiar with this fact wouldn't care.

Now, this specific example sounds silly because we have no such technology applicable to the Mona Lisa. But consider something like a mass-produced CD. You could take a CD in Europe, losslessly rip it, destroy the CD and copy the bits to America, then send them to a factory to stamp another CD. The resulting variation would be identical to that between the original CD and one of its siblings in Europe. People are familiar with the technologies involved, and they value CDs only for their bits, so the copy really is as good as the original. (Here I have even taken pains to state that the copy is not a burned CD-R, nor that the original was signed by a band member, or any such thing.)

"During World War II, the medals of German scientists Max von Laue and James Franck were sent to Copenhagen for safekeeping. When Germany invaded Denmark, chemist George de Hevesy dissolved them in aqua regia, to prevent confiscation by Nazi Germany and to prevent legal problems for the holders. After the war, the gold was recovered from solution, and the medals re-cast." - Wikipedia
Here is an example. Imagine there was a human colony in another star system. After an initial exploration drone set up a communication node and a molecular assembler on a suitable planet, all other equipment and all humans were transmitted digitally and locally reassembled. Now imagine such a colony would receive a copy of the Venus figurine either digitally transmitted and reassembled or by means of a craft capable of interstellar travel. If you don't perceive there to be a difference, then you simply don't share my values. But consider how many resources, including time, it took to accomplish the relocation in the latter case. The value of something can encompass more than its molecular setup. There might be many sorts of sparkling wines that taste just like Champagne. But if you claim that simply because they taste like Champagne they are Champagne, then you are missing what it is that people actually value.
To try to better understand your value system, I'm going to take what I think you value, attempt to subdivide it in half, and then reconnect it together, and see if it is still valuable. Please feel free to critique any flaws in the following story.

The seller of the Venus tells you "This Venus is the original, carried from Earth on a shuttle, that went through many twists and turns, and near accidents to get here.", and there was recently a shuttle carrying "untransportium", so that is extremely plausible, and he is a trustworthy seller. You feel confident you have just bought the original Venus.

However, later you find out that someone else next door has one of those duplicates of the Venus. He got it for much, much cheaper, but you still enjoy your original. You do have to admit you can't tell them apart in the slightest.

Later still, you find out that an unrelated old man who just died had been having fun with you two by flipping a coin, and periodically switching the two Venuses from one house to the other when it came up tails. He cremated himself beyond recovery, so he can't be revived and interrogated, and you have confirmed video evidence that he appears to have switched the Venuses multiple times in a pattern which resembles that of someone deciding on a fair coin flip, but there doesn't appear to be a way of determining the specific number of switches with any substantial accuracy (video records only go back so far, and you did manage to find an eyewitness who confirms he had done it since before the beginning of the video records). A probabilistic best guess gives you a 50-50 shot of having the original at this point.

Your neighbor, who doesn't really care about the causal history of his Venus, offers to sell you his Venus for part of the price you paid for the original, and then buy himself another replica. Then you will be as certain as you were before to have the original (and you will also have a replica), but you won't know which of the two is which.
The value of both together, one of them being the original, would be a lot less than that of the original. I'd pay maybe 40% for both. The value of just one would be reduced to a small fraction; I wouldn't be interested in buying it at all. The reason is the loss of information.
You would care if certain objects were destructively teleported but not care if the same happened to you (and presumably other humans). Is this a preference you would want to want? I mean, given the ability to self-modify, would you rather keep putting (negative) value on concepts like "copy of" even when there's no practical physical difference? Note that this doesn't mean no longer caring about causal history. (You care about your own causal history in the form of memories and such.) Also, can you trace where this preference is coming from?
Yeah, I would use a teleporter any time if it was safe. But I would only pay a fraction for certain artifacts that were teleported. I would keep that preference. And there is a difference: all the effort it took to relocate an object adds to its overall value, if only for the fact that other people who share my values, or play the same game and therefore play by the same rules, will desire the object even more. Part of the value of touching an asteroid from Mars is the knowledge of its spacetime trajectory. An atomically identical copy of a rock from Mars that was digitally transmitted by a robot probe and printed out for me by my molecular assembler is very different. It is also a rock from Mars, but its spacetime trajectory is different; it is artificial. Which is similar to drinking Champagne versus sparkling wine that tastes exactly the same. The first is valued because while drinking it I am aware of its spacetime trajectory, the resources it took to create it, and where it originally came from and how it got here.
How about if there were two worlds - one where they care about whether a spacetime trajectory does or does not go through a destroy-rebuild cycle, and one where they spend the effort on other things they value. In that case, which world would you rather live in? The Champagne example helps; I can understand putting value on effort for attainment, but I'd like another clarification: if you have two rocks, where rock 1 is brought from Mars via spaceship, and rock 2 is the same as rock 1 except that after receiving it you teleport it 1 meter to the right, would you value rock 2 less than rock 1? If yes, why would you care about that but not about yourself undergoing the same?
It is not that important. I would trade that preference for more important qualities. But that line of reasoning can also lead to the destruction of all complex values. I have to draw a line somewhere or end up solely maximizing the quality that is most important. Rock 1 and 2 would be of almost equal value to me.
In a hypothetical case where you weren't opposed to the slave trade... what would you pay for a transported slave very much like yourself? Would it matter if you had been transported? If the slave had some famous causal history, would it matter whether it was mental (composed an important song) or physical (lone survivor of a disaster)?
So the labour theory of value is true for art?

You're off by a couple months. (should read "May 1-15," instead of "March 1-15").

Edit: It's fixed now

[This comment is no longer endorsed by its author]
As I've said, a script should open such threads. (And I would expect a "thanks" for you.)

I won't be wasting any more time on TVTropes. The reason is that I've become so goddamn angry at the recent shocking triumph of hypocrisy, opportunism, idiocy and moral panic that I literally start growling after looking at any page for more than five seconds. Never again will I become "trapped" in that holier-than-thou fascist little place. Every cloud has a silver lining, I guess. Still, I'm kinda sad that this utter madness happened.

(One particular thing I'm mad about is their perverted treatment of Sengoku Rance, an excellent and engaging vid... (read more)

Er, not to interrupt your moral outrage, but their policy seems more or less reasonable given their size and presumably their legal resources.
If they had the courage to call for a real discussion of issues like teenage sexuality - not to mention call out the schizophrenic mainstream view of those issues - things might've turned out very differently. How the hell does 4chan get away with things that would make that tinpot dictator of an admin faint - and is no less popular for it? From what I've heard, it's not exactly an unprofitable venture for Moot, either.
There's a certain degree of irony involved in your comment, posted as it is on another discussion site run by hopefully-benevolent dictators. 4chan has toed the line considerably; the only thing that keeps them from getting van'd is their ruthlessness in weeding out and banning those responsible for posting child pornography.
I'd say there's a greater distance between an oppressive tinpot dictator and a genuinely benevolent one than between a generic dictator and a generic representative democracy. And their limits on what is and what isn't child pornography are some of the most narrow and liberal in the world. E.g. any written stuff is considered a harmless fantasy, as it should be. Particularly shocking drawn pictures might be censored, but as long as it's not "real" you're not in real trouble. Have you seen what /a/ has been like for the last few years? (I should clarify that, personally, I don't see any specific appeal in erotic material with childlike features, and am faintly put off by it on an emotional level. But I have absolutely no problem with those who indulge in it, as long as they don't engage in anything harmful to real people or support those who do.)

I'm worried I'm too much of a jerk. I used to think I had solved this problem, but I've recently encountered (or, more accurately, stopped ignoring) evidence that my tone is usually too hostile, negative, mean, horrible &c.

Could some people go through my comment history, and point out where I could improve? Sometimes I think I'm exactly enough of a jerk, but other times I bet I cross the line.

Anonymous feedback can go here. Else reply to this comment or send a private message.

Have you tried going through your critical posts/IRC comments and pretending to be on the receiving end? Typical mind fallacy notwithstanding, this should be a decent first step.

I think I see a problem in Robin Hanson's I'm a Sim, or You're Not. He argues:

Today, small-scale coarse simulations are far cheaper than large-scale detailed simulations, and so we run far more of the first type than the second. I expect the same to hold for posthuman simulations of humans – most simulation resources will be allocated to simulations far smaller than an entire human history, and so most simulated humans would be found in such smaller simulations.

Furthermore I expect simulations to be quite unequal in who they simulate in great detail –

... (read more)

From the fact that all of Shadowzerg's comments in this thread have at least three upvotes, I can only assume that the karma sockpuppets are out in force.


The sockpuppets have now been overwhelmed.

I registered the domains maxipok.com and maxipok.org and set them up to redirect to http://www.existential-risk.org/concept.html.

Are you keeping track of hits?
Paul Crowley
I'm just using the registrar's forwarding facility, and it doesn't provide that. I can't quite be arsed to set the domain up myself and do my own redirects, though I guess I could.
Those are exceedingly particular jargon terms, and if you're going to bother it would be interesting to know if they got any hits at all. (The question occurred to me because I was, at the time of writing it, procrastinating from going through log files to answer this precise question concerning several internal domain names I want to kill off on a server I want to kill off. When something gets no hits for six months, nobody cares.)
Paul Crowley
Good points. It's not so much that I think people might be searching for "maxipok" now - if they do they already largely get the right hit - as that I'd like to popularize the term, and it's always wise to buy the domain before you do that.
Oh, of course!

I know there are many programmers on LW, and thought they might appreciate word of the following Kickstarter project. I don't code myself, but from my understanding it's like Scrivener for programmers:



So mstevens, Konkvistador and I had an IRC discussion which sparked an interesting idea.

The basic idea is to get people to read and understand the sequences. As a reward for doing this, there could either be some sort of "medals" or "badges" for users or a karma reward. The "badges" solution would require that changes are made to the site design, but the karma rewards could work without changes, by getting upvotes from the lw crowd.

To confirm that the person actually understood what is written in the sequences, "instruct... (read more)

[This comment is no longer endorsed by its author]

An interesting read I stumbled upon in gwern's Google+ feed.

Shelling Out -- The history of money

I need advice on proof reading. Specifically:

How can I effectively read through 10-20 page reports, searching for spelling, formatting and similar mistakes?

and, more importantly, how can I effectively check calculations and tables done in Excel for errors?

What I'm looking for is some kind of method to do those tasks. Currently, I try to check my results, but it is hard for me not just to glaze over the finished work - I'm familiar with it and it is hard for me to read a familiar text/table/calculation thoroughly.

Does anybody know how one can improve in this respect?

My best advice, though it might not be helpful to you, is to have someone else proofread it.
Possible, but that reflects badly on my performance if they do indeed find mistakes I could have corrected. The goal is to eliminate most of the stuff myself so I don't waste my co-workers' time.
If your co-workers are also proofreading their own work, and having similar issues proofreading what they are too familiar with, then your time and theirs will be more efficiently utilized by proofreading each other's work. So they find mistakes you could have corrected, and you find mistakes they could have corrected, but all these corrections get done with less time and effort.
Some more background: We're a small enterprise (boss, six employees, secretary, two trainees). Except for our secretary and the trainees, everybody has an academic degree. We did try to institute that as a rule, but only me and one person working from home consistently do so. The person working from home is also, at the moment, very busy because of some deadlines, so I can't ask him to proofread. The others do proofread, but don't ask for proofreading in return, which makes asking low-status. They are either better at proofreading than I am or make fewer mistakes in the first place. In fact, I fear the underlying problem is that I am not able to concentrate well, so my work is more error-prone. Making as few mistakes as possible in the first place would obviously be the best solution, but I have even less of an idea how to achieve that, given my current abilities and work conditions.

I've recently spent a lot of time thinking about and researching the sociology and the history of ethics. Based on this I'm going to make a prediction that may be unsettling for some. At least it was unsettling for me when I made it less abstract, shifted to near mode thinking and imagined actually living in such a world. If my model proves accurate I probably eventually will.

"Between consenting adults." as the core of modern sexual morality and the limit of law is going to prove to be a poor Schelling fence.

Unsettling in what sense? Like, it will eventually erode monogamy, and so cause more sexual inequality? It will break down gender-based sexuality, effectively turning everyone bi? Legalized rape? Dogs and cats living together? (Channeling Multiheaded: Don't be so vague, especially when you're making predictions!)
Meh, this is already a fait accompli from what I see. This is excellent advice. I will do so. Children (and I do mean children, I'm not talking about young teens) will be considered capable of giving consent to have sex with adults. Their parents will be discouraged from influencing their choice (even if it is "choice") of sexual partner too much. Rape will become a much less serious crime. A significantly smaller fraction of rapes will be prosecuted. Western countries will have higher rates of rape than they currently do. I think the first urge of a LessWronger reading the above is to pattern-match such claims to weirdtopia: a place where sex with children and more rape is actually a good thing according to our current utility function, just in a really counter-intuitive way. No. I want people to discard just-universe instincts and, just for the sake of visualisation, consider the above outside of weirdtopia.
I agree completely that over time, our current beliefs about who is and isn't capable of giving informed consent to enter into a sexual relationship will be replaced by different beliefs. I don't quite see why trending towards considering more and more people capable of consent is more likely than trending towards fewer and fewer people capable of it, or something else, but it's certainly possible. (If you can share your thinking along these lines, I'd be interested.) In terms of my reactions, I am of course repulsed by the idea of people I don't consider capable of informed consent being manipulated or forced into providing the semblance of it, except in those cases where I happen to endorse the thing they're being manipulated or forced into doing, and also repulsed by the idea of people I consider capable of giving informed consent being denied the freedom to do so, except in those cases where I happen to endorse them being denied that freedom. This includes (but is very much not limited to) being repulsed by the example of eight-year-olds being considered capable of giving consent to have sex with adults, and of anyone not being considered capable of refusing such consent. I am of course aware that my own notions of who is and isn't capable of consenting to what degree to what acts are essentially arbitrary, and I don't lend much credence to the idea that I am by a moral miracle able to make the right distinction. I make the distinctions I make; as my social context changes I will make different distinctions. I'm OK with that.
Thanks, that clarifies it. Agreed about the decline of monogamy as largely inevitable now, though I'm undecided how bad it is, especially with "more fun than sex" superstimuli becoming more widespread. (And for reference, here's some previous discussion about children and sexuality.)
I think I see a possible slippery slope based on "between consenting adults", although (EDIT: based on the above) it was not the one Konkvistador was thinking of.

Presumably clearly illegal: Let's say I mind control thousands of people into doing me as many favors as I want using a magical mind control ring that has a 99.9% success rate. (Obviously, these are not consenting adults!)

Currently legal: Let's say I advertise to thousands of people to buy a product of mine using a variety of technological devices and methods which altogether take several days to fully work, but with only a 50% success rate. (Obviously, these are consenting adults!)

Incremental steps:

  • Let's say I mind control thousands of people into buying a product of mine using a variety of technological devices and methods which altogether take several days to fully work, but with only a 50% success rate.
  • Let's say I mind control thousands of people into doing me a single favor using a variety of technological devices and methods which altogether take several days to fully work, but with only a 50% success rate.
  • Let's say I mind control thousands of people into doing me a single favor using a variety of technological devices and methods which altogether take several days to fully work, but with a 99.9% success rate.
  • Let's say I mind control thousands of people into doing me a single favor using a technological mind control helmet which takes several days to fully work, but has a 99.9% success rate.
  • Let's say I mind control thousands of people into doing me as many favors as I want using a technological mind control helmet which takes several days to fully work, but has a 99.9% success rate.
  • Let's say I mind control thousands of people into doing me as many favors as I want usin

An interesting debate has surfaced after a small group of people claimed to have had success inducing hallucinations through autosuggestive techniques.





It's a really bad idea to link to 4chan since their pages by design disappear so quickly; I've made a copy of it at http://www.webcitation.org/67UBpOp6v and the followup thread at http://www.webcitation.org/67UBuzPru but if I had been a day later...

Maths is great.

Many of us don't know as much maths as we might like.

Khan Academy has many educational videos and exercises for learning maths. Many people might enjoy and benefit from working through them, but suffer from akrasia, which means they won't actually do this without external stimulus.

I propose we have a KA Competition - people compete to do the maths videos and exercises on that site, and post the results in terms of KA badges and karma (they can link to their profiles there so this can be verified).

The community here will vote up impressive achi... (read more)

I'm not sure I'd necessarily advocate this. Don't get me wrong, I love Khan Academy. It's great for revising topics I haven't seen in a while, or getting a different perspective on ones I'm currently learning, but I'm actually studying towards a maths degree, which I then want to go and do something with. If I didn't need to learn linear algebra, I don't think I could call me not learning it a case of akrasia. I'd call that a case of me making more time for eating sandwiches and talking to pretty girls. I might wish I knew lots about linear algebra, but the sandwiches and pretty girls are clearly more important to me. As it happens, I do need to learn linear algebra, and as a result, I end up learning it. If you want people to spend their time learning a skill for the purpose of competing, you may as well tell them to play StarCraft 2. If you want them to learn a useful skill, give them a genuine use for it. Project Euler does this by posing problems solved with algorithms you may actually have to code up in real life one day. Why not assemble actual real-world problems solvable with higher mathematics, let people see what kind of problems they want to solve, and have that direct them in what to learn?

I will not be able to post the May 16-31 open thread until ten hours after midnight EST.

Edit: Circumstances have changed. I will be able to post it on time. (Thus, comment retracted.)

[This comment is no longer endorsed by its author]Reply

Requesting Help on Applying Instrumental Rationality

I'm faced with a dilemma and need a big dose of instrumental rationality. I'll describe the situation:

I'm entering my first semester of college this fall. I'm aiming to graduate in 3-4 years with a Mathematics B.S. In order for my course progression to go smoothly, I need to take Calculus I Honors this fall and Calc II in the spring. These two courses serve as a prerequisite bottleneck. They prevent me from taking higher level math courses.

My SAT scores have exempted me from all placement tests, includin... (read more)

You could take the placement test, and then start studying calculus in the summer (perhaps this is what you meant by "prepare for classes I'll be taking in the fall"), reviewing specific precalc topics as needed when and if your calculus book seems to assume prior knowledge that you don't have.

There was a thread a while ago where somebody converted probabilities via logarithms to numbers, so it's easier to use conditional probabilities. Unfortunately, I didn't bookmark it. Does anybody know which thread I'm talking about?

http://lesswrong.com/lw/buh/the_quick_bayes_table/ Maybe?
Yep, that's it. Thank you!
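For reference, the technique the linked "Quick Bayes Table" post describes can be sketched in a few lines: converting probabilities to log-odds (here in decibels) turns Bayesian updating into simple addition. The prior, likelihood ratio, and function names below are illustrative, not taken from that post.

```python
import math

def log_odds(p: float) -> float:
    """Probability -> log-odds in decibels (decibans): 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

def to_prob(db: float) -> float:
    """Inverse: log-odds in decibels -> probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio.
# In log-odds space that product becomes a sum.
prior = log_odds(0.5)            # 1:1 odds -> 0 dB
evidence = 10 * math.log10(4)    # a 4:1 likelihood ratio, about 6 dB
posterior = to_prob(prior + evidence)
print(round(posterior, 2))       # 1:1 odds times 4:1 ratio -> 4:1 odds -> 0.8
```

With several independent pieces of evidence, you just keep adding their decibel values before converting back, which is what makes the representation convenient.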

A poem about getting out of the box....

Siren Song

This is the one song everyone
would like to learn: the song
that is irresistible:

the song that forces men
to leap overboard in squadrons
even though they see beached skulls

the song nobody knows
because anyone who had heard it
is dead, and the others can’t remember.

Shall I tell you the secret
and if I do, will you get me
out of this bird suit?

I don’t enjoy it here
squatting on this island
looking picturesque and mythical

with these two feathery maniacs, I don’t enjoy singing
this trio, fatal and valuable.

I wi

... (read more)

Update on the accountability system started about a month ago: it worked for about three weeks with everyone regularly turning in work; now I'm the only one still doing it. Lessons learnt: it seems that the half-life of a motivational technique is about two weeks, and that not breaking the chain matters (I suspect it's no coincidence that I'm the only one still going and also the only one who hasn't had unavoidable missed days from travelling). Alternatively, I'm very good at committing to commitment devices, and they're not.

How can I improve my ability to manipulate mental images?

When I try to visualize a scene in my mind I find that edges of the visualization fade away until I only have a tiny image in the center of my visual field or lose the visualization entirely.

Here are some things I have noticed:

  • My ability to visualize seems to go through a cycle in which I first visualize a scene, then I lose pieces of it until it fades entirely, then a little while later I manage to reconstruct the scene.
  • My ability to visualize seems better when I am near sleep though still lucid.
  • Rather than fading evenly towards the center I find that I lose entire chunks of peripheral vision at a time.

In any decision involving an Omega-like entity that can run perfect simulations of you, there wouldn't be a way to tell whether you were inside the simulation or in the real universe. Therefore, in situations where the outcome depends on the results of the simulation, you should act as though you are in the simulation. For example, in counterfactual mugging, you should pay the smaller amount, because if you're in Omega's simulation you guarantee your "real life" counterpart the larger sum.

Of course this only applies if the entity you're dealing with happens to be able to run perfect simulations of reality.
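To make the reasoning above concrete, here is a minimal sketch comparing the expected value of the two policies in counterfactual mugging. The dollar amounts ($100 asked on heads, $10,000 paid on tails) are the conventional illustrative figures, not numbers from the comment.

```python
# Counterfactual mugging: Omega flips a fair coin.
# Heads: Omega asks you for `ask` dollars.
# Tails: Omega pays you `reward` dollars iff its (perfect) simulation
#        of you would have paid on heads.
# A policy is evaluated before the coin flip, over both branches.

def expected_value(pays_when_asked: bool,
                   ask: float = 100.0,
                   reward: float = 10_000.0,
                   p_heads: float = 0.5) -> float:
    heads_outcome = -ask if pays_when_asked else 0.0
    tails_outcome = reward if pays_when_asked else 0.0
    return p_heads * heads_outcome + (1 - p_heads) * tails_outcome

print(expected_value(True))   # pay policy: 0.5 * (-100) + 0.5 * 10000 = 4950.0
print(expected_value(False))  # refuse policy: 0.0
```

The asymmetry is the whole point: the refuser never loses $100, but a committed payer does better in expectation, which is why "act as though you might be the simulation" recommends paying.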


It's frog season again. :-(

Is frog repellent a thing? Could keep them away from the stairwell.
Nothing, reliable or commercially sold that I found. Some homebrew ideas from ehow are here.
Perhaps frog attractant (mating scent or suchlike) put somewhere else to redirect the frogs? Downside: hard to wash off, so your person becomes very attractive to frogs. Just a clueless guess.

I've been reading up on working memory training (the general consensus is that training is useless or very nearly so). However, what I find interesting is how strongly working memory is correlated with performance on a wide variety of intelligence tests. While it seems that you can't train working memory, does anyone know what would stand in the way of artificial enhancements to working memory? (If there are no major problems aside from BCIs not yet being at that point, I know what I will be researching over the next few months. If there is something that would prevent this from working, it would be best to know now.)

Why doesn't someone like Jaan Tallinn or Peter Thiel donate a lot more to SIAI? I don't intend this to mean that I think they should or that I know better than them, I just am not sure what their reasoning is. They have both already donated $100k+ each, but they could easily afford much more (well, I know Peter Thiel could. I don't know exactly how much money Jaan Tallinn actually has). I am just imagining myself in their positions, and I can't easily imagine myself considering an organization like SIAI to be worth donating $100k to, but not to be worth do... (read more)

I may try emailing Jaan Tallinn to ask him myself, depending on how others react to this post

The Singularity Institute is in regular contact with its largest donors. Please do not bother them.

It would not be a solicitation for him to donate more (though certainly I'd have to be careful to make it clear that's not my intention) -- clearly that is something to best leave to SI. It would be a request for clarification of his opinion on these issues. Considering he's a public figure who has done multiple talks on the subject, I don't think it's out of line to ask him for his opinions on how best to allocate funding.
It seems like that might carry some risk of making him feel like he was being bugged to give more money, or something like that. Maybe it would be better to post a draft of such an email to the site first, just in case?
Tax issues. I can't find the original thread now, but it's been repeatedly stated that it causes legal issues if SI gets too much of its funding from a small number of sources.
It would probably need to be much larger fractions for those sorts of issues to be relevant. In general, the IRS doesn't mind donations coming from a few large donors when they aren't too closely connected and there are other donors as well.
But surely there are ways around this? The first idea that comes to mind for me, couldn't they create an offshoot organization that's not officially part of SI but still collaborates closely? If not, there has to be some other way around it. edited to add: Of course there may be good reasons not to do the first idea I suggested above, I'm just saying that someone who wants to spend millions on SIAI-related funding probably wouldn't have trouble doing so for purely legal reasons.
I've long wondered what Peter Thiel's master plan is (more): ETA: Also see 'A Conversation with Peter Thiel':
Either they (those two and a few more):

A - do not buy a near Singularity, where "near" means 1 to 3 decades away,
B - have other (some would say "maybe not friendly") plans, or
C - have a clandestine contract with the SIAI.

I think people seldom live by what they preach. Had I a billion euros to spend, I would not initialize the Singularity through the SIAI. Much less by donating and hoping for a good outcome. No. At most I would invite somebody from the SIAI to join MY team. I generalize from myself.

It occurred to me that I have no idea what people mean by the word "observer". Rather, I don't know if a solid reductionist definition for observation exists. The best I can come up with is "an optimization process that models its environment". This is vague enough to include everything we associate with the word, but it would also include non-conscious systems. Is that okay? I don't really know.

It occurs to me, reading your post, that I have almost no idea what people mean by "conscious system". I'm quite certain I am one, and I regularly experience other people apparently claiming to belong to that set too. I suspect that if we can nail down what it means to belong to the set of "conscious systems", we'll be much more readily able to determine if not being a member of that set disqualifies a thing from being an "observer".
I suppose you're right. Although it's pretty easy for me to imagine something that is "conscious" that isn't an "observer" i.e., a mind without sensory capabilities. I guess I was just wondering whether our common (non-rigorous) definitions of the two concepts are independent.