SUGGEST and VOTE: Posts We Want to Read on Less Wrong

Less Wrong is a large community of very smart people with a wide spectrum of expertise, and I think relatively little of that value has been tapped.

Like my post The Best Textbooks on Every Subject, this is meant to be a community-driven post. The first goal is to identify topics the Less Wrong community would like to read more about. The second goal is to encourage Less Wrongers to write on those topics. (Respecting, of course, the implicit and fuzzy guidelines for what should be posted to Less Wrong.)

One problem is that those with expertise on a subject don't necessarily feel competent to write a front-page post on it. If that's the case, please comment here explaining that you might be able to write one of the requested posts, but you'd like a writing collaborator. We'll try to find you one.



You may either:

  • Post the title of the post you want someone to write for Less Wrong. If the title itself isn't enough to specify the content, include a few sentences of explanation. "How to Learn a Language Quickly" probably needs no elaboration, but "Normative Theory and Coherent Extrapolated Volition" certainly does. Do not post two proposed post titles in the same comment, because that will confuse voting. Please put the title in bold.

  • Vote for a post title that has already been suggested, indicating that you would like to read that post, too. Vote with karma ('Vote Up' or 'Vote Down' on the comment that contains the proposed post title).

I will regularly update the list of suggested Less Wrong posts, ranking them in descending order of votes (like this).


The List So Far (updated 02/11/11)

  • (35) Conversation Strategies for Spreading Rationality Without Annoying People
  • (32) Smart Drugs: Which Ones to Use for What, and Why
  • (30) A Survey of Upgrade Paths for the Human Brain
  • (29) Trusting Your Doctor: When and how to be skeptical about medical advice and medical consensus
  • (25) Rational Homeschool Education
  • (25) Field Manual: What to Do If You're Stranded in a Level 1 (Base Human Equivalent) Brain in a pre-Singularity Civilization
  • (20) Entrepreneurship
  • (20) Detecting And Bridging Inferential Distance For Teachers
  • (19) Detecting And Bridging Inferential Distance For Learners
  • (18) Teaching Utilizable Rationality Skills by Exemplifying the Application of Rationality

  • (13) Open Thread: Offers of Help, Requests for Help
  • (13) Open Thread: Math
  • (12) How to Learn a Language Quickly
  • (12) True Answers for Every Philosophical Question
  • (10) The "Reductionism" Sequence in One Lesson
  • (10) The "Map and Territory" Sequence in One Lesson
  • (10) The "Mysterious Answers to Mysterious Questions" Sequence in One Lesson
  • (10) Lecture Notes on Personal Rationality
  • (10) The "Joy in the Merely Real" Sequence in One Lesson
(below 10 points not listed)


102 comments

Conversation Strategies for Spreading Rationality Without Annoying People

It occurs to me that resorting to manipulative methods to teach someone methods which will improve their ability to detect said manipulations has problems entirely separate from moral concerns.

Once you've climbed the ladder, you can discard it.

Will the person being manipulated discard it?

If you set it up properly, yes. The moral concerns remain; I'm just saying that one could teach resistance to manipulation in an (at first undetected) manipulative fashion, so that the eventual discovery reinforces rather than undermines the lesson.

If it helps, imagine QQ doing this.

Presumably Orthonormal is referring to "Quirinus Quirrell," the fictional character from Eliezer Yudkowsky's work of Harry Potter fanfiction, Harry Potter and the Methods of Rationality. Best wishes, the Less Wrong Reference Desk.

Trusting Your Doctor: When and how to be skeptical about medical advice and medical consensus

Smart Drugs: Which Ones to Use for What, and Why

I'd like to take a stab at writing this one, actually, if no one else is dead set on it. Expect it in the discussion section within forty-eight hours.

EDIT: Status as of 11:56AM EST, Feb 9: The first draft is about 90% completed, but I need to leave it aside and run off to class. I will post it this afternoon, and then revise it (aided by your contributions!) over the remainder of the week.

EDIT EDIT: Posted as of 8:10PM EST.

I'm also planning on writing an article. I'll be focusing on a small number of smart drugs that have the most evidence supporting them. I'll probably be able to write it sometime this week.

Rational Education

As a mom who can't afford private schools and is horrified by the current state of public education in my country (USA), I'm keenly interested in rational homeschool curriculum ideas--both explicitly teaching rationality itself, and also teaching specific subjects in a rational way. Teaching the skills necessary for self-education might be a third topic.

I was homeschooled - I should ask my mom about resources she used. It is worth noting that the three of us siblings all became readers, and she provided us with good textbooks to work from. I taught myself algebra and geometry out of a geometry textbook she bought, for instance.

(I think I saw a copy of John Holt's How Children Fail around the house once, to name an author who appeared in a few Rationality Quotes threads, but I really don't know if that was a significant part of my parents' thinking.)

A Survey of Upgrade Paths for the Human Brain

Field Manual: What to Do If You're Stranded in a Level 1 (Base Human Equivalent) Brain in a pre-Singularity Civilization

Detecting And Bridging Inferential Distance For Teachers

Roughly: Generic tutoring skills where a stable curriculum doesn't exist and what the person who is being taught actually knows can be patchy or surprising.

Responding to Silas's comment about the learning side of the equation. He wrote:

Don't forget the problem from the other side, too: how to detect and bridge inferential distance for knowledge-havers, i.e., how to find the knowledge-gap and convey the information to them. (That was actually the long-delayed article I'm working on, given my success in teaching others and my difficulty in getting others to convey knowledge to me when the roles are reversed.)

(The use of the term "knowledge haver" rather than "teacher" was deliberate.)

Yes, I used the activity oriented "learner" over the institutional role of "student" specifically because I was trying to emphasize general life skills.

I think it says something about our culture that there doesn't appear to be a common term for "one who conveys a lesson" without the connotation of "teacher" as something people do for money. When I suggested an article for "teachers" I used the best non-neologism I could think of. Having thought about this some more, I'm wondering whether "mentor" might be a better term than "teacher".

The trick with mentoring is that it's a long-term process, and is less about delivering pre-specified lessons and more about offering supplementary insight into the mentee's ongoing, currently articulated life processes.

Thinking about the terminological issues, it strikes me that these conceptual framing issues have implications for what kinds of learning/teaching are actually possible. Perhaps a lot of the skills here involve having a realistic model of a normal person's willingness and capacity to learn? Maybe you just can't teach/mentor/tutor very well without long term insight and life-driven discovery of knowledge gaps? Maybe other languages cut the world in better ways? For example there's senpai and kohai in Japanese, but that also carries baggage about organizational status hierarchies rather than transmission of specialist expertise itself.

I agree that there's no commonly used term for what you want to describe, and "knowledge haver" is just as problematic. Ideally, people will alternate between being a mentor and learner throughout their lives -- the process never ends.

Btw, though my article on this matter is ballooning, the advice for "teachers" amounts to:

a) Actually understand the subject matter yourself, in the sense of having a model that connects to your understanding of everything else. (Obligatory plug: that means Level 2.)

b) Identify the nearest point of common understanding ("nepocu"), and work back to your own understanding from there.

Detecting And Bridging Inferential Distance For Learners

Roughly: How to notice when someone has more levels of expertise than you do in some area and then effectively and ethically acquire their skills/wisdom/knowledge.

Don't forget the problem from the other side, too: how to detect and bridge inferential distance for knowledge-havers, i.e., how to find the knowledge-gap and convey the information to them. (That was actually the long-delayed article I'm working on, given my success in teaching others and my difficulty in getting others to convey knowledge to me when the roles are reversed.)

EDIT: Nevermind, I didn't read the discussion before saying that.

(The use of the term "knowledge haver" rather than "teacher" was deliberate.)

For reference, I responded here to put the useful conversation in the right part of the tree.

Teaching utilizable rationality skills by exemplifying the application of rationality

Open Thread: Offers of Help, Requests for Help

True Answers for Every Philosophical Question

I don't want true answers to those questions; I want confusion-extinguishing ones.

Are you saying there is no such thing as true and false in philosophy (only confusing and confusion-extinguishing), or that given the choice between a true but confusing answer and a false but confusion-extinguishing answer, you'd choose the latter?

Maybe I started sounding a little thick-headed to you, as I have in the past, so let me try to rephrase my criticism more substantively.

For the class of questions you're referring to, I believe that as you gain more and more knowledge, and are able to better refine what you're asking for in light of what you (and future self-modifications) want, it will turn out that the thing you're actually looking for is better described as "confusion extinguishment" rather than "truth".

This is because, at a universal-enough level of knowledge, "truth" becomes ill-defined, and what you really want is an understandable mapping from yourself to reality. In our current state, with a specific ontology and language assumed, we can take an arbitrary utterance and classify it as true or false (edit: or unknown or meaningless). But as that ontology adjusts to account for new knowledge, there is no natural grounding from which to judge statements, and so you "cut out the middle" and search directly for the mapping from an encoding to useful predictions about reality, in which the encoding is only true or false relative to a model (or "decompressor").

(Similarly, whether I'm lying to you depends on whether you are aware of the encoding I'm using, and whether I'm aware of this awareness. If the truth is "yes", but you already know I'll say "no" if I mean "yes", it is not lying for me to say "no". Likewise, it is lying if I predicate my answer on a coinflip [when you're not asking about a coin flip] -- even if the coinflip results in giving me the correct answer. Entanglement, not truth, is the key concept here.)

Therefore, in the limit of infinite knowledge, the goal you will be seeking will look more like "confusion extinguishment" than "truth".

I'm afraid there's too big of an inferential gap between us, and I'm not getting much out of your comment. As an example of one confusion I have, when you say:

This is because, at a universal-enough level of knowledge, "truth" becomes ill-defined

you seem to be assuming a specific theory of truth, one I'm not familiar with. Perhaps you can refer me to it, or consider expanding your comment into a post?

I thought I just explained it in the same paragraph and in the parenthetical. Did you read those? If so, which claim do you find implausible or irrelevant to the issue?

The purpose of my remarks following the part you quoted was to clarify what I meant, so I'm not sure what to do when you cut that explanation off and plead incomprehension.

I'll say it one more time in a different way: You make certain assumptions, both in the background, and in your language, when you claim that "100 angels can dance on the head of a pin". As those assumptions turn out false, they lose importance, and you are forced to ask a different question with different assumptions, until you're no longer answering anything like e.g. "Do humans have free will?" or about angels -- both your terms, and your criteria for deciding when you have an acceptable answer, have changed so as to render the original question irrelevant and meaningless.

(Edit: So once you've learned enough, you no longer care if "Do humans have free will?" is "true", or even what such a thing means. You know why you asked about the phenomenon you had in mind with the question, thus "unasking" the question.)

I looked at the list of theories of truth you linked, and they don't seem to address (or be robust against) the kind of situation we're talking about here, in which the very assumptions behind claims are undergoing rapid change, and necessitate changes to the language in which you express claims. The pragmatic (#2) sounds closest to what I'm judging answers to philosophical questions by, though.

Thanks, that's actually much clearer to me.

You know why you asked about the phenomenon you had in mind with the question, thus "unasking" the question.

But can't that knowledge be expressed as a truth in some language, even if not the one that I used when I first asked the question? To put it another way, if I'm to be given confusion extinguishing answers, I still want them to be true answers, because surely there are false answers that will also extinguish my confusion (since I'm human and flawed).

I'm worried about prematurely identifying the thing we want with heuristics for obtaining that thing. I think we are tempted to do this when we want to clearly express what we want, and we don't understand it, but we do understand the heuristics.

Do you understand my worry, and if so, do you think it applies here?

I think I understand your worry: you think there's a truth thing separate from the heuristic I gave, and that the latter is just a loose approximation that we should not use as a replacement for the former.

I differ in that I think it's the reverse: truth always "cashes out" as a useful self-to-reality model, and this becomes clearer as your model gets more accurate. Rather than just a heuristic, it is ultimately what you want when you say you are seeking the truth. And any judgment that you have reached the truth will fall back on the question of whether you have a useful self-to-reality model.

To put it another way, what if the model you were given performs perfectly? Would you have any worry that, "okay, sure, this is able to accurately capture the dynamics of all phenomena I am capable of observing ... but what if it's just tricking me? This might not all be really true." I would say at that point, you have your priorities reversed: if something fails at being "truth" but can perform that well, this "non-truth" is no longer something you should care about.

it will turn out that the thing you're actually looking for is better described as "confusion extinguishment" rather than "truth".

This is because, at a universal-enough level of knowledge, "truth" becomes ill-defined, and what you really want is an understandable mapping from yourself to reality

Rather than "truth" being ill-defined, I would rather want to say that the problem is simply that an answer of the form "true" or "false" will typically convey fewer bits of information than an answer that would be described as "confusion-extinguishing"; the latter would usually involve carving up your hypothesis-space more finely and directing your probability-flow more efficiently toward smaller regions of the space.
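The bit-counting point can be made concrete with a toy calculation (my own illustration, with made-up hypothesis counts): over a uniform hypothesis space, a bare true/false verdict conveys at most one bit, while an answer that carves the space more finely conveys more.

```python
import math

def bits_gained(prior_n, posterior_n):
    """Information gained when a uniform prior over prior_n hypotheses
    is narrowed to a uniform posterior over posterior_n of them."""
    return math.log2(prior_n) - math.log2(posterior_n)

# A yes/no answer can at best halve a uniform space of 8 hypotheses: 1 bit.
assert bits_gained(8, 4) == 1.0
# An answer pinning down a single hypothesis out of 8 conveys 3 bits.
assert bits_gained(8, 1) == 3.0
```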

Fair enough: I think it can be rephrased as a problem about declining helpfulness of "true/false" answers as your knowledge expands and becomes more well-grounded.

I'm saying that the "confusion-extinguishing" heuristic is a better one for identifying good answers to philosophical questions, as judged by me, and probably as judged by you as well.

Also that, given the topic matter, truth may be undecidable for some questions (owing to the process by which philosophers arrived at them), in which case you'd want the confusion-extinguishing answer anyway.

"confusion-extinguishing" heuristic is a better one

Better than what? Better than "it seems true to me"? But I didn't ask for "Answers That Seem True".

"Confusion-extinguishing" may be the best heuristic I have now for arriving at the truth, but if someone else has come up with better heuristics, I want them to write about the answers they arrived at using those heuristics. I think I was right to identify what I actually want, which is truth, and not answers satisfying a particular heuristic.

Do you want to know whether "100 angels can dance on the head of a pin" is true, or do you want the confusion that generated that question to be extinguished?

(It's true, by the way.)

Do you think this is possible right now? Would this be a joke post that you want to read, or something?

I hope it isn't a joke. I can see great use for a deconstruction of the many philosophical questions and failed philosophies, and, most importantly, some kind of status report on more modern thought.

We've all heard Hume, Kant and Descartes, to name a few. But their ideas were formed long before the Scientific Revolution, which I arbitrarily deem to be the publication of On the Origin of Species. It would be nice to point people arguing old-school deontology, for example, to Wei Dai's chapter: True Answers About Why Good Will Alone Is Insufficient.

Suppose acting out of concern for the morality of my future selves was moral.

For a reductio, assume moral motive was sufficient for moral action. Suppose you self-modified yourself into a paperclipper, who believed it was moral to make paperclips. Now, post-modification you could be moral by making paperclips. Recognising this, your motive in self-modifying is to help your future self to act morally. Hence, by our Kantian assumption, the self-modification was moral. Hence it is moral to become a paperclipper!
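Schematically, the reductio can be written out as follows (my own shorthand: M(a) for "action a is done from a moral motive", Moral(a) for "a is moral", and s for the act of self-modifying into a paperclipper):

```latex
\begin{align*}
\text{(K)}\quad & M(a) \Rightarrow \mathrm{Moral}(a)
  && \text{(assumption: moral motive suffices for moral action)} \\
\text{(1)}\quad & M(s)
  && \text{(the self-modification is motivated by future selves' morality)} \\
\text{(2)}\quad & \mathrm{Moral}(s)
  && \text{(from (K) and (1))}
\end{align*}
```

Since (2) says it is moral to become a paperclipper, rejecting that conclusion means rejecting the Kantian assumption (K).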

In some ways I like this idea, but in some ways I don't think it would work. Suppose, for example, that I produce a post entitled "The real reason why philosophical realism sucks". The post consists of 20 lines or so of aphorisms, each a link to a more complete philosophical argument. Cool, potentially informative, and very likely useful as a reference. But how would you discuss a posting like that in the comments?

Full content of the actual post:

"I'm not sure."

Lecture Notes on Personal Rationality

(Not "in one lesson" summaries, but self-contained treatment of the topic, incorporating material from the Sequences probably, written from scratch by another author, as a presentation appropriate for teaching a course.)

"How to Learn a Language Quickly" probably needs no elaboration

That one doesn't sound bad. I'd like to read a take from a non-Ferriss source.

In short: immersion, SRS and cloze deletion. Screw textbooks, classes and any "this isn't proper material for a learner" elitism.

Learning a language takes 3000-10000 hours with the best techniques (length depending only on how closely related it is to one you already know), half that for decent basic fluency, about 2-4 weeks of intense practice for pub-level conversations. There's no free lunch, but it can be pretty tasty.


1) There is no Immersion like Immersion and Khatzumoto is its prophet. (Slightly kidding, but he's my favorite advocate of the approach and fun to read. And he is absolutely right.)

2) What's cloze deletion? Anki FAQ. Why does it matter? It gives you lots of context around unknown pieces, making them stick better. Also, it's fun.

3) Anki is the best SRS; see the site for an explanation of how to use it. At first, you make cards "word -> translation". Then "easy sentence -> translation". Then "easy sentence with cloze-deleted gap" -> "full sentence". Try adding more context, like surrounding sentences in a conversation, audio and so on. Always go "target language -> translation" or "target language -> target language". (Contrasting with Khatz' advice, I'd recommend staying with translations and bilingual material for a long time, until you can actually feel how sucky the translation is.)

4) If you like talking more than reading, copy Benny. Otherwise just consume as described.
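To make the card progression in 2) and 3) concrete, here is a minimal sketch in Python (the German sentence and glosses are just placeholder data; `{{c1::...}}` is Anki's cloze markup):

```python
import re

def cloze(sentence, target):
    """Blank the first occurrence of `target`, using Anki's {{c1::...}} markup."""
    return re.sub(re.escape(target),
                  lambda m: "{{c1::" + m.group(0) + "}}",
                  sentence, count=1)

sentence, word, gloss = "Der Hund jagt die Katze.", "jagt", "chases"

# The card progression described above, as (front, back) pairs:
cards = [
    (word, gloss),                          # word -> translation
    (sentence, "The dog chases the cat."),  # easy sentence -> translation
    (cloze(sentence, word), sentence),      # cloze-deleted gap -> full sentence
]
```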

This might seem a bit Japanese-centric because a) I study it and b) it has the best learning community evar, but this stuff applies to all languages equally. Some esoteric choices (say, dead languages) require some additional tricks to fix specific issues, but essentially it's all the same.

If someone would like more details, especially for some specific problem, technique or language, just ask. I've been studying languages as a main hobby, with varying intensity, for about 4-5 years now, and have tried pretty much everything that's out there in some form or another. But basically, there are no shortcuts. Do what's fun, imitate relentlessly, use an SRS so you don't forget everything again.

Since when has Japanese had the best learning community evar? It may be very friendly online, but in my face-to-face experiences public courses have fallen painfully short - I've been studying independently for only a year and a half and talk circles around AP students. Although they do still have an edge on me in such fields as "ordering meals in restaurants" and "presenting business cards", they really have no functional knowledge of the language at all.

Here is my recommended method, in three complex but well-defined steps:

  1. Learn the grammar of the language using an old-fashioned (pre-1960) textbook.

  2. Access a large corpus of data (text and speech in the language).

  3. Practice using the language with people who know it, and receive feedback.

The "Joy in the Merely Real" Sequence in One Lesson

The "Map and Territory" Sequence in One Lesson

The "Mysterious Answers to Mysterious Questions" Sequence in One Lesson

How to incorporate Spaced-Repetition Systems (SRS) into your self-study program.

This might seem trivial, but I've personally never gotten up the activation energy to actually learn how to use SRS effectively, despite being convinced that I would benefit from doing so.

A short, "Do It Like This"-type post would be most helpful!

Why Cryonics, Uploading, and Destructive Teleportation Do Not Kill You

This was asked for in the IRC channel. I don't think anyone came up with a link to a good and accessible single-link refutation.

ETA: Changed the clumsy cutesy title according to the suggestion below.

ETA 2: David Chalmers' singularity paper has a reasonably good overview on the subject, but it's mixed up with a bunch of other stuff.

I've added this as: "Why Cryonics, Uploading, and Destructive Teleportation Do Not Kill You".

The Arrow of Time

Gary Drescher dissolves this old mystery in one chapter of "Good and Real". Amazing. I must have read a dozen pop science books that discuss this problem, analyze some proposed solutions, and then leave it as a mystery. Drescher crushes it.

This may not fit in one posting, but it might well fit in a sequence of four or so.

Believe it or not, I actually started an article on this around "17 October 2009" (per the date stamp) and never finished it. (I actually had the more ambitious idea of summarizing every chapter in one article, but figured Chapter 3 would be enough.) Might as well post what I have (formatting and links don't carry over; I've corrected the worst issues) ...

Here I attempt to summarize the points laid out in Gary Drescher's Good and Real: Demystifying Paradoxes from Physics to Ethics (discussed previously on Less Wrong), chapter 3, which explores the apparent flow of time and gives a reductionist account of it. To [...] What follows is a restating of the essential points and the arguments behind them in my own words, which I hope to make faithful to the text. It's long, but a lot shorter than reading the chapter, a lot cheaper than buying the book, and a lot less subjunctively self-defeating than pirating it.

The focus of the chapter is to solve three interrelated paradoxes: If the laws of physics are time-symmetric:

1) Why does entropy increase in only one direction?

2) Why do we perceive a directional flow of time?

3) Why do we remember the past but not the future?

Starting from the first: why does entropy -- the total disorder in the universe -- increase asymmetrically? To answer, start with a simple case: the billiard ball simulation, where balls have a velocity and position and elastically bounce off each other as per the standard equations predicated on the (time-symmetric) conservation of linear momentum. For a good example of entropy's increase, let's initialize it with a non-uniformity: there will be a few large, fast balls, and many small, slow balls.

What happens? Well, as time goes by, they bounce off each other, and the larger balls transfer their momentum to balls with less. We see the standard increase in entropy as time increases. So if you were to watch a video of the simulation in action, there would be telltale signs of which is the positive and which is the negative direction: in the positive direction, large balls would plow through groups of smaller balls, leaving a "wake" as they increase the smaller balls' speeds. But if we watch it in reverse, going back to the start, entropy, of course, decreases: highly-ordered wakes spontaneously form before the large balls go into them.

Hence, the asymmetry: entropy increases in only one direction.

The mystery dissolves when you consider what happens when you continue to view the simulation backwards, and proceed through the initial time, onward to t= -1, -2, -3, ... . You see the exact same thing happen going in the direction of negative time from t=0. So, we see our confusion: entropy does not increase in just the positive direction: it increases as you move away from zero, even if that direction isn't positive.

So, we need to reframe our understanding: instead of thinking in terms of positive and negative time directions, we should think in terms of "pastward" and "futureward" directions. Pastward means in the direction of the initial state, and futureward means away from it. Both the sequences t= 1, 2, 3, ... and t= -1, -2, -3, ... go into the future. (Note the parallel here to the reframing of "up" and "down" once your model of the earth goes from flat to round: "down" no longer means a specific vector, but the vector from where you are to the center of the earth. So you change your model of "down" and "up" to "centerward" and "anticenterward" [my terms, not Drescher's], respectively.)
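The "entropy increases away from t = 0 in both directions" point can be checked with an even simpler toy than the billiard simulation (my own sketch, not Drescher's model: free, non-colliding particles under the trivially time-symmetric rule x(t) = x0 + v*t):

```python
import math
import random

random.seed(0)

# A low-entropy boundary condition at t = 0: particles clustered near the
# origin, with independently drawn velocities.
N = 5000
xs = [random.gauss(0, 1) for _ in range(N)]
vs = [random.gauss(0, 1) for _ in range(N)]

def coarse_entropy(positions, bins=40, lo=-60.0, hi=60.0):
    """Shannon entropy (in nats) of a coarse-grained position histogram."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in positions:
        i = min(bins - 1, max(0, int((x - lo) / width)))
        counts[i] += 1
    n = len(positions)
    return -sum((c / n) * math.log(c / n) for c in counts if c)

def entropy_at(t):
    # Time-symmetric dynamics: x(t) = x0 + v*t, same rule for t < 0 and t > 0.
    return coarse_entropy([x + v * t for x, v in zip(xs, vs)])

s0, s_fwd, s_bwd = entropy_at(0), entropy_at(30), entropy_at(-30)
# Coarse-grained entropy rises in BOTH time directions away from the
# special low-entropy slice at t = 0.
assert s_fwd > s0 and s_bwd > s0
```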

Okay, that gets us a correct statement of the conditions under which entropy increases, but still doesn't say why entropy increases in only the futureward direction. For that, we need to identify what the positive-time futureward direction and the negative-time futureward direction have in common. For one thing, the balls become correlated. Previously (pastwardly), knowing a ball's state did not allow you to infer much about the other balls' states, as the velocities were set independently of one another. But the accumulation of collisions causes the balls to become correlated -- in effect, to share information with each other. [Rephrase to discuss elimination of gradients/exchange of information of all parts of system?...]

Note that the entropy does not need to increase uniformly: this model still permits local islands of lower entropy in the futureward direction, as long as the total entropy still increases. Consider the "wakes" left by the large balls that were mentioned above. In that case, the large balls will "plow" right through the small balls and leave a (low entropy) wake. (Even as they do this, the large balls transfer momentum to the smaller balls and increase total entropy.) The wakes allow you to identify time's direction: a wake is always located where the large ball was in an immediately pastward state. This relationship also implies that the wake contains a "record" of sorts, giving physical form, in the current time slice, to information about a pastward state.

This process is similar to what goes on in the brain. Just as wakes are islands of low entropy containing information about pastward states, so too is your brain an island of low entropy containing information about pastward states. (Life forms are already known to be dissipative systems that maintain an island of low entropy at the cost of a counterbalancing increase elsewhere.) [...]

So it's not that "gee, we notice time goes forward, and we notice that entropy happens to always increase". Rather, the increase of entropy determines what we will identify as the future, since any time slice will only contain versions of ourselves with memories of pastward states.

Hawking did this analysis in the first edition of A Brief History of Time - though he made a complete mess of it - and concluded that time will start going backwards when the universe stops expanding!

I remember back when I read this at university, I thought: Boltzmann will be turning in his grave. I also remember immodestly thinking: here's a smart, famous scientist, and even spotty teenage I could see, in about two seconds, what a ridiculous argument he was making.

When I re-read A Brief History of Time in college, I remember bemusedly noticing that Hawking's argument would be stronger if you reversed its conclusion.

A note to myself from 2009 claims that Hawking later dropped that argument. Can anyone substantiate that?

He has also edited A Brief History of Time to remove the howler. See page 64 for the updated text.

BTW, Sean Carroll just wrote an entire popular-level book on this subject.

Yes, I actually read a large portion of that book ("From Eternity to Here"?) whilst still in the bookstore. It provided great exposition of several difficult concepts, but ultimately I was unimpressed, since Carroll would frequently present a problem in thermodynamics, and I would be thinking, "Yeah, so what about the Barbour/Drescher solution to this?" and he wouldn't address it or say anything that would undermine it.

Cool. Except that one or the other of us didn't quite understand Drescher. Because my understanding was that he considered and rejected the idea that the arrow of perceived time is the same as the order of increased entropy. I thought he said that it is the inter-particle correlations that matter for subjective time - not entropy as such. But perhaps I misunderstood.

I'm glad you bring this up, I've been interested in a discussion on this.

Drescher makes extensive use of the generalized concept of a "wake": in the ball case, a wake is what lets you identify which direction is "pastward", i.e., the direction of minimal inter-particle entanglement. Any mechanism that allows such an identification can be thought of as a generalization of the "wake" that happens in the setup.

One such wake is the formation of memories (including memories in a brain), which, like the literal wake, exploit regularities of the environment to "know" the pastward direction, and (also like the wake) necessarily involve localized decrease but global increase of entropy. (edit: original was reversed)

So yes, I agree that Drescher is saying that the interparticle correlations are what determine the subjective feeling of time -- but he's also saying that the subjective feeling (memory formation) necessarily involves a local decrease of entropy and counterbalancing increase somewhere else.


Unfortunately, I'm probably not the ideal person to carry out this discussion with you. I got my copy of the book through interlibrary-loan and it is due back tomorrow. :-(

How to Select Good Textbooks for Self-Study of Unfamiliar Subjects

The cognitive processes of people doing science and engineering

There's a bunch of research about what seems to be going on in the heads of small children who are learning to read or count, and a lot of it seems to be used in attempts to make them learn better. Ask what you should expect to see happening in the heads of university students successfully learning mathematical physics, or of trained scientists doing their work, and there seems to be next to nothing. Math and science education is cognitively very demanding, yet it seems mostly uninterested in the cognitive strategies students should try to develop to master the subject material.

Human cognition at this level might be too complex to get a handle on with any reasonable amount of work, but that doesn't quite explain the sink-or-swim apathy that seems to be the common attitude towards getting students to understand advanced math.

A survey of systems theory approaches and applications

I've been meaning to look into various general theories about systems and processes, but the field seems pretty obscure and ill-defined. Category theory seems to have been popping up in relation to this since the 70s, but I don't know if this stuff has been successfully applied to modelling any real-world phenomena. The late Robin Milner was working on some sort of process formalism stuff, but what I tried to read of that was extremely formalism-heavy and very light on the motivation. Baez's Rosetta paper tries to unify physical processes and computations with a category theoretical formalism.

One basic theme seems to be looking for a formalism that deals with processes instead of static objects. Process philosophy sounds like it should be relevant.

It seems obvious that better tools for understanding complex processes would be nice, but given that systems theory has been a thing since at least the mid-20th century and seems to remain pretty obscure and confusing despite people having struggled with plenty of complex systems in between, it looks like it might not be a terribly handy or powerful tool.

How to Argue with Religious People, Conspiracy Theorists, and Other People Who Believe Crazy Things

On the opposite side, and also worthy of discussion: How NOT to Argue with Religious People, Conspiracy Theorists, and Other People Who Believe Crazy Things.

Obvious prerequisite: replace "How" with "Whether" or "When".

I'm not sure about the "__ in One Lesson" posts — I think it would be a good project to complete the sequence indexes that don't already have post summaries, but the sequences themselves are pretty information-dense; how would you condense them without losing a lot of their value?

Would they be targeted at people who have already read the full sequence and want a refresher/index, or at people who haven't read them yet, as an introduction?

It would indeed be hard to compress those sequences, and perhaps impossible for others, such as those on meta-ethics and quantum physics. But I think it could be done. Some information would have to be lost, but that is okay: it's still there in the original sequences.

The goal would be to lower the barrier of entrance to Less Wrong. Right now the entrance exam is, "Go read the sequences," which is a command to read more words than are in Lord of the Rings. That's insane. We need a better way to welcome newbies into the site.

(1) Smart Drugs: Which Ones to Use for What, and Why

Out of curiosity, would you be interested in something like ?

(Also, shouldn't you have posted each of those topics as a comment to be voted on or not?)

Oops, yes, thanks. I've commented with the titles I provided.

I suggest you leave a comment with a proposed post title for the Drug Heuristics thing you wrote, and see how many up-votes it gets!


I don't have any catchy titles for it; 'How an evolutionist takes drugs'? 'Evolution's Excellent Encyclopedia of Enhancements'? 'Nick Bostrom's Favorite Nootropics'? 'Nootropics and You and Your Ancestors'? 'Heuristics & Huperzines'? They're all so silly.

(And there are already a lot of upvotes on my first comment, so I guess I'll let it stand.)

Epistemic Rationality from First Principles to Solomonoff Induction

What topics would you like to see more of on Less Wrong?

Whoops, we already did that one recently.

Could you explain more what you want, here? 'For Dummies' books are usually on fields for which there is a lot of well-accepted knowledge, but that's not the case with FAI.

Sorry, I originally had all my requests grouped together, which perhaps made it clearer that they all were made tongue-in-cheek.

Should we be crossing off ones that have already been done?

Well, there was, but I guess that was a bit less focused than "Smart Drugs: Which Ones to Use for What, and Why" was intended to be. (When I posted the grandparent I hadn't noticed the distinction.)

No one is suggesting titles of social skills-related or general success-related posts they want to read? ("Entrepreneurship" is the exception.)