Rational Education
As a mom who can't afford private schools and is horrified by the current state of public education in my country (USA), I'm keenly interested in rational homeschool curriculum ideas--both explicitly teaching rationality itself, and also teaching specific subjects in a rational way. Teaching the skills necessary for self-education might be a third topic.
I'd like to take a stab at writing this one, actually, if no one else is dead set on it. Expect it in the discussion section within forty-eight hours.
EDIT: Status as of 11:56AM EST, Feb 9: The first draft is about 90% completed, but I need to leave it aside and run off to class. I will post it this afternoon, and then revise it (aided by your contributions!) over the remainder of the week.
EDIT EDIT: Posted as of 8:10PM EST.
Detecting And Bridging Inferential Distance For Teachers
Roughly: Generic tutoring skills for situations where no stable curriculum exists and where what the learner actually knows can be patchy or surprising.
Field Manual: What to Do If You're Stranded in a Level 1 (Base Human Equivalent) Brain in a pre-Singularity Civilization
Detecting And Bridging Inferential Distance For Learners
Roughly: How to notice when someone has more levels of expertise than you do in some area and then effectively and ethically acquire their skills/wisdom/knowledge.
"How to Learn a Language Quickly" probably needs no elaboration
That one doesn't sound bad. I'd like to read a take from a non-Ferriss source.
In short: immersion, SRS and cloze deletion. Screw textbooks, classes and any "this isn't proper material for a learner" elitism.
Learning a language takes 3000-10000 hours with the best techniques (the length depending mainly on how closely related it is to a language you already know), half that for decent basic fluency, and about 2-4 weeks of intense practice for pub-level conversations. There's no free lunch, but it can be pretty tasty.
Techniques:
1) There is no Immersion like Immersion and Khatzumoto is its prophet. (Slightly kidding, but he's my favorite advocate of the approach and fun to read. And he is absolutely right.)
2) What's cloze deletion? Anki FAQ. Why does it matter? It gives you lots of context around unknown pieces, making them stick better. Also, it's fun.
3) Anki is the best SRS, see the site for an explanation how to use it. At first, you make cards "word -> translation". Then "easy sentence -> translation". Then "easy sentence with cloze-deleted gap" -> "full sentence". Try adding more context, like surrounding sentences in a conversation, audio and so on. Always go "target language -> translation" or "...
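The cloze-deletion idea in (2) and (3) is simple enough to sketch in a few lines of Python. This is a toy card generator of my own, not Anki's implementation (Anki marks deletions with its own `{{c1::...}}` syntax); it just shows how each word of a sentence can become the gap in one "sentence with gap -> full word" card:

```python
def cloze_cards(sentence):
    """Generate one cloze-deleted card per word, as (prompt, answer) pairs."""
    words = sentence.split()
    cards = []
    for i, word in enumerate(words):
        # Blank out the i-th word; the rest of the sentence is the context
        # that (hopefully) makes the answer stick.
        prompt = " ".join(words[:i] + ["[...]"] + words[i + 1:])
        cards.append((prompt, word))
    return cards

# A short German sentence yields three cards, one per word:
for prompt, answer in cloze_cards("ich trinke Kaffee"):
    print(prompt, "->", answer)
```

The point of generating one card per word is that every card carries the full surrounding context, which is exactly what bare "word -> translation" cards lack.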
Why Cryonics, Uploading, and Destructive Teleportation Do Not Kill You
This was asked for in the IRC channel. I don't think anyone came up with a link to a good and accessible single-link refutation.
ETA: Changed the clumsy cutesy title according to the suggestion below.
ETA 2: David Chalmers' singularity paper has a reasonably good overview on the subject, but it's mixed up with a bunch of other stuff.
The Arrow of Time
Gary Drescher dissolves this old mystery in one chapter of "Good and Real". Amazing. I must have read a dozen pop science books that discuss this problem, analyze some proposed solutions, and then leave it as a mystery. Drescher crushes it.
This may not fit in one posting, but it might well fit in a sequence of four or so.
Lecture Notes on Personal Rationality
(Not "in one lesson" summaries, but self-contained treatment of the topic, incorporating material from the Sequences probably, written from scratch by another author, as a presentation appropriate for teaching a course.)
The cognitive processes of people doing science and engineering
There's a bunch of research about what seems to be going on in the heads of small children who are learning to read or count, and a lot of it also seems to be used in attempts to help them learn better. But ask what you should expect to see happening in the heads of university students successfully learning mathematical physics, or of trained scientists doing their stuff, and there seems to be next to nothing. Math and science education is cognitively very demanding, yet seems mostly uninterested in the cognitive strategies students should develop to master the subject material.
Human cognition at this level might be too complex to get a handle on with any reasonable amount of work, but that doesn't quite explain the sink-or-swim apathy that seems to be the common attitude towards getting students to understand advanced math.
I'm not sure about the "__ in One Lesson" posts — I think it would be a good project to complete the sequence indexes that don't already have post summaries, but the sequences themselves are pretty information-dense; how would you condense them without losing a lot of their value?
Would they be targeted at people who have already read the full sequence and want a refresher/index, or at people who haven't read them yet, as an introduction?
It would indeed be hard to compress those sequences - and impossible for other sequences, such as those on meta-ethics and quantum physics. But I think it could be done. Some information would have to be lost, but that is okay: it's still there in the original sequence.
The goal would be to lower the barrier of entrance to Less Wrong. Right now the entrance exam is, "Go read the sequences," which is a command to read more words than are in Lord of the Rings. That's insane. We need a better way to welcome newbies into the site.
A survey of systems theory approaches and applications
I've been meaning to look into various general theories about systems and processes, but the field seems pretty obscure and ill-defined. Category theory seems to have been popping up in relation to this since the 70s, but I don't know if this stuff has been successfully applied to modelling any real-world phenomena. The late Robin Milner was working on some sort of process formalism stuff, but what I tried to read of that was extremely formalism-heavy and very light on the motivation. Baez's Rosetta pap...
How to Argue with Religious People, Conspiracy Theorists, and Other People Who Believe Crazy Things
On the opposite side, and also worthy of discussion: How NOT to Argue with Religious People, Conspiracy Theorists, and Other People Who Believe Crazy Things.
(1) Smart Drugs: Which Ones to Use for What, and Why
Out of curiosity, would you be interested in something like http://www.gwern.net/Drug%20heuristics ?
(Also, shouldn't you have posted each of those topics as a comment to be voted on or not?)
What topics would you like to see more of on Less Wrong
Whoops, we already did that one recently.
No one is suggesting titles of social skills-related or general success-related posts they want to read? ("Entrepreneurship" is the exception.)
Believe it or not, I actually started an article on this around "17 October 2009" (per the date stamp) and never finished it. (I actually had the more ambitious idea of summarizing every chapter in one article, but figured Chapter 3 would be enough.) Might as well post what I have (formatting and links don't carry over; I've corrected the worst issues) ...
Here I attempt to summarize the points laid out in Gary Drescher's Good and Real: Demystifying Paradoxes from Physics to Ethics (discussed previously on Less Wrong), chapter 3, which explores the apparent flow of time and gives a reductionist account of it. To [...] What follows is a restating of the essential points and the arguments behind them in my own words, which I hope to make faithful to the text. It's long, but a lot shorter than reading the chapter, a lot cheaper than buying the book, and a lot less subjunctively self-defeating than pirating it.
The focus of the chapter is to solve three interrelated paradoxes: If the laws of physics are time-symmetric:
1) Why does entropy increase in only one direction?
2) Why do we perceive a directional flow of time?
3) Why do we remember the past but not the future?
Starting from the first: why does entropy -- the total disorder in the universe -- increase asymmetrically? To answer, start with a simple case: the billiard ball simulation, where balls have a velocity and position and elastically bounce off each other as per the standard equations predicated on the (time-symmetric) conservation of linear momentum. For a good example of entropy's increase, let's initialize it with a non-uniformity: there will be a few large, fast balls, and many small, slow balls.
What happens? Well, as time goes by, they bounce off each other, and the larger balls transfer their momentum to balls with less. We see the standard increase in entropy as time increases. So if you were to watch a video of the simulation in action, there would be telltale signs of which is the positive and which is the negative direction: in the positive direction, large balls would plow through groups of smaller balls, leaving a "wake" and increasing the smaller balls' speeds as they pass. But if we watch it in reverse, going back to the start, entropy, of course, decreases: highly-ordered wakes spontaneously form before the large balls go into them.
Hence, the asymmetry: entropy increases in only one direction.
The mystery dissolves when you consider what happens when you continue to view the simulation backwards, and proceed through the initial time, onward to t= -1, -2, -3, ... . You see the exact same thing happen going in the direction of negative time from t=0. So, we see our confusion: entropy does not increase in just the positive direction: it increases as you move away from zero, even if that direction isn't positive.
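This "entropy increases away from t=0, in both directions" point can be checked numerically. Here is a minimal Python sketch of my own -- not Drescher's billiard model, since it uses collisionless free motion on a ring, which is already time-symmetric and already enough to show the symmetry; all names and parameters are made up for illustration. Particles start clustered at t=0 with random velocities, and we measure a coarse positional entropy at t=0, t=+5, and t=-5:

```python
import math
import random

def shannon_entropy(positions, n_bins=20, box=1.0):
    """Coarse-grained entropy: Shannon entropy of binned positions."""
    counts = [0] * n_bins
    for x in positions:
        counts[int((x % box) / box * n_bins) % n_bins] += 1
    n = len(positions)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def positions_at(t, x0, v, box=1.0):
    """Free, time-symmetric motion on a ring; works for negative t too."""
    return [(x + vi * t) % box for x, vi in zip(x0, v)]

random.seed(0)
N = 500
x0 = [0.5 + random.uniform(-0.01, 0.01) for _ in range(N)]  # clustered at t=0
v = [random.gauss(0, 1) for _ in range(N)]                  # random velocities

H0 = shannon_entropy(positions_at(0, x0, v))      # low: everyone in one spot
H_fwd = shannon_entropy(positions_at(+5, x0, v))  # high: spread out
H_back = shannon_entropy(positions_at(-5, x0, v)) # also high: spread out

print(f"H(t=0) = {H0:.2f}, H(t=+5) = {H_fwd:.2f}, H(t=-5) = {H_back:.2f}")
```

Running it, the entropy at t=+5 and t=-5 both come out well above the entropy at t=0: the dynamics are perfectly time-symmetric, yet disorder grows whichever way you step away from the low-entropy initial condition.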
So, we need to reframe our understanding: instead of thinking in terms of positive and negative time directions, we should think in terms of "pastward" and "futureward" directions. Pastward means in the direction of the initial state, and futureward means away from it. Both the sequences t= 1, 2, 3, ... and t= -1, -2, -3, ... go into the future. (Note the parallel here to the reframing of "up" and "down" once your model of the earth goes from flat to round: "down" no longer means a specific vector, but the vector from where you are to the center of the earth. So you change your model of "down" and "up" to "centerward" and "anticenterward" [my terms, not Drescher's], respectively.)
Okay, that gets us a correct statement of the conditions under which entropy increases, but still doesn't say why entropy increases in only the futureward direction. For that, we need to identify what the positive-time futureward direction and the negative-time futureward direction have in common. For one thing, the balls become correlated. Previously (pastwardly), knowing a ball's state did not allow you to infer much about the other balls' states, as the velocities were set independently of one another. But the accumulation of collisions causes the balls to become correlated -- in effect, to share information with each other. [Rephrase to discuss elimination of gradients/exchange of information of all parts of system?...]
Note that the entropy does not need to increase uniformly: this model still permits local islands of lower entropy in the futureward direction, as long as the total entropy still increases. Consider the "wakes" left by the large balls that were mentioned above. In that case, the large balls will "plow" right through the small balls and leave a (low entropy) wake. (Even as they do this, the large balls transfer momentum to the smaller balls and increase total entropy.) The wakes allow you to identify time's direction: a wake is always located where the large ball was in an immediately pastward state. This relationship also implies that the wake contains a "record" of sorts: information about a pastward state, given physical form in the current timewise state.
This process is similar to what goes on in the brain. Just as wakes are islands of low entropy containing information about pastward states, so too is your brain an island of low entropy containing information about pastward states. (Life forms are already known to be dissipative systems that maintain an island of low entropy at the cost of a counterbalancing increase elsewhere.) [...]
So it's not that "gee, we notice time goes forward, and we notice that entropy happens to always increase". Rather, the increase of entropy determines what we will identify as the future, since any time slice will only contain versions of ourselves with memories of pastward states.
BTW, Sean Carroll just wrote an entire popular-level book on this subject.
Less Wrong is a large community of very smart people with a wide spectrum of expertise, and I think relatively little of that value has been tapped.
Like my post The Best Textbooks on Every Subject, this is meant to be a community-driven post. The first goal is to identify topics the Less Wrong community would like to read more about. The second goal is to encourage Less Wrongers to write on those topics. (Respecting, of course, the implicit and fuzzy guidelines for what should be posted to Less Wrong.)
One problem is that those with expertise on a subject don't necessarily feel competent to write a front-page post on it. If that's the case, please comment here explaining that you might be able to write one of the requested posts, but you'd like a writing collaborator. We'll try to find you one.
Rules
You may either:
or...
I will regularly update the list of suggested Less Wrong posts, ranking them in descending order of votes (like this).
The List So Far (updated 02/11/11)