This post tests how much exposure comments posted "late" to open threads get. If you are reading this, please either comment or upvote; don't do both, and don't downvote. When the next open thread comes, I'll post another test comment as soon as possible with the same instructions, then compare the scores.
If the difference is insignificant, a LW forum is not warranted, and open threads are entirely sufficient.
PS: If you don't see a test comment in the next open thread (e.g. I've gone missing), please do post one in my stead. Thank you.
Edit: Remember that if you don't think I deserve the karma, but still don't want to comment, you can upvote this comment and downvote any one or more of my other comments.
I apologize if this is blunt or already addressed, but it seems to me that the voting system here has a large user-based problem: the karma system has become nothing more than a popularity indicator.
Many here seem to vote up or down based on some gut-level agreement or disagreement with the comment or post. For example, it is very troubling that some single-line comments of agreement, which in my opinion should have 0 karma, end up with massive amounts, while comments that oppose the popular beliefs here are voted ...
So, I'm reading A Fire Upon The Deep. It features books that instruct you how to speedrun your technological progress all the way from sticks and stones to interstellar space flight. Does anything like that exist in reality? If not, it's high time we start a project to make one.
Edit (10 October 2009): This is encouraging.
What's the best way to follow the new comments on a thread you've already read through? How do you keep up with which ones are new? It'd be nice if there were a non-threaded view. RSS feed?
One of the old standard topics of OB was cryonics: why it's great even though it's incredibly speculative & relatively expensive, and how we're all fools for not signing up. (I jest, but still.)
Why is there so much less interest in things like caloric restriction? Or even better, intermittent fasting, which doesn't even require cuts in calories? If we're at all optimistic about the Singularity or cryonics-revival-level technology being reached by 2100, then aren't those way superior options? They deliver concrete benefits now, for a price that can'...
Eliezer Yudkowsky and Andrew Gelman on Bloggingheads: Percontations: The Nature of Probability
I haven't watched it yet, but the set-up suggests it could focus a discussion, so it should probably be given a top-level post.
A link you might find interesting:
The Neural Correlates of Religious and Nonreligious Belief
Summary:
Religious thinking is more associated with brain regions that govern emotion, self-representation, and cognitive conflict, while thinking about ordinary facts is more reliant upon memory retrieval networks, scientists at UCLA and other universities have found. They used fMRI to measure signal changes in the brains of committed Christians and nonbelievers as they evaluated the truth and falsity of religious and nonreligious propositions. For both groups, beli...
I plan to develop this into a top level post, and it expands on my ideas in this comment, this comment, and the end of this comment. I'm interested in what LWers have to say about it.
Basically, I think the concept of intelligence is somewhere between a category error and a fallacy of compression. For example, Marcus Hutter's AIXI purports to identify the inferences a maximally intelligent being would make, yet it (and its efficient approximations) has no practical application. The reason (I think) is that it works by finding the shortest hypothesis th...
For you non-techies who'd like to be titillated, here's a second bleg about some very speculative and fringey ideas I've been pondering:
What do you think the connection between motivation & sex/masturbation is?
Here's my thought: it's something of a mystery to me why homosexuals seem to be so well represented among the eminent geniuses of Europe & America. The suggestion I like best is that they're not intrinsically more creative thanks to 'female genes' or whatever, but that they can't/won't participate in the usual mating rat-race and so in a Fre...
I have something of a technical question: on my personal wiki, I've written a few essays which might be of interest to LWers. They're in Markdown, so you would think I could just copy them straight into a post, but, AFAIK, you have to write posts in that WYSIWYG editor thing. Is there any way around that? (EDIT: Turns out there's an HTML input box, so I can write locally, compile with Pandoc, and insert the results.)
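For anyone who wants to script that workflow, here is a minimal sketch of the local-conversion step. It assumes Pandoc is installed and uses a hypothetical filename; the point is just to convert Markdown to an HTML fragment you can paste into the HTML input box.

```python
import subprocess

def markdown_to_html(path):
    """Convert a Markdown file to an HTML fragment with Pandoc.

    The output is plain HTML suitable for pasting into the post
    editor's HTML input box.
    """
    result = subprocess.run(
        ["pandoc", "--from", "markdown", "--to", "html", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hypothetical usage:
# print(markdown_to_html("my_essay.md"))
```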
The articles, in no particular order:
...This is just a comment I can edit to let people elsewhere on the Net know that I am the real Eliezer Yudkowsky.
10/30/09: Ari N. Schulman: You are not being hoaxed.
Dual n-back is a game that's supposed to increase your IQ by up to 40%. http://en.wikipedia.org/wiki/Dual_n_back#Dual_n-back
Some think the effect is temporary; long-term studies are underway. Still, I wouldn't mind having to practice periodically. I've been at it for a few days, and might retry the Mensa test in a while. (I washed out at 113 a few years ago.) Download link: http://brainworkshop.sourceforge.net/
It seems to make sense. Instead of getting a faster CPU, a cheap and easy fix is to get more RAM. In a brain analogy, I've often thought of the "magic number ...
Eliezer and Robin argue passionately for cryonics. Whatever you might think of the chances of some future civilization having the technical ability, the wealth, and the desire to revive each of us -- and how that compares to the current cost of signing up -- one thing that needs to be considered is whether your head will actually make it to that future time.
Ted Williams seems to be having a tough time of it.
Henry Markram's recent TED talk on cortical column simulation. Features philosophical drivel of appalling incoherence.
We need a snappy name like "analysis paralysis" for people who spend all their time studying rather than doing. They (we) intend to do, but never feel like they know enough to start.
I came up with the following while pondering the various probability puzzles of recent weeks, and I found it clarified some of my confusion about the issues, so I thought I'd post it here to see if anyone else liked it:
Consider an experiment in which we toss a coin to choose whether a person is placed into a one-room hotel or duplicated and placed into a two-room hotel. For each resulting instance of the person, we repeat the procedure, and so forth. The graph of this would be a tree in which the persons were edges and the hotels nodes. Ea...
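In case a concrete picture helps, here is a minimal simulation sketch of that branching process. The fair coin and the one-room/two-room duplication are as described above; everything else (seed, round count, function name) is an arbitrary choice for illustration.

```python
import random

def simulate(rounds, seed=0):
    """Run the coin-toss/duplication experiment for a fixed number of rounds.

    Each current instance of the person tosses the coin: heads puts that
    instance alone into a one-room hotel (one outgoing edge from the node),
    tails duplicates it into a two-room hotel (two outgoing edges).
    Returns (hotels visited, instances alive at the end), i.e. rough counts
    of nodes and of leaf edges in the tree.
    """
    rng = random.Random(seed)
    persons, hotels = 1, 0
    for _ in range(rounds):
        next_persons = 0
        for _ in range(persons):
            hotels += 1
            next_persons += 1 if rng.random() < 0.5 else 2
        persons = next_persons
    return hotels, persons

# Hypothetical usage:
# print(simulate(10))
```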
I recently realized that I don't remember seeing any LW posts questioning if it's ever rational to give up on getting better at rationality, or at least on one aspect of rationality that a person is just having too much trouble with.
There have been posts questioning the value of x-rationality, and posts examining the possibility of deliberately being irrational, but I don't remember seeing any posts examining if it's ever best to just give up and stop trying to learn a particular skill of rationality.
For example, someone who is extremely risk-averse, and e...
I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day-to-day basis.
If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?
Ostensibly, the goal is to make the correct decision. Yet there is nearly no discussion of what constitutes a correct decision. I see lots of contributors talking about calculating utilons, which demonstrates that most contributors are hedonistic consequentialist utilitarians....
So, there's this set, called W. The non-emptiness of W would imply that many significant and falsifiable conjectures, which we have not yet falsified, are false. What's the probability that W is empty?
(Yep, it's a bead jar guess. Show me your priors. I will not offer clarification unless I find that there's something I meant to be clearer about but wasn't.)
Movie: Cloudy with a Chance of Meatballs - I took the kids to see that this weekend, and it struck me as a fun illustration of the UnFriendly AI problem.
The Other Presumptuous Philosopher:
It begins pretty much as described here:
...It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry conside
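For reference, the punchline of the standard version of this thought experiment (which the excerpt cuts off) is that, under the Self-Indication Assumption and equal priors, the posterior odds simply track the observer counts, so T2 wins by about a trillion to one. A quick sketch of that arithmetic, using the observer counts quoted above:

$$
\frac{P(T_2 \mid \text{I exist})}{P(T_1 \mid \text{I exist})}
= \frac{P(T_2)}{P(T_1)} \cdot \frac{N_{T_2}}{N_{T_1}}
= 1 \cdot \frac{10^{36}}{10^{24}}
= 10^{12}.
$$

That factor of 10^12 is what makes the philosopher "presumptuous" in the original telling.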
Bug alert: this comment has many children, but doesn't currently have a "view children" link when viewing this entire thread.
I've only been reading Open Threads recently, so forgive me if it's been discussed before.
A band called The Protomen just recently came out with their second rock opera of a planned trilogy of rock operas based on (and we're talking based on) the Megaman video game. The first is The Protomen: Hope Rides Alone; the second is Act II: The Father of Death.
The first album tells the story of a people who have given up and focuses on the idea of heroism. The second album is more about creation of the robots and the moral struggles that occur. I suggest you start with: The Good Doctor http://www.youtube.com/watch?v=HP2NePWJ2pQ
Mini heuristic that seems useful but not big enough for a post.
To combat ingroup bias: before deciding which experts to believe, first mentally sort the list of experts by topical qualifications. Allow autodidact skills to count if they have been recognized by peers (publication, citation, collaboration, etc.).
My thought of the day: An 'Infinite Improbability Drive' is slightly less implausible than a faster-than-light engine.
Is there a complete guide anywhere to comment/post formatting? If so, it should probably be linked on the "About" page or something. I can't figure out how to do HTML entities; is that possible?
I would like to throw out some suggested reading: John Barnes's Thousand Cultures and Meme Wars series. The former deals with the social consequences of smarter-than-human AI, uploading, and what sorts of pills we ought to want to take. The latter deals with nonhuman, non-friendly FOOMs. Both are very good, smart science fiction quite apart from having themes often discussed here.
I'll make my more wrong confession here in this thread: I'm a multiple worlds skeptic. Or at least I'm deeply skeptical of Egan's law. I won't pretend I'm arguing from any sort of deep QM understanding. I just mean in my sci-fi, what-if, thinking about what the implications would be. I truly believe there would be more wacky outcomes in an MWI setting than we see. And I don't mean violations of physical laws; I'm hung up on having to give up the idea of cause and effect in psychology. In MWI, I don't see how it's possible to think there would be cause and ...
Hear ye, hear ye: commence the discussion of things which have not been discussed.
As usual, if a discussion gets particularly good, spin it off into a posting.
(For this Open Thread, I'm going to try something new: priming the pump with a few things I'd like to see discussed.)
You write fiction, yes? Have you ever studied creative writing, taken a class, read a book on creative writing?
Yes. No. No. No.
Have you ever had an English class with a skilled and passionate teacher that involved analysis of texts that you gained more and more appreciation for after really careful reading and study?
Hell no. I have a completely unbroken track record of hating every single book that I have ever read for the first time as a class assignment, and have never found that a book I already liked was improved by this kind of dissection.
Do you feel that the process of becoming a better writer and/or learning to analyze fiction has increased your appreciation and enjoyment of fiction?
Not one bit! I have mostly become a better writer by learning related skills (I was allowed to make up my own second major in undergrad, and therefore literally have a degree in worldbuilding), practicing, and emulating the good parts of what I read. I now have to turn off my critical faculties entirely to enjoy any works of fiction at all, even those that are overall very good, because detecting small flaws in their settings, characterization, handling of social issues, dialogue, use of artistic license, etc. will throw off my ability to not fling the book at a wall. Works that aren't overall good turn on said critical faculty in spite of my best efforts. I can barely have a conversation about a work of fiction anymore without starting to hate it unless I'm just having a completely content-free squee session with an equally enthusiastic friend!
Most people find that going through those sorts of processes results in much greater enjoyment and appreciation, and they are also able to enjoy fiction that they formerly would have found boring.
I guess I'm a mutant?
Expecting to either just "like it" or "find it boring" and thinking of it as being just another genre like rock or pop is like approaching Dostoevsky with the same background/expectations/skills/patience as you would a Tom Clancy novel.
Although I have never read an entire Dostoevsky novel (my reading list is enormous and I haven't gotten around to it), I have really liked the excerpts I've read - immediately, without having to work for it. This is why I plan to read more of his stuff when I get around to it. I've never tried any Tom Clancy. Is he worth reading?
Some things require considerable experience and skill before it is possible to have an informed judgment about them: the literature classics, for example, and classical music.
Maybe this is just my idiosyncrasy, but I think making the reader work hard when this isn't absolutely necessary - in fiction, nonfiction, or anything else - is a failure of clarity, not a masterstroke of subtlety. This isn't to say that you can't still have a good work that makes the reader do some digging to find all the content, but that's true of any flaw - you can also have a good work with a kinda stupid premise, or with a cardboard secondary character, or that completely omits female characters for no good reason, or has any of a myriad of bad but not absolutely damning awfulnesses.
Hell no. I have a completely unbroken track record of hating every single book that I have ever read for the first time as a class assignment, and have never found that a book I already liked was improved by this kind of dissection.
Maybe I'm the mutant. I know that your reaction is very common, but I attribute it to bad teaching and/or to students being forced against their will to do something that they are therefore very likely to hate. When I have been in classes with smart, passionate teachers, and the students were there becau...