All of Fadeway's Comments + Replies

Interesting new Pew Research study on American opinions about radical life extension

I did not expect this. And it seems weird, since young people are also more optimistic about their futures. And more likely to want to undergo radical life extension. Plus they haven't suffered the effects of aging (having many loved ones die, illness and pain, etc.).

Didn't predictions for the Singularity follow a similar trend? Older people predicting 30-40 years until the event, and younger predictors being more pessimistic because they're likely to still be alive even if it happens in 60 years?

8 · gwern · 8y: Not really: http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/
Seed Study: Polyphasic Sleep in Ten Steps

Those people may have a better chance of succeeding.

2 · Risto_Saarelma · 9y: Puredoxyk, who originated the Uberman idea, was suffering from various sleep disorders [http://www.puredoxyk.com/index.php/2006/07/24/about-polyphasic-sleep/] when she developed the technique. I always did wonder why more people with serious insomnia don't try polyphasic sleep. It's a lot nicer to lie in bed for 20 minutes not falling asleep than it is to lie in bed for 120 minutes not falling asleep.
Seed Study: Polyphasic Sleep in Ten Steps

I've failed Uberman twice myself. You have pretty much an optimal plan, except for the naptation.

"Cut your naps down to 6 as quickly as you can without it hurting too much".

From my own knowledge, which may or may not be trustworthy, naptation doesn't need to be ended prematurely - the whole point is to get a huge number of naps in a short timeframe in order to learn to get REM in a 24-minute interval (which dreaming is a sign of). Getting a few more will just decrease your REM dep. The way I would do it is, get 12 naps a day until you find your... (read more)

10 · [anonymous] · 9y:

Another thing that happened when I tried this was that no alarm could faze me. Every alarm I tried, including one that required typing my computer password, I would figure out how to turn off in my sleep. I'm sure I could have kept escalating to solving NP-complete problems before it stopped, but I gave up soon afterward. I pretty much woke up exclusively from others physically waking me. I even answered the phone while asleep once; no idea what I said.

Noticing the 5-second mindkill

I discovered this issue for myself by reading a similar article, and going through the same process, but with my third thought being "does that guy [the Prime Minister in this story] really believe this thing that I believe [in this case, pro-choice]?" I think he's bad because he broke the rules, then I forgive him because he's on my side, then for one reason or another I start to wonder if he really is on my side...and notice that I'm trying to decide whether to blame him for breaking the rules or not. (I think this is because I myself use irony... (read more)

Solved Problems Repository

Google never fails. The chart shall not allow it.

Schelling Day: A Rationalist Holiday

Sounds like a fun ritual. Makes me wish I were in Boston so I could attend.

Soylent Orange - Whole food open source soylent

I've doubted his process from the start - I remember reading a third person's comment that pointed out he had forgotten to add iron - and his subsequent reply that this mistake was the cause of his feeling bad. I know nothing about nutrition (except that it's not a very good science, if it's science at all), yet iron is obvious even to me. To miss it shows that he didn't really do much double checking, much less cross-referencing or careful deliberation of the ingredient list.

I'm really hopeful about Soylent - I'd even jump in and risk poisoning to test it... (read more)

Recent updates to gwern.net (2012-2013)

I've read a significant amount of your essays/articles and love the stuff. It's kinda hard to track new stuff since the RSS feed tends to dump dozens of small changes all at once, so this post is much appreciated.

0 · gwern · 8y: To help solve this problem for people, I've been posting monthly updates at http://gwern.net/Changelog and sending out newsletters (signup [http://eepurl.com/Kc155]; issues: Dec 2013 [http://eepurl.com/LG6OT] / Jan 2014 [http://eepurl.com/NzHiD] / Feb 2014 [http://eepurl.com/PudcL]). Does this work for you?
1 · Tenoke · 9y: A better (maybe separate?) RSS feed that doesn't do that would be a huge improvement for my experience of the site as well.
Recent updates to gwern.net (2012-2013)

Is it useful to increase reading speed, even if it takes a minimal amount of time (to go from basic level to some rudimentary form of training)? I've always been under the impression that speed increases in reading are paid for with a comprehension decrease - which is what we actually care about. Or is this only true for the upper speed levels?

0 · moridinamael · 9y: In the form of speed-reading in which I was trained, you write a one-sentence summary of each paragraph as you're reading, and after you read a chapter or section, you review each of your one-sentence summaries. In theory this allows you to "process" things like textbooks into knowledge stored in your brain very quickly. In practice, speed reading only works for me if the material doesn't contain any concepts that I don't already understand. I find it very useful when I need to get the gist of a paper to decide whether I want to actually read it in detail.
1 · gwern · 9y: I think it is. As I mention in my footnote, it's been a long time since I was reading up on the topic and I don't have any notes, but I recall the gist being that it's at the upper levels that you forfeit comprehension and that for lower speeds like <400wpm on nontechnical material you may even get better comprehension.
Don't Get Offended

What was the name of that rule where you commit yourself to not getting offended?

I've always practiced it, though not always as perfectly as I've wanted (when I do slip up, it's never during an argument though; my stoicism muscle is fully alert at those points in time). An annoying aspect of it is when other people get offended - my emotions are my own problem, why won't they deal with theirs; do I have to play babysitter with their thought process? You can't force someone to become a stoic, but you can probably convince them that their reaction is hurting them and show them that it's desirable for them to ignore offense. To that end, I'm thankful for this post, upvoted.

3 · Nornagest · 9y: Sounds like you're thinking of Crocker's rules [http://wiki.lesswrong.com/wiki/Crocker's_rules], although there's a bit more to it than that.
9 · katydee · 9y: Crocker's Rules [http://wiki.lesswrong.com/wiki/Crocker's_rules]
"What-the-hell" Cognitive Failure Mode: a Separate Bias or a Combination of Other Biases?

I agree: you can get over some slip-ups, depending on how easy the thing you're attempting is relative to your motivation.

As you said, it's a chain - the more you succeed, the easier it gets. Every failure, on the other hand, makes it harder. Depending on the difficulty of what you're trying, a hard reset is sensible because it saves time from an already doomed attempt, *and* makes the next one easier (due to the deterrent thing).

Eliezer Yudkowsky Facts

I disagree. This entire thread is so obviously a joke, one could only take it as evidence if they've already decided what they want to believe and are just looking for arguments.

It does show that EY is a popular figure around here, since nobody goes around starting Chuck Norris threads about random people, but that's hardly evidence for a cult. Hell, in the case of Norris himself, it's the opposite.

8 · IlyaShpitser · 9y: http://www.overcomingbias.com/2011/01/how-good-are-laughs.html http://www.overcomingbias.com/2010/07/laughter.html I find these "jokes" pretty creepy myself. The fact about Chuck Norris is that he's a washed-up actor selling exercise equipment. I think Chuck Norris jokes/stories are a modern internet version of Paul Bunyan stories in American folklore or bogatyr stories in Russian folklore. There is danger here -- I don't think these stories are about humor.
"What-the-hell" Cognitive Failure Mode: a Separate Bias or a Combination of Other Biases?

If you want to get up early, and oversleep once, chances are, you'll keep your schedule for a few days, then oversleep again, ad infinitum. Better to mark that first oversleep as a big failure, take a break for a few days, and restart the attempt.

Small failures always becoming huge ones also helps as a deterrent - if you know that that single cookie that bends your diet will end up with you eating the whole jar and canceling the diet altogether, you will be much more likely to avoid even small deviations like the plague, next time.

1 · handoflixue · 9y: It seems to scale to willpower: For some people, "a single small failure once per month" is going to be an impossible goal, but "multiple small failures OR one big failure" is an option. If and only if one is dealing with THAT choice, it seems like a single big failure does a lot less damage to motivation. If you've got different anecdotes then I think we'll just have to agree to disagree. If you've got studies saying I'm wrong, I'm happy to accept that I'm wrong - I know it worked, since I used this to help fix my spouse's sleep cycle, but that doesn't mean it worked for the reasons I think. :)
Why Politics are Important to Less Wrong...

God. Either with or without the ability to bend the currently known laws of physics.

2 · Rukifellth · 9y: No, really.
Strongmanning Pascal's Mugging

This was my argument when I first encountered the problem in the Sequences. I didn't post it here because I haven't yet figured out what this post is about (gotta sit down and concentrate on the notation and the message of the author and I haven't done that yet), but my first thoughts when I read Eliezer claiming that it's a hard problem were that as the number of potential victims increases, the chance of the claim being actually true decreases (until it reaches a hard limit which equals the chance of the claimant having a machine that can produce infinit... (read more)
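The intuition can be put in toy numbers. As a minimal sketch, assume the prior credibility of the mugger's claim falls off as 1/n² in the number of claimed victims (the 1/n² penalty is an arbitrary illustrative assumption, not a defense of any particular complexity prior):

```python
# Toy model: if the prior probability of a mugger's claim falls faster
# than the claimed payoff grows, the expected utility of paying up
# *shrinks* as the claim gets more grandiose.

def claim_probability(n_victims: int) -> float:
    """Assumed prior: credibility decays as 1 / n^2 (illustrative only)."""
    return 1.0 / n_victims ** 2

def expected_utility(n_victims: int) -> float:
    """Claimed payoff (n victims saved) weighted by its assumed prior."""
    return n_victims * claim_probability(n_victims)

# A bigger claim yields a proportionally *smaller* expectation here:
assert expected_utility(10_000) < expected_utility(100) < expected_utility(10)
```

Whether real-world credibility actually decays that fast is exactly the point under dispute; the sketch only shows that if it does, the mugging dissolves.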

Think Like a Supervillain

The point is that a superhero can't take preemptive action. The author can invent a situation where a raid is possible, but for the most part, Superman must destroy the nuke after it has been launched - preemptively destroying the launch pad instead would look like an act of aggression from the hero. And going and killing the general before he orders the strike is absolutely out of the question. This is fine for a superhero, but most of us can't stop nukes in-flight.

A dictatorship is different because aggression from the villain is everywhere anyway - and ... (read more)

9 · Elithrion · 9y: It really depends on what we mean by "superhero". If we stick to the archetypal Western examples, you're probably right. But things become less clear if we consider something like Watchmen, V for Vendetta (V is pretty super), or the many gray-area types Marvel and DC have (I'm leaving that vague because I'm not too familiar with the canon), not to mention various manga and anime heroes (Lelouch?). But maybe we wouldn't call them superheroes precisely because they don't fit the "only act in response to clear, certain evil" mold. Mostly, I think this points to the fact that, to no one's surprise, {Supervillain, Superhero} is not a comprehensive summary of thinking styles, nor are they sharply defined categories.
[SEQ RERUN] Three Worlds Decide (5/8)

I can definitely agree with 5, and to some extent with 3. With 4, it didn't seem to me when I read this months ago that the Superhappies would be willing to wait; it works as a part of 3 (get a competent committee together to discuss after stasis has bought time), but not by itself.

I found it interesting on my first reading that the Superhappies are modeled as a desirable future state, though I never formulated a comprehensive explanation for why Eliezer might have chosen to do that. Probably to avoid overdosing the Lovecraft. It definitely softens the blo... (read more)

0 · Rukifellth · 9y: Heh, in retrospect, I think I'd make a terrible space diplomat. My alternate solutions involved self-mutilation aboard the bridge of the Impossible Possible World to demonstrate the temporary nature of bodily pain, and an appeal to the idea that the pain threshold experienced by children isn't actually high enough to make invasion of the human starline worth going through.
Three Axes of Prohibitions

What do you mean, specifically? "Having fun" aside, being emotional about a game is socially harmful/uncool in the same way a precommitment can be.

0 · FeepingCreature · 9y: I'm not sure how to formalize this, but I think you can be emotional in a playful way that sort of transparently emulates real emotional responses. I haven't played many social board games so I can't speak from experience, it's just that imagining the situation my first response is "transparently act betrayed and insulted, so people understand both that you don't mean it seriously and why you don't help them".
What are your rules of thumb?

-Hanlon's razor - I always start from the assumption that people seek the happiness of others once their own basic needs are met, then go from there. Helps me avoid the "rich people/fanatics/foreigners/etc are trying to kill us all [because they're purely evil and nonhuman]" conspiracies.

-"What would happen if I apply x a huge amount of times?" - taking things to the absurd level helps expose the trend and is one of my favourite heuristics. Yes, it ignores the middle of the function, but more often than not, the value at x->infinity is all that matters. And when it isn't, the middle tends to be obvious anyway.
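The heuristic can be sketched with a made-up function: one that looks uniformly bad over its whole visible "middle" yet is dominated by its behaviour at large x (the function itself is an arbitrary illustrative choice):

```python
# Toy illustration of the "apply x a huge number of times" heuristic:
# f is negative everywhere in its visible middle, but the limiting
# trend as x grows is what the heuristic cares about.

def f(x: float) -> float:
    return x ** 2 - 1000 * x  # negative for all 0 < x < 1000

# The middle looks uniformly discouraging...
assert all(f(x) < 0 for x in range(1, 1000))

# ...but the trend at large x tells a different story.
assert f(10 ** 6) > 0
```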

The Virtue of Compartmentalization

When you mentioned compartmentalization, I thought of compartmentalization of beliefs and the failure to decompartmentalize - which I consider a rationalistic sin, not a virtue.

Maybe rename this to something about remembering the end goal, or something about abstraction levels, or keeping the potential application in mind; for example "the virtue of determinism"?

2 · b1shop · 9y: To contrast my intentions, the linked post is about compartmentalizing map-making from non-map-making while mine is compartmentalizing different maps. Your association is a good data point, so I'll think about a better name. Perhaps the virtue of focus, abstraction or sequestration? Nothing's jumping out at me right now.
A Little Puzzle about Termination

Doesn't this machine have a set of ways to generate negative utility (It might feel unpleasant when using up resources for example, as a way to prevent a scenario where the goal of 32 paperclips becomes impossible)? With fewer and fewer ways to generate utility as the diminishing returns pile on, the machine will either have to terminate itself (to avoid a life of suffering), or seek to counter the negative generators (if suicide=massive utility penalty).

If there's only one way to generate utility and no way to lose it however, that's going to lead to the behavior of an addicted wirehead.
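A minimal sketch of that intuition, with entirely invented utility numbers: under diminishing returns plus a per-step resource cost, the agent eventually prefers to halt, unless halting itself carries a penalty large enough to keep it grinding on.

```python
# Hedged toy model of the comment above; all numbers are made up.
# Each step of operation costs resources (negative utility), while
# the marginal gain from one more step of paperclip work shrinks.

STEP_COST = 0.5  # assumed discomfort per step of resource use

def marginal_gain(step: int) -> float:
    """Diminishing returns: each extra step yields less utility."""
    return 10.0 / (step + 1)

def should_continue(step: int, halt_penalty: float = 0.0) -> bool:
    """Keep going only if the next step beats halting right now."""
    return marginal_gain(step) - STEP_COST > -halt_penalty

assert should_continue(5)                       # early steps still pay off
assert not should_continue(25)                  # past break-even, halting wins
assert should_continue(25, halt_penalty=100.0)  # a big suicide penalty forces it on
```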

My simple hack for increased alertness and improved cognitive functioning: very bright light

At night, f.lux is usually great - except when waking up or doing polyphasic sleep (where you actually treat night and day as the same thing). I discovered the program a week after I started Uberman, and shortly after installing it, I started having trouble staying up during the early morning hours between 3am and 7am, where previously I had no issue at all. I am no longer doing polyphasic, so it's awesome - I never get blinded by my monitor, etc. I only wish I could make it use the daylight setting if I turn the PC on at night - so it helps me wake up. As it stands, I get two hours of "you should be in bed" lighting before it finally gives up on sending me for a nap.

2 · Pablo · 9y: Use the 'disable for an hour' option, and repeat as needed.
Open Thread, January 1-15, 2013

From rereading the article, which I swear I stumbled upon recently, I took away that I shouldn't take too long to decide after I've written my list, lest I spend the extra time conjuring extra points and rationalizations to match my bias.

As for the meat of the post, I don't think it applies as much due to the importance of the decision. I could go out and gather more information, but I believe I have enough, and now it's just a matter of weighing all the factors; for which purpose, I think, some agonizing and bias removal is worth the pain.

Hopefully I can ... (read more)

Open Thread, January 1-15, 2013

I have an important choice to make in a few months (about what type of education to pursue). I have changed my mind once already, and after hearing a presentation where the presenter clearly favored my old choice, I'm about to revert my decision - in fact, introspection tells me that my decision was already changed at some point during the presentation. In regards to my original change of mind, I may also have been affected by the friend who gave me the idea.

All of this worries me, and I've started making a list of everything I know as far as pros/cons go ... (read more)

6 · Vladimir_Nesov · 9y: Harder Choices Matter Less [http://lesswrong.com/lw/th/harder_choices_matter_less/]. Unless you expect that there is a way of improving your understanding of the problem at a reasonable cost (such as discussing the actual object level problem), the choice is now less important, specifically because of the difficulty in choosing.
Morality is Awesome

For the preference ranking, I guess I can mathematically express it by saying that any priority change leads to me doing stuff that would be utility+ at the time, but utility- or utilityNeutral (and since I could be spending the time generating utility+ instead, even neutral is bad) now. For example, if I could change my utility function to eating babies, and babies were plentiful, this option would result in a huge source of utility+ after the change. Which doesn't change the fact that it also means I'd eat a ton of babies, which makes the option a huge s... (read more)

Morality is Awesome

I attach negative utility to getting my utility function changed - I wouldn't change myself to maximize paperclips. I also attach negative utility to getting my memory modified - I don't like the normal decay that is happening even now, but far worse is getting a large swath of my memory wiped. I also dislike being fed false information, but that is by far the least negative of the three, provided no negative consequences arise from the false belief. Hence, I'd prefer being fed false information to having my memory modified to being made to stop cari... (read more)

0 · TheOtherDave · 9y: Fair enough. If you have any insight into why your preferences rank in this way, I'd be interested, but I accept that they are what they are. However, I'm now confused about your claim. Are you saying that we ought to treat other people in accordance with your preferences of how to be treated (e.g., lied to in the present rather than having their values changed or their memories altered)? Or are you just talking about how you'd like us to treat you? Or are you assuming that other people have the same preferences you do?
Morality is Awesome

Don't we have to do it (lying to people) because we value other people being happy? I'd rather trick them (or rather, let the AI do so without my knowledge) than have them spend a lot of time angsting about not being able to help anyone because everyone was already helped. (If there are people who can use your help, I'm not about to wirehead you though)

Do you mean to distinguish this from believing that you have flown a spaceship?

Yes. Thinking about simulating achievement got me confused about it. I can imagine intense pleasure or pain. I can't imagine... (read more)

1 · TheOtherDave · 9y: Regarding the first bit... well, we have a few basic choices:
* Change the world so that reality makes them happy
* Change them so that reality makes them happy
* Lie to them about reality, so that they're happy
* Accept that they aren't happy
If I'm understanding your scenario properly, we don't want to do the first because it leaves more people worse off, and we don't want to do the last because it leaves us worse off. (Why our valuing other people being happy should be more important than their valuing actually helping people, I don't know, but I'll accept that it is.) But why, on your view, ought we lie to them, rather than change them?
Morality is Awesome

No, this is more about deleting a tiny discomfort - say, the fact that I know that all of it is an illusion; I attach a big value to my memory and especially disagree with sweeping changes to it, but I'll rely on the pill and thereby the AI to make the decision about what shouldn't be deleted, because doing so would interfere with the fulfillment of my terminal values, and what can be deleted, because it brings negative utility that isn't necessary.

Intellectually, I wouldn't care whether I'm the only drugged brain in a world where everyone is flying real spaceship... (read more)

0 · MugaSofer · 9y: So you're saying that your utility function is fine with the world-as-it-is, but you don't like the sensation of knowing you're in a vat. Fair enough.
Morality is Awesome

Can't I simulate everything I care about? And if I can, why would I care about what is going on outside of the simulation, any more than I care now about a hypothetical asteroid on which the "true" purpose of the universe is written? Hell, if I can delete from my memory the fact that my utility function is being deceived, I'd gladly do so - yes, it will bring some momentary negative utility, but that would be greatly offset by the gains, especially stretched over a huge amount of time.

Now that I think about it...if, without an awesomen... (read more)

-1 · MugaSofer · 9y: I don't understand this. If your utility function is being deceived, then you don't value the true state of affairs, right? Unless you value "my future self feeling utility" as a terminal value, and this outweighs the value of everything else ...
1 · TheOtherDave · 9y: This is a key assumption. Sure, if I assume that the universe is such that no choice I make affects the chances that a child I care about will starve -- and, more generally, if I assume that no choice I make affects the chances that people will gain good stuff or bad stuff -- then sure, why not wirehead? It's not like there's anything useful I could be doing instead. But some people would, in that scenario, object to the state of the world. Some people actually want to be able to affect the total amount of good and bad stuff that people get. And, sure, the rest of us could get together and lie to them (e.g., by creating a simulation in which they believe that's the case), though it's not entirely clear why we ought to. We could also alter them (e.g., by removing their desire to actually do good) but it's not clear why we ought to do that, either. Do you mean to distinguish this from believing that you have flown a spaceship?
Morality is Awesome

I can't bring myself to see the creation of an awesomeness pill as the one problem of such huge complexity that even a superintelligent agent can't solve it.

1 · NancyLebovitz · 9y: My first thought was that an awesomeness pill would be a pill that makes ordinary experience awesome. Things fall down. Reliably. That's awesome! And in fact, that's a major element of popular science writing, though I don't know how well it works.
3 · [anonymous] · 9y: I have no doubt that you could make a pill that would convince someone that they were living an awesome life, complete with hallucinations of rocket-powered tyrannosaurs and black leather lab coats. The trouble is that merely hallucinating those things, or merely feeling awesome, is not enough. The average optimizer probably has no code for experiencing utility; it only feels the utility of actions under consideration. The concept of valuing (or even having) internal experience is particular to humans, and is in fact only one of the many things that we care about. Is there a good argument for why internal experience ought to be the only thing we care about? Why should we forget all the other things that we like and focus solely on internal experience (and possibly altruism)?
Group rationality diary, 12/25/12

I started doing the same thing a few days ago, in an attempt to get back my habit of waking early (polyphasic experimenting got my sleep schedule out of whack). Something I did differently was, I write in the same box twice - once before I go to bed, something like committing to waking up early, and once after I get up. This solved my problem of getting up, making up some reason to postpone the habit-formation process (or even cancel it to start anew later), and going back to bed. My symbols are a bit more complex, so that I can mark a failure on top of th... (read more)

Brain-in-a-vat Trolley Question

Don't blame yourself, blame the author. (which you kinda sorta did but then didn't)

[SEQ RERUN] True Sources of Disagreement

I remember that when I went through all of the Sequences a year ago, I was curious about the retina issue that Eliezer keeps referring to, but a cursory search didn't return anything useful. I poked around a bit more just now, and found a few short articles on the topic. Could someone point me to more in-depth information regarding the inverted retina?

2 · MinibearRex · 9y: The wikipedia page [http://en.wikipedia.org/wiki/Blind_spot_(vision)] on the blind spot contains a good description, as well as a diagram of vertebrate eyes alongside the eye of an octopus, which does not have the same feature.
Playing the student: attitudes to learning as social roles

For pep talks, I dislike them because they rely on the "I have this image of you" approach. The motivator is trying to get you to think they think you're great - if you don't agree, you will want to live up to the expectation regardless, as the alternative is disappointment, and disappointment hurts. For me, this gets me thinking about ways to win, which gets me back to my thoughts about not being very good, and thus the cycle is reinforced. I might try harder, but I won't feel good about it, and I'll feel paralyzed quickly, once it becomes appar... (read more)

0 · NancyLebovitz · 9y: My model of pep talks is quite different. I assume that the pep talker is trying to give an infusion of motivation so that they can wind me up and not need to push any more.
Playing the student: attitudes to learning as social roles

I share similar behaviors, although with key differences, and you just alerted me - I should be careful with my failure mode. It's gotten to the point where I don't want to try improving particular skillsets around my parents. I've already shown them that I'm bad at them, and that I'm not interested; trying to improve through my usual all-or-nothing approach would feel very awkward, a 180-degree personality turn.

Cryonics as Charity

I find a hundredfold cost decrease quite unlikely, but then, I'm not familiar at all with the costs involved or their potential to be reduced. If the idea of cryonics were accepted widely enough for it to be an acceptable alternative to more expensive treatment though, freezing old people with negative utility to society until a potential technological Singularity or merely the advent of mind uploading would not be far off - and that would be efficient even without cheap cryonics.