Previous Open Thread:

(oops, we missed a day!)

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should start on Monday, and end on Sunday.

4. Open Threads should be posted in Discussion, and not Main.


I'm starting to maybe figure out why I've had such difficulties with both relaxing and working in the recent years.

It feels that, large parts of the time, my mind is constantly looking for an escape, though I'm not entirely sure what exactly it is trying to escape from. But it wants to get away from the current situation, whatever the current situation happens to be. To become so engrossed in something that it forgets about everything else.

Unfortunately, this often leads to the opposite result. My mind wants that engrossment right now, and if it can't get it, it will flinch away from whatever I'm doing and into whatever provides an immediate reward. Facebook, forums, IRC, whatever gives that quick dopamine burst. That means that I have difficulty getting into books, TV shows, computer games: if they don't grab me right away, I'll start growing restless and be unable to focus on them. Even more so with studies or work, which usually require an even longer "warm-up" period before one gets into flow.

Worse, I'm often sufficiently aware of that discomfort that my awareness of it prevents the engrossment. I go loopy: I get uncomfortable about the fact that I'm uncomfortable, an... (read more)

I recognize this in myself and it's been difficult to understand, much less get under control. The single biggest insight I've had about this flinching-away behavior (at least the way it arises in my own mind) is that it's most often a dissociative coping mechanism. Something intuitively clicked into place when I read Pete Walker's description of the "freeze type". From The 4Fs: A Trauma Typology in Complex PTSD:

Many freeze types unconsciously believe that people and danger are synonymous, and that safety lies in solitude. Outside of fantasy, many give up entirely on the possibility of love. The freeze response, also known as the camouflage response, often triggers the individual into hiding, isolating and eschewing human contact as much as possible. This type can be so frozen in retreat mode that it seems as if their starter button is stuck in the "off" position. It is usually the most profoundly abandoned child - "the lost child" - who is forced to "choose" and habituate to the freeze response (the most primitive of the 4Fs). Unable to successfully employ fight, flight or fawn responses, the freeze type's defenses develop around classical

... (read more)
I would also like to see more such discussion, but, as with rationality, more from the viewpoint of rising above base level average than of recovering only to that level.
Although on further thought, maybe that sort of discussion would have to happen somewhere other than LessWrong. Who can do for it the equivalent of Eliezer's writings here? Is there anywhere it is currently being done? ETA: Brienne, and here might be answers to those questions.
If people on LW put half the effort into emotional issues that they put into rationality topics, we'd be a whole lot further along. Thank you very much for this quote. Any insight-explosion books I should read?
Complex PTSD: From Surviving To Thriving by Pete Walker focuses on the understanding that wounds from active abuse make up the outer layers of a psychological structure, the core of which is an experience of abandonment caused by passive neglect. He writes about self-image, food issues, codependency, fear of intimacy and generally about the long but freeing process of recovering. The Drama of the Gifted Child by Alice Miller focuses more on the excuses and cultural ideology behind poor parenting. She grew up in an abusive household in 1920s-'30s Germany. Healing The Shame That Binds You by John Bradshaw is about toxic shame and the variety of ways it takes root in our minds. Feedback loops between addictive behavior and self-hatred, subtle indoctrination about sexuality being "dirty", religious messages about sin, and even being compelled to eat when you're not hungry:
Now that is quite some text to read. Thank you very much. My request was aimed at more general books though this is still useful. You seem very knowledgeable on this specific topic. Am I right in assuming you are knowledgeable about emotional issues more generally? Would you be willing to write a post about these topics?
It's only been about 6 months since I started consciously focusing my attention on the subtle effects of abandonment trauma. Although I've done a fair amount of reading and reflecting on the topic I'm not at the point yet where I can confidently give guidance to others. Maybe in the next 3-4 months I'll write up a post for the discussion section here on LW. What's frustrating is that signs of compulsive, codependent and narcissistic behavior are everywhere, with clear connections to methods of coping developed in childhood, but the number of people who pay attention to these connections is still small enough that discussion is sparse and the sort of research findings you'd like to look up remain unavailable. The most convincing research result I've been able to find is this paper on parental verbal abuse and white matter, where it was found that parental verbal abuse significantly reduces fractional anisotropy in the brain's white matter.
Please do. This seems like an important part of "winning" to some people, and it is related to thinking, therefore it absolutely belongs here.
Interesting, thanks. I had a pretty happy childhood in general, but I was a pretty lonely kid for large parts of the time, and I've certainly experienced a feeling of being abandoned or left alone several times since that. And although my memories are fuzzy, it's possible that the current symptoms would have started originally developing after one particularly traumatic relationship/breakup when I was around 19. Also, meaningful social interaction with people seems to be the most reliable way of making these feelings go away for a while. Also, I tend to react really strongly and positively to any fiction that portrays strong, warm relationships between people. Most intriguing.

This is kind of funny because I came to this open thread to ask something very similar.

I have noticed that my mind has a "default mode," which is to aimlessly browse the internet. If I am engaged in some other activity, no matter how much I am enjoying it, a part of my brain will have a strong desire to go into default mode. Once I am in default mode, it takes active exertion to break away and do anything else, no matter how bored or miserable I become. As you can imagine, this is a massive source of wasted time, and I have always wished that I could stop this tendency. This has been the case more or less ever since I got my first laptop when I was thirteen.

I have recently been experimenting with taking "days off" from the internet. These days are awesome. The day just fills up with free time, and I feel much calmer and more content. I wish I could be free of the internet and do this indefinitely.

But there are obvious problems, a few of which are:

  • Most of the stuff that I wish I was doing instead of aimlessly surfing the internet involves the computer and oftentimes the internet. A few of the things that would be "good uses of my time" are reading, making di

... (read more)
I've found that having two computers, one for work and one for play, has helped immensely.
I like this idea. It's difficult to implement; I have enough computers, but my attempt at enforcing their roles hasn't worked so well. I've had better success with weaker, outdated hardware: anything without wireless internet access, for starters. Unfortunately, being weaker and outdated means it tends to break, and repairs become more difficult due to lack of official support. Then the machines sort of disappear whenever things get moved, being the least used, and I'm back to pitting willpower against the most modern bells and whistles in my possession.

Generally speaking, the less powerful the internet capabilities, the better. Perhaps estimating the optimal amount of data to use would help pick a service plan that disincentivizes wasteful internet use? Or maybe even dialup, if one can get by without streaming video and high-speed downloads.

Another possibility is office space without internet access. Bonus points if there's a way to make getting there easier than leaving (without going overboard, of course). Or a strictly monitored or even public internet connection for work, where anything that is not clearly work-related is visible (hopefully to someone whose opinion or reaction would incentivize staying on task). If possible, not having a personal internet connection at all, and using public locations (Starbucks? Libraries?) when internet is necessary, might be another strategy. If work requires internet access, but not necessarily active access, one could make lists of the things that need downloading and the things that do not, and plan around internet availability (this worked pretty well for me in parts of high school, but your mileage may vary).

These solutions all have something in common: I can't really implement any of them right now without doing some scary things on the other end of a maze constructed from Ugh Fields, anxiety, and less psychological obstacles. So my suggesting them is based on a tenuous analysis of past experience.
I have not been able to get rid of internet addiction by blocking or slowing it. Conversely, I've had (less than ideal) success with oversaturation. I don't think it's a thing I'll get rid of soon; aimless browsing is too much of a quick fix. Lately I've been working on making productivity a quicker fix: getting a little excited every time I complete something small, doing a dance when it's something bigger, etc.
I think there is some underlying reason for browsing as a default state, maybe conditioning. Should it then be possible to train oneself to have a different default state?
Just turning off your network interface for the duration of a work session (maybe in timed Pomodoro bursts) will guard against the mindless reflex of tabbing over to the browser. Then you get the opportunity to make an actually mindful decision about whether to leave the work phase and go browsing or not. If you come across legitimate things to search for that aren't completely blocking the offline work, write them down on a piece of scratch paper to look up later. Tricks like this tend to stop working, though: in the long term you'll probably just mindlessly bring the network interface back up instead, but even months or weeks of having a working technique are better than not having one. Maybe you could team up with an Ita who works with the Reticulum and become an Avout who is forbidden from it.
Yeah, I tried this for a while, along with putting Chrome in increasingly obscure places on my hard drive. After these failed, I came upon the flash drive idea, which has the feature that it involves physical activity and therefore can't be done mindlessly. If you need to, you can throw it across the room.
You could just physically unplug your broadband modem while working then, as long as you're the only person using it.
I don't mind having web browsing as a default state, but what I've done successfully in the past is have alarms throughout the day to remind me to exercise, leave the house, do chores, etc.
I used to live someplace close to a church with a bell tower that rang the time every fifteen minutes. I no longer live there, but I have considered writing a program to do the same thing -- I recall it being useful for productivity, and for escaping default-mode (which has a terrible sense of time).
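For anyone tempted to actually write that program, here's a minimal sketch in Python. The "bell" is just a terminal beep, so swap in any sound-playing call you like; the helper names are my own invention, not from any library:

```python
import datetime
import time

QUARTER = 15 * 60  # seconds between chimes, like the bell tower

def seconds_until_next_quarter(now):
    """Seconds from `now` until the next quarter-hour boundary."""
    elapsed = now.minute * 60 + now.second + now.microsecond / 1e6
    return QUARTER - (elapsed % QUARTER)

def chime_loop():
    """Sound a terminal bell at :00, :15, :30 and :45 (Ctrl-C to stop)."""
    while True:
        time.sleep(seconds_until_next_quarter(datetime.datetime.now()))
        print("\a", datetime.datetime.now().strftime("%H:%M"))
```

Running `chime_loop()` in a spare terminal gives you the church-bell effect; a cron job firing every fifteen minutes would do just as well.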
In my experience adderall can ameliorate this problem somewhat.
I've experienced similar feelings, and have found some exercises from "The Now Habit" to help. There's one he calls the focusing exercise, which involves taking five minutes to get into a flow state before starting; that helps, and I've also found the advice to reframe "How soon can I finish?" as "When can I start?" to be useful.
It sounds pretty familiar to me. My version seems to be background anxiety, and it can help to check my breathing and let my abdomen expand when I'm inhaling.

Stratton's perceptual adaptation experiments a century ago have shown that the brain can adapt to different kinds of visual information, e.g. if you wear glasses that turn the picture upside down, you eventually adjust and start seeing it right side up again. And recently some people have been experimenting with augmented senses, like wearing an anklet with cell phone vibrators that lets you always know which way is north.

I wonder if we can combine these ideas? For example, if you always carry a wearable camera on your wrist and feed the information to a Google Glass-like display, will your brain eventually adapt to having effectively three eyes, one of which is movable? Will you gain better depth perception, a better sense of your surroundings, a better sense of what you look like, etc.?

One aspect of perceptual adaptation I do not often hear emphasized is the role of agency. I first encountered it in this passage: --- Moshe Feldenkrais, "Man and World," in "Explorers of Humankind," ed Thomas Hanna
I read about an experiment (no link, sorry) where people wore helmets that gave them a 360-degree view of their surroundings. They were apparently able to adapt quite well, and could eventually do things like grab a ball tossed to them from behind without turning around.
From my experience with focusing on the senses I already have, the mere availability of the data is not sufficient. You really need to process it. The Glass intervention works well because it also takes away the primary way of interacting with the world. If you only add a sense, most of it can be pretty much ignored, as it doesn't bring any compelling extra value beyond being cool for a while. Color TV was a nice improvement, but not many are jumping on the 3D bandwagon.

So if you really want to go three-eyed, it could be a good bet, from a sense-development perspective, to go new-eye mono only for a while. Another approach would be an environment where the new capabilities are handy enough to make a difference. I could imagine that fixing and taking apart computers could benefit from that kind of sensing. You could also purposefully make a multilayered desk, so that simply seeing what is on the desk would require hand movement, but many more documents could be open at any time. Your brain is already filtering out most of the massive amount of input it takes in, making it quite expensive to get it to bother paying attention to yet another sense-datum.

The new sense would also require its own "drivers". I could imagine that managing a movable eye would be more laborious than eye-focus micromanagement. Having a fixed separation between your viewpoints makes the calculations easy routine; that would have to be expanded into a more general approach for variable separation. There is a camera trick where you change your zoom while simultaneously moving the camera forward or backward, keeping the size of the primary target fixed but stretching the perspective. Big variance in viewpoint separation would induce similar effects. I could imagine how it could be nausea-inducing instead of merely cool. Increased mental labour and confusion, at least in the short term, would press against adopting a more expanded sensory experience. Therefore, if such a transition is wanted, it is importa...
I've thought about taking this idea further. Think of applying the anklet idea to groups of people. What if soccer teams could know where their teammates are at any time, even when they can't see them? Now apply this to firemen, or infantry. This is the startup I'd be doing if I weren't doing what I'm doing. Plugging data feeds right into the brain, and in particular doing this for groups of people, sounds like the next big frontier.
What other applications for groups of people can you imagine, apart from having a sense of each other's position?
Whatever team state matters: maybe online/offline, maybe emotional states, maybe biofeedback (hormones? alpha waves?) but cross-team. Maybe just "how many production bugs we've had this week".
But if we're talking startups, I'd probably look at where the money is and go there. Can this be applied to groups of traders? C-level executives? Medical teams? Maybe some other target group that is both flush with cash and an early adopter of new tech?
Better: put it on your personal drone which normally orbits you but can be sent out to look at things...
I have magnets implanted into two of my fingertips, which extend my sense of touch to be able to feel electromagnetic fields. I did an AMA on reddit a while ago that answers most of the common questions, but I'd be happy to answer any others. To touch on it briefly, alternating fields feel like buzzing, and static fields feel like bumps or divots in space. It's definitely become a seamless part of my sensory experience, although most of the time I can't tell it's there because ambient fields are pretty weak.
There's already some brain plasticity research which does this for people who have lost senses. Can't remember a specific example, but I know there are quite a few in the book "The Brain That Changes Itself"
Well, technologies like BrainPort allow one to "see" with their tongue.
I would guess strongly (75%) that the answer is yes. There are incredible stories about people's brains adapting to new inputs. There is one paper in the neuroscience literature showing that if you connect a video input to a blind cat's auditory cortex, that brain region will develop neural structures that are usually associated with vision (like edge detectors).
This makes me wonder what could be done with, say, a bluetooth earbud and a smartphone, both of which are rather less conspicuous than Google Glass. Not quite as good as connecting straight to the auditory cortex, but still. The first thing that comes to mind is trying to get GPS navigation to work on a System 1 rather than System 2 level, through subtle cues rather than interpreted speech. [Edit: or positional cues rather than navigational. Not just knowing which way north is, but knowing which way home is.]

A quick search on Google Scholar with queries such as cryonic, cryoprotectant, cryostasis, and neuropreservation confirms my suspicion that there is very little, if any, academic research on cryonics. I realize that, being generally supportive of MIRI's mission, the Less Wrong community is probably not very judgmental of non-academic science, and I may be biased, being from academia myself, but I believe that despite all its problems, any field of study largely benefits from being a field of academic study. That makes it easier to get funding; that makes the results more likely to be noticed, verified and elaborated on by other experts, as well as taught to students; that makes it more likely to be seriously considered by the general public and government officials. The last point is particularly important, since on one hand, with the current quasi-Ponzi mechanism of funding, the position of preserved patients is secured by the arrival of new members, and on the other hand, a large legislative effort is required to make cryonics reliable: train the doctors, give the patients more legal protection than the legal protection of graves, and eventually get it covered by health insurance policies or... (read more)

I think the nearest thing is the Brain Preservation Foundation. If you want to donate money towards that purpose, they are a good address.

Downvoted because if I remember correctly, this is wrong; the cost of preservation of a particular person includes a lump of money big enough for the interest to pay for their maintenance. If I remember incorrectly and someone points it out, I will rescind my downvote.

For those of us who for whatever reason can't make it to a CFAR workshop, what are the best ways to get more or less the equivalent? A lot of the information they teach is in the Sequences (but not all of it, from what it looks like), but my impression is that much of the value from a workshop is in (a) hands-on activities, (b) interactions with others, and (c) personalized applications of rationality principles developed in numerous one-on-one and small-group sessions.

So I'm looking for:

  • Resources for getting the information that's not covered (or at least not comprehensively covered) in the Sequences.
  • Ideas for how to simulate the activities.
  • Ideas for how to simulate the small group interactions. This is mainly what LW meetups are for, but for various reasons I can't usually make it to a meetup.
  • How to simulate the one-on-one personalized training.

That last one is probably the hardest, and I suspect it's impossible without either (a) spending an awful lot of time developing the techniques yourself, or (b) getting a tutor. So, anybody interested in being a rationality tutor?

Find & read good self-help type stuff (relevant books by psychologists, Less Wrong posts, Sebastian Marshall, Getting Things Done, etc.) and invent/experiment with your own techniques in a systematic way. Do Tiny Habits. Start meditating. Watch & apply this video. Keep a log of all the failure modes you run in to and catalogue strategies for overcoming each of them. Read about habit formation. Brainstorm & collect habits that would be useful to have and pair them with habit formation techniques. Try lots of techniques and reflect about why things are or are not working.
Have you asked CFAR whether you could hire one of their instructors to give you one-on-one training over Skype? I expect it would be expensive, but they are flexible with people who are willing to pay thousands of dollars.
One of the best things that happened to me was getting into a tumblr rationalist group on skype. The feeling is a bit like a meetup, except people are available all the time. Yes, but I'm not yet versed enough in the Art to help anyone except a novice. If you have specific things you want to discuss I can probably point you in the right direction. This is also what the skype group (or meetups) are good for. There will always be someone who can help you with a particular issue.

I just discovered a very useful way to improve my comfort and posture while sitting in chairs not my own. If you travel a lot, are constantly changing workstations, or just want to improve your current setup: buy contact shelf lining, the kind with a no-slip grip.

The liner adds grip to chairs that either 1. do not adequately recline or 2. recline but tend to let you slide off (slippery leather chairs). Recently I was provided with a stiff non-reclining wood chair and it was killing my back. Every time I relaxed into the back rest I started to slide do... (read more)

In a rare case of actually doing something I said I would, I've started to write a utility for elevating certain words and phrases in the web browser to your attention, by highlighting them and providing a tool-tip explanation for why they were highlighted. It's still in the occasionally-blow-up-your-webpage-and-crash-the-browser phase of development, but is showing promise nonetheless.

I have my own reasons for wanting this utility ("LOOK AT THIS WORD! SUBJECT IT TO SCRUTINY OR IT WILL BE YOUR UNDOING!") but thought I would throw it out to LW to see if there are any use-cases I might not have considered.

On a related note, is there a reason why Less Wrong, and seemingly no other website, would suffer a catastrophic memory leak when I try and append a JSON-P script element to the DOM? It doesn't report any security policy conflicts; it just dies.
Ooh, ooh, thought of another cluster of danger phrases (inspired by a recent Yvain blog post, I forget which): "studies have shown", "studies show", "studies find", and any other vague claim that something's corroborated by multiple scientific studies which the writer mysteriously can't be bothered to reference properly, or even give a clear description of.
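As a rough illustration of how the utility might match that cluster (the phrase list and function name are mine, and a real in-browser version would walk the DOM rather than scan plain text):

```python
import re

# Vague-attribution phrases from the comment above; extend to taste.
DANGER_PHRASES = [
    r"studies\s+have\s+shown",
    r"studies\s+show",
    r"studies\s+find",
]
DANGER_RE = re.compile("|".join(DANGER_PHRASES), re.IGNORECASE)

def flag_danger_phrases(text):
    """Return each matched phrase, ready to be highlighted or tooltipped."""
    return DANGER_RE.findall(text)
```

Each match could then be wrapped in a highlight span with a tooltip explaining why the phrase deserves scrutiny.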
So is this script somewhere we can try it?
Not without breaking everything horribly (including my debugging tools) in a non-negligible number of cases (including Less Wrong). I did put together a little bookmarklet example, but since it doubles as an "occasionally mangle your page and possibly make this tab explode in a shower of RAM" button, I decided not to share it until I've isolated and fixed this particular problem.

No one's posted about the new Oregon Cryonics yet?

This seems very interesting; maybe someone can look into it in depth. The costs are much more manageable, and there are hopefully fewer legal issues with preserving the brain only. Not sure why they only talk about "next of kin". Anyway, "chemical preservation" of the brain only for $2500 seems like an interesting alternative to Alcor or CI. It is also likely to go over better in an argument with people reluctant to opt for "traditional" cryonics, such as the parents of some of the regulars who complain about it. I am not qualified to judge the quality of their "Instructions for Funeral Director":
Someone from OC came by the RationalWiki Facebook group asking if it was of interest to us ... um. I suggested they should definitely say hi on LW.

If you had four months to dedicate to working on a project, what would you work on?

Ben Pace:
Learn all the maths to be able to get a job at MIRI :)
Intern under a specialist in heat-straightening of damaged steel members.
Writing a novel.

Yesterday I posted a Michigan meetup.

My location is set to Michigan.

The "Nearest Meetups" column on the right-hand side suggests Atlanta and Houston, but not Michigan.

Is this a known bug?

It's a feature, not a bug. The friendly algorithm that creates that column assumes you would rationally prefer Atlanta or Houston to anywhere within 40 miles of Detroit.

'Nearest meetups' ignores where your location is set to, and tries to work it out from your IP address. (Source, my location is set to London.) Perhaps your IP address is misleading about that? Two sites that try to work out your location from your IP: (says I'm in Budapest) (says I'm in London) I'm not sure what system LW uses for this, right now it gives me "upcoming" instead of "nearest". (Edit: now that I'm at home instead of work, these respectively say London and Glasgow, and I have "nearest meetups" back.)

Is there any convenient way to promote interesting sub-threads to Discussion-level posts?

Copy-and-paste with a link to the original, really. (So no.)

I do not understand - and I mean this respectfully - why anyone would care about Newcomblike problems or UDT or TDT, beyond mathematical interest. An Omega is physically impossible - and if I were ever to find myself in an apparently Newcomblike problem in real life, I'd obviously choose to take both boxes.

I don't think it's physically impossible for someone to predict my behavior in some situation with a high degree of accuracy.
If I wanted to thwart or discredit pseudo-Omega, I could base my decision on a source of randomness. This brings me out of reach of any real-world attempt at setting up the Newcomblike problem. It's not the same as guaranteeing a win, but it undermines the premise. Certainly, anybody trying to play pseudo-omega against random-decider would start losing lots of money until they settled on always keeping box B empty. And if it's a repeated game where Omega explicitly guarantees it will attempt to keep its accuracy high, choosing only box B emerges as the right choice from non-TDT theories.
It's not a zero-sum game. Using randomness means pseudo-Omega will guess wrong, so he'll lose, but it doesn't mean that he'll guess you'll one-box, so you don't win. There is no mixed Nash equilibrium. The only Nash equilibrium is to always one-box.
The idea that we live in a simulation is not a physical impossibility. At the moment choices can often be predicted 7 seconds in advance by reading brain signals.
Source? How accurate is this prediction?
A quick googling gives me as source.
Even if we live in a simulation, I've never heard of anybody being presented with a Newcomblike problem. Make a coin flip less than 7 seconds before deciding.
Most people don't make coin flips. You can set the rule that making a coin flip is equivalent to picking both boxes.
Fine, but most people can notice a brain scanner attached to their heads, and would then realize that the game starts at "convince the brain scanner that you will pick one box". Newcomblike problems reduce to this multi-stage game too.
Brain scanners are a technology that's very straightforward to think about. Humans reading other humans is a lot more complicated. People have a hard time accepting that Eliezer won the AI-box challenge, and "mind reading" and predicting the choices of other people is a task of similar difficulty to the AI-box challenge.

Let's take contact improvisation as an illustrating example. It's a dance form without hard rules. If I'm dancing contact improvisation with a woman, then she expects me to be in a state where I follow the situation and express my intuition. If I'm in that state, and that means I touch her breast with my arms, that's no real problem. If, on the other hand, I make a conscious decision that I want to touch her breast and act accordingly, I'm likely to creep her out. There are plenty of people in the contact improvisation field whose awareness of other people is good enough to tell the difference.

Another case where decision frameworks matter is diplomacy. A diplomat gets told beforehand how he's supposed to negotiate, and there might be instances where that information leaks.
I don't think this contradicts any of my points. Causal decision theory would never tell the State Department to behave as if leaks are impossible. Yet because leak probability is low, I think any diplomatic group that openly published all its internal orders would find itself greatly hampered against others that didn't. Playing a game against an opponent with an imperfect model of yourself, especially one whose model-building process you understand, does not require a new decision theory.
It's possible that the channel through which the diplomatic group internally communicates is completely compromised.
I believe the application was how a duplicable intelligence like an AI could reason effectively. (Hence TDT thinking in terms of all instances of you.)
Communication and pre-planning would be a superior coordination method.
This is assuming you know that you might be just one copy of many, at varying points in a timeline.
Do you think that someone can predict your behavior with maybe 80% accuracy? Like, for example, whether you would one-box or two-box, based on what you wrote? And then confidently leave the $1M box empty because they know you'd two-box? And use that fact to win a bet, for example? Seems very practical.
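To put rough numbers on "seems very practical", assuming the standard payoffs of the thought experiment ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 in the transparent one) and a predictor with accuracy p (the function names are mine):

```python
BIG, SMALL = 1_000_000, 1_000  # the standard Newcomb payoffs

def one_box_ev(p):
    # With probability p the predictor was right and filled the big box.
    return p * BIG

def two_box_ev(p):
    # You always get the small box; the big box is full only when the
    # predictor was wrong (probability 1 - p).
    return SMALL + (1 - p) * BIG
```

At 80% accuracy, one-boxing is worth an expected $800,000 against $201,000 for two-boxing; the break-even accuracy works out to 50.05%, so on this expected-value view even a barely-better-than-chance predictor makes one-boxing the better bet.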
If I bet $1001 that I'd one-box I'd have a natural incentive to do so. However, if the boxes were already stocked and I gain nothing for proving pseudo-Omega wrong, then two-boxing is clearly superior. Otherwise I open one empty box, have nothing, yell at pseudo-Omega for being wrong, get a shrug in response, and go to bed regretting that I'd ever heard of TDT.
So as several people said, Omega is probably more within the realm of possibility than you give it credit for, but MORE IMPORTANTLY, Omega is definitely possible for non-humans. As David_Gerard said, the point of this thought exercise is for AI, not for humans. For an AI written by humans, we can know all of its code and predict the answers it will give to certain questions. This means that the AI needs to deal with us as if we are an Omega that can predict the future. For the purposes of AI, you need decision theories that can deal with entities having arbitrarily strong models of each other, recursively. And TDT is one way of trying to do that.
In general, predicting what code does can be as hard as executing the code. But I know that's been considered and I guess that gets into other areas.
Even if that's the case, when dealing with AI we more easily have the option of simulation. You can run a program over and over again, and see how it reacts to different inputs.
I understood that people here mostly do care about them because of mathematical interest. It's a part of the "how can we design an AGI" math problem.

LessWrong's focus on the bay-area/software-programmer/secular/transhumanist crowd seems to me unnecessary. I understand that that's how the organization got its start, and it's fine. But when people here tie rationality to being part of that subset, or to high-IQ in general, it seems a bit silly (I also find the near-obsession with IQ a bit unsettling).

If the sequences were being repackaged as a self-help book targeted towards the widest possible audience, what would they look like?

Some of the material is essentially millennia old, self-knowledge and self-a... (read more)

For all the emphasis on Slytherin-style interpersonal competence (not so much on the main site anymore, but it's easy to find in the archive and in Methods), LW's historically had a pretty serious blind spot when it comes to PR and other large-scale social phenomena. There's probably some basic typical-minding in this, but I'm inclined to treat it mostly as a subculture issue; American geek culture has a pretty solid exceptionalist streak to it, and treats outsiders with pity when it isn't treating them with contempt and suspicion. And we're very much tied to geek culture. I've talked to LWers who don't feel comfortable exercising because they feel like it's enemy clothing; if we can't handle something that superficial, how are we supposed to get into Joe Sixpack's head?

Ultimately I think we focus on contrarian technocrat types, consciously or not, because they're the people we know how to reach. I include myself in this, unfortunately.

A very fair assessment. I would also note that often when people DO think about marketing LW, they speak about the act of marketing with outright contempt. Marketing is just a set of methodologies to draw attention to something. As a rationalist, one should embrace that tool for anything they care about rather than treating it as vulgar.
A better question is what exactly we are supposed to do inside Joe Sixpack's head. Make him less stupid? No one knows how. Give him practical advice so that he fails less epically? There are multiple shelves of self-help books at B&N, programs run by nonprofits and the government, classes at the local community college, etc. etc. Joe Sixpack shows very little interest in any of those, so I don't see why the Sequences or some distillation of them would do better.

Nice example of geek exceptionalism there, dude.

To be fair, it might have some merit if we were literally talking about the average person, though I'm far from certain; someone buys an awful lot of mass-market self-help books and I don't think it's exclusively Bay Aryans. But I was using "Joe Sixpack" there in the sense of "someone who is not a geek", or even "someone who isn't part of the specific cluster of techies that LW draws from", and there should be plenty of smart, motivated, growth-oriented people within that set. If we can't speak to them, that's entirely on us.

Nah, just plain-vanilla arrogance :-D I am not quite sure I belong to American geek culture, anyway. Ah. I read "Joe Sixpack" as being slightly above "redneck" and slightly below "your average American with 2.2 children". So do you mean people like engineers, financial quants, the Make community, bright-eyed humanities graduates? These people are generally not dumb. But I am still having trouble imagining what you would want to do inside their heads.
The first group of people I thought of was lawyers, who have both a higher-than-average baseline understanding of applied cognitive science and a strong built-in incentive to get better at it. I wouldn't stop there, of course; all sorts of people have reasons to improve their thinking and understanding, and even more have incentives to become more instrumentally effective. As to what we'd do in their heads... same thing as we're trying to do in ours, of course.
Um. Speaking for myself, what I'm trying to do in my own head doesn't really transfer to other heads, and I'm not trying to do anything (serious) inside other people's heads in general.

The hardline secularism is probably alienating (and frankly, are there not many people for whom at least the outward appearance of belief is rational, when it is what ties them to their communities?) to many people who could still learn a lot. Science can be promoted as an alternative to mysticism in a way that isn't hostile and doesn't provoke instant dismissal by those who most need that alternative.

The hardline secularism (which might be better described as a community norm of atheism, given that some of the community favors creating community structures which take on the role of religious participation,) isn't a prerequisite so much as a conclusion, but it's one that's generally held within the community to be pretty basic.

However, so many of the lessons of epistemic rationality bear on religious belief that not addressing the matter at all would probably smack of willful avoidance.

In a sense, rationality might function as an alternative to mysticism. Eliezer has spoken for instance about how he tries to present certain lessons of rationality as deeply wise so that people will not come to it looking for wisdom, find simple "answers," and be tempted to look for deep... (read more)

Given that Eliezer wrote HPMOR, he is not really turning away from mysticism and teaching through stories.
One would expect an alternative to a thing to share enough characteristics with the thing to make it an alternative. Turkey is an alternative to chicken. Ice cream is not. Teaching rationality through stories and deep-wisdom tropes is an alternative to teaching mysticism through stories and deep-wisdom tropes. Teaching rationality through academic papers is not.
Simpler language, many examples, many exercises. And then the biggest problem would be that most people would just skip the exercises, remember some keywords, and think that it made them more rational. By which I mean that making the book more accessible is a good thing, and we definitely should do it. But rationality also requires some effort from the reader that cannot be completely substituted by the book. We could reach a wider audience, but it would still be just a tiny minority of the population. Most people just wouldn't care enough to really do the rationality stuff. Which means that the book should start with some motivating examples. But even that has limited effect. I believe there is huge space for improvement, but we shouldn't expect magic even with the best materials. There is only so much even the best book can do.

The problem is, using these millennia-old methods people can generate a lot of nonsense. And they predictably do, most of the time. Otherwise, Freud would have already invented rationality, founded CFAR, become a beisutsukai master, built a Friendly AI, and started the Singularity. (Unless Aristotle or Socrates had already done it first.) Instead, he just discovered that everything you dream about is secretly a penis. The difficult part is to avoid self-deception. These millennia-old materials seem quite bad at it. Maybe they were the best of what was available at their time. But that's not enough. Archimedes could have been the smartest physicist of his time, but he still didn't invent relativity. Being "best" is not enough; you have to do things correctly.
Okay, this is true. But LessWrong is currently a set of articles. So the medium is essentially unchanged, and any of these criticisms apply to the current form. And how many people do you think the article on akrasia has actually cured of akrasia?

First of all, I'm mainly dealing with the subset of material here that deals with self-knowledge. Even if you disagree with "millennia old", do you disagree with "any decent therapist would try to provide many/most of these tools to his/her patients"? On the more scientific side, the idea of optimal scientific inquiry has been refined over the years, but the core of observation, experimentation and modeling is hardly new either.

I do not see what you mean here. Nobody at LW has invented rationality, become a beisutsukai master, built a Friendly AI or started the Singularity. Freud correctly realized the importance the subconscious has in shaping our behavior, and the fact that it is shaped by past experiences in ways not always clear to us. He then failed to separate this knowledge from some personal obsessions. We wouldn't expect any methods of rationality to turn Freud into a superhero; we'd expect them to help people reading him separate the wheat from the chaff.
And also an e-book (which is probably not finished yet, last mention here), that is still just a set of articles, but they are selected, reordered, and the comments are removed -- which is helpful, at least for readers like me, because when I read the web, I cannot resist reading the comments (which together can be 10 times as long as the article) and clicking hyperlinks, but when I read the book, I obediently follow the page flow. A good writer could then take this book as a starting point, and rewrite it, with exercises. But for this we need a volunteer, because Eliezer is not going to do it. And the volunteer needs to have some skills.

Akrasia survey data analysis. Some methods seem to work for some people, but no method is universally useful. The highest success was "exercise to increase energy" and even that helped only 25% of people; and the critical weakness seems to be that most people think it is a good idea, but don't do it. To overcome this, we would need some off-line solutions, like exercising together. (Or maybe a "LessWrong Virtual Exercise Hall".)

Yes, I do. Therapists don't see teaching rationality as their job (although it correlates), wouldn't agree with some parts of our definitions of rationality (many of them are religious, or enjoy some kind of mysticism), and would consider some parts too technical and irrelevant for mental health (Bayes Rule, Solomonoff Prior, neural networks...). But when you remove the technical details, what is left is pretty much "do things that seem reasonable". Which still would be a huge improvement for many people.

That's the theory. Now look at the practice of... say, medicine. How much of it really is evidence-based, and how much of that is double-blind with control group and large enough sample and meta-analysis et cetera? When you start looking at it closely, actually very little. (If you want a horror story, read about Ignaz Semmelweis, who discovered how to save the lives of thousands of people and provided ha
LessWrong activity seems to shift more into meatspace as time goes on. We have the study hall for people with akrasia, which provides different help than just reading an article about akrasia. CFAR partly grew out of LW, and they hold workshops.
I don't understand what this means. LW is composed mostly of people from these backgrounds. Are you saying that this is a problem? If by rationality you mean systematic winning (where winning can be either truth seeking (epistemic rationality) or goal achieving (instrumental rationality)), then no one is claiming that we have a monopoly on it. But if by rationality you are referring to the group of people who have decided to study it and form a community around it, then yes, most of us are high-IQ and in technical fields. And if you think this is a problem, I'd be interested in why. "In other words, my opponent believes something, which is kind of like being obsessed with it, and obsession is bad." If you have a beef with a particular view or argument, then say so. Eliezer has responded to this (very common) criticism here. I don't know why you want LW to be packaged for a wide audience. I suspect this would do more harm than good to us, and to the wider audience. It would harm the wider audience because of the sophistication bias, which would cause them to mostly look for errors in the thinking of others rather than in their own thinking. It takes a certain amount of introspectiveness (which LW seems to self-select for) not to become half-a-rationalist.
If it creates an exclusionary atmosphere, or prevents people outside that group from reading and absorbing the ideas, or closes this community to outside ideas, then yes. But mostly I think that focusing on presenting these ideas only to that group is unnecessary. I am really thinking of posts like this, where many commenters agonize over how hard it would be to bring rationality to the masses. I did say what I have a beef with: the attitude that deliberate application of rationality is only for high-IQ people, or that only high-IQ people are likely to make real contributions. It's not a criticism - it's an explanation for why I don't believe it would be that difficult to package the content of the sequences for a general audience. None of it needs to be packaged as revelatory. Instead of calling rationality systematic winning, just call it a laundry list of methods for being clear-eyed and avoiding self-deception. Several responses bring up the "half-a-rationalist" criticism, but I think that's something that can be avoided by presentation. Instead of "here's a bunch of tools to be cleverer than other people", present it as "here's a bunch of tools to occasionally catch yourself before you make a dumb mistake". It's certainly no excuse not to try to think of how a more broadly-targeted presentation of the sequences could be put together. And really, what's the worst-case scenario? That articles here sometimes get cited vacuously, kind of like those fallacy lists? Not that bad.
Inclusiveness is not a terminal value for me. Certain types of people are attracted to a community such as LW, as with every other type of community. I do not see this as a problem. Which of the following statements would you endorse, if any?

1) LW should change in such a way as to be more inclusive to a wider variety of people.
1a) LW members should change how they comment (perhaps avoiding jargon?) so as to be more inclusive.
1b) LW members should change the topics that they discuss in order to be more inclusive.
2) LW should compose a rewritten set of sequences to replace the current sequences as a way of making the community more inclusive.
3) LW should compose a rewritten set of sequences and publish it somewhere (perhaps a book or a different website) to spread the tools of rationality.
4) LW should try to actively recruit different types of people than the ones that are naturally inclined to read it already.
I don't think LW needs to change dramatically (though more activity would be nice), I just think it should be acknowledged that the demographic focus is narrow; a wider focus could mean a new community or a growth of LW, or something else. Mainly #3 and to an extent #4. I'd modify and combine #4 and #1a/1b into: 5) We should have inclusionary, non-jargony explanations and examples at the ready to express almost any idea on rationality that we understand within LW's context. Especially ideas that have mainstream analogues, which is most of them. This has many potential uses including #1 and #4.
What practical steps do you see to make LW less focused on that crowd? What are you advocating?
The book that Eliezer is writing? (What's the state of play on that, btw?)
Link for info? But is he actually planning to change his style? He's more or less explicitly targeted the bay-area/software-programmer/secular/transhumanist, and he's openly stated that he's content with that focus.
I don't have any more information. The book has been mentioned a few times on LW, but I don't know what stage it's at, and I haven't seen any of the text.
It is a selection of 345 articles, together over 2000 pages, mostly from the old Sequences from the Overcoming Bias era, and a few long articles from Eliezer's homepage. The less important articles are removed, and the quantum physics part is heavily reduced. (I have a draft because I am translating it into Slovak, but I am not allowed to share it. Maybe you could still volunteer as a proofreader, to get a copy.)

Effective animal altruism question: I may be getting a dog. Dogs are omnivores who seem to need meat to stay healthy. What's the most ethical way to keep my hypothetical dog happy and healthy?

Edit: Answers Pet Foods appears to satisfice. I'll be going with this pending evidence that there's a better solution.

I don't have a full answer to the question, but if you do feed the dog meat, one starting point would be to prefer meat that has less suffering associated with it. It is typically claimed that beef has less suffering per unit mass associated with it than pork and much less than chicken, simply because you get a lot more from one individual. The counterargument would be to claim that cows > pigs > chickens in intelligence/complexity to a great enough extent to outweigh this consideration. I'm curious: are there specific reasons to believe that dogs need meat while humans (also omnivores) do not? A quick Google search finds lots of vegetarians happy to proclaim that dogs can be vegetarian too, but I haven't looked into details.
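The "you get a lot more from one individual" point is easy to make concrete. Here is a back-of-the-envelope sketch in Python; the yield figures are rough placeholders for illustration, not sourced numbers, and it deliberately ignores lifespan and living-condition differences (which arguably cut the other way for chickens):

```python
# Placeholder figures for edible meat per animal, in kg.
# The point is the ratio between species, not the exact values.
edible_yield_kg = {"cow": 220.0, "pig": 55.0, "chicken": 1.4}

# Animals killed per kg of meat consumed, under these assumptions.
deaths_per_kg = {animal: 1.0 / kg for animal, kg in edible_yield_kg.items()}

for animal, d in sorted(deaths_per_kg.items(), key=lambda kv: kv[1]):
    print(f"{animal}: {d:.3f} deaths per kg")
```

On these (made-up) numbers, a kilogram of chicken involves over a hundred times as many deaths as a kilogram of beef, which is the shape of the standard argument, before weighing intelligence/complexity per individual.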
My understanding is that pigs > cows >> chickens. Poultry vs mammal is a difficult question that depends on nebulous value judgments, but I thought it was fairly settled that beef causes less suffering/mass than other mammals.
Huskies love fish (for obvious practical reasons), and fish are just dumb. (Though the way we achieve that is to mix fishy cat food into our husky's dog food, which is random tinned dog food.)
Pigs on top surprises me, given that I thought pigs had more intelligence/awareness than other meat sources (as measured by nebulous educated guessing on our part).
From his last sentence, Ben agrees with you. He has just reversed the meaning of the inequality sign.
You're right, I failed a parse check. Thanks!
Here's a quick citation. tl;dr: Dogs are opportunistic carnivores more than omnivores. They eat whatever they can get, and they'll probably survive without meat, but they'll be missing a bunch of things their bodies expect to have.

My internship has a donation match. I want to donate to something life-extension related; tentatively looking at SENS, but I have some questions:

  • How can I quantify the expected value of money donated? The relevant metric would be "increase in the probability of me personally experiencing dramatic life extension." I have no idea how to go about estimating this, but this determines whether I want to save the money to spend on myself vs. donate it.
  • What other major reputable organizations are there in the biological life extension sphere? Are there any that could use additional money better than SENS?
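One way to frame the first bullet is a Fermi estimate. Every number below is a made-up placeholder chosen to show the structure of the calculation, not an actual estimate for SENS or any other organization:

```python
# All inputs are illustrative assumptions, not real figures.
donation = 2_000.0            # dollars donated, including the employer match
field_budget_gap = 1e9        # assumed funding needed to meaningfully speed research
p_success_if_funded = 0.10    # assumed extra success probability from that funding
p_personally_benefit = 0.30   # assumed chance you live long enough for it to matter

# Linear approximation: your donation buys a pro-rata share of the shift.
delta_p = (donation / field_budget_gap) * p_success_if_funded * p_personally_benefit
print(f"Increase in P(you experience dramatic life extension): {delta_p:.1e}")
```

The interesting part is less the output than which input you find hardest to defend; that is usually where the donate-vs-save decision actually turns.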

Today is election day here in Korea. Although I have voted, I have yet to see a satisfactory argument for taking the time to vote. Does anyone know of good arguments for voting? I am thinking of an answer that

  1. Does not rely on the signalling benefits of voting
  2. Does not rely on hyperrational-like arguments.

Well you see the government comes to you with a closed box that they say they have already filled with either a totalitarian government if they predicted you would not vote, but it is filled with a relatively free republic if they predicted you would vote. They filled the box long ago, however...

Up until 2011 in Canada, parties would receive per-vote subsidies to their budgets. This was strongly defended by the centre and left parties as a way to keep big money out of politics and as a measure of true democracy in our first-past-the-post system.
I once saw an argument that if you compare the chance of an election being decided by one vote to the benefits of getting your preferred party/candidate in power, which may be billions or trillions of dollars, then voting can be worth thousands of dollars - at least if you value those benefits in full rather than only for their effect on you (which is dubious).
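The arithmetic behind that argument, with purely illustrative numbers (the decisiveness probability in particular varies enormously by election and district):

```python
# Illustrative inputs only; none of these are estimates for a real election.
p_decisive = 1e-7       # assumed chance one vote flips the outcome
total_benefit = 1e9     # assumed dollar value, to everyone, of the better winner
electorate = 1e7        # assumed number of people sharing that benefit

ev_altruistic = p_decisive * total_benefit             # counting benefits to everyone
ev_selfish = p_decisive * total_benefit / electorate   # counting only your own share

print(f"altruistic EV per vote: ${ev_altruistic:.2f}")
print(f"selfish EV per vote:    ${ev_selfish:.8f}")
```

With these inputs the altruistic accounting values a vote at about $100, while the selfish accounting values it at a fraction of a cent, which is exactly the "dubious" distinction the parent comment flags.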
That's it, although checking that post and comments again I feel like it may be making an accounting error of some sort. edit: Actually it's probably just positing excessive confidence (inspired by hindsight) in the value of getting your guy compared to the other guy.
Political parties will change their policies to capture more voters. So even though your vote won't change who wins the election, you will still shift the policies of the parties towards your own views.
You don't achieve this by voting -- you achieve this by loudly proclaiming that you will vote on the basis of issues A, B, and C.
I think half an hour to go and vote is probably more effective than half an hour of loudly proclaiming, but I can't think of a test for this. Perhaps look at elections where the vote showed that what people wanted was different from what the media said people wanted, and then see which way the parties moved.
The problem is that the party, when considering whether to change policies, has no idea who voted for/against it for which reason. All it knows is that it gained or lost certain number of voters (of certain demographics) in between two elections. If issue Z is highly important to you and you vote on the basis of the party's attitude to it, how does the party know this if the only thing you do is silently drop your ballot?
Vote for a third party that cares about Z.
Provided that one exists. And provided that it isn't completely screwed up about issues A to Y. And provided you are willing to sacrifice the rest of your political signaling power to a signal about Z.
If you're lucky enough to be in a country with preferential voting, there's usually a handful of 3rd parties with various policies (with published preferences so you know where the vote will 'actually' end up). So you'll at least have the opportunity to cast a few bits of information, rather than a single bit. Obligatory Ken the Voting Dingo comic about how it's not possible to waste your vote: "I'll look into this 'hugs'"
I must say I appreciate the comic which starts with "It's me, your good friend Dennis the Erection Koala" :-D On the other hand if you actually do care about conveying bits of information, there are much more effective ways than voting.
Ah yes, you're right. That clearly weakens the effect of voting substantially.
The only reasons I can think of are your #1 and #2. But I think both are perfectly good reasons to vote...

A handy quote by Alvin Toffler, from his introduction for The Futurists:

If we do not learn from history, we shall be compelled to relive it. True. But if we do not change the future, we shall be compelled to endure it. And that could be worse.

A friend posted this:

anyone know of an online timer that will open a pop-up window at a specified time, with a message I can enter myself?

She's found timers which pop up windows, but none where she can add a message.

What is she ultimately trying to achieve? More aggressive reminders than a normal calendar app can give you? Also: computer or smartphone?
I'm having memory problems which make it hard to function, and I'm trying to work around those. For example, if I have started a process and want to do something else while it runs, I want a reminder to check on it/go back to it in x minutes. I want it to be on my computer instead of phone. In theory I could use the reminder feature on my iPhone, but this is a different kind of reminder and I don't want to dilute the meaning of the reminder sound. Also, I want something that will pop up and interrupt what I'm doing (and not rely on noise that will bother other people), and I want it to be easy to set. Most of the things I can find have an alarm but not a pop-up. This: has a popup and lets me enter a custom message...but the custom message does not show on the popup window. doh. It may be the best I can do, but I imagine I could have several timers running at once, so it would be confusing without a note. this has the popup behavior I want, but doesn't let me add a message.
Google Calendar does this, I think.
Google Calendar puts the message in the Google Calendar window rather than a pop-up in front of whatever you're doing.
Google has what it calls desktop notifications. See e.g. here
I use Apple's iCal, which of course is only for Apple devices. It pops up a notification showing the "description", "location", and time of the event. I put the quotes in because you can use those strings for whatever you like. If the calendar is shared in iCloud then an event entered on any of your devices pops up on all of them.
You can do this in Windows using Task Scheduler.
task scheduler does it, but it's a ton of steps - not something I would want to set up dozens of times a day.
Here's a quick-and-dirty batch file I made to add a reminder to the task scheduler. Copy it into Notepad and save it as something.bat, then make a link to it on your desktop or wherever.

@echo off
set /p MESSAGE=What do you want to be reminded of?^

^>
set /p ALERTTIME=When do you want to be reminded (hh:mm:ss)?^

^>
set TASKNAME=%DATE:/=_%_%TIME::=_%
set TASKNAME=%TASKNAME:.=_%
schtasks /create /sc once /tn %TASKNAME% /tr "msg * %MESSAGE%" /st %ALERTTIME%
pause

EDIT: I can't figure out how to make LessWrong put a blank line in a code block. There needs to be an extra blank line before each ^>

It prompts you for the text and the time to pop up the alert. It does have some limitations (you need to specify the exact time rather than e.g., "alert me in 30 minutes", and it will only work for the same day), but if people think it's useful I can improve it. It also needs you to enter your password to schedule the task. It's possible to avoid this by putting your username/password into the batch file, but that's obviously a security risk so I wouldn't recommend it. If you want to do so anyway you can modify the second-to-last line of the file to add the following text (replacing 'username' and 'password' with your actual username and password): /ru username /rp password

I think I've figured out a basic neural gate. I will do my best to describe it.

4 nerves: A, B, X, Y. A has its tail connected to X. B has its tail connected to X and Y. If A fires, X fires. If B fires, X and Y fire. If A then B fire, X will fire then Y will fire (X needs a small amount of time to reset, so B will only be able to activate Y). If B then A fire, X and Y will fire at the same time.

This feels like it could be similar to the AND circuit. Just like modern electronics need AND, OR, and NOT, if I could find all the nerve gates I'd have all the parts needed to build a brain (or at least a network-based computer).
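The order-dependence described above can be checked with a toy discrete-time simulation. This is only a sketch of the idealized gate in the comment (unit delay, one-step refractory period for X), not a biophysical model:

```python
def simulate(spikes, steps=3, refractory=1):
    """Toy neurons: A -> X; B -> X and Y; X cannot refire within `refractory` steps.

    spikes: dict mapping time step -> set of input neurons firing then.
    Returns a list of (t, X fired?, Y fired?) tuples.
    """
    x_last = -10**9  # X last fired long ago
    trace = []
    for t in range(steps):
        inputs = spikes.get(t, set())
        y_fires = "B" in inputs                      # only B drives Y
        x_driven = bool(inputs & {"A", "B"})         # either input drives X
        x_fires = x_driven and (t - x_last > refractory)
        if x_fires:
            x_last = t
        trace.append((t, x_fires, y_fires))
    return trace

# A then B: X fires first, then only Y (X is still resetting).
print(simulate({0: {"A"}, 1: {"B"}}))
# B then A: X and Y fire together; A's later input finds X refractory.
print(simulate({0: {"B"}, 1: {"A"}}))
```

Under these assumptions the two input orders produce distinguishable output patterns, which is what makes the arrangement gate-like.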

How familiar are you with this area? I think that this sort of thing is already well-studied, but I have only vague memories to go by. As an aside, you only need (AND and NOT) or (OR and NOT), not all three; and if you have NAND or NOR, either of those is sufficient by itself.
I'm a computer expert but a brain newbie. The typical CPU is built from n-NOR, n-NAND, and NOT gates. The NOT gate works like a 1-NAND or a 1-NOR (they're the same thing, electronically). Everything else, including AND and OR, is made from those three. The actual logic only requires NOT plus one of {AND, OR}, or just NAND or NOR alone. Notice there are several sets of minimal gates and a larger set of gates in actual use. The brain (I'm theorizing now, I have no background in neural chemistry) has a similar set of basic gates that can be organized into a Turing machine, and the gate I described previously is one of them.
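The sufficiency claim (that NAND alone is enough) is quick to verify in code; here NOT, AND, and OR are all reconstructed from NAND and checked against the full truth table:

```python
from itertools import product

def nand(a, b):
    return not (a and b)

# Everything else built only from NAND.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Exhaustive check over all boolean inputs.
for a, b in product([False, True], repeat=2):
    assert not_(a) == (not a)
    assert and_(a, b) == (a and b)
    assert or_(a, b) == (a or b)
print("NAND reconstructs NOT, AND, and OR")
```

The same construction works with NOR by duality, which is why either gate on its own is called functionally complete.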
We don't run on logic gates. We run on noisy differential equations.
No. You can represent logic gates using neural circuits, and use them to describe arbitrary finite-state automata that generalize into Turing-complete automata in the limit of infinite size (or by adding an infinite external memory), but that's not how the brain is organized, and it would be difficult to have any learning in a system constructed in this way.
You might want to look into what's called ANN -- artificial neural networks.
ANNs don't begin to scratch the surface of the scale or complexity of the human brain. Not that they're not fun as toy models, or useful in their own right, just remember that they are oblivious to all human brain chemistry, and to chemistry in general.
Of course, but Cube is talking about "a similar set of basic gates that can be organized into a Turing machine" which looks like an ANN more than it looks like wetware.
[This comment is no longer endorsed by its author]