If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


I once had a system in which I was writing checkboxes on paper for tasks I wanted to do regularly.

Stuff like eating vitamins, or doing backups of my server.

It started with the typical daily/weekly/monthly todos, but it gradually evolved into something much less rigid, calculated in an (increasingly complex) spreadsheet.

For a long time, I've been working out the balance between this system being forgiving...

(as in, allowing for soft recovery, rather than being hit by "do 12 hours of jogging" after a week of vacation)

and also giving you accountability over a longer period

(as in, avoiding the "I'll skip it this week, and instead definitely do it next week" effect).
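The tradeoff described above can be sketched in a few lines of code. To be clear, this is a hypothetical illustration, not the actual spreadsheet or app logic; the function name and the `cap` parameter are made up for the example:

```python
# Hypothetical sketch of a "forgiving but accountable" recurring-task rule.

def sessions_owed(days_missed: int, interval: int, cap: int = 2) -> int:
    """How many sessions to ask for today, for a task due every `interval` days.

    Without a cap, a week of vacation on a daily task would demand 7
    sessions at once ("do 12 hours of jogging"). The cap gives soft
    recovery, while the uncapped backlog could still be tracked
    separately for longer-term accountability.
    """
    backlog = days_missed // interval   # sessions skipped in total
    return min(backlog, cap)            # ...but only ask for a few today
```

For instance, `sessions_owed(7, 1)` asks for 2 sessions after a week off a daily task, instead of all 7 at once.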

I've also recently had the idea to publish some Android apps, and one of the first ideas was to code a cleaner, leaner and meaner version of my old spreadsheet.

As far as productivity apps go, this is very basic stuff, but I haven't actually found anything out there that could replace my system.

So lo and behold.

It's still kinda maybe not feature complete, but I already use it myself (and I've finally retired the spreadsheet :D):

If you like this sorta stuff, give it a try and let me know what you'd like to see improved.

Saying all this without actually having seen the app: I have been trying out systems for a while now, as have Regex and various others. The introspective thing that I have noticed, and that you mention here without clearly identifying it, is the iterative development of systems. Which is to say that you started on paper, moved to a spreadsheet, and then moved to an app (with probably several versions of each). What makes the final version work, in the face of the potential complexity of starting a new system (and taking a leap), is partly the fact that you lived through the various versions and know why, how, and what factors changed to improve the system (such is the pure nature of iterative system development).

HOWEVER, by publishing only your final version, you publish only the (probably very good) system that you are used to, and not all the intermediate steps that made it possible and necessary to get here. While I imagine that every latest system so far developed by many various people (Productivity Ninja, GTD, FVP, to name a few) will have good features and functionality that are neat in themselves, without the iterative stages you don't really give people the same final system that you have come to be accustomed to.

What I am saying is: I'd like to see the whole process of how you got here, in the hopes of making sense of the successes/failures of your systems at doing what you want them to do, and of thereby being better able to apply the lessons to my own systems.

On top of that, a dream app would be one that starts as a simple list (like you did) and gradually offers to add complexity to your system (like you ended up making), but in such a way as to let people progress to the final version when they need/want it. I will look at the app and get back to you.
I like your analysis of this issue, though I think in this particular case the app actually remains very simple. If you only use the "do it every N days" type of tracking, you get pretty much just a list like the ones I used to have on paper. One thing I'm definitely seeing more clearly after reading your comment is that if I ever want to add more complexity to this app, I'll instead make a new app that will be the "next step in evolution". (This doesn't apply to UI improvements, of course, which the app still needs a lot of.) Haha, this calls for a long evening in front of a fireplace :)
I think you're coming on a little strong in ways you don't intend for requesting his process and previous system iterations. This reads as if you should never share any system without also sharing the process of how to get there, and most of the time that is filled with stuff no one really needs to see.
Yes, okay. What I mean to say is that there is a whole lot of value in the rest of the system-generation process that is missing here. Value that might help one understand better how/why it works the way it does, and consequently how to make it work for oneself.
I think your app is great! I am also the kind of person to get really excited about new productivity apps that have that one cool trick that makes them different from other apps, so I might not be a good gauge of how well your app would be received, but yeah, I love it. The only other self-tracking app I have used is Beeminder. My only gripe about Beeminder is that everything is linear: if you do 10 units more, you are 10 ahead; if you miss 10 units, you are now 10 units behind. I have always wanted some sort of discounting for being ahead, and some sort of sped-up recovery for being behind, and I think your app does this well.
How is it after a week? Do you still use it?
Update: Was I able to use the app successfully to increase my tasks by 50%? No. But I won't blame it on the app. I found that manually clicking "next day" was something I did not like. The temptation to delay clicking it and catch up the next day is strong; if it were automatic, I would have to live with the consequences of getting a bad score. Furthermore, if you accidentally click "next day" before updating other tasks, too bad: you can't reverse it. So for testing I made a few tasks and advanced them several days, but unless I reinstall the app, the date cannot roll back for when I want to stop testing and use it for real. There is also no way to easily see your progress for the last few days. It would be nice to click on a task and see how you did recently, or, if I missed a few days, to see when the last time I did the task was. Sure, there is an export button, but the data is hard to read if you just want to know quickly how you did recently.
Thanks a lot; I'll take this into account and think about how to improve it in future versions. With the "next day" button, though, it would be a hard tradeoff: you might not have had this experience, but sometimes you travel and your timezone settings get messed up, or your phone's clock gets reset, etc. It's possible to design something that would avoid these problems, but it's a pretty big change to the internals of the app. This part is surprising to me, though: the algorithm in the app makes it strictly easier to catch up when you click the button first and then do the tasks, rather than the other way around. Is that not enough incentive to make you want to click the button rather than "cheat"?
I think it is about the don't break the streak thing. Suppose that you decide to run every day, and you do it in the morning every day from Sunday to Thursday, then sleep in and don't have time for it on Friday. Now on Saturday you can either advance the day before your run and have a one day streak, or you can run twice, once before and once after advancing the day and have a seven day streak.
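The incentive here can be made concrete with a toy streak counter. This is an illustrative model only, not the app's actual code:

```python
def streak(history):
    """Length of the current run of consecutive 'done' days (True values)."""
    n = 0
    for done in reversed(history):
        if not done:
            break
        n += 1
    return n

# Sun-Thu done (5 days), Friday missed; it is now Saturday morning.
honest = [True] * 5 + [False]        # advance the day first, log the miss
assert streak(honest + [True]) == 1  # run once: the streak resets to 1

cheat = [True] * 5 + [True]          # run "for Friday" before advancing
assert streak(cheat + [True]) == 7   # then run again: an unbroken 7-day streak
```

In other words, even if the scoring algorithm rewards honesty, the streak display rewards delaying the button.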
This perfectly expresses my thoughts
I have not used it since testing it out. No change to how I feel about the app; I just haven't used any self-tracking apps recently. I use Trello as a general to-do app, which lacks recurring-task tracking. I will move my meditation and gym tasks to Hastewurm and report back in 2 weeks. Both of these are things I wish I did more of, by about 50%. My commitment to report back will probably increase the likelihood of me sticking to this goal, but I can nonetheless try to be mindful of that bias, and provide some feedback on efficacy and/or improvements.
I wouldn't use an app like that without the app being able to export data.
But it does in fact have this option. (Admittedly, the format of the data is not documented, but it's just plaintext K=V.)
Export data....why? Like, what other device are you going to load this data on? You've got your task tracker on your phone...and its records go where else? I mean, more features = more good, but I'm just curious about the use case here.
I carry my phone around a lot. I might lose my phone or it might get stolen. I also don't want to be locked into a single application. Especially when testing a new software. I want to keep my data and not be bound to a single service. For introspection/QS purposes it's also good to have the data in a way where I can analyse it further. For example I log all calls and all pomodoros I do to a Google calendar. Otherwise most of my data goes to Evernote.

Out of curiosity: because rationalists are supposed to win, are we (on average) below our respective national averages for things which are obviously bad (the low hanging fruits)?

In other words, are there statistics somewhere on rationalist or LessWrong fitness/weight, smoking/drinking, credit card debt, etc.?

I'd be curious to know how well the higher-level training affects these common failure modes.

I've wondered this too. In particular, for several years, at least among people I know, people have constantly questioned the level of rationality in our community, particularly our 'instrumental rationality'. This is summed up by the question: "if you're so smart, why aren't you rich?" That is, if rationalists are so rational, why aren't they leveraging their high IQs and their supposed rationality skills to perform in the top percentiles on all sorts of metrics of coveted success? Even by self-reports, such as the LW survey(s). However, I've thought of an inverse question: "if you're stupid, why aren't you poor?" I.e., while rationalists might not all be peak-happiness millionaires or whatever, we might also ask what the rates of (socially perceived) failure are, and how they compare to other cohorts, communities, reference classes, etc. You're the first person I've seen pose this question. There might have been others, though.
For many LWers, the answer is "I'm young," but I think there are also a lot of people where the answer is "I am rich."
Also worth noting: LWers should be extracting more utility from their money than non-LWers.
The rationalist community has a lot of independent thinkers, and independent thinkers are more likely than the general population to find the game of amassing wealth to be an obstruction to their freedom of thought and an inefficient path to happiness and life satisfaction. Also many rationalists are quite young, as Vaniver pointed out.
Heh. Maybe I am not a sufficiently independent thinker, but for me the greatest obstruction to freedom of thought and happiness and life satisfaction is having a daily job, especially one that resembles Dilbert comics. My problem with the "game of amassing wealth" is that (1) I am not very good at it, and (2) even when you are smart enough to double your wealth in a few years, if you start with a small amount, all you get is double a small amount, and there is a limited number of years in your lifetime. I mean, compared to my wealth 10 or 20 years ago, I am significantly richer, but if I kept the same pace, I would probably be able to retire at 60, which feels a bit late.
We don't want "are you rich, do you smoke" because of the selection effect (we are rich because we were born upper middle class, and we're not powerful because powerful people have better things to do than explore the internet until they land on odd forums). Otherwise the value of an idea is judged by the types of people who happen to stumble upon it. What we want is "after being exposed to the ideas, did you get richer?", "did you quit smoking?", etc. Before and after. IQ is just another selection-effect confound to control for. Priors say there is absolutely no way rationality training will alter your IQ (and besides, the IQ data is mostly from standardized test scores taken in high school anyhow). If high-IQ people end up here, that just means high-IQ people crawl the internet more and stick around more.
Thanks for being sane.
Some random thoughts about the questions: Not everyone who participated in the survey is a regular LW reader; it was open to the whole diaspora. Not everyone who reads LW regularly is also working on their own rationality. Some people are here for the insight porn; some people simply enjoy being in the company of other smart people. Not everyone who tries to become more rational is doing it correctly. For example, some people may go for the applause lights, or still compartmentalize on something important.

Now, assuming that you are trying to do the rational thing (but of course you are not perfect at it)... Also, assuming you have high intelligence (LW already selects for it), and you are mostly healthy (just a base-rate assumption)...

There are essentially two ways to become rich: get a high income, or multiply existing wealth. The second option is not available to those who don't have any significant "existing wealth". For those who do, I guess investing in passively managed index funds is the standard LW advice. Assuming that (feel free to adjust the numbers if they feel wrong) you can comfortably live on $2,000 a month, and you believe that index funds will return at least 4% yearly in the long term, all you need is to get $600,000, once, and then you can play the rest of your life in "easy mode".

On the other hand, if you start from zero and are able to save less than $1,000 a month, the bad news is that you are never going to get there. And if you want to get there in, say, 20 years, you had better save about $3,000 a month. So I guess the answer is that even for smart people, saving $3,000 a month is a difficult task, and 20 years is a long time (LW hasn't even existed for 20 years yet).

In other words, yes, it's true that most LW rationalists are not smart enough to make half a million dollars overnight. But after unpacking "so smart" and "rich", that shouldn't surprise many people. Anasûrimbor Kellhus would probably be able t
The LW surveys contain questions about whether people are regular LW readers and allow us to see how people who are regular readers differ.
Your math is a bit off -- you're forgetting that your savings also grow at 4%/year while you're accumulating them. So if you save $2,000/month and can get a stable 4% return (after taxes), in 20 years you will have roughly $715K with annual compounding, or about $734K with monthly compounding. The whole calculation, though, is based on guaranteed returns; if your returns are actually volatile (say, the mean is 4% with a noticeable standard deviation), the situation changes.
And of course all these calculations ignore inflation. If inflation is, say, 2%, then:

* to get $2k/month out of 4% nominal returns you need $1.2M rather than $600k; or
* to get $2k/month out of $600k, you need 4% real returns, i.e. about 6% nominal. And
* the equivalent of $2k/month now is about $3k/month in 20 years. On the other hand,
* your savings can reasonably be expected to increase in line with inflation too.
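The arithmetic in this subthread is easy to check with a short script. The numbers are the ones from the comments ($2k/month spending, 4% nominal return, 2% inflation); the two helpers are just the standard perpetuity and future-value-of-a-savings-stream formulas:

```python
def rentier_sum(monthly_spend: float, annual_return: float) -> float:
    """Capital needed to fund the given spending from returns alone."""
    return monthly_spend * 12 / annual_return

def future_value(monthly_saving: float, annual_return: float, years: int) -> float:
    """End value of a stream of monthly savings, compounded monthly."""
    r = annual_return / 12
    n = years * 12
    return monthly_saving * ((1 + r) ** n - 1) / r

# $2k/month at 4% nominal needs $600k; at a 2% real return
# (4% nominal minus 2% inflation) it needs $1.2M instead.
assert rentier_sum(2000, 0.04) == 600_000
assert rentier_sum(2000, 0.02) == 1_200_000

# $2k/month now is ~$3k/month in 20 years at 2% inflation.
assert round(2000 * 1.02 ** 20) == 2972
```

`future_value(2000, 0.04, 20)` comes out around $734K, which is the "your savings also grow while you accumulate them" correction above.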
Yep, so far we've been talking about nominal sums without considering their real purchasing power. The proper question of what is the sum of money that one can live off as a rentier to maintain a certain standard of living and how much needs to be saved for how long is... complicated.
Yup. The most sophisticated approach I've seen, which is clearly not actually sophisticated enough, is to guess at possible trajectories of future investment growth by some process along the lines of random sampling of past stock market returns, and then choose a sum that leads to you not running out of money in, say, at least 99% of those trajectories.
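A minimal version of that resampling approach might look like the following sketch. The historical-returns sample is entirely made up for illustration, and the function name is mine:

```python
import random

def ruin_probability(capital, annual_spend, past_returns,
                     years=40, trials=10_000, seed=0):
    """Fraction of bootstrapped return trajectories in which the money runs out.

    Each year of each trial draws one return (with replacement) from a
    sample of past returns -- the simplest form of the resampling idea.
    """
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        wealth = capital
        for _ in range(years):
            wealth = wealth * (1 + rng.choice(past_returns)) - annual_spend
            if wealth <= 0:
                ruined += 1
                break
    return ruined / trials

# Entirely made-up yearly return sample, for illustration only.
sample = [-0.20, -0.05, 0.02, 0.07, 0.07, 0.10, 0.15, 0.25]
p = ruin_probability(600_000, 24_000, sample)
```

You would then pick the smallest starting sum for which `p` stays below your chosen threshold (e.g. 1%).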
It's a better start than simple compounding interest calculations :-) To approach this from another side, one can buy an annuity (which provides a stream of income for the rest of your life). You need to save as much as is needed to buy such an annuity and then you're good (mostly). However I understand that these annuities are not... attractively priced, especially if you want one which adjusts your income stream for inflation.
That is also my understanding, and I doubt the annuity market has the properties required to make its prices reflect any sort of reality.
Have you looked into the census numbers?
I've skimmed them, but I don't remember seeing these kinds of statistics. I'll take another look though. Thanks.

I don't want to live forever myself, but I want people who want to live forever to live forever. Does that make me a transhumanist?

Probably, if you're spending time thinking about the possibilities and consequences. I challenge your statement though, and suspect you've got a near-far conflict in your wants. Unless you state the conditions under which you'll want to die, and think those conditions are inevitable or desirable, you want to live forever. I predict you'll always want to live for at least another few years, and only induction failure is making you say you don't want to live forever.
"Unless you state the conditions..." That is not true. You can want to live a finite life without wanting to die at any particular time. If you were offered the deal, "Choose the number x and you will receive that much utility, but if you do not choose, you will not receive any," then you will want to choose some finite number, despite the fact that you would prefer a greater number to any particular number. Those desires are consistent, not inconsistent. The problematic issue is in the territory, not in your map of it.
Ok, maybe you don't have to state the conditions, but you have to predict that there will be an actual time at which you want to die. I don't follow your utility comparison. I don't think of utility as a number in this way, but even if it were, that's not the deal being offered. In order to not want immortality, you have to want to die. I think this is pretty straightforward. The deal being offered is: "you expect some amount of utility every moment you experience; some of these amounts may be negative; you have some influence, but not actual control, over future experiences." If you predict that the sum of future experiences is negative, you would be better off dying now. If you predict positive, you should continue living. Unless you can predict a point at which you want to die, you should predict that you'll want to live.

A thought occurred to me on a divide in ethical views that goes frequently unremarked, so I thought I'd ask about it: How many of you think ethics/morality is strictly Negative (prohibits action, but never requires action), a combination of Both (can both prohibit or require action), or something else entirely?

ETA: First poll I've used here, and I was hoping to view it, then edit the behavior. Please don't mind the "Option" issue in the format.


I answered a slightly different question. I don't think all ethics or moral systems do either or both of these things. My preferred ruleset (consequentialist personal regret-minimization) both prohibits and requires action, and in fact doesn't distinguish between the two.
I'd classify it loosely as Both; nothing requires an ethical system to distinguish between the two cases, but I think it's a substantial divide in the way people tend to think about ethics.

I'm starting to think "ethics" is an incoherent concept. I'm a strict-negative ethicist - yet I do have an internal concept of a preference hierarchy, in terms of what I want the world to look like, which probably looks a lot like what most people would think of as part of their ethics system. It's just... not part of my ethics. Yes, I'd prefer it if poor people in other countries didn't starve to death, but this isn't an ethical problem, and trying to include it in your ethics looks... confused, to me. How can your ethical status be determined by things outside your control? How can we say a selfish person living in utopia is a better person, ethically, than a selfish person living in a dystopia?

Which isn't to say I'm right. More than half the users apparently include positive ethics in their ethical systems.
People in other countries (note: I'm anti-nationalist, and prefer to just say "people", or if I need to distinguish, "people distant from me") starving is not under my control, but I can have a slight influence that makes it a small amount better for a lot of them. To me, this absolutely puts it in bounds for ethical consideration. Put in decision-making terms, as opposed to ethical framing: "my utility function includes terms for the lives of distant strangers". For me, ethics is about analyzing and debating (with myself, mostly) the coefficients of those terms.
Okay. Imagine two versions of you: In one, you were born into a society in which, owing to nuclear war, the country you live in is the only one remaining. It is just as wealthy as our own current society owing to the point this hypothesis is leading to. The other version of you exists in a society much more like the one we live in, where poor people are starving to death. I'll observe that, strictly in terms of ethical obligations, the person in the scenario in which the poor people didn't exist is ethically superior, because fewer ethical obligations are being unmet. In spite of their actions being exactly the same. Outside the hypothetical: I agree wholeheartedly the world in which poor people don't starve is better than the one in which they do. That's the world I'd prefer exist. I simply fail to see it as an ethical issue, as I regard ethics as being the governance of one's own behavior rather than the governance of the world.
Hmm. You're getting close to Repugnant Conclusion territory here, which I tend to resolve by rejecting the redistribution argument rather than the addition argument. In my view, in terms of world-preference, the smaller world with no poverty is inferior, as there are fewer net-positive lives. If you're claiming that near-starving impoverished people are leading lives of negative value, I understand but do not agree with your position.
What's your reason for not agreeing with that position? I ask because my own experience is that I feel strongly inclined to disagree with it, but when I look closer I think that's because of a couple of confusions.

Confusion #1. Here are two questions we can ask about a life. (1) "Would it be an improvement to end this life now?" (2) "Would it be an improvement if this life had simply never been?" The question relevant to the Repugnant Conclusion is #2 (almost -- see below), but there's a tendency to conflate it with #1. (Imagine tactlessly telling someone that the answer to #2 in their case is yes. I think they would likely respond indignantly with something like "So you'd prefer me dead, would you?" -- question #1.) And, because people value their own lives a lot and people's preferences matter, a life has to be much, much worse to make the answer to #1 positive than to make the answer to #2 positive. So when we try to imagine lives that are just barely worth having (best not to say "worth living", because again this wrongly suggests #1), we tend to think about ones that are borderline for #1. I think most human lives are well above the threshold for saying no to #1, but quite a lot might be below the threshold for #2.

Confusion #2. People's lives matter not only to themselves but to other people around them. Imagine (ridiculously oversimple toy model alert) a community of people, all with lives for which the answer to question 2 above is (all things considered) yes, and who care a lot about the people around them; let's have a scale on which the borderline for question 2 is at zero, and suppose that someone with N friends scores -1/(N^2+1). Suppose everyone has 10 friends; then the incremental effect of removing someone is to improve the score by about 0.01 for their own life and reduce it by 10(1/82-1/101), or about 0.023, for their friends'. In other words, this world would be worse off without any individual in the community -- if what you imagine when assessing that is
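The toy model's numbers check out; the same calculation rendered as a short script (variable names are mine):

```python
# The comment's toy model: each person scores -1/(N^2 + 1), where N is
# their number of friends, and everyone starts with 10 friends.

def score(n_friends: int) -> float:
    return -1 / (n_friends ** 2 + 1)

# Effect of removing one person from the community:
gain = -score(10)                   # their own (negative) score no longer counts
loss = 10 * (score(9) - score(10))  # each of their 10 friends drops to 9 friends
net = gain + loss

# gain ~ +0.0099, loss ~ -0.0229: the world is worse off by ~0.013, even
# though every individual life scored below the question-2 threshold.
assert round(gain, 4) == 0.0099
assert round(loss, 4) == -0.0229
assert net < 0
```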
There are two problems. In the first scenario, in which ethics is an obligation (i.e., your ethical standing decreases for not fulfilling ethical obligations), you're ethically a worse person in a world with poverty, because there are ethical obligations you cannot meet. The idea of ethical standing being independent of your personal activities is, to me, contrary to the nature of ethics. In the second scenario, in which ethics is additive (you're not a worse person for not doing good; instead, the good you do adds to some sort of ethical "score"), your ethical standing is limited by how horrible the world you are in is - that is, the most ethical people can only exist in worlds in which suffering is sufficiently frequent that they can constantly act to avert it. The idea of ethical standing being dependent upon other people's suffering is also, to me, contrary to the nature of ethics. It's not a matter of which world you'd prefer to live in; it's a matter of how the world you live in changes your ethical standing. ETA: Although the "additive" model of ethics, come to think of it, solves the theodicy problem. Why is there evil? Because otherwise people couldn't be good.
I suspect I'm more confused than even this implies. I don't think there's any numerical ethical standing measurement, and I think that cross-universe comparisons are incoherent. Ethics is solely and simply about decisions - which future state, conditional on current choice, is preferable. I'm not trying to compare a current world with poverty against a counterfactual current world without - that's completely irrelevant and unhelpful. In a world with experienced pain (including some forms of poverty), an agent is ethically superior if it makes decisions that alleviate such pain, and ethically inferior if it fails to do so.
From my perspective, we have a word for that, and it isn't ethics. It's preference. Ethics are the rules governing how preference conflicts are mediated. Then imagine somebody living an upper-class life who is unaware of suffering. Are they ethically inferior because they haven't made decisions to alleviate pain they don't know about? Does informing them of the pain change their ethical status - does it make them ethically worse-off?
Absolutely agreed. But it's about conflicts among preferred outcomes of a decision, not about preferences among disconnected world-states. If they're unaware because there's no reasonable way for them to be aware, it's hard for me to hold them to blame for not acting on that. Ought implies can. If they're unaware because they've made choices to avoid the truth, then they're ethically inferior to the version of themselves which do learn and act.
Less about two outcomes your preferences conflict on, and more about, say, your preferences and mine. Insofar as your internal preferences conflict, I'm not certain ethics is the correct approach to resolve the issue. This leads to a curious metaethics problem: I can construct a society of more ethically perfect people just by constructing it so that other people's suffering is an unknown unknown. Granted, that probably makes me something of an ethical monster, but given that I'm making ethically superior people, is it worth the ethical cost to me? Once you start treating ethics like utility - that is, as a comparable, in some sense ordinal, value - you produce meta-ethical issues identical to the ethical issues with utilitarianism.
You're still treating ethical values as external, summable properties. You just can't compare the ethical value of people in radically different situations. You can compare the ethical value of two possible decisions in a single situation. If there's no suffering, that doesn't make people more or less ethical than if there is suffering - that comparison is meaningless. If an entity chooses to avoid knowledge of suffering, that choice is morally objectionable compared to the same entity seeking such knowledge. You can get away with this to some extent by generalizing and treating agents in somewhat similar situations as somewhat comparable - to the degree that you think A and B are facing the same decision points, you can judge the choices they make as comparable. But this is always less than 100%. In fact, I think the same about utility - it's bizarre and incoherent to treat it as comparable or additive. It's ordinal only within a decision, and has no ordering across entities. This is my primary reason for being consequentialist but not utilitarian - those guys are crazy.

After reading a Facebook post by Kaj Sotala about MessagEase, I switched to it from the default Android keyboard I was using, because it's a much better keyboard.

It allows faster typing. It allows typing beautiful Unicode that's hard to type even on a PC. It has macros that let me save commonly typed strings, such as Facebook birthday greetings and my email address. It has easy gestures for going to the top or the bottom of a document. And it has a copy-paste history.

I still use the default App launcher. Does somebody have a case why I should use a specific different launcher?


What do you think are good ideas for moonshot projects that have not yet been adequately researched or funded?

Software for automatically playing hypnosis audio via headphones to people undergoing surgery, in addition to standard anesthesia.
Leaving aside the bloody obvious things (universal basic income or other forms of care, global internet access, etc.): prediction markets. They've been tried, but they're dead due to gambling laws. Someone should give them a second try.
How is global internet access not funded? Google and Facebook both have programs for it, and on another front, SpaceX plans to launch the next Iridium satellites. And how are prediction markets dead? There's PredictIt. There's also Augur (currently in beta).
A friend feed that's like Facebook's, but audio instead of text or images, to give people a clear alternative to talk radio.
A pay-for-performance online marketplace for medical services.

Sometimes, things happen that feel subjectively significant, things that seem to throw earlier estimates out the window and lead to recalculations (at least it feels like that), as if an event happened that requires an answer. But it doesn't really condense into words; at least in my case, it feels like a sheet of sure belief in different things than those I have actually learned, with some unspecified ramifications.

How would one uphold rationality in the face of such a, well, learning experience?

Wait a few months to a year. It usually goes away.
Okay, I was wrong to be so vague somewhere where I'm anonymous enough. My father-in-law is a retired general practitioner (approximately), but people keep coming to him for help now and again. Recently he was asked to resuscitate a child, but his efforts came too late. The parents had driven to our house in the evening, when we were putting our kid to bed, and he (the kid) became quite excited at having unfamiliar people bursting in and asking for help. I told him he had to behave and not interrupt his grandfather's work, and we went to read a book. My mother-in-law was very upset and recounted details of the work going on in the yard, and I remember thinking that she needed to compartmentalize more. Then my father-in-law came back, washed his face, picked up my kid, and rocked him to sleep, totally composed. I had known he was a professional, but usually his professionalism was accompanied by, uh, loud noises (he has a carrying voice). This time... it was a perfectly normal evening. And I find that I respect him so much more. My model of doctors' professional behavior had been ruined by fiction (think McCoy from Star Trek, etc.), and now it seems just such a simple and hard thing. So... I didn't mean 'learning experience' in a bad way.
That sounds like a meaningful experience. Can you be more specific about the paradigm shift it caused and the questions you have about "upholding rationality"?
I guess it set the concepts of ruthlessness and cruelty further apart in my mind than they used to be. Before, when I had cause to be ruthless, I would always think to myself "but normal people do not interfere with other people selling rare flowers; I have to exercise kindness as a virtue, otherwise see Crime and Punishment for the logical conclusion". (C&P is my father's favourite book, which he used most often to talk to us about morality.) Time and time again I ran into the problem of "do I have a right to do this" and gradually decided that yes, I would just have to be cruel. And here my father-in-law made something which did shake him badly look like a trivial occurrence with which other people besides him simply did not have to engage, for all that my mother-in-law clearly saw it as ours to share in. They are both among the more normal people I know, and I don't really like him, but his brand of ruthlessness is one I had tried to develop and never could. It reset our boundaries, somehow; before, I think I demanded that he follow the same C&P guidelines, and now I'm trying not to. And I really truly believed them the consistent and rational approach to, er, life, even when I didn't behave accordingly, and now I don't have to. There's something 'normal people' do which doesn't require or invite this kind of moral questioning. And I wonder what else they can do that I cannot, and what of it I really should be doing.
Attempting to resuscitate a child, failing, and then going about one's day is neither ruthless nor cruel, but I think I understand what you mean. It can be jarring for some people when doctors seem unaffected by the high-intensity situations they experience. Doing good does sometimes require overriding instincts designed to prevent evil. For instance, a surgeon must overcome certain natural instincts not to hurt when she cuts into a patient's flesh and blood pours out. The instinct says this is cruelty; the rational mind knows it will save the patient's life. There are hazards involved in overriding natural instincts, because instincts exist for good reason - as in C&P, where the protagonist overrides natural instincts against murder because he is convinced it serves the greater good. There are also hazards involved in following natural instincts. Humans have the capacity for both. Following instincts vs. overriding instincts - both are appropriate at different times, and putting correctly proportioned trust in reasoning vs. instinct is important. You need to consider when instincts mislead, but you also need to consider when reasoning misleads. It would be a mistake to take a relatively clear-cut case of the doctor's override of natural sympathetic instinct (for which there is a great deal of training and precedent establishing that it is a good idea) and turn it into a generalized principle of "trust reason over moral instinct" under uncertainty. There is no uncertainty in the doctor's case; the correct path is obvious. Just because doctors are allowed to override instincts like "don't cut into flesh" and "grieve when witnessing death" in a case where it has already been predecided that this is a good idea doesn't mean they get free license to override willy-nilly whenever they've convinced themselves it's for a greater good. They still have to undergo the deliberative process of asking whether they've rationalized themselves into it.
I agree, although, given the same training you speak of, I think in their cases it is almost "instinct vs. reasoning", and so is not as hard a choice as it could be. (I also might be less unwilling to cut into flesh than other people, having had surgery myself and retained a mild interest in zootomy since my school years, so there's that.) And in C&P, as I recall, Svidrigaylov blackmailed Raskol'nikov quoting Raskol'nikov's own words that the prostitute's younger sister would go the same way...which might have been the first instance when I learned that people should care not to leak information, whether it be a statement of facts or a statement of their attitude to facts, however morally good it is. So now I take my observations with a grain of salt; and I want to trust my eyes, but that's about it...
It's often useful in cases like that to put your thoughts into writing.
Too confidential.
Even if you do not share your written thoughts with anyone, writing them down can help to organize your thoughts into a form that can be more easily analyzed and evaluated (by you).
You can always write it down into a well encrypted file on your computer.
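For the "well encrypted file" route, one low-tech option is a passphrase-encrypted file on disk. A minimal sketch, assuming the `openssl` command-line tool is installed; the filenames and passphrase below are placeholders:

```shell
# Encrypt a notes file with AES-256; -pbkdf2 applies key stretching to
# the passphrase-derived key, -salt randomizes the derivation.
printf 'my private thoughts\n' > notes.txt
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:correct-horse-battery-staple \
    -in notes.txt -out notes.txt.enc
rm notes.txt

# Decrypt later with the same passphrase:
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:correct-horse-battery-staple \
    -in notes.txt.enc -out notes.txt
```

One caveat: `-pass pass:...` can leave the passphrase in your shell history; `-pass stdin` avoids that, at the cost of typing it each time.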

File under "we're not as rich as we think we are": this Wiki page shows that economic-basket-case Greece has a higher median net worth than the US. Australia is astoundingly rich - $60k higher than the US average (which includes the megawealthy) and $175k higher than the US median. Even econo-sluggard Italy has a $100k higher median than the US.

You're reading the data wrong. Australian median = $225K, US average = $244K. Overall, I have doubts about their methodology. The source publication is here and there are some... non-intuitive numbers in there. For example, page 92 shows changes in household wealth between 2012 and 2013. According to their estimates, the Swedes became richer by 15.5% and the Japanese poorer by over 20% in a single year. That looks fishy to me. But yeah. Australians made out like bandits (ahem) selling ore to China.
Fixed. I was using the mean wealth instead of net mean wealth. It's still amazing to me that the Aussie average exceeds the US average, given that the averages include megawealthy tech and finance billionaires. And amazing that Greece and Italy have higher median wealth than the US.
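The mean-vs-median confusion in this subthread is easy to see with toy numbers (all invented here): a single megawealthy household drags the mean far up while leaving the median untouched.

```python
# Toy illustration (invented numbers) of why megawealthy outliers lift
# the mean but leave the median unchanged.
from statistics import mean, median

# nine ordinary households plus one billionaire (net worth in dollars)
wealth = [50_000] * 9 + [1_000_000_000]

print(mean(wealth))    # dominated by the single outlier
print(median(wealth))  # unaffected by the outlier
```

This is why a country can have billionaires pulling its average above everyone else's while its median household still looks poorer.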
US culture is extraordinarily spendy, which arguably is good for GDP but bad for individual wealth except for those whose individual wealth benefits from corporate gain (who are mostly well above the median and therefore don't affect the median wealth).
I think an apples-to-apples comparison is tricky here. Things like the age structure of the population can matter a lot: a country with an average age of 50 should have a higher level of net worth than one with an average age of 30. In any case, I'm not sure net worth is the right way to think about "how rich we are" compared to income, consumption, quality of life, or whatever.

I collected some social statistics from the internet and computed their correlations: https://drive.google.com/open?id=0B9wG-PC9QbVERHdiTi1uTlFMMlU My sources were: http://pastebin.com/ERk1BaBu

But I'm not sure how to proceed from there: https://drive.google.com/open?id=0B9wG-PC9QbVEWlRZSG9KM0ZFeVk ?? Dotted lines represent positive correlations and arrowed lines negative correlations.

I obtained that confusing chart by following this questionable method: https://drive.google.com/open?id=0B9wG-PC9QbVEVHg1T1lQNE1ZTk0 First, drop some of the trivial correlatio...

What is it that you want to do? Just looking at correlations and nothing else can lead to funny results.
I'm trying to get at least a vague handle on what I can legitimately infer from data that might, and probably does, contain circular causation, and I'm looking for statistical tools that might help me do that. Should I try Bayesian causal inference anyway, just to see what I get? Support vector machines? Markov random fields? Does the Spurious Correlations book have ideas on that? (No, it just seems to be an awesome set of correlations. Thanks, BTW.) (Also notice that these are not just any correlations. These are the strongest correlations that hold among a large number of variables relative to each other. I mean, I computed all possible correlations among every combination of 2 variables in hopes that the strongest I find for each variable might show something interesting.)
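For what it's worth, the procedure described above - compute every pairwise Pearson correlation, then keep each variable's strongest partner - fits in a few lines of pure-stdlib Python. The variable names and data below are invented for illustration:

```python
# For each variable, find the partner it correlates with most strongly
# (by absolute Pearson correlation). Toy data; real inputs would be the
# scraped country-level statistics.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def strongest_partner(data):
    """data: dict of variable name -> equal-length list of observations."""
    result = {}
    for a in data:
        best = max((b for b in data if b != a),
                   key=lambda b: abs(pearson(data[a], data[b])))
        result[a] = (best, pearson(data[a], data[best]))
    return result

toy = {
    "milk_consumption": [1, 2, 3, 4, 5, 6],
    "income_inequality": [6, 5, 4, 3, 2, 1],
    "unrelated_noise": [2, 9, 4, 4, 8, 3],
}
print(strongest_partner(toy))
```

Note that this sketch reproduces the selection step only; it says nothing about which of those strongest correlations are meaningful.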
That's not a very well-defined goal. You are engaging in what's known as a spaghetti factory analysis: make a lot of spaghetti, throw it at the wall, pick the most interesting shapes. This doesn't tell you anything about the world. Sure, you can start with correlations. But that's only a start. Let's say you've got a high correlation between A and B. The next questions should be: Does it make sense? Is there a plausible mechanism underlying this correlation? Is it stable in time? Is it meaningful? And that's before diving into causality, which correlations won't help you much with. You still need a better goal for the analysis. Nooooo! You don't understand basic stats; trying to (mis)use complicated tools will just let you confuse yourself more thoroughly.
Sure, I can always offer my own interpretations, but the whole idea was to minimize that as much as possible. I can rationalize anything. Watch: Milk consumption is negatively correlated with income inequality. Drinking less milk leads to stunted intelligence, resulting in a rise in income inequality. Or income inequality leads to a drop in milk consumption among poor families. Or the alien warlord Thon-Gul hates milk and equal incomes. What conditions must my goal satisfy in order to qualify as a "well-defined goal"? Have I made any actual (meaning technical) mistakes so far? (Anyway, thanks for reminding me to check for temporal stability. I should write a script to scrape the data off pdfs. (Never mind, I found a library.))
I believe this idea to be misguided. The point of the process is to understand. You can't understand without "interpretation" -- looking for just the biggest numbers inevitably leads you astray. The issue isn't what you can rationalize -- "don't be stupid" is still the baseline, level zero criterion. A specification of what kind of answers will be acceptable and what kind will not. Are you asking whether your spaghetti factory mixes flour and water in the right ratio?
Not being stupid is an admirable goal, but it's not well-defined. I tried Googling "spaghetti factory analysis" and "spaghetti factory analysis statistics" for more information, but it's not turning up anything. Is there a standard term for the error you are referring to? Can't I keep my common sense, but make all possible comparisons anyway, just to inform my common sense as to the general directions in which the winds of evidence are blowing? I don't see how informing myself of correlations harms my common sense in any way. The only alternative I can think of is to stick to my prejudices, and whenever some doubt arises as to which of my prejudices has a stronger claim, thoroughly investigate real-world data to settle the dispute between the two - and as soon as that process is over, stop immediately because nothing else matters. Is that the course of action you recommend?
It's not a goal. It is a criterion you should apply to the steps you intend to take. I admit to it not being well-defined :-) In statistics that used to be called "data mining" and was a bad thing. Data science repurposed the term and it's now a good thing :-/ Andrew Gelman calls a similar phenomenon the "garden of forking paths" (see e.g. here). Basically, the problem is paying attention to noise. You can. It's just that you shouldn't attach undue importance to which comparison came first and which came second. You're generating estimates, and at the very minimum you should also be generating what you think are the errors of your estimates - these should help establish how meaningful your ranking of all the pairs is. And you still need to define a goal. For example, a goal of explanation/understanding is different from the goal of forecasting. I'm not telling you to ignore the data. I'm telling you to be sceptical of what the data is telling you.
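The "paying attention to noise" point can be checked directly with a simulation. A sketch under made-up parameters: generate many variables of pure independent noise over a modest number of "countries" and look at the strongest pairwise correlation that shows up by chance alone.

```python
# With many variables and few observations, the *strongest* pairwise
# correlation is sizeable even when every variable is independent noise.
# All parameters below are illustrative.
import math
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def max_abs_corr(n_vars, n_obs, rng):
    data = [[rng.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]
    return max(abs(pearson(data[i], data[j]))
               for i in range(n_vars) for j in range(i + 1, n_vars))

rng = random.Random(0)
# 40 noise "indicators" measured across 50 "countries":
print(max_abs_corr(40, 50, rng))  # far from zero despite no real relationship
```

So a ranking of "strongest correlations found" is exactly where chance findings concentrate, which is why the error bars on each estimate matter.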
Thank you! Those data mining algorithms are exactly what I was looking for. (Personally, I would describe the situation you are warning me against as reducing it "more than is possible" rather than "as much as possible". I am definitely in favor of using common sense.)

Incidentally, do we have anybody about who can answer a very specific question about meditation practice? (And if you don't know exactly why I'm asking this question, instead of asking the question I want to ask, you shouldn't volunteer to try to answer.)

I have been meditating for a long time and I'm learning from qualified people. I think I can answer a wide range of questions, but there might be questions arising from techniques I don't know, where I can't give good answers.

In lieu of a media thread

How much time would it take you to write a short description of what is in the linked article (I assume you have read it) and why anyone might be interested in reading it? Compare that with the time spent collectively by all the people who click the link and then feel confused. (On the other hand, if no one clicks the link, what's the point of posting it?)