Not all of the MIRI blog posts get cross-posted to Less Wrong. Examples include the recent post "AGI outcomes and civilizational competence" and most of the conversations posts. Since the comment section on the MIRI site doesn't seem to get used much, if at all, perhaps these posts would receive more visibility, and more discussion would occur, if they were linked to or cross-posted on LW?
Re: "civilizational incompetence". I've noticed "civilizational incomptence" being used as a curiosity stopper. It seems like people who use the phrase typically don't do much to delve in to the specific failure modes civilization is falling prey to in the scenario they're analyzing. Heaven forbid that we try to come up with a precise description of a problem, much less actually attempt to solve it.
(See also: http://celandine13.livejournal.com/33599.html)
Is the recommended courses page on MIRI's website up to date with regard to which textbooks are recommended for each topic? Should I take the recommendations fairly seriously, or more with a grain of salt? I know the original author is no longer working at MIRI, so I'm feeling a bit unsure.
I remember lukeprog used to recommend Bermudez's Cognitive Science over many others. But then So8res reviewed it and didn't like it much, and now the current recommendation is for The Oxford Handbook of Thinking and Reasoning, which I haven't really seen anyone say much about.
There are a few other things like this. For example, So8res apparently read Heuristics and Biases as part of his review of books on the course list, but it no longer appears there, and under the heuristics and biases section Thinking and Deciding is recommended instead (once reviewed by Vaniver).
No, it's not up to date. (It's on my list of things to fix, but I don't have many spare cycles right now.) I'd start with a short set theory book (such as Naive Set Theory), follow it up with Computability and Logic (by Boolos), and then (or if those are too easy) drop me a PM for more suggestions. (Or read the first four chapters of Jaynes on Probability Theory and the first two chapters of Model Theory by Chang and Keisler.)
Edit: I have now updated the course list (or, rather, turned it into a research guide), which is fairly up to date (if unpolished) as of 6 Nov 14.
Luke's IAMA on reddit's r/futurology in 2012 was pretty great. I think it would be cool if he did another; a lot has changed in 2+ years. Maybe it could coincide with the December fundraising drive?
The outside view.... (The whole link is quoted.)
...Yesterday, before I got here, my dad was trying to fix an invisible machine. By all accounts, he began working on the phantom device quite intently, but as his repairs began to involve the hospice bed and the tubes attached to his body, he was gently sedated, and he had to leave it, unresolved.
This was out-of-character for my father, who I presumed had never encountered a machine he couldn’t fix. He built model aeroplanes in rural New Zealand, won a scholarship to go to university, and ended up as an aeronautical engineer for Air New Zealand, fixing engines twice his size. More scholarships followed and I first remember him completing his PhD in thermodynamics, or ‘what heat does’, as he used to describe it, to his six-year-old son.
When he was first admitted to the hospice, more than a week ago, he was quite lucid – chatting, talking, bemoaning the slow pace of dying. “Takes too long,” he said, “who designed this?” But now he is mostly unconscious.
Occasionally though, moments of lucidity dodge between the sleep and the confusion. “When did you arrive?” he asked me in the early hours of this morning, having woken up wanting water. Onc
Today I had an aha moment when discussing coalition politics (I didn't call it that, but it was) with elementary schoolers, 3rd grade.
As a context: I offer an interdisciplinary course in school (voluntary, one hour per week). It gives a small group of pupils a glimpse of how things really work. Call it rationality training if you want.
Today the topic was pairs and triples. I used analogies from relationships: couples, parents, friendships. What changes in a relationship when a new element appears? Why do relationships form in the first place? This revealed differences in how friendships work among boys and among girls. In this class, at this moment at least, the girl friendships were largely coalition politics: "If you do this you are my best friend," or "No, we can't be best friends if she is your best friend." For the boys it appears to be at least quantitatively different. But maybe just the surface differs.
In the end I represented this as graphs (kind of) on the board. And the children were delighted to draw their own coalition diagrams, even abbreviating names to single letters. You wouldn't have guessed that these diagrams came from 3rd graders.
You may be interested in "Chimpanzee Politics" by Frans de Waal, which is about exactly that (observing a group of chimps in a zoo, and how their politics and alliances evolve, with a couple of coups).
Recently, I started a writing wager with a friend to encourage us both to produce a novel. At the same time, I have been improving my job hunting by narrowing my focus on what I want out of my next job and how I want it. While doing these two activities, I began to think about what I was adding to the world. More specifically, I began to ask myself what good I wanted to make.
I realized that my wish to write a novel came not from a desire to add a good to the world (I don't want to write a world-changing book), but just from enjoyment. So, I looked at my job. I realized that it was much the same. I'm not driven to libraries specifically by a desire to improve the world's intellectual resources; that's just a side effect. I'm driven to them out of enjoyment of the work.
So, if I'm not producing good from the two major productions of my life, I thought about what else I could produce or if I should at all. But I couldn't think of any concrete examples of good I could add to the world outside of effective altruism. I'm not an inventor nor am I a culture-shifting artist. But I wanted to find something I could add to the world to improve it, if only for my own vanity.
I decided, for the time b...
Yes, take the Invisible Hand approach to altruism: by pursuing your own productive wellbeing you will generate wellbeing in the worlds of others. Trickle-down altruism is a feasible moral policy. Come to the Dark Side and bask in Moral Libertarianism.
How Communities Work, and What Wrecks Them
One of the first things I learned when I began researching discussion platforms two years ago is the importance of empathy as the fundamental basis of all stable long term communities. The goal of discussion software shouldn't be to teach you how to click the reply button, and how to make bold text, but how to engage in civilized online discussion with other human beings without that discussion inevitably breaking down into the collective howling of wolves.
Behavior patterns that grind communities down: endless contrarianism, axe-grinding, persistent negativity, ranting, and grudges.
I posted a link to the 2014 survey in the 'Less Wrong' Facebook group, and some people commented that they had filled it out. Another friend of mine started a Less Wrong account to comment that she did the survey, and got her first karma. Now I'm curious how many lurkers become survey participants and are then incentivized to start accounts to get the promised karma by commenting that they completed it. If it's a lot, that's cool, because having one's first comment upvoted right after registering an account on Less Wrong seems like a way of overcoming the psychological barrier of 'oh, I wouldn't fit in as an active participant on Less Wrong...'
If you, or someone you know, got active on Less Wrong for the first time because of the survey, please reply as a data point. If you're a regular user who has a hypothesis about this, please share. Either way, I'm curious to discover how strong an effect this is, or is not.
Someone has created a fake Singularity Summit website.
(Link is to MIRI blog post claiming they are not responsible for the site.)
MIRI is collaborating with Singularity University to have the website taken down. If you have information about who is responsible for this, please contact luke@intelligence.org.
Laundry (plus ironing, if you have clothes that require that - I try not to), washing up (I think this is called doing the dishes in America), mopping, hoovering (vacuuming), dusting, cleaning bathroom and kitchen surfaces, cleaning toilets, cleaning windows and mirrors. That might cover the obvious ones? Seems like most of them don't involve much learning but do take a bit of getting round to, if you're anything like me.
I'd add, not leaving clutter lying around. It both collects dust, and makes cleaning more of an effort. Keep it packed away in boxes and cupboards. (Getting rid of clutter entirely is a whole separate subject.)
It's really hard to estimate that accurately, because for me something like 90% of cleanliness is developing habits that couple it with the tasks that necessitate it: always and automatically washing dishes after cooking, putting away used clothes and other sources of clutter, etc. Habits don't take mental effort, but for the same reason it's almost impossible to quantify the time or physical effort that goes into them, at least if you don't have someone standing over you with a stopwatch.
For periodic rather than habitual tasks, though, I spend maybe half an hour a week on laundry (this would take longer if I didn't have a washer and dryer in my house, though, and there are opportunity costs involved), and another half hour to an hour on things like vacuuming, mopping, and cleaning porcelain and such.
Assume that Jar S contains just silver balls, whereas Jar R contains ninety percent silver balls and ten percent red balls.
Someone secretly and randomly picks a jar, with an equal chance of choosing either. This picker then takes N randomly selected balls from his chosen jar with replacement. If a ball is silver he keeps silent, whereas if a ball is red he says “red.”
You hear nothing. You make the straightforward calculation using Bayes’ rule to determine the new probability that the picker was drawing from Jar S.
But then you learn something. The red balls are bombs and if one had been picked it would have instantly exploded and killed you. Should learning that red balls are bombs influence your estimate of the probability that the picker was drawing from Jar S?
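For concreteness, here is a minimal sketch of the "straightforward calculation" before the bomb twist (the code and function name are my own illustration, not part of the puzzle as posed): silence is certain under Jar S and has probability 0.9^N under Jar R, so with equal priors Bayes' rule gives P(S | silence) = 1 / (1 + 0.9^N).

```python
# Posterior that the picker chose Jar S, given N draws and no "red" announced.
# Priors and likelihoods follow the puzzle as stated; the function name is mine.
def p_jar_s_given_silence(n):
    p_s, p_r = 0.5, 0.5            # equal chance of either jar
    silence_given_s = 1.0 ** n     # Jar S: every ball is silver
    silence_given_r = 0.9 ** n     # Jar R: each draw is silver with probability 0.9
    return (silence_given_s * p_s) / (silence_given_s * p_s + silence_given_r * p_r)

for n in (1, 5, 10, 20):
    print(n, round(p_jar_s_given_silence(n), 3))
# e.g. N = 10 gives roughly 0.74; the question is whether that number
# should change once you learn that a red ball would have killed you.
```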
I’m currently writing a paper on how the Fermi paradox should cause us to update our beliefs about optimal existential risk strategies. This hypothetical is attempting to get at whether it matters if we assume that aliens would spread at the speed of light killing everything in their path.
I had a conversation with another person about this Leslie's-firing-squad type of stuff. Basically, I came up with a caveman analogy, with the cavemen facing lethal threats. It's pretty clear - from the outside - that the cavemen who do probability correctly, and don't do anthropic reasoning with regard to tigers in the field, will do better at mapping lethal dangers in their environment. A quick simulation of the jar version above illustrates the point; see the sketch below.
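To illustrate that claim (my own sketch, not from the original comment), a Monte Carlo check on the jar puzzle shows that the plain Bayesian posterior stays calibrated among survivors: among observers who draw N balls and never hit a bomb, the fraction who were actually facing the all-silver jar matches the naive answer, with no anthropic correction needed.

```python
import random

# Monte Carlo check (illustrative; parameters mirror the jar puzzle above):
# among observers who survive N draws, how often was the jar actually all-silver?
def survivor_calibration(n_draws, trials=200_000, seed=0):
    rng = random.Random(seed)
    survivors = safe_jar_survivors = 0
    for _ in range(trials):
        safe_jar = rng.random() < 0.5                     # Jar S vs Jar R, equal priors
        p_red = 0.0 if safe_jar else 0.1                  # red balls only exist in Jar R
        died = any(rng.random() < p_red for _ in range(n_draws))
        if not died:
            survivors += 1
            safe_jar_survivors += safe_jar
    return safe_jar_survivors / survivors

n = 10
print(survivor_calibration(n))    # empirical frequency of Jar S among survivors
print(1 / (1 + 0.9 ** n))         # naive Bayes posterior, roughly 0.74
```

On this toy model, the "caveman" who simply applies Bayes' rule to what he observed ends up with a map that matches the actual frequencies among observers in his situation.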
I have a question for anyone who spends a fair amount of their time thinking about math: how exactly do you do it, and why?
To specify, I've tried thinking about math in two rather distinct ways. One is verbal and involves stating terms, definitions, and the logical steps of inference I'm making in my head or out loud, as I frequently talk to myself during this process. This type of thinking is slow, but it tends to work better for actually writing proofs and when I don't yet have an intuitive understanding of the concepts involved.
The other is nonverbal and based on understanding terms, definitions, theorems, and the ways they connect to each other on an intuitive level (note: this takes a while to achieve, and I haven't always managed it) and letting my mind think it out, making logical steps of inference in my head, somewhat less consciously. This type of thinking is much faster, though it has a tendency to get derailed or stuck and produces good results less reliably.
Which of those, if any, sounds closer to the way you think about math? (Note: most of the people I've talked to about this don't polarize it quite so much and tend to do a bit of both, i.e. thinking through a pro...
I've recently started a tumblr dedicated to teaching people what amounts to Rationality 101. This post isn't about advertising that blog, since the sort of people that actually read Less Wrong are unlikely to be the target audience. Rather, I'd like to ask the community for input on what are the most important concepts I could put on that blog.
(For those that would like to follow this endeavor, but don't like tumblr, I've got a parallel blog on wordpress)
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.