From "A Bitter Ending":
...At a conference back in the early 1970s, Danny [Kahneman] was introduced to a prominent philosopher named Max Black and tried to explain to the great man his work with Amos [Tversky]. "I’m not interested in the psychology of stupid people," said Black, and walked away.
Danny and Amos didn’t think of their work as the psychology of stupid people. Their very first experiments, dramatizing the weakness of people’s statistical intuitions, had been conducted on professional statisticians. For every simple problem that fooled undergraduates, Danny and Amos could come up with a more complicated version to fool professors. At least a few professors didn’t like the idea of that. "Give people a visual illusion and they say, ‘It’s only my eyes,’ " said the Princeton psychologist Eldar Shafir. "Give them a linguistic illusion. They’re fooled, but they say, ‘No big deal.’ Then you give them one of Amos and Danny’s examples and they say, ‘Now you’re insulting me.’ "
In late 1970, after reading early drafts of Amos and Danny’s papers on human judgment, Edwards [former teacher of Amos] wrote to complain. In what would be the first of m
We don't have an open quotes thread on the main page, but this made me chuckle:
"mathematician thinks in numbers, a lawyer in laws, and an idiot thinks in words." from Nassim Taleb in
I think this paper implies that rare harmful genetic mutations explain much of the variation in human intelligence. Since it will soon be easy for CRISPR to eliminate such mutations in embryos, this paper's results, if true, mean that genetic engineering for super-intelligence will be relatively easy.
Is it currently legal to run a for-money prediction market in Canada? I assume the answer is "no," but I was surprisingly unable to find a clear ruling anywhere on the Internet. All I can find is this article which suggests that binary options (which probably includes prediction markets) exist in a legally nebulous state right now.
Request for programmers: I have developed a new programming trick that I want to package up and release as open-source. The trick gives you two nice benefits: it auto-generates a flow-chart diagram description of the algorithm, and it gives you steppable debugging from the command line without an IDE.
The main use case I can see is when you have some code that is used infrequently (maybe once every 3 months), and by default you need to spend an hour reviewing how the code works every time you run it. Or maybe you want to make it easier for coworkers to get...
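The trick itself hasn't been released, but a minimal sketch of the two advertised benefits (recording steps for a flow-chart plus command-line stepping without an IDE) might look like the following. All names here are hypothetical, not the actual package:

```python
# Hypothetical sketch: a @step decorator that records each named step
# (for auto-generating a flow-chart) and can pause before running it
# (steppable debugging from the command line, no IDE needed).
steps = []  # (function name, description) pairs, in execution order

def step(description, interactive=False):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            steps.append((fn.__name__, description))
            if interactive:
                input(f"next: {fn.__name__} -- {description} [Enter to run] ")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@step("load raw records")
def load():
    return [1, 2, 3]

@step("double every record")
def transform(data):
    return [x * 2 for x in data]

result = transform(load())

# Emit the recorded pipeline as a Graphviz DOT flow-chart description.
nodes = [name for name, _ in steps]
edges = "\n".join(f'  "{a}" -> "{b}";' for a, b in zip(nodes, nodes[1:]))
print("digraph algorithm {\n" + edges + "\n}")
```

Running with `interactive=True` turns every decorated call into a breakpoint, which covers the "review how the code works once every 3 months" use case: the flow-chart and the pause prompts double as living documentation.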
(some previous discussion of predictionbook.com here)
[disclaimer: I have only been using the site seriously for around 5 months]
I was looking at the growth of predictionbook.com recently, and there has been a fairly stable addition of about 5 new public predictions per day since 2012 (counting only new predictions, not additional wagers on existing ones). I was curious why the site does not seem to be growing, and why it is so rarely mentioned or linked to on lesswrong and related blogs.
(sidebar: Total predictions (based on the IDs of ...
Has there been any discussion or thought of modifying the posting of links to support a couple paragraphs of description? I often think that the title alone is not enough to motivate or describe a link. There are also situations where the connection of the link content to rationality may not be immediately obvious and a description here could help clarify the motivation in posting. Additionally, it could be used to point readers to the most valuable portions of sometimes long and meandering content.
Looks like the 'RECENT ON RATIONALITY BLOGS' section on the sidebar is still broken.
Is this a difficult fix?
What advice would you give to a 12-year-old boy who wants to become great at drawing and painting?
(Let's assume that "becoming great at drawing and painting" is a given, so please no advice like "do X instead".)
My thoughts: There is the general advice about spending "10,000 hours", for example by allocating a fixed slot in your schedule (e.g. each day between 4 AM and 5 AM, whether you feel like doing it or not). And the time is best spent learning and practicing new-ish stuff, as opposed to repeating what you are already...
Does anyone have a backup of that one scifi short story from Raikoth about future AGI and acausal trade with simulated hypothetical alien AGI? The link is broken. http://www.raikoth.net/Stuff/story1.html
Consistency in Arithmetic
Double the debt: 2 * (-1) = -2. OK.
But: -2 * (-1) = 2. OK?
Who will allow you to multiply your debt by another's debt to get rid of it?
2 * (-1) + (-2) * (-1) = (2 - 2) * (-1) = 0 * (-1) = 0
But...
2 * (-1) + (-2) * (-1) = -2 + (-2) * (-1) = 0
Therefore...
-2 * (-1) = 2
Ian Stewart, Professor Stewart's Cabinet of Mathematical Curiosities, Profile Books, 2008, pages 37-38.
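Stewart's consistency argument can be checked mechanically: distributivity alone pins down the value of (-2) * (-1), with no appeal to debts at all.

```python
# Distributivity forces (-2) * (-1) = 2:
# (2 + (-2)) * (-1) = 0 * (-1) = 0, so 2*(-1) + (-2)*(-1) = 0,
# hence (-2)*(-1) must equal -(2 * (-1)) = 2.
x = -(2 * (-1))           # the only value consistent with the expansion
assert 2 * (-1) + x == 0  # the distributed sum really is zero
print(x)  # 2
```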
So is mathematics mentally created, looking objective only because of primordial choices we made? A kind of subconscious of the species; and we created computers because we already think that way...
Reposting this from last week's open thread because it seemed to get buried
Is Newcomb's Paradox solved? I don't mean from a decision standpoint, but the logical knot of "it is clearly, obviously better to one-box" versus "it is clearly, logically proven better to two-box". I think I have a satisfying solution, but it might be old news.
It's solved for anyone who doesn't believe in magical "free will". If it's possible for Omega to correctly predict your action, then it's only sane to one-box. Only decision systems that deny this ability to predict will two-box.
Causal Decision Theory, because it assumes single-direction-causality (a later event can't cause an earlier one), can be said to deny this prediction. But even that's easily solved by assuming an earlier common cause (the state of the universe that causes Omega's prediction also causes your choice), as long as you don't demand actual free will.
"Why Boltzmann Brains Are Bad" by Sean M. Carroll https://arxiv.org/pdf/1702.00850.pdf
Two excerpts: "The data that an observer just like us has access to includes not only our physical environment, but all of the (purported) memories and knowledge in our brains. In a randomly-fluctuating scenario, there’s no reason for this “knowledge” to have any correlation whatsoever with the world outside our immediate sensory reach. In particular, it’s overwhelmingly likely that everything we think we know about the laws of physics, and the cosmological model we have constructed that predicts we are likely to be random fluctuations, has randomly fluctuated into our heads. There is certainly no reason to trust that our knowledge is accurate, or that we have correctly deduced the predictions of this cosmological model.”
"If we discover that a certain otherwise innocuous cosmological model doesn’t allow us to have a reasonable degree of confidence in science and the empirical method, it makes sense to reject that model, if only on pragmatic grounds”
My opinion: I agree with the idea that a BB can’t know whether it is a BB or not, and I wrote about this on LessWrong, but that is not a basis for concluding that the BB theory has zero probability. We can’t assign zero probability to theories just because we don’t like them; that is a great way to start ignoring our cognitive biases.
My position: There is no problem with being a BB:
1) If nothing else exists, different BB states are connected with each other like the digits of the natural numbers, and this way of connecting them creates an almost-normal world, which may even have testable predictions. (Dust theory)
2) If a special type of BB, call them BB-AIs, exists and dominates the landscape, such BB-AIs create simulations full of human minds, so we are probably in one of them. (The idea is that superintelligent computers are a more probable, and therefore more frequent, type of BB than messy human minds; or that any single BB-AI creates more human simulations than random BBs appear.)
3) If both the real world and BBs exist, each BB corresponds to some state in the real world. Since under UDT any observer should reason as the whole set of similar observers, it follows that I can’t be just a BB: I am a number of BBs plus some real me. And I can ignore the BB part of me, because some form of “quantum immortality” transfers dead BBs into the “real me” every second. In short: “big world immortality” completely neutralises the BB problem.
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "