Ben Goertzel has a rather long psi-related article in Humanity Plus Magazine, apparently prompted by the recent precognition study to be published in the Journal of Personality and Social Psychology. He's arguing that psi is real and that we should expect to see the results of this study replicated.

I grew up very skeptical of claims of psychic power, jaded by stupid newspaper astrology columns and phony fortune-tellers claiming to read my future in their crystal balls for $20.  Clearly there are many frauds and self-deluded people out there, falsely claiming to perceive the future and carry out other paranormal feats.  But this is no reason to ignore solid laboratory evidence pointing toward the conclusion that, in some cases, precognition really does exist.


This pattern has played out many, many times. Unreplicated studies claiming to show psychic powers are not uncommon. They always either have flaws in their methodology that explain the results, or fail to replicate (either because it was an artifact of publication bias and a low p-value, because of a methodological flaw that was not recorded, or because of cooked data). I am four-nines confident (p=0.9999) that either this study will fail to replicate, that someone will point out a flaw in its methodology, or that when I look for a flaw in its methodology myself I'll find one.

And quite frankly, both Ben Goertzel and the editors of H+ Magazine should've reached the same conclusion. Reporting on an unreplicated study claiming precognition is stupid and irresponsible, and doing so dramatically lowers my respect for Ben Goertzel and H+ Magazine.

Ben has been promoting psi research since 2008. See his review of Damien Broderick's book on psi.

Robin would suggest betting, I think.

The obvious way this game is played is to offer specific bets, with specific odds and amounts. Then wait for a counter offer, or the absence thereof.

One of the comments in the article links to a paper which describes the failure to replicate Bem's results: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1699970

Good find! To expand slightly: They replicated one of Bem's nine studies and failed to replicate the result. The study they replicated did not involve erotic images.


I've been seriously considering setting up a web application to try and massively replicate the results of this study.

~55% of natural science college professors believe in some kind of ESP, as do ~35% of psychology college professors.

I can't really form a coherent response to that.

My instinctive response to that chart was "YOU LIE!" I suspect the data is either based on a skewed sample, an incredibly broad definition of ESP, or simply wrong.

I expect the true number to be far higher than I'd like to believe, but 55% just doesn't make sense.

I expect the true number to be far higher than I'd like to believe

You'd like it to be true, not you'd like to believe it (I do so hope).

"I would like to believe it, and I would like the reason for that belief to be that it is true."

I would like to believe that the true figure isn't very high. But I like having beliefs that correspond with reality much more.

Do you like liking to believe things for reasons other than their truth?

I should rephrase: I would like to accurately believe that the figure isn't very high.

I'm actually really uncertain about whether there's a meaningful distinction between "I want to sincerely and rationally believe x" and "I want x to be true" for a perfect Bayesian. For humans, of course, the distinction is enormous.

The article is available here as a 14 MB download. It is from the Zetetic Scholar, which Marcello Truzzi founded after first co-founding CSICOP and then discovering that his partners did not want to involve pro-paranormal people.

The linked article, second image. Though I don't know the credibility of the study it was based on.

Thank goodness, because I was starting to wonder whether I should be worried about Ben Goetzel's AGI project. This puts my mind at ease, at least for a while.

Thank goodness, because I was starting to wonder whether I should be worried about Ben Goetzel's AGI project. This puts my mind at ease, at least for a while.

It shouldn't (not as a general rule; Ben's case might have other valid reasons to come to the same conclusion). Being confused in one area doesn't necessarily make you confused in another (or prevent you from being capable despite confusion). Not getting the problem of FAI doesn't prevent you from working towards AGI. Believing in God or Santa Claus or flying yogis doesn't prevent you from working towards AGI. Evolution didn't even have a mind.

Being confused in one area doesn't necessarily make you confused in another (or prevent you from being capable despite confusion).

One doesn't need necessity to draw comfort. Being confused in one area is Bayesian evidence that one is confused in other areas, and being confused in a given area is Bayesian evidence for lack of sufficient capability in that area.

Well, I've already observed what looks like confusion in the area of AGI. We can imagine this new evidence shows susceptibility to biases that would hinder his work.

But until now I've tentatively assumed Ben did not plan for interesting AI results because on some level he didn't expect to produce any. More precisely, I concluded this on the basis of two assumptions: that he didn't want to die, and that expecting interesting results would make him worry somewhat about death.

I specifically said he did not make his decision based on the arguments that I saw him present -- in part because he distinguished claims that count as logically equivalent if we reject the possibility of researchers unconsciously restricting the AI's actions or predicting them by non-rational means. If he actually assigns significant probability to that last option, then maybe we should worry more!

That article lost me at the word "pseudoskepticism", which has various denotative definitions but in practice usually means "but you're being nasty to my interest."

Interesting. I'm a little disappointed at the incorrect application of the "file drawer" effect by the article cited in the introduction, which should assume a normal distribution of effects rather than treating all unpublished studies as perfectly null. In general I would have liked to see a mention of more powerful statistical techniques - check out funnel plots, they're sweet.
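To make the file-drawer mechanism concrete, here is a minimal simulation sketch in Python. The study size of 100 trials and the count of 2,000 studies are illustrative assumptions, not figures from the article: every simulated study tests a true null (50% hit rate), but only "significant" ones get published.

```python
import random
import statistics

random.seed(0)

# File-drawer sketch: every study tests a true null (50% hit rate),
# but only studies reaching one-sided p < 0.05 (z > 1.645) get "published".
def run_study(n=100):
    hits = sum(random.random() < 0.5 for _ in range(n))
    z = (hits - n * 0.5) / (n * 0.25) ** 0.5  # normal approximation
    return hits / n, z

studies = [run_study() for _ in range(2000)]
published = [rate for rate, z in studies if z > 1.645]

all_mean = statistics.mean(rate for rate, _ in studies)
pub_mean = statistics.mean(published)
print(f"mean hit rate, all studies:      {all_mean:.3f}")  # near 0.50
print(f"mean hit rate, 'published' only: {pub_mean:.3f}")  # inflated
```

Even with no real effect anywhere, the published subset shows an inflated hit rate; a funnel plot would expose the asymmetry by plotting effect size against study precision.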

Still, the experiments cited look very interesting. You'd think that, if it were that easy, the world would look different. But I guess we'll see. I'll go test myself for latent psychic powers after lunch - the guy didn't put his programs up on his website, which is pretty negligent in the computer age.

He did, actually. Some of it anyway.

Funnel plots do look cool. Thanks for the pointer :-)

I read Goertzel's recent paper on "Morphic Pilot Theory", which sketches a possible framework for PSI phenomena of the inexplicable synchronicity type.

As far as I could understand, the idea is that the seemingly causally unconnected phenomena are mutually affected by nonlocality from the Bohmian interpretation of quantum physics. The anomalous cognition part comes in as some kind of conservation of algorithmic information, where the Bohmian configuration state tends towards having a low Kolmogorov complexity; this shows up as the same pattern acausally appearing in several places at once. I guess human and animal brains are then assumed to have evolved to make what use they can of this phenomenon.

I can't really evaluate the paper. I've never looked into Bohmian QM in any detail and would have to work up my physics to get there. I do get that the paper is very speculative, but it is interesting in positing zero ontologically basic woo to work. On the other hand, PSI with quantum physics is a well-deserved crackpot indicator, and I'd really need to know more about the generally physicist-approved version to tell if this stuff is off the deep end or not.

I know a little about quantum physics. Under any interpretation of quantum theory equivalent to the standard ones, this won't work without woo.

My account of the paper's argument is likely to be quite inaccurate and incomplete, so it's best to aim critiques at the paper itself. Does Bohmian interpretation count as one of the standard ones?

As far as I can tell, the conservation of algorithmic information was the big speculative thing in the paper. It's not ontologically basic woo, as in mental intentions irreducible to basic physics, but I'm guessing it's not standard QM either.

Bohm is one of the standard interpretations. It's more complex than MWI (the wave function, plus an independent "pilot wave" to make only the visible part of the wavefunction "real"), and it involves faster-than-light time travel, but is supposed to prevent said time travel and FTL from ever allowing information to be communicated thusly.

Gary Drescher has a section in his book, Good and Real, in which he presents the arguments for MWI and notes the frequency with which the other interpretations are used to justify woo (FTL and time travel that conveniently never affect us, mysterious forces that coincidentally annihilate the rest of the wave function beyond what we can see, claims of consciousness as having magical powers, etc).

I'm in no position to analyse it either, but if psi exists and can be selected for by evolution, doesn't this imply that an AI (or even just a brute force algorithm on the right track) can optimise for it too?

So that's something to consider if there turns out to be anything substantial behind all this.

Would seem to follow, if it's the case that PSI exists and PSI is a physical phenomenon. Goertzel's got something on this.

Depending on exactly what kind of interaction of physics and computing would be going on, the algorithm might need to search through different physical configurations of its sensors or substrate. I'm reminded of this experimental result, where a circuit evolution process that was supposed to make an oscillator component came up instead with a circuit that couldn't produce anything by itself, but did act as a radio receiver that could pick up a suitable oscillating signal from a nearby computer.

It doesn't appear to me that the article sufficiently addresses the very specific issues raised in Jaynes, Chapter 5--which is an odd thing for an AGI researcher to ignore.

Apparently experimenters had a chance to "correct" spelling errors, in non-blind fashion.

https://richardwiseman.wordpress.com/2010/11/18/bems-esp-research/

So I took the test. (thanks Morendil) And then I said "huh," and got some friends to take the test.

I got 50%/36%, but my friends got normal numbers, one quite below chance, around 30%/42%.

So this generates some hypotheses:

The sensible hypothesis: Due to extra steps and human interaction, the variance of the results is higher than I or the author anticipated, leading to normal fluctuations getting called "statistically significant."

The fun hypothesis: I'm psychic and my friends aren't.

The obligatory hypothesis: The computer program is flawed, either accidentally or intentionally.

Testing time!

The computer program is flawed, either accidentally or intentionally.

This is clearly a case where I'd want to see the source code. (ETA: seems to be in one of the sub-folders, if I can figure out what app to open it with.)

But you can fool around with an interesting question: if it was you writing the program with the explicit intent of producing results seeming to clinch the psi hypothesis, by exploiting ambiguities in the verbal description of the experimental setup, how would you do it?

(ETA: one interesting observation, on re-running the program, is that the order of presentation of words the first time through seems not to be randomized.)

When I took the test I got 50/50. My first thought was - "how lucky that I should happen to get, by chance, a result that so clearly reinforces my original beliefs".

How about doing a Bayesian analysis of the experiment?

Out of curiosity I did this for the first experiment (anticipating erotic images). He had 100 people in the experiment, 40 of them did 12 trials with erotic images, and 60 did 18 trials. So there were 1560 trials total.

You can get a likelihood ratio by taking P(observed results | precognitive hit rate is 53%) / P(observed results | chance hit rate of 50%). This works out to (0.53^827 * 0.47^733) / (0.5^1560) ≈ 17.

So if you had prior odds 1:100 against people had precognitive power of 53%, then after seeing the results of the experiment you should have posterior odds of about 1:6 against. So you can see that this by itself is not earth-shattering evidence, but it is significant.

Try doing analyses for the other experiments if you're interested!
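The likelihood-ratio arithmetic above can be checked with a short Python sketch, working in log space to avoid underflow (the 827-hit count is taken from the exponents in the formula):

```python
import math

# Bem's Experiment 1: 1560 binary trials, 827 hits (from the formula above).
hits, trials = 827, 1560

def log_likelihood(p, k, n):
    """Log-probability of k hits in n Bernoulli(p) trials.
    The binomial coefficient is omitted: it cancels in the ratio."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Likelihood ratio: H1 (hit rate 53%) vs H0 (chance, 50%).
ratio = math.exp(log_likelihood(0.53, hits, trials) - log_likelihood(0.50, hits, trials))
print(round(ratio, 1))  # ≈ 17

# Posterior odds = prior odds * likelihood ratio:
# 1:100 against becomes ~17:100, i.e. about 1:6 against.
```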

...I don't think this calculation would be right even if we actually factored in all the Psi studies that didn't achieve any statistically significant result. Shifting your belief in PSI from 1% to something like 16% based on one lousy study while ignoring every single respectable study that didn't show any result is madness.

To be more specific, first of all you didn't know whether PSI existed or not (50/50), but then for hopefully good reasons you corrected your prior odds down to 1/100 (which is still ridiculously high). Now one lousy study comes along and you give this one lousy datapoint the same weight as every single datapoint combined, that up until now you considered to be evidence against PSI. The mistake should be obvious. The effect of this new evidence on your expected likelihood of the existence of PSI should be infinitesimal and your expected odds should stay right where they are, until these dubious findings can be shown to be readily replicated... which by virtue of my current prior odds I confidently predict most surely won't happen.

Very interesting. I've thought for a long time that the boringness of the Rhine cards might influence the results.

If the experiment is tracking a real effect, it should be stronger with the subjects' preferred sort of erotic pictures.

Yup. The paper notes that they initially got a statistically significant effect in women, but nothing in men. They were using a standard library of erotic images intended for scientific experiments. When they switched to modern hardcore internet pornography instead of the standard scientific library, the men's results became comparable.

In our first retroactive experiment (Experiment 5, described below), women showed psi effects to highly arousing stimuli but men did not. Because this appeared to have arisen from men’s lower arousal to such stimuli, we introduced different erotic and negative pictures for men and women in subsequent studies, including this one, using stronger and more explicit images from Internet sites for the men. We also provided two additional sets of erotic pictures so that men could choose the option of seeing male–male erotic images and women could choose the option of seeing female–female erotic images.

That isn't exactly methodologically reassuring. If you keep fiddling with the parameters you'll eventually get an outlier result even if the effect is nonexistent.

http://www.ruudwetzels.com//articles/Wagenmakersetal_subm.pdf

Does psi exist? In a recent article, Dr. Bem conducted nine studies with over a thousand participants in an attempt to demonstrate that future events retroactively affect people’s responses. Here we discuss several limitations of Bem’s experiments on psi; in particular, we show that the data analysis was partly exploratory, and that one-sided p-values may overstate the statistical evidence against the null hypothesis. We reanalyze Bem’s data using a default Bayesian t-test and show that the evidence for psi is weak to nonexistent. We argue that in order to convince a skeptical audience of a controversial claim, one needs to conduct strictly confirmatory studies and analyze the results with statistical tests that are conservative rather than liberal. We conclude that Bem’s p-values do not indicate evidence in favor of precognition; instead, they indicate that experimental psychologists need to change the way they conduct their experiments and analyze their data.

So, a year and a half later, has this experiment been replicated?

Freud once said that Jung was a great psychologist, until he became a prophet.