2) Belief and Experimenter Effect
In psi studies, as well as in all other parts of science, researchers mostly find what they are looking for: enthusiasts find positive effects; doubters find null results. This is probably pure confirmation bias, however subtle (as was discussed at length in “The Science of Precognition: Cosmic Habituation versus Decline Effect”). The general, interdisciplinary decline effect, for example in drug-efficacy studies, is mostly due to confirmation and publication bias. The decline effect proper, however, may conceivably be due to blunted emotions, and it usually occurs in psi studies (all of this has been discussed at length before).
Is there a fundamental influence of the researchers’ inclination on the outcome of experiments? There may be contributions from the conceivable scenario that belief is more involved in the construction of reality than we allow ourselves to consider, and maybe this is what the British psychologist Richard Wiseman and the psi researcher Marilyn Schlitz call the “experimenter effect”. In 1997, both ran experiments with the same pool of subjects: Wiseman, a psi-doubter, found no effects, while Schlitz obtained records showing an effect in two out of the three experiments. Two studies showed this experimenter effect; the last one did not, which is, however, consistent with the decline effect.
Also the other way around?
Obviously, any experimenter effect is a problem for the scientific community at large if, as I wrote in the first sentence last time, “Precognition is under scientific investigation, often with the aim to show null-results in order to discredit such ideas.”
If the world is just a directly real box in which stuff happens to happen, there is of course no such influence of belief (none in the physical sense discussed previously – this is not about your belief motivating you, for example). That there is no effect is obvious any way you think about it: what on earth could the actual mechanical motions of particles possibly be that make an experiment turn out differently according to what the brain of the experimenter wishes? Such a thing is nonsense for the simple reason alone that a different belief, say being convinced that a Higgs boson cannot be found, is not in any way fundamentally differently laid down in the brain. Experiment and belief are classically decoupled.
However, modern science has firmly established that we are not living “inside” some naïve realism that just exists ‘out there’. Quantum mechanics has experimentally proven that quantum modal realism is required to explain our world consistently. Most scientists knowledgeable about the debate talk about structural realism. Any way you see it, we do not live inside a directly real space where stuff just happens, where the probability of outcomes measures the ways real things can depart after bumping into each other. Scientific significance is always based on statistical measures, experimental outcomes are always empirical records, and with naive reality gone, probability has become a highly complex issue that actually nobody understands.
In the Classical interpretation, probability is the ratio of favorable outcomes to possible outcomes. This is circular, because it must assume that the different possibilities are equiprobable (or if not, at least have some probability assigned already by symmetry arguments like that one side of a coin is almost the same as the other when tossing it, at least as far as the mechanics of the falling and landing of the coin is concerned).
In the Frequentist approach, probability is strictly the counting of favorable outcomes over many trials. If you have not actually counted yet, you assume the system in question, say a coin, to be similar to one you experimented with before.
The Bayesian approach (pronounced BAYZ-ee-un) is a mixture of both: classical probability enters at the beginning and is often expressed in terms of a subjective degree of belief in some proposition; actual counting then feeds into “Bayesian updating”.
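The contrast between the three interpretations can be made concrete with a short sketch. The coin’s bias and the number of flips below are illustrative assumptions; the point is that the classical “equiprobable” assumption enters as a uniform prior, while the frequentist count and the Bayesian update converge once enough trials are recorded.

```python
import random

random.seed(42)

# Hypothetical biased coin; the "true" bias is an assumption for illustration.
TRUE_P_HEADS = 0.6
flips = [random.random() < TRUE_P_HEADS for _ in range(1000)]
heads = sum(flips)

# Frequentist estimate: strictly count favorable outcomes over trials.
freq_estimate = heads / len(flips)

# Bayesian estimate: start from a classical "equiprobable" prior
# (uniform, i.e. one pseudo-count each for heads and tails),
# then update those counts with the actual record of flips.
alpha, beta = 1, 1
alpha += heads
beta += len(flips) - heads
bayes_estimate = alpha / (alpha + beta)

print(f"frequentist: {freq_estimate:.3f}")
print(f"bayesian:    {bayes_estimate:.3f}")
```

With many trials the two estimates become practically indistinguishable; the prior only matters while the record is short.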
‘The probability of finding myself in a certain future’ has been rendered suspect: every possible future self finds itself in its world. Including all future yous, “you” will find heads and tails each with 100% certainty. This makes probability somewhat frequentist. At least in certain many-worlds (MW) models, and they are after all applicable and valuable at times, the number of parallel worlds is decisive. But careful – classically there may be only two outcomes, namely heads or tails, while a MW model may have a billion worlds where you find heads and three billion where you find tails.
In this way one can talk about the ‘probability of finding myself in a certain future’. Probability is in the end always a frequency in a record, like what you remember, and thus it is Bayesian! The question is then: “In how many worlds do you find yourself with a certain type of record?” This makes sense in certain toy models, and that is the best we have for now. The number of parallel worlds influences the empirical probability. We do not yet know what exactly influences that number, and such knowledge would not be very useful anyway, because the branch-counting concept will ultimately fail (we just don't have anything better yet). But we do know two things for sure:
1) Extra branching is what allows quantum probabilities in the first place! Inside a MW model of the Einstein Podolsky Rosen setup, for example, there is no way to get quantum probabilities without extra world branching. (“Extra branching” is especially disliked by e.g. David Wallace, but his objections only make sense after having already thrown out branch counting! Here I stay with MW models in order to have an intuitive picture to communicate with.)
2) The number of branches ensures that the quantum probability (say photon polarization being 50/50 up or down) is consistent with the physics of the classical world observed (polarized light absorption at a polarizer being proportional to the square of the sine of the relative angle between the vectors, transmission correspondingly to the cosine squared). This may seem like an upside-down interpretation (deriving the quantum from the classical), but it is nevertheless so that this sine dependence of classical light absorption is precisely what implies the violation of the Bell inequality in the Einstein Podolsky Rosen setup! In other words: the consistency of the classical world phenomena that we are conscious of in our records/memory/consciousness is what ultimately demands quantum probabilities. How else but with cosines could a vector project?
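That the cosine projection law indeed forces a Bell violation can be checked numerically. A minimal sketch: for polarization-entangled photons the correlation at polarizer angles a and b follows from the Malus cos² law as cos(2(a − b)), and the standard CHSH angle choices then push the CHSH quantity S above the classical bound of 2, up to 2√2.

```python
import math

def E(a_deg: float, b_deg: float) -> float:
    """Correlation for polarization-entangled photons measured at
    polarizer angles a and b; follows from the Malus cos^2 law."""
    return math.cos(math.radians(2 * (a_deg - b_deg)))

# Standard CHSH angle choices (degrees) that maximize the violation.
a, a2 = 0.0, 45.0
b, b2 = 22.5, 67.5

# CHSH quantity; any classical (local hidden variable) model keeps |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(f"S = {S:.4f}")  # 2*sqrt(2) ≈ 2.8284, above the classical bound of 2
```

Each of the four correlations evaluates to ±√2/2, and the signs line up so that S = 2√2: the projection geometry of classical polarized light is exactly what overshoots the classical probability bound.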
Thus, in the above sense, what fundamentally underlies everything is the consistency of records – our records. It is more fundamental than mere physics: there is no non-consistent physics. Quantum mechanics is all about self-consistency, for example via constructive interference of self-consistent histories; inconsistent ones simply destructively interfere with themselves. The more consistent a story is, the “more it exists” (in terms of MW frequency or, in David Deutsch’s opinion, in terms of a rational agent expecting it more).
At the risk of repeating too much, let me nevertheless stress this once more: the consistency is that of our classical phenomena. As any theoretical physicist knows, down in the rabbit holes of mathematical quantum theory, you can make all kinds of theories self-consistent. If this were not the case, finding correct theories applicable to reality would not be such a huge problem. Which one yields a self-consistent observed world is the all-important question.
Silly example (very silly – but memorable): If I did not previously believe in an unforgiving god who demands to be believed in, finding myself in his heaven is inconsistent. The probability to quantum tunnel into that situation is zero, because it is an inconsistent situation. The worrying aspect here: In this silly example at least, my sheer belief affects probability. What is the mechanism? How does my belief do it? Well, it doesn’t do anything at all; there is no mechanistic mechanism whatsoever. Nevertheless, in all futures where I am with an unforgiving god full of vanity, one that does not resurrect non-believers, I remember having believed in him! My future records are consistent with my belief. (Ok – I know this is a silly example – but developing the whole issue about Boltzmann freak brains and what I call “terrible states”, which are possible microstates of brains observing macroscopically contradictory scenarios, goes way beyond the scope of this post.)
The general suggestion is thus: in terms of MW models, the “belief” of the experimenter, meaning the degree to which the result fits in with the rest of her own, always partially self-constructed world, may conceivably influence how many parallel worlds with certain records there are, depending on how consistent those records are with the belief.
This is highly speculative and can of course be criticized: should this not mean that an experimenter who believes in precognition finds herself inside worlds where the doubters, too, found evidence for precognition? The latter would be inconsistent for the doubters. Belief influence must therefore be a rather “local” effect, extending through the believing experimenter’s world (not just her lab) for as long as she does not communicate with the doubter. When they compare records, the worlds mix (as they did for Richard Wiseman and Marilyn Schlitz). As long as all who communicate believe, Kuhn’s paradigms stabilize themselves.
The effect of emotions falls right out of these speculations: belief is emotional, and emotional belief is strong. Belief is what we hold against all reason, based on emotions.
Wiseman, R., & Schlitz, M.: “Experimenter effects and the remote detection of staring.” Journal of Parapsychology 61, 197–207 (1997)
Schlitz, M., Wiseman, R., Watt, C., & Radin, D.: “Of two minds: Skeptic-proponent collaboration within parapsychology.” British Journal of Psychology 97, 313–322 (2006)