Free to Decide

Over at Michael Nielsen’s blog, Michael has a post telling us that he won’t be posting again until August. Personally Michael’s lack of posting scares the bejebus out of me: if he’s not posting, he must be working on some grand research which will make everything I do look even more trivial than before. Michael, you’re scaring me!
Anyway, along with the post Michael includes a comment by UW’s John Sidles trying to stir up some debate by asking about a paper by Conway and Kochen, “The Free Will Theorem”, quant-ph/0604069. Actually I had heard about this paper a while ago, via some non-arxiv channel (where exactly, I can’t remember) and had basically guessed from the brief description I had heard what the paper was about. This is how you know you are getting old and curmudgeonly: you can hear the title of a paper and a brief description of its results and guess the way in which those results were proved. (There are rumors, which I myself have never verified, that at a certain well known quantum computing research group, the day starts as follows. A little before lunch, the researchers wander in, check their email and look at the day’s postings on the arxiv. Now they don’t do anything more than read the titles. The research group then proceeds to go to lunch. At lunch they discuss, with great debate, the most interesting papers posted that day. Having never even read the papers! There is a similar story about a certain researcher in quantum computing who, if you tell that researcher a new result, will, within a day, almost always be able to rederive the result for you. Of course, my personal nickname for this person is “The Oracle” and it is tempting to tell “The Oracle” that a certain open problem has been solved, when it has not been solved, and see if (s)he can come up with the answer!)
(A note: throughout this post I will use the words “free will” to describe something which you may or may not agree is related to “free will” as you imagine it. In particular, an object is said to not have free will if its future evolution can be predicted from information in the object’s past light cone. If it cannot be so predicted with certainty, it is said to possess free will. In fact, I find this definition already interesting and troublesome: can we ever predict anything by only knowing information in our past light cone? How do we know that in the next instant of our evolution a light ray won’t hit us and burn us up? Certainly we cannot see such a light ray coming, can we? We can, of course, use physics to explain what happened: but can we use it to predict our future behavior? Of course for the electromagnetic field, we could shield ourselves from such radiation and reasonably assume that we can predict what is occurring. But what about gravity, which can’t be shielded? For an account of this type of argument I recommend Wolfgang’s comments here.)
Okay, back to the story at hand. What is Conway and Kochen’s free will theorem? The basic idea is quite simple. I will explain it in the context of Bell’s theorem and the Kochen-Specker theorem, since the authors don’t describe it in this manner. Bell’s theorem, we know, tells us that there is no local hidden variable theory explaining what quantum theory predicts. The Kochen-Specker theorem is less well known (which leads, in my opinion, the proponents of this result to suffer from a severe inferiority complex in which they constantly argue that the KS theorem is more important than Bell’s theorem.) What the Kochen-Specker theorem says is that if there is a hidden variable theory of quantum theory, it must be contextual, i.e. the Kochen-Specker theorem rules out non-contextual hidden variable theories. The way I like to think about the Kochen-Specker theorem is as follows: suppose that there are some hidden variables associated with some quantum system. Now if you make a measurement on this system you will get some outcomes with differing probabilities, and sometimes you get outcomes with certainty. You’d like to say that when you perform this measurement, the outcome is actually associated with the value of some real hidden variable. But what the KS theorem tells you is that this is not possible: there is no way that those measurement outcomes are associated with the hidden variables in a nice one to one manner. What does this have to do with contextuality/non-contextuality? Well, the “context” here is the set of other measurement outcomes you measure along with the outcome associated with a particular hidden variable. In non-contextual hidden variable theories, what those other measurement results are doesn’t matter: it is those types of theories that the KS theorem rules out.
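To make the non-contextuality statement concrete, here is a minimal sketch (my own illustration, not the authors’ construction) of what a non-contextual value assignment would have to do for spin-1 measurements: every orthogonal triple of directions must receive the outcomes 1, 0, 1 in some order, and the value assigned to a direction cannot depend on which triple it appears in. The checker below brute-forces such assignments; the toy input is hypothetical and trivially satisfiable, but if you feed it a genuine Kochen-Specker set of directions (such as the 33 rays used by Peres, or the 31 used by Conway and Kochen) the search comes up empty.

```python
from itertools import product

def find_noncontextual_assignment(num_directions, triples):
    """Brute-force search for a non-contextual 0/1 assignment to a set of
    directions such that every orthogonal triple gets exactly one 0 and
    two 1s (the spin-1 "1-0-1 rule").  Returns an assignment or None."""
    for values in product([0, 1], repeat=num_directions):
        if all(sum(values[i] for i in triple) == 2 for triple in triples):
            return values
    return None

# Toy example (NOT a Kochen-Specker set): three coordinate axes forming a
# single orthogonal triple trivially admit an assignment.
print(find_noncontextual_assignment(3, [(0, 1, 2)]))
# For a genuine KS set of directions the same search returns None, which is
# precisely the statement that no non-contextual hidden variable assignment exists.
```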
(Note: From my personal perspective, I find the KS theorem fascinating, but not as disturbing as Bell’s theorem: that “what you measure” determines “what you can learn” is a deep insight, and one that tells us something about the way reality can be described. However it is not that difficult to imagine the universe as a computer in which accessing the memory of the computer depends on the context of your input: i.e. to get ahold of a memory location which holds the value 01001010, you need to query the machine, and it seems perfectly reasonable to me that the machine is set up in a manner such that I can’t get all of those bits, since my measurement will only get some of them and the context of the measurement will change some of the others. This was basically John Bell’s reaction to the Kochen-Specker theorem. Interestingly there is a claim in this Conway and Kochen paper that this loophole has been filled! I have a bit to say about this below. Of course no matter where you come out in this argument, there is no doubt that the KS theorem is DEEP: it tells us that the universe is not a computer whose memory we can gain total access to. And if we can’t gain access to this memory, then does the memory have any “reality”?!!)
Well I’m rambling on. Back to the subject at hand, the free will theorem. In the free will theorem, Conway and Kochen set up an experiment in which you take two spin-1 particles and perform measurements on these spins. (Now for those of you in the know, you will already be suspicious that a spin-1 particle was used (the 3 dimensional irrep of SU(2)) as well as an entangled quantum state…sounds like both KS and Bell, doesn’t it?) The free will theorem is then:

If the choice of directions in which to perform spin 1 experiments is not a function of the information accessible to the experimenters, then the responses of the particles are equally not functions of the information accessible to them.

In other words, if we have free will, then particles have free will! How does the theorem get proven? Well basically the proof uses the KS theorem as well as the perfect correlations arising from maximally entangled spin-1 systems. First recall that the KS theorem says that hidden variable theories must be contextual, i.e. if I give you just the measurement directions involved in a measurement, there is no way to map this onto yes/no outcomes in a manner consistent across a set of possible measurements. But suppose, however, that your map to yes/no outcomes (i.e. the particle’s response) also depends on a hidden variable representing information in the particle’s past light cone, i.e. that the particles have no free will (contrary to the hypothesis). Now because we are dealing with a maximally entangled spin-1 system, two spacelike separated parties, A and B, will obtain the same outcomes whenever they measure along the same direction. So for fixed values of the information in the past of both parties, the particle responses should be identical and can only depend on the local measurement direction. But this is not possible when one chooses an appropriate set of directions corresponding to the Kochen-Specker proof. One can thus conclude that we cannot freely choose the measurement directions, i.e. that not all choices of measurements are possible: there must be hidden variables associated with the measurement choice as well. Thus we have shown that particles having dependence on information in the past light cone implies that the measurement choice must have dependence on information in the past light cone. Having shown the contrapositive, we have shown the free will theorem.
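The perfect correlations used in the proof are easy to check numerically. Here is a small sketch (again my own illustration, not taken from the paper) that builds the total-spin-zero state of two spin-1 particles and verifies that measuring the squared spin component along the same direction on the two sides always gives the same answer, which is the perfect-correlation ingredient the argument needs.

```python
import numpy as np

# Spin-1 operators (hbar = 1) in the basis |+1>, |0>, |-1>.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# Total-spin-zero state of two spin-1 particles:
# |psi> = (|+1,-1> - |0,0> + |-1,+1>) / sqrt(3)
psi = np.zeros(9, dtype=complex)
psi[0 * 3 + 2] = 1.0
psi[1 * 3 + 1] = -1.0
psi[2 * 3 + 0] = 1.0
psi /= np.sqrt(3)

def squared_spin(direction):
    """Squared spin component S_a^2 along the unit vector a."""
    a = np.asarray(direction, dtype=float)
    a = a / np.linalg.norm(a)
    S_a = a[0] * Sx + a[1] * Sy + a[2] * Sz
    return S_a @ S_a

I3 = np.eye(3)
rng = np.random.default_rng(1)
for _ in range(5):
    M = squared_spin(rng.normal(size=3))
    # If the two sides' outcomes always agree, the difference of the two
    # observables must annihilate the state.
    diff = np.kron(M, I3) - np.kron(I3, M)
    print(np.linalg.norm(diff @ psi))  # ~ 1e-16 for every direction
```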
Now the interesting thing about the free will theorem is that it doesn’t tell us whether the universe allows us to have free will or not. It simply says that if we assume some form of free will, then the particles we describe will also have free will. Of course the “free will” we describe here is “independence of (classical) information in the past light cone,” so some would object to this definition of “free will.” In particular, by this definition, a system which is totally random has free will. But it seems to me that the interesting question about free will is not whether one can have such random systems, but whether one can have a mixture of determined and undetermined evolutions. I mean the fundamental paradox of free will seems to me to be that free will involves a lack of cause for an action, but we want this action to itself have causes. In this respect, the above theorem suffers a bit, in my opinion, from a simplistic version of free will which is too absolutist for my tastes. What I find fascinating is whether we can “quantify” different versions of free will and what such quantifications would tell us about our real world.
Well it seems that I’ve had the free will to ramble on quite a bit in this post. Hopefully you might decide that the subject is interesting enough to choose to read the paper on your own 😉

U.S. News and World Distort

The other day I was browsing the magazine rack when I noticed the U.S. News and World Report graduate school rankings. A sucker for elitism, I, of course, checked out the rankings. Even if they mean nothing they sure taste good going down my ego pipe 😉 But what was interesting to me was that in the Physics rankings, they rank seven specialties: Atomic/Molecular/Optical, Condensed Matter, Cosmology/Relativity/Gravity, Elementary Particles/Fields/String Theory, Nuclear, Plasma,……and Quantum! Can you believe that? Here are the U.S. News rankings for the Physics subcategory of Quantum:

1. Massachusetts Institute of Technology
2. Harvard University (MA)
3. California Institute of Technology
4. Stanford University (CA)
5. University of California–Berkeley
6. University of California–Santa Barbara
University of Michigan–Ann Arbor
8. Princeton University (NJ)
9. Yale University (CT)
10. Cornell University (NY)
University of Colorado–Boulder
University of Illinois–Urbana-Champaign
University of Maryland–College Park

Okay, well I don’t agree with these rankings, mostly because they depend on a lot of different factors (theory? experiment? etc.) and in “Quantum” you lose a lot by not having international rankings (especially in theory, but also in experiment.) It seems that a few of the schools (*ahem, you know who you are*) make the list only because of their prior reputation in physics and have only small quantum programs (and in my book there are also some glaring omissions.) But anyway, it is interesting that at least U.S. News and World Report thinks “Quantum” is a valid subdiscipline. Now let’s just hope this means that we don’t become old and crusty.

Forward To the Past

Daniel sends me this Physorg article titled “Professor Predicts Human Time Travel This Century.” (Rule of thumb: never believe any prediction whose time span stretches beyond the retirement age of the person making the prediction!) The physicist in this title is the University of Connecticut’s Ronald Mallett and the prediction is based, in some ways, on his paper, “The gravitational field of a circulating light beam,” Foundations of Physics 33, 1307 (2003). In this paper Mallett solves Einstein’s equations for an infinitely long cylinder of rotating light and finds that this solution contains closed timelike curves. A similar construction of an infinite cylinder of rotating dust also produces closed timelike curves (this solution actually predates Gödel’s universe historically, although the closed timelike curves were not pointed out until after Gödel constructed his crazy universe.) I’m always skeptical about these solutions, as it is generally the case that there is something unphysical about them. In the above paper’s case this appears to be the fact that the solution is not asymptotically flat. It is also not clear that the solution is robust to perturbations or will be valid for a cylinder of finite length (although apparently for the dust case the finite length solution does have closed timelike curves.) But what is fascinating to me is how simple it sometimes appears to be to make general relativistic systems which have closed timelike curves. Okay, simple is perhaps not the best word, and of course the real question is whether it is possible to make solutions which don’t have physically bad properties. For some excited, crazy optimism along these lines, check out gr-qc/0211051.
Of course in any popular science article about time travel, the question of grandfather violence comes up. Interestingly Mallett deals with the issue of causality for systems with closed timelike curves à la David Deutsch:

“The Grandfather Paradox [where you go back in time and kill your grandfather] is not an issue,” said Mallett. “In a sense, time travel means that you’re traveling both in time and into other universes. If you go back into the past, you’ll go into another universe. As soon as you arrive at the past, you’re making a choice and there’ll be a split. Our universe will not be affected by what you do in your visit to the past.”

Which makes me think that this process should not be called “time travel” but should be called “rewind,” as in pushing rewind on the VCR and recording a new program over the old one. When I was growing up I used to wonder what made us think that if we traveled back in time there would actually be anything back there. What a bummer to build a time machine, take a trip into the past, and find that you are the only thing in existence in this past!
Finally, the article ends in a very sad manner:

In light of this causal “safety,” it’s kind of ironic that what prompted Mallett as a child to investigate time travel was a desire to change the past in hopes of a different future. When he was 10 years old, his father died of a heart attack at age 33. After reading The Time Machine by H.G. Wells, Mallett was determined to find a way to go back and warn his father about the dangers of smoking.

Which should convince you that physicists too are only human.

Best Movie Ever

Oh my god, oh my god, oh my god!

Homer Simpson: Oh my god, oh my god, oh my god!
Bart: Dad, you can’t eat all those free samples. We’ve gotta get Lisa’s present.
Homer Simpson: Watch and learn.
[Homer sprints around the mall’s food court, devouring all of the sample trays.]
Homer Simpson: More … free … samples!
Bart: Dad, you ate all the free samples. Now you’re eating men’s slacks.
Homer Simpson: Eh, it’s still better than Indian food.

Mostly Right With a Little Bit of Wrong

Seth Lloyd’s book, Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos, has been reviewed in the New York Times. Okay, so for my blood pressure, I just should not be allowed to read passages like this:

Ordinary desktop computers are a flawed model of the physical world, Lloyd argues, because they handle everything as clear “yes” or “no” commands, while the universe operates according to the rules of quantum physics, which inherently produce fuzzy results. But Lloyd happens to be one of the world’s experts in a new kind of computing device, called a quantum computer, which can produce similarly vague answers like “mostly yes but also a little bit no.” …

Ahhhhhhh! Yeah, I should avoid those passages.

Oldschool, Contradiction, and a Strawman

Today, Robert Alicki and Michał Horodecki have an interesting preprint on the arxiv, quant-ph/0603260: “Can one build a quantum hard drive? A no-go theorem for storing quantum information in equilibrium systems.” (Now where have I seen those words “quantum hard drive” before?) As you can imagine, the paper cuts closely to my own heart, so I thought I’d put some comments on the paper here.
The question the paper addresses is right there in the title: is it possible to store quantum information in a system in a way similar to how a hard drive stores its information in ferromagnetic domains? The authors claim to show that this is not possible. (On a personal note, I cannot read the words “no-go theorem” without thinking of John Bell’s famous quote: “what is proved by impossibility proofs is lack of imagination.” Of course, this quote is also ironic: Bell is most famous for proving the impossibility of using local hidden variable theories to simulate quantum correlations! And of course I in no way think that Alicki or Horodecki suffer from a lack of creativity, that’s for sure!)
The paper is structured into three parts, which I will call oldschool, contradiction, and strawman.
In the oldschool section of the paper the authors discuss why it is possible to store classical information in a ferromagnetic domain. They use the Curie-Weiss (CW) model of a ferromagnet, which is not exactly realistic, but which captures (for ferromagnets of high enough dimension) many of the essential ideas of real ferromagnets. In the Curie-Weiss model, we have classical spins which point either up or down. The total magnetization is the sum of all of the spins, counting +1 for spin up and -1 for spin down. The energy of a particular configuration in the CW model is given by the negative of the total magnetization squared times some coupling constant. This means that in the CW model, each spin is coupled to every other spin via a ferromagnetic Ising interaction. Ferromagnetic Ising interactions make it energetically favorable to align the two spins involved in the interaction. Thus the ground state of the CW model is two-fold degenerate, consisting of the all-spins-up and all-spins-down states. Further, there is a big energy barrier for flipping between these two ground states. In their paper Alicki and Horodecki present the old school argument for why information related to the total magnetization is resistant to the effects of interacting with a thermal environment. They provide a heuristic argument which basically says that the rate at which the system mixes between the two ground states (or, more precisely, washes out the magnetization), if you are below the critical temperature, is suppressed with the number of spins. The argument is very old school: one simply looks at a microscopic dynamics where spins are flipped at a rate proportional to the Boltzmann factor coming from flipping the spin. In such a setting the relaxation looks like a random walk of the magnetization on an energy landscape given by the free energy of the system. One then finds that below a critical temperature the free energy has two minima, and when you do an analysis of the diffusion on this free energy, the time it takes to diffuse between the two minima grows with the number of spins. Okay, all is good and well. This is (basically) the reason why if I encode classical information into a cold enough ferromagnetic system (in high enough dimensions) then the time it will take to destroy this information will be huge. Of course, one might also note that the time is huge, but finite: the states are metastable. Note, however, that this finite time means that eventually the information stored in the magnetization will be thermalized (or so close to thermalized that you can’t tell the difference.)
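Here is a minimal sketch of the kind of old school argument described above (my own toy demonstration, not taken from the Alicki-Horodecki paper): a single-spin-flip Metropolis simulation of the Curie-Weiss model that starts with all spins up and counts how long the magnetization takes to reach zero, the top of the free-energy barrier. Run below the critical temperature, the escape time grows quickly with the number of spins, which is the metastability that makes a classical hard drive work.

```python
import numpy as np

def escape_time(N, T, J=1.0, seed=0, max_steps=5_000_000):
    """Metropolis dynamics of the Curie-Weiss model with energy E = -J M^2 / (2N).
    Start from all spins up and return the number of sweeps until the total
    magnetization M first reaches zero."""
    rng = np.random.default_rng(seed)
    spins = np.ones(N)
    M = float(N)
    for step in range(max_steps):
        i = rng.integers(N)
        dM = -2 * spins[i]
        dE = -J / (2 * N) * ((M + dM) ** 2 - M ** 2)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] *= -1
            M += dM
        if M <= 0:
            return step / N  # time measured in sweeps
    return float("inf")

# Below the critical temperature (T_c = J for this model) the escape time
# grows rapidly with the number of spins.
for N in (20, 40, 80, 160):
    print(N, escape_time(N, T=0.8))
```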
Okay, so much for the oldschool part of the paper. Now onto the meat of the paper. The authors next consider whether similar constructions can be performed for quantum information. Are there quantum systems with metastable states which can be used to store quantum information (a “quantum hard drive”)? Now, of course, given the above argument what one would like to see is an argument that the time it takes to disorder quantum information in a system scales with the size of the system. Unfortunately, the authors do not take this approach but instead go a different route. Their approach is straightforward: they attempt to analyze the mathematical structure of the metastable states. They base their argument on the second law of thermodynamics. One way to phrase the second law of thermodynamics is the way Kelvin did: it is impossible to construct an engine which has no effect other than extracting heat from a reservoir and converting it to an equivalent amount of work. Fine. The authors then rephrase this in terms of what is called “passivity.” A quantum state is passive when it is not possible to extract energy from a system in that state by means of an engine’s cycle. Now the short of the story about passivity is that for finite systems, completely passive states (meaning that for all n, any n-fold tensor product of the state is passive with respect to n-fold local cyclic evolutions) are Gibbs states. In other words, for finite systems, there is only one completely passive state: the Gibbs state [tex]$e^{-\beta H}/Z$[/tex]. Fine. The authors then discuss the case of infinite systems, but I will ignore this argument because (1) even in the oldschool part of the paper, the argument was not about infinite lifetimes, only about metastable states, (2) I’ve never seen an infinite system, nor, unless I become God, do I think I will ever in the future see an infinite system, and (3) even at this stage, I object to their argument, and indeed everything about my objection carries over to the infinite case. Okay, I’ll mention the infinite system argument, just for completeness, but really I think the above objections should be kept in mind. The infinite system argument is that one can show that the completely passive states of an arbitrary (finite or infinite) system form a simplex. So now the authors argue that since these states form a simplex, i.e. convex combinations of a finite number of states, these completely passive states (finite or infinite) cannot be used to store quantum information. So you see, even for finite systems, where there is just one completely passive state, the Gibbs state, the argument will be the same.
So what is my objection? Well, notice that the wool has been pulled over your eyes in this argument. In the oldschool section the authors argued that if you try to store information in the CW system, then the amount of time it takes for this information to be destroyed grows with the system size. In other words, if you start a system out in a non-Gibbs state of stored information, then it will take a long time to thermalize into the Gibbs state. This is what is important for storing information. Notice in particular that despite the fact that thermodynamic equilibrium is the end result of the dynamics considered, the process described is from a non-equilibrium system to the equilibrium system. In the contradiction section of the paper the authors argue that the Gibbs state is the only state which does not violate the second law (or in the infinite case, the states which form a simplex.) But what does this have to do with the process where we attempt to store quantum information in the system and then ask how long it takes to reach the thermal equilibrium where the information is destroyed? As far as I can tell it doesn’t have anything to do with it. In fact, if you think about their argument for a while, you will notice that for finite systems, they are arguing that the only allowable state is the completely passive state, the Gibbs state. In other words if you apply their argument for finite systems to the system described in the oldschool section, the conclusion is that there is only one state, and one state cannot store information, i.e. classical hard drives, if they are finite, are not possible!
So what was the fallacy in the argument? It was the assumption that the question of whether you can store information (classical or quantum) in a system can be analyzed from equilibrium alone. Certainly the second law, as stated in the Kelvin formulation, holds for states in thermal equilibrium, but we are explicitly not asking about this situation: we are asking about the states involved in moving from a nonequilibrium situation to an equilibrium situation. And we are interested in timescales, not final states.
So what is the final section of their paper about? It is about Kitaev’s toric code model for storing quantum information. This is the section I call the strawman. In particular, as I have argued before (see my paper here or any of the many powerpoints of my talks where I talk about self-correcting quantum computers), it is well known that Kitaev’s model, in equilibrium, cannot store quantum information. The reason for this is the same as for the one dimensional Ising model. But we don’t care about equilibrium, remember. We care about how long it takes to move to equilibrium. Here is where the energy gap comes into play: Kitaev’s basic argument is that the creation of free thermal anyons will be suppressed like the Boltzmann factor [tex]$e^{-\beta g}$[/tex] where [tex]$g$[/tex] is the gap energy. Thus, if we start the system away from equilibrium with the encoded quantum information, the probability that we will obtain a real thermal excitation is exponentially suppressed if your temperature is below the gap. Of course, every single qubit will be interacting with a bath, so the total rate of production of thermal anyons needs to be below one over the size of the system. Of course for infinite systems this can never be fulfilled. But for finite sized systems, this can be fulfilled: note that the temperature only needs to decrease like one over the logarithm of the system size (and even for macroscopic systems that logarithm is rarely more than a factor of a hundred.) In the preprint, the authors basically point out that in the equilibrium state there is a constant fraction of real thermal excitations and these can diffuse around the torus to disorder the system: but again the question is what happens when you start out of equilibrium and how long it then takes to return to equilibrium. So I believe that in this section they are attacking basically a strawman. Of course I have my own personal worries about the toric code model and in particular about its fault tolerant properties. You can read about this in my paper on self-correcting quantum memories.
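To put a rough number on that logarithm, here is a back-of-the-envelope sketch (my own, in arbitrary units of the gap energy): demanding that the total anyon production rate, roughly [tex]$N e^{-\beta g}$[/tex] for [tex]$N$[/tex] qubits, stay below one gives a maximum temperature [tex]$T \lesssim g/\ln N$[/tex].

```python
import numpy as np

# Each of the N qubits can nucleate a thermal anyon pair at a rate suppressed
# by the Boltzmann factor exp(-g / T).  Demanding that the total rate
# N * exp(-g / T) stay below one gives T < g / ln(N): the required temperature
# shrinks only logarithmically with system size.
for N in (1e3, 1e6, 1e12, 1e23):
    print(f"N = {N:.0e} qubits: T_max ~ {1 / np.log(N):.4f} * g")
```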

Quantum Key Distribution: "Ready for Prime Time"

What should we make of this press release from MagiQ Technologies? The press release cites two advances:

The first breakthrough demonstrates there are no distance limitations to MagiQ’s quantum cryptography solution. Utilizing Verizon’s commercial fiber infrastructure, MagiQ successfully bridged two separate spans of 80km by cascading a number of MagiQ’s QPN 7505 devices. The commercial fiber network is made up of spans, typically 80km, linked together to complete metro area networks and long haul networks. Cascading of quantum cryptography devices enables the deployment of quantum cryptography throughout the telecommunications network.

and

The second breakthrough demonstrates that quantum keys can be mixed with other optical signals on one strand of fiber. Previously, quantum cryptography devices required a dedicated dark fiber strand, which is costly and not always available, for the transmission of quantum keys. The multiplexing of quantum keys with data on one strand significantly reduces the cost of deploying quantum cryptography.

The first item sounds like they are building some sort of key relay infrastructure and not quantum repeaters, right? I wonder how many of these MagiQ systems are actually in use right now? Ah, the world of cryptography, where every sentence I write is a question?

Gravitomagnetic London Moment?

Two papers (experiment: gr-qc/0603033, theory: gr-qc/0603032), an ESA press release, and blog posts (Uncertain Principles, Something Similar, and Illuminating Science) today are all about a recent experiment performed by Martin Tajmar (ARC Seibersdorf Research GmbH, Austria) and Clovis de Matos (ESA-HQ, Paris) which they claim shows a gravitomagnetic effect in a laboratory experiment. Not only do they claim that they observe a gravitomagnetic effect, but that the effect comes not from the standard general relativity mechanism, but instead from their own theory which has massive photons and massive graviphotons (which is what the authors call the carriers of the force which arises when one linearizes gravity and obtains equations which resemble Maxwell’s equations, i.e. spin 1 instead of spin 2)! Color me skeptical.
The experiment is actually fairly straightforward. Just take a superconducting ring and spin it up. The authors then look for a gravitomagnetic field which can be measured by nearby accelerometers. Now the gravitomagnetic field strength predicted by standard general relativity for this setup is about 30 orders of magnitude below what they measure. But when they run this experiment they indeed do find accelerations in their accelerometers for superconducting rings (and none for non-superconducting rings.) The authors then interpret this effect as confirming evidence for their theory which invokes “graviphoton” masses. If this is correct, then this is an astounding result: not only have they detected a gravitomagnetic field, but it is a field which does not arise from standard general relativity. Wowzer.
Of course you can color me skeptical. As Chad Orzel points out, the signal they are talking about is only about 3 times as strong as their noise. Now when you look at one of their runs, i.e. figure 4 of gr-qc/0603033, the peaks look pretty good, no? Well, figure 4b is a little strange: the gravitomagnetic effect appears to occur before the acceleration. Okay, a bit strange, but a single run proves nothing, right? Okay, what about figure 5? Ignore the temperature dependence for now, but would you have picked out the peaks that they picked out? Okay, so these things make me a little uneasy. Okay, so surely they did a lot of runs and tried to get some statistics on the effect. Indeed, they did something like this. This is figure 6. And this is what makes the paper frustrating: “Many measurements were conducted over a period from June to November 2005 to show the reproducibility of the results. Fig. 6 summarizes nearly 200 peaks of in-ring and above-ring tangential accelerations measured by the sensor and angular acceleration applied to the superconductors as identified e.g. in Fig 4 with both electric and air motor.” Why is this frustrating? Well, because I have no clue how they analyzed their runs and obtained the tangential accelerations. Were the peaks extracted by hand? (And what is the angular acceleration? Is it the average acceleration?) Argh, I can’t tell from the paper.
Well certainly this is a very interesting set of papers (the theory paper is pretty crazy: if I get a chance I’ll try to go through it in some detail and post on it.) I would personally love to know more about the experiment. I spent this afternoon pondering if I could think up a way in which they get the effect they do, but I’m certainly no expert on this stuff, and might be missing something totally obvious.
(Note also that there is another claim of interesting gravitational effects around superconductors made by Podkletnov. For a history of this “anti-gravity” effect see Wikipedia. Since the Podkletnov experiment is considered controversial, the authors of the above article make sure to point out that they are not observing the effect Podkletnov claimed.)

Praying to Entangled Gods

From the Washington Post, in a fair and balanced (*ahem*) article on the effect of prayer on healing:

But supporters say that much about medicine remains murky or is explained only over time. They say, for example, that it was relatively recently that scientists figured out how aspirin works, although it has been in use for centuries.
“Yesterday’s science fiction often becomes tomorrow’s science,” said John A. Astin of the California Pacific Medical Center.
Proponents often cite a phenomenon from quantum physics, in which distant particles can affect each other’s behavior in mysterious ways.
“When quantum physics was emerging, Einstein wrote about spooky interactions between particles at a distance,” Krucoff said. “That’s at least one very theoretical model that might support notions of distant prayer or distant healing.”

Well yeah, it might support the notions of distant prayer or distant healing, except that it explicitly doesn’t support those notions since entanglement can’t be used to signal and hence can’t be used to influence distant objects in the way distant prayer or distant healing would. Argh!