I’d never seen this quote from Richard Feynman on the measurement problem:
When you start out to measure the property of one (or more) atom, say, you get for example, a spot on a photographic plate which you then interpret. But such a spot is really only more atoms & so in looking at the spot you are again measuring the properties of atoms, only now it is more atoms. What can we expect to end with if we say we can’t see many things about one atom precisely, what in fact can we see? Proposal,
Only those properties of a single atom can be measured which can be correlated (with finite probability) (by various experimental arrangements) with an unlimited no. of atoms.
(I.e. the photographic spot is “real” because it can be enlarged & projected on screens, or affect large vats of chemicals, or big brains, etc., etc. – it can be made to affect ever increasing sizes of things – it can determine whether a train goes from N.Y. to Chic. – or an atom bomb explodes – etc.)
This is from a set of notes dating to 1946, as detailed in Silvan S. Schweber, “Feynman and the visualization of space-time processes,” Rev. Mod. Phys. 58, 449–508 (1986).
This is an interesting statement because it is very reminiscent of how some of the founders of QM approached the problem. In my head I have often “visualized” it much the way one might picture some sort of cascade-like detector.
The really intriguing part is that one could read a hidden-variables-like assumption into this statement. Personally I’m agnostic on the issue, though if tortured I would probably deny their existence. But his explanation does leave the door open to such a possibility while, at the same time, making quite a bit of logical sense. Indeed, I can’t see anything wrong with the argument in general, though I’ve thought about it for a grand total of ten minutes.
Correct me if I’m wrong (I’m not a physicist):
He seems to be saying that if you start inferring stuff from the very first observations, your level of accuracy is going to decrease with every layer of further inferred conclusions. Therefore, at every level you have to test rigorously, and test many, many times, in order to maintain any reasonable assurance that your conclusions are based on what is actually happening with the phenomena you are observing.
The danger of inferring wrong conclusions is especially high when using indirect means for your observations (such as photographing shadows produced by events or particles).
This sounds a lot like what Zurek calls “Quantum Darwinism”.
Well, since Matt said it first, I get to agree with him — Feynman’s proposal is pretty close to the idea that motivates the QD program. QD goes further, by asking not just “Can a given property become arbitrarily redundant?” but “Does a given property become redundant?” and “Which properties become redundant?” in a given physical situation.
Nonetheless, I think Feynman’s proposal is the most important part. Well, more precisely, the question that it suggests (“What properties can be correlated with an unlimited # of atoms?”) is a really good one. AFAIK, the first complete answer was given by Lindblad in Lett. Math. Phys. 47, 189–196 (1999). Lindblad’s paper is really beautiful, but I can’t help assuming that if Feynman had taken the question seriously he would have figured out the answer himself.
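To put the question in symbols (a schematic of my own, not Lindblad’s notation): Feynman’s photographic spot corresponds to a branching state in which one and the same value of the system gets copied into more and more record-carrying subsystems:

```latex
% Amplification / branching structure, schematically:
\[
  \bigl(\alpha\,|0\rangle_S + \beta\,|1\rangle_S\bigr)\,|r\rangle^{\otimes N}
  \;\longrightarrow\;
  \alpha\,|0\rangle_S\,|r_0\rangle^{\otimes N}
  \;+\;
  \beta\,|1\rangle_S\,|r_1\rangle^{\otimes N},
  \qquad \langle r_0 | r_1 \rangle \approx 0 .
\]
% Any one record fragment reveals which branch it sits in, and N can grow
% without bound (film grain, enlarger, vat of chemicals, big brain, ...).
% The relative phase of \alpha and \beta, by contrast, is copied into no
% fragment at all.
```

That, as far as I understand it, is the shape of the answer: the redundantly recordable properties are those picked out by a single commuting (“pointer”) observable, and trying to record two non-commuting observables with unlimited redundancy runs into the no-cloning/no-broadcasting theorems.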
Incidentally, Rob Spekkens pointed out this quote to me about a year ago… but from a different source, “QED and the Men Who Made It”. Now Dave brings it up. Serendipity, or common causation?
I wonder what Feynman would have thought of the farcical movement claiming that “decoherence” effectively explains collapse, and why we don’t see macroscopic superpositions of states (i.e., explains away the Schrödinger’s Cat problem). There are so many things wrong with “decollusion”:
1. It is a circular argument, since the probabilities (from moduli squared) are entered into the density matrix to start with. But those are the very thing that is supposed to be derived from the behavior of the wave amplitudes, without an inexplicable intervention. If you want to explain how A leads to B, you put A into a model and see whether the model generates B; you don’t stick B in there at the start and then pretend you have explained the appearance of B.
2. A density matrix can be considered a set combining various possible wave functions. Well, that is excusable if you are trying to model an actual situation you don’t know everything about, but a modeler making a theoretical point has no excuse to do such a thing. The modeler is supposed to stipulate a state of affairs to show what would happen if things were indeed like that; the state is created by the modeler, not fitted to some particular real example. But the density matrix helps to obfuscate and hide the fallacy of the circular argument, improperly combining the probabilities of having various WFs with the probabilities from squared moduli – two completely different sources of probability (a numerical sketch of this distinction follows the list).
3. Decoherence advocates talk of the WFs in a given single case becoming a “mixture.” But a mixture is supposed to be an actual concurrent combination of various particles or states (like mixing together photons of various polarizations and phases), not just a sloppy way of talking about how the phases in a given case (for a particular particle, like a single photon ready to be measured) are unpredictable or vary from case to case.
4. They say that the states after decoherence “can’t interfere with each other,” as if that were either a legitimate way to talk about a single superposition, or as if it would somehow magically make one state dominate and the other fade away anyway. Note that “interference” is a global way to talk about the effects of superposition, which applies regardless of the phase. There is no specific phase relation which constitutes “interference” or the lack of it; the concept means that, given numerous examples of superposition, there is a regular pattern in time and/or space. It is silly to even say that about, e.g., one wave compared to another.
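To make point 2 concrete, here is a minimal numpy sketch of the two “sources of probability” being compared: a proper mixture built from classical ignorance of which pure state was prepared, versus the reduced state of a single entangled system-environment pair. The two-qubit example and the variable names are my own illustration, not taken from any of the works discussed; it is a sketch of the standard textbook calculation, not a verdict on what the coincidence means.

```python
import numpy as np

# Single-qubit basis states as column vectors.
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])

# (a) "Proper" mixture: classical ignorance about which pure state was prepared.
#     With probability p the qubit is |0>, with probability 1 - p it is |1>.
p = 0.5
rho_proper = p * (ket0 @ ket0.T) + (1 - p) * (ket1 @ ket1.T)

# (b) "Improper" mixture: one single run in which the qubit is entangled with an
#     environment qubit, (|0>|E0> + |1>|E1>)/sqrt(2), and the environment is then
#     ignored (traced out).  No ignorance about the global state is assumed here.
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)        # 4x1 vector
rho_joint = psi @ psi.T                                               # 4x4 pure-state projector
rho_improper = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # partial trace over env

print(rho_proper)     # [[0.5 0. ]
                      #  [0.  0.5]]
print(rho_improper)   # numerically identical matrix
```

The two matrices come out identical, which is precisely why the argument over what a density matrix “means” in a single run is so persistent: the formalism itself does not distinguish the two preparations.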
BTW, I thrashed this out pretty fast, so it may be a bit rough and not as sharply put as it could be, but you should see that decollusion (as I call it) is questionable. I’m not the only one who says it’s a circular argument, and much of my complaint is similar to Penrose’s take.
Neil,
The discussion of decoherence — and just what it accomplishes (or may accomplish in the future) with respect to the measurement problem — is a library shelf or two, so please forgive my terseness here.
Decoherence doesn’t solve the measurement problem. Unfortunately, the idea that it does (or that its serious proponents think that it does) has taken root. It also doesn’t explain Born’s Rule, although Zurek claims to have derived Born’s Rule using closely related physics in quant-ph/0405161. Whether that derivation is valid is still being debated.
Moving on, decoherence theory does provide some resolution to Schroedinger’s cat. Specifically, it shows (within orthodox quantum theory) that you won’t be able to observe any evidence that anything weird and quantum is going on with the cat. This is an operational statement, and fairly hard to disagree with. Where people get into trouble is in applying ontological language to it — “The wavefunction is converted into a mixture,” etc, etc. As Bohr pointed out, quantum theory weirds language.
As a concluding note, though, I’ll point out that you can do decoherence theory (e.g., to satisfy yourself that Schroedinger’s cat doesn’t have observable quantum consequences) without ever resorting to a density matrix. You do have to invoke Born’s Rule, though.
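Here is a minimal sketch of that last remark, as a toy two-path (Mach-Zehnder-style) calculation done entirely with amplitudes and Born’s Rule; the setup and the names are my own illustration, not anyone’s published model. Whether or not the two paths imprint distinguishable “which-path” records on a single environment qubit decides whether the output port shows fringes:

```python
import numpy as np

# Toy two-path interferometer, computed entirely from amplitudes + Born's Rule
# (no density matrix).  Each path may imprint a "which-path" record on a single
# environment qubit.
record_A = np.array([1.0 + 0j, 0.0])       # environment record left by path A
record_B_same = record_A                    # path B leaves the same record: no which-path info
record_B_orth = np.array([0.0, 1.0 + 0j])   # path B leaves an orthogonal record: full which-path info

def bright_port_prob(phase, record_B):
    """Born-rule probability of a click at the 'bright' output port, for a given
    relative path phase and a given environment record imprinted by path B."""
    amp_A = (1 / np.sqrt(2)) * record_A                    # path A amplitude times its record
    amp_B = (np.exp(1j * phase) / np.sqrt(2)) * record_B   # path B amplitude times its record
    out = (amp_A + amp_B) / np.sqrt(2)                     # recombine at the final beamsplitter
    return float(np.vdot(out, out).real)                   # |amplitude|^2, summed over env states

phases = np.linspace(0, 2 * np.pi, 5)
print([round(bright_port_prob(ph, record_B_same), 3) for ph in phases])  # [1.0, 0.5, 0.0, 0.5, 1.0]: fringes
print([round(bright_port_prob(ph, record_B_orth), 3) for ph in phases])  # [0.5, 0.5, 0.5, 0.5, 0.5]: washed out
```

The fringe visibility is governed by the overlap of the two environment records; once they are orthogonal, no measurement on the photon alone can reveal interference, which is the operational statement made above. Note that nothing here says which detector fires on a given run.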
BTW (and I figure three in a row is enough for a long time), what bothered me about, e.g., Chad Orzel’s attempt to justify decoherence (Uncertain Principles: “Many-Worlds and Decoherence …”) was his use of an ensemble of photon-hit experiments, with phases varying from one run to the next (not to be confused with the status of any one run by itself!), to pretend to explain how “decoherence” somehow makes each experiment end up with a photon hit at one detector or the other, instead of superposed at both. Well, it just utterly failed, and no matter how many times I explained that it was a circular argument, etc., and pointed out that ensemble results don’t tell us why the outcome is detector “A” sometimes and “B” other times, he just couldn’t accept my criticism and kept repeating his point (and I mine). Sad; it was a waste of dialog, and the same mad dance came up again. But like I said, comparing across trial runs is not the same as saying what happens in any one case, which is where the true “problem” of measurement resides.
BTW (and I’ll figure three in a row is enough for a long time) what bothered me about e.g. Chad Orzel’s attempt to justify decoherence (Uncertain Principles: “Many-Worlds and Decoherence …”), was his use of an ensemble of experiments of photon hits, with varying phases from one case to the other (not to be confused with the status of any one run by itself!) to pretend to explain how “decoherence” somehow makes the experiment end up with photon hits at one or the other detector, instead of superposed at both. Well, it just utterly, utterly failed, and no matter now many times I explained it was a circular argument etc and implied that ensemble results don’t tell us why it turns out “A” detector sometimes and “B” others, he just couldn’t accept my criticism and kept repeating his point (and I mine.) Sad, it was a waste of dialog and the same mad dance came up again. But like I said, comparing across trial runs is not the same as saying what happens in any one case, which is where the true “problem” of measurement resides.
Thanks Robin, I’ll look into that. Nevertheless, the issue seems to remain: there were two or more “states,” and later we find only one of them, so somehow one was “lost” or removed. The question of which one and why requires either a deterministic process going on below that made one or the other outcome inevitable, or a genuinely mysterious “choice” that does not follow by necessity (true randomness). How and when does this happen? And look at situations like the decay of a presumably structureless (?!) muon at some unpredictable moment: there aren’t even things inside it to mark time, to interfere with each other or with an environment, etc.
As for Born’s Rule (squared moduli for probability, right?), I think there is a justification for its form, in that the intensity of a classical wave must be the amplitude squared, and the photon count must be consistent with that (rough take). It is what converts a wave function holding two or more potentia into one or the other that’s the problem, and I don’t see how muddled phase relationships would make that happen, or keep one or the other potential outcome hidden somehow instead of both just mucked up there together, to use crude middle-brow language.
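For concreteness, here is the rule I mean, in a small sketch of my own (the state a|R> + b|L> is just a generic two-outcome example, and the amplitudes are arbitrary): the squared moduli give the outcome probabilities and sum to one, but nothing in the rule says which outcome occurs on a given run.

```python
import numpy as np

# Born's rule for a generic two-outcome state a|R> + b|L> (normalized):
# P(R) = |a|^2 and P(L) = |b|^2.
a = 1 / np.sqrt(3)
b = np.sqrt(2 / 3) * np.exp(1j * 0.7)        # arbitrary complex amplitude
probs = np.array([abs(a) ** 2, abs(b) ** 2])
probs /= probs.sum()                          # guard against floating-point drift

print(probs)            # ~ [0.333, 0.667]

# The rule fixes only the statistics; which outcome shows up on a single run
# is simulated here with a pseudo-random draw, because the formalism itself
# does not say.
rng = np.random.default_rng(1)
print(rng.choice(["R", "L"], size=10, p=probs))
```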
Slight correction: Born’s rule is basically the projection rule, not just the inherent-amplitude formula. So instead of simply saying a photon in a|R> + b|L> has an |a|^2 chance of being found right-handed, etc., it means deriving from projection the chance that, e.g., a photon linearly polarized at 40 degrees will pass a linear filter set at 20 degrees. But the basic principle is the same, and still “mysterious” from a deterministic standpoint.
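Spelled out as a sketch (my own numbers, purely for illustration): projecting the 40-degree polarization state onto the 20-degree filter axis gives an amplitude of cos(20°), so the passage probability is cos²(20°) ≈ 0.88, which is just Malus’s law read as a single-photon probability.

```python
import numpy as np

def pass_probability(photon_deg, filter_deg):
    """Projection form of Born's rule for linear polarization:
    amplitude = <filter|photon> = cos(angle difference), probability = amplitude squared."""
    photon = np.array([np.cos(np.radians(photon_deg)), np.sin(np.radians(photon_deg))])
    filt = np.array([np.cos(np.radians(filter_deg)), np.sin(np.radians(filter_deg))])
    return (filt @ photon) ** 2

print(pass_probability(40, 20))    # cos^2(20 deg) ~= 0.883
print(pass_probability(40, 130))   # orthogonal axes: ~ 0.0
```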
Neil,
You’re right that collapse is problematic. We observe single irreversible outcomes, but quantum theory doesn’t seem to have a natural mechanism to explain this (Copenhagen is, arguably, not natural).
However, at the risk of repeating myself, decoherence theory doesn’t even attempt to explain this. That’s not its purpose. Chad’s long post is pretty good (e.g. “The only thing the interactions with the environment do is to obscure the interference…”), but the 3rd paragraph is a bit carelessly worded. It’s very very hard to get language right in this context. I suggest not relying on that paragraph.
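Just to make that statement concrete, here is a minimal sketch of my own (a toy dephasing channel, not anything from Chad’s post): coupling a qubit to an unobserved environment shrinks the off-diagonal terms of its reduced density matrix, so interference disappears from the predicted statistics, while the diagonal entries, the outcome probabilities, are untouched. Nothing in it selects a single outcome.

```python
import numpy as np

# Qubit prepared in the superposition (|0> + |1>)/sqrt(2).
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())     # density matrix with full off-diagonal coherence

def dephase(rho, gamma):
    """Toy dephasing channel: scale the off-diagonal elements by exp(-gamma),
    mimicking the effect of tracing out an environment that has partially
    'recorded' which branch the qubit is in."""
    damp = np.array([[1.0, np.exp(-gamma)],
                     [np.exp(-gamma), 1.0]])
    return rho * damp

for gamma in (0.0, 1.0, 5.0):
    r = dephase(rho, gamma)
    print(f"gamma={gamma}: diagonal={np.real(np.diag(r))}, coherence={abs(r[0, 1]):.3f}")

# The diagonal (the outcome probabilities) stays at 0.5/0.5 throughout, while the
# coherence decays 0.500 -> 0.184 -> 0.003: interference is obscured, but nothing
# here says which outcome actually occurs in a given run.
```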
What is going on with collapse remains an open question in quantum foundations. Half of us claim it’s not even a problem, and the other half disagree violently about how to solve it. I won’t try to tell you how to think about it, but the following broad insight may be useful.
The distinction between ontological and epistemic statements is valuable in discussing QM.
Ontological: “The system is in state Psi.”
Epistemic: “I describe the system by Psi.”
Ontological: “Detector #2 clicked.”
Epistemic: “I saw Detector #2 click.”
Ontological statements are tempting. However, if you take unitary evolution seriously, they are hard to justify. They tend to require collapse. “I saw Detector #2 click,” however, does not necessarily require objective collapse. That may just be what it feels like to be entangled with a photon.
Hi Robin,
This is my first post on the Bacon Blog. I wish to support your entry just above, with the offering:
The “quantum measurement problem” is the search for a mechanism by which quantum amplitudes are converted into experimental actualities. The QMP, depending on one’s point of view, is either unsolvable or not a problem.
That is, if you want a physical mechanism, it is unsolvable. But if you want a logical mechanism, there is no problem.