94 Replies to “Quantum Information Science Workshop Report”

  1. Thanks for the link, Dave. Perusing the PDF, I note a reference to my old nemesis, “decoherence.” As I’ve said here and elsewhere, that process can’t come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination). Interested readers can delve into the discussion I started at Tyrannogenius (“Dish on MWH”). Briefly, I charge that decoherence is a circular argument and has other flaws. Yes, I know proponents say decoherence doesn’t really/finally “explain collapse” anyway; I say it accomplishes nothing worthwhile at all, even instructionally. This is in the context of ragging on the MWH (more on that later). Of course decoherence can affect the patterns, usefulness, etc. of hits and the interaction of waves; it has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay mixed together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at, e.g., randomly varying relative phase shifts between different instances of a run of single-photon shots into a Mach-Zehnder interferometer. Their case goes like this: the varying phases make the output random, from either the A or the B channel, instead of the guaranteed output (into, e.g., channel A) that interference otherwise dictates in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue that such behavior has become “classical.” Somehow we are thus supposedly relieved of even worrying about what happened to the original superpositions, which evolution per the wave equation says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment to argue the status of any given event. I mean, if the relative phase δ between the split WFs happened to be 70°, then the output amplitude in channel A is cos(δ/2) = cos 35° ≈ 0.8192, and the output amplitude in channel B is sin(δ/2) = sin 35° ≈ 0.5736 (see the quick numerical check below). In another case, with a different relative phase, the amplitudes would be different – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs says. It doesn’t have a post-modernist escape clause saying that if things change around the next time, and the next time, you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
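    For concreteness, here is a minimal numerical sketch (Python; it assumes the standard ideal Mach-Zehnder convention in which the two output amplitudes go as cos(δ/2) and sin(δ/2) of the relative phase δ). It checks the 70° numbers above and illustrates the distinction I’m drawing: each individual run has amplitude in both channels, and only the run-to-run average of detection probabilities flattens to the featureless 50/50 that gets called “classical.”

    ```python
    import numpy as np

    def mz_output_amplitudes(delta_deg):
        """Output amplitudes of an ideal Mach-Zehnder interferometer for a
        single photon, given relative phase delta between the arms.
        Convention (an assumption here): |A| = cos(delta/2), |B| = sin(delta/2)."""
        half = np.deg2rad(delta_deg) / 2.0
        return np.cos(half), np.sin(half)

    # A single instance with a 70-degree phase offset: amplitude in BOTH channels.
    a, b = mz_output_amplitudes(70)
    print(f"delta=70deg: |A|={a:.6f}, |B|={b:.6f}")    # 0.819152, 0.573576
    print(f"|A|^2 + |B|^2 = {a**2 + b**2:.6f}")        # 1.0 -- still one superposition

    # An ensemble with random phases: averaging the detection PROBABILITIES
    # over many runs washes out interference (each channel fires ~half the time),
    # but every individual run above still had amplitude in both channels.
    rng = np.random.default_rng(0)
    deltas = rng.uniform(0, 360, 100_000)
    pA = np.cos(np.deg2rad(deltas) / 2) ** 2
    print(f"mean P(A) over random phases = {pA.mean():.3f}")  # ~0.5
    ```

    That ensemble average at the end is exactly the quantity the decoherence argument trades on; note that computing it does nothing to the single-run superposition.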
    And what about the case where we don’t have messed-up phases but a consistent phase offset of, say, 70° across instances – then what? So there really isn’t, or shouldn’t be, a collapse then, just waves remaining in both output channels? That isn’t what happens, you know. Chad said the other WF doesn’t have to go away (as if to “another world”); the two just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and yet the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is that it’s a sort of two-stage mechanism (in effect). You start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description, because we don’t always know what WFs are “really there.” But then there’s mishandling of two kinds. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes); but that takes a “collapse” mechanism for granted, so it can’t be used later in an argument attempting to “explain” collapse. Second, if we just had Schrödinger evolution, the DM would merely tabulate the likelihood of having various combinations of amplitudes, and that’s all – there wouldn’t be any “hits” to even be trying to “explain.” (The sketch below makes this two-stage structure explicit.)
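    Here is a companion sketch (same conventions and basis labels as the previous one, assumed for illustration only): averaging the single-run pure-state density matrices over random phases yields the diagonal “decohered” matrix, but that diagonal is still just a table of Born-rule probabilities – i.e., it presupposes the very collapse step at issue.

    ```python
    import numpy as np

    def pure_state_dm(delta):
        """Density matrix of the single-run state (|A> + e^{i*delta}|B>)/sqrt(2),
        written in the {|A>, |B>} basis."""
        psi = np.array([1.0, np.exp(1j * delta)]) / np.sqrt(2)
        return np.outer(psi, psi.conj())

    # Each single run is a pure state: off-diagonal (coherence) terms present.
    print(np.round(pure_state_dm(np.deg2rad(70)), 3))

    # Averaging over an ensemble of random phases kills the off-diagonals:
    rng = np.random.default_rng(1)
    deltas = rng.uniform(0, 2 * np.pi, 50_000)
    rho_avg = np.mean([pure_state_dm(d) for d in deltas], axis=0)
    print(np.round(rho_avg, 3))   # ~[[0.5, 0], [0, 0.5]] -- the "decohered" DM

    # Reading detection probabilities off that diagonal still invokes the Born
    # rule (a collapse postulate); Schrödinger evolution alone only hands you
    # this table of squared amplitudes, never discrete "hits".
    ```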
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict between the EI and any acceptance of a “real” wave function in each instance, one that evolves according to, e.g., a Schrödinger equation.

  2. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  3. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  4. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  5. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  6. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  7. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  8. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  9. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  10. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  11. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  12. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  13. Thanks for the link Dave. Perusing the PDF, I note reference to my old nemesis “decoherence.” As I’ve said here and elsewhere, that process just can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH. Briefly, I charge that deco is a circular argument and has other flaws. Yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. I say it accomplishes nothing worthwhile at all, even instructionally. This is in a context of ragging on the MWH (more on that later.) Now of course decoherence can affect the patterns or usefulness etc of hits and the interaction of waves. It has a role. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.
    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They (like Chad Orzel) tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.
    Well, that argument is unsuitable for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.
    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! There wouldn’t be any “hits” to even be trying to “explain.”
    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  18. Neil,
    You seem to be going off the deep end a little here. It seems that you are misunderstanding how decoherence actually comes about. Let’s imagine two systems, A and B, which represent the system of experimental interest (A) and some local environment which we do not monitor (B). If the systems are in initial states rho_A and rho_B respectively, which we can assume to be pure states, then the state of the compound system is given by rho_A \otimes rho_B. Now, let’s imagine some interaction Hamiltonian H which couples systems A and B. Then the state after time t will be given by exp(-iHt) (rho_A \otimes rho_B) exp(iHt), which is of course still a pure state. However, since we do not monitor system B we can’t actually measure it, and so the state we have access to is given by Tr_B(exp(-iHt) (rho_A \otimes rho_B) exp(iHt)). Tracing out the environment means that if the two systems have become entangled, then although the compound system is in a pure state, the reduced density matrix for system A at time t, rho_A(t), does not satisfy rho_A(t)^2 = rho_A(t), and so cannot be written as a pure state. So what is happening isn’t really a degradation of coherence but rather a delocalising of coherence. Since we cannot measure the state of the environment, the entanglement between the two systems leads to an effective degradation of the coherence. So, while the global state remains pure, the local reduced density matrices become mixed.
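    Concretely, here is a small numerical sketch of that calculation (the coupling Hamiltonian below is an arbitrary illustrative choice, not tied to any particular physical system):

      import numpy as np
      from scipy.linalg import expm

      sx = np.array([[0, 1], [1, 0]], dtype=complex)
      sz = np.array([[1, 0], [0, -1]], dtype=complex)

      psi_A = np.array([1, 1], dtype=complex) / np.sqrt(2)  # system A: pure superposition
      psi_B = np.array([1, 0], dtype=complex)               # environment B in |0>
      psi = np.kron(psi_A, psi_B)                           # joint pure state

      H = np.kron(sz, sx)                # illustrative A-B coupling (an assumption)
      psi_t = expm(-1j * 0.7 * H) @ psi  # unitary evolution: still globally pure

      rho = np.outer(psi_t, psi_t.conj())                      # joint density matrix
      rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # Tr_B: trace out B

      # Purity Tr(rho_A^2) equals 1 for a pure state and drops below 1 once
      # A and B are entangled: here it comes out ~0.51, so rho_A is mixed.
      print(np.trace(rho_A @ rho_A).real)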

  19. Joe – deep end, really? It’s well possible I don’t adequately understand the decoherence scheme, but in turn I’d like its followers to at least try to appreciate my criticisms of it. They’re based on fundamental logical flaws, IMHO, and those can’t be fixed by yet further elaboration of the sort you’re giving. I note you never actually said how or why, e.g., the states being entangled or losing local coherence results in a detection event in one place instead of another, with immediate loss of any chance of it happening elsewhere (the classic collapse problem).
    (BTW, your notation looks a bit confusing – I know it’s hard to do right in a comment box.)
    I already explained why the DM is flawed, but here’s another tack: DMs are about ensembles and/or the chances of WFs being this or that, and are thus inappropriate for modeling a given instance. Also, the detection probabilities (i.e., collapses) are “put in by hand” by squaring the various amplitudes in the matrix. That invites circular arguments that seem to help explain collapse. Otherwise we’d just leave the sets of amplitudes as they are (as in plain quantum evolution), and the interventionary work of picking and choosing among them would still have to be done.
    Also, not monitoring the rest of the system is irrelevant to the inherent issue of why the waves don’t remain extended and combined together (coherent or not). I notice you talk in terms of how we describe the system using certain particular constructs – maybe those constructs are inadequate because they confuse or misframe the quantities? If a theory is good, we should be able to use various techniques and get agreement. Whether we can “write” the system as a pure state doesn’t really tell me how the distribution of WFs changes with time – how they behave in the universe. Look at my comment at http://scienceblogs.com/pontiff/2009/07/like_space_camp_but_quantized.php to see how we can test the meaningfulness of such schemes by requiring a graphic representation in space and time.
    Here’s a practical test: start with a simple half-silvered mirror, and get a photon WF to “split” there as per standard theory (before a measurement). Have the split wave approach two detectors, mapping out its progress in space and time. Then have one detector register a “hit,” and tell me what you’re going to do with the WF representing the photon. Tell me what it actually does in space and time, not through circumambulation.
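    For the record, here is all the standard recipe specifies (a sketch under the usual textbook convention of a 50/50 split plus projection at the click) – note that it says nothing about what the WF does in space and time:

      import numpy as np

      psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # [toward D1, toward D2]
      print(np.abs(psi) ** 2)                             # [0.5, 0.5]: hit chances

      # On a click at D1, the textbook rule projects and renormalises "by hand":
      P1 = np.diag([1.0, 0.0]).astype(complex)            # projector onto the D1 path
      psi_after = P1 @ psi
      psi_after = psi_after / np.linalg.norm(psi_after)
      print(np.abs(psi_after) ** 2)                       # [1.0, 0.0]: the "collapse"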

  36. Neil, why are you posting your arguments under random posts on Dave’s blog? It seems a little impolite to hijack a comment section in that way.
    I didn’t specify the exact physical interaction that occurs because it depends strongly on the system. Various systems couple to the environment in different ways. For solid state systems, magnetic dipole interactions with the environment (usually in the form of neighbouring nuclear spins) tend to dominate, while for photons the case is different, as photons aren’t charged, so decoherence usually occurs through coupling into other modes. This isn’t meant to be a definitive list, as real physical systems are messy and there are lots of degrees of freedom to couple to.
    By the way, I know my notation looked ugly but it is basically LaTeX without most of the leading slashes, so it tends to be understood.
    I should also point out that I was not using the density matrix to model a probability distribution. In fact I went to some length to point out that it was representing a pure state at all stages. It is entirely appropriate for such a calculation, and you get exactly the same answer as if you work it out with wavefunctions.
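    The equivalence Joe is invoking here is easy to check numerically. A minimal sketch (a generic single-qubit pure state with arbitrary example amplitudes): the density matrix rho = |psi><psi| of a pure state reproduces exactly the probabilities computed from the wavefunction, and its purity Tr(rho^2) stays 1 throughout.

```python
import numpy as np

# A generic pure state |psi> = a|0> + b|1> (arbitrary example values).
a, b = np.sqrt(0.7), np.sqrt(0.3) * np.exp(1j * 0.4)
psi = np.array([a, b])

# Density matrix of the SAME pure state: rho = |psi><psi|.
rho = np.outer(psi, psi.conj())

p_wf = np.abs(psi) ** 2            # probabilities from the wavefunction
p_dm = np.real(np.diag(rho))       # probabilities from the density matrix

print(p_wf)                          # [0.7, 0.3]
print(p_dm)                          # [0.7, 0.3] -- identical
print(np.real(np.trace(rho @ rho)))  # purity = 1.0: pure at all stages
```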

  37. Well Joe, then maybe I don’t know what the point of decoherence is – but I’ve heard it said (if not from you) that it “tells us why we don’t find macroscopic superpositions.” I explained why it doesn’t tell us that, in response to the arguments I often hear. My critique of their typical use of the DM closely follows the misgivings of Roger Penrose, who isn’t God but should be taken seriously. So, the decoherence happens, but then the point is “so what” – we still have a superposition of extended states until some collapse removes one and shrinks up the other (or the one by itself, if it’s just a particle going from sharp momentum & wide position to wide momentum & sharp position.) I guess I’m waiting for the “so what” to be applied to the classic case of collapse.
    You wrote: “It is entirely appropriate for such a calculation, and you get exactly the same answer as if you work it out with wavefunctions.” OK, and if you work out anything with WFs then you get “wavefunctions” that stay WFs, unless and until a supervention like a collapse process occurs – that’s the whole point, right? But how nature ends up in a collapsed condition is the problem we can’t understand or make fit. Decoherence enthusiasts claim (whether you do or not) that deco really does help us get there (or drift off into MWI weirdness, or evasion.)
    As for threadjacking, well – I think you see too much in the bare fact that I mentioned the same subject in three threads running. If it could be plausible anyway, then I lucked out. No, not “random.” (BTW, this is a QM blog!) In the humor case it wasn’t really on topic, but such a subject doesn’t lend itself to clear topic discipline anyway, does it? And the quantum camp thread: the issue of collapse and how to get little sparks out of spread-out waves is a fundamental issue that students will have to confront – no way that’s off-topic. Decoherence is heavily marketed as either a pseudo-explanation of collapse, or as a setup for MWI, and I think it’s a snow job. Penrose does too, although more diplomatically. It’s a controversial “hot topic” and worth talking about.

  39. What I have been trying to point out is that if you do not have access to the whole system you get something that looks like collapse of the wave function, but is actually just a unitary operation on a larger (though partially inaccessible) Hilbert space. So just because a state acts like it’s being weakly measured doesn’t mean that there is any collapse of the wave function going on. This is true in all interpretations of quantum mechanics, as it is simply what emerges from the Schroedinger equation. The difference between many worlds and something like the Copenhagen interpretation is that in the many worlds interpretation this accounts for all loss of coherence, while the Copenhagen interpretation allows for special “measurement” events.
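    A minimal sketch of this point (the textbook toy model: one system qubit coupled to one environment qubit by a CNOT, which stands in for a generic interaction): the joint evolution is perfectly unitary, yet tracing out the inaccessible environment leaves a reduced state with no coherences, exactly as if the qubit had been measured, with no collapse postulate invoked anywhere.

```python
import numpy as np

# System qubit in superposition, environment qubit in |0>.
plus = np.array([1, 1]) / np.sqrt(2)
env0 = np.array([1, 0])
state = np.kron(plus, env0)            # joint pure state

# A CNOT (system controls environment) as a stand-in for a generic
# unitary system-environment coupling.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ state                   # still globally pure and unitary

rho = np.outer(state, state.conj())

# Partial trace over the (inaccessible) environment qubit.
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_sys)                         # [[0.5, 0], [0, 0.5]]
# Off-diagonals are gone: it "looks like" a measurement happened,
# but nothing non-unitary occurred anywhere.
print(np.real(np.trace(rho_sys @ rho_sys)))   # purity 0.5 < 1
```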
    But before you start arguing with me over this, let me point out that decoherence is regularly observed in experiments. While you might suggest that there is actually some kind of dynamical collapse going on, you would be wrong. How do we know? Well, we actually have a pretty good idea of the different interactions that systems undergo with their environment. As I mentioned before, dipole coupling to environmental spins is often responsible for the bulk of decoherence in spin qubits. Well, it turns out that if you continuously rotate a spin about a particular axis the dipole interaction completely cancels itself out. Thus we can use dynamical decoupling to cancel out the coupling term between the system of interest and environmental spins. This has been demonstrated in two regimes: i) where the system is manipulated, and ii) where the system is left alone but the environmental spins are pulsed (as is often done with H nuclei in NMR). In both cases it is possible to dramatically reduce the rate of decoherence. This is hard evidence that the observed loss of coherence occurs through a unitary coupling with the environment.
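    A toy illustration of the refocusing idea (a minimal sketch, not any specific experiment: pure dephasing from a random static environmental frequency shift, cancelled here by a single Hahn-echo pi-pulse rather than by continuous rotation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, t = 2000, 1.0

# Each run: a random static frequency shift from the environment
# dephases the qubit; coherence is the ensemble average of exp(i*phi).
delta = rng.normal(0.0, 5.0, n_runs)

# Free evolution: phases spread out and the coherence decays.
free = np.mean(np.exp(1j * delta * t))

# Hahn echo: a pi-pulse at t/2 flips the sign of the phase accumulated
# afterwards, so the second half undoes the first (for a static shift).
phase = delta * t / 2 - delta * t / 2
echo = np.mean(np.exp(1j * phase))

print("coherence, free evolution:", abs(free))   # ~0 (dephased)
print("coherence, with echo     :", abs(echo))   # 1.0 (recovered)
```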

  40. Thanks for the well-intentioned efforts, Joe. I think I learn something from it but I still question some things. E.g., risky phrases like “looks like” (you mean, it doesn’t “really happen”?), and you also indulge in that weird talk of some things being “inaccessible.” Uh, that’s not really a rigorous idea, is it, if they are still “in our universe”? I think you’re confusing some, well, confusing elements of misleading math structures with actual properties and implications. And MWI is just a dope fantasy IMHO; it says nothing intelligible. The measurement would still have to determine a special moment for “separation” [instead of “removal of the lost wave”], or you’re back to having the states all jumbled on top of each other anyway.
    Here’s a quick jab I made against MWI: how many “worlds” do you need to express unequal probabilities, like 70/30? Not “70 of one world and 30 of the other” – that reifies arbitrary base 10. To avoid the arbitrary, then “infinite number of each”? – but then how do you get proportions out of infinite (aleph null) sets, per Cantor? (I know this better than the QM.)
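    For concreteness, the 70/30 case is just a two-branch state with unequal amplitudes; in standard usage the weights come from squared moduli, not from any count of worlds (how many-worlds is supposed to recover exactly these weights is the contested point):

\[
|\psi\rangle = \sqrt{0.7}\,|A\rangle + \sqrt{0.3}\,|B\rangle, \qquad
P(A) = \bigl|\sqrt{0.7}\bigr|^{2} = 0.7, \quad P(B) = 0.3 .
\]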
    To put it pithily, as I should have said: lots of what you think are implications etc. of decoherence are IMHO artifacts of misleading or otherwise inappropriate analytical structures. Take what Penrose said in SOTM p. 317:
    “But with a density matrix there is a (deliberate) confusion, in this description, between these classical probabilities, occurring in this probability-weighted mixture, and [versus] the quantum mechanical probabilities that would result from the R-procedure” [RP’s shorthand for state reduction – i.e., COTWF].
    Yeah, that’s not “deliberately deceptive” but it fools people; it misleads in a manner akin to a circular argument. Conflating two different things means trouble.
    Also, you say that we really do observe “decoherence” – well sure we do; the loss of coherence itself between WFs can happen. But that doesn’t lead to *consequences* like shrinkage/localization of waves, or the “appearance” (whatever does that mean?!) of same. I note you continue to avoid directly resolving the points I made, and you don’t frame things parallel enough to what I’m getting at – it’s like fighting apples with oranges. Your apples may have merit, but they don’t address my oranges.
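    On this narrow factual point the two sides agree more than it appears: decoherence suppresses the off-diagonal (interference) terms without localizing anything. A minimal sketch (a toy two-branch state, with an ad hoc decay factor standing in for entanglement with an environment):

```python
import numpy as np

# Toy two-branch state: equal amplitude "here" and "there".
psi = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Decoherence suppresses the off-diagonal terms (the ad hoc factor
# below stands in for entanglement with an environment) ...
decay = 1e-6
rho_dec = rho.copy()
rho_dec[0, 1] *= decay
rho_dec[1, 0] *= decay

# ... but the diagonal, i.e. the spatial spread, is untouched:
print(np.real(np.diag(rho_dec)))   # [0.5, 0.5]: still "here" AND "there"
# Nothing in this step picks one branch or shrinks the wave; that
# further step (collapse, or an interpretive story) is separate.
```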
    tyrannogenius
    (well, it’s a clever handle IMHO…)

  57. Hi Neil,
    Perhaps if we intend to continue this discussion Dave’s comment section isn’t the ideal place. You can find my email address via my website if you want to discuss things further.
    Let me just address the points you raise. I used “looks like” since this is a somewhat informal discussion. What I actually meant was that the two cases (nonunitary evolution vs. evolution on a larger Hilbert space) are locally indistinguishable. There is no measurement you can make which distinguishes them. When I talked about part of the system being inaccessible I meant that we cannot access it, either because it is physically impractical (as when there is coupling to local environmental spins) or because the other system follows a path which is outside of our future light cone (as when a photon is spontaneously emitted), in which case it is physically impossible to access. It is of course possible for me to make this explanation very formal, but I think that will simply result in very opaque comments.
    Essentially we are constrained to make measurements on only a subsystem, rather than on the full Hilbert space. The subsystem on which measurements are possible I deem to be ‘accessible’. Any observable which is mutually unbiased with respect to observables on this ‘accessible’ subsystem I will deem to be ‘inaccessible’ (we can call observables with some bias with respect to local observables on the accessible subsystem partially accessible if the need arises).
    This really has nothing to do with interpretations. We need only agree on a few things to guarantee decoherence: 1) the Schroedinger equation is correct, or a sufficiently good approximation to reality to predict behaviour in the systems we discuss, 2) there exists some environmental system which we cannot make measurements on for whatever reason, and 3) there exists a coupling between the experimental system and the environmental system. All three points are satisfied in all experimental systems we know of. But given these prerequisites it is easy to show that decoherence will almost always occur. The only case where it won’t occur is when the coupling between the environment and the experimental system results in a periodic evolution. In practice this will never occur in real systems due to the size of the environmental system, but I raise it for completeness.
    If we agree on these three points, then as I explained in my initial comment there is loss of purity of the accessible system (this is what decoherence is) as time increases. If you believe that one or more of the prerequisites is not met, or dispute my assertion that you tend not to get periodic Hamiltonians between the environment and the system, then I would be shocked, but we would at least have identified where we are diverging. If you accept all four points (the three prerequisites plus the non-periodicity), but insist this never leads to reduced purity (i.e. decoherence), then you are simply making a mistake in your calculations.
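    As a sanity check on the “easy to show” claim, here is a minimal sketch (one qubit coupled to a small random toy environment through a generic Hamiltonian; all numbers arbitrary) in which the purity of the accessible subsystem falls from 1 under nothing but Schroedinger evolution:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    """A generic (non-special) random Hermitian Hamiltonian."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

d_env = 16                                  # a small toy environment
H = rand_herm(2 * d_env)                    # qubit + environment, coupled
w, v = np.linalg.eigh(H)                    # diagonalize for exact evolution

# System qubit in superposition, environment in a fixed initial state.
psi0 = np.kron(np.array([1, 1]) / np.sqrt(2), np.eye(d_env)[0])

for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T   # U = exp(-iHt)
    psi_t = U @ psi0                        # global unitary evolution only
    rho = np.outer(psi_t, psi_t.conj())
    rho_sys = rho.reshape(2, d_env, 2, d_env).trace(axis1=1, axis2=3)
    print(f"t = {t:3.1f}   purity = {np.real(np.trace(rho_sys @ rho_sys)):.3f}")
# Purity starts at 1.0 and falls as the qubit entangles with the
# unmonitored environment -- decoherence from unitary evolution alone.
```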
    Anyway, as I say, here is probably an inappropriate forum to continue such a discussion.
    (Dave, sorry for the sideshow.)

  58. I’m also slightly concerned that by posting an explanation late at night I will end up saying something stupid which will be immortalized for ever in your comments section and Google’s cache!

  59. Heh, Joe, I know what you mean! It’s happened to me – I recently posted a quick mis-take about polarizing beam splitters and time reversal. Thanks for your efforts and your forbearance! We still largely talk at cross-purposes, but some mutual appreciation is possible (uh, don’t you think I made *some* worthwhile criticisms in my mess here?). I’m too middle-brow to follow this with complete rigor anyway – I look at the logic and semantics of the argument more than the harder math.
    Maybe some more comments, but I’ll take the hint if you don’t reply, so I can email etc. – *But don’t forget to check or add to my blog thread at name link.* The results could be put up somewhere to help resolve people’s disagreements and confusion about “decoherence,” which is IMHO tricky quicksand, even worse than the original “straight up” collapse dilemma.
    PS REM to Dave B, you once wrote:

    🙂
    I do not believe the decoherence solves the measurement problem, no.
    Ack, my ride just showed up so I have to run.

    (Posted by: Dave Bacon | March 3, 2009 10:03 PM)
    Good for you!
