Codes, Geometry and Random structures: Day 3

Pranab Sen, Johnson-Lindenstrauss dimension reduction using unitary k-designs

Since the abstract isn’t on the website, I’ve reproduced it here:
Abstract:

The celebrated Johnson-Lindenstrauss lemma tells us that a random projection onto $k$-dimensional Euclidean space almost preserves the $\ell_2$-length of any set of $2^{\Theta(k)}$ vectors with high probability. The lemma has no dependence on the dimension of the ambient space in which the vectors originally lie. One way to implement a random projection is to take a Haar-random unitary operator on the ambient space, apply it to a vector and then project the resulting vector onto its first $k$ coordinates. We show that a unitary poly$(k)$-design can be used for this purpose instead of a Haar-random unitary. This allows us to perform the dimension reduction quantumly in time polylogarithmic in the dimension of the original space and the number of vectors whose length is to be preserved.
We give two proofs of this result. The first proof digs into the Chernoff-style tail bound of a chi-square distribution, and the second proof is by a finer analysis of the so-called $k$-moment method originated by Bellare and Rompel and applied previously to unitary designs by Low.
Finally, we present an application of our result to private information retrieval where the sets stored have small size.

Here’s the protocol: Start with a state $|\psi\rangle \in \mathbb{C}^{d_1}$. In some cases, it is helpful to imagine that it is drawn from a known set of $K$ states, but this is not necessary for all the applications.
Apply a random unitary from a $t$-design, divide into $d_1/d_2$ blocks, each of size $d_2$, and measure which block the resulting state is in. This is meant to mimic a classical J-L transform, which would send $d_1$ dimensions to $d_2$ dimensions. It turns out that $t = O(d_2)$ is necessary to get good enough concentration. (And to figure out what the right moments of the underlying Haar distribution are, it turns out that using Levy’s Lemma as Richard Low does is not enough: Pranab needs some stronger dimension-independent concentration to avoid paying another $d_1$ cost.)
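To make this concrete, here is a little numerical sketch of the block-measurement idea (my own illustration, not Pranab’s code: a Haar-random unitary stands in for the poly$(k)$-design, and the dimensions are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d, rng):
    """Sample a Haar-random unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # fix the phases so the distribution is exactly Haar
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

d1, d2 = 1024, 64                 # ambient dimension, block size
U = haar_unitary(d1, rng)

v = rng.standard_normal(d1) + 1j * rng.standard_normal(d1)
w = (U @ v)[:d2]                  # pretend the "which block" outcome was block 0
# E||w||^2 = (d2/d1) ||v||^2, so rescale to estimate the original length
print(f"true length:  {np.linalg.norm(v):.3f}")
print(f"J-L estimate: {np.linalg.norm(w) * np.sqrt(d1 / d2):.3f}")
```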
Patrick asked whether this would give a good dimension-reduction scheme if simulated classically. But using e.g. random circuits would require poly$(t)$ gates for a total (classical) cost of $d_1 \cdot \mathrm{poly}(d_2)$, whereas even a trivial Gaussian rectangular matrix only takes time $d_1 d_2$ to apply (and more sophisticated methods require time more like $d_1 \log(d_1)$).
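For reference, here is that trivial classical baseline, a rectangular Gaussian matrix applied in time $d_1 d_2$ (again with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 4096, 64

# d2 x d1 Gaussian matrix, scaled so that E||Gv||^2 = ||v||^2
G = rng.standard_normal((d2, d1)) / np.sqrt(d2)
v = rng.standard_normal(d1)
print(f"original length: {np.linalg.norm(v):.3f}")
print(f"projected:       {np.linalg.norm(G @ v):.3f}")
```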
One application (that Pranab called “toy” because of the parameters achieved) is to private information retrieval. The scenario is as follows:

  • Alice stores a subset $S \subseteq [m]$, $|S| < n$, where $n \ll \frac{\log m}{\log\log m}$.
  • Bob has $x \in [m]$ and wants to know if $x \in S$.
    One solution is for Alice to send her entire database at cost $O(n \log m)$. One can achieve $O(n(\log n + \log\log m))$ using a set system due to Buhrman et al. But there is an $\Omega(n)$ lower bound if Bob doesn’t speak.

The quantum solution is to use J-L. Let $|S\rangle = \frac{1}{\sqrt{|S|}} \sum_{x\in S} |x\rangle$. Given a compressed version of $|S\rangle$, we want to determine whether $\langle x | S\rangle$ is zero or $\geq 1/\sqrt{n}$. This level of accuracy requires $O(n \log m)$ dimensions.
Here is the quantum protocol.

  1. Bob makes $n^2$ projections of $x$.
  2. He sends $\Theta(n^2)$ block names.
  3. Alice projects $|S\rangle$ onto these blocks and sends the resulting states.
  4. The total communication is $O(n^2(\log n + \log\log m))$.
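Here is a toy simulation of the core step (my own sketch, with a Haar-random unitary standing in for the design and sizes I made up): Bob’s measured block tells Alice where to project, and the overlap of the compressed states distinguishes $x \in S$ from $x \notin S$.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(d, rng):
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

m, n, d2 = 512, 4, 64              # universe size, |S|, block size
S = set(rng.choice(m, size=n, replace=False).tolist())
ket_S = np.zeros(m, complex)
ket_S[list(S)] = 1 / np.sqrt(n)    # |S> = sum_{x in S} |x> / sqrt(|S|)

U = haar_unitary(m, rng)

def compress(vec, b):
    """Project U|vec> onto block b and renormalize."""
    w = (U @ vec)[b * d2:(b + 1) * d2]
    return w / np.linalg.norm(w)

x_in = next(iter(S))
x_out = next(i for i in range(m) if i not in S)
for x, label in [(x_in, "x in S    "), (x_out, "x not in S")]:
    ket_x = np.zeros(m, complex)
    ket_x[x] = 1
    p = (np.abs(U @ ket_x) ** 2).reshape(-1, d2).sum(axis=1)
    b = rng.choice(len(p), p=p / p.sum())       # Bob's "which block" outcome
    ov = abs(np.vdot(compress(ket_x, b), compress(ket_S, b)))
    print(f"{label}: overlap {ov:.3f}  (1/sqrt(n) = {1/np.sqrt(n):.3f} vs ~0)")
```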

A recurrent difficulty is the problem that the dimension-reduction procedure requires a “which block” measurement, which gives a random answer. This means that two parties cannot cheaply compare a state by applying this J-L procedure and a swap test, unless they get very lucky and happen to land in the same block. Indeed, this phenomenon, in which it is easy to sample from a distribution but presumed hard to reproduce any individual state even given its label, is related to the conjectured security of some quantum money schemes. This difficulty was also explored in the related paper I wrote with Ashley Montanaro and Tony Short: 1012.2262. There we found mostly negative results about the difficulty of faithfully compressing quantum states J-L-style; Pranab circumvents some of them by including a large classical register specifying the “which block” information, but for many applications this need to condition on the classical register is deadly.

Beth Ruskai, Some old and new results on quantum marginals and reduced density matrices

The quantum marginal problem concerns the $m$-partite reduced density matrices of an $N$-body quantum system. Given a set of reduced density matrices, the $N$-representability problem asks whether there exists an $N$-party density matrix consistent with them. If we could decide this efficiently, then we could solve the QMA-complete local Hamiltonian problem efficiently. Despite this, a constant-factor approximation might still be feasible, since the reduction is only known to work for $1/\mathrm{poly}(N)$ accuracy.
Often we restrict to the case of $N$ fermions. In this case, anti-symmetry plays a key role: for example, for $m=1$, $\rho_1$ is $N$-representable iff all its eigenvalues are $\leq 1/N$. (This is essentially the Pauli exclusion principle.) I said “for example,” but really for $m=2$ it’s already not fully understood, and it is known that the full criteria do not depend only on eigenvalues.
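As a sanity check of the $m=1$ statement, here is a sketch (my own construction, not from the talk) that builds a first-quantized Slater determinant and confirms that the one-body reduced density matrix has eigenvalues $\leq 1/N$:

```python
import numpy as np
from itertools import permutations

def parity(p):
    """Sign of a permutation, via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def slater(orbs):
    """First-quantized Slater determinant of N orthonormal orbitals (rows of orbs)."""
    N, d = orbs.shape
    psi = np.zeros((d,) * N, complex)
    for p in permutations(range(N)):
        term = orbs[p[0]]
        for i in p[1:]:
            term = np.multiply.outer(term, orbs[i])
        psi = psi + parity(p) * term
    return psi / np.linalg.norm(psi.ravel())

N, d = 3, 6
rng = np.random.default_rng(3)
orbs = np.linalg.qr(rng.standard_normal((d, d)))[0][:, :N].T  # N orthonormal orbitals
psi = slater(orbs)

# one-body RDM: keep party 1, trace out parties 2..N
rho1 = np.einsum(psi.reshape(d, -1), [0, 2], psi.conj().reshape(d, -1), [1, 2], [0, 1])
print("1-RDM eigenvalues:", np.round(np.linalg.eigvalsh(rho1), 4))
print("Pauli bound 1/N =", round(1 / N, 4))
```

For a single Slater determinant the occupied eigenvalues saturate the bound at exactly $1/N$.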
There are a number of nice technical results, which I did not fully follow, in part because I was still typing up Pranab’s talk. (Yes, I am aware of the irony.)
For example, if $R_m$ denotes the set of $m$-partite marginal states, then we know that $\{\rho_N : \text{the image of } \rho_N \text{ in } R_m \text{ is extreme}\}$ increases with $m$.
Many important results were proven in a 1972 paper of Erdahl, although one of the key proofs (that every extreme point is exposed) turns out to have a bug in it.
One useful theorem is that for $2m \geq N$, the preimage of an extreme point is unique. The intuition behind the theorem is that if a point has a non-unique preimage, then by a suitable averaging over a phase, we can show that this point is a mixture of two different valid $m$-partite density matrices. In more detail:
The $m$-body density matrix is
$\rho_J = \sum_j \mu_j^2 |\chi_j\rangle\langle \chi_j|.$
Purify it:
$|\psi\rangle = \sum_j e^{i\omega_j} \mu_j |\chi_j\rangle |\phi_j\rangle.$
If the $\omega_j$ are not unique, then we can write
$|\psi\rangle = x_1 |\psi_1\rangle + e^{i\omega} x_2 |\psi_2\rangle,$
where any $\omega$ gives the same marginal $\rho_J$. In this case, we can average over $\omega$ and get the mixture
$x_1^2 |\psi_1\rangle\langle \psi_1| + x_2^2 |\psi_2\rangle\langle \psi_2|,$
implying that $\rho_J$ is not extreme.
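Here is a tiny numerical illustration of that averaging step (a toy example of mine, with $\langle \phi_1|\phi_2\rangle = 0$ so that the marginal is manifestly independent of $\omega$):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4

def rand_state(d, rng):
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    return v / np.linalg.norm(v)

chi1, chi2 = rand_state(d, rng), rand_state(d, rng)   # the |chi_j> on system J
phi1, phi2 = np.eye(d)[0], np.eye(d)[1]               # orthogonal |phi_j> on the rest
x1, x2 = np.sqrt(0.7), np.sqrt(0.3)

def marginal(psi):
    m = psi.reshape(d, d)        # first tensor factor is the J system
    return m @ m.conj().T

mix = x1**2 * np.outer(chi1, chi1.conj()) + x2**2 * np.outer(chi2, chi2.conj())
for omega in (0.0, 1.3, 2.7):
    psi = x1 * np.kron(chi1, phi1) + np.exp(1j * omega) * x2 * np.kron(chi2, phi2)
    print(f"omega = {omega}: marginal equals the mixture? {np.allclose(marginal(psi), mix)}")
```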
In general, can we say that the pre-image of an exposed point is unique? This open question dates back at least to Erdahl’s paper, and the answer is no: QECCs serve as a counter-example. The proof is in 1010.2717. I think this is a nice example of how quantum information tools with operational goals (e.g. fighting decoherence) turn out to have interesting foundational interpretations.

Omar Fawzi, Almost Euclidean sections of L1 and quantum uncertainty relations

Metric uncertainty relation:
Here is the main idea: given a state $|\psi\rangle$, apply $U_k$ where $k$ is drawn randomly from $\{1,\ldots,t\}$. (Here $t$ is much smaller than the dimension of the state.) Then we trace out a small subsystem $B$, leaving a larger quantum system $A$, say with $n$ qubits. If $K$ is the “key” register holding $k$, then we hope that the state on $AK$ is approximately maximally mixed.
It turns out that what we need here is a low-distortion embedding of $\ell_2$ into $\ell_1(\ell_2)$. That is, if our state is mapped to $\sum_{k,a,b} \alpha_{k,a,b} |k,a,b\rangle$, then we would like the following inequality to hold:
$\sqrt{\dim AK}\,(1-\epsilon) \leq \sum_{k,a} \sqrt{\sum_b |\alpha_{k,a,b}|^2} \leq \sqrt{\dim AK}$
The $\ell_2$ system is the one we discard, and we would like it to be small.
What can we achieve?
For random unitaries, we get $t = O(\log(1/\epsilon)/\epsilon^2)$ and $\dim B = O(1/\epsilon^2)$.
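Here is a quick numerical check of the embedding inequality for Haar-random unitaries (the sizes are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(5)

def haar_unitary(d, rng):
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

dA, dB, t = 64, 4, 8                   # dim A, dim B, number of unitaries
d = dA * dB
Us = [haar_unitary(d, rng) for _ in range(t)]

v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
v /= np.linalg.norm(v)

# alpha_{k,a,b} = (U_k v)_{a,b} / sqrt(t); sum the l2 norms of the B-blocks
l1l2 = sum(np.linalg.norm((U @ v).reshape(dA, dB), axis=1).sum() for U in Us) / np.sqrt(t)
print(f"l1(l2) norm:  {l1l2:.3f}")
print(f"sqrt(dim AK): {np.sqrt(t * dA):.3f}")
```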
But what can we achieve explicitly (i.e. efficiently on a quantum computer)? When $t=2$, MUBs (mutually unbiased bases) guarantee that the state of $AK$ has entropy at least $n/2$ (due to entropic uncertainty relations), but this is not high enough to guarantee high fidelity with the maximally mixed state. Nor does it work to take more MUBs.
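To see where the $n/2$ comes from in the $t=2$ case, here is a small check using the computational and Hadamard bases, a standard pair of MUBs; the Maassen-Uffink relation gives $H_1 + H_2 \geq n$, so the average is at least $n/2$:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
d = 2 ** n

v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
v /= np.linalg.norm(v)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = np.ones((1, 1))
for _ in range(n):
    Hn = np.kron(Hn, H)                 # n-qubit Hadamard transform

def shannon(p):
    p = p[p > 1e-15]
    return -(p * np.log2(p)).sum()

h1 = shannon(np.abs(v) ** 2)            # computational-basis outcome entropy
h2 = shannon(np.abs(Hn @ v) ** 2)       # Hadamard-basis outcome entropy
print(f"(H1 + H2)/2 = {(h1 + h2) / 2:.2f} >= n/2 = {n / 2}")
```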
Unitary 2-designs work, but are very costly.
The construction is a variant of one proposed by Indyk. The idea is to first apply a MUB (using $O(1)$ random bits), which guarantees entropy at least $n/2$, and then a permutation extractor, which uses $O(\log n)$ bits to extract a constant fraction of this entropy. We put these extracted bits to the side and continue, each time multiplying the number of remaining qubits by a constant strictly less than one (something like 7/8). Eventually we are left with $\log(n)$ qubits that we just refer to as the $\ell_2$ part; i.e. the $B$ system.
This has a number of nice consequences. The relations between these different consequences are also nice, and in my opinion, underappreciated. Unfortunately, I won’t do them justice here, but here is a brief sketch:

  • An almost-Euclidean subspace of $\ell_1(\ell_2)$.
  • Metric uncertainty relations based on explicit unitaries.
  • The first construction of efficient locking.
  • Efficient quantum identification codes using $n$ classical bits and $O(\log^2(n/\epsilon))$ qubits.

Unfortunately, I had to catch an early flight, and missed the rest of the workshop. It’s a pity, since the talks seemed like they’d be pretty exciting.
