Top arXiv papers

  • PDF
    Symmetries are of fundamental interest in many areas of science. In quantum information theory, if a quantum state is invariant under permutations of its subsystems, it is a well-known and widely used result that its marginal can be approximated by a mixture of tensor powers of a state on a single subsystem. Applications of this quantum de Finetti theorem range from quantum key distribution (QKD) to quantum state tomography and numerical separability tests. Recently, it has been discovered by Gross, Nezami and Walter that a similar observation can be made for a larger symmetry group than permutations: states that are invariant under stochastic orthogonal symmetry are approximated by tensor powers of stabilizer states, with an exponentially smaller overhead than previously possible. This naturally raises the question of whether similar improvements can be found for applications where this symmetry appears (or can be enforced). Here, two such examples are investigated.
  • PDF
    Expander graphs are fundamental in both computer science and mathematics, with a wide array of applications. With quantum technology reshaping our world, quantum expanders have emerged, finding numerous uses in quantum information theory. The classical expander mixing lemma plays a central role in graph theory, offering essential insights into edge distribution within graphs and aiding in the analysis of diverse network properties and algorithms. This paper establishes the quantum analogue of the classical expander mixing lemma and its converse for quantum expanders.
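    For context, the classical expander mixing lemma states that for a $d$-regular graph $G$ on $n$ vertices whose second-largest eigenvalue modulus is $\lambda$, every pair of vertex subsets $S, T$ satisfies $$\left| e(S,T) - \frac{d\,|S|\,|T|}{n} \right| \le \lambda \sqrt{|S|\,|T|},$$ where $e(S,T)$ counts the edges between $S$ and $T$; the paper's result generalizes this statement, and its converse, to quantum expanders.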
  • PDF
    Bell nonlocality is a fundamental phenomenon of quantum physics as well as an essential resource for various tasks in quantum information processing. It is known that for the observation of nonlocality the measurements on a quantum system have to be incompatible, but the question of which incompatible measurements are useful remained open. Here we prove that any set of incompatible measurements on qubits leads to a violation of a suitable Bell inequality in a multiparticle scenario, where all parties perform the same set of measurements. Since there exist incompatible measurements on qubits which do not lead to Bell nonlocality for two particles, our results demonstrate a fundamental difference between two-particle and multi-particle nonlocality, pointing to the superactivation of measurement incompatibility as a resource. In addition, our results imply that measurement incompatibility for qubits can always be certified in a device-independent manner.
  • PDF
    This paper delves into a fundamental aspect of quantum statistical mechanics -- the absence of thermal phase transitions in one-dimensional (1D) systems. Originating from Ising's analysis of the 1D spin chain, this concept has been pivotal in understanding 1D quantum phases, especially those with finite-range interactions as extended by Araki. In this work, we focus on quantum long-range interactions and successfully derive a clustering theorem applicable to a wide range of interaction decays at arbitrary temperatures. This theorem applies to any interaction form that decays faster than $r^{-2}$ and does not rely on translation invariance or infinite-system-size assumptions. We also rigorously establish that the temperature dependence of the correlation length is given by $e^{{\rm const.} \beta}$, the same as in the classical case. Our findings indicate the absence of phase transitions in 1D systems with super-polynomially decaying interactions, thereby expanding upon previous theoretical research. To overcome significant technical challenges originating from the divergence of the imaginary-time Lieb-Robinson bound, we utilize quantum belief propagation to refine the cluster expansion method. This approach allows us to address the divergence issues effectively and contributes to a deeper understanding of low-temperature behaviors in 1D quantum systems.
  • PDF
    Passive error correction protects logical information forever (in the thermodynamic limit) by updating the system based only on local information and few-body interactions. A paradigmatic example is the classical two-dimensional Ising model: a Metropolis-style Gibbs sampler retains the sign of the initial magnetization (a logical bit) for thermodynamically long times in the low-temperature phase. Known models of passive quantum error correction similarly exhibit thermodynamic phase transitions to a low-temperature phase wherein logical qubits are protected by thermally stable topological order. Here, in contrast, we show that constant-rate classical and quantum low-density parity check codes have no thermodynamic phase transitions at nonzero temperature, but nonetheless exhibit ergodicity-breaking dynamical transitions: below a critical nonzero temperature, the mixing time of local Gibbs sampling diverges in the thermodynamic limit. We conjecture that the circuit complexity of preparing extensive-energy states may diverge without crossing any thermodynamic transition. Fault-tolerant passive decoders, inspired by Gibbs samplers, may be amenable to measurement-free quantum error correction and may present a desirable experimental alternative to conventional quantum error correction based on syndrome measurements and active feedback.
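    As an illustration of the paradigmatic classical example above, here is a minimal sketch (assuming numpy; not the authors' code) of Metropolis-style Gibbs sampling on the 2D Ising model, where the sign of the magnetization plays the role of the logical bit:

    ```python
    import numpy as np

    def metropolis_ising(L=32, beta=0.6, sweeps=200, seed=0):
        """Metropolis sampler for the 2D Ising model with periodic boundaries.

        In the low-temperature phase (beta above ~0.44), the sign of the
        magnetization -- the logical bit -- persists for very long times.
        """
        rng = np.random.default_rng(seed)
        spins = np.ones((L, L), dtype=int)  # start in the +1 logical state
        for _ in range(sweeps):
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                # energy cost of flipping spin (i, j) against its neighbors
                nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2 * spins[i, j] * nn
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j] *= -1
        return np.sign(spins.mean())  # logical readout

    print(metropolis_ising())  # expected: 1.0 deep in the ordered phase
    ```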
  • PDF
    Spins associated to solid-state colour centers are a promising platform for investigating quantum computation and quantum networks. Recent experiments have demonstrated multi-qubit quantum processors, optical interconnects, and basic quantum error correction protocols. One of the key open challenges towards larger-scale systems is to realize high-fidelity universal quantum gates. In this work, we design and demonstrate a complete high-fidelity gate set for the two-qubit system formed by the electron and nuclear spin of a nitrogen-vacancy center in diamond. We use gate set tomography (GST) to systematically optimise the gates and demonstrate single-qubit gate fidelities of up to $99.999(1)\%$ and a two-qubit gate fidelity of $99.93(5) \%$. Our gates are designed to decouple unwanted interactions and can be extended to other electron-nuclear spin systems. The high fidelities demonstrated provide new opportunities towards larger-scale quantum processing with colour-center qubits.
  • PDF
    The zero-error capacity of a channel (or Shannon capacity of a graph) quantifies how much information can be transmitted with no risk of error. In contrast to the Shannon capacity of a channel, the zero-error capacity has not even been shown to be computable: we have no convergent upper bounds. In this work, we present a new quantity, the zero-error unitary capacity, and show that it can be succinctly represented as the tensor product value of a quantum game. By studying the structure of finite automata, we show that the unitary capacity is within a controllable factor of the zero-error capacity. This allows new upper bounds through the sum-of-squares hierarchy, which converges to the commuting operator value of the game. Under the conjecture that the commuting operator and tensor product value of this game are equal, this would yield an algorithm for computing the zero-error capacity.
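    For reference, the zero-error (Shannon) capacity of a graph $G$, whose computability is at issue here, is defined through independent sets of strong graph powers: $$\Theta(G) = \lim_{n \to \infty} \alpha\!\left(G^{\boxtimes n}\right)^{1/n},$$ where $\alpha$ denotes the independence number and $\boxtimes$ the strong product; the Lovász theta function upper-bounds $\Theta(G)$ but is not tight in general, which is why convergent upper bounds have remained elusive.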
  • PDF
    Decoherence and einselection have been effective in explaining several features of an emergent classical world from an underlying quantum theory. However, the theory assumes a particular factorization of the global Hilbert space into constituent system and environment subsystems, as well as specially constructed Hamiltonians. In this work, we take a systematic approach to discover, given a fixed Hamiltonian, (potentially) several factorizations (or tensor product structures) of a global Hilbert space that admit a quasi-classical description of subsystems in the sense that certain states (the "pointer states") are robust to entanglement. We show that every Hamiltonian admits a pointer basis in the factorization where the energy eigenvectors are separable. Furthermore, we implement an algorithm that allows us to discover a multitude of factorizations that admit pointer states and use it to explore these quasi-classical "realms" for both random and structured Hamiltonians. We also derive several analytical forms that the Hamiltonian may take in such factorizations, each with its unique set of features. Our approach has several implications: it enables us to derive the division into quasi-classical subsystems, demonstrates that decohering subsystems do not necessarily align with our classical notion of locality, and challenges ideas expressed by some authors that the propensity of a system to exhibit classical dynamics relies on minimizing the interaction between subsystems. From a quantum foundations perspective, these results lead to interesting ramifications for relative-state interpretations. From a quantum engineering perspective, these results may be useful in characterizing decoherence free subspaces and other passive error avoidance protocols.
  • PDF
    We delve into the use of photonic quantum computing to simulate quantum mechanics and extend its application towards quantum field theory. We develop and prove a method that leverages this form of Continuous-Variable Quantum Computing (CVQC) to reproduce the time evolution of quantum-mechanical states under arbitrary Hamiltonians, and we demonstrate the method's remarkable efficacy with various potentials. Our method centres on constructing an evolver-state, a specially prepared quantum state that induces the desired time-evolution on the target state. This is achieved by introducing a non-Gaussian operation using a measurement-based quantum computing approach, enhanced by machine learning. Furthermore, we propose a novel framework in which these methods can be extended to encode field theories in CVQC without discretising the field values, thus preserving the continuous nature of the fields. This opens new avenues for quantum computing applications in quantum field theory.
  • PDF
    We extend the Bloch sphere formalism to pure two-qubit systems. Combining insights from Geometric Algebra and an analysis of entanglement in different conjugate bases, we identify a two-Bloch-sphere geometry that is suitable for representing maximally entangled states. It turns out that the relative direction of the coordinate axes of the two Bloch spheres may be used to describe the states. Moreover, the coordinate axes of one Bloch sphere should be right-handed and those of the other left-handed. We describe and depict separable and maximally entangled states as well as entangling and non-entangling rotations. We also offer a graphical representation of the workings of a CNOT gate for different inputs. Finally, we provide a way to also represent partially entangled states and describe an entanglement measure related to the surface area of the sphere enclosing the state representation.
  • PDF
    Perception plays a crucial role in various robot applications. However, existing well-annotated datasets are biased towards autonomous driving scenarios, while unlabelled SLAM datasets are quickly over-fitted, and often lack environment and domain variations. To expand the frontier of these fields, we introduce a comprehensive dataset named MCD (Multi-Campus Dataset), featuring a wide range of sensing modalities, high-accuracy ground truth, and diverse challenging environments across three Eurasian university campuses. MCD comprises both CCS (Classical Cylindrical Spinning) and NRE (Non-Repetitive Epicyclic) lidars, high-quality IMUs (Inertial Measurement Units), cameras, and UWB (Ultra-WideBand) sensors. Furthermore, in a pioneering effort, we introduce semantic annotations of 29 classes over 59k sparse NRE lidar scans across three domains, thus providing a novel challenge to existing semantic segmentation research upon this largely unexplored lidar modality. Finally, we propose, for the first time to the best of our knowledge, continuous-time ground truth based on optimization-based registration of lidar-inertial data on large survey-grade prior maps, which are also publicly released, each several times the size of existing ones. We conduct a rigorous evaluation of numerous state-of-the-art algorithms on MCD, report their performance, and highlight the challenges awaiting solutions from the research community.
  • PDF
    In this study, we introduce a novel framework called Toast for learning general-purpose representations of road networks, along with its advanced counterpart DyToast, designed to enhance the integration of temporal dynamics to boost the performance of various time-sensitive downstream tasks. Specifically, we propose to encode two pivotal semantic characteristics intrinsic to road networks: traffic patterns and traveling semantics. To achieve this, we refine the skip-gram module by incorporating auxiliary objectives aimed at predicting the traffic context associated with a target road segment. Moreover, we leverage trajectory data and design pre-training strategies based on Transformer to distill traveling semantics on road networks. DyToast further augments this framework by employing unified trigonometric functions characterized by their beneficial properties, enabling the capture of temporal evolution and dynamic nature of road networks more effectively. With these proposed techniques, we can obtain representations that encode multi-faceted aspects of knowledge within road networks, applicable across both road segment-based applications and trajectory-based applications. Extensive experiments on two real-world datasets across three tasks demonstrate that our proposed framework consistently outperforms the state-of-the-art baselines by a significant margin.
  • PDF
    In this paper, we formulate the colorization problem into a multinomial classification problem and then apply a weighted function to classes. We propose a set of formulas to transform color values into color classes and vice versa. To optimize the classes, we experiment with different bin sizes for color class transformation. Observing class appearance, standard deviation, and model parameters on various extremely large-scale real-time images in practice, we propose 532 color classes for our classification task. During training, we propose a class-weighted function based on true class appearance in each batch to ensure proper saturation of individual objects. We adjust the weights of the major classes, which are more frequently observed, by lowering them, while escalating the weights of the minor classes, which are less commonly observed. In our class re-weight formula, we propose a hyper-parameter for finding the optimal trade-off between the major and minor appeared classes. As we apply regularization to enhance the stability of the minor classes, occasional minor noise may appear at the object's edges. We propose a novel object-selective color harmonization method empowered by the Segment Anything Model (SAM) to refine and enhance these edges. We propose two new color image evaluation metrics, the Color Class Activation Ratio (CCAR) and the True Activation Ratio (TAR), to quantify the richness of color components. We compare our proposed model with state-of-the-art models using six different datasets: Place, ADE, Celeba, COCO, Oxford 102 Flower, and ImageNet, in qualitative and quantitative approaches. The experimental results show that our proposed model outstrips other models in visualization, CNR, and in our proposed CCAR and TAR measurement criteria while maintaining satisfactory performance in regression (MSE, PSNR), similarity (SSIM, LPIPS, UIUI), and generative criteria (FID).
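    A minimal sketch of the kind of inverse-frequency class re-weighting described above (illustrative only; the formula and the trade-off hyperparameter `lam` are assumptions, not the paper's exact definitions):

    ```python
    import numpy as np

    def class_weights(batch_labels, num_classes=532, lam=0.5):
        """Down-weight frequently appearing (major) color classes and
        up-weight rare (minor) ones within a batch.

        `lam` interpolates between uniform weighting (lam=0) and full
        inverse-frequency weighting (lam=1), standing in for the paper's
        trade-off hyperparameter.
        """
        counts = np.bincount(batch_labels, minlength=num_classes).astype(float)
        freq = counts / counts.sum()
        weights = 1.0 / np.maximum(freq, 1e-8) ** lam  # tempered inverse frequency
        weights[counts == 0] = 0.0  # classes absent from the batch get no weight
        return weights / weights[weights > 0].mean()  # normalize active classes

    labels = np.random.randint(0, 532, size=4096)  # toy batch of class labels
    w = class_weights(labels)
    ```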
  • PDF
    We introduce a forward-backward-forward (FBF) algorithm for solving a bilevel equilibrium problem associated with bifunctions on a real Hilbert space. This modifies the forward-backward algorithm by relaxing cocoercivity to monotonicity and Lipschitz continuity. Further, we present the FBF dynamical system and investigate the generated trajectory's existence, uniqueness and weak convergence. We illustrate the proposed method on an equilibrium problem under a saddle point constraint.
  • PDF
    Predicting the future motion of surrounding agents is essential for autonomous vehicles (AVs) to operate safely in dynamic, human-robot-mixed environments. Context information, such as road maps and surrounding agents' states, provides crucial geometric and semantic information for motion behavior prediction. To this end, recent works explore two-stage prediction frameworks where coarse trajectories are first proposed, and then used to select critical context information for trajectory refinement. However, they either incur a large amount of computation or bring limited improvement, if not both. In this paper, we introduce a novel scenario-adaptive refinement strategy, named SmartRefine, to refine prediction with minimal additional computation. Specifically, SmartRefine can comprehensively adapt refinement configurations based on each scenario's properties, and smartly chooses the number of refinement iterations by introducing a quality score to measure the prediction quality and remaining refinement potential of each scenario. SmartRefine is designed as a generic and flexible approach that can be seamlessly integrated into most state-of-the-art motion prediction models. Experiments on Argoverse (1 & 2) show that our method consistently improves the prediction accuracy of multiple state-of-the-art prediction models. Specifically, by adding SmartRefine to QCNet, we outperform all published ensemble-free works on the Argoverse 2 leaderboard (single agent track) at submission. Comprehensive studies are also conducted to ablate design choices and explore the mechanism behind multi-iteration refinement. Codes are available at https://github.com/opendilab/SmartRefine/
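    The scenario-adaptive, quality-score-gated refinement loop can be sketched as follows (`refine` and `quality_score` are hypothetical stand-ins for the paper's components, shown only to illustrate the control flow):

    ```python
    def smart_refine(trajectory, context, refine, quality_score,
                     max_iters=5, good_enough=0.9):
        """Iteratively refine a coarse trajectory prediction, stopping early
        once a quality score indicates little remaining refinement potential."""
        for _ in range(max_iters):
            if quality_score(trajectory, context) >= good_enough:
                break  # prediction already good for this scenario; save compute
            trajectory = refine(trajectory, context)
        return trajectory
    ```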
  • PDF
    Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample. Although recent TTA has shown promising performance, we still face two key challenges: 1) prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications; 2) while existing TTA can significantly improve the test performance on out-of-distribution data, they often suffer from severe performance degradation on in-distribution data after TTA (known as forgetting). To this end, we have proposed an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples for test-time entropy minimization. To alleviate forgetting, EATA introduces a Fisher regularizer estimated from test samples to constrain important model parameters from drastic changes. However, in EATA, the adopted entropy loss consistently assigns higher confidence to predictions even for samples that are inherently uncertain, leading to overconfident predictions. To tackle this, we further propose EATA with Calibration (EATA-C) to separately exploit the reducible model uncertainty and the inherent data uncertainty for calibrated TTA. Specifically, we measure the model uncertainty by the divergence between predictions from the full network and its sub-networks, on which we propose a divergence loss to encourage consistent predictions instead of overconfident ones. To further recalibrate prediction confidence, we utilize the disagreement among predicted labels as an indicator of the data uncertainty, and then devise a min-max entropy regularizer to selectively increase and decrease prediction confidence for different samples. Experiments on image classification and semantic segmentation verify the effectiveness of our methods.
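    The entropy-based active sample selection at the heart of EATA can be sketched as follows (a simplified numpy sketch; the margin value and weighting form are assumptions, not the exact published rule):

    ```python
    import numpy as np

    def select_and_weight(logits, e_margin=2.0):
        """Keep only low-entropy (reliable) test samples for test-time
        entropy minimization, weighting confident samples more heavily."""
        z = logits - logits.max(axis=1, keepdims=True)   # stable softmax
        probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
        keep = entropy < e_margin                        # drop unreliable samples
        weights = np.exp(e_margin - entropy[keep])       # confidence weighting
        return keep, weights
    ```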
  • PDF
    Besides the exactly solvable spin-1/2 Kitaev model, higher spin-$S$ models, though not exactly solvable, are promising playgrounds for research on quantum spin liquids as well. One of the main interests in the higher spin-$S$ cases is the interplay between the Kitaev spin liquid (KSL) and spin nematics. We probe this interplay in a spin-1 model on the honeycomb lattice with competing bilinear-biquadratic and Kitaev interactions. Utilizing the 2D infinite projected entangled-pair state (iPEPS), we map out the phase diagram for the ferro-biquadratic interaction. In the phase diagram, we discover direct KSL--spin-nematics transitions in the vicinity of the pure Kitaev limits. The ferro KSL is revealed to exhibit robustness against perturbations from ferro-quadrupolar interactions. Also, the spin-nematic phase extends to the parameter region near the antiferro-Kitaev limit.
  • PDF
    For statistical inference on an infinite-dimensional Hilbert space $\mathcal{H}$ with no moment conditions we introduce a new class of energy distances on the space of probability measures on $\mathcal{H}$. The proposed distances consist of the integrated squared modulus of the corresponding difference of the characteristic functionals with respect to a reference probability measure on the Hilbert space. Necessary and sufficient conditions are established for the reference probability measure to be characteristic, the property that guarantees that the distance defines a metric on the space of probability measures on $\mathcal{H}$. We also use these results to define new distance covariances, which can be used to measure the dependence between the marginals of a two-dimensional distribution on $\mathcal{H}^2$ without existing moments. On the basis of the new distances we develop statistical inference for Hilbert space valued data, which does not require any moment assumptions. As a consequence, our methods are robust with respect to heavy tails in finite dimensional data. In particular, we consider the problem of comparing the distributions of two samples and the problem of testing for independence and construct new minimax optimal tests for the corresponding hypotheses. We also develop aggregated (with respect to the reference measure) procedures for power enhancement and investigate the finite-sample properties by means of a simulation study.
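    In symbols (our notation, following the verbal description): for probability measures $P, Q$ on $\mathcal{H}$ with characteristic functionals $\varphi_P(t) = \mathbb{E}_{X \sim P}\, e^{i\langle t, X\rangle}$, the distance takes the form $$d_\mu^2(P, Q) = \int_{\mathcal{H}} \left| \varphi_P(t) - \varphi_Q(t) \right|^2 \, \mu(dt),$$ where $\mu$ is the reference probability measure; $\mu$ is characteristic precisely when $d_\mu(P,Q)=0$ implies $P=Q$.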
  • PDF
    We investigate a gedanken experiment to destroy an extremally charged black hole by dropping a test particle, provided that there are multiple $U(1)$ gauge fields coupled with each other through higher derivative interactions. In the absence of higher derivative corrections, it is known that the Coulomb repulsion prevents a test particle that would break the extremal condition from falling into an extremal black hole and therefore the black hole cannot be destroyed. We extend this observation to include higher derivative corrections. Although the extremal condition is modified by the higher derivative interactions, we find that the repulsive force induced by the higher derivative couplings is responsible for preventing a test particle that would break the modified extremal condition from reaching the event horizon. Thus, we confirm that the weak cosmic censorship conjecture holds for extremally charged black holes even in the presence of higher derivative corrections, as long as the test particle approximation is justified.
  • PDF
    We present a novel approach to automatically synthesize "wayfinding instructions" for an embodied robot agent. In contrast to prior approaches that are heavily reliant on human-annotated datasets designed exclusively for specific simulation platforms, our algorithm uses in-context learning to condition an LLM to generate instructions using just a few references. Using an LLM-based Visual Question Answering strategy, we gather detailed information about the environment which is used by the LLM for instruction synthesis. We implement our approach on multiple simulation platforms including Matterport3D, AI Habitat and ThreeDWorld, thereby demonstrating its platform-agnostic nature. We subjectively evaluate our approach via a user study and observe that 83.3% of users find the synthesized instructions accurately capture the details of the environment and show characteristics similar to those of human-generated instructions. Further, we conduct zero-shot navigation with multiple approaches on the REVERIE dataset using the generated instructions, and observe very close correlation with the baseline on standard success metrics (< 1% change in SR), quantifying the viability of generated instructions in replacing human-annotated data. To the best of our knowledge, ours is the first LLM-driven approach capable of generating "human-like" instructions in a platform-agnostic manner, without requiring any form of training.
  • PDF
    In the development of advanced Texas Hold'em AI systems, abstraction technology has garnered widespread attention due to its significant effect in simplifying game complexity. This study adopts a more specific model, the game with ordered signals, to describe Texas Hold'em-style games and optimizes this model to streamline its mathematical representation and broaden its applicability. By transitioning from a broad imperfect information game model to a game with ordered signals model, we have separated the previously intertwined infoset abstraction and action abstraction into independent signal abstraction and action abstraction. Importantly, this signal abstraction provides a mathematical framework for the hand abstraction task, which is emphatically discussed in this paper. Additionally, a novel common refinement principle is introduced, revealing the limit performance of hand abstraction algorithms. We introduce potential outcome isomorphism (POI) and pinpoint that it suffers from the issue of excessive abstraction. Further, we demonstrate that POI serves as a common refinement for leading outcome-based hand abstraction algorithms, such as E[HS] and PA&PAEMD. Consequently, excessive abstraction also inherently affects these algorithms, leading to suboptimal performance. Our investigation reveals the omission of historical data as a primary contributor to excessive abstraction. To remedy this, we propose the K-Recall Outcome Isomorphism (KROI) to incorporate the missing information. Compared with POI, KROI more accurately mirrors lossless isomorphism (LI), the ground truth, offering enhanced signal abstraction resolution. Experimental results in Numeral211 Hold'em indicate that strategies developed through KROI approximate the exploitability of those developed through LI more closely than those trained through POI.
  • PDF
    The status-quo of misinformation moderation is a central authority, usually social platforms, deciding what content constitutes misinformation and how it should be handled. However, to preserve users' autonomy, researchers have explored democratized misinformation moderation. One proposition is to enable users to assess content accuracy and specify whose assessments they trust. We explore how these affordances can be provided on the web, without cooperation from the platforms where users consume content. We present a browser extension that empowers users to assess the accuracy of any content on the web and shows the user assessments from their trusted sources in-situ. Through a two-week user study, we report on how users perceive such a tool, the kind of content users want to assess, and the rationales they use in their assessments. We identify implications for designing tools that enable users to moderate content for themselves with the help of those they trust.
  • PDF
    This paper presents a novel reactive motion planning framework for navigating robots in unknown and cluttered 2D workspaces. Typical existing methods are developed by enforcing that the robot stay within free regions represented by locally extracted ellipses or polygons. Instead, we navigate the robot in free space with an alternate starshaped decomposition, which is calculated directly from real-time sensor data. Additionally, a roadmap is constructed incrementally to maintain the connectivity information of the starshaped regions. Compared to the roadmap built upon connected polygons or ellipses in conventional approaches, the concave starshaped region is better suited to capture the natural distribution of sensor data, so that the perception information can be fully exploited for robot navigation. In this sense, conservative and myopic behaviors are avoided with the proposed approach, and intricate obstacle configurations can be suitably accommodated in unknown and cluttered environments. Then, we design a heuristic exploration algorithm on the roadmap to determine the frontier points of the starshaped regions, from which short-term goals are selected to attract the robot towards the goal configuration. Notably, a recovery mechanism is developed on the roadmap that is triggered once a non-extendable short-term goal is reached. This mechanism renders it possible to deal with dead-end situations that can be typically encountered in unknown and cluttered environments. Furthermore, safe and smooth motion within the starshaped regions is generated by employing the Dynamical System Modulation (DSM) approach on the constructed roadmap. Through comprehensive evaluation in both simulations and real-world experiments, the proposed method outperforms the benchmark methods in terms of success rate and traveling time.
  • PDF
    Open-world semi-supervised learning (Open-world SSL) for node classification, which classifies unlabeled nodes into seen classes or multiple novel classes, is a practical but under-explored problem in the graph community. As only seen classes have human labels, they are usually better learned than novel classes, and thus exhibit smaller intra-class variances within the embedding space (termed the imbalance of intra-class variances between seen and novel classes). Based on empirical and theoretical analysis, we find the variance imbalance can negatively impact the model performance. Pre-trained feature encoders can alleviate this issue via producing compact representations for novel classes. However, creating general pre-trained encoders for various types of graph data has been proven to be challenging. As such, there is a demand for an effective method that does not rely on pre-trained graph encoders. In this paper, we propose an IMbalance-Aware method named OpenIMA for Open-world semi-supervised node classification, which trains the node classification model from scratch via contrastive learning with bias-reduced pseudo labels. Extensive experiments on seven popular graph benchmarks demonstrate the effectiveness of OpenIMA, and the source code is available on GitHub.
  • PDF
    Geographical, physical, or economic constraints often result in missing traces within seismic data, making the reconstruction of complete seismic data a crucial step in seismic data processing. Traditional methods for seismic data reconstruction require the selection of multiple empirical parameters and struggle to handle large-scale continuous missing data. With the development of deep learning, various neural networks have demonstrated powerful reconstruction capabilities. However, these convolutional neural networks represent a point-to-point reconstruction approach that may not cover the entire distribution of the dataset. Consequently, when dealing with seismic data featuring complex missing patterns, such networks may experience varying degrees of performance degradation. In response to this challenge, we propose a novel diffusion model reconstruction framework tailored for 3D seismic data. To constrain the results generated by the diffusion model, we introduce conditional supervision constraints into the diffusion model, conditioning its generated data on the input data to be reconstructed. We introduce a 3D neural network architecture into the diffusion model, successfully extending the 2D diffusion model to 3D space. Additionally, we refine the model's generation process by incorporating missing data into the generation process, resulting in reconstructions with higher consistency. Through ablation studies determining optimal parameter values, our method exhibits superior reconstruction accuracy when applied to both field datasets and synthetic datasets, effectively addressing a wide range of complex missing patterns. Our implementation is available at https://github.com/WAL-l/SeisFusion.
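    The conditional supervision idea, keeping recorded traces fixed while the model fills in the missing ones, can be sketched as an inpainting-style sampling loop (illustrative only; `denoise_step` and the noise schedule are placeholders, not the released implementation):

    ```python
    import numpy as np

    def conditional_sample(observed, mask, denoise_step, num_steps=1000, seed=0):
        """Reverse-diffusion sampling constrained by observed seismic traces.

        `mask` is 1 where traces were recorded and 0 where they are missing.
        After each reverse step, the recorded traces are re-imposed (with
        noise scaled to the current step), so the generated volume stays
        consistent with the input data to be reconstructed.
        """
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(observed.shape)
        for t in reversed(range(num_steps)):
            x = denoise_step(x, t)  # one reverse-diffusion step (placeholder)
            noised_obs = observed + (t / num_steps) * rng.standard_normal(observed.shape)
            x = mask * noised_obs + (1 - mask) * x  # enforce recorded traces
        return x
    ```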
  • PDF
    We explore how reconciling several foundation models (large language models and vision-language models) with a novel unified memory mechanism could tackle the challenging video understanding problem, especially capturing the long-term temporal relations in lengthy videos. In particular, the proposed multimodal agent VideoAgent: 1) constructs a structured memory to store both the generic temporal event descriptions and object-centric tracking states of the video; 2) given an input task query, it employs tools including video segment localization and object memory querying along with other visual foundation models to interactively solve the task, utilizing the zero-shot tool-use ability of LLMs. VideoAgent demonstrates impressive performance on several long-horizon video understanding benchmarks, with an average increase of 6.6% on NExT-QA and 26.0% on EgoSchema over baselines, closing the gap between open-sourced models and private counterparts including Gemini 1.5 Pro.
  • PDF
    Recent advances in neuroimaging have enabled studies in functional connectivity (FC) of the human brain, alongside investigation of the neuronal basis of cognition. One important FC study is the representation of vision in the human brain. The release of the publicly available BOLD5000 dataset has made it possible to study the brain dynamics during visual tasks in greater detail. In this paper, a comprehensive analysis of fMRI time series (TS) has been performed to explore different types of visual brain networks (VBN). The novelty of this work lies in (1) constructing VBN with consistently significant direct connectivity using both marginal and partial correlation, which is further analyzed using graph theoretic measures, (2) classification of VBNs as formed by image complexity-specific TS, using graphical features. In image complexity-specific VBN classification, XGBoost yields average accuracy in the range of 86.5% to 91.5% for positively correlated VBN, which is 2% greater than that using negative correlation. This result not only reflects the distinguishing graphical characteristics of each image complexity-specific VBN, but also highlights the importance of studying both positively correlated and negatively correlated VBN to understand how differently the brain functions while viewing different complexities of real-world images.
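    The two connectivity notions used to build these networks can be computed from the fMRI time series roughly as follows (a sketch assuming numpy; partial correlation is estimated here via the precision matrix, one standard approach):

    ```python
    import numpy as np

    def connectivity(ts):
        """ts: array of shape (time, regions) holding fMRI time series.

        Returns the marginal correlation matrix and the partial correlation
        matrix (each pair conditioned on all other regions), the latter
        obtained by normalizing the inverse covariance (precision) matrix.
        """
        marginal = np.corrcoef(ts, rowvar=False)
        prec = np.linalg.pinv(np.cov(ts, rowvar=False))
        d = np.sqrt(np.diag(prec))
        partial = -prec / np.outer(d, d)
        np.fill_diagonal(partial, 1.0)
        return marginal, partial
    ```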
  • PDF
    We prove the existence and regularity of convex solutions to the first initial-boundary value problem of the parabolic Monge-Ampère equation $$\left\{\begin{aligned} u_t&=\det D^2u &&\text{in } Q_T,\\ u&=\phi &&\text{on } \partial_p Q_T,\end{aligned}\right.$$ where $\phi$ is a smooth function, $Q_T=\Omega\times(0,T]$, $\partial_p Q_T$ is the parabolic boundary of $Q_T$, and $\Omega$ is a uniformly convex domain in $\mathbb{R}^n$ with smooth boundary. Our approach can also be used to prove similar results for the $\gamma$-Gauss curvature flow with any $0<\gamma\le 1$.
  • PDF
    The moiré system provides a tunable platform for exploring exotic phases of materials. This article shows the possible realization of a non-Abelian state characterized by the Moore-Read wavefunction in a half-filled moiré Chern band, exemplified by twisted $\rm MoTe_2$. This is achieved by introducing short-range repulsive three-body interaction. Exact diagonalization is employed to examine the spectrum in finite size. The incompressibility of the system, the degeneracy of the ground states, and the number of low-energy states provide compelling evidence to identify the ground state as the Moore-Read state. We further interpolate between the three-body interaction and Coulomb interaction to show a phase transition between the composite Fermi-liquid and the Moore-Read state. Finally, we consider the effect of band mixing and derive the three-body interaction using perturbation theory. By exploring the conditions under which band mixing effects mimic short-range repulsive three-body interaction we provide insights towards realizing non-Abelian phases of matter in the moiré system.
  • PDF
    We study the sample complexity of learning an $\epsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. For weakly communicating MDPs, we establish the complexity bound $\tilde{O}(SA\frac{H}{\epsilon^2})$, where $H$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first that is minimax optimal (up to log factors) in all parameters $S,A,H$ and $\epsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. We further investigate sample complexity in general (non-weakly-communicating) average-reward MDPs. We argue a new transient time parameter $B$ is necessary, establish an $\tilde{O}(SA\frac{B+H}{\epsilon^2})$ complexity bound, and prove a matching (up to log factors) minimax lower bound. Both results are based on reducing the average-reward MDP to a discounted MDP, which requires new ideas in the general setting. To establish the optimality of this reduction, we develop improved bounds for $\gamma$-discounted MDPs, showing that $\tilde{O}\left(SA\frac{H}{(1-\gamma)^2\epsilon^2}\right)$ samples suffice to learn an $\epsilon$-optimal policy in weakly communicating MDPs under the regime that $\gamma\geq 1-1/H$, and $\tilde{O}\left(SA\frac{B+H}{(1-\gamma)^2\epsilon^2}\right)$ samples suffice in general MDPs when $\gamma\geq 1-\frac{1}{B+H}$. Both these results circumvent the well-known lower bound of $\tilde{\Omega}\left(SA\frac{1}{(1-\gamma)^3\epsilon^2}\right)$ for arbitrary $\gamma$-discounted MDPs. Our analysis develops upper bounds on certain instance-dependent variance parameters in terms of the span and transient time parameters. The weakly communicating bounds are tighter than those based on the mixing time or diameter of the MDP and may be of broader use.
  • PDF
    Complex quantum information tasks in a gravitational background require multipartite entanglement for effective processing. Therefore, it is necessary to investigate the properties of multipartite entanglement in a relativistic setting. In this paper, we study genuine N-partite entanglement of massless Dirac fields in the Schwarzschild-de Sitter (SdS) spacetime, characterized by the presence of a black hole event horizon (BEH) and a cosmological event horizon (CEH). We obtain the general analytical expression of genuine N-partite entanglement shared by n observers near BEH and m (n+m = N) observers near CEH. It is shown that genuine N-partite entanglement monotonically decreases with the decrease of the mass of the black hole, suggesting that the Hawking effect of the black hole destroys quantum entanglement. It is interesting to note that genuine N-partite entanglement is a non-monotonic function of the cosmological constant, meaning that the Hawking effect of the expanding universe can enhance quantum entanglement. This result contrasts with multipartite entanglement in single-event horizon spacetime, offering a new perspective on the Hawking effect in multi-event horizon spacetime.
  • PDF
    Post-asymptotic giant branch (post-AGB) stars in binary systems, with typical orbital periods between ~100 and ~1000 days, result from a poorly understood interaction that terminates their precursory AGB phase. The majority of these binaries display a photospheric anomaly called 'chemical depletion', thought to arise from an interaction between the circumbinary disc and the post-AGB star, leading to the reaccretion of pure gas onto the star, devoid of refractory elements due to dust formation. In this paper, we focus on a subset of chemically peculiar binary post-AGBs in the Galaxy and the Magellanic Clouds (MCs) whose high-resolution optical spectroscopic study revealed a carbon and s-process enrichment, contrary to the commonly observed photospheric chemical depletion. Using spectral energy distribution (SED) fitting and period-luminosity-colour (PLC) relation methods, we determine the luminosity of the targets ($2700-8300L_{\odot}$), which enables confirmation of their evolutionary phase and estimation of initial masses (as a function of metallicity) ($1-2.5M_{\odot}$). In conjunction with predictions from dedicated ATON stellar evolutionary models, our results indicate a predominant intrinsic enrichment of carbon and s-process elements in our binary post-AGB targets. We qualitatively rule out extrinsic enrichment and inherited s-process enrichment from the host galaxy as plausible explanations for the observed overabundances. Our chemically peculiar subset of intrinsic carbon and s-process enriched binary post-AGBs also hints at potential variation in the efficiency of chemical depletion between stars with C-rich and O-rich circumbinary disc chemistries. However, critical observational studies of circumbinary disc chemistry are necessary to address gaps in our current understanding of disc-binary interactions inducing chemical depletion in binary post-AGB systems.
  • PDF
    Let $f$ be a normalized newform of weight 2 on $\Gamma_0(N)$ whose coefficients lie in $\mathbb{Q}$ and let $\chi_M$ be a primitive quadratic Dirichlet character with conductor $M$. In this paper, under mild assumptions on $M$, we give a sharp lower bound of the 2-adic valuation of the algebraic central $L$-value $L(f, \chi_M, 1)$ and evaluate the 2-adic valuation for an infinite number of $M$.
  • PDF
    Existing works have studied the impact of the order of words within natural text. They usually analyze it by destroying the original word order to create a scrambled sequence and then comparing the model's performance between the original and scrambled sequences. The experimental results demonstrate only marginal drops. Considering these findings, different hypotheses about word order have been proposed, including "the order of words is redundant with lexical semantics" and "models do not rely on word order". In this paper, we revisit the aforementioned hypotheses by adding an order-reconstruction perspective and selecting datasets from different spectra. Specifically, we first select four different datasets, and then design order reconstruction and continuing generation tasks. Empirical findings support that ChatGPT relies on word order to infer, but they can neither confirm nor refute a redundancy relation between word order and lexical semantics.
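    The scrambling and order-reconstruction setup can be sketched as follows (illustrative; the prompt wording is our assumption):

    ```python
    import random

    def scramble(sentence, seed=0):
        """Destroy the original word order to create a scrambled sequence."""
        words = sentence.split()
        random.Random(seed).shuffle(words)
        return " ".join(words)

    def reconstruction_prompt(scrambled):
        """Order-reconstruction task: ask the model to restore the order."""
        return ("The following words are shuffled. Restore the original "
                f"sentence: {scrambled}")

    print(reconstruction_prompt(scramble("the cat sat on the mat")))
    ```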
  • PDF
    Learned indexes use machine learning models to learn the mappings between keys and their corresponding positions in key-value indexes. These indexes use the mapping information as training data. Learned indexes require frequent retrainings of their models to incorporate the changes introduced by update queries. To efficiently retrain the models, existing learned index systems often harness a linear algebraic QR factorization technique that performs matrix decomposition. This factorization approach processes all key-position pairs during each retraining, resulting in compute operations that grow linearly with the total number of keys and their lengths. Consequently, the retrainings create a severe performance bottleneck, especially for variable-length string keys, while the retrainings are crucial for maintaining high prediction accuracy and in turn, ensuring low query service latency. To address this performance problem, we develop an algorithm-hardware co-designed string-key learned index system, dubbed SIA. In designing SIA, we leverage a unique algorithmic property of the matrix decomposition-based training method. Exploiting the property, we develop a memoization-based incremental training scheme, which only requires computation over updated keys, while decomposition results of non-updated keys from previous computations can be reused. We further enhance SIA to offload a portion of this training process to an FPGA accelerator to not only relieve CPU resources for serving index queries (i.e., inference), but also accelerate the training itself. Our evaluation shows that compared to ALEX, LIPP, and SIndex, state-of-the-art learned index systems, SIA-accelerated learned indexes offer 2.6x and 3.4x higher throughput on the two real-world benchmark suites, YCSB and Twitter cache trace, respectively.
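    The memoization idea, recomputing only for updated keys, can be illustrated with a block-wise least-squares fit (a sketch that caches per-block normal-equation terms in place of the paper's QR-based scheme; the block granularity is an assumption):

    ```python
    import numpy as np

    class MemoizedLinearIndex:
        """Learned index fitting positions ~= A @ w, where rows of A are
        featurized keys. Per-block Gram terms (A_b^T A_b, A_b^T y_b) are
        cached, so a retrain after updates recomputes only touched blocks
        and reuses the rest."""

        def __init__(self, num_blocks, dim):
            self.gram = [np.zeros((dim, dim)) for _ in range(num_blocks)]
            self.moment = [np.zeros(dim) for _ in range(num_blocks)]

        def update_block(self, b, A_b, y_b):
            self.gram[b] = A_b.T @ A_b      # only updated blocks are recomputed
            self.moment[b] = A_b.T @ y_b

        def retrain(self):
            G = sum(self.gram)              # cached blocks are reused for free
            m = sum(self.moment)
            return np.linalg.solve(G + 1e-8 * np.eye(G.shape[0]), m)
    ```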
  • PDF
    Motivated by the recent breakthrough on smooth imploding solutions of compressible Euler, we construct self-similar smooth imploding solutions of the isentropic relativistic Euler equations with isothermal equation of state $p=\frac{1}{\ell}\varrho$ for all $\ell>1$ in physical space dimensions $d=2,3$ and for $\ell>1$ close to 1 in higher dimensions. This work is a crucial step toward solving the long-standing problem: finite-time blow-up of the supercritical defocusing nonlinear wave equation.
  • PDF
    Motivated by Hadwiger's conjecture and related problems for list-coloring, we study graphs $H$ for which every graph with minimum degree at least $|V(H)|-1$ contains $H$ as a minor. We prove that a large class of apex-outerplanar graphs satisfies this property. Our result gives the first examples of such graphs whose vertex cover numbers are significantly larger than half of the number of its vertices, which breaks a barrier for attacking related coloring problems via extremal functions, and recovers all known such graphs that have arbitrarily large maximum degree. Our proof can be adapted to directed graphs to show that if $\vec H$ is the digraph obtained from a directed cycle or an in-arborescence by adding an apex source, then every digraph with minimum out-degree $|V(\vec H)|-1$ contains $\vec H$ as a subdivision or a butterfly minor respectively. These results provide the optimal upper bound for the chromatic number and dichromatic number of graphs and digraphs that do not contain the aforementioned graphs or digraphs as a minor, butterfly minor and a subdivision, respectively. Special cases of our results solve an open problem of Aboulker, Cohen, Havet, Lochet, Moura and Thomassé and strengthen results of Gishboliner, Steiner and Szabó.
  • PDF
    Stylized motion breathes life into characters. However, the fixed skeleton structure and style representation hinder existing data-driven motion synthesis methods from generating stylized motion for various characters. In this work, we propose a generative motion stylization pipeline, named MotionS, for synthesizing diverse and stylized motion on cross-structure characters using cross-modality style prompts. Our key insight is to embed motion style into a cross-modality latent space and perceive the cross-structure skeleton topologies, allowing for motion stylization within a canonical motion space. Specifically, the large-scale Contrastive-Language-Image-Pre-training (CLIP) model is leveraged to construct the cross-modality latent space, enabling flexible style representation within this space. Additionally, two topology-encoded tokens are learned to capture the canonical and specific skeleton topologies, facilitating cross-structure topology shifting. Subsequently, the topology-shifted stylization diffusion is designed to generate motion content for the specific skeleton and stylize it in the shifted canonical motion space using multi-modality style descriptions. Through an extensive set of examples, we demonstrate the flexibility and generalizability of our pipeline across various characters and style descriptions. Qualitative and quantitative experiments underscore the superiority of our pipeline over state-of-the-art methods, consistently delivering high-quality stylized motion across a broad spectrum of skeletal structures.
  • PDF
    Recent advancements in generative AI have suggested that by taking visual prompts, GPT-4V can demonstrate significant proficiency in image recognition tasks. Despite its impressive capabilities, the financial cost associated with GPT-4V's inference presents a substantial barrier to its wide use. To address this challenge, our work introduces Collage Prompting, a budget-friendly prompting approach that concatenates multiple images into a single visual input. With a collage prompt, GPT-4V is able to perform image recognition on several images simultaneously. Based on the observation that the accuracy of GPT-4V's image recognition varies significantly with the order of images within the collage prompt, our method further learns to optimize the arrangement of images for maximum recognition accuracy. A graph predictor is trained to indicate the accuracy of each collage prompt, then we propose an optimization method to navigate the search space of possible image arrangements. Experimental results across various datasets demonstrate that the cost-efficiency score of collage prompting is much higher than that of standard prompting. Additionally, a collage prompt with a learned arrangement achieves clearly better accuracy than one with a random arrangement in GPT-4V's visual recognition.
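    The collage construction itself is straightforward (a sketch with Pillow; the grid shape and tile size are assumptions):

    ```python
    from PIL import Image

    def collage(images, rows=2, cols=2, tile=(224, 224)):
        """Concatenate several images into one grid image, which is then
        sent to the model as a single visual input."""
        canvas = Image.new("RGB", (cols * tile[0], rows * tile[1]))
        for k, img in enumerate(images[: rows * cols]):
            r, c = divmod(k, cols)
            canvas.paste(img.resize(tile), (c * tile[0], r * tile[1]))
        return canvas
    ```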
  • PDF
    Let $\mathbb{H}^{n}$ be the Heisenberg group. For $0 \leq \alpha < Q=2n+2$ and $N \in \mathbb{N}$ we consider exponent functions $p(\cdot) : \mathbb{H}^{n} \to (0, +\infty)$, which satisfy Hölder conditions, such that $\frac{Q}{Q+N} < p_{-} \leq p(\cdot) \leq p_{+} < \frac{Q}{\alpha}$. In this article we prove the $H^{p(\cdot)}(\mathbb{H}^{n}) \to L^{q(\cdot)}(\mathbb{H}^{n})$ and $H^{p(\cdot)}(\mathbb{H}^{n}) \to H^{q(\cdot)}(\mathbb{H}^{n})$ boundedness of convolution operators with kernels of type $(\alpha, N)$ on $\mathbb{H}^{n}$, where $\frac{1}{q(\cdot)} = \frac{1}{p(\cdot)} - \frac{\alpha}{Q}$. In particular, the Riesz potential on $\mathbb{H}^{n}$ satisfies such estimates.
  • PDF
    We report closed-form expressions of the photon number statistics for a generalized coherent state and a generalized photon-added coherent state, which are shown to be crucial for proposing a variety of quantum scissor operations. The analytically obtained distributions are also capable of predicting the precise laser intensity windows for realizing a variety of quantum scissors. Truncating a photon-added state overcomes the selection rule of obtaining the lower-order Fock states. Photon addition also enables us to obtain a higher-order Fock state in a lower-order superposition. The importance of circular geometry is also demonstrated for engineering such quantum scissors.
  • PDF
    Van der Waals encapsulation of two-dimensional materials within hexagonal boron nitride (h-BN) stacks has proven to be a promising way to create ultrahigh-performance electronic devices. However, contemporary approaches for achieving van der Waals encapsulation, which involve artificial layer stacking using mechanical transfer techniques, are difficult to control, prone to contamination, and unscalable. Here, we report on the transfer-free direct growth of high-quality graphene nanoribbons (GNRs) within h-BN stacks. The as-grown embedded GNRs exhibit highly desirable features being ultralong (up to 0.25 mm), ultranarrow ( < 5 nm), and homochiral with zigzag edges. Our atomistic simulations reveal that the mechanism underlying the embedded growth involves ultralow GNR friction when sliding between AA'-stacked h-BN layers. Using the grown structures, we demonstrate the transfer-free fabrication of embedded GNR field-effect devices that exhibit excellent performance at room temperature with mobilities of up to 4,600 $cm^{2} V^{-1} s^{-1}$ and on-off ratios of up to $10^{6}$. This paves the way to the bottom-up fabrication of high-performance electronic devices based on embedded layered materials.
  • PDF
    Personalized Federated Learning (PFL) is widely employed in IoT applications to handle high-volume, non-iid client data while ensuring data privacy. However, heterogeneous edge devices owned by clients may impose varying degrees of resource constraints, causing computation and communication bottlenecks for PFL. Federated Dropout has emerged as a popular strategy to address this challenge, wherein only a subset of the global model, i.e., a sub-model, is trained on a client's device, thereby reducing computation and communication overheads. Nevertheless, the dropout-based model-pruning strategy may introduce bias, particularly towards non-iid local data. When biased sub-models absorb highly divergent parameters from other clients, performance degradation becomes inevitable. In response, we propose federated learning with stochastic parameter update (FedSPU). Unlike dropout that tailors the global model to small-size local sub-models, FedSPU maintains the full model architecture on each device but randomly freezes a certain percentage of neurons in the local model during training while updating the remaining neurons. This approach ensures that a portion of the local model remains personalized, thereby enhancing the model's robustness against biased parameters from other clients. Experimental results demonstrate that FedSPU outperforms federated dropout by 7.57% on average in terms of accuracy. Furthermore, an introduced early stopping scheme leads to a significant reduction of the training time by 24.8%-70.4% while maintaining high accuracy.
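    The stochastic parameter update can be sketched as freezing a random subset of neurons for each local round (illustrative numpy, not the authors' code; the freeze ratio is a tunable parameter):

    ```python
    import numpy as np

    def local_round(W, grad_fn, data, lr=0.01, freeze_ratio=0.5, steps=10, seed=0):
        """One FedSPU-style local round: freeze a random fraction of neurons
        (rows of the weight matrix) and update only the rest, so part of the
        local model stays personalized against biased global parameters."""
        rng = np.random.default_rng(seed)
        active = rng.random(W.shape[0]) >= freeze_ratio  # fixed for this round
        for _ in range(steps):
            g = grad_fn(W, data)
            W[active] -= lr * g[active]                  # frozen rows untouched
        return W
    ```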
  • PDF
    Video Paragraph Grounding (VPG) is an emerging task in video-language understanding, which aims at localizing multiple sentences with semantic relations and temporal order from an untrimmed video. However, existing VPG approaches are heavily reliant on a considerable number of temporal labels that are laborious and time-consuming to acquire. In this work, we introduce and explore Weakly-Supervised Video Paragraph Grounding (WSVPG) to eliminate the need of temporal annotations. Different from previous weakly-supervised grounding frameworks based on multiple instance learning or reconstruction learning for two-stage candidate ranking, we propose a novel siamese learning framework that jointly learns the cross-modal feature alignment and temporal coordinate regression without timestamp labels to achieve concise one-stage localization for WSVPG. Specifically, we devise a Siamese Grounding TRansformer (SiamGTR) consisting of two weight-sharing branches for learning complementary supervision. An Augmentation Branch is utilized for directly regressing the temporal boundaries of a complete paragraph within a pseudo video, and an Inference Branch is designed to capture the order-guided feature correspondence for localizing multiple sentences in a normal video. We demonstrate by extensive experiments that our paradigm has superior practicability and flexibility to achieve efficient weakly-supervised or semi-supervised learning, outperforming state-of-the-art methods trained with the same or stronger supervision.
  • PDF
    There has been a significant effort in recent years to generalize the traditional concept of iterated function systems (IFS). In this article, we propose a Suzuki contraction on hyperspace and find the fixed point of the corresponding Hutchinson mapping, which is called a deterministic fractal. The deterministic fractal for such a Suzuki contraction mapping is shown to exist and to be unique. Based on this result, we propose the Suzuki iterated function system (SIFS) for fractal creation. Keywords: fixed point, iterated function system, Suzuki contraction, Suzuki iterated function system, attractor (deterministic fractal).
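    For context, the attractor of a classical IFS can be generated with the standard chaos game, independent of the Suzuki-contraction machinery (shown here for the Sierpinski triangle):

    ```python
    import numpy as np

    def chaos_game(n=10000, seed=0):
        """Sample the attractor of the Sierpinski-triangle IFS: three affine
        contractions, each moving the current point halfway toward a vertex."""
        rng = np.random.default_rng(seed)
        vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
        p = np.array([0.25, 0.25])
        points = np.empty((n, 2))
        for i in range(n):
            p = (p + vertices[rng.integers(3)]) / 2.0  # apply a random map
            points[i] = p
        return points
    ```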
  • PDF
    In this work, we introduce the Virtual In-Hand Eye Transformer (VIHE), a novel method designed to enhance 3D manipulation capabilities through action-aware view rendering. VIHE autoregressively refines actions in multiple stages by conditioning on rendered views posed from action predictions in the earlier stages. These virtual in-hand views provide a strong inductive bias for effectively recognizing the correct pose for the hand, especially for challenging high-precision tasks such as peg insertion. On 18 manipulation tasks in RLBench simulated environments, VIHE achieves a new state-of-the-art, with a 12% absolute improvement, increasing from 65% to 77% over the existing state-of-the-art model using 100 demonstrations per task. In real-world scenarios, VIHE can learn manipulation tasks with just a handful of demonstrations, highlighting its practical utility. Videos and code implementation can be found at our project site: https://vihe-3d.github.io.
  • PDF
    In this work, we present Fed3DGS, a scalable 3D reconstruction framework based on 3D Gaussian splatting (3DGS) with federated learning. Existing city-scale reconstruction methods typically adopt a centralized approach, which gathers all data in a central server and reconstructs scenes. The approach hampers scalability because it places a heavy load on the server and demands extensive data storage when reconstructing scenes on a scale beyond city-scale. In pursuit of a more scalable 3D reconstruction, we propose a federated learning framework with 3DGS, which is a decentralized framework and can potentially use distributed computational resources across millions of clients. We tailor a distillation-based model update scheme for 3DGS and introduce appearance modeling for handling non-IID data in the scenario of 3D reconstruction with federated learning. We simulate our method on several large-scale benchmarks, and our method demonstrates rendered image quality comparable to centralized approaches. In addition, we also simulate our method with data collected in different seasons, demonstrating that our framework can reflect changes in the scenes and our appearance modeling captures changes due to seasonal variations.
  • PDF
    To tackle the "reality gap" encountered in Sim-to-Real transfer, this study proposes a diffusion-based framework that minimizes inconsistencies in grasping actions between simulation settings and realistic environments. The process begins by training an adversarial-supervision layout-to-image diffusion model (ALDM). We then leverage ALDM to enhance the simulation environment, rendering it with photorealistic fidelity and thereby optimizing training for the robotic grasping task. Experimental results indicate that this framework outperforms existing models in both success rate and adaptability to new environments, through improvements in the accuracy and reliability of visual grasping actions under a variety of conditions. Specifically, it achieves a 75% success rate in grasping tasks against plain backgrounds and maintains a 65% success rate in more complex scenarios. This performance demonstrates that the framework excels at generating controlled image content from text descriptions, identifying object grasp points, and performing zero-shot learning in complex, unseen scenarios.
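    The described pipeline reduces to a short training loop (a hedged sketch; `aldm`, `train_step`, and the scene tuple format are illustrative assumptions, not the paper's API):

    ```python
    # Hedged sketch of the described Sim-to-Real pipeline; `aldm`, `train_step`,
    # and the scene tuple format are assumptions made for illustration.
    from typing import Any, Callable, Iterable, Tuple

    def sim_to_real_training(scenes: Iterable[Tuple[Any, str, Any]],
                             aldm: Callable[[Any, str], Any],
                             train_step: Callable[[Any, Any], None]) -> None:
        for layout, text_prompt, grasp_labels in scenes:
            # Re-render the simulated scene with photorealistic fidelity
            # before the grasp policy trains on it.
            photo = aldm(layout, text_prompt)
            train_step(photo, grasp_labels)
    ```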
  • PDF
    This study presents investigations of pulsed loading of a magneto-optical trap (MOT) on an atom chip in a UHV environment. Using three parallel resistively heated Rb-metal dispensers activated by a pulsed current supply, approximately 3.0 $\times$ $10^{7}$ cold $^{87}$Rb atoms were loaded into the MOT. A current pulse of $\sim$ 24 A with a duration of $\sim$ 10 s raised the pressure in the chamber from 2.0 $\times$ $10^{-10}$ Torr to 3.3 $\times$ $10^{-10}$ Torr. Remarkably, the pressure recovery time after switching off the dispenser current was found to be $\sim$ 600 ms, marking a significant advance toward fast recovery of the UHV environment surrounding the MOT region. These results are directly relevant to laser cooling and magnetic trapping / evaporative cooling of atoms on an atom chip in the same UHV chamber.
  • PDF
    Using density functional theory (DFT) and linear-response approaches, we compute the on-site Hubbard interaction $U$ of elemental terbium (Tb) metal in the pressure range $\sim 0-65$ GPa. The resulting first-principles $U$ values, combined with experimental crystal structures, enable us to examine the magnetic properties of Tb using a self-consistent DFT+$U$ method. The lowest-energy magnetic states in our calculations for different high-pressure Tb phases -- including hcp, $\alpha$-Sm, and dhcp -- are found to be compatible with the corresponding magnetic ordering vectors reported in experiments. The results show that the inclusion of the Hubbard $U$ substantially improves the accuracy and efficiency of modeling correlated rare-earth materials. Our study also provides the necessary $U$ information for other quantum many-body techniques to study Tb under extreme pressure conditions.
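    For context, the linear-response definition of $U$ commonly used in this setting (a standard textbook expression, not quoted from the abstract) is:

    ```latex
    % Standard linear-response expression for the on-site Hubbard U
    % (Cococcioni--de Gironcoli scheme; not quoted from the abstract):
    U_I = \left( \chi_0^{-1} - \chi^{-1} \right)_{II},
    \qquad
    \chi_{IJ} = \frac{\partial n_I}{\partial \alpha_J},
    % where n_I is the occupation of the localized manifold on site I
    % (for Tb, the 4f states), alpha_J is a rigid potential shift applied
    % on site J, and chi_0 is the bare (non-self-consistent) response.
    ```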

Recent comments

Yadong Wu Mar 13 2024 07:12 UTC

A closely related result about learning quantum states on infinite dimensional systems has already been obtained in this paper https://arxiv.org/abs/2303.05097.

Julio Magdalena Mar 12 2024 14:10 UTC

Thanks for the response. It cleared up some confusion about the "self-correcting" errors you mention. Thanks!

M. Sohaib Alam Mar 07 2024 19:15 UTC

Dear Julio, thanks for pointing us to the square lattice construction in your nice paper. We'll be sure to mention it, and modify our statement in a later version.

Re: self-correcting errors, the 2-qubit errors this refers to here are actually check operators of the parent subsystem code, and so

...(continued)
Julio Magdalena Mar 07 2024 12:33 UTC

Dear authors!
Thanks for posting this interesting paper. I was wondering if you could elaborate on a question I got while reading through. Additionally, I have a minor comment that might be interesting to you.

The question is on the phenomenon you call "self-correcting errors" in your paper. Can

...(continued)
Sascha Heußen Mar 04 2024 08:27 UTC

Hi Jahan, extending the formalism to a multi-parameter noise model works straightforwardly by extending Eq. 2 to a product of binomial factors. This is described in detail, for example, in https://arxiv.org/pdf/1801.07035.pdf (see Eq. 10 there). Cheers!

Tuomas Laakkonen Feb 23 2024 14:25 UTC

Thanks! - so far as I can tell, the circuit constructed in that paper is for a specific case of the unary iteration circuit (where $P_i = R_a(\alpha_i)$ for some axis $a$ and angles $\alpha_i$). We have a more general circuit where each $P_i$ can be any Pauli operator (potentially on multiple qubits

...(continued)
Ammar Daskin Feb 23 2024 06:36 UTC

cool paper!
I do not know if it is relevant, but for the unary iteration circuit there is an exact, simple decomposition (though not in terms of T gates) in "Möttönen, M., Vartiainen, J. J., Bergholm, V., & Salomaa, M. M. (2004). Quantum circuits for general multiqubit gates. Physical review letters,

...(continued)
Kelly Pawlak Feb 06 2024 23:17 UTC

> a quantum black box that outputs a uniform superposition of such points

This isn't so farfetched. Craft a stabilizer that checks a radius condition, and alternately apply it with Hadamard gates as in the distribution preparation algorithms of arxiv:2310.20191. The only question is the exact scali

...(continued)
Michael J Gullans Feb 06 2024 04:56 UTC

As the authors point out in the preprint, the cost of this exponential-time algorithm can be increased from $2^{n/3}$ scaling to $2^n$ with the use of in-block permutation CNOTs for the [[8,3,2]] code. Such gates are readily implementable with the parallel control hardware that is available in the lab.

...(continued)
Andrew Childs Jan 30 2024 13:52 UTC

Daniel, your question is addressed in arXiv:0705.2784, which considers closely related problems using similar techniques. If the dimension is odd, then the walk can be implemented efficiently. If the dimension is even, then the implementation of the walk is closely related to the problem of (approxi

...(continued)