Tuesday, August 20, 2019
3:00 PM - 4:00 PM
Annenberg 213

IQI Weekly Seminar

Machine Learning for Quantum and Vice Versa
Murphy Yuezhen Niu, Research Scientist, Google Research

Abstract: In this talk, we review recent progress in leveraging powerful machine learning techniques to improve the fidelity and robustness of quantum computation through quantum control. In return, insights from quantum algorithm design can inform the design of better machine learning algorithms.

In the first half of the talk, we introduce a new control framework that simultaneously optimizes the speed and fidelity of quantum computation against both leakage and stochastic control errors. For a broad family of two-qubit unitary gates that are important for quantum simulation of many-electron systems, we improve control robustness by adding control noise to the training environments of reinforcement learning agents trained with trust region policy optimization (TRPO). The agents' control solutions demonstrate a two-order-of-magnitude reduction in average gate error over baseline stochastic-gradient-descent solutions and up to a one-order-of-magnitude reduction in gate time over optimal gate synthesis counterparts.
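The core idea of noise-injected training can be illustrated with a toy model. The sketch below (an assumption for illustration, not the speaker's actual setup; the function names, the single-qubit gate, and the multiplicative amplitude-noise model are all hypothetical) shows how one might evaluate a control's average fidelity under the same stochastic control errors an RL agent would see during training:

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about X by angle theta (illustrative gate)."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def gate_fidelity(u, v):
    """Fidelity proxy |Tr(U† V)|² / d² for d x d unitaries (1.0 iff U = V up to phase)."""
    d = u.shape[0]
    return np.abs(np.trace(u.conj().T @ v)) ** 2 / d ** 2

def noisy_rollout(theta_target, amp_noise=0.05, n_samples=1000, seed=0):
    """Average fidelity of a control under multiplicative amplitude noise,
    mimicking the stochastic control errors injected into training environments."""
    rng = np.random.default_rng(seed)
    ideal = rx(theta_target)
    fids = [gate_fidelity(ideal, rx(theta_target * (1 + rng.normal(0, amp_noise))))
            for _ in range(n_samples)]
    return float(np.mean(fids))
```

An agent rewarded on such noisy rollouts, rather than on the noiseless fidelity alone, is pushed toward control solutions that remain high-fidelity across the error distribution, which is the robustness mechanism described in the talk.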
In the second half of the talk, we turn the tables and ask how physical models can help us better understand and design machine learning algorithms. We focus on one of the most widely used architectures: recurrent neural networks (RNNs). We analyze RNN architectures using numerical methods for ordinary differential equations (ODEs), defining a general family of RNNs by relating their composition rules to ODE integration methods at discrete time steps. We show that the degree of functional nonlinearity n and the range of temporal memory t of an RNN representation can be mapped to the stage of the Runge-Kutta recursion and the order of the time derivative of the ODEs. We then propose a clock-Hamiltonian-inspired RNN ansatz, the Quantum-inspired Universal computing Neural Network (QUNN), which reduces the required number of training parameters from polynomial in both data length and temporal memory length to only linear in temporal memory length.
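The RNN-to-ODE correspondence can be sketched concretely. Below is a minimal illustration (an assumption for exposition, not the construction from the talk; the specific ODE dh/dt = -h + tanh(Wh + Ux + b) and the function names are hypothetical) in which a standard recurrent update is read as a forward-Euler step, and a two-stage Runge-Kutta step of the same ODE yields a deeper composition, mirroring the mapping between Runge-Kutta stages and functional nonlinearity:

```python
import numpy as np

def euler_rnn_step(h, x, W, U, b, dt=1.0):
    """One RNN update read as a forward-Euler step of
    dh/dt = -h + tanh(W h + U x + b); dt = 1 gives a residual-style update."""
    return h + dt * (-h + np.tanh(W @ h + U @ x + b))

def rk2_rnn_step(h, x, W, U, b, dt=1.0):
    """Two-stage (midpoint) Runge-Kutta step of the same ODE;
    each extra stage composes the nonlinearity one level deeper."""
    f = lambda h_: -h_ + np.tanh(W @ h_ + U @ x + b)
    k1 = f(h)
    k2 = f(h + 0.5 * dt * k1)
    return h + dt * k2
```

For small dt the two steps agree to O(dt²), while at dt = 1 the extra stage changes the functional form of the update, which is the sense in which integration order parameterizes a family of RNN architectures.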

For more information, please contact Bonnie Leung by email at bjleung@caltech.edu.