## An Introduction to Quantum Computation

### ERIC **CHITAMBAR**

*University of Illinois Urbana-Champaign, USA*

#### ABSTRACT

In this tutorial I will provide a brief introduction to quantum computation. The first half of the tutorial will focus on the basic principles of quantum circuits and quantum information processing. Topics here include qubits, quantum gates, quantum measurement, and entanglement. In the second half we will apply these principles and study some elementary quantum algorithms, such as the Deutsch-Jozsa algorithm and the Bernstein-Vazirani algorithm.
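To give a flavor of the algorithms mentioned above, here is a minimal classical simulation of the Bernstein-Vazirani circuit (an illustrative sketch, not part of the tutorial materials): the state starts at |0…0⟩, a Hadamard on every qubit creates a uniform superposition, the oracle marks each basis state x with the phase (−1)^(s·x) for a hidden string s, and a final round of Hadamards concentrates all amplitude on |s⟩, so a single query recovers s.

```python
import math

def hadamard_all(amp):
    """Apply a Hadamard gate to every qubit of a statevector
    (fast Walsh-Hadamard transform, in place)."""
    h = 1
    while h < len(amp):
        for i in range(0, len(amp), 2 * h):
            for j in range(i, i + h):
                a, b = amp[j], amp[j + h]
                amp[j] = (a + b) / math.sqrt(2)
                amp[j + h] = (a - b) / math.sqrt(2)
        h *= 2

def bernstein_vazirani(s, n):
    """Recover the hidden n-bit string s (given as an integer)
    with a single phase-oracle query."""
    amp = [0.0] * (2 ** n)
    amp[0] = 1.0                       # start in |0...0>
    hadamard_all(amp)                  # uniform superposition
    for x in range(2 ** n):            # oracle: phase (-1)^(s.x) via kickback
        if bin(s & x).count("1") % 2:
            amp[x] = -amp[x]
    hadamard_all(amp)                  # interference concentrates amplitude on |s>
    probs = [a * a for a in amp]
    return max(range(2 ** n), key=probs.__getitem__)

print(bin(bernstein_vazirani(0b1011, 4)))  # prints 0b1011
```

A classical algorithm querying a bit-oracle for s·x needs n queries (one per bit of s); the quantum circuit needs exactly one, which is the separation the tutorial's second half examines.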

#### SPEAKER BIOGRAPHY

**Eric Chitambar** is an Associate Professor of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. Dr. Chitambar’s primary expertise is in quantum resource theories, multi-party entanglement/nonlocality, and distributed classical/quantum information processing, with extensive results obtained in the study of local quantum operations and classical communication (LOCC). He is a recipient of the National Science Foundation (NSF) Early CAREER Award and has been PI or co-PI on a number of funded projects through the NSF and Department of Defense.

## Explicit and Implicit Inductive Bias in Deep Learning

### NATHAN **SREBRO**

*Toyota Technological Institute, Chicago, USA*

#### ABSTRACT

Inductive bias (reflecting prior knowledge or assumptions) lies at the core of every learning system and is essential for allowing learning and generalization, both from a statistical and from a computational perspective. What is the inductive bias that drives deep learning? A simplistic answer to this question is that we learn functions representable by a given architecture. But this answer is sufficient neither computationally (as learning even modestly sized neural networks is intractable) nor statistically (since modern architectures are too large to ensure generalization). In this tutorial we will explore these considerations, how training humongous, even infinite, deep networks can ensure generalization, what function spaces such infinite networks might correspond to, and how the inductive bias is tightly tied to the local search procedures used to train deep networks.
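A concrete toy instance of the inductive bias of local search (an illustrative sketch, not taken from the abstract): gradient descent on an underdetermined least-squares problem, initialized at zero, converges to the minimum ℓ2-norm interpolating solution, because every gradient step stays in the row span of the data. With one equation and two unknowns, infinitely many weight vectors fit the data perfectly, yet gradient descent picks out a specific one:

```python
def gd_min_norm(steps=1000, lr=0.1):
    """Gradient descent on the squared loss (x.w - y)^2 for a single
    data point x with two unknowns, starting from w = 0."""
    x = [1.0, 2.0]   # one data point, two features: underdetermined
    y = 1.0
    w = [0.0, 0.0]
    for _ in range(steps):
        r = w[0] * x[0] + w[1] * x[1] - y          # residual
        w = [w[i] - lr * 2 * r * x[i] for i in range(2)]
    return w

# Gradient steps are multiples of x, so w stays in span{x} and converges
# to the minimum-norm interpolant w* = x / ||x||^2 = [0.2, 0.4].
print(gd_min_norm())  # prints [0.2, 0.4]
```

Nothing in the loss prefers the small-norm solution; the preference comes entirely from the optimization path, which is the kind of implicit bias the tutorial connects to deep network training.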

#### SPEAKER BIOGRAPHY

**Nati (Nathan) Srebro** is a professor at the Toyota Technological Institute at Chicago, with cross-appointments at the University of Chicago Dept. of Computer Science and Committee on Computational and Applied Mathematics. He obtained his PhD at the Massachusetts Institute of Technology (MIT) in 2004, and previously was a post-doctoral fellow at the University of Toronto, a Visiting Scientist at IBM, and an Associate Professor at the Technion. Prof. Srebro's research encompasses methodological, statistical and computational aspects of Machine Learning, as well as related problems in Optimization. Some of Prof. Srebro's significant contributions include work on learning "wider" Markov networks; introducing the use of the nuclear norm for machine learning and matrix reconstruction; work on fast optimization techniques for machine learning, the optimality of stochastic methods, and on the relationship between learning and optimization more broadly. His current interests include understanding deep learning through a detailed understanding of optimization; distributed and federated learning; algorithmic fairness and practical adaptive data analysis.