COLT 2020: Accepted Papers

Total: 123 papers

#1 Conference on Learning Theory 2020: Preface

Authors: Jacob Abernethy ; Shivani Agarwal

Preface to the proceedings of the 33rd Annual Conference on Learning Theory (COLT 2020).

#2 Domain Compression and its Application to Randomness-Optimal Distributed Goodness-of-Fit

Authors: Jayadev Acharya ; Clément L Canonne ; Yanjun Han ; Ziteng Sun ; Himanshu Tyagi

We study goodness-of-fit of discrete distributions in the distributed setting, where samples are divided between multiple users who can only release a limited amount of information about their samples due to various information constraints. Recently, a subset of the authors showed that having access to a common random seed (i.e., shared randomness) leads to a significant reduction in the sample complexity of this problem. In this work, we provide a complete understanding of the interplay between the amount of shared randomness available, the stringency of the information constraints, and the sample complexity of the testing problem by characterizing a tight trade-off between these three parameters. We provide a general distributed goodness-of-fit protocol that, as a function of the amount of shared randomness, interpolates smoothly between the private- and public-coin sample complexities. We complement our upper bound with a general framework for proving lower bounds on the sample complexity of this testing problem under limited shared randomness. Finally, we instantiate our bounds for the two archetypal information constraints of communication and local privacy, and show that our sample complexity bounds are optimal as a function of all the parameters of the problem, including the amount of shared randomness. A key component of our upper bounds is a new primitive of \textit{domain compression}, a tool that allows us to map distributions to a much smaller domain size while preserving their pairwise distances, using a limited amount of randomness.
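
To make the domain compression primitive concrete, here is a minimal sketch of its flavor (an illustration only, not the paper's construction; the paper quantifies exactly how much of the distance survives as a function of the compressed domain size and the number of shared random bits): a shared seed defines a random partition of the domain, and each distribution is mapped to its induced distribution over the parts.

```python
import numpy as np

def compress_domain(p, L, seed):
    """Map a distribution p over [k] to its induced distribution over L
    random parts; `seed` plays the role of the shared randomness."""
    k = len(p)
    parts = np.random.default_rng(seed).integers(0, L, size=k)
    q = np.zeros(L)
    np.add.at(q, parts, p)   # q[j] = total mass of coordinates hashed to part j
    return q

# Toy check: compression shrinks distances, but not to zero, under a shared seed.
k, L, seed = 1000, 10, 0
p = np.full(k, 1.0 / k)                                # uniform
r = p.copy(); r[: k // 2] *= 1.5; r[k // 2 :] *= 0.5   # far from uniform
tv = lambda a, b: 0.5 * np.abs(a - b).sum()
print(tv(p, r), tv(compress_domain(p, L, seed), compress_domain(r, L, seed)))
```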

#3 Distributed Signal Detection under Communication Constraints

Authors: Jayadev Acharya ; Clément L Canonne ; Himanshu Tyagi

Independent draws from a $d$-dimensional spherical Gaussian distribution are distributed across users, each holding one sample. A central server seeks to distinguish between two hypotheses: the distribution has zero mean, or the mean has $\ell_2$-norm at least $\varepsilon$, a pre-specified threshold. However, the users can each transmit at most $\ell$ bits to the server. This is the problem of detecting whether an observed signal is simply white noise in a distributed setting. We study this distributed testing problem with and without the availability of common randomness shared by the users. We design schemes with and without such shared randomness and characterize the sample complexities they achieve. We then obtain lower bounds for protocols with public randomness, which are tight when $\ell=O(1)$. We conclude with several conjectures and open problems.
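
For intuition, below is a simple public-coin protocol in this spirit (purely illustrative, not the paper's scheme): each shared seed defines a common random direction, every user sends the single sign bit of its sample's projection, and the server rejects the zero-mean hypothesis if any direction exhibits a clear bias. Detection power depends on how well a random direction aligns with the unknown mean, hence the amplification over several seeds.

```python
import numpy as np

def one_bit_per_direction_test(samples, seeds, threshold):
    """Public-coin sketch: for each shared seed, every user projects its
    sample onto a common random direction and sends one sign bit; the
    server rejects 'zero mean' if any direction shows a clear bias."""
    n, d = samples.shape
    for seed in seeds:
        g = np.random.default_rng(seed).standard_normal(d)
        bits = np.sign(samples @ (g / np.linalg.norm(g)))   # one bit per user
        if abs(bits.mean()) > threshold:
            return True
    return False

rng = np.random.default_rng(1)
n, d, eps = 20000, 50, 0.5
mu = np.zeros(d); mu[0] = eps                    # alternative: ||mu||_2 = eps
seeds, thr = range(10), 4 / np.sqrt(n)           # union-bound threshold
print(one_bit_per_direction_test(rng.standard_normal((n, d)), seeds, thr))       # ~False
print(one_bit_per_direction_test(rng.standard_normal((n, d)) + mu, seeds, thr))  # likely True
```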

#4 Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes

Authors: Alekh Agarwal ; Sham M Kakade ; Jason D Lee ; Gaurav Mahajan

Policy gradient (PG) methods are among the most effective methods for challenging reinforcement learning problems with large state and/or action spaces. However, little is known about even their most basic theoretical convergence properties, including: whether and how fast they converge to a globally optimal solution (say, with a sufficiently rich policy class); how they cope with approximation error due to using a restricted class of parametric policies; and their finite-sample behavior. Such characterizations are important not only for comparing these methods to their approximate value function counterparts (where such issues are relatively well understood, at least in the worst case), but also for guiding more principled approaches to algorithm design. This work provides provable characterizations of the computational, approximation, and sample size issues of policy gradient methods in the context of discounted Markov Decision Processes (MDPs). We focus on both: 1) “tabular” policy parameterizations, where the optimal policy is contained in the class and where we show global convergence to the optimal policy, and 2) restricted policy classes, which may not contain the optimal policy and for which we provide agnostic learning results. In the \emph{tabular setting}, our main results are: 1) a convergence rate to the global optimum for direct parameterization with projected gradient ascent; 2) asymptotic convergence to the global optimum for softmax policy parameterization with PG, and a convergence rate with additional entropy regularization; and 3) dimension-free convergence to the global optimum for softmax policy parameterization with the Natural Policy Gradient (NPG) method and exact gradients. In the \emph{function approximation} setting, we further analyze NPG with exact as well as inexact gradients under certain smoothness assumptions on the policy parameterization, and establish rates of convergence in terms of the quality of the initial state distribution. One insight of this work is in formalizing how a favorable initial state distribution provides a means to circumvent worst-case exploration issues. Overall, these results place PG methods on a solid theoretical footing, analogous to the global convergence guarantees of iterative value-function-based algorithms.
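
To fix ideas, here is a minimal sketch of one of the analyzed methods, softmax policy gradient with exact gradients on a tabular MDP; it uses the standard gradient expression $\partial V^{\pi}(\rho)/\partial \theta_{s,a} = \frac{1}{1-\gamma}\, d^{\pi}_{\rho}(s)\, \pi(a|s)\, A^{\pi}(s,a)$, and the step size and iteration count are illustrative rather than those of the paper's rates.

```python
import numpy as np

def softmax_pg(P, r, rho, gamma=0.9, eta=1.0, iters=2000):
    """Exact-gradient softmax policy gradient on a tabular MDP.
    P: (S, A, S) transitions, r: (S, A) rewards, rho: initial distribution."""
    S, A = r.shape
    theta = np.zeros((S, A))
    for _ in range(iters):
        pi = np.exp(theta - theta.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
        P_pi = np.einsum('sa,sat->st', pi, P)               # P^pi
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, (pi * r).sum(axis=1))
        adv = r + gamma * P @ V - V[:, None]                # advantage A^pi
        d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)
        theta += eta * d[:, None] * pi * adv / (1 - gamma)  # exact PG ascent step
    return pi, V

# Tiny 2-state, 2-action MDP.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.3, 0.7]]])
r = np.array([[1.0, 0.0], [0.0, 0.5]])
pi, V = softmax_pg(P, r, rho=np.array([0.5, 0.5]))
print(pi.round(3), V.round(3))
```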

#5 Model-Based Reinforcement Learning with a Generative Model is Minimax Optimal

Authors: Alekh Agarwal ; Sham Kakade ; Lin F. Yang

This work considers the sample and computational complexity of obtaining an $\epsilon$-optimal policy in a discounted Markov Decision Process (MDP), given only access to a generative model. In this model, the learner accesses the underlying transition model via a sampling oracle that provides a sample of the next state when given any state-action pair as input. We are interested in a basic and unresolved question in model-based planning: is the naïve “plug-in” approach — where we build the maximum likelihood estimate of the transition model in the MDP from observations and then find an optimal policy in this empirical MDP — non-asymptotically minimax optimal? Our main result answers this question positively. With regard to computation, our result provides a simpler approach to minimax optimal planning: in comparison to prior model-free results, we show that using \emph{any} high-accuracy, black-box planning oracle in the empirical model suffices to obtain the minimax error rate. The key proof technique uses a leave-one-out analysis, in a novel “absorbing MDP” construction, to decouple the statistical dependency issues that arise in the analysis of model-based planning; this construction may be helpful more generally.
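
The plug-in approach itself is short to state in code. A minimal sketch, assuming a hypothetical generative-model oracle `sample_next(s, a)`: estimate each transition row by Monte Carlo, then plan in the empirical MDP (value iteration below, though per the result any accurate black-box planner would do).

```python
import numpy as np

def plugin_policy(sample_next, S, A, r, gamma=0.9, n_per_sa=1000, iters=500):
    """Plug-in planning: build the MLE transition model from a generative
    model, then plan in the empirical MDP (here, by value iteration)."""
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(n_per_sa):
                P_hat[s, a, sample_next(s, a)] += 1   # count next-state samples
    P_hat /= n_per_sa                                 # maximum likelihood estimate
    V = np.zeros(S)
    for _ in range(iters):
        V = (r + gamma * P_hat @ V).max(axis=1)       # plan in the empirical MDP
    return (r + gamma * P_hat @ V).argmax(axis=1), V

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(3, 2))            # true 3-state, 2-action MDP
r = rng.random((3, 2))
policy, V = plugin_policy(lambda s, a: rng.choice(3, p=P[s, a]), S=3, A=2, r=r)
print(policy, V.round(3))
```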

#6 From Nesterov’s Estimate Sequence to Riemannian Acceleration

Authors: Kwangjun Ahn ; Suvrit Sra

We propose the first global accelerated gradient method for Riemannian manifolds. Toward establishing our results, we revisit Nesterov’s estimate sequence technique and develop a conceptually simple alternative from first principles. We then extend our analysis to Riemannian acceleration, localizing the key difficulty into “metric distortion.” We control this distortion via a novel geometric inequality, which enables us to formulate and analyze global Riemannian acceleration.
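
For reference, the Euclidean method certified by estimate-sequence arguments is compact; a minimal sketch for an $L$-smooth, $\mu$-strongly convex objective (the Riemannian version additionally controls the metric-distortion terms analyzed in the paper):

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, iters=100):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly convex f."""
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # momentum coefficient
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)        # extrapolation (momentum) step
        x_prev = x
        x = y - grad(y) / L                # gradient step at the extrapolated point
    return x

# Quadratic test: f(x) = 0.5 x^T D x with spectrum in [mu, L].
D = np.array([1.0, 10.0, 100.0])
x = nesterov_agd(lambda z: D * z, x0=np.ones(3), L=100.0, mu=1.0)
print(np.linalg.norm(x))   # near 0
```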

#7 Closure Properties for Private Classification and Online Prediction

Authors: Noga Alon ; Amos Beimel ; Shay Moran ; Uri Stemmer

Let H be a class of Boolean functions and consider a composed class H’ that is derived from H using some arbitrary aggregation rule (for example, H’ may be the class of all 3-wise majority-votes of functions in H). We upper bound the Littlestone dimension of H’ in terms of that of H. As a corollary, we derive closure properties for online learning and private PAC learning. The derived bounds on the Littlestone dimension exhibit an undesirable exponential dependence. For private learning, we prove close-to-optimal bounds that circumvent this suboptimal dependency. The improved bounds on the sample complexity of private learning are derived algorithmically, by transforming a private learner for the original class H into a private learner for the composed class H’. Using the same ideas, we show that any (proper or improper) private algorithm that learns a class of functions H in the realizable case (i.e., when the examples are labeled by some function in the class) can be transformed into a private algorithm that learns the class H in the agnostic case.

#8 Hierarchical Clustering: A 0.585 Revenue Approximation

Authors: Noga Alon ; Yossi Azar ; Danny Vainstein

Hierarchical clustering trees have been widely accepted as a useful form of clustering data, and have been adopted in fields including phylogenetics, image analysis, bioinformatics, and more. Recently, Dasgupta (STOC ’16) initiated the analysis of these types of algorithms through the lens of approximation. Later, the dual problem was considered by Moseley and Wang (NIPS ’17), who dubbed it the Revenue goal function. In this problem, given a nonnegative weight $w_{ij}$ for each pair $i,j \in [n]=\{1,2, \ldots ,n\}$, the objective is to find a tree $T$ whose set of leaves is $[n]$ that maximizes the function $\sum_{i<j \in [n]} w_{ij} (n -|T_{ij}|)$, where $|T_{ij}|$ is the number of leaves in the subtree rooted at the least common ancestor of $i$ and $j$. In our work we consider the revenue goal function and prove the following results. First, we prove the existence of a bisection (i.e., a tree of depth $2$ in which the root has two children, each being a parent of $n/2$ leaves) which approximates the optimal tree solution up to a factor of $\frac{1}{2}$ (which is tight). Second, we apply this result to prove a $\frac{2}{3}p$-approximation for the general revenue problem, where $p$ is the approximation ratio of the \textsc{Max-Uncut Bisection} problem. Since $p$ is known to be at least $0.8776$ (Austrin et al., 2016; Wu et al., 2015), we get a $0.585$-approximation algorithm for the revenue problem. This improves a sequence of earlier results which culminated in a $0.4246$-approximation guarantee (Ahmadian et al., 2019).
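
The revenue objective is easy to evaluate for a concrete tree. A small sketch, using the illustrative convention of binary trees as nested tuples with leaves labeled $0,\dots,n-1$:

```python
import numpy as np

def revenue(tree, w, n):
    """Evaluate sum_{i<j} w[i,j] * (n - |T_ij|), with |T_ij| the number of
    leaves under the least common ancestor of i and j."""
    total = 0.0
    def leaves(t):
        nonlocal total
        if not isinstance(t, tuple):                 # a leaf, labeled 0..n-1
            return {t}
        left, right = map(leaves, t)
        size = len(left) + len(right)                # |T_ij| for pairs split here
        for i in left:
            for j in right:
                total += w[min(i, j), max(i, j)] * (n - size)
        return left | right
    leaves(tree)
    return total

w = np.zeros((4, 4))
w[0, 1], w[2, 3] = 3.0, 2.0
w[0, 2] = w[0, 3] = w[1, 2] = w[1, 3] = 1.0
# A bisection ({0,1} vs {2,3}) keeps heavy pairs low in the tree: 3*2 + 2*2 = 10.
print(revenue(((0, 1), (2, 3)), w, n=4))
```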

#9 Winnowing with Gradient Descent

Authors: Ehsan Amid ; Manfred K. Warmuth

The performance of multiplicative updates is typically logarithmic in the number of features when the targets are sparse. Strikingly, we show that the same property can also be achieved with gradient descent updates. We obtain this result by rewriting the non-negative weights $w_i$ of multiplicative updates as $u_i^2$ and then performing a gradient descent step w.r.t. the new $u_i$ parameters. We apply this method to the Winnow update, the Hedge update, and the unnormalized and normalized exponentiated gradient (EG) updates for linear regression. When the original weights $w_i$ are scaled to sum to one (as done for Hedge and normalized EG), then in the corresponding reparameterized update the $u_i$ parameters are divided by $\Vert\mathbf{u}\Vert_2$ after the gradient descent step. We show that these reparameterizations closely track the original multiplicative updates by proving in each case the same online regret bounds (albeit in some cases with slightly different constants). As a side result, our work exhibits a simple two-layer linear neural network that, when trained with gradient descent, can experimentally solve a certain sparse linear problem (known as the Hadamard problem) with exponentially fewer examples than any kernel method.
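
The reparameterization is a one-line change, which the following toy sketch makes concrete (squared loss on a single example with the unnormalized EG update; the step-size matching below is only to first order, whereas the paper proves matching regret bounds):

```python
import numpy as np

rng = np.random.default_rng(0)
d, eta = 100, 0.05
x, y = rng.standard_normal(d), 1.0

def grad(v):
    """Gradient of the squared loss (v.x - y)^2 with respect to v."""
    return 2 * (v @ x - y) * x

w = np.full(d, 1.0 / d)                  # multiplicative (unnormalized EG) weights
u = np.sqrt(w)                           # reparameterization: w = u**2

for _ in range(200):
    w = w * np.exp(-eta * grad(w))       # multiplicative update on w
    g = grad(u ** 2)                     # same loss, evaluated at w = u**2
    u = u - (eta / 4) * (2 * u * g)      # plain GD on u (chain rule: dL/du = 2u g);
                                         # step eta/4 matches EG to first order

print(np.abs(w - u ** 2).max())          # the two trajectories stay close
```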

#10 Pan-Private Uniformity Testing

Authors: Kareem Amin ; Matthew Joseph ; Jieming Mao

A centrally differentially private algorithm maps raw data to differentially private outputs. In contrast, a locally differentially private algorithm may only access data through public interaction with data holders, and this interaction must be a differentially private function of the data. We study the intermediate model of \emph{pan-privacy}. Unlike a locally private algorithm, a pan-private algorithm receives data in the clear. Unlike a centrally private algorithm, the algorithm receives data one element at a time and must maintain a differentially private internal state while processing this stream. First, we show that pan-privacy against multiple intrusions on the internal state is equivalent to sequentially interactive local privacy. Next, we contextualize pan-privacy against a single intrusion by analyzing the sample complexity of uniformity testing over domain $[k]$. Focusing on the dependence on $k$, centrally private uniformity testing has sample complexity $\Theta(\sqrt{k})$, while noninteractive locally private uniformity testing has sample complexity $\Theta(k)$. We show that the sample complexity of pan-private uniformity testing is $\Theta(k^{2/3})$. By a new $\Omega(k)$ lower bound for the sequentially interactive setting, we also separate pan-private from sequentially interactive locally private and multi-intrusion pan-private uniformity testing.
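
The defining constraint, a differentially private internal state, can be illustrated with the classic pan-private counter (a standard construction shown here for intuition; the paper's uniformity tester is more involved): Laplace noise is folded into the state at initialization, so a single intrusion only ever sees a noised count.

```python
import numpy as np

class PanPrivateCounter:
    """The internal state is eps-DP against a single intrusion: Laplace
    noise is folded into the state before any stream element is seen."""
    def __init__(self, eps, rng):
        self.state = rng.laplace(scale=1.0 / eps)   # noise lives in the state

    def update(self, bit):
        self.state += bit                           # the state stays noisy

    def estimate(self):
        return self.state

rng = np.random.default_rng(0)
counter = PanPrivateCounter(eps=1.0, rng=rng)
for bit in rng.integers(0, 2, size=10000):
    counter.update(int(bit))
print(counter.estimate())   # close to the true count (~5000)
```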

#11 Dimension-Free Bounds for Chasing Convex Functions

Authors: C.J. Argue ; Anupam Gupta ; Guru Guruganesh

We consider the problem of chasing convex functions, where functions arrive over time. The player takes an action after seeing each function, and the goal is to achieve a small function cost for these actions, as well as a small cost for moving between actions. While the general problem requires a polynomial dependence on the dimension, we show how to get dimension-independent bounds for well-behaved functions. In particular, we consider the case where the convex functions are $\kappa$-well-conditioned, and give an algorithm that achieves $O(\sqrt \kappa)$-competitiveness. Moreover, when the functions are supported on $k$-dimensional affine subspaces—e.g., when the functions are indicators of affine subspaces—we get $O(\min(k, \sqrt{k \log T}))$-competitive algorithms for request sequences of length $T$. We also show lower bounds: well-conditioned functions require $\Omega(\kappa^{1/3})$-competitiveness, and $k$-dimensional functions require $\Omega(\sqrt{k})$-competitiveness.

#12 Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations

Authors: Yossi Arjevani ; Yair Carmon ; John C. Duchi ; Dylan J. Foster ; Ayush Sekhari ; Karthik Sridharan

We design an algorithm which finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\|\le \epsilon$) using $O(\epsilon^{-3})$ stochastic gradient and Hessian-vector products, matching guarantees that were previously available only under a stronger assumption of access to multiple queries with the same random seed. We prove a lower bound which establishes that this rate is optimal and—surprisingly—that it cannot be improved using stochastic $p$th order methods for any $p\ge 2$, even when the first $p$ derivatives of the objective are Lipschitz. Together, these results characterize the complexity of non-convex stochastic optimization with second-order methods and beyond. Expanding our scope to the oracle complexity of finding $(\epsilon,\gamma)$-approximate second-order stationary points, we establish nearly matching upper and lower bounds for stochastic second-order methods. Our lower bounds here are novel even in the noiseless case.
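
The Hessian-vector product oracle counted here can be realized without ever forming the Hessian; a minimal sketch via central finite differences of gradients (a standard numerical device, included only to make the oracle concrete):

```python
import numpy as np

def hvp(grad, x, v, h=1e-5):
    """Hessian-vector product H(x) v ~ (grad(x + h v) - grad(x - h v)) / (2 h)."""
    return (grad(x + h * v) - grad(x - h * v)) / (2 * h)

# Check on f(x) = 0.5 x^T D x, whose Hessian is diag(D).
D = np.array([1.0, 2.0, 3.0])
grad = lambda x: D * x
x, v = np.ones(3), np.array([1.0, 0.0, 2.0])
print(hvp(grad, x, v), D * v)   # the two should agree
```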

#13 Data-driven confidence bands for distributed nonparametric regression

Author: Valeriy Avanesov

Gaussian Process Regression and Kernel Ridge Regression are popular nonparametric regression approaches. Unfortunately, they suffer from high computational complexity, rendering them inapplicable to modern massive datasets. To address this, a number of approximations have been suggested, some of them allowing for a distributed implementation. One of them is the divide-and-conquer approach, which splits the data into a number of partitions, obtains local estimates, and finally averages them. In this paper we suggest a novel, computationally efficient, fully data-driven algorithm quantifying the uncertainty of this method, yielding frequentist $L_2$-confidence bands. We rigorously demonstrate the validity of the algorithm. Another contribution of the paper is a minimax-optimal high-probability bound for the averaged estimator, complementing and generalizing the known risk bounds.
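
The averaged estimator under study is a short sketch away: plain divide-and-conquer kernel ridge regression with an RBF kernel (the paper's actual contribution, the data-driven confidence bands, is not reproduced here).

```python
import numpy as np

def rbf(X, Z, gamma=10.0):
    return np.exp(-gamma * ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1))

def dc_krr(X, y, X_test, m=5, lam=1e-3):
    """Divide-and-conquer KRR: fit on m random partitions, average predictions."""
    idx = np.random.default_rng(0).permutation(len(X))
    preds = []
    for part in np.array_split(idx, m):
        Xp, yp = X[part], y[part]
        K = rbf(Xp, Xp) + lam * len(part) * np.eye(len(part))
        preds.append(rbf(X_test, Xp) @ np.linalg.solve(K, yp))  # local estimate
    return np.mean(preds, axis=0)                               # averaged estimator

rng = np.random.default_rng(1)
X = rng.random((2000, 1))
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(2000)
X_test = np.linspace(0, 1, 5)[:, None]
print(dc_krr(X, y, X_test).round(3))
print(np.sin(6 * X_test[:, 0]).round(3))   # ground truth for comparison
```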

#14 Estimating Principal Components under Adversarial Perturbations

Authors: Pranjal Awasthi ; Xue Chen ; Aravindan Vijayaraghavan

Robustness is a key requirement for widespread deployment of machine learning algorithms, and has received much attention in both statistics and computer science. We study a natural model of robustness for high-dimensional statistical estimation problems that we call the {\em adversarial perturbation model}. An adversary can perturb {\em every} sample arbitrarily up to a specified magnitude $\delta$ measured in some $\ell_q$ norm, say $\ell_\infty$. Our model is motivated by emerging paradigms such as {\em low precision machine learning} and {\em adversarial training}. We study the classical problem of estimating the top-$r$ principal subspace of the Gaussian covariance matrix in high dimensions, under the adversarial perturbation model. We design a computationally efficient algorithm that given corrupted data, recovers an estimate of the top-$r$ principal subspace with error that depends on a robustness parameter $\kappa$ that we identify. This parameter corresponds to the $q \to 2$ operator norm of the projector onto the principal subspace, and generalizes well-studied analytic notions of sparsity. Additionally, in the absence of corruptions, our algorithmic guarantees recover existing bounds for problems such as sparse PCA and its higher rank analogs. We also prove that the above dependence on the parameter $\kappa$ is almost optimal asymptotically, not just in a minimax sense, but remarkably for {\em every} instance of the problem. This {\em instance-optimal} guarantee shows that the $q \to 2$ operator norm of the subspace essentially {\em characterizes} the estimation error under adversarial perturbations.

#15 Active Local Learning

Authors: Arturs Backurs ; Avrim Blum ; Neha Gupta

In this work we consider active {\em local learning}: given a query point $x$ and active access to an unlabeled training set $S$, output the prediction $h(x)$ of a near-optimal $h \in H$ using significantly fewer labels than would be needed to actually learn $h$ fully. In particular, the number of label queries should be independent of the complexity of $H$, and the function $h$ should be well-defined, independent of $x$. This immediately also implies an algorithm for {\em distance estimation}: estimating the value $opt(H)$ from many fewer labels than needed to actually learn a near-optimal $h \in H$, by running local learning on a few random query points and computing the average error. For the hypothesis class consisting of functions supported on the interval $[0,1]$ with Lipschitz constant bounded by $L$, we present an algorithm that makes $O(({1 / \epsilon^6}) \log(1/\epsilon))$ label queries from an unlabeled pool of $O(({L / \epsilon^4})\log(1/\epsilon))$ samples. It estimates the distance to the best hypothesis in the class to an additive error of $\epsilon$ for an arbitrary underlying distribution. We further generalize our algorithm to more than one dimension. We emphasize that the number of labels used is independent of the complexity of the hypothesis class, which is linear in $L$ in the one-dimensional case. Furthermore, we give an algorithm to locally estimate the values of a near-optimal function at a few query points of interest with a number of labels independent of $L$. We also consider the related problem of approximating the minimum error that can be achieved by the Nadaraya-Watson estimator under a linear diagonal transformation with eigenvalues coming from a small range. For a $d$-dimensional pointset of size $N$, our algorithm achieves an additive approximation of $\epsilon$, makes $\tilde{O}({d}/{\epsilon^2})$ queries, and runs in $\tilde{O}({d^2}/{\epsilon^{d+4}}+{dN}/{\epsilon^2})$ time.

#16 Finite Regret and Cycles with Fixed Step-Size via Alternating Gradient Descent-Ascent

Authors: James P. Bailey ; Gauthier Gidel ; Georgios Piliouras

Gradient descent is arguably one of the most popular online optimization methods, with a wide array of applications. However, the standard implementation, where agents simultaneously update their strategies, yields several undesirable properties: strategies diverge away from equilibrium and regret grows over time. In this paper, we eliminate these negative properties by considering a different implementation that obtains $O\left( \nicefrac{1}{T}\right)$ time-average regret with an arbitrary fixed step size. We obtain this surprising property by having agents take turns when updating their strategies. In this setting, we show that an agent that uses gradient descent with any linear loss function obtains bounded regret, regardless of how its opponent updates its strategies. Furthermore, we show that in adversarial settings agents’ strategies are bounded and cycle when both agents use the alternating gradient descent algorithm.
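
The contrast is easy to see numerically on the bilinear game $\min_x \max_y xy$ (a standard toy example): simultaneous updates spiral outward, while alternating updates stay bounded and cycle.

```python
import numpy as np

eta, T = 0.1, 500

# Simultaneous gradient descent-ascent on f(x, y) = x * y.
x, y = 1.0, 1.0
for _ in range(T):
    x, y = x - eta * y, y + eta * x      # both move from the same iterate
print("simultaneous:", abs(x), abs(y))   # diverges: the radius grows each step

# Alternating gradient descent-ascent: agents take turns.
x, y = 1.0, 1.0
for _ in range(T):
    x = x - eta * y                      # x moves first ...
    y = y + eta * x                      # ... then y responds to the new x
print("alternating:", abs(x), abs(y))    # stays bounded and cycles
```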

#17 Calibrated Surrogate Losses for Adversarially Robust Classification

Authors: Han Bao ; Clay Scott ; Masashi Sugiyama

Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns. This problem is often formulated via a minimax objective, where the target loss is the worst-case value of the 0-1 loss subject to a bound on the size of perturbation. Recent work has proposed convex surrogates for the adversarial 0-1 loss, in an effort to make optimization more tractable. In this work, we consider the question of which surrogate losses are \emph{calibrated} with respect to the adversarial 0-1 loss, meaning that minimization of the former implies minimization of the latter. We show that no convex surrogate loss is calibrated with respect to the adversarial 0-1 loss when restricted to the class of linear models. We further introduce a class of nonconvex losses and offer necessary and sufficient conditions for losses in this class to be calibrated.
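
One standard identity helps make the linear-model statement concrete (stated for $\ell_2$-bounded perturbations and up to tie-breaking at the decision boundary): $\sup_{\|\delta\|_2 \le \epsilon} \mathbf{1}\{\mathrm{sign}(w^\top(x+\delta)) \neq y\} = \mathbf{1}\{y\, w^\top x \le \epsilon \|w\|_2\}$. The adversarial 0-1 loss is thus a margin loss with threshold $\epsilon\|w\|_2$, so calibration of a surrogate must be checked against this shifted margin rather than the usual one.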

#18 Complexity Guarantees for Polyak Steps with Momentum

Authors: Mathieu Barré ; Adrien Taylor ; Alexandre d’Aspremont

In smooth strongly convex optimization, knowledge of the strong convexity parameter is critical for obtaining simple methods with accelerated rates. In this work, we study a class of methods, based on Polyak steps, where this knowledge is substituted by that of the optimal value, $f_*$. We first show convergence bounds that slightly improve on those previously known for the classical case of simple gradient descent with Polyak steps; we then derive an accelerated gradient method with Polyak steps and momentum, along with convergence guarantees.
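
The Polyak step itself is a one-liner, shown below on an ill-conditioned quadratic (a sketch of the classical method only; the momentum variant and its guarantees are the paper's subject): $\gamma_t = (f(x_t) - f_*)/\|\nabla f(x_t)\|^2$.

```python
import numpy as np

def polyak_gd(f, grad, f_star, x0, iters=200):
    """Gradient descent with the Polyak step size (f(x) - f_*) / ||grad f(x)||^2."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        gg = g @ g
        if gg == 0:
            break
        x = x - (f(x) - f_star) / gg * g   # knowledge of f_* replaces that of mu
    return x

D = np.array([1.0, 10.0, 100.0])            # ill-conditioned quadratic spectrum
x = polyak_gd(lambda z: 0.5 * z @ (D * z), lambda z: D * z, f_star=0.0, x0=np.ones(3))
print(np.linalg.norm(x))                    # near 0
```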

#19 Free Energy Wells and Overlap Gap Property in Sparse PCA

Authors: Gérard Ben Arous ; Alexander S. Wein ; Ilias Zadik

We study a variant of the sparse PCA (principal component analysis) problem in the “hard” regime, where the inference task is possible yet no polynomial-time algorithm is known to exist. Prior work, based on the low-degree likelihood ratio, has conjectured a precise expression for the best possible (sub-exponential) runtime throughout the hard regime. Following instead a statistical physics inspired point of view, we show bounds on the depth of free energy wells for various Gibbs measures naturally associated to the problem. These free energy wells imply hitting time lower bounds that corroborate the low-degree conjecture: we show that a class of natural MCMC (Markov chain Monte Carlo) methods (with worst-case initialization) cannot solve sparse PCA with less than the conjectured runtime. These lower bounds apply to a wide range of values for two tuning parameters: temperature and sparsity misparametrization. Finally, we prove that the Overlap Gap Property (OGP), a structural property that implies failure of certain local search algorithms, holds in a significant part of the hard regime.

#20 Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process

Authors: Guy Blanc ; Neha Gupta ; Gregory Valiant ; Paul Valiant

We consider networks, trained via stochastic gradient descent to minimize $\ell_2$ loss, with the training labels perturbed by independent noise at each iteration. We characterize the behavior of the training dynamics near any parameter vector that achieves zero training error, in terms of an implicit regularization term corresponding to the sum, over the data points, of the squared $\ell_2$ norm of the gradient of the model with respect to the parameter vector, evaluated at each data point. This holds for networks of any connectivity, width, depth, and choice of activation function. We interpret this implicit regularization term for three simple settings: matrix sensing, two-layer ReLU networks trained on one-dimensional data, and two-layer networks with sigmoid activations trained on a single datapoint. For these settings, we show why this new and general implicit regularization effect drives the networks towards “simple” models.
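
The regularizer is concrete enough to compute directly. A small sketch with a toy two-layer network and finite-difference gradients (illustrative choices, not the paper's): $R(\theta) = \sum_i \|\nabla_\theta f(\theta; x_i)\|_2^2$.

```python
import numpy as np

def model(theta, x):
    """Toy two-layer network with scalar output."""
    W1, w2 = theta[:8].reshape(4, 2), theta[8:]
    return w2 @ np.tanh(W1 @ x)

def implicit_reg(theta, X, h=1e-5):
    """R(theta) = sum_i || d model(theta; x_i) / d theta ||_2^2,
    with gradients taken by central finite differences."""
    R = 0.0
    for x in X:
        g = np.array([(model(theta + h * e, x) - model(theta - h * e, x)) / (2 * h)
                      for e in np.eye(len(theta))])
        R += g @ g
    return R

rng = np.random.default_rng(0)
theta = 0.5 * rng.standard_normal(12)
X = rng.standard_normal((5, 2))
print(implicit_reg(theta, X))
```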

#21 Hardness of Identity Testing for Restricted Boltzmann Machines and Potts models

Authors: Antonio Blanca ; Zongchen Chen ; Daniel Štefankovič ; Eric Vigoda

We study identity testing for restricted Boltzmann machines (RBMs), and more generally for undirected graphical models. Given sample access to the Gibbs distribution corresponding to an unknown or hidden model $M^*$, and given an explicit model $M$, can we distinguish whether $M = M^*$ or whether they are (statistically) far apart? Daskalakis et al. (2018) presented a polynomial-time algorithm for identity testing for the ferromagnetic (attractive) Ising model. In contrast, for the antiferromagnetic (repulsive) Ising model, Bezáková et al. (2019) proved that unless $RP=NP$ there is no identity testing algorithm when $\beta d=\omega(\log{n})$, where $d$ is the maximum degree of the visible graph and $\beta$ is the largest edge weight (in absolute value). We prove analogous hardness results for RBMs (i.e., mixed Ising models on bipartite graphs), even when there are no latent variables or an external field. Specifically, we show that if $RP\neq NP$, then when $\beta d=\omega(\log{n})$ there is no polynomial-time algorithm for identity testing for RBMs; when $\beta d =O(\log{n})$ there is an efficient identity testing algorithm that utilizes the structure learning algorithm of Klivans and Meka (2017). In addition, we prove similar lower bounds for purely ferromagnetic RBMs with inconsistent external fields, and for the ferromagnetic Potts model. Previous hardness results for identity testing of Bezáková et al. (2019) utilized the hardness of finding maximum cuts, which correspond to the ground states of the antiferromagnetic Ising model. Since RBMs are on bipartite graphs, such an approach is not feasible. We instead introduce a novel methodology to reduce from the corresponding approximate counting problem, and utilize the phase transition exhibited by RBMs and the mean-field Potts model. We believe that our method is general, and that it can be used to establish the hardness of identity testing for other spin systems.

#22 Selfish Robustness and Equilibria in Multi-Player Bandits

Authors: Etienne Boursier ; Vianney Perchet

Motivated by cognitive radios, stochastic multi-player multi-armed bandits have gained a lot of interest recently. In this class of problems, several players simultaneously pull arms and encounter a collision – with 0 reward – if some of them pull the same arm at the same time. While the cooperative case, where players maximize the collective reward (obediently following some fixed protocol), has been mostly considered, robustness to malicious players is a crucial and challenging concern. Existing approaches consider only the case of adversarial jammers whose objective is to blindly minimize the collective reward. We instead consider the more natural class of selfish players, whose incentives are to maximize their individual rewards, potentially at the expense of the social welfare. We provide the first algorithm that is robust to selfish players (i.e., that forms a Nash equilibrium) with a logarithmic regret, when the arm performance is observed. When collisions are also observed, Grim Trigger-type strategies enable some implicit communication-based algorithms, and we construct robust algorithms in two different settings: the homogeneous case (with a regret comparable to the centralized optimal one) and the heterogeneous case (for an adapted and relevant notion of regret). We also provide impossibility results when only the reward is observed or when arm means vary arbitrarily among players.

#23 Sharper Bounds for Uniformly Stable Algorithms

Authors: Olivier Bousquet ; Yegor Klochkov ; Nikita Zhivotovskiy

Deriving generalization bounds for stable algorithms is a classical question in learning theory, taking its roots in the early works of Vapnik and Chervonenkis (1974) and Rogers and Wagner (1978). In a series of recent breakthrough papers, Feldman and Vondrak (2018, 2019) showed that the best known high-probability upper bounds for uniformly stable learning algorithms, due to Bousquet and Elisseeff (2002), are sub-optimal in some natural regimes. To do so, they proved two generalization bounds that significantly outperform the simple generalization bound of Bousquet and Elisseeff (2002). Feldman and Vondrak also asked whether it is possible to provide sharper bounds and prove corresponding high-probability lower bounds. This paper is devoted to these questions: first, inspired by the original arguments of Feldman and Vondrak (2019), we provide a short proof of a moment bound that implies a generalization bound stronger than both recent results of Feldman and Vondrak (2018, 2019). Second, we prove general lower bounds, showing that our moment bound is sharp (up to a logarithmic factor) unless some additional properties of the corresponding random variables are used. Our main probabilistic result is a general concentration inequality for weakly correlated random variables, which may be of independent interest.

#24 The Gradient Complexity of Linear Regression

Authors: Mark Braverman ; Elad Hazan ; Max Simchowitz ; Blake Woodworth

We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to the data via a matrix-vector product oracle. We show that for polynomial accuracy, $\Theta(d)$ calls to the oracle are necessary and sufficient even for a randomized algorithm. Our lower bound is based on a reduction to estimating the least eigenvalue of a random Wishart matrix. This simple distribution enables a concise proof, leveraging a few key properties of the random Wishart ensemble.
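
The oracle model is concrete: the algorithm may touch the data only through products $v \mapsto Av$. A minimal sketch of conjugate gradients for least squares in this model (standard CG on the normal equations, included to make the access model explicit):

```python
import numpy as np

def cg(matvec, b, iters=50, tol=1e-10):
    """Conjugate gradients for M x = b, touching M only through matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p, rs = r.copy(), r @ r
    for _ in range(iters):
        Mp = matvec(p)                       # the only access to the matrix
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
b = rng.standard_normal(200)
x = cg(lambda v: A.T @ (A @ v), A.T @ b)     # least squares via normal equations
print(np.linalg.norm(A.T @ (A @ x) - A.T @ b))   # near 0
```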

#25 A Corrective View of Neural Networks: Representation, Memorization and Learning

Authors: Guy Bresler ; Dheeraj Nagaraj

We develop a \emph{corrective mechanism} for neural network approximation: the total available non-linear units are divided into multiple groups; the first group approximates the function under consideration, the second approximates the error produced by the first group and corrects it, the third approximates the error produced by the first and second groups together, and so on. This technique yields several new representation and learning results for neural networks: 1. Two-layer neural networks in the random features (RF) regime can memorize arbitrary labels for $n$ arbitrary points in $\mathbb{R}^d$ with $\tilde{O}(\tfrac{n}{\theta^4})$ ReLUs, where $\theta$ is the minimum distance between two different points. This bound can be shown to be optimal in $n$ up to logarithmic factors. 2. Two-layer neural networks with ReLUs and smoothed ReLUs can represent functions with an error of at most $\epsilon$ using $O(C(a,d)\epsilon^{-1/(a+1)})$ units for $a \in \mathbb{N}\cup\{0\}$ when the function has $\Theta(ad)$ bounded derivatives. In certain cases $d$ can be replaced with an effective dimension $q \ll d$. Our results indicate that neural networks with only a single nonlinear layer are surprisingly powerful with regard to representation, and show that, in contrast to what is suggested in recent work, depth is not needed in order to represent highly smooth functions. 3. Gradient descent on the recombination weights of a two-layer random features network with ReLUs and smoothed ReLUs can learn low-degree polynomials up to squared error $\epsilon$ with $\mathrm{subpoly}(1/\epsilon)$ units. Even though deep networks can approximate these polynomials with $\mathrm{polylog}(1/\epsilon)$ units, existing \emph{learning} bounds for this problem require $\mathrm{poly}(1/\epsilon)$ units. To the best of our knowledge, our results give the first sub-polynomial learning guarantees for this problem.
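
The corrective mechanism is essentially stagewise residual fitting, which a sketch makes concrete (random ReLU features fit by ridge-regularized least squares, each group trained on the residual left by the previous ones; sizes are illustrative, not the paper's constructions):

```python
import numpy as np

def corrective_rf(X, y, groups=4, units=50, lam=1e-6, rng=None):
    """Each group of random ReLU features fits the residual left by the others."""
    rng = rng or np.random.default_rng(0)
    residual, fitted = y.copy(), []
    for _ in range(groups):
        W = rng.standard_normal((X.shape[1], units))     # frozen random features
        Phi = np.maximum(X @ W, 0.0)
        c = np.linalg.solve(Phi.T @ Phi + lam * np.eye(units), Phi.T @ residual)
        residual = residual - Phi @ c                    # the next group corrects this
        fitted.append((W, c))
    return fitted, residual

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]
_, residual = corrective_rf(X, y, rng=rng)
print(np.linalg.norm(residual) / np.linalg.norm(y))      # shrinks as groups correct
```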