MLSYS.2019

Total: 32

#1 3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning

Authors: Hyeontaek Lim ; David G Andersen ; Michael Kaminsky

3LC is a lossy compression scheme for state change traffic in distributed machine learning (ML) that strikes a balance between multiple goals: traffic reduction, accuracy, computation overhead, and generality. It combines three techniques---3-value quantization with sparsity multiplication, base-3^5 encoding, and zero-run encoding---to leverage the strengths of quantization and sparsification techniques and avoid their drawbacks. 3LC achieves a data compression ratio of up to 39--107X, preserves the high test accuracy of trained models, and provides high compression speed. Distributed ML frameworks can use 3LC without modifications to existing ML algorithms. Our experiments show that 3LC reduces wall-clock training time of ResNet-110 for CIFAR-10 on a bandwidth-constrained 10-GPU cluster by up to 16--23X compared to TensorFlow's baseline design.
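
As a rough illustration of the quantization half of this pipeline, the sketch below implements stochastic ternary quantization in NumPy. The `sparsity_mult` knob and the exact rounding rule are assumptions for illustration, not 3LC's actual parameterization, and the base-3^5 and zero-run encoding stages are omitted.

```python
import numpy as np

def three_value_quantize(grad, sparsity_mult=1.0, rng=None):
    """Stochastic 3-value (ternary) quantization sketch. Each element maps to
    {-m, 0, +m}; raising sparsity_mult zeroes more elements, trading accuracy
    for compressibility, while stochastic rounding keeps the result unbiased.
    """
    rng = rng or np.random.default_rng()
    m = np.max(np.abs(grad)) * sparsity_mult
    if m == 0.0:
        return np.zeros(grad.shape, dtype=np.int8), 0.0
    p = np.minimum(np.abs(grad) / m, 1.0)    # keep probability, so E[q * m] = grad
    keep = rng.random(grad.shape) < p
    return (np.sign(grad) * keep).astype(np.int8), m

q, scale = three_value_quantize(np.array([0.9, -0.05, 0.4, 0.0]))
print(q, scale)   # a ternary payload plus a single float scale to transmit
```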

#2 To Compress Or Not To Compress: Understanding The Interactions Between Adversarial Attacks And Neural Network Compression

Authors: Ilia Shumailov ; Yiren Zhao ; Robert Mullins ; Ross Anderson

As deep neural networks (DNNs) become widely used, pruned and quantised models are becoming ubiquitous on edge devices; such compressed DNNs lower the computational requirements. Meanwhile, multiple recent studies show ways of constructing adversarial samples that make DNNs misclassify. We therefore investigate the extent to which adversarial samples are transferable between uncompressed and compressed DNNs. We find that such samples remain transferable for both pruned and quantised models. For pruning, adversarial samples at high sparsities are marginally less transferable. For quantisation, we find the transferability of adversarial samples is highly sensitive to integer precision.
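
The measurement behind these findings is simple to state; below is a sketch under an assumed protocol (not necessarily the authors' exact setup): transferability is the fraction of adversarial examples crafted against the uncompressed model that also fool the compressed one.

```python
def transfer_rate(adv_examples, labels, predict):
    """Fraction of adversarial examples, crafted against the uncompressed
    model, that also fool a given pruned or quantised model `predict`.
    """
    fooled = sum(predict(x) != y for x, y in zip(adv_examples, labels))
    return fooled / len(adv_examples)

# Toy usage: a stand-in "compressed model" that thresholds a scalar input.
examples, true_labels = [0.4, 0.9, 0.2], [1, 1, 0]
print(transfer_rate(examples, true_labels, lambda x: int(x > 0.5)))   # 1/3
```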

#3 BlueConnect: Decomposing All-Reduce for Deep Learning on Heterogeneous Network Hierarchy

Authors: Minsik Cho ; Ulrich Finkler ; David Kung ; Hillery Hunter

As deep neural networks get more complex and input datasets get larger, it can take days or even weeks to train a deep neural network to the desired accuracy. Therefore, enabling distributed deep learning at a massive scale is critical, since it offers the potential to reduce the training time from weeks to hours. In this paper, we present BlueConnect, an efficient communication library for distributed deep learning that is highly optimized for popular GPU-based platforms. BlueConnect decomposes a single all-reduce operation into a large number of parallelizable reduce-scatter and all-gather operations to exploit the trade-off between latency and bandwidth and to adapt to a variety of network configurations. Each individual operation can then be mapped to a different network fabric and take advantage of the best-performing library for that fabric. We integrated BlueConnect into Caffe2 and demonstrate that BlueConnect significantly pushes the state of the art in large-scale deep learning, reducing communication overhead by 87% on 192 GPUs for ResNet-50 training compared to prior approaches.
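
The decomposition itself is standard collective-communication algebra; the pure-NumPy simulation below shows the equivalence BlueConnect exploits (the real library's contribution, mapping each stage onto the best fabric, is not modeled here).

```python
import numpy as np

def all_reduce_decomposed(worker_vectors):
    """Simulate an all-reduce as a reduce-scatter (each worker ends up owning
    the sum of one chunk) followed by an all-gather (workers reassemble the
    full reduced vector).
    """
    n = len(worker_vectors)
    chunks = [np.array_split(v, n) for v in worker_vectors]
    # Reduce-scatter: worker i reduces chunk i across all workers.
    owned = [sum(chunks[w][i] for w in range(n)) for i in range(n)]
    # All-gather: everyone receives every reduced chunk.
    result = np.concatenate(owned)
    return [result.copy() for _ in range(n)]

grads = [np.ones(8) * (i + 1) for i in range(4)]     # four workers' gradients
print(all_reduce_decomposed(grads)[0])               # every element sums to 10
```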

#4 CaTDet: Cascaded Tracked Detector for Efficient Object Detection from Video

Authors: Huizi Mao ; Taeyoung Kong ; Bill Dally

Detecting objects in a video is a compute-intensive task. In this paper we propose CaTDet, a system to speed up object detection by leveraging the temporal correlation in video. CaTDet consists of two DNN models that form a cascaded detector, and an additional tracker to predict regions of interest based on historic detections. We also propose a new metric, mean Delay (mD), which is designed for latency-critical video applications. Experiments on the KITTI dataset show that CaTDet reduces operation count by 5.1-8.7x with the same mean Average Precision (mAP) as the single-model Faster R-CNN detector, while incurring an additional delay of only 0.3 frames. On the CityPersons dataset, CaTDet achieves a 13.0x reduction in operations with a 0.8 mAP loss.
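
For concreteness, here is a sketch of the mD metric under an assumed definition consistent with the description above: the average number of frames between an object's first appearance and its first successful detection.

```python
def mean_delay(first_appearance, first_detection):
    """Average detection delay in frames over all ground-truth objects.
    Both arguments map an object id to a frame index.
    """
    delays = [first_detection[o] - first_appearance[o]
              for o in first_appearance if o in first_detection]
    return sum(delays) / len(delays) if delays else float("inf")

# "a" is detected immediately, "b" two frames late -> mD = 1.0
print(mean_delay({"a": 10, "b": 14}, {"a": 10, "b": 16}))
```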

#5 Pytorch-BigGraph: A Large Scale Graph Embedding System

Authors: Adam Lerer ; Ledell Wu ; Jiajun Shen ; Timothee Lacroix ; Luca Wehrstedt ; Abhijit Bose ; Alex Peysakhovich

Graph embedding methods produce unsupervised node features from graphs that can then be used for a variety of machine learning tasks. However, modern graph datasets contain billions of nodes and trillions of edges, which exceeds the capability of existing embedding systems. We present PyTorch-BigGraph (PBG), an embedding system that incorporates several modifications to traditional multi-relation embedding systems that allow it to scale to graphs with billions of nodes and trillions of edges. PBG uses graph partitioning to train arbitrarily large embeddings on either a single machine or in a distributed environment. We demonstrate performance comparable to existing embedding systems on common benchmarks, while allowing for scaling to arbitrarily large graphs and parallelization across multiple machines. We train and evaluate embeddings on several large social network graphs and on the full Freebase dataset, which contains over 100 million nodes and 2 billion edges.
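
The memory-scaling trick can be sketched in a few lines. In the toy below, the partitioning, objective, and update rule are all stand-ins, not PBG's implementation; the point is only that grouping edges into (source partition, destination partition) buckets means a trainer needs just two partitions' embeddings in memory at a time.

```python
import numpy as np

def bucketed_epoch(edges, num_nodes, num_parts=4, dim=8, lr=0.01):
    """One pass of partition-bucketed embedding training (illustrative only)."""
    part = np.arange(num_nodes) % num_parts           # toy partition assignment
    emb = np.random.default_rng(0).normal(0, 0.1, (num_nodes, dim))
    buckets = {}
    for s, d in edges:
        buckets.setdefault((part[s], part[d]), []).append((s, d))
    for (ps, pd), bucket in buckets.items():          # swap in partitions ps, pd
        for s, d in bucket:
            err = 1.0 - emb[s] @ emb[d]               # pull linked nodes together
            emb[s], emb[d] = emb[s] + lr * err * emb[d], emb[d] + lr * err * emb[s]
    return emb

print(bucketed_epoch([(0, 5), (1, 6), (2, 3)], num_nodes=8).shape)   # (8, 8)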

#6 AGGREGATHOR: Byzantine Machine Learning via Robust Gradient Aggregation

Authors: Georgios Damaskinos ; El-Mahdi El-Mhamdi ; Rachid Guerraoui ; Arsany Guirguis ; Sébastien Rouault

We present AGGREGATHOR, a framework that implements state-of-the-art robust (Byzantine-resilient) distributed stochastic gradient descent. Following the standard parameter server model, we assume that a minority of worker machines can be controlled by an adversary and behave arbitrarily. Such a setting has been studied theoretically, with several of the existing approaches using a robust aggregation of the workers' gradient estimations. Yet, the question remains whether a Byzantine-resilient aggregation can leverage more workers to speed up learning. We answer this theoretical question, and implement these state-of-the-art theoretical approaches in AGGREGATHOR to assess their practical costs. We build AGGREGATHOR around TensorFlow and introduce modifications to vanilla TensorFlow that make it usable in an actual Byzantine setting. AGGREGATHOR also permits the use of unreliable gradient transfer over UDP to provide a further speed-up (without losing accuracy) over the native TCP-based communication protocols of TensorFlow in saturated networks. We quantify the overhead of Byzantine resilience in AGGREGATHOR at 19% and 43% (for weak and strong Byzantine resilience, respectively) compared to vanilla TensorFlow.
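
To make "robust aggregation" concrete, here is one simple rule from the literature, the coordinate-wise median; the framework's actual aggregation rules come from the Byzantine-SGD papers it builds on, not this exact function.

```python
import numpy as np

def coordinate_wise_median(worker_grads):
    """Aggregate possibly-adversarial gradient vectors by taking the median
    of each coordinate across workers, which a bounded minority of Byzantine
    workers cannot drag arbitrarily far.
    """
    return np.median(np.stack(worker_grads), axis=0)

honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = np.array([1e9, -1e9])                     # one adversarial worker
print(coordinate_wise_median(honest + [byzantine]))   # stays near [1, 2]
```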

#7 Priority-based Parameter Propagation for Distributed DNN Training

Authors: Anand Jayarajan ; Jinliang Wei ; Garth Gibson ; Alexandra Fedorova ; Gennady Pekhimenko

Data parallel training is widely used for scaling distributed deep neural network (DNN) training. However, the performance benefits are often limited by the communication-heavy parameter synchronization step. In this paper, we take advantage of the domain specific knowledge of DNN training and overlap parameter synchronization with computation in order to improve the training performance. We make two key observations: (1) the optimal data representation granularity for the communication may differ from that used by the underlying DNN model implementation and (2) different parameters can afford different synchronization delays. Based on these observations, we propose a new synchronization mechanism called Priority-based Parameter Propagation (P3). P3 synchronizes parameters at a finer granularity and schedules data transmission in such a way that the training process incurs minimal communication delay. We show that P3 can improve the training throughput of ResNet-50, Sockeye and VGG-19 by as much as 25%, 38% and 66% respectively on clusters with realistic network bandwidth.
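
The two observations translate into a slicing-and-priority scheduler. The sketch below is illustrative, not the authors' code; the priority rule and slice size are assumptions. The intuition: gradients become available back-to-front during backprop, but layers closest to the input are needed first by the next forward pass, so their slices should finish synchronizing first.

```python
import heapq

def schedule_slices(layer_sizes, slice_size=1024):
    """Split each layer's parameters into fine-grained slices and emit them
    in priority order (priority = layer index, layer 0 nearest the input).
    """
    heap = []
    for layer, size in enumerate(layer_sizes):
        for offset in range(0, size, slice_size):
            heapq.heappush(heap, (layer, offset))
    while heap:
        yield heapq.heappop(heap)                 # next slice on the wire

for layer, offset in schedule_slices([2500, 1500]):
    print(f"transmit layer {layer}, slice offset {offset}")
```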

#8 Continuous Integration of Machine Learning Models with ease.ml/ci: Towards a Rigorous Yet Practical Treatment

Authors: Cedric Renggli ; Bojan Karlaš ; Bolin Ding ; Feng Liu ; Kevin Schawinski ; Wentao Wu ; Ce Zhang

Continuous integration is an indispensable step of modern software engineering practices to systematically manage the life cycles of system development. Developing a machine learning model is no different: it is an engineering process with a life cycle, including design, implementation, tuning, testing, and deployment. However, most, if not all, existing continuous integration engines do not support machine learning models as first-class citizens.

#9 RLgraph: Modular Computation Graphs for Deep Reinforcement Learning

Authors: Michael Schaarschmidt ; Sven Mika ; Kai Fricke ; Eiko Yoneki

Reinforcement learning (RL) tasks are challenging to implement, execute and test due to algorithmic instability, hyper-parameter sensitivity, and heterogeneous distributed communication patterns. We argue for the separation of logical component composition, backend graph definition, and distributed execution. To this end, we introduce RLgraph, a library for designing and executing reinforcement learning tasks in both static graph and define-by-run paradigms. The resulting implementations are robust, incrementally testable, and yield high performance across different deep learning frameworks and distributed backends.

#10 Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD

Authors: Jianyu Wang ; Gauri Joshi

Large-scale machine learning training, in particular distributed stochastic gradient descent, needs to be robust to inherent system variability such as node straggling and random communication delays. This work considers a distributed training framework where each worker node is allowed to perform local model updates and the resulting models are averaged periodically. We analyze the true speed of error convergence with respect to wall-clock time (instead of the number of iterations) and study how it is affected by the frequency of averaging. The main contribution is the design of AdaComm, an adaptive communication strategy that starts with infrequent averaging to save communication delay and improve convergence speed, and then increases the communication frequency to achieve a low error floor. Rigorous experiments on training deep neural networks show that AdaComm can reach the same final training loss as fully synchronous SGD in up to 3x less wall-clock time.
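
A minimal sketch of periodic-averaging local SGD with a decaying communication period follows; the halving schedule is an illustrative stand-in for the paper's adaptive strategy, not AdaComm itself.

```python
import numpy as np

def local_update_sgd(workers, grad_fn, tau=8, rounds=10, lr=0.05):
    """Each worker takes tau local steps between averaging rounds; shrinking
    tau over time trades a fast, communication-light start for a low final
    error floor.
    """
    for _ in range(rounds):
        for _ in range(tau):                      # local steps, no communication
            workers = [w - lr * grad_fn(w) for w in workers]
        avg = np.mean(workers, axis=0)            # one synchronization point
        workers = [avg.copy() for _ in workers]
        tau = max(1, tau // 2)                    # communicate more often later
    return workers[0]

noisy_grad = lambda x: 2 * x + np.random.default_rng().normal(0, 0.1, x.shape)
print(local_update_sgd([np.ones(3) for _ in range(4)], noisy_grad))  # near 0
```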

#11 Ternary Hybrid Neural-Tree Networks for Highly Constrained IoT Applications

Authors: Dibakar Gope ; Ganesh Dasika ; Matthew Mattina

Machine learning-based applications are increasingly prevalent in IoT devices. The power and storage constraints of these devices make it particularly challenging to run modern neural networks, limiting the number of new applications that can be deployed on an IoT system. A number of compression techniques have been proposed, each with its own trade-offs. We propose a hybrid network which combines the strengths of current neural- and tree-based learning techniques in conjunction with ternary quantization, and show a detailed analysis of the associated model design space. Using this hybrid model, we obtained an 11.1% reduction in the number of computations, a 52.2% reduction in the model size, and a 30.6% reduction in the overall memory footprint over a state-of-the-art keyword-spotting neural network, with negligible loss in accuracy.

#12 Optimizing DNN Computation with Relaxed Graph Substitutions

Authors: Zhihao Jia ; James Thomas ; Todd Warszawski ; Mingyu Gao ; Matei Zaharia ; Alex Aiken

Existing deep learning frameworks optimize the computation graph of a DNN model by performing greedy rule-based graph transformations, which generally only consider transformations that strictly improve runtime performance. We propose relaxed graph substitutions that enable the exploration of complex graph optimizations by relaxing the strict performance improvement constraint, which greatly increases the space of semantically equivalent computation graphs that can be discovered by repeated application of a suitable set of graph transformations. We introduce a backtracking search algorithm over a set of relaxed graph substitutions to find optimized networks and use a flow-based graph split algorithm to recursively split a computation graph into smaller subgraphs to allow efficient search. We implement relaxed graph substitutions in a system called MetaFlow and show that MetaFlow improves the inference and training performance by 1.1-1.6× and 1.1-1.2× respectively over existing deep learning frameworks.
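
The sketch below shows the shape of such a cost-guided backtracking search; the acceptance factor `alpha`, the depth bound, and the queue discipline are assumptions, not MetaFlow's implementation. `substitutions` is assumed to be a list of callables, each yielding every graph obtainable by applying one rewrite rule, and `cost` a runtime estimator.

```python
import heapq
import itertools

def relaxed_search(graph, substitutions, cost, alpha=1.05, depth=4):
    """Explore substitutions that worsen cost by up to a factor alpha, unlike
    greedy rewriting, so locally-bad rewrites can enable better downstream
    ones. Returns the best graph found and its cost.
    """
    tie = itertools.count()                      # tie-breaker so graphs never compare
    best, best_cost = graph, cost(graph)
    frontier = [(best_cost, next(tie), graph, depth)]
    while frontier:
        c, _, g, d = heapq.heappop(frontier)
        if c < best_cost:
            best, best_cost = g, c
        if d == 0:
            continue
        for apply_sub in substitutions:
            for g2 in apply_sub(g):              # every site the rule matches
                c2 = cost(g2)
                if c2 <= alpha * best_cost:      # relaxed acceptance test
                    heapq.heappush(frontier, (c2, next(tie), g2, d - 1))
    return best, best_cost
```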

#13 Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification

Authors: Qi Lei ; Lingfei Wu ; Pin-Yu Chen ; Alex Dimakis ; Inderjit S. Dhillon ; Michael J Witbrock

No summary was provided.

#14 Scaling Video Analytics on Constrained Edge Nodes

Authors: Christopher Canel ; Thomas Kim ; Giulio Zhou ; Conglong Li ; Hyeontaek Lim ; David G Andersen ; Michael Kaminsky ; Subramanya Dulloor

As video camera deployments continue to grow, the need to process large volumes of real-time data strains wide-area network infrastructure. When per-camera bandwidth is limited, it is infeasible for applications such as traffic monitoring and pedestrian tracking to offload high-quality video streams to a datacenter. This paper presents FilterForward, a new edge-to-cloud system that enables datacenter-based applications to process content from thousands of cameras by installing lightweight edge filters that backhaul only relevant video frames. FilterForward introduces fast and expressive per-application “microclassifiers” that share computation to simultaneously detect dozens of events on computationally-constrained edge nodes. Only matching events are transmitted to the datacenter. Evaluation on two real-world camera feed datasets shows that FilterForward improves computational efficiency and event detection accuracy for challenging video content while substantially reducing network bandwidth use.
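
The core data flow can be sketched briefly: one shared base feature computation feeds many cheap per-application microclassifiers, and a frame is backhauled only if at least one application matches. All callables below are stand-ins, not FilterForward's API.

```python
def filter_frames(frames, base_features, microclassifiers, threshold=0.5):
    """Yield only frames that match at least one application's classifier.
    base_features runs once per frame and its output is shared by all apps.
    """
    for frame in frames:
        feats = base_features(frame)
        matched = {app for app, clf in microclassifiers.items()
                   if clf(feats) > threshold}
        if matched:
            yield frame, matched              # only relevant frames leave the edge

apps = {"pedestrian": lambda f: f, "bicycle": lambda f: 1.0 - f}
for frame, hits in filter_frames([0.9, 0.5, 0.1], lambda f: f, apps):
    print(frame, hits)
```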

#15 Full Deep Neural Network Training On A Pruned Weight Budget

Authors: Mieszko Lis ; Maximilian Golub ; Guy Lemieux

We introduce a DNN training technique that learns only a fraction of the full parameter set without incurring an accuracy penalty. To do this, our algorithm constrains the total number of weights updated during backpropagation to those with the highest total gradients. The remaining weights are not tracked, and their initial value is regenerated at every access to avoid storing them in memory. This can dramatically reduce the number of off-chip memory accesses during both training and inference, a key component of the energy needs of DNN accelerators. By ensuring that the total weight diffusion remains close to that of baseline unpruned SGD, networks pruned using our technique are able to retain state-of-the-art accuracy across network architectures — including networks previously identified as difficult to compress, such as DenseNet and WRN. With ResNet18 on ImageNet, we observe an 11.7× weight reduction with no accuracy loss, and up to 24.4× with a small accuracy impact.
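
The regeneration trick is the interesting mechanism; a minimal sketch follows, in which the (seed, index) hashing scheme and the `weight`/`tracked` names are made-up stand-ins for illustration.

```python
import numpy as np

def weight(i, tracked, seed=42, scale=0.05):
    """Only the small tracked set, selected for having the highest total
    gradients, lives in memory; every other weight is re-derived from its
    initialization at each access instead of being stored.
    """
    if i in tracked:
        return tracked[i]                     # a tracked, trained weight
    rng = np.random.default_rng(hash((seed, i)) % (2**32))
    return rng.normal(0.0, scale)             # untracked: regenerate, never store

tracked = {3: 0.7}
print(weight(3, tracked))   # stored value
print(weight(5, tracked))   # regenerated; identical on every access
```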

#16 Towards Federated Learning at Scale: System Design

Authors: Keith Bonawitz ; Hubert Eichner ; Wolfgang Grieskamp ; Dzmitry Huba ; Alex Ingerman ; Vladimir Ivanov ; Chloé Kiddon ; Jakub Konečný ; Stefano Mazzocchi ; Brendan McMahan ; Timon Van Overveldt ; David Petrou ; Daniel Ramage ; Jason Roselander

Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. We have built a scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow. In this paper, we describe the resulting high-level design, sketch some of the challenges and their solutions, and touch upon the open problems and future directions.

#17 Data Validation for Machine Learning

Authors: Neoklis Polyzotis ; Martin Zinkevich ; Sudip Roy ; Eric Breck ; Steven Whang

Machine learning is a powerful tool for gleaning knowledge from massive amounts of data. While a great deal of machine learning research has focused on improving the accuracy and efficiency of training and inference algorithms, far less attention has been paid to the equally important problem of monitoring the quality of data fed to machine learning. The importance of this problem is hard to dispute: errors in the input data can nullify any gains in speed and accuracy for training and inference. This argument points to a data-centric approach to machine learning that treats training and serving data as an important production asset, on par with the algorithm and infrastructure used for learning.
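
A minimal sketch of schema-driven validation follows; the schema format here is an assumption for illustration, not the system's actual API. Each feature declares an expected type and range, and violations are surfaced instead of flowing silently into training.

```python
def validate_batch(rows, schema):
    """Collect (row index, feature, reason) anomalies for every schema
    violation in a batch of feature dictionaries.
    """
    anomalies = []
    for i, row in enumerate(rows):
        for feature, (ftype, lo, hi) in schema.items():
            value = row.get(feature)
            if value is None or not isinstance(value, ftype):
                anomalies.append((i, feature, "missing or wrong type"))
            elif not lo <= value <= hi:
                anomalies.append((i, feature, f"out of range: {value}"))
    return anomalies

schema = {"age": (int, 0, 130), "ctr": (float, 0.0, 1.0)}
print(validate_batch([{"age": 34, "ctr": 0.07}, {"age": -2, "ctr": 1.5}], schema))
```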

#18 TicTac: Accelerating Distributed Deep Learning with Communication Scheduling

Authors: Sayed Hadi Hashemi ; Sangeetha Abdu Jyothi ; Roy Campbell

No summary was provided.

#19 Restructuring Batch Normalization to Accelerate CNN Training

Authors: Wonkyung Jung ; Daejin Jung ; Byeongho Kim ; Sunjung Lee ; Wonjong Rhee ; Jung Ho Ahn

Batch Normalization (BN) has become a core design block of modern Convolutional Neural Networks (CNNs). A typical modern CNN has a large number of BN layers in its lean and deep architecture. BN requires mean and variance calculations over each mini-batch during training. Therefore, the existing memory access reduction techniques, such as fusing multiple CONV layers, are not effective for accelerating BN due to their inability to optimize mini-batch related calculations during training. To address this increasingly important problem, we propose to restructure BN layers by first splitting a BN layer into two sub-layers (fission) and then combining the first sub-layer with its preceding CONV layer and the second sub-layer with the following activation and CONV layers (fusion). The proposed solution can significantly reduce main-memory accesses while training the latest CNN models, and the experiments on a chip multiprocessor show that the proposed BN restructuring can improve the performance of DenseNet-121 by 25.7%.
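
The fission step is easy to show in NumPy (NCHW layout assumed); this sketch shows only the split into two sub-layers, not the fused kernels that deliver the memory savings.

```python
import numpy as np

def bn_statistics(x):
    """Sub-layer 1 (fission): per-channel mini-batch statistics only. In the
    proposed restructuring this half is fused into the preceding CONV layer.
    """
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return mean, var

def bn_normalize(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Sub-layer 2: apply the statistics. This half fuses with the following
    activation and CONV layers, so activations are not re-read from main
    memory between the two halves.
    """
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.default_rng(0).normal(size=(2, 3, 4, 4))
m, v = bn_statistics(x)
print(np.abs(bn_normalize(x, m, v).mean(axis=(0, 2, 3))).max())  # ~0 per channel
```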

#20 Serving Recurrent Neural Networks Efficiently with a Spatial Accelerator

Authors: Tian Zhao ; Yaqi Zhang ; Kunle Olukotun

Recurrent Neural Network (RNN) applications form a major class of AI-powered, low-latency data center workloads. Most execution models for RNN acceleration break computation graphs into BLAS kernels, which lead to significant inter-kernel data movement and resource underutilization. We show that by supporting more general loop constructs that capture design parameters in accelerators, it is possible to improve resource utilization using cross-kernel optimization without sacrificing programmability. This level of abstraction enables a design-space search that can lead to efficient usage of on-chip resources on a spatial architecture across a range of problem sizes. We evaluate our optimization strategy on this abstraction with DeepBench using a configurable spatial accelerator. We demonstrate that this implementation provides a geometric-mean speedup of 30x in performance, 1.6x in area efficiency, and 2x in power efficiency compared to a Tesla V100 GPU, and a geometric-mean speedup of 2x compared to the Microsoft Brainwave implementation on a Stratix 10 FPGA.

#21 TensorFlow.js: Machine Learning For The Web and Beyond

Authors: Daniel Smilkov ; Nikhil Thorat ; Yannick Assogba ; Charles Nicholson ; Nick Kreeger ; Ping Yu ; Shanqing Cai ; Eric Nielsen ; David Soegel ; Stan Bileschi ; Michael Terry ; Ann Yuan ; Kangyi Zhang ; Sandeep Gupta ; Sarah Sirajuddin ; D Sculley ; Rajat Monga ; Greg Corrado ; Fernanda Viegas ; Martin M Wattenberg

TensorFlow.js is a library for building and executing machine learning algorithms in JavaScript. TensorFlow.js models run in a web browser and in the Node.js environment. The library is part of the TensorFlow ecosystem, providing a set of APIs that are compatible with those in Python, allowing models to be ported between the Python and JavaScript ecosystems. TensorFlow.js has empowered a new set of developers from the extensive JavaScript community to build and deploy machine learning models and enabled new classes of on-device computation. This paper describes the design, API, and implementation of TensorFlow.js, and highlights some of the impactful use cases.

#22 YellowFin and the Art of Momentum Tuning

Authors: Jian Zhang ; Ioannis Mitliagkas

Hyperparameter tuning is one of the most time-consuming workloads in deep learning. State-of-the-art optimizers, such as AdaGrad, RMSProp and Adam, reduce this labor by adaptively tuning an individual learning rate for each variable. Recently, researchers have shown renewed interest in simpler methods like momentum SGD as they may yield better test metrics. Motivated by this trend, we ask: can simple adaptive methods based on SGD perform as well or better? We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam. We then analyze its robustness to learning rate misspecification and objective curvature variation. Based on these insights, we design YellowFin, an automatic tuner for momentum and learning rate in SGD. YellowFin optionally uses a negative-feedback loop to compensate for the momentum dynamics in asynchronous settings on the fly. We empirically show that YellowFin can converge in fewer iterations than Adam on ResNets and LSTMs for image recognition, language modeling and constituency parsing, with a speedup of up to 3.28x in synchronous and up to 2.69x in asynchronous settings.
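
For reference, the update rule being tuned is plain heavy-ball momentum with a single (lr, mu) pair; YellowFin's contribution, choosing lr and mu on the fly from curvature and variance estimates, is not shown in this sketch.

```python
import numpy as np

def momentum_sgd(x0, grad_fn, lr=0.01, mu=0.9, steps=200):
    """Heavy-ball SGD: the velocity accumulates a decaying sum of past
    gradients, controlled by the single momentum constant mu.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = mu * v - lr * grad_fn(x)
        x = x + v
    return x

print(momentum_sgd([5.0], lambda x: 2 * x))   # minimizes f(x) = x^2; ends near 0
```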

#23 AdaScale: Towards Real-time Video Object Detection using Adaptive Scaling

Authors: Ting-Wu Chin ; Ruizhou Ding ; Diana Marculescu

In vision-enabled autonomous systems such as robots and autonomous cars, video object detection plays a crucial role, and both its speed and accuracy are important factors for reliable operation. The key insight we present in this paper is that speed and accuracy are not necessarily a trade-off when it comes to image scaling: our results show that re-scaling the image to a lower resolution will sometimes produce better accuracy. Based on this observation, we propose a novel approach, dubbed AdaScale, which adaptively selects the input image scale that improves both accuracy and speed for video object detection. Our results on the ImageNet VID and mini YouTube-BoundingBoxes datasets demonstrate 1.3-point and 2.7-point mAP improvements with 1.6× and 1.8× speedups, respectively. Additionally, we improve state-of-the-art video acceleration work by an extra 1.25× speedup with slightly better mAP on the ImageNet VID dataset.
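
To make the idea concrete, here is a hand-written heuristic in the spirit of adaptive scaling; AdaScale's actual scale selection is learned, not this rule. The intuition: if even the smallest confident detection in the current frame would survive downscaling, the next frame can be processed at lower resolution.

```python
def next_scale(detections, scales=(0.5, 0.75, 1.0), min_size=40):
    """Pick the smallest scale at which the smallest confident object from
    the current frame stays detectable. detections: (box_height_in_pixels,
    confidence) pairs.
    """
    confident = [h for h, c in detections if c > 0.5]
    if not confident:
        return scales[-1]                     # nothing detected: use full size
    smallest = min(confident)
    for s in scales:
        if smallest * s >= min_size:
            return s
    return scales[-1]

print(next_scale([(120, 0.9), (200, 0.8)]))  # only large objects -> 0.5
```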

#24 TensorFlow Eager: A multi-stage, Python-embedded DSL for machine learning

Authors: Akshay Agrawal ; Akshay Modi ; Alexandre Passos ; Allen Lavoie ; Ashish Agarwal ; Asim Shankar ; Igor Ganichev ; Josh Levenberg ; Mingsheng Hong ; Rajat Monga ; Shanqing Cai

TensorFlow Eager is a multi-stage, Python-embedded domain-specific language for hardware-accelerated machine learning, suitable for both interactive research and production. TensorFlow, which TensorFlow Eager extends, requires users to represent computations as dataflow graphs; this permits compiler optimizations and simplifies deployment but hinders rapid prototyping and run-time dynamism. TensorFlow Eager eliminates these usability costs without sacrificing the benefits furnished by graphs: It provides an imperative front-end to TensorFlow that executes operations immediately and a JIT tracer that translates Python functions composed of TensorFlow operations into executable dataflow graphs. TensorFlow Eager thus offers a multi-stage programming model that makes it easy to interpolate between imperative and staged execution in a single package.
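
The multi-stage model is easiest to see in modern TensorFlow's spelling of it; the snippet below uses today's tf.function API, which grew out of the JIT tracer the paper describes (the specific function and values are illustrative).

```python
import tensorflow as tf

# Eager execution: operations run immediately, which suits prototyping.
x = tf.constant([[1.0, 2.0]])
print(tf.matmul(x, x, transpose_b=True))     # evaluated right away

# Staged execution: the same Python code, traced once into a dataflow graph.
@tf.function
def squared_norm(t):
    return tf.reduce_sum(t * t)

print(squared_norm(x))   # first call traces; later calls reuse the graph
```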

#25 Beyond Data and Model Parallelism for Deep Neural Networks

Authors: Zhihao Jia ; Matei Zaharia ; Alex Aiken

Existing deep learning systems commonly parallelize deep neural network (DNN) training using data or model parallelism, but these strategies often result in suboptimal parallelization performance. We introduce SOAP, a more comprehensive search space of parallelization strategies for DNNs that includes strategies to parallelize a DNN in the Sample, Operator, Attribute, and Parameter dimensions. We present FlexFlow, a deep learning engine that uses guided randomized search of the SOAP space to find a fast parallelization strategy for a specific parallel machine. To accelerate this search, FlexFlow introduces a novel execution simulator that can accurately predict a parallelization strategy’s performance and is three orders of magnitude faster than prior approaches that execute each strategy. We evaluate FlexFlow with six real-world DNN benchmarks on two GPU clusters and show that FlexFlow increases training throughput by up to 3.3× over state-of-the-art approaches, even when including its search time, and also improves scalability.
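
A sketch of simulator-guided strategy search follows; the MCMC-style acceptance rule, explore rate, and flat strategy encoding are assumptions, not FlexFlow's algorithm, and `simulate` stands in for its execution simulator, which predicts a strategy's runtime without executing it.

```python
import random

def strategy_search(ops, candidates, simulate, iters=500, explore=0.05):
    """Randomly re-parallelize one operator at a time, accepting improvements
    always and regressions occasionally, and keep the best strategy seen.
    """
    strategy = {op: random.choice(candidates[op]) for op in ops}
    cost = simulate(strategy)
    best, best_cost = dict(strategy), cost
    for _ in range(iters):
        op = random.choice(ops)                          # propose one change
        proposal = dict(strategy, **{op: random.choice(candidates[op])})
        new_cost = simulate(proposal)
        if new_cost < cost or random.random() < explore: # sometimes accept worse
            strategy, cost = proposal, new_cost
            if cost < best_cost:
                best, best_cost = dict(strategy), cost
    return best, best_cost

ops = ["conv1", "conv2"]
candidates = {op: ["data-parallel", "model-parallel"] for op in ops}
toy_time = {"data-parallel": 1.0, "model-parallel": 0.6}
print(strategy_search(ops, candidates,
                      lambda s: sum(toy_time[c] for c in s.values())))
```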