MLSYS.2021

Total: 52

#1 Accounting for Variance in Machine Learning Benchmarks

Authors: Xavier Bouthillier ; Pierre Delaunay ; Mirko Bronzi ; Assya Trofimov ; Brennan Nichyporuk ; Justin Szeto ; Nazanin Mohammadi Sepahvand ; Edward Raff ; Kanika Madan ; Vikram Voleti ; Samira Ebrahimi Kahou ; Vincent Michalski ; Tal Arbel ; Chris Pal ; Gael Varoquaux ; Pascal Vincent

Strong empirical evidence that one machine-learning algorithm A outperforms another algorithm B ideally calls for multiple trials that optimize the learning pipeline over sources of variation such as data sampling, augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process and all sources of variation, revealing that variance due to data sampling, parameter initialization, and hyperparameter choice markedly impacts machine learning benchmarks. We analyze the predominant comparison methods used today in light of this variance. We show the counter-intuitive result that a biased estimator with more sources of variation gives better results, closer to the ideal estimator, at a 51× reduction in compute cost. Using this, we perform a detailed study of the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for future performance comparisons.
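
As a rough illustration of the idea (not the paper's estimator or recommended procedure), the sketch below randomizes several sources of variation per trial and compares mean performance with a simple significance check; `train_eval_fn` is a hypothetical user-supplied training-and-evaluation routine.

```python
# Hypothetical sketch: compare two pipelines while randomizing multiple sources of
# variation (data split, weight init, HPO seed) instead of reporting a single run.
import numpy as np
from scipy import stats

def benchmark(train_eval_fn, n_trials=20, seed=0):
    """train_eval_fn(data_seed, init_seed, hpo_seed) -> test score (user-supplied)."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_trials):
        data_seed, init_seed, hpo_seed = rng.integers(0, 2**31, size=3)
        scores.append(train_eval_fn(data_seed, init_seed, hpo_seed))
    return np.asarray(scores)

def compare(scores_a, scores_b):
    """Mean difference plus a Welch t-test p-value as a basic significance check."""
    _, p = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    return scores_a.mean() - scores_b.mean(), p
```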

#2 Wavelet: Efficient DNN Training with Tick-Tock Scheduling

Authors: Guanhua Wang ; Kehan Wang ; Kenan Jiang ; XIANGJUN LI ; Ion Stoica

DNNs have revolutionized a wide range of applications, such as image classification, speech recognition, and robotics control. As DNN models become more computationally expensive to train, parallel execution with multiple accelerators (e.g., GPUs) is adopted, and system efficiency becomes a major issue when scaling out. However, even as computation power increases, GPUs remain under-utilized, mainly due to limited local memory size. To address this memory bound, we present Wavelet, an efficient and generic approach that can fully utilize all the available on-device memory among the GPUs involved in a distributed training job. Wavelet achieves near-optimal on-device memory usage by adopting a simple scheduling scheme called Tick-Tock, which interleaves waves of peak memory usage among the accelerators. Evaluations on a variety of DNN models and tasks show that Wavelet trains models up to 6.7x faster than commonly used parallelism techniques.

#3 Pipelined Backpropagation at Scale: Training Large Models without Batches

Authors: Atli Kosson ; Vitaliy Chiley ; Abhinav Venigalla ; Joel Hestness ; Urs Koster

New hardware can substantially increase the speed and efficiency of deep neural network training. To guide the development of future hardware architectures, it is pertinent to explore the hardware and machine learning properties of alternative training algorithms. In this work we evaluate the use of small batch, fine-grained Pipelined Backpropagation, an asynchronous pipeline parallel training algorithm that has significant hardware advantages. We introduce two methods, Spike Compensation and Linear Weight Prediction, that effectively mitigate the downsides caused by the asynchronicity of Pipelined Backpropagation and outperform existing techniques in our setting. We show that appropriate normalization and small batch sizes can also aid training. With our methods, fine-grained Pipelined Backpropagation using a batch size of one can match the accuracy of SGD for multiple networks trained on CIFAR-10 and ImageNet. Simple scaling rules allow the use of existing hyperparameters for traditional training without additional tuning.
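
As a loose sketch of the weight-prediction idea only (the paper's exact formulation and its Spike Compensation method are not reproduced, and all names here are hypothetical): with a pipeline delay of D optimizer steps, a stage can linearly extrapolate its weights forward using the current momentum velocity so that forward and backward passes better match.

```python
import numpy as np

def predicted_weights(w, velocity, lr, delay):
    """Linearly extrapolate weights `delay` SGD-with-momentum steps ahead using the
    current velocity; a sketch of the idea, not the paper's exact coefficients."""
    return w - lr * delay * velocity

# Example: a pipeline stage with delay 4 computes its forward pass on w_hat instead of w.
w, v = np.zeros(10), np.ones(10)
w_hat = predicted_weights(w, v, lr=0.1, delay=4)
```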

#4 Boveda: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick

Authors: Isak Edo Vivancos ; Sayeh Sharify ; Daniel Ly-Ma ; Ameer Abdelhadi ; Ciaran Bannon ; Milos Nikolic ; Mostafa Mahmoud ; Alberto Delmas Lascorz ; Gennady Pekhimenko ; Andreas Moshovos

Data access between on- and off-chip memories accounts for a large fraction of overall energy consumption during inference with deep learning networks. On-chip memory compression can greatly reduce this energy cost, as long as it balances the simplicity and low cost of the compression/decompression implementation against its effectiveness in data size reduction. We present Boveda, a simple and effective on-chip lossless memory compression technique for fixed-point precision networks. It reduces data widths by exploiting the value distribution that deep learning applications naturally exhibit. Boveda can increase the effective on-chip capacity, reduce off-chip traffic, and/or achieve a desired performance/energy target while using smaller on-chip memories. Boveda can be placed after any memory block in the on-chip memory hierarchy and can work with any data-parallel processing units, such as the vector-like or tensor-core units of modern graphics processors, systolic arrays such as that used in the Tensor Processing Unit, and units that process sparse tensors such as those used in the SCNN accelerator. To demonstrate the potential of Boveda, we implement it over (i) SCNN, a state-of-the-art accelerator for sparse networks, (ii) a Tensorcore-like architecture, and (iii) the TPU. Boveda reduces memory footprint by 34% for SCNN and sparse models on top of zero compression. For dense models, Boveda improves compression by 47%. We also present a prototype FPGA implementation.

#5 TT-Rec: Tensor Train Compression for Deep Learning Recommendation Models

Authors: Chunxing Yin ; Bilge Acun ; Carole-Jean Wu ; Xing Liu

The memory capacity of embedding tables in deep learning recommendation models (DLRMs) is increasing dramatically, from tens of GBs to TBs across the industry. Given the fast growth of DLRMs, novel solutions are urgently needed to enable further DLRM innovation. At the same time, this must be done in a fast and efficient way without exponentially increasing infrastructure capacity demands. In this paper, we demonstrate the promising potential of Tensor Train decomposition for DLRMs (TT-Rec), an important yet under-investigated context. We design and implement optimized kernels (TT-EmbeddingBag) to evaluate the proposed TT-Rec design. TT-EmbeddingBag is 3x faster than the SOTA TT implementation. The performance of TT-Rec is further optimized with batched matrix multiplication and caching strategies for embedding vector lookup operations. In addition, we analyze mathematically and empirically the effect of the weight initialization distribution on DLRM accuracy and propose to initialize the tensor cores of TT-Rec following the sampled Gaussian distribution. We evaluate TT-Rec across three important design space dimensions: memory capacity, accuracy, and timing performance, by training MLPerf-DLRM with Criteo's Kaggle and Terabyte data sets. TT-Rec compresses the model size by 4x to 221x for Kaggle, with a corresponding 0.03% to 0.3% loss of accuracy. For Terabyte, our approach achieves a 112x model size reduction with no accuracy loss or training-time overhead compared to the uncompressed baseline.
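
As a minimal sketch of why tensor-train factorization compresses an embedding table (not the optimized TT-EmbeddingBag kernel, which handles batched lookups and caching), the code below reconstructs one embedding row from small TT cores; the shapes and the mixed-radix index mapping are illustrative assumptions.

```python
import numpy as np

def tt_embedding_row(cores, idx, row_dims):
    """Reconstruct one embedding row from tensor-train cores.
    cores[k]: array of shape (r_{k-1}, n_k, d_k, r_k) with r_0 = r_K = 1;
    the full table has N = prod(n_k) rows and D = prod(d_k) columns."""
    # Mixed-radix decomposition of the row index into (i_1, ..., i_K).
    sub = []
    for n in reversed(row_dims):
        sub.append(idx % n)
        idx //= n
    sub.reverse()
    # Contract the selected slices left to right.
    out = cores[0][:, sub[0], :, :]                      # shape (1, d_1, r_1)
    for k in range(1, len(cores)):
        nxt = cores[k][:, sub[k], :, :]                  # shape (r_{k-1}, d_k, r_k)
        out = np.einsum('bdr,rek->bdek', out, nxt)
        out = out.reshape(1, -1, nxt.shape[-1])
    return out.reshape(-1)                               # length D, never materializing N x D
```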

#6 FLAML: A Fast and Lightweight AutoML Library

Authors: Chi Wang ; Qingyun Wu ; Markus Weimer ; Erkang Zhu

We study the problem of automating the choice of learners and hyperparameters for an ad-hoc training dataset and error metric at low computational cost, by conducting trials of different configurations on the given training data. We investigate the joint impact of multiple factors on both trial cost and model error, and propose several design guidelines. Following them, we build FLAML, a fast and lightweight library that optimizes for low computational resource usage while finding accurate models. FLAML integrates several simple but effective search strategies into an adaptive system. It significantly outperforms top-ranked AutoML libraries on a large open-source AutoML benchmark under equal, or sometimes orders-of-magnitude smaller, budget constraints.
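
A minimal usage sketch based on FLAML's commonly documented interface; argument names and defaults may differ across library versions, so treat this as an assumption rather than a definitive reference.

```python
from flaml import AutoML
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
automl.fit(X_train=X_train, y_train=y_train,
           task="classification",   # learners and hyperparameters chosen automatically
           time_budget=60,          # total search budget in seconds
           metric="accuracy")
pred = automl.predict(X_test)
print(automl.best_estimator, accuracy_score(y_test, pred))
```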

#7 Swift for TensorFlow: A portable, flexible platform for deep learning

Authors: Brennan Saeta ; Denys Shabalin

Swift for TensorFlow is a deep learning platform that scales from mobile devices to clusters of hardware accelerators in data centers. It combines a language-integrated automatic differentiation system and multiple Tensor implementations within a modern ahead-of-time compiled language oriented around mutable value semantics. The resulting platform has been validated through use in over 30 deep learning models and has been employed across data center and mobile applications.

#8 IOS: Inter-Operator Scheduler for CNN Acceleration

Authors: Yaoyao Ding ; Ligeng Zhu ; Zhihao Jia ; Gennady Pekhimenko ; Song Han

To accelerate CNN inference, existing deep learning frameworks focus on optimizing intra-operator parallelization. However, a single operator can no longer fully utilize the available parallelism given the rapid advances in high-performance hardware, resulting in a large gap between the peak performance and the real performance. This performance gap is more severe under smaller batch sizes. In this work, we extensively study the parallelism between operators and propose Inter-Operator Scheduler (IOS) to automatically schedule multiple operators' parallel execution through a novel dynamic programming algorithm. IOS consistently outperforms state-of-the-art libraries (e.g., TensorRT) by 1.1 to 1.5x on modern CNN benchmarks. The code to reproduce each experiment is available at: https://github.com/mit-han-lab/inter-operator-scheduler.
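
IOS itself schedules general computation graphs; as a simplified, hypothetical illustration of the dynamic-programming flavor, the sketch below groups a linear chain of operators into concurrently executed stages under a user-supplied cost model (`op_costs` and `concurrent_cost` are assumptions, not part of IOS).

```python
from functools import lru_cache

def schedule(op_costs, concurrent_cost, max_group=4):
    """Choose how to group consecutive operators into parallel stages so that the
    total modeled latency is minimized (linear-chain simplification of IOS)."""
    n = len(op_costs)

    @lru_cache(maxsize=None)
    def best(i):
        if i == n:
            return 0.0, []
        best_cost, best_plan = float('inf'), None
        for j in range(i + 1, min(i + max_group, n) + 1):
            group = tuple(range(i, j))                   # ops i..j-1 run concurrently
            rest_cost, rest_plan = best(j)
            cost = concurrent_cost(group) + rest_cost
            if cost < best_cost:
                best_cost, best_plan = cost, [group] + rest_plan
        return best_cost, best_plan

    return best(0)

# Example cost model: a concurrent group costs its slowest op plus 10% overhead.
costs = [1.0, 0.4, 0.4, 2.0, 0.3]
total, plan = schedule(costs, lambda g: max(costs[i] for i in g) * 1.1)
```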

#9 Towards Scalable Distributed Training of Deep Learning on Public Cloud Clusters

Authors: Shaohuai Shi ; Xianhao Zhou ; Shutao Song ; Xingyao Wang ; Zilin Zhu ; Xue Huang ; Xinan Jiang ; Feihu Zhou ; Zhenyu Guo ; Liqiang Xie ; Rui Lan ; Xianbin Ouyang ; Yan Zhang ; Jieqian Wei ; Jing Gong ; Weiliang Lin ; Ping Gao ; Peng Meng ; Xiaomin Xu ; Chenyang Guo ; Bo Yang ; Zhibo Chen ; Yongjian Wu ; Xiaowen Chu

Distributed training techniques have been widely deployed for training large-scale deep models on dense-GPU clusters. However, on public cloud clusters, due to the moderate inter-connection bandwidth between instances, traditional state-of-the-art distributed training systems cannot scale well when training large-scale models. In this paper, we propose a new computation- and communication-efficient top-k sparsification communication library for distributed training. To further improve system scalability, we optimize I/O by proposing a simple yet efficient multi-level data caching mechanism, and we optimize the update operation by introducing a novel parallel tensor operator. Experimental results on a 16-node Tencent Cloud cluster (each node with 8 Nvidia Tesla V100 GPUs) show that our system is 25%-40% faster than existing state-of-the-art systems on CNNs and Transformer. We finally break the record on DAWNBench for training ResNet-50 to 93% top-5 accuracy on ImageNet.
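
For context, here is a minimal sketch of plain top-k gradient sparsification with error feedback, the family of methods such a communication library builds on; the paper's specific communication scheme and parallel update operator are not reproduced, and the function names are hypothetical.

```python
import numpy as np

def topk_sparsify(grad, k, residual):
    """Keep the k largest-magnitude gradient entries; accumulate the rest into a
    local residual that is added back next iteration (error feedback)."""
    acc = grad + residual
    idx = np.argpartition(np.abs(acc), -k)[-k:]   # indices of the top-k entries
    values = acc[idx]
    new_residual = acc.copy()
    new_residual[idx] = 0.0                       # what was not sent stays local
    return idx, values, new_residual

# Each worker would then exchange the (idx, values) pairs instead of dense gradients.
```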

#10 FirePlace: Placing Firecracker Virtual Machines with Hindsight Imitation

Authors: Bharathan Balaji ; Christopher Kakovitch ; Balakrishnan Narayanaswamy

Virtual machines (VMs) form the foundation of modern cloud computing, as they help logically abstract per-user compute from shared physical infrastructure. Users of these services require VMs of varying sizes and configurations, which the provider places on a set of physical machines (PMs). VMs on the same PM share memory and CPU resources, so a bad packing directly impacts the quality of user experience. We consider the placement of Firecracker VMs (a form of micro-VMs), lightweight VMs that are typically used for short-lived tasks. Our objective is to place each VM as it arrives, so that the peak-to-average ratio of resource usage across PMs is minimized. Placement is challenging as we need to consider resource use in multiple dimensions, such as CPU and memory, and because resource use changes over time. Past approaches to similar problems have suggested that one could forecast VM resource use for placement. We see that in our production traffic, micro-VM resource use is spiky and short-lived, and that forecasting algorithms are not useful. We evaluate reinforcement learning (RL) approaches for this task, but find that off-the-shelf RL algorithms are not always performant. We present a forecasting-free algorithm, called FirePlace, that learns the placement decision using a variant of hindsight optimization, which we call hindsight imitation. We evaluate our approach using a production traffic trace of Firecracker usage in AWS Lambda. FirePlace improves upon baseline algorithms by 10% on a production data trace of 100K Firecracker VMs.
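
To make the objective concrete, here is a sketch of the peak-to-average metric and a greedy baseline placement that minimizes it; this is illustrative only and is not the FirePlace algorithm, which learns the decision via hindsight imitation.

```python
import numpy as np

def peak_to_average(usage):
    """usage: array of shape (num_pms, num_resources), e.g. CPU and memory columns.
    Returns the worst peak-to-average ratio across resource dimensions."""
    peak = usage.max(axis=0)
    avg = usage.mean(axis=0)
    return float(np.max(peak / np.maximum(avg, 1e-9)))

def place_greedy(vm_demand, usage):
    """Baseline: put the arriving VM on the PM that minimizes the objective."""
    scores = []
    for pm in range(usage.shape[0]):
        trial = usage.copy()
        trial[pm] += vm_demand          # vm_demand: per-resource demand of the new VM
        scores.append(peak_to_average(trial))
    return int(np.argmin(scores))
```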

#11 Bit Error Robustness for Energy-Efficient DNN Accelerators

Authors: David Stutz ; Nandhini Chandramoorthy ; Matthias Hein ; Bernt Schiele

Deep neural network (DNN) accelerators have received considerable attention in recent years due to the energy they save compared to mainstream hardware. Low-voltage operation of DNN accelerators allows energy consumption to be reduced further, but causes bit-level failures in the memory storing the quantized DNN weights. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) significantly improves robustness against random bit errors in (quantized) DNN weights. This leads to high energy savings from both low-voltage operation and low-precision quantization. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays. We also discuss why weight clipping alone is already a quite effective way to achieve robustness against bit errors. Moreover, we specifically discuss the involved trade-offs regarding accuracy, robustness, and precision: without losing more than 1% in accuracy compared to a normally trained 8-bit DNN, we can reduce energy consumption on CIFAR-10 by 20%. Higher energy savings of, e.g., 30% are possible at the cost of 2.5% accuracy, even for 4-bit DNNs.
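
As a minimal sketch of the perturbation that random bit error training exposes the network to (the storage format and independent-flip error model are simplifying assumptions, and the training loop itself is not shown):

```python
import numpy as np

def flip_random_bits(q_weights, p, bits=8, rng=None):
    """Flip each stored bit of the quantized weights independently with probability p,
    mimicking low-voltage SRAM faults (unsigned 8-bit fixed-point storage assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    w = q_weights.astype(np.uint8).copy()
    for b in range(bits):
        flips = (rng.random(w.shape) < p).astype(np.uint8)
        w ^= flips << b
    return w

# In RandBET-style training, the perturbed weights would be dequantized for the
# forward pass so the model learns to tolerate such errors.
```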

#12 SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection

Authors: Yue Zhao ; Xiyang Hu ; Cheng Cheng ; Cong Wang ; Changlin Wan ; Wen Wang ; Jianing Yang ; Haoping Bai ; Zheng Li ; Cao Xiao ; Yunlong Wang ; Zhi Qiao ; Jimeng Sun ; Leman Akoglu

Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples, with numerous high-stakes applications including fraud detection and intrusion detection. Due to the lack of ground truth labels, practitioners often have to build a large number of unsupervised, heterogeneous models (i.e., different algorithms with varying hyperparameters) for further combination and analysis, rather than relying on a single model. How can the training of a large number of unsupervised, heterogeneous OD models, and the scoring of newly arriving samples by outlyingness (referred to as prediction throughout the paper), be accelerated? In this study, we propose a modular acceleration system, called SUOD, to address this question. The proposed system focuses on three complementary acceleration aspects (data reduction for high-dimensional data, approximation for costly models, and taskload imbalance optimization for distributed environments), while maintaining detection accuracy. Extensive experiments on more than 20 benchmark datasets demonstrate SUOD's effectiveness in heterogeneous OD acceleration, along with a real-world deployment case on fraudulent claim analysis at IQVIA, a leading healthcare firm. We open-source SUOD for reproducibility and accessibility.

#13 To Bridge Neural Network Design and Real-World Performance: A Behaviour Study for Neural Networks

Authors: Xiaohu Tang ; Shihao Han ; Li Lyna Zhang ; Ting Cao ; Yunxin Liu

The boom of edge AI applications has spawned a great many neural network (NN) algorithms and inference platforms. Unfortunately, the fast pace of development in these fields has magnified the gaps between them. A well-designed NN algorithm with a reduced number of computation operations and memory accesses can easily result in increased inference latency in real-world deployment, due to a mismatch between the algorithm and the features of target platforms.

#14 Scaling Distributed Training with Adaptive Summation

Authors: Saeed Maleki ; Madan Musuvathi ; Todd Mytkowicz ; Olli Saarikivi ; Tianju Xu ; Vadim Eksarevskiy ; Jaliya Ekanayake ; Emad Barsoum

Data parallelism is a common way to parallelize stochastic gradient descent (SGD). However, the loss of convergence at large minibatch sizes limits the scalability of data parallelism. This paper introduces a novel method to combine gradients called Adasum that significantly improves the convergence when using large minibatches. This paper provides the intuition and formal justification of Adasum along with a convergence proof. Additionally, the paper describes an efficient implementation of Adasum and its integration into the open-source toolkit Horovod for use in both TensorFlow and PyTorch.
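
As a sketch of the pairwise combination rule as commonly described for Adasum (to the best of my reading of the paper and the Horovod documentation; the efficient hierarchical all-reduce implementation is not shown): the rule behaves like a sum when two gradients are orthogonal and like an average when they point the same way.

```python
import numpy as np

def adasum_pair(g1, g2, eps=1e-12):
    """Combine two gradient vectors adaptively: orthogonal components are summed,
    parallel components are averaged (pairwise rule, paraphrased from the paper)."""
    dot = float(np.dot(g1, g2))
    a = 1.0 - dot / (2.0 * float(np.dot(g1, g1)) + eps)
    b = 1.0 - dot / (2.0 * float(np.dot(g2, g2)) + eps)
    return a * g1 + b * g2

# Gradients from 2^k workers can be reduced by applying the rule pairwise in a tree.
```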

#15 VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference

Authors: Steve Dai ; Rangha Venkatesan ; Mark Ren ; Brian Zimmer ; William Dally ; Brucek Khailany

Quantization enables efficient acceleration of deep neural networks by reducing model memory footprint and exploiting low-cost integer math hardware units. Quantization maps floating-point weights and activations in a trained model to low-bitwidth integer values using scale factors. Excessive quantization, reducing precision too aggressively, results in accuracy degradation. When scale factors are shared at a coarse granularity across many dimensions of each tensor, the effective precision of individual elements within the tensor is limited. To reduce quantization-related accuracy loss, we propose using a separate scale factor for each small vector of (~16-64) elements within a single dimension of a tensor. To achieve an efficient hardware implementation, the per-vector scale factors can be implemented with low-bitwidth integers when calibrated using a two-level quantization scheme. We find that per-vector scaling consistently achieves better inference accuracy at low precision compared to conventional scaling techniques for popular neural networks, without requiring retraining. We also modify a deep learning accelerator hardware design to study the area and energy overheads of per-vector scaling support. Our evaluation demonstrates that per-vector scaled quantization with 4-bit weights and activations achieves 69% energy saving and 36% area saving over an 8-bit baseline while maintaining over 75% accuracy for ResNet50 on ImageNet. 4-bit weights and 8-bit activations achieve near-full-precision accuracy for both BERT-base and BERT-large on SQuAD while reducing area by 28% compared to an 8-bit baseline.
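
A minimal sketch of per-vector "fake quantization" with one scale per 16-element vector along a single dimension; the paper's two-level scheme, which additionally quantizes the scale factors themselves, is omitted, and the function name is hypothetical.

```python
import numpy as np

def per_vector_fake_quant(x, vec_size=16, bits=4):
    """Quantize and dequantize x with one scale factor per vec_size-element vector
    along the last dimension (assumed divisible by vec_size)."""
    qmax = 2 ** (bits - 1) - 1
    v = x.reshape(-1, vec_size)
    scale = np.abs(v).max(axis=1, keepdims=True) / qmax   # per-vector scale factor
    scale = np.where(scale == 0, 1.0, scale)
    q = np.clip(np.round(v / scale), -qmax - 1, qmax)     # low-bitwidth integer values
    return (q * scale).reshape(x.shape)                   # dequantized approximation
```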

#16 Horizontally Fused Training Array: An Effective Hardware Utilization Squeezer for Training Novel Deep Learning Models

Authors: Shang Wang ; Peiming Yang ; Yuxuan Zheng ; Xin Li ; Gennady Pekhimenko

Driven by the tremendous effort in researching novel deep learning (DL) algorithms, the training cost of developing new models has increased staggeringly in recent years. We analyze GPU cluster usage statistics from a top research institute for more insights into the hardware efficiency achieved by typical DL training jobs. Our study reveals that single-accelerator training jobs can dominate cluster-wide resource consumption when launched repetitively (e.g., for hyper-parameter tuning) while severely under-utilizing the hardware. Fortunately, we observe that such workloads have the following unique characteristics: (i) the models among jobs often have the same types of operators with the same shapes, and (ii) the inter-model horizontal fusion of such operators is mathematically equivalent to other already well-optimized operators. Thus, to help DL researchers and practitioners effectively improve the hardware utilization of their novel DL training workloads, we propose Horizontally Fused Training Array (HFTA). HFTA is a new DL framework extension library that horizontally fuses the models from different repetitive jobs deeply down to operators and then trains them simultaneously on a shared accelerator. To show the generality of our solution, we apply HFTA to six DL models trained on state-of-the-art accelerators (GPUs and TPUs). Our results indicate that HFTA is highly effective in improving hardware utilization and achieves up to 15.1x higher training throughput vs. the standard practice of running each job on a separate accelerator.
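
As a hypothetical PyTorch sketch of the core fusion idea only (the HFTA library fuses many operator types and handles optimizers as well): K repetitive jobs that share a Linear layer's shape can be trained together by replacing K small matmuls with one batched matmul on a single accelerator.

```python
import torch

# Hypothetical example: K hyper-parameter-tuning jobs each train a Linear(128 -> 64).
K, in_f, out_f, batch = 4, 128, 64, 32
fused_weight = torch.randn(K, out_f, in_f, requires_grad=True)  # one weight slice per job
fused_bias   = torch.zeros(K, 1, out_f, requires_grad=True)
inputs       = torch.randn(K, batch, in_f)                      # one mini-batch per job

# One batched matmul replaces K separate small kernels.
outputs = torch.baddbmm(fused_bias, inputs, fused_weight.transpose(1, 2))  # (K, batch, out_f)
loss = outputs.pow(2).mean()
loss.backward()    # gradients for all K jobs computed in a single backward pass
```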

#17 Characterizing and Taming Model Instability Across Edge Devices

Authors: Eyal Cidon ; Evgenya Pergament ; Zain Asgar ; Asaf Cidon ; Sachin Katti

The same machine learning model running on different edge devices may produce highly-divergent outputs on a nearly-identical input. Possible reasons for the divergence include differences in the device sensors, the device's signal processing hardware and software, and its operating system and processors. This paper presents the first methodical characterization of the variations in model prediction across real-world mobile devices. We demonstrate that accuracy is not a useful metric to characterize prediction divergence, and introduce a new metric, instability, which captures this variation. We characterize different sources for instability, and show that differences in compression formats and image signal processing account for significant instability in object classification models. Notably, in our experiments, 14-17% of images produced divergent classifications across one or more phone models. We evaluate three different techniques for reducing instability. In particular, we adapt prior work on making models robust to noise in order to fine-tune models to be robust to variations across edge devices. We demonstrate our fine-tuning techniques reduce instability by 75%.
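
As a rough sketch of an instability-style metric in the spirit of the abstract (the paper's exact definition may differ): the fraction of inputs whose predicted label is not identical across devices.

```python
def instability(preds_by_device):
    """Fraction of inputs whose top-1 prediction differs across devices.
    preds_by_device: one equal-length list of predicted labels per device."""
    total = len(preds_by_device[0])
    divergent = sum(1 for labels in zip(*preds_by_device) if len(set(labels)) > 1)
    return divergent / total

# Example: 2 of 4 images are classified differently on at least one device -> 0.5
print(instability([[1, 2, 3, 4], [1, 2, 5, 4], [1, 7, 3, 4]]))
```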

#18 Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy

Authors: Lucas Liebenwein ; Cenk Baykal ; Brandon Carter ; David Gifford ; Daniela Rus

Neural network pruning is a popular technique used to reduce the inference costs of modern, potentially overparameterized, networks. Starting from a pre-trained network, the process is as follows: remove redundant parameters, retrain, and repeat while maintaining the same test accuracy. The result is a model that is a fraction of the size of the original with comparable predictive performance (test accuracy). Here, we reassess and evaluate whether the use of test accuracy alone in the terminating condition is sufficient to ensure that the resulting model performs well across a wide spectrum of "harder" metrics, such as generalization to out-of-distribution data and resilience to noise. Across evaluations on varying architectures and data sets, we find that pruned networks effectively approximate the unpruned model; however, the prune ratio at which pruned networks achieve commensurate performance varies significantly across tasks. These results call into question the extent of genuine overparameterization in deep learning and raise concerns about the practicability of deploying pruned networks, specifically in the context of safety-critical systems, unless they are widely evaluated beyond test accuracy to reliably predict their performance. Our code is available at https://github.com/lucaslie/torchprune.

#19 Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference

Authors: Haichen Shen ; Jared Roesch ; Zhi Chen ; Wei Chen ; Yong Wu ; Mu Li ; Vin Sharma ; Zachary Tatlock ; Yida Wang

Modern deep neural networks increasingly make use of features such as control flow, dynamic data structures, and dynamic tensor shapes. Existing deep learning systems focus on optimizing and executing static neural networks which assume a pre-determined model architecture and input data shapes—assumptions that are violated by dynamic neural networks. Therefore, executing dynamic models with deep learning systems is currently both inflexible and sub-optimal, if not impossible. Optimizing dynamic neural networks is more challenging than static neural networks; optimizations must consider all possible execution paths and tensor shapes. This paper proposes Nimble, a high-performance and flexible system to optimize, compile, and execute dynamic neural networks on multiple platforms. Nimble handles model dynamism by introducing a dynamic type system, a set of dynamism-oriented optimizations, and a light-weight virtual machine runtime. Our evaluation demonstrates that Nimble outperforms existing solutions for dynamic neural networks by up to 20x on hardware platforms including Intel CPUs, ARM CPUs, and Nvidia GPUs.

#20 In-network Aggregation for Shared Machine Learning Clusters

Authors: Nadeen Gebara ; Manya Ghobadi ; Paolo Costa

We present PANAMA, a network architecture for machine learning (ML) workloads on shared clusters where a variety of training jobs co-exist. PANAMA consists of two key components: (i) an efficient in-network hardware accelerator designed to accelerate large data-parallel training transfers; and (ii) a lightweight congestion control protocol to enable fair sharing of network resources across different flows. Our congestion control protocol exploits the unique communication pattern in training to ensure that large in-network aggregation transfers do not negatively impact short latency-sensitive flows. To evaluate the feasibility of PANAMA, we build an FPGA-based prototype with 10 Gbps transceivers and show that our hardware datapath achieves line-rate aggregation. Our large-scale simulations demonstrate that PANAMA improves the mean and 99th-percentile completion time of latency-sensitive short flows by a factor of 2–4.5 while reducing the average training time of large jobs by a factor of 1.25.

#21 Amazon SageMaker Debugger: A System for Real-Time Insights into Machine Learning Model Training

Authors: Nathalie Rauschmayr ; Vikas Kumar ; Rahul Huilgol ; Andrea Olgiati ; Satadal Bhattacharjee ; Nihal Harish ; Vandana Kannan ; Amol Lele ; Anirudh Acharya ; Jared Nielsen ; Lakshmi Ramakrishnan ; Ishan Bhatt ; Kohen Chia ; Neelesh Dodda ; Zhihan Li ; Jiacheng Gu ; Miyoung Choi ; Balajee Nagarajan ; Jeffrey Geevarghese ; Denis Davydenko ; Sifei Li ; Lu Huang ; Edward Kim ; Tyler Hill ; Krishnaram Kenthapadi

Manual debugging is a common productivity drain in the machine learning (ML) lifecycle. Identifying underperforming training jobs requires constant developer attention and deep domain expertise. As state-of-the-art models grow in size and complexity, debugging becomes increasingly difficult. Just as unit tests boost traditional software development, an automated ML debugging library can save time and money. We present Amazon SageMaker Debugger, a machine learning feature that automatically identifies and stops underperforming training jobs. Debugger is a new feature of Amazon SageMaker that automatically captures relevant data during training and evaluation and presents it for online and offline inspection. Debugger helps users define a set of conditions, in the form of built-in or custom rules, that are applied to this data, thereby enabling users to catch training issues as well as monitor and debug ML model training in real-time. These rules save time and money by alerting the developer and terminating a problematic training job early.

#22 RL-Scope: Cross-stack Profiling for Deep Reinforcement Learning Workloads

Authors: James Gleeson ; Moshe Gabel ; Gennady Pekhimenko ; Eyal de Lara ; Srivatsan Krishnan ; Vijay Janapa Reddi

Deep reinforcement learning (RL) has made groundbreaking advancements in robotics, data center management, and other applications. Unfortunately, system-level bottlenecks in RL workloads are poorly understood; we observe fundamental structural differences in RL workloads that make them inherently less GPU-bound than supervised learning (SL). To explain where training time is spent in RL workloads, we propose RL-Scope, a cross-stack profiler that scopes low-level CPU/GPU resource usage to high-level algorithmic operations and provides accurate insights by correcting for profiling overhead. Using RL-Scope, we survey RL workloads across their major dimensions, including ML backend, RL algorithm, and simulator. For ML backends, we explain a 2.3× difference in runtime between equivalent PyTorch and TensorFlow algorithm implementations and identify a bottleneck rooted in overly abstracted algorithm implementations. For RL algorithms and simulators, we show that on-policy algorithms are at least 3.5× more simulation-bound than off-policy algorithms. Finally, we profile a scale-up workload and demonstrate that GPU utilization metrics reported by commonly used tools dramatically inflate GPU usage, whereas RL-Scope reports true GPU-bound time. RL-Scope is an open-source tool available at https://github.com/UofT-EcoSystem/rlscope.

#23 A Learned Performance Model for Tensor Processing Units

Authors: Sam Kaufman ; Phitchaya Phothilimthana ; Yanqi Zhou ; Charith Mendis ; Sudip Roy ; Amit Sabne ; Mike Burrows

Accurate hardware performance models are critical to efficient code generation. They can be used by compilers to make heuristic decisions, by superoptimizers as a minimization objective, or by autotuners to find an optimal configuration for a specific program. However, they are difficult to develop because contemporary processors are complex, and the recent proliferation of deep learning accelerators has increased the development burden. We demonstrate a method of learning performance models from a corpus of tensor computation graph programs for Tensor Processing Unit (TPU) instances. We show that our learned model outperforms a heavily-optimized analytical performance model on two tasks—tile-size selection and operator fusion—and that it helps an autotuner discover faster programs in a setting where access to TPUs is limited or expensive.

#24 TensorFlow Lite Micro: Embedded Machine Learning for TinyML Systems

Authors: Robert David ; Jared Duke ; Advait Jain ; Vijay Janapa Reddi ; Nat Jeffries ; Jian Li ; Nick Kreeger ; Ian Nappier ; Meghna Natraj ; Tiezhen Wang ; Pete Warden ; Rocky Rhodes

We introduce TensorFlow (TF) Micro, an open-source machine learning inference framework for running deep-learning models on embedded systems. TF Micro tackles the efficiency requirements imposed by embedded system resource constraints and the fragmentation challenges that make cross-platform interoperability nearly impossible. The framework adopts a unique interpreter-based approach that provides flexibility while overcoming the challenges. This paper explains the design decisions behind TF Micro and describes its implementation. We present an evaluation to demonstrate its low resource requirement and minimal run-time performance overhead.

#25 Scaling Polyhedral Neural Network Verification on GPUs

Authors: Christoph Müller ; François Serre ; Gagandeep Singh ; Markus Püschel ; Martin Vechev

No summary was provided.