NeurIPS.2023 - Outstanding

Total: 2

#1 [Re] Pure Noise to the Rescue of Insufficient Data

Authors: Ryan Lee; Seungmin Lee

Scope of Reproducibility — We examine the main claims of the original paper [1], which states that in an image classification task with imbalanced training data, (i) using pure noise to augment minority-class images encourages generalization by improving minority-class accuracy. This method is paired with (ii) a new batch normalization layer that normalizes noise images using affine parameters learned from natural images, which improves the model's performance. Moreover, (iii) this improvement is robust to varying levels of data augmentation. Finally, the authors propose that (iv) adding pure noise images can improve classification even on balanced training data.

Methodology — We implemented the training pipeline from the description in the paper using PyTorch and integrated the authors' code snippets for sampling pure noise images and for batch-normalizing noise and natural images separately. All of our experiments were run on a machine from a cloud computing service with one NVIDIA RTX A5000 graphics card, for a total computational time of approximately 432 GPU hours.

Results — We reproduced the main claims that (i) oversampling with pure noise improves generalization by improving minority-class accuracy, (ii) the proposed batch normalization (BN) method outperforms baselines, and (iii) this improvement is robust across data augmentations. Our results also support that (iv) adding pure noise images can improve classification on balanced training data. However, additional experiments suggest that the performance improvement from OPeN may be more orthogonal to the improvement caused by a bigger network or more complex data augmentation.

What was easy — The code snippet in the original paper was thoroughly documented and easy to use. The authors also clearly documented most of the hyperparameters used in the main experiments.

What was difficult — The repository linked in the original paper was not yet populated. As a result, we had to retrieve the CIFAR-10-LT dataset from previous works [2, 3], re-implement WideResNet [4], and rebuild the overall training pipeline.

Communication with original authors — We contacted the authors for clarifications on the implementation details of the algorithm. Prior works used many important implementation details, such as linear learning rate warmup or deferred oversampling, so we confirmed with the authors whether these methods were used.
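The two ingredients the abstract describes — sampling pure-noise images to oversample minority classes, and a batch-norm layer that handles noise and natural images separately — can be illustrated with a short sketch. The code below is a minimal illustration under our own assumptions (per-channel Gaussian noise, noise normalized with its own batch statistics but re-scaled with the affine parameters learned from natural images, no running-statistics update for noise); it is not the authors' exact OPeN/DAR-BN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def sample_pure_noise(num: int, mean: torch.Tensor, std: torch.Tensor,
                      size=(3, 32, 32)) -> torch.Tensor:
    """Draw pure-noise 'images' from a per-channel normal distribution.

    `mean`/`std` are per-channel statistics of the natural training images
    (an assumption about how the noise is parameterized in this sketch).
    """
    noise = torch.randn(num, *size) * std.view(1, -1, 1, 1) + mean.view(1, -1, 1, 1)
    return noise.clamp(0.0, 1.0)


class SplitBatchNorm2d(nn.Module):
    """Sketch of a batch-norm layer that treats pure-noise images separately.

    Natural images go through a regular BatchNorm2d (which also updates the
    running statistics). Noise images are normalized with their own batch
    statistics, then re-scaled with the affine parameters learned from the
    natural images, and they never touch the running statistics.
    """

    def __init__(self, num_features: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features)  # affine params learned on natural images

    def forward(self, x: torch.Tensor, is_noise: torch.Tensor) -> torch.Tensor:
        # is_noise: boolean mask of shape (batch,) marking pure-noise samples
        out = torch.empty_like(x)
        natural, noise = x[~is_noise], x[is_noise]
        if natural.numel() > 0:
            out[~is_noise] = self.bn(natural)  # regular BN path, updates running stats
        if noise.numel() > 0:
            # normalize noise with its own batch statistics only
            normed = F.batch_norm(noise, None, None, training=True)
            # reuse the affine parameters learned from natural images
            normed = normed * self.bn.weight.view(1, -1, 1, 1) + self.bn.bias.view(1, -1, 1, 1)
            out[is_noise] = normed
        return out


# Tiny usage example: a batch of 8 images, the last two replaced by pure noise.
bn = SplitBatchNorm2d(3)
x = torch.rand(8, 3, 32, 32)
is_noise = torch.zeros(8, dtype=torch.bool)
is_noise[-2:] = True
x[is_noise] = sample_pure_noise(2, x.mean(dim=(0, 2, 3)), x.std(dim=(0, 2, 3)))
y = bn(x, is_noise)
```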

#2 [Re] On Explainability of Graph Neural Networks via Subgraph Explorations

Authors: Yannik Mahlau; Lukas Berg; Leonie Kayser

Yuan et al. claim that their proposed method SubgraphX achieves (i) higher fidelity in explaining models for graph- and node-classification tasks than other explanation techniques, namely GNNExplainer. Additionally, (ii) the computational effort of SubgraphX is at a 'reasonable level', which is not further specified by the original authors; we define this as at most ten times slower than GNNExplainer.

We reimplemented the proposed algorithm in PyTorch and replicated the experiments performed by the authors on a smaller scale due to resource constraints. Additionally, we evaluated the method on a new dataset and investigated the influence of hyperparameters. Lastly, we improved SubgraphX using greedy initialization and by using fidelity as a score function.

We were able to reproduce the main claims on the MUTAG dataset, where SubgraphX performs better than GNNExplainer. Furthermore, SubgraphX has a reasonable runtime of about seven times that of GNNExplainer. We also successfully employed SubgraphX on the Karate Club dataset, where it outperforms GNNExplainer as well. The hyperparameter study revealed that the number of Monte Carlo tree search iterations and Monte Carlo sampling steps are the most important hyperparameters and directly trade performance for runtime. Lastly, we show that our proposed improvements to SubgraphX significantly improve fidelity and runtime.

The authors' description of the algorithm was clear and concise, and the original implementation is available in the DIG library, which served as a reference for our implementation. The authors performed extensive experiments, which we could not replicate at full scale due to resource constraints; however, we achieved similar results on a subset of the datasets used. Another issue was that, despite the authors' original code and the datasets being publicly available, there were many compatibility issues. The original authors briefly reviewed our work and agreed with the findings.
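The abstract centers on fidelity as the quality measure for an explanation subgraph and as a score function for the improved SubgraphX. The sketch below illustrates one common fidelity variant: compare the model's confidence on the full graph against its confidence once the explanation nodes are removed. The `fidelity_plus` name, the PyTorch-Geometric-style model interface `model(x, edge_index)`, and the masking strategy (zeroing node features) are our own assumptions for illustration; SubgraphX itself scores candidate subgraphs with Monte Carlo-estimated Shapley values inside a Monte Carlo tree search, which this sketch does not implement.

```python
import torch


def fidelity_plus(model, x: torch.Tensor, edge_index: torch.Tensor,
                  node_mask: torch.Tensor, target_class: int) -> float:
    """Fidelity+ style score for a candidate explanation subgraph.

    Measures how much the predicted-class probability drops when the
    explanation nodes (node_mask == True) are removed from the input:
    a larger drop means the subgraph was more important to the prediction.
    Assumes a graph-classification model that maps (x, edge_index) to
    class logits of shape [1, num_classes] for a single input graph.
    """
    model.eval()
    with torch.no_grad():
        full_prob = model(x, edge_index).softmax(dim=-1)[0, target_class]

        # "Remove" the explanation by zeroing the features of its nodes;
        # other occlusion strategies (e.g. dropping edges) are possible.
        x_masked = x.clone()
        x_masked[node_mask] = 0.0
        masked_prob = model(x_masked, edge_index).softmax(dim=-1)[0, target_class]

    return (full_prob - masked_prob).item()
```

Used as a score function inside the search, higher values favor subgraphs whose removal hurts the prediction the most, which is how fidelity can steer candidate selection directly rather than only serving as a post-hoc evaluation metric.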