USENIX-Sec.2021

Total: 176

#1 Weak Links in Authentication Chains: A Large-scale Analysis of Email Sender Spoofing Attacks

Authors: Kaiwen Shen ; Chuhan Wang ; Minglei Guo ; Xiaofeng Zheng ; Chaoyi Lu ; Baojun Liu ; Yuxuan Zhao ; Shuang Hao ; Haixin Duan ; Qingfeng Pan ; Min Yang

As a fundamental communication service, email plays an important role in both individual and corporate communications, which also makes it one of the most frequently exploited attack vectors. An email's authenticity is based on an authentication chain involving multiple protocols, roles, and services, and inconsistencies among them create security threats. Authenticity therefore depends on the weakest link of the chain, as any failed part can break the whole chain-based defense. This paper systematically analyzes the transmission of an email and identifies a series of new attacks capable of bypassing SPF, DKIM, DMARC, and user-interface protections. In particular, by conducting a "cocktail" joint attack, more realistic emails can be forged to penetrate widely used email services such as Gmail and Outlook. We conduct a large-scale experiment on 30 popular email services and 23 email clients, and find that all of them are vulnerable to certain types of the new attacks. We have duly reported the identified vulnerabilities to the affected email service providers and received positive responses from 11 of them, including Gmail, Yahoo, iCloud, and Alibaba. Furthermore, we propose key mitigation measures to defend against the new attacks. This work is therefore of great value for identifying email spoofing attacks and improving the overall security of the email ecosystem.
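
The attacks above hinge on inconsistencies between the components an authentication chain is supposed to tie together. As a minimal, hedged illustration in Python (not one of the paper's specific attacks), the sketch below shows how the SMTP envelope sender that SPF validates can differ from the From header that the recipient's client displays; all addresses and domains are hypothetical.

```python
# Minimal sketch: the envelope sender checked by SPF need not match the
# From header shown to the user. Domains and addresses are hypothetical.
from email.message import EmailMessage

envelope_from = "bounce@attacker.example"      # what SPF authenticates (SMTP MAIL FROM)
msg = EmailMessage()
msg["From"] = "CEO <ceo@victim.example>"       # what the mail client renders
msg["To"] = "employee@victim.example"
msg["Subject"] = "Urgent wire transfer"
msg.set_content("Please process the attached invoice today.")

def header_domain(addr: str) -> str:
    return addr.split("@")[-1].rstrip(">").lower()

# A DMARC-style alignment check compares the two domains; if any hop in the
# chain skips or relaxes this comparison, the spoofed From header survives.
aligned = header_domain(envelope_from) == header_domain(str(msg["From"]))
print("SPF (envelope) domain :", header_domain(envelope_from))
print("Displayed From domain :", header_domain(str(msg["From"])))
print("DMARC-aligned?        :", aligned)      # False -> should be rejected or quarantined
```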

#2 Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection

Authors: Di Tang ; XiaoFeng Wang ; Haixu Tang ; Kehuan Zhang

A security threat to deep neural networks (DNNs) is the data contamination attack, in which an adversary poisons the training data of the target model to inject a backdoor so that images carrying a specific trigger will always be given a specific label. We find that prior defenses against this attack assume the dominance of the trigger in the model's representation space, which causes any image with the trigger to be classified to the target label. Such dominance comes from the unique representations of trigger-carrying images, which are assumed to be significantly different from what benign images produce. Our research, however, shows that this assumption can be broken by a targeted contamination attack (TaCT) that obscures the difference between those two kinds of representations and causes the attack images to be less distinguishable from benign ones, thereby evading existing protections. In our research, we observe that TaCT can affect the representation distribution of the target class but can hardly change the distribution across all classes, allowing us to build a new defense that performs a statistical analysis on this global information. More specifically, we leverage an EM algorithm to decompose an image into its identity part (e.g., person) and variation part (e.g., pose). The distribution of the variation, based upon the global information across all classes, is then utilized by a likelihood-ratio test to analyze the representations in each class, identifying those more likely to be characterized by a mixture model resulting from adding attack samples into the legitimate image pool of the current class. Our research illustrates that our approach can effectively detect data contamination attacks, including not only known attacks but also the new TaCT attack discovered in our study.
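
As a rough, self-contained illustration of the statistical core of such a defense (not the paper's EM-based identity/variation decomposition itself), the Python sketch below fits one-component and two-component Gaussian mixtures to synthetic "variation" representations of a single class and compares their likelihoods; a large ratio suggests an injected sub-population. All data and values are fabricated.

```python
# Hedged sketch: likelihood-ratio test for a contaminated class, on synthetic data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 8))   # legitimate variation representations
attack = rng.normal(loc=3.0, scale=0.5, size=(60, 8))    # injected trigger-carrying samples
reps = np.vstack([benign, attack])                       # all representations of one class

ll_single = GaussianMixture(n_components=1, random_state=0).fit(reps).score(reps) * len(reps)
ll_mixture = GaussianMixture(n_components=2, random_state=0).fit(reps).score(reps) * len(reps)

lr_statistic = 2.0 * (ll_mixture - ll_single)   # large => a two-component mixture fits much better
print(f"likelihood-ratio statistic: {lr_statistic:.1f}")
# A class whose statistic is far above that of the other classes is flagged as contaminated.
```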

#3 Automated Discovery of Denial-of-Service Vulnerabilities in Connected Vehicle Protocols

Authors: Shengtuo Hu ; Qi Alfred Chen ; Jiachen Sun ; Yiheng Feng ; Z. Morley Mao ; Henry X. Liu

With the development of the emerging Connected Vehicle (CV) technology, vehicles can wirelessly communicate with traffic infrastructure and other vehicles to exchange safety and mobility information in real time. However, the integrated communication capability inevitably increases the attack surface of vehicles, which can be exploited to cause safety hazards on the road. Thus, it is highly desirable to systematically understand design-level flaws in the current CV network stack as well as in CV applications, and the corresponding security/safety consequences, so that these flaws can be proactively discovered and addressed before large-scale deployment. In this paper, we design CVAnalyzer, a system for discovering design-level flaws for availability violations of the CV network stack, as well as quantifying the corresponding security/safety consequences. To achieve this, CVAnalyzer combines the attack discovery capability of a general model checker and the quantitative threat assessment capability of a probabilistic model checker. Using CVAnalyzer, we successfully uncovered 4 new DoS (Denial-of-Service) vulnerabilities of the latest CV network protocols and 14 new DoS vulnerabilities of two CV platoon management protocols. Our quantification results show that these attacks can have as high as 99% success rates, and in the worst case can at least double the delay in packet processing, violating the latency requirement in CV communication. We implemented and validated all attacks in a real-world testbed, and also analyzed the fundamental causes to propose potential solutions. We have reported our findings in the CV network protocols to the IEEE 1609 Working Group, and the group has acknowledged the discovered vulnerabilities and plans to adopt our solutions.

#4 An Analysis of Speculative Type Confusion Vulnerabilities in the Wild

Authors: Ofek Kirzner ; Adam Morrison

Spectre v1 attacks, which exploit conditional branch misprediction, are often identified with attacks that bypass array bounds checking to leak data from a victim's memory. Generally, however, Spectre v1 attacks can exploit any conditional branch misprediction that makes the victim execute code incorrectly. In this paper, we investigate speculative type confusion, a Spectre v1 attack vector in which branch mispredictions make the victim execute with variables holding values of the wrong type and thereby leak memory content. We observe that speculative type confusion can be inadvertently introduced by a compiler, making it extremely hard for programmers to reason about security and manually apply Spectre mitigations. We thus set out to determine the extent to which speculative type confusion affects the Linux kernel. Our analysis finds exploitable and potentially-exploitable arbitrary memory disclosure vulnerabilities. We also find many latent vulnerabilities, which could become exploitable due to innocuous system changes, such as coding style changes. Our results suggest that Spectre mitigations which rely on statically/manually identifying "bad" code patterns need to be rethought, and more comprehensive mitigations are needed.

#5 PACStack: an Authenticated Call Stack

Authors: Hans Liljestrand ; Thomas Nyman ; Lachlan J. Gunn ; Jan-Erik Ekberg ; N. Asokan

A popular run-time attack technique is to compromise the control-flow integrity of a program by modifying function return addresses on the stack. So far, shadow stacks have proven to be essential for comprehensively preventing return address manipulation. Shadow stacks record return addresses in integrity-protected memory secured with hardware assistance or software access control. Software shadow stacks incur high overheads or trade off security for efficiency. Hardware-assisted shadow stacks are efficient and secure, but require the deployment of special-purpose hardware. We present the authenticated call stack (ACS), an approach that uses chained message authentication codes (MACs). Our prototype, PACStack, uses the ARMv8.3-A general-purpose hardware mechanism for pointer authentication (PA) to implement ACS. Via a rigorous security analysis, we show that PACStack achieves security comparable to hardware-assisted shadow stacks without requiring dedicated hardware. We demonstrate that PACStack's performance overhead is small (≈3%).
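
To make the chaining idea concrete, here is a hedged Python simulation of an authenticated call stack: each pushed return address is bound to the previous authentication token by a MAC, so tampering with any saved address or token is detected on return. This mirrors the ACS construction only at a conceptual level; PACStack computes these tags with ARMv8.3-A PA instructions and keeps the latest token in a reserved register, not with HMAC in software.

```python
# Conceptual simulation of an authenticated call stack (ACS) using chained MACs.
import hmac, hashlib, os

KEY = os.urandom(32)

def mac(ret_addr: int, prev_token: bytes) -> bytes:
    data = ret_addr.to_bytes(8, "little") + prev_token
    return hmac.new(KEY, data, hashlib.sha256).digest()[:8]   # truncated tag, like a PAC

class AuthCallStack:
    def __init__(self):
        self.mem = []              # simulated (attacker-writable) stack memory
        self.token = b"\x00" * 8   # auth_i, held in a trusted register

    def call(self, ret_addr: int):
        self.mem.append((ret_addr, self.token))   # spill ret_i and auth_{i-1}
        self.token = mac(ret_addr, self.token)    # auth_i = MAC(ret_i || auth_{i-1})

    def ret(self) -> int:
        ret_addr, prev_token = self.mem.pop()     # loaded back from untrusted memory
        if not hmac.compare_digest(mac(ret_addr, prev_token), self.token):
            raise RuntimeError("return address or auth token was tampered with")
        self.token = prev_token                   # restore auth_{i-1}
        return ret_addr

acs = AuthCallStack()
acs.call(0x400123)
acs.call(0x400456)
assert acs.ret() == 0x400456 and acs.ret() == 0x400123
```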

#6 Can Systems Explain Permissions Better? Understanding Users' Misperceptions under Smartphone Runtime Permission Model

Authors: Bingyu Shen ; Lili Wei ; Chengcheng Xiang ; Yudong Wu ; Mingyao Shen ; Yuanyuan Zhou ; Xinxin Jin

Current smartphone operating systems enable users to manage permissions according to their personal preferences with a runtime permission model. Nonetheless, the systems provide very limited information when requesting permissions, making it difficult for users to understand permissions' capabilities and potentially induced risks. In this paper, we first investigated to what extent current system-provided information can help users understand the scope of permissions and their potential risks. We took a mixed-methods approach by collecting real permission settings from 4,636 Android users, an interview study of 20 participants, and large-scale Internet surveys of 1,559 users. Our study identified several common misunderstandings of the runtime permission model among users. We found that only a very small percentage (6.1%) of users can accurately infer the scope of permission groups from the system-provided information. This indicates that the information provided by current systems is far from sufficient. We then explored what extra information systems can provide to help users make more informed permission decisions. By surveying users' common concerns about apps' permission requests, we identified five types of information (i.e., decision factors) that are helpful for users' decisions. We further studied the impact and helpfulness of these factors on users' permission decisions when framed as both positive and negative messages. Our study shows that the background-access factor helps the most, while the grant-rate factor helps the least. Based on these findings, we provide suggestions for system designers to enhance future systems with more permission information.

#7 EVMPatch: Timely and Automated Patching of Ethereum Smart Contracts

Authors: Michael Rodler ; Wenting Li ; Ghassan O. Karame ; Lucas Davi

Recent attacks exploiting errors in smart contract code have had devastating consequences, calling into question the benefits of this technology. It is currently highly challenging to fix errors and deploy a patched contract in time. Instant patching is especially important since smart contracts are always online due to the distributed nature of blockchain systems. They also manage considerable amounts of assets, which are at risk and often beyond recovery after an attack. Existing solutions to upgrade smart contracts depend on manual and error-prone processes. This paper presents a framework, called EVMPatch, to instantly and automatically patch faulty smart contracts. EVMPatch features a bytecode rewriting engine for the popular Ethereum blockchain and transparently and automatically rewrites common off-the-shelf contracts into upgradable contracts. The proof-of-concept implementation of EVMPatch automatically hardens smart contracts that are vulnerable to integer over/underflows and access control errors, but can be easily extended to cover more bug classes. Our evaluation on 14,000 real-world contracts demonstrates that our approach successfully blocks attack transactions launched on these contracts, while keeping the intended functionality of the contract intact. We perform a study with experienced software developers, showing that EVMPatch is practical and reduces the time for converting a given Solidity smart contract to an upgradable contract by 97.6 %, while ensuring functional equivalence to the original contract.
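
For intuition, the runtime guard that a patched contract ends up executing for integer over/underflows can be sketched in Python as checked uint256 arithmetic; this is only an analogue of the checks EVMPatch injects, which are applied at the EVM bytecode level rather than in source.

```python
# Hedged sketch: checked uint256 arithmetic analogous to the guards a patched
# contract performs before ADD/SUB.
UINT256_MAX = 2**256 - 1

def checked_add(a: int, b: int) -> int:
    result = a + b
    if result > UINT256_MAX:            # would wrap around on the EVM
        raise OverflowError("uint256 addition overflow; transaction reverts")
    return result

def checked_sub(a: int, b: int) -> int:
    if b > a:                           # would underflow to a huge value on the EVM
        raise OverflowError("uint256 subtraction underflow; transaction reverts")
    return a - b

balance = 10
balance = checked_sub(balance, 3)       # legitimate transfer
try:
    checked_sub(balance, 100)           # attack-style underflow attempt
except OverflowError as e:
    print("blocked:", e)
```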

#8 Privacy and Integrity Preserving Computations with CRISP

Authors: Sylvain Chatel ; Apostolos Pyrgelis ; Juan Ramón Troncoso-Pastoriza ; Jean-Pierre Hubaux

In the digital era, users share their personal data with service providers to obtain some utility, e.g., access to high-quality services. Yet, the induced information flows raise privacy and integrity concerns. Consequently, cautious users may want to protect their privacy by minimizing the amount of information they disclose to curious service providers, while service providers are interested in verifying the integrity of the users' data to improve their services and obtain useful knowledge for their business. In this work, we present a generic solution to the trade-off between privacy, integrity, and utility, by achieving authenticity verification of data that has been encrypted for offloading to service providers. Based on lattice-based homomorphic encryption and commitments, as well as zero-knowledge proofs, our construction enables a service provider to process and reuse third-party signed data in a privacy-friendly manner with integrity guarantees. We evaluate our solution on different use cases, such as smart metering, disease susceptibility, and location-based activity tracking, showing its versatility. Our solution achieves broad generality and quantum resistance, and relaxes some assumptions of state-of-the-art solutions without affecting performance.

#9 Virtual Secure Platform: A Five-Stage Pipeline Processor over TFHE

Authors: Kotaro Matsuoka ; Ryotaro Banno ; Naoki Matsumoto ; Takashi Sato ; Song Bian

We present Virtual Secure Platform (VSP), the first comprehensive platform that implements a multi-opcode general-purpose sequential processor over Fully Homomorphic Encryption (FHE) for Secure Multi-Party Computation (SMPC). VSP protects both the data and the functions on which the data are evaluated from the adversary in a secure computation offloading situation like cloud computing. We propose a complete processor architecture with a five-stage pipeline, which improves the performance of VSP by providing more parallelism in circuit evaluation. In addition, we design a custom Instruction Set Architecture (ISA) to reduce the gate count of our processor, along with a complete toolchain that ensures arbitrary C programs can be compiled to our custom ISA. To speed up instruction evaluation over VSP, we also propose CMUX-Memory-based ROM and RAM constructions over FHE. Our experiments show that both the pipelined architecture and the CMUX Memory are effective in improving the performance of the proposed processor. We provide a fully open-source implementation of VSP which attains a per-instruction latency of less than 1 second. We show that, compared to the best existing processor over FHE, our implementation runs nearly 1,600× faster.
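
The CMUX Memory can be pictured as a multiplexer tree driven by the address bits. The hedged Python sketch below performs the same selection on plaintext bits; in VSP every such mux operates on TFHE ciphertexts, so the evaluator never learns which ROM word was selected.

```python
# Plaintext sketch of a CMUX-tree ROM lookup; VSP evaluates the same structure
# homomorphically, with selector bits and data encrypted under TFHE.
from typing import List

def cmux(sel: int, a: int, b: int) -> int:
    """Controlled mux: returns a if sel == 1, else b (sel is a single bit)."""
    return a if sel else b

def rom_lookup(rom: List[int], addr_bits: List[int]) -> int:
    """Select rom[addr] via a tree of CMUX gates; addr_bits is LSB-first."""
    level = list(rom)
    for bit in addr_bits:                   # each address bit halves the candidate set
        level = [cmux(bit, level[i + 1], level[i]) for i in range(0, len(level), 2)]
    return level[0]

rom = [0x13, 0x37, 0x42, 0x99, 0x01, 0x02, 0x03, 0x04]   # 8-word ROM
addr = 5
addr_bits = [(addr >> i) & 1 for i in range(3)]           # LSB-first decomposition
assert rom_lookup(rom, addr_bits) == rom[addr]
print(hex(rom_lookup(rom, addr_bits)))                    # 0x2
```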

#10 Senate: A Maliciously-Secure MPC Platform for Collaborative Analytics

Authors: Rishabh Poddar ; Sukrit Kalra ; Avishay Yanai ; Ryan Deng ; Raluca Ada Popa ; Joseph M. Hellerstein

Many organizations stand to benefit from pooling their data together in order to draw mutually beneficial insights—e.g., for fraud detection across banks, better medical studies across hospitals, etc. However, such organizations are often prevented from sharing their data with each other by privacy concerns, regulatory hurdles, or business competition. We present Senate, a system that allows multiple parties to collaboratively run analytical SQL queries without revealing their individual data to each other. Unlike prior works on secure multi-party computation (MPC) that assume that all parties are semi-honest, Senate protects the data even in the presence of malicious adversaries. At the heart of Senate lies a new MPC decomposition protocol that decomposes the cryptographic MPC computation into smaller units, some of which can be executed by subsets of parties and in parallel, while preserving its security guarantees. Senate then provides a new query planning algorithm that decomposes and plans the cryptographic computation effectively, achieving performance up to 145× faster than the state of the art.

#11 Accurately Measuring Global Risk of Amplification Attacks using AmpMap

Authors: Soo-Jin Moon ; Yucheng Yin ; Rahul Anand Sharma ; Yifei Yuan ; Jonathan M. Spring ; Vyas Sekar

Many recent DDoS attacks rely on amplification, where an attacker induces public servers to generate a large volume of network traffic to a victim. In this paper, we argue for a low-footprint Internet health monitoring service that can systematically and continuously quantify this risk to inform mitigation efforts. Unfortunately, the problem is challenging because amplification is a complex function of query (header) values and server instances. As such, existing techniques that enumerate the total number of servers or focus on a specific amplification-inducing query are fundamentally imprecise. In designing AmpMap, we leverage key structural insights to develop an efficient approach that searches across the space of protocol headers and servers. Using AmpMap, we scanned thousands of servers for 6 UDP-based protocols. We find that relying on prior recommendations to block or rate-limit specific queries still leaves open substantial residual risk as they miss many other amplification-inducing query patterns. We also observe significant variability across servers and protocols, and thus prior approaches that rely on server census can substantially misestimate amplification risk.
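
At its core, the measurement reduces to estimating an amplification factor (response bytes divided by request bytes) across many combinations of query header fields per server. The Python sketch below illustrates that search loop against a toy, simulated response-size function; the field names, values, and `simulated_response_size` are hypothetical stand-ins, since probing real servers requires the careful rate limiting and ethical safeguards described in the paper.

```python
# Hedged sketch of searching query-field combinations for high amplification.
# `simulated_response_size` is a made-up stand-in for a real protocol server.
import itertools

def simulated_response_size(query: dict) -> int:
    size = 60                                    # baseline response size in bytes
    if query.get("record_type") == "ANY":
        size += 2800                             # some field values inflate responses
    if query.get("recursion") == 1:
        size += 400
    return size

def amplification_factor(query: dict, request_size: int = 64) -> float:
    return simulated_response_size(query) / request_size

fields = {
    "record_type": ["A", "TXT", "ANY"],
    "recursion": [0, 1],
    "edns_size": [512, 4096],
}

best = max(
    (dict(zip(fields, combo)) for combo in itertools.product(*fields.values())),
    key=amplification_factor,
)
print("highest-amplification query pattern:", best,
      f"factor ~ {amplification_factor(best):.1f}x")
```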

#12 Protecting Cryptography Against Compelled Self-Incrimination

Authors: Sarah Scheffler ; Mayank Varia

The information security community has devoted substantial effort to the design, development, and universal deployment of strong encryption schemes that withstand search and seizure by computationally-powerful nation-state adversaries. In response, governments are increasingly turning to a different tactic: issuing subpoenas that compel people to decrypt devices themselves, under the penalty of contempt of court if they do not comply. Compelled decryption subpoenas sidestep questions around government search powers that have dominated the Crypto Wars and instead touch upon a different (and still unsettled) area of the law: how encryption relates to a person's right to silence and against self-incrimination. In this work, we provide a rigorous, composable definition of a critical piece of the law that determines whether cryptosystems are vulnerable to government compelled disclosure in the United States. We justify our definition by showing that it is consistent with prior court cases. We prove that decryption is often not compellable by the government under our definition. Conversely, we show that many techniques that bolster security overall can leave one more vulnerable to compelled disclosure. As a result, we initiate the study of protecting cryptographic protocols against the threat of future compelled disclosure. We find that secure multi-party computation is particularly vulnerable to this threat, and we design and implement new schemes that are provably resilient in the face of government compelled disclosure. We believe this work should influence the design of future cryptographic primitives and contribute toward the legal debates over the constitutionality of compelled decryption.

#13 Effective Notification Campaigns on the Web: A Matter of Trust, Framing, and Support

Authors: Max Maass ; Alina Stöver ; Henning Pridöhl ; Sebastian Bretthauer ; Dominik Herrmann ; Matthias Hollick ; Indra Spiecker

Misconfigurations and outdated software are a major cause of compromised websites and data leaks. Past research has proposed and evaluated sending automated security notifications to the operators of misconfigured websites, but encountered issues with reachability, mistrust, and a perceived lack of importance. In this paper, we seek to understand the determinants of effective notifications. We identify a data protection misconfiguration that affects 12.7 % of the 1.3 million websites we scanned and opens them up to legal liability. Using a subset of 4754 websites, we conduct a multivariate randomized controlled notification experiment, evaluating contact medium, sender, and framing of the message. We also include a link to a public web-based self-service tool that is run by us in disguise and conduct an anonymous survey of the notified website owners (N=477) to understand their perspective. We find that framing a misconfiguration as a problem of legal compliance can increase remediation rates, especially when the notification is sent as a letter from a legal research group, achieving remediation rates of 76.3 % compared to 33.9 % for emails sent by computer science researchers warning about a privacy issue. Across all groups, 56.6 % of notified owners remediated the issue, compared to 9.2 % in the control group. In conclusion, we present factors that lead website owners to trust a notification, show what framing of the notification brings them into action, and how they can be supported in remediating the issue.

#14 Hermes Attack: Steal DNN Models with Lossless Inference Accuracy

Authors: Yuankun Zhu ; Yueqiang Cheng ; Husheng Zhou ; Yantao Lu

Deep Neural Network (DNN) models have become one of the most valuable enterprise assets due to their critical roles in all aspects of applications. With the trend toward privatized deployment of DNN models, the leakage of DNN models is becoming increasingly severe and widespread. All existing model-extraction attacks can only leak parts of targeted DNN models with low accuracy or high overhead. In this paper, we first identify a new attack surface -- unencrypted PCIe traffic -- that can be exploited to leak DNN models. Based on this new attack surface, we propose a novel model-extraction attack, namely the Hermes Attack, which is the first attack to fully steal an entire victim DNN model. The stolen DNN models have the same hyper-parameters, parameters, and semantically identical architecture as the original ones. This is challenging due to the closed-source CUDA runtime, driver, and GPU internals, as well as undocumented data structures and the loss of some critical semantics in the PCIe traffic. Additionally, there are millions of PCIe packets, with substantial noise and out-of-order delivery. Our Hermes Attack addresses these issues through extensive reverse engineering and reliable semantic reconstruction, as well as careful packet selection and order correction. We implement a prototype of the Hermes Attack and evaluate two sequential DNN models (i.e., MNIST and VGG) and one non-sequential DNN model (i.e., ResNet) on three NVIDIA GPU platforms, i.e., NVIDIA GeForce GT 730, NVIDIA GeForce GTX 1080 Ti, and NVIDIA GeForce RTX 2080 Ti. The evaluation results indicate that our scheme can efficiently and completely reconstruct all of them by making inferences on any single image. Evaluated with the CIFAR-10 test dataset of 10,000 images, the experimental results show that the stolen models have the same inference accuracy as the original ones (i.e., lossless inference accuracy).

#15 Deep Entity Classification: Abusive Account Detection for Online Social Networks

Authors: Teng Xu ; Gerard Goossen ; Huseyin Kerem Cevahir ; Sara Khodeir ; Yingyezhe Jin ; Frank Li ; Shawn Shan ; Sagar Patel ; David Freeman ; Paul Pearce

Online social networks (OSNs) attract attackers that use abusive accounts to conduct malicious activities for economic, political, and personal gain. In response, OSNs often deploy abusive account classifiers using machine learning (ML) approaches. However, a practical, effective ML-based defense requires carefully engineering features that are robust to adversarial manipulation, obtaining enough ground truth labeled data for model training, and designing a system that can scale to all active accounts on an OSN (potentially in the billions). To address these challenges we present Deep Entity Classification (DEC), an ML framework that detects abusive accounts in OSNs that have evaded other, traditional abuse detection systems. We leverage the insight that while accounts in isolation may be difficult to classify, their embeddings in the social graph—the network structure, properties, and behaviors of themselves and those around them—are fundamentally difficult for attackers to replicate or manipulate at scale. Our system (i) extracts "deep features" of accounts by aggregating properties and behavioral features from their direct and indirect neighbors in the social graph; (ii) employs a "multi-stage multi-task learning" (MS-MTL) paradigm that leverages imprecise ground truth data by consuming, in separate stages, both a small number of high-precision human-labeled samples and a large amount of lower-precision automated labels, yielding a single model that provides high-precision classification for multiple types of abusive accounts; and (iii) scales to billions of users through various sampling and reclassification strategies that reduce system load. DEC has been deployed at Facebook, where it classifies all users continuously, resulting in an estimated reduction of abusive accounts on the network by 27% beyond those already detected by other, traditional methods.
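
A hedged sketch of the "deep feature" extraction in step (i) above: for each account, aggregate the raw numeric features of its direct and second-hop neighbors in a toy graph. The feature names and the choice of mean/max aggregations are illustrative only, not Facebook's production feature set.

```python
# Hedged sketch: aggregate neighbor features ("deep features") for each account.
from statistics import mean

graph = {                       # account -> set of friend accounts (toy graph)
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}
raw = {                         # per-account raw/behavioral features (fabricated)
    "a": {"age_days": 10,  "posts_per_day": 40.0},
    "b": {"age_days": 900, "posts_per_day": 1.2},
    "c": {"age_days": 700, "posts_per_day": 0.8},
    "d": {"age_days": 5,   "posts_per_day": 55.0},
}

def deep_features(node: str) -> dict:
    hop1 = graph[node]
    hop2 = {n for h in hop1 for n in graph[h]} - hop1 - {node}
    feats = {}
    for hop_name, hop in (("hop1", hop1), ("hop2", hop2)):
        for key in ("age_days", "posts_per_day"):
            values = [raw[n][key] for n in hop] or [0.0]
            feats[f"{hop_name}_{key}_mean"] = mean(values)
            feats[f"{hop_name}_{key}_max"] = max(values)
    return feats

print(deep_features("a"))   # account "a" described by its 1- and 2-hop neighborhood
```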

#16 Evil Under the Sun: Understanding and Discovering Attacks on Ethereum Decentralized Applications

Authors: Liya Su ; Xinyue Shen ; Xiangyu Du ; Xiaojing Liao ; XiaoFeng Wang ; Luyi Xing ; Baoxu Liu

The popularity of Ethereum decentralized applications (Dapps) also brings new security risks: it has been reported that these Dapps have been under various kinds of attacks from cybercriminals seeking profit. To the best of our knowledge, little has been done so far to understand this new cybercrime in terms of its scope, criminal footprints, and operational intents, not to mention any effort to investigate these attack incidents automatically on a large scale. In this paper, we performed the first measurement study on real-world Dapp attack instances to recover critical threat intelligence (e.g., kill chain and attack patterns). Utilizing such threat intelligence, we proposed the first technique, DEFIER, to automatically investigate attack incidents on a large scale. Running DEFIER on 2.3 million transactions from 104 Ethereum on-chain Dapps, we were able to identify 476,342 exploit transactions on 85 target Dapps, which were related to 75 zero-day victim Dapps and 17K previously unknown attacker EOAs. To the best of our knowledge, this is the largest dataset of Ethereum on-chain Dapp attack incidents ever reported.

#17 PTAuth: Temporal Memory Safety via Robust Points-to Authentication

Authors: Reza Mirzazade Farkhani ; Mansour Ahmadi ; Long Lu

Temporal memory corruptions are commonly exploited software vulnerabilities that can lead to powerful attacks. Despite significant progress made by decades of research on mitigation techniques, existing countermeasures fall short due to either limited coverage or overly high overhead. Furthermore, they require external mechanisms (e.g., spatial memory safety) to protect their metadata. Otherwise, their protection can be bypassed or disabled. To address these limitations, we present robust points-to authentication, a novel runtime scheme for detecting all kinds of temporal memory corruptions. We built a prototype system, called PTAuth, that realizes this scheme on ARM architectures. PTAuth contains a customized compiler for code analysis and instrumentation and a runtime library for performing the points-to authentication as a protected program runs. PTAuth leverages the Pointer Authentication Code (PAC) feature, provided by the ARMv8.3 and later CPUs, which serves as a simple hardware-based encryption primitive. PTAuth uses minimal in-memory metadata and protects its metadata without requiring spatial memory safety. We report our evaluation of PTAuth in terms of security, robustness and performance using 150 vulnerable programs from Juliet test suite and the SPEC CPU2006 benchmarks. PTAuth detects all three categories of heap-based temporal memory corruptions, generates zero false alerts, and slows down program execution by 26% (this number was measured based on software-emulated PAC; it is expected to decrease to 20% when using hardware-based PAC). We also show that PTAuth incurs 2% memory overhead thanks to the efficient use of metadata.
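
The points-to authentication idea can be modeled in a few lines: each allocation receives a fresh ID, a pointer carries a MAC over (address, ID), and every dereference re-verifies that MAC against the ID currently stored with the object, so use-after-free and double-free are caught once the ID changes. The Python model below is purely conceptual; PTAuth computes the code with ARM PAC instructions and keeps the ID in the heap chunk's metadata.

```python
# Conceptual model of points-to authentication for temporal memory safety.
# Real PTAuth uses ARM PAC hardware; HMAC and Python dicts stand in here.
import hmac, hashlib, os, itertools

KEY = os.urandom(32)
_ids = itertools.count(1)
heap = {}                                   # address -> ID of the live allocation

def _code(addr: int, alloc_id: int) -> bytes:
    msg = addr.to_bytes(8, "little") + alloc_id.to_bytes(8, "little")
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

def pt_malloc(addr: int):
    alloc_id = next(_ids)
    heap[addr] = alloc_id
    return (addr, _code(addr, alloc_id))    # "authenticated" pointer

def pt_free(addr: int):
    heap.pop(addr, None)                    # the object's ID is invalidated

def pt_deref(ptr):
    addr, code = ptr
    alloc_id = heap.get(addr)
    if alloc_id is None or not hmac.compare_digest(code, _code(addr, alloc_id)):
        raise RuntimeError("temporal memory violation detected")
    return addr                             # safe to access the object

p = pt_malloc(0x1000)
pt_deref(p)                                 # fine while the object is live
pt_free(0x1000)
try:
    pt_deref(p)                             # use-after-free -> detected
except RuntimeError as e:
    print(e)
```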

#18 UNIFUZZ: A Holistic and Pragmatic Metrics-Driven Platform for Evaluating Fuzzers

Authors: Yuwei Li ; Shouling Ji ; Yuan Chen ; Sizhuang Liang ; Wei-Han Lee ; Yueyao Chen ; Chenyang Lyu ; Chunming Wu ; Raheem Beyah ; Peng Cheng ; Kangjie Lu ; Ting Wang

A flurry of fuzzing tools (fuzzers) have been proposed in the literature, aiming at detecting software vulnerabilities effectively and efficiently. To date, it is however still challenging to compare fuzzers due to the inconsistency of the benchmarks, performance metrics, and/or environments for evaluation, which buries the useful insights and thus impedes the discovery of promising fuzzing primitives. In this paper, we design and develop UNIFUZZ, an open-source and metrics-driven platform for assessing fuzzers in a comprehensive and quantitative manner. Specifically, UNIFUZZ to date has incorporated 35 usable fuzzers, a benchmark of 20 real-world programs, and six categories of performance metrics. We first systematically study the usability of existing fuzzers, find and fix a number of flaws, and integrate them into UNIFUZZ. Based on the study, we propose a collection of pragmatic performance metrics to evaluate fuzzers from six complementary perspectives. Using UNIFUZZ, we conduct in-depth evaluations of several prominent fuzzers including AFL [1], AFLFast [2], Angora [3], Honggfuzz [4], MOPT [5], QSYM [6], T-Fuzz [7] and VUzzer64 [8]. We find that none of them outperforms the others across all the target programs, and that using a single metric to assess the performance of a fuzzer may lead to unilateral conclusions, which demonstrates the significance of comprehensive metrics. Moreover, we identify and investigate previously overlooked factors that may significantly affect a fuzzer's performance, including instrumentation methods and crash analysis tools. Our empirical results show that they are critical to the evaluation of a fuzzer. We hope that our findings can shed light on reliable fuzzing evaluation, so that we can discover promising fuzzing primitives to effectively facilitate fuzzer designs in the future.

#19 VoltPillager: Hardware-based fault injection attacks against Intel SGX Enclaves using the SVID voltage scaling interface

Authors: Zitai Chen ; Georgios Vasilakis ; Kit Murdock ; Edward Dean ; David Oswald ; Flavio D. Garcia

Hardware-based fault injection attacks such as voltage and clock glitching have been thoroughly studied on embedded devices. Typical targets for such attacks include smartcards and low-power microcontrollers used in IoT devices. This paper presents the first hardware-based voltage glitching attack against a fully-fledged Intel CPU. The transition to complex CPUs is not trivial due to several factors, including: a complex operating system, large power consumption, multi-threading, and high clock speeds. To this end, we have built VoltPillager, a low-cost tool for injecting messages on the Serial Voltage Identification bus between the CPU and the voltage regulator on the motherboard. This allows us to precisely control the CPU core voltage. We leverage this powerful tool to mount fault-injection attacks that breach confidentiality and integrity of Intel SGX enclaves. We present proof-of-concept key-recovery attacks against cryptographic algorithms running inside SGX. We demonstrate that VoltPillager attacks are more powerful than recent software-only undervolting attacks against SGX (CVE-2019-11157) because they work on fully patched systems with all countermeasures against software undervolting enabled. Additionally, we are able to fault security-critical operations by delaying memory writes. Mitigation of VoltPillager is not straightforward and may require a rethink of the SGX adversarial model where a cloud provider is untrusted and has physical access to the hardware.

#20 ReDMArk: Bypassing RDMA Security Mechanisms

Authors: Benjamin Rothenberger ; Konstantin Taranov ; Adrian Perrig ; Torsten Hoefler

State-of-the-art remote direct memory access (RDMA) technologies such as InfiniBand (IB) or RDMA over Converged Ethernet (RoCE) are becoming widely used in data center applications and are gaining traction in cloud environments. Hence, the security of RDMA architectures is crucial, yet potential security implications of using RDMA communication remain largely unstudied. ReDMArk shows that current security mechanisms of IB-based architectures are insufficient against both in-network attackers and attackers located on end hosts, thus affecting not only secrecy, but also integrity of RDMA applications. We demonstrate multiple vulnerabilities in the design of IB-based architectures and implementations of RDMA-capable network interface cards (RNICs) and exploit those vulnerabilities to enable powerful attacks such as packet injection using impersonation, unauthorized memory access, and Denial-of-Service (DoS) attacks. To thwart the discovered attacks we propose multiple mitigation mechanisms that are deployable in current RDMA networks.

#21 Stealing Links from Graph Neural Networks

Authors: Xinlei He ; Jinyuan Jia ; Michael Backes ; Neil Zhenqiang Gong ; Yang Zhang

Graph data, such as chemical networks and social networks, may be deemed confidential/private because the data owner often spends lots of resources collecting the data or the data contains sensitive information, e.g., social relationships. Recently, neural networks were extended to graph data, which are known as graph neural networks (GNNs). Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection. In this work, we propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph. Specifically, given black-box access to a GNN model, our attacks can infer whether there exists a link between any pair of nodes in the graph used to train the model. We call our attacks link stealing attacks. We propose a threat model to systematically characterize an adversary's background knowledge along three dimensions, which in total leads to a comprehensive taxonomy of 8 different link stealing attacks. We propose multiple novel methods to realize these 8 attacks. Extensive experiments on 8 real-world datasets show that our attacks are effective at stealing links, e.g., AUC (area under the ROC curve) is above 0.95 in multiple cases. Our results indicate that the outputs of a GNN model reveal rich information about the structure of the graph used to train the model.
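
The simplest of these attacks needs nothing beyond the posteriors the target GNN returns for two nodes: if the posterior vectors are very similar, the nodes are likely connected. A hedged numpy sketch, with fabricated posteriors and an arbitrary threshold:

```python
# Hedged sketch of the posterior-similarity link-stealing heuristic.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Posteriors returned by a black-box GNN for three nodes (fictitious values).
post_u = np.array([0.88, 0.07, 0.05])
post_v = np.array([0.85, 0.09, 0.06])   # linked nodes tend to get similar predictions
post_w = np.array([0.10, 0.15, 0.75])

THRESHOLD = 0.95                        # arbitrary threshold, for illustration only
for name, other in (("(u, v)", post_v), ("(u, w)", post_w)):
    sim = cosine(post_u, other)
    verdict = "predict link" if sim > THRESHOLD else "predict no link"
    print(f"{name}: similarity={sim:.3f} -> {verdict}")
```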

#22 Hiding the Access Pattern is Not Enough: Exploiting Search Pattern Leakage in Searchable Encryption

Authors: Simon Oya ; Florian Kerschbaum

Recent Searchable Symmetric Encryption (SSE) schemes enable secure searching over an encrypted database stored in a server while limiting the information leaked to the server. These schemes focus on hiding the access pattern, which refers to the set of documents that match the client's queries. This provides protection against current attacks that largely depend on this leakage to succeed. However, most SSE constructions also leak whether or not two queries aim for the same keyword, also called the search pattern. In this work, we show that search pattern leakage can severely undermine current SSE defenses. We propose an attack that leverages both access and search pattern leakage, as well as some background and query distribution information, to recover the keywords of the queries performed by the client. Our attack follows a maximum likelihood estimation approach, and is easy to adapt against SSE defenses that obfuscate the access pattern. We empirically show that our attack is efficient, it outperforms other proposed attacks, and it completely thwarts two out of the three defenses we evaluate it against, even when these defenses are set to high privacy regimes. These findings highlight that hiding the search pattern, a feature that most constructions are lacking, is key towards providing practical privacy guarantees in SSE.
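
A stripped-down version of the frequency side of such an attack: query tokens whose observed frequencies resemble the known popularity of a keyword are mapped to that keyword. The sketch below performs only this greedy frequency matching on fabricated numbers; the paper's attack is a full maximum-likelihood formulation that also incorporates access-pattern information and adapts to specific defenses.

```python
# Hedged, simplified frequency-matching attack using search-pattern leakage.
# All frequencies are fabricated; the real attack is an MLE over richer signals.
observed = {          # query token -> fraction of observed queries
    "t1": 0.41,
    "t2": 0.30,
    "t3": 0.18,
    "t4": 0.11,
}
background = {        # auxiliary knowledge: keyword -> expected query popularity
    "password": 0.40,
    "invoice": 0.31,
    "salary": 0.17,
    "merger": 0.12,
}

# Greedy assignment: repeatedly pair the token and keyword whose frequencies
# are closest, removing both from further consideration.
recovered = {}
tokens, keywords = dict(observed), dict(background)
while tokens:
    token, keyword = min(
        ((t, k) for t in tokens for k in keywords),
        key=lambda pair: abs(tokens[pair[0]] - keywords[pair[1]]),
    )
    recovered[token] = keyword
    del tokens[token], keywords[keyword]

print(recovered)      # {'t1': 'password', 't2': 'invoice', 't3': 'salary', 't4': 'merger'}
```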

#23 Adapting Security Warnings to Counter Online Disinformation

Authors: Ben Kaiser ; Jerry Wei ; Eli Lucherini ; Kevin Lee ; J. Nathan Matias ; Jonathan Mayer

Disinformation is proliferating on the internet, and platforms are responding by attaching warnings to content. There is little evidence, however, that these warnings help users identify or avoid disinformation. In this work, we adapt methods and results from the information security warning literature in order to design and evaluate effective disinformation warnings. In an initial laboratory study, we used a simulated search task to examine contextual and interstitial disinformation warning designs. We found that users routinely ignore contextual warnings, but users notice interstitial warnings---and respond by seeking information from alternative sources. We then conducted a follow-on crowdworker study with eight interstitial warning designs. We confirmed a significant impact on user information-seeking behavior, and we found that a warning's design could effectively inform users or convey a risk of harm. We also found, however, that neither user comprehension nor fear of harm moderated behavioral effects. Our work provides evidence that disinformation warnings can---when designed well---help users identify and avoid disinformation. We show a path forward for designing effective warnings, and we contribute repeatable methods for evaluating behavioral effects. We also surface a possible dilemma: disinformation warnings might be able to inform users and guide behavior, but the behavioral effects might result from user experience friction, not informed decision making.

#24 ABY2.0: Improved Mixed-Protocol Secure Two-Party Computation

Authors: Arpita Patra ; Thomas Schneider ; Ajith Suresh ; Hossein Yalame

Secure Multi-party Computation (MPC) allows a set of mutually distrusting parties to jointly evaluate a function on their private inputs while maintaining input privacy. In this work, we improve semi-honest secure two-party computation (2PC) over rings, with a focus on the efficiency of the online phase. We propose an efficient mixed-protocol framework, outperforming the state-of-the-art 2PC framework of ABY. Moreover, we extend our techniques to multi-input multiplication gates without inflating the online communication, i.e., it remains independent of the fan-in. Along the way, we construct efficient protocols for several primitives such as scalar product, matrix multiplication, comparison, maxpool, and equality testing. The online communication of our scalar product is two ring elements irrespective of the vector dimension, which is a feature achieved for the first time in the 2PC literature. The practicality of our new set of protocols is showcased with four applications: i) AES S-box, ii) Circuit-based Private Set Intersection, iii) Biometric Matching, and iv) Privacy-preserving Machine Learning (PPML). Most notably, for PPML, we implement and benchmark training and inference of Logistic Regression and Neural Networks over LAN and WAN networks. For training, we improve online runtime (both for LAN and WAN) over SecureML (Mohassel et al., IEEE S&P '17) in the range 1.5x–6.1x, while for inference, the improvements are in the range of 2.5x–754.3x.
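
For context, the baseline that ABY2.0 improves upon can be sketched with a classical additively secret-shared dot product using Beaver multiplication triples over a ring. Note that this generic construction opens two ring elements per vector coordinate online, which is exactly the cost ABY2.0's scalar-product protocol avoids (two ring elements regardless of dimension); the Python below is that baseline sketch with toy values, not the ABY2.0 protocol.

```python
# Hedged sketch: additively secret-shared dot product over Z_{2^64} with Beaver
# triples (the classical baseline, NOT the ABY2.0 protocol).
import random

MOD = 2**64

def share(x: int):
    r = random.randrange(MOD)
    return r, (x - r) % MOD                 # two additive shares of x

def open_value(s0: int, s1: int) -> int:
    return (s0 + s1) % MOD

def beaver_triple():
    a, b = random.randrange(MOD), random.randrange(MOD)
    c = (a * b) % MOD
    return share(a), share(b), share(c)     # dealt in a setup/offline phase

def shared_mul(x_sh, y_sh):
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    # Parties open x - a and y - b; this is the online communication.
    d = open_value((x_sh[0] - a0) % MOD, (x_sh[1] - a1) % MOD)
    e = open_value((y_sh[0] - b0) % MOD, (y_sh[1] - b1) % MOD)
    z0 = (c0 + d * b0 + e * a0 + d * e) % MOD   # party 0 adds the public d*e term
    z1 = (c1 + d * b1 + e * a1) % MOD
    return z0, z1

def shared_dot(xs, ys):
    acc = (0, 0)
    for x, y in zip(xs, ys):
        z0, z1 = shared_mul(share(x), share(y))
        acc = ((acc[0] + z0) % MOD, (acc[1] + z1) % MOD)
    return acc

xs, ys = [3, 1, 4], [2, 7, 1]
assert open_value(*shared_dot(xs, ys)) == sum(a * b for a, b in zip(xs, ys))
print("dot product reconstructed correctly")
```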

#25 "It's the Company, the Government, You and I": User Perceptions of Responsibility for Smart Home Privacy and Security [PDF] [Copy] [Kimi1]

Authors: Julie Haney ; Yasemin Acar ; Susanne Furman

Smart home technology may expose adopters to increased risk to network security, information privacy, and physical safety. However, users may lack understanding of the privacy and security implications. Additionally, manufacturers often fail to provide transparency and configuration options, and the few government-provided guidelines that exist have yet to be widely adopted. This results in little meaningful mitigation action to protect users’ security and privacy. But how can this situation be improved, and by whom? It is currently unclear where perceived responsibility for smart home privacy and security lies. To address this gap, we conducted an in-depth interview study of 40 smart home adopters to explore where they assign responsibility and how their perceptions of responsibility relate to their concerns and mitigations. Results reveal that participants’ perceptions of responsibility reflect an interdependent relationship between consumers, manufacturers, and third parties such as the government. However, perceived breakdowns and gaps in the relationship result in users being concerned about their security and privacy. Based on our results, we suggest ways in which these actors can address gaps and better support each other.