USENIX-Sec.2021 - Fall

Total: 115

#1 CACTI: Captcha Avoidance via Client-side TEE Integration

Authors: Yoshimichi Nakatsuka ; Ercan Ozturk ; Andrew Paverd ; Gene Tsudik

Preventing abuse of web services by bots is an increasingly important problem, as abusive activities grow in both volume and variety. CAPTCHAs are the most common way for thwarting bot activities. However, they are often ineffective against bots and frustrating for humans. In addition, some recent CAPTCHA techniques diminish user privacy. Meanwhile, client-side Trusted Execution Environments (TEEs) are becoming increasingly widespread (notably, ARM TrustZone and Intel SGX), allowing establishment of trust in a small part (trust anchor or TCB) of client-side hardware. This prompts the question: can a TEE help reduce (or remove entirely) user burden of solving CAPTCHAs? In this paper, we design CACTI: CAPTCHA Avoidance via Client-side TEE Integration. Using client-side TEEs, CACTI allows legitimate clients to generate unforgeable rate-proofs demonstrating how frequently they have performed specific actions. These rate-proofs can be sent to web servers in lieu of solving CAPTCHAs. CACTI provides strong client privacy guarantees, since the information is only sent to the visited website and authenticated using a group signature scheme. Our evaluations show that overall latency of generating and verifying a CACTI rate-proof is less than 0.25 sec, while CACTI's bandwidth overhead is over 98% lower than that of current CAPTCHA systems.
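
To make the rate-proof idea concrete, here is a minimal sketch assuming a hypothetical TEE-side action counter and an HMAC as a stand-in for the paper's group-signature scheme; the function names, key handling, and thresholds are illustrative only and not CACTI's actual protocol.

```python
# Hypothetical sketch of the rate-proof idea behind CACTI (names, key handling,
# and thresholds are illustrative, not the paper's protocol).
import hmac, hashlib, json, time

TEE_KEY = b"device-tee-demo-key"  # stand-in for the TEE's group-signature key material

def tee_generate_rate_proof(server_id: str, action_timestamps: list, window_s: int = 3600) -> dict:
    """Inside the TEE: count recent actions for this server and sign the claim."""
    now = time.time()
    rate = sum(1 for t in action_timestamps if now - t <= window_s)
    claim = {"server": server_id, "window_s": window_s, "rate": rate, "ts": now}
    blob = json.dumps(claim, sort_keys=True).encode()
    # The real system uses a group signature for client privacy; HMAC is a placeholder.
    claim["sig"] = hmac.new(TEE_KEY, blob, hashlib.sha256).hexdigest()
    return claim

def server_accepts(proof: dict, max_rate: int = 10, max_age_s: int = 60) -> bool:
    """Server side: verify the signature, check freshness, and apply its own rate threshold."""
    sig = proof.pop("sig")
    blob = json.dumps(proof, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(sig, hmac.new(TEE_KEY, blob, hashlib.sha256).hexdigest())
    fresh = time.time() - proof["ts"] <= max_age_s
    return ok_sig and fresh and proof["rate"] <= max_rate

proof = tee_generate_rate_proof("shop.example", action_timestamps=[time.time() - 30] * 3)
print("serve CAPTCHA instead?", not server_accepts(proof))
```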

#2 Obfuscation-Resilient Executable Payload Extraction From Packed Malware

Authors: Binlin Cheng ; Jiang Ming ; Erika A Leal ; Haotian Zhang ; Jianming Fu ; Guojun Peng ; Jean-Yves Marion

Over the past two decades, packed malware has remained a veritable challenge for security analysts. Not only is determining the end of unpacking increasingly difficult, but advanced packers also embed a variety of anti-analysis tricks to impede reverse engineering. As malware's APIs provide rich information about malicious behavior, one common anti-analysis strategy is API obfuscation, which removes the metadata of imported APIs from malware's PE header and complicates API name resolution from API callsites. In this way, even when security analysts obtain the unpacked code, a disassembler still fails to recognize imported API names, and the unpacked code cannot be successfully executed. Recently, generic binary unpacking has made breakthrough progress with noticeable performance improvement. However, reconstructing unpacked code's import tables, which is vital for further malware static/dynamic analyses, has largely been overlooked. Existing approaches are far from mature: they can either be easily evaded by various API obfuscation schemes (e.g., stolen code) or suffer from incomplete API coverage. In this paper, we aim to achieve the ultimate goal of Windows malware unpacking: recovering an executable malware program from the packed and obfuscated binary code. Based on the process memory when the original entry point (OEP) is reached, we develop a hardware-assisted tool, API-Xray, to reconstruct import tables. Import table reconstruction is challenging enough in its own right. Our core technique, API Micro Execution, explores all possible API callsites and executes them without knowing API argument values. At the same time, we take advantage of hardware tracing via the Intel Branch Trace Store and the NX bit to resolve API names and finally rebuild import tables. Compared with previous work, API-Xray is more resistant to various API obfuscation schemes and offers better coverage of resolved Windows API names. Since July 2019, we have tested API-Xray in practice to assist security professionals in malware analysis: we have successfully rebuilt 155,811 executable malware programs and substantially improved the detection rate for 7,514 unknown or new malware variants.
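
As a rough illustration of the final import-table rebuilding step only, the toy sketch below maps resolved branch targets (as hardware tracing might recover them) back to exported API names; all module layouts, addresses, and names are invented and greatly simplified relative to API-Xray.

```python
# Toy sketch of the last step of import-table reconstruction: once tracing has revealed
# where each obfuscated API callsite actually lands, map those targets back to exported
# API names. (Module layouts and addresses below are invented for illustration.)

exports = {
    "kernel32.dll": {0x7FF6_1000: "CreateFileW", 0x7FF6_2400: "ReadFile"},
    "ws2_32.dll":   {0x7FF7_0800: "connect"},
}

# callsite -> resolved branch target, e.g. recovered from Branch Trace Store records
observed_calls = {0x40_1010: 0x7FF6_1000, 0x40_1058: 0x7FF7_0800, 0x40_10A0: 0x7FF6_2400}

def rebuild_import_table(observed_calls, exports):
    addr_to_name = {addr: (dll, name) for dll, tbl in exports.items() for addr, name in tbl.items()}
    table = {}
    for callsite, target in sorted(observed_calls.items()):
        dll, name = addr_to_name.get(target, ("?", f"sub_{target:X}"))
        table.setdefault(dll, []).append(name)
    return table

print(rebuild_import_table(observed_calls, exports))
```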

#3 Effect of Mood, Location, Trust, and Presence of Others on Video-Based Social Authentication

Authors: Cheng Guo ; Brianne Campbell ; Apu Kapadia ; Michael K. Reiter ; Kelly Caine

Current fallback authentication mechanisms are unreliable (e.g., security questions are easy to guess) and need improvement. Social authentication shows promise as a novel form of fallback authentication. In this paper, we report the results of a four-week study that explored people's perceived willingness to use video chat as a form of social authentication. We investigated whether people's mood, location, and trust, and the presence of others affected perceived willingness to use video chat to authenticate. We found that participants who were alone, reported a more positive mood, and had more trust in others reported more willingness to use video chat as an authentication method. Participants also reported more willingness to help others to authenticate via video chat than to initiate a video chat authentication session themselves. Our results provide initial insights into human-computer interaction issues that could stem from using video chat as a fallback authentication method within a small social network of people (e.g., family members and close friends) who know each other well and trust each other.

#4 Abusing Hidden Properties to Attack the Node.js Ecosystem

Authors: Feng Xiao ; Jianwei Huang ; Yichang Xiong ; Guangliang Yang ; Hong Hu ; Guofei Gu ; Wenke Lee

Nowadays, Node.js has been widely used in the development of server-side and desktop programs (e.g., Skype), with its cross-platform and high-performance execution environment of JavaScript. In past years, it has been reported that other dynamic programming languages (e.g., PHP and Ruby) are unsafe when sharing objects. However, this security risk is not well studied and understood in JavaScript and Node.js programs. In this paper, we fill the gap by conducting the first systematic study on the communication process between client- and server-side code in Node.js programs. We identify several new vulnerabilities in popular Node.js programs. To demonstrate their security implications, we design and develop a novel, practical attack, named hidden property abusing (HPA). Our further analysis shows that HPA attacks are subtly different from existing findings regarding exploitation and attack effects. Through HPA attacks, a remote web attacker may obtain dangerous abilities, such as stealing confidential data, bypassing security checks, and launching DoS (Denial of Service) attacks. To help Node.js developers vet their programs against HPA, we design a novel vulnerability detection and verification tool, named Lynx, that utilizes hybrid program analysis to automatically reveal HPA vulnerabilities and even synthesize exploits. We apply Lynx to a set of widely used Node.js programs and identify 15 previously unknown vulnerabilities. We have reported all of our findings to the Node.js community. Ten of them have been assigned CVEs, and eight are rated as "Critical" or "High" severity. This indicates that HPA attacks can cause serious security threats.
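
Although HPA targets Node.js, the underlying failure mode, untrusted input silently introducing properties that internal code later trusts, translates directly to other dynamic languages. The hypothetical Python sketch below is an analogy to illustrate the pattern and an allowlist-style fix; it is not the paper's attack code.

```python
# Python analogue of hidden property abusing (HPA): the real attacks target Node.js
# object sharing, but the failure mode translates directly.
import json

class Order:
    def __init__(self):
        self.item = None
        self.quantity = 1
        self.discount = 0.0   # internal field, never meant to come from the client

    def apply_client_input(self, payload: dict):
        # BUG: blindly merging client JSON lets "hidden" keys overwrite internal state.
        self.__dict__.update(payload)

    def apply_client_input_safe(self, payload: dict):
        # Fix: only copy an explicit allowlist of client-controlled fields.
        for key in ("item", "quantity"):
            if key in payload:
                setattr(self, key, payload[key])

attacker_json = '{"item": "book", "quantity": 1, "discount": 1.0}'
o = Order()
o.apply_client_input(json.loads(attacker_json))
print(o.discount)   # 1.0 -- attacker-controlled "hidden" property reached internal logic

o2 = Order()
o2.apply_client_input_safe(json.loads(attacker_json))
print(o2.discount)  # 0.0 -- unexpected keys are dropped
```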

#5 Formally Verified Memory Protection for a Commodity Multiprocessor Hypervisor

Authors: Shih-Wei Li ; Xupeng Li ; Ronghui Gu ; Jason Nieh ; John Zhuang Hui

Hypervisors are widely deployed by cloud computing providers to support virtual machines, but their growing complexity poses a security risk, as large codebases contain many vulnerabilities. We present SeKVM, a layered Linux KVM hypervisor architecture that has been formally verified on multiprocessor hardware. Using layers, we isolate KVM's trusted computing base into a small core such that only the core needs to be verified to ensure KVM's security guarantees. Using layers, we model hardware features at different levels of abstraction tailored to each layer of software. Lower hypervisor layers that configure and control hardware are verified using a novel machine model that includes multiprocessor memory management hardware such as multi-level shared page tables, tagged TLBs, and a coherent cache hierarchy with cache bypass support. Higher hypervisor layers that build on the lower layers are then verified using a more abstract and simplified model, taking advantage of layer encapsulation to reduce proof burden. Furthermore, layers provide modularity to reduce verification effort across multiple implementation versions. We have retrofitted and verified multiple versions of KVM on Arm multiprocessor hardware, proving the correctness of the implementations and that they contain no vulnerabilities that can affect KVM's security guarantees. Our work is the first machine-checked proof for a commodity hypervisor using multiprocessor memory management hardware. SeKVM requires only modest KVM modifications and incurs only modest performance overhead versus unmodified KVM on real application workloads.

#6 'Passwords Keep Me Safe' – Understanding What Children Think about Passwords

Authors: Mary Theofanos ; Yee-Yin Choong ; Olivia Murphy

Children use technology from a very young age and often have to authenticate. The goal of this study is to explore children's practices, perceptions, and knowledge regarding passwords. Given the limited work to date and the fact that the world's cyber posture and culture will be dependent on today's youth, it is imperative to conduct cybersecurity research with children. We conducted the first large-scale survey of 1,505 3rd to 12th graders from schools across the United States. Not surprisingly, children have fewer passwords than adults. We found that children have complicated relationships with passwords: on one hand, their perceptions about passwords and statements about password behavior are appropriate; on the other hand, they simultaneously do not tend to make strong passwords and practice bad password behavior, such as sharing passwords with friends. We conclude with a call for cybersecurity education to bridge the gap between students' password knowledge and their password behavior, while continuing to provide and promote security understanding.

#7 Domain Shadowing: Leveraging Content Delivery Networks for Robust Blocking-Resistant Communications

Author: Mingkui Wei

We debut domain shadowing, a novel censorship evasion technique leveraging content delivery networks (CDNs). Domain shadowing exploits the fact that CDNs allow their customers to claim arbitrary domains as the back-end. By setting the front-end of a CDN service to an allowed domain and the back-end to a blocked one, a censored user can access resources of the blocked domain while all "indicators", including the connecting URL, the SNI of the TLS connection, and the Host header of the HTTP(S) request, appear to belong to the allowed domain. Furthermore, we demonstrate that domain shadowing can be proliferated by domain fronting, a censorship evasion technique that was popular a few years ago, making it even more difficult to block. Compared with existing censorship evasion solutions, domain shadowing is lightweight, incurs negligible overhead, and does not require dedicated third-party support. As a proof of concept, we implemented domain shadowing as a Firefox browser extension and demonstrated its capability to circumvent censorship within a heavily censored country known for its strict censorship policies and advanced technologies.
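
The sketch below summarizes the idea with hypothetical domains and a simplified CDN configuration: every censor-visible indicator names only the allowed front-end domain, while the user-configured CDN distribution maps it to the blocked origin. It is a conceptual illustration, not a working circumvention client.

```python
# Conceptual sketch of domain shadowing (domains and the CDN setup are hypothetical).
# The censor only ever sees indicators of the allowed front-end domain; the CDN's
# customer-controlled origin setting quietly maps it to the blocked site.

cdn_distribution = {
    "front_end": "allowed.example.com",   # domain the user connects to (DNS, SNI, Host)
    "origin":    "blocked.example.net",   # back-end the CDN fetches from on the user's behalf
}

def censor_visible_indicators(url_path: str) -> dict:
    """Everything observable on the wire references only the front-end domain."""
    front = cdn_distribution["front_end"]
    return {
        "dns_lookup": front,
        "tls_sni": front,
        "http_host_header": front,
        "url": f"https://{front}{url_path}",
    }

print(censor_visible_indicators("/news/article.html"))
# The CDN edge, configured by the censored user as its own customer, rewrites the Host
# header to blocked.example.net and proxies the response back over the allowed domain.
```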

#8 Automatic Policy Generation for Inter-Service Access Control of Microservices

Authors: Xing Li ; Yan Chen ; Zhiqiang Lin ; Xiao Wang ; Jim Hao Chen

Cloud applications today are often composed of many microservices. To prevent a microservice from being abused by other (compromised) microservices, inter-service access control is applied. However, the complexity of fine-grained access control policies, along with the large-scale and dynamic nature of microservices, makes the current manual configuration-based access control unsuitable. This paper presents AUTOARMOR, the first attempt to automate inter-service access control policy generation for microservices, with two fundamental techniques: (1) a static analysis-based request extraction mechanism that automatically obtains the invocation logic among microservices, and (2) a graph-based policy management mechanism that generates corresponding access control policies with on-demand policy updates. Our evaluation on popular microservice applications shows that AUTOARMOR is able to generate fine-grained inter-service access control policies and update them promptly based on changes in the application, with only a minor runtime overhead. By seamlessly integrating with the lifecycle of microservices, it does not require any changes to existing code and infrastructures.
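
The minimal sketch below, with invented service names and a simplified policy format loosely shaped like service-mesh authorization policies, shows how statically extracted invocation edges can be folded into least-privilege allow rules; it illustrates the idea rather than AUTOARMOR's implementation.

```python
# Minimal sketch: turn statically extracted invocation edges into least-privilege
# inter-service policies (field names and the policy shape are illustrative).
from collections import defaultdict

# (caller, callee, HTTP method, path) tuples, as a request-extraction pass might emit.
invocations = [
    ("frontend", "cart",    "GET",  "/cart/{id}"),
    ("frontend", "cart",    "POST", "/cart/{id}/items"),
    ("checkout", "cart",    "GET",  "/cart/{id}"),
    ("checkout", "payment", "POST", "/charge"),
]

def generate_policies(edges):
    per_callee = defaultdict(list)
    for caller, callee, method, path in edges:
        per_callee[callee].append({"from": caller, "methods": [method], "paths": [path]})
    policies = []
    for callee, rules in per_callee.items():
        policies.append({
            "service": callee,
            "action": "ALLOW",   # anything not listed is denied by default
            "rules": rules,
        })
    return policies

for p in generate_policies(invocations):
    print(p)
```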

#9 ARCUS: Symbolic Root Cause Analysis of Exploits in Production Systems

Authors: Carter Yagemann ; Matthew Pruett ; Simon P. Chung ; Kennon Bittick ; Brendan Saltaformaggio ; Wenke Lee

End-host runtime monitors (e.g., CFI, system call IDS) flag processes in response to symptoms of a possible attack. Unfortunately, the symptom (e.g., invalid control transfer) may occur long after the root cause (e.g., buffer overflow), creating a gap whereby bug reports received by developers contain (at best) a snapshot of the process long after it executed the buggy instructions. To help system administrators provide developers with more concise reports, we propose ARCUS, an automated framework that performs root cause analysis over the execution flagged by the end-host monitor. ARCUS works by testing “what if” questions to detect vulnerable states, systematically localizing bugs to their concise root cause while finding additional enforceable checks at the program binary level to demonstrably block them. Using hardware-supported processor tracing, ARCUS decouples the cost of analysis from host performance. We have implemented ARCUS and evaluated it on 31 vulnerabilities across 20 programs along with over 9,000 test cases from the RIPE and Juliet suites. ARCUS identifies the root cause of all tested exploits — with 0 false positives or negatives — and even finds 4 new 0-day vulnerabilities in traces averaging 4,000,000 basic blocks. ARCUS handles programs compiled from upwards of 810,000 lines of C/C++ code without needing concrete inputs or re-execution.

#10 PolyScope: Multi-Policy Access Control Analysis to Compute Authorized Attack Operations in Android Systems

Authors: Yu-Tsung Lee ; William Enck ; Haining Chen ; Hayawardh Vijayakumar ; Ninghui Li ; Zhiyun Qian ; Daimeng Wang ; Giuseppe Petracca ; Trent Jaeger

Android's filesystem access control provides a foundation for system integrity. It combines mandatory (e.g., SEAndroid) and discretionary (e.g., Unix permissions) access control, protecting both the Android platform from Android/OEM services and Android/OEM services from third-party applications. However, OEMs often introduce vulnerabilities when they add market-differentiating features and fail to correctly reconfigure this complex combination of policies. In this paper, we propose the PolyScope tool to triage Android systems for vulnerabilities using their filesystem access control policies by: (1) identifying the resources that subjects are authorized to use that may be modified by their adversaries, both with and without policy manipulations, and (2) determining the attack operations on those resources that are actually available to adversaries to reveal the specific cases that need vulnerability testing. A key insight is that adversaries may exploit discretionary elements in Android access control to expand the permissions available to themselves and/or victims to launch attack operations, which we call permission expansion. We apply PolyScope to five Google and five OEM Android releases and find that permission expansion increases the privilege available to launch attacks, sometimes by more than 10x, but a significant fraction (about 15-20%) cannot be converted into attack operations due to other system configurations. Based on this analysis, we describe two previously unknown vulnerabilities and show how PolyScope helps OEMs triage the complex combination of access control policies down to attack operations worthy of testing.
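
As a simplified illustration of triage step (1), the sketch below flags resources that a victim subject consumes but that one of its adversaries is authorized to modify; the permission tuples, adversary model, and data layout are invented, and PolyScope's handling of policy manipulations and permission expansion is omitted.

```python
# Simplified sketch of PolyScope-style triage (illustrative data model, not the tool's):
# flag resources a victim subject uses that some adversary is authorized to modify.

# permission tuples: (subject, resource, access) where access is "read" or "write"
perms = [
    ("system_server",   "/data/system/config",  "read"),
    ("oem_service",     "/data/system/config",  "write"),
    ("third_party_app", "/sdcard/shared/prefs", "write"),
    ("oem_service",     "/sdcard/shared/prefs", "read"),
]

# who is considered an adversary of whom (victim -> set of adversaries)
adversaries = {
    "system_server": {"oem_service", "third_party_app"},
    "oem_service":   {"third_party_app"},
}

def integrity_violations(perms, adversaries):
    writers, readers = {}, {}
    for subj, res, acc in perms:
        (writers if acc == "write" else readers).setdefault(res, set()).add(subj)
    findings = []
    for res, victims in readers.items():
        for victim in victims:
            hostile_writers = writers.get(res, set()) & adversaries.get(victim, set())
            if hostile_writers:
                findings.append((victim, res, sorted(hostile_writers)))
    return findings

for victim, res, attackers in integrity_violations(perms, adversaries):
    print(f"{victim} consumes {res}, which {attackers} can modify -> candidate attack operation")
```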

#11 PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking

Authors: Chong Xiang ; Arjun Nitin Bhagoji ; Vikash Sehwag ; Prateek Mittal

Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image. Such attacks can be realized in the physical world by attaching the adversarial patch to the object to be misclassified, and defending against such attacks is an unsolved/open problem. In this paper, we propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches. The cornerstone of PatchGuard involves the use of CNNs with small receptive fields to impose a bound on the number of features corrupted by an adversarial patch. Given a bounded number of corrupted features, the problem of designing an adversarial patch defense reduces to that of designing a secure feature aggregation mechanism. Towards this end, we present our robust masking defense that robustly detects and masks corrupted features to recover the correct prediction. Notably, we can prove the robustness of our defense against any adversary within our threat model. Our extensive evaluation on ImageNet, ImageNette (a 10-class subset of ImageNet), and CIFAR-10 datasets demonstrates that our defense achieves state-of-the-art performance in terms of both provable robust accuracy and clean accuracy.
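
A simplified numpy sketch of the detect-and-mask intuition follows: with small receptive fields a patch can only corrupt a bounded window of the feature map, so masking the highest-evidence window per class before aggregating can recover the correct prediction. The window size, evidence clipping, and toy input are illustrative and omit the paper's provable-robustness analysis.

```python
# Simplified sketch of PatchGuard's robust-masking idea (illustrative, not the paper's
# exact algorithm): mask the highest-evidence window per class, then aggregate.
import numpy as np

def robust_masking_predict(local_logits: np.ndarray, win: int = 2) -> int:
    """local_logits: (H, W, C) per-location class evidence from a small-receptive-field CNN."""
    H, W, C = local_logits.shape
    evidence = np.maximum(local_logits, 0.0)          # keep non-negative evidence
    masked_scores = np.empty(C)
    for c in range(C):
        fmap = evidence[:, :, c]
        best, best_pos = -1.0, (0, 0)
        for i in range(H - win + 1):                  # find the window with the largest
            for j in range(W - win + 1):              # total evidence for this class
                s = fmap[i:i + win, j:j + win].sum()
                if s > best:
                    best, best_pos = s, (i, j)
        masked = fmap.copy()
        i, j = best_pos
        masked[i:i + win, j:j + win] = 0.0            # mask the suspicious window
        masked_scores[c] = masked.sum()
    return int(np.argmax(masked_scores))

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 6, 10))
feats[:, :, 3] += 1.0                                 # class 3 is supported everywhere
feats[0:2, 0:2, 7] += 50.0                            # a "patch" injects huge local evidence for class 7
print(robust_masking_predict(feats))                  # masking the corrupted window recovers class 3
```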

#12 mID: Tracing Screen Photos via Moiré Patterns

Authors: Yushi Cheng ; Xiaoyu Ji ; Lixu Wang ; Qi Pang ; Yi-Chao Chen ; Wenyuan Xu

Cyber-theft of trade secrets has become a serious business threat. Digital watermarking is a popular technique to assist in identifying the source of a file leakage, whereby a unique watermark for each insider is hidden in sensitive files. However, malicious insiders may use their smartphones to photograph the secret file displayed on a screen, which destroys the embedded digital watermarks due to the optical noise introduced during photographing. To identify the leakage source despite such screen photo-based leakage attacks, we leverage the Moiré pattern, an optical phenomenon resulting from the interaction between electronic screens and cameras. We present mID, a new watermark-like technique that can create a carefully crafted Moiré pattern on a photo when it is taken of the screen. We design patterns that appear to be natural yet can be linked to the identity of the leaker. We implement mID and evaluate it with 5 display devices and 6 smartphones from various manufacturers and models. The results demonstrate that mID can achieve an average bit error rate (BER) of 0.6% and can successfully identify an ID with an average accuracy of 96%, with little influence from the type of display device, camera, ID, or ambient light.

#13 Scalable Detection of Promotional Website Defacements in Black Hat SEO Campaigns

Authors: Ronghai Yang ; Xianbo Wang ; Cheng Chi ; Dawei Wang ; Jiawei He ; Siming Pang ; Wing Cheong Lau

Miscreants from online underground economies regularly exploit website vulnerabilities and inject fraudulent content into victim web pages to promote illicit goods and services. Scalable detection of such promotional website defacements remains an open problem despite their prevalence in Black Hat Search Engine Optimization (SEO) campaigns. Adversaries often manage to inject content in a stealthy manner by obfuscating the description of illicit products and/or the presence of defacements to make them undetectable. In this paper, we design and implement DMoS, a Defacement Monitoring System that protects websites from promotional defacements at scale. Our design is based on two key observations: first, for effective advertising, the obfuscated jargon of illicit goods or services needs to be easily understood by its target customers (i.e., by sharing a similar shape or pronunciation); second, to promote the underground business, the defacements are crafted to boost the search engine ranking of the defaced web pages while trying to stay stealthy from the maintainers and legitimate users of the compromised websites. Leveraging these insights, we first follow human reading conventions and design a jargon normalization algorithm to map obfuscated jargon to its original form. We then develop a tag embedding mechanism, which enables DMoS to focus more on those not-so-visually-obvious, yet site-ranking influential, HTML tags (i.e., title, meta). Consequently, DMoS can reliably detect illicit content hidden in compromised web pages. In particular, we have deployed DMoS as a cloud-based monitoring service for a five-month trial run. It has analyzed more than 38 million web pages across 7,000+ commercial Chinese websites and found defacements in 11% of these websites. It achieves a recall of over 99% with a precision of about 89%. While the original design of DMoS focuses on the detection of Chinese promotional defacements, we have extended the system and demonstrated its applicability to English website defacement detection via proof-of-concept experiments.

#14 Evaluating In-Workflow Messages for Improving Mental Models of End-to-End Encryption

Authors: Omer Akgul ; Wei Bai ; Shruti Das ; Michelle L. Mazurek

As large messaging providers increasingly adopt end-to-end encryption, private communication is readily available to more users than ever before. However, misunderstandings of end-to-end encryption's benefits and shortcomings limit people's ability to make informed choices about how and when to use these services. This paper explores the potential of using short educational messages, built into messaging workflows, to improve users' functional mental models of secure communication. A preliminary survey study (n=461) finds that such messages, when used in isolation, can effectively improve understanding of several key concepts. We then conduct a longitudinal study (n=61) to test these messages in a more realistic environment: embedded into a secure messaging app. In this second study, we do not find statistically significant evidence of improvement in mental models; however, qualitative evidence from participant interviews suggests that if made more salient, such messages could have potential to improve users' understanding.

#15 LIGHTBLUE: Automatic Profile-Aware Debloating of Bluetooth Stacks

Authors: Jianliang Wu ; Ruoyu Wu ; Daniele Antonioli ; Mathias Payer ; Nils Ole Tippenhauer ; Dongyan Xu ; Dave (Jing) Tian ; Antonio Bianchi

The Bluetooth standard is ubiquitously supported by computers, smartphones, and IoT devices. Due to its complexity, implementations require large codebases, which are prone to security vulnerabilities, such as the recently discovered BlueBorne and BadBluetooth attacks. Although defined by the standard, most Bluetooth functionality, as grouped into different Bluetooth profiles, is not required in common usage scenarios. Starting from this observation, we implement LIGHTBLUE, a framework performing automatic, profile-aware debloating of Bluetooth stacks, allowing users to automatically minimize their Bluetooth attack surface by removing unneeded Bluetooth features. LIGHTBLUE starts with a target Bluetooth application, detects the associated Bluetooth profiles, and applies a combination of control-flow and data-flow analysis to remove unused code within the Bluetooth host code. Furthermore, to debloat the Bluetooth firmware, LIGHTBLUE extracts the used Host Controller Interface (HCI) commands and patches the HCI dispatcher in the Bluetooth firmware automatically, so that the Bluetooth firmware avoids processing unneeded HCI commands. We evaluate LIGHTBLUE on four different Bluetooth hosts and three different Bluetooth controllers. Our evaluation shows that LIGHTBLUE achieves between 32% and 50% code reduction in the Bluetooth host code and between 57% and 83% HCI command reduction in the Bluetooth firmware. This code reduction leads to the prevention of attacks responsible for 20 known CVEs, such as BlueBorne and BadBluetooth, while introducing no performance overhead and without affecting the behavior of the debloated application.
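
The toy sketch below shows the intended effect of the firmware-side debloating: an HCI dispatcher that only accepts the opcodes retained for the detected profiles. The opcodes shown are real HCI command opcodes, but the allowlist and dispatcher themselves are illustrative, not LIGHTBLUE's generated patches.

```python
# Toy sketch of the effect of LIGHTBLUE's firmware patching: an HCI dispatcher that
# accepts only the opcodes the debloated host/profile actually uses. Opcodes below are
# real HCI command opcodes (OGF << 10 | OCF), but the allowlist itself is illustrative.

HCI_RESET             = 0x0C03
HCI_LE_SET_ADV_PARAMS = 0x2006
HCI_LE_SET_ADV_ENABLE = 0x200A
HCI_WRITE_SCAN_ENABLE = 0x0C1A   # classic-BT command a BLE-only app never needs

# commands kept after profile-aware analysis of, say, a BLE-peripheral-only application
ALLOWED_OPCODES = {HCI_RESET, HCI_LE_SET_ADV_PARAMS, HCI_LE_SET_ADV_ENABLE}

def hci_dispatch(opcode: int, params: bytes) -> str:
    if opcode not in ALLOWED_OPCODES:
        # debloated firmware: handlers for unneeded commands are unreachable
        return f"rejected opcode 0x{opcode:04X}"
    return f"handled opcode 0x{opcode:04X} ({len(params)} bytes of parameters)"

print(hci_dispatch(HCI_LE_SET_ADV_ENABLE, b"\x01"))
print(hci_dispatch(HCI_WRITE_SCAN_ENABLE, b"\x03"))   # attack surface removed
```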

#16 PrivSyn: Differentially Private Data Synthesis

Authors: Zhikun Zhang ; Tianhao Wang ; Ninghui Li ; Jean Honorio ; Michael Backes ; Shibo He ; Jiming Chen ; Yang Zhang

In differential privacy (DP), a challenging problem is to generate synthetic datasets that efficiently capture the useful information in the private data. The synthetic dataset enables any task to be performed without privacy concerns or modifications to existing algorithms. In this paper, we present PrivSyn, the first automatic synthetic data generation method that can handle general tabular datasets (with 100 attributes and a domain size > 2^500). PrivSyn is composed of a new method to automatically and privately identify correlations in the data, and a novel method to generate sample data from a dense graphical model. We extensively evaluate different methods on multiple datasets to demonstrate the performance of our method.
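
For intuition, the sketch below shows the basic building block that marginal-based synthesizers consume: a 2-way marginal released under the Gaussian mechanism, here calibrated for zero-concentrated DP with sensitivity 1. The data, privacy parameter, and bucket counts are illustrative; PrivSyn's contributions lie in marginal selection and record synthesis, which are not shown.

```python
# Sketch of the noisy-measurement step behind marginal-based DP synthesis (parameters
# and data are illustrative; this is not PrivSyn's full algorithm).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
age_bucket = rng.integers(0, 5, size=n)      # 5 buckets
income_bucket = rng.integers(0, 4, size=n)   # 4 buckets

def noisy_two_way_marginal(a, b, bins_a, bins_b, sigma):
    counts = np.zeros((bins_a, bins_b))
    np.add.at(counts, (a, b), 1)             # true 2-way marginal (one record changes one cell)
    return counts + rng.normal(0.0, sigma, size=counts.shape)

# Gaussian-mechanism noise for L2 sensitivity 1 under rho-zCDP: sigma = sqrt(1 / (2 * rho))
rho = 0.1
sigma = np.sqrt(1.0 / (2.0 * rho))
noisy = noisy_two_way_marginal(age_bucket, income_bucket, 5, 4, sigma)
print(np.round(noisy, 1))
```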

#17 Disrupting Continuity of Apple's Wireless Ecosystem Security: New Tracking, DoS, and MitM Attacks on iOS and macOS Through Bluetooth Low Energy, AWDL, and Wi-Fi

Authors: Milan Stute ; Alexander Heinrich ; Jannik Lorenz ; Matthias Hollick

Apple controls one of the largest mobile ecosystems, with 1.5 billion active devices worldwide, and offers twelve proprietary wireless Continuity services. Previous works have unveiled several security and privacy issues in the involved protocols. These works extensively studied AirDrop while the coverage of the remaining vast Continuity service space is still low. To facilitate the cumbersome reverse-engineering process, we describe the first guide on how to approach a structured analysis of the involved protocols using several vantage points available on macOS. Also, we develop a toolkit to automate parts of this otherwise manual process. Based on this guide, we analyze the full protocol stacks involved in three Continuity services, in particular, Handoff (HO), Universal Clipboard (UC), and Wi-Fi Password Sharing (PWS). We discover several vulnerabilities spanning from Bluetooth Low Energy (BLE) advertisements to Apple's proprietary authentication protocols. These flaws allow for device tracking via HO's mDNS responses, a denial-of-service (DoS) attack on HO and UC, a DoS attack on PWS that prevents Wi-Fi password entry, and a machine-in-the-middle (MitM) attack on PWS that connects a target to an attacker-controlled Wi-Fi network. Our PoC implementations demonstrate that the attacks can be mounted using affordable off-the-shelf hardware ($20 micro:bit and a Wi-Fi card). Finally, we suggest practical mitigations and share our findings with Apple, who have started to release fixes through iOS and macOS updates.

#18 Muse: Secure Inference Resilient to Malicious Clients

Authors: Ryan Lehmkuhl ; Pratyush Mishra ; Akshayaram Srinivasan ; Raluca Ada Popa

The increasing adoption of machine learning inference in applications has led to a corresponding increase in concerns about the privacy guarantees offered by existing mechanisms for inference. Such concerns have motivated the construction of efficient secure inference protocols that allow parties to perform inference without revealing their sensitive information. Recently, there has been a proliferation of such proposals, rapidly improving efficiency. However, most of these protocols assume that the client is semi-honest, that is, the client does not deviate from the protocol; yet in practice, clients are many, have varying incentives, and can behave arbitrarily. To demonstrate that a malicious client can completely break the security of semi-honest protocols, we first develop a new model-extraction attack against many state-of-the-art secure inference protocols. Our attack enables a malicious client to learn model weights with 22x--312x fewer queries than the best black-box model-extraction attack and scales to much deeper networks. Motivated by the severity of our attack, we design and implement MUSE, an efficient two-party secure inference protocol resilient to malicious clients. MUSE introduces a novel cryptographic protocol for conditional disclosure of secrets to switch between authenticated additive secret shares and garbled circuit labels, and an improved Beaver's triple generation procedure which is 8x--12.5x faster than existing techniques. These protocols allow MUSE to push a majority of its cryptographic overhead into a preprocessing phase: compared to the equivalent semi-honest protocol (which is close to state-of-the-art), MUSE's online phase is only 1.7x--2.2x slower and uses 1.4x more communication. Overall, MUSE is 13.4x--21x faster and uses 2x--3.6x less communication than existing secure inference protocols which defend against malicious clients.
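
For context, the sketch below shows textbook Beaver-triple multiplication on additive secret shares over a prime field, the kind of preprocessing-driven operation whose triple generation MUSE accelerates; it omits MUSE's authenticated shares, garbled-circuit labels, and conditional disclosure of secrets.

```python
# Minimal sketch of Beaver-triple multiplication on additive secret shares
# (the textbook protocol over a prime field, not MUSE's authenticated machinery).
import random

P = 2**61 - 1  # prime modulus

def share(x):
    r = random.randrange(P)
    return r, (x - r) % P

def open_(s0, s1):
    return (s0 + s1) % P

# preprocessing: a random triple a*b = c, additively shared between the two parties
a, b = random.randrange(P), random.randrange(P)
c = (a * b) % P
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# online phase: multiply secret-shared x and y
x, y = 1234, 5678
x0, x1 = share(x); y0, y1 = share(y)

d = open_((x0 - a0) % P, (x1 - a1) % P)   # d = x - a (safe to reveal: a is uniform)
e = open_((y0 - b0) % P, (y1 - b1) % P)   # e = y - b

z0 = (c0 + d * b0 + e * a0 + d * e) % P   # party 0 adds the public d*e term
z1 = (c1 + d * b1 + e * a1) % P

assert open_(z0, z1) == (x * y) % P
print("shared product:", open_(z0, z1))
```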

#19 I Always Feel Like Somebody's Sensing Me! A Framework to Detect, Identify, and Localize Clandestine Wireless Sensors

Authors: Akash Deep Singh ; Luis Garcia ; Joseph Noor ; Mani Srivastava

The increasing ubiquity of low-cost wireless sensors has enabled users to easily deploy systems to remotely monitor and control their environments. However, this raises privacy concerns for third-party occupants, such as a hotel room guest who may be unaware of deployed clandestine sensors. Previous methods focused on specific modalities such as detecting cameras but do not provide a generalized and comprehensive method to capture arbitrary sensors which may be "spying" on a user. In this work, we propose SnoopDog, a framework to not only detect common Wi-Fi-based wireless sensors that are actively monitoring a user, but also classify and localize each device. SnoopDog works by establishing causality between patterns in observable wireless traffic and a trusted sensor in the same space, e.g., an inertial measurement unit (IMU) that captures a user's movement. Once causality is established, SnoopDog performs packet inspection to inform the user about the monitoring device. Finally, SnoopDog localizes the clandestine device in a 2D plane using a novel trial-based localization technique. We evaluated SnoopDog across several devices and various modalities and were able to detect causality for snooping devices 95.2% of the time and localize devices to a sufficiently reduced sub-space.
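
The sketch below captures the core intuition with synthetic data: a snooping camera's traffic rate co-varies with ground-truth motion from the trusted IMU, while an unrelated device's traffic does not. Plain correlation stands in here for SnoopDog's causality and perturbation trials, and all devices, rates, and thresholds are invented.

```python
# Sketch of SnoopDog's core intuition (illustrative thresholds and synthetic data).
import numpy as np

rng = np.random.default_rng(7)
T = 120                                        # one observation per second, 2 minutes
motion = (rng.random(T) < 0.3).astype(float)   # 1 = user moving (from the trusted IMU)

# per-device packets/sec captured in monitor mode (hypothetical devices)
camera_traffic = 20 + 80 * motion + rng.normal(0, 5, T)   # motion-triggered uploads
thermostat     = 5 + rng.normal(0, 2, T)                  # steady telemetry, unrelated

def motion_correlation(traffic, motion):
    return float(np.corrcoef(traffic, motion)[0, 1])

for name, traffic in [("camera", camera_traffic), ("thermostat", thermostat)]:
    r = motion_correlation(traffic, motion)
    verdict = "likely snooping on motion" if r > 0.5 else "no causal link found"
    print(f"{name}: corr={r:+.2f} -> {verdict}")
```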

#20 M2MON: Building an MMIO-based Security Reference Monitor for Unmanned Vehicles

Authors: Arslan Khan ; Hyungsub Kim ; Byoungyoung Lee ; Dongyan Xu ; Antonio Bianchi ; Dave (Jing) Tian

Unmanned Vehicles (UVs) often consist of multiple MicroController Units (MCUs) as peripherals to interact with the physical world, including GPS sensors, barometers, motors, etc. While the attack vectors for UVs vary, a number of UV attacks aim to impact the physical world from either the cyber or the physical space, e.g., hijacking the mission of UVs via malicious ground control commands or GPS spoofing. This provides us with an opportunity to build a unified and generic security framework defending against multiple kinds of UV attacks by monitoring the system's I/O activities. Accordingly, we build M2MON, a security reference monitor for UVs that hooks into memory-mapped I/O (MMIO). Instead of building upon an existing RTOS, we implement M2MON as a microkernel running in privileged mode intercepting MMIOs while pushing the RTOS and applications into unprivileged mode. We further instantiate an MMIO firewall using M2MON and demonstrate how to implement a secure Extended Kalman Filter (EKF) within M2MON. Our evaluation on a real-world UV system shows that M2MON incurs an 8.85% runtime overhead. Furthermore, the M2MON-based firewall is able to defend against different cyber and physical attacks. The M2MON microkernel contains less than 4K LoC, compared to the 3M LoC RTOS used in our evaluation. We believe M2MON provides the first step towards building a trusted and practical security reference monitor for UVs.
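
A toy model of the firewall instantiation is sketched below: the privileged monitor checks every MMIO access from the deprivileged RTOS against a per-peripheral policy and denies everything else. The address ranges, peripherals, and rules are made up for illustration.

```python
# Toy model of an M2MON-style MMIO firewall (address ranges and rules are invented).
# The privileged monitor sees every MMIO access from the deprivileged RTOS and allows
# only accesses matching a vetted per-peripheral policy.

MMIO_POLICY = [
    # (base, size, allowed access) per peripheral register block
    (0x4000_0000, 0x100, {"read", "write"}),   # motor controller
    (0x4001_0000, 0x040, {"read"}),            # GPS receiver: read-only from the RTOS
]

def mmio_monitor(addr: int, access: str) -> bool:
    for base, size, allowed in MMIO_POLICY:
        if base <= addr < base + size:
            return access in allowed
    return False                                # default deny: unknown peripheral

print(mmio_monitor(0x4000_0010, "write"))   # True: permitted actuator command path
print(mmio_monitor(0x4001_0004, "write"))   # False: attempt to tamper with GPS registers
print(mmio_monitor(0x5000_0000, "read"))    # False: outside any known peripheral
```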

#21 Systematic Evaluation of Privacy Risks of Machine Learning Models

Authors: Liwei Song ; Prateek Mittal

Machine learning models are prone to memorizing sensitive data, making them vulnerable to membership inference attacks in which an adversary aims to guess if an input sample was used to train the model. In this paper, we show that prior work on membership inference attacks may severely underestimate the privacy risks by relying solely on training custom neural network classifiers to perform attacks and focusing only on the aggregate results over data samples, such as the attack accuracy. To overcome these limitations, we first propose to benchmark membership inference privacy risks by improving existing non-neural network based inference attacks and proposing a new inference attack method based on a modification of prediction entropy. We propose to supplement existing neural network based attacks with our proposed benchmark attacks to effectively measure the privacy risks. We also propose benchmarks for defense mechanisms by accounting for adaptive adversaries with knowledge of the defense and also accounting for the trade-off between model accuracy and privacy risks. Using our benchmark attacks, we demonstrate that existing defense approaches against membership inference attacks are not as effective as previously reported. Next, we introduce a new approach for fine-grained privacy analysis by formulating and deriving a new metric called the privacy risk score. Our privacy risk score metric measures an individual sample's likelihood of being a training member, which allows an adversary to identify samples with high privacy risks and perform membership inference attacks with high confidence. We propose to combine both existing aggregate privacy analysis and our proposed fine-grained privacy analysis for systematically measuring privacy risks. We experimentally validate the effectiveness of the privacy risk score metric and demonstrate that the distribution of privacy risk scores across individual samples is heterogeneous. Finally, we perform an in-depth investigation to understand why certain samples have high privacy risk scores, including correlations with model properties such as model sensitivity, generalization error, and feature embeddings. Our work emphasizes the importance of a systematic and rigorous evaluation of privacy risks of machine learning models. We publicly release our code at https://github.com/inspire-group/membership-inference-evaluation and our evaluation mechanisms have also been integrated in Google's TensorFlow Privacy library.
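
The sketch below illustrates the flavor of the paper's entropy-based benchmark attacks: a modified prediction-entropy score that is low only when the model is confidently correct on a sample, with membership inferred by thresholding that score. The paper learns per-class thresholds from shadow data; the metric and single threshold here are a simplified rendition.

```python
# Sketch of entropy-based membership inference in the spirit of the paper's benchmark
# attacks: members tend to receive confident, *correct* predictions, so a score that is
# low only when the model is confidently correct separates members from non-members.
import numpy as np

EPS = 1e-12

def modified_entropy(probs: np.ndarray, label: int) -> float:
    """Low when the model is confidently correct on (x, y); high when confidently wrong."""
    p = np.clip(probs, EPS, 1.0)
    py = p[label]
    score = -(1.0 - py) * np.log(py)
    for i, pi in enumerate(p):
        if i != label:
            score -= pi * np.log(np.clip(1.0 - pi, EPS, 1.0))
    return float(score)

def infer_member(probs, label, threshold):
    # the paper uses per-class thresholds learned from shadow data; one threshold here
    return modified_entropy(np.asarray(probs), label) < threshold

print(modified_entropy(np.array([0.97, 0.02, 0.01]), 0))  # confidently correct -> small
print(modified_entropy(np.array([0.97, 0.02, 0.01]), 1))  # confidently wrong  -> large
```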

#22 "It's stressful having all these phones": Investigating Sex Workers' Safety Goals, Risks, and Practices Online [PDF] [Copy] [Kimi1] [REL]

Authors: Allison McDonald ; Catherine Barwulor ; Michelle L. Mazurek ; Florian Schaub ; Elissa M. Redmiles

We investigate how a population of end-users with especially salient security and privacy risks --- sex workers --- conceptualizes and manages their digital safety. The commercial sex industry is increasingly Internet-mediated. As such, sex workers are facing new challenges in protecting their digital privacy and security and avoiding serious consequences such as stalking, blackmail, and social exclusion. Through interviews (n=29) and a survey (n=65) with sex workers in European countries where sex work is legal and regulated, we find that sex workers have well-defined safety goals and clear awareness of the risks to their safety: clients, deficient legal protections, and hostile digital platforms. In response to these risks, our participants developed complex strategies for protecting their safety, but use few tools specifically designed for security and privacy. Our results suggest that if even high-risk users with clear risk conceptions view existing tools as insufficiently effective to merit the cost of use, these tools are not actually addressing their real security needs. Our findings underscore the importance of more holistic design of security tools to address both online and offline axes of safety.

#23 "Now I'm a bit angry:" Individuals' Awareness, Perception, and Responses to Data Breaches that Affected Them [PDF] [Copy] [Kimi1] [REL]

Authors: Peter Mayer ; Yixin Zou ; Florian Schaub ; Adam J. Aviv

Despite the prevalence of data breaches, there is a limited understanding of individuals' awareness, perception, and responses to breaches that affect them. We provide novel insights into this topic through an online study (n=413) in which we presented participants with up to three data breaches that had exposed their email addresses and other personal information. Overall, 73% of participants were affected by at least one breach, 5.36 breaches on average. Many participants attributed the cause of being affected by a breach to their poor email and security practices; only 14% correctly attributed the cause to external factors such as breached organizations and hackers. Participants were unaware of 74% of displayed breaches and expressed various emotions when learning about them. While some reported intending to take action, most participants believed the breach would not impact them. Our findings underline the need for user-friendly tools to improve consumers' resilience against breaches and accountability for breached organizations to provide more proactive post-breach communications and mitigations.

#24 PASAN: Detecting Peripheral Access Concurrency Bugs within Bare-Metal Embedded Applications

Authors: Taegyu Kim ; Vireshwar Kumar ; Junghwan Rhee ; Jizhou Chen ; Kyungtae Kim ; Chung Hwan Kim ; Dongyan Xu ; Dave (Jing) Tian

Concurrency bugs may be among the most challenging software defects to detect and debug due to their non-deterministic triggers caused by task scheduling and interrupt handling. While different tools have been proposed to address concurrency issues, protecting peripherals in embedded systems from concurrent accesses imposes unique challenges. A naïve lock protection on a certain memory-mapped I/O (MMIO) address still allows concurrent accesses to other MMIO addresses of a peripheral. Meanwhile, embedded peripherals such as sensors often employ internal state machines to achieve certain functionalities. As a result, improper locking can lead to the corruption of peripherals' ongoing jobs (which we call transaction corruption), and thus to corrupted sensor values or failed jobs. In this paper, we propose PASAN, a static analysis tool that detects peripheral access concurrency issues in embedded systems. PASAN automatically finds the MMIO address range of each peripheral device using parser-ready memory layout documents, extracts the peripheral's internal state machines using the corresponding device drivers, and detects concurrency bugs of peripheral accesses automatically. We evaluate PASAN on seven different embedded platforms, including multiple real-time operating systems (RTOSes) and robotic aerial vehicles (RAVs). PASAN found 17 true positive concurrency bugs in total from three different platforms, with bug detection rates ranging from 40% to 100%. We have reported all our findings to the corresponding parties. To the best of our knowledge, PASAN is the first static analysis tool to detect the intrinsic problems in concurrent peripheral accesses for embedded systems.
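
As a simplified illustration of the bug class, the lockset-style sketch below flags two execution contexts that access registers of the same peripheral without holding a common lock, which is what allows one context to corrupt the other's in-flight transaction; the peripheral map, access sites, and lock sets are invented and far simpler than PASAN's static analysis.

```python
# Simplified lockset-style sketch of the bug class PASAN targets: two execution contexts
# touch registers of the *same peripheral* (not just the same address) without a common
# lock, so one can interrupt the other mid-transaction. Data is illustrative.

PERIPHERALS = {"i2c_sensor": (0x4800_0000, 0x4800_0100)}

# (context, mmio_address, locks held at the access site), e.g. from static analysis
accesses = [
    ("task_main",   0x4800_0004, {"i2c_lock"}),
    ("task_main",   0x4800_0008, {"i2c_lock"}),
    ("irq_handler", 0x4800_0008, set()),        # same peripheral, no common lock
]

def peripheral_of(addr):
    for name, (lo, hi) in PERIPHERALS.items():
        if lo <= addr < hi:
            return name
    return None

def find_races(accesses):
    races = set()
    for i, (ctx1, a1, l1) in enumerate(accesses):
        for ctx2, a2, l2 in accesses[i + 1:]:
            dev = peripheral_of(a1)
            if dev and dev == peripheral_of(a2) and ctx1 != ctx2 and not (l1 & l2):
                races.add((ctx1, ctx2, dev))
    return sorted(races)

print(find_races(accesses))   # [('task_main', 'irq_handler', 'i2c_sensor')]
```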

#25 LZR: Identifying Unexpected Internet Services

Authors: Liz Izhikevich ; Renata Teixeira ; Zakir Durumeric

Internet-wide scanning is a commonly used research technique that has helped uncover real-world attacks, find cryptographic weaknesses, and understand both operator and miscreant behavior. Studies that employ scanning have largely assumed that services are hosted on their IANA-assigned ports, overlooking the study of services on unusual ports. In this work, we investigate where Internet services are deployed in practice and evaluate the security posture of services on unexpected ports. We show protocol deployment is more diffuse than previously believed and that protocols run on many additional ports beyond their primary IANA-assigned port. For example, only 3% of HTTP and 6% of TLS services run on ports 80 and 443, respectively. Services on non-standard ports are more likely to be insecure, which results in studies dramatically underestimating the security posture of Internet hosts. Building on our observations, we introduce LZR ("Laser"), a system that identifies 99% of identifiable unexpected services in five handshakes and dramatically reduces the time needed to perform application-layer scans on ports with few responsive expected services (e.g., 5500% speedup on 27017/MongoDB). We conclude with recommendations for future studies.
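
The sketch below illustrates the identification idea with a few simplified signatures: fingerprint whatever a port actually speaks from a handful of probe responses rather than trusting the IANA assignment. The signature set and byte patterns are illustrative, not LZR's fingerprint database.

```python
# Illustrative sketch of LZR-style service identification: send a small, fixed set of
# protocol handshakes and fingerprint whatever comes back, instead of assuming the
# IANA-assigned service for the port. Signatures below are simplified examples.

SIGNATURES = [
    ("http",    lambda r: r.startswith(b"HTTP/")),
    ("ssh",     lambda r: r.startswith(b"SSH-")),
    ("tls",     lambda r: len(r) >= 3 and r[0] == 0x16 and r[1] == 0x03),  # TLS handshake record
    ("mongodb", lambda r: b"MongoDB" in r),
]

def identify(responses: list) -> str:
    """responses: raw reply bytes collected from a few probe handshakes on one port."""
    for reply in responses:
        for name, matches in SIGNATURES:
            if reply and matches(reply):
                return name
    return "unknown-or-unresponsive"

# e.g., a host running SSH on port 443 would be mislabeled by a port-number heuristic:
print(identify([b"SSH-2.0-OpenSSH_8.4\r\n"]))   # ssh
print(identify([b"\x16\x03\x03\x00\x55"]))      # tls
print(identify([b""]))                          # unknown-or-unresponsive
```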