NDSS.2023 - Fall

Total: 58

#1 A Security Study about Electron Applications and a Programming Methodology to Tame DOM Functionalities

Authors: Zihao Jin (Microsoft Research and Tsinghua University) ; Shuo Chen (Microsoft Research) ; Yang Chen (Microsoft Research) ; Haixin Duan (Tsinghua University and Quancheng Laboratory) ; Jianjun Chen (Tsinghua University and Zhongguancun Laboratory) ; Jianping Wu (Tsinghua University)

The Electron platform represents a paradigm for developing modern desktop apps using HTML and JavaScript. Microsoft Teams, Visual Studio Code and other flagship products are examples of Electron apps. This new paradigm carries the security challenges of web programming into the desktop-app realm, thus opening a new avenue for local-machine exploitation. We conducted a security study of real-world Electron apps and discovered many vulnerabilities that have now been confirmed by the app vendors. The conventional wisdom is to view these bugs as *sanitization errors*; accordingly, secure programming requires programmers to explicitly enumerate all kinds of unexpected inputs to sanitize. We believe that secure programming should instead focus on specifying programmers' intentions as opposed to their non-intentions. We introduce a concept called the *DOM-tree type*, which expresses the set of DOM trees that an app expects to see during execution, so that an exploit is caught as a type violation. With insights into the HTML standard and the Chromium engine, we build the DOM-tree type mechanism into the Electron platform. Our evaluation shows that the methodology is practical and that it secures all the vulnerable apps found in our study.
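
As a rough illustration of the DOM-tree-type idea (a minimal sketch only; the paper builds the real mechanism into Electron and Chromium, and the tag/attribute whitelist below is hypothetical), an app-declared "type" can be checked against the DOM the app actually renders:

```python
# Minimal sketch of the DOM-tree-type idea (not the paper's actual mechanism):
# the app declares the DOM shapes it expects, and anything outside that set is
# rejected as a "type" violation instead of being caught by input sanitization.
from html.parser import HTMLParser

# Hypothetical app-declared type: allowed tags and, per tag, allowed attributes.
DOM_TYPE = {
    "div": {"id", "class"},
    "span": {"class"},
    "button": {"id", "onclick"},  # onclick only where the app expects it
}

class DomTypeChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag not in DOM_TYPE:
            self.violations.append(f"unexpected tag <{tag}>")
            return
        for name, _ in attrs:
            if name not in DOM_TYPE[tag]:
                self.violations.append(f"unexpected attribute {name!r} on <{tag}>")

checker = DomTypeChecker()
checker.feed('<div id="app"><img src=x onerror="require(\'child_process\')"></div>')
print(checker.violations)  # the injected <img onerror=...> is flagged as a type violation
```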

#2 Access Your Tesla without Your Awareness: Compromising Keyless Entry System of Model 3

Authors: Xinyi Xie (Shanghai Fudan Microelectronics Group Co., Ltd.) ; Kun Jiang (Shanghai Fudan Microelectronics Group Co., Ltd.) ; Rui Dai (Shanghai Fudan Microelectronics Group Co., Ltd.) ; Jun Lu (Shanghai Fudan Microelectronics Group Co., Ltd.) ; Lihui Wang (Shanghai Fudan Microelectronics Group Co., Ltd.) ; Qing Li (State Key Laboratory of ASIC & System, Fudan University) ; Jun Yu (State Key Laboratory of ASIC & System, Fudan University)

The Tesla Model 3 is equipped with Phone Keys and Key Cards in addition to traditional key fobs for a better driving experience. These new features allow a driver to enter and start the vehicle without using a mechanical key, through a wireless authentication process between the vehicle and the key. Unlike Key Cards, which must be swiped against the car, the Tesla mobile app's Phone Key feature can unlock a Model 3 while the smartphone is still in a pocket or bag. In this paper, we perform a detailed security analysis of Tesla keys, especially Key Cards and Phone Keys. Starting with reverse engineering the mobile application and sniffing the communication data, we reconstructed the pairing and authentication protocols and analyzed their potential issues. Missing certificate verification allows an unofficial Key Card to work as an official one, so using such third-party products may lead to serious security problems. Moreover, weaknesses in the current protocol enable a man-in-the-middle (MitM) attack through a Bluetooth channel. The MitM attack is an improved relay attack that breaks the security of the authentication procedures for Phone Keys. We also developed an app named TESmLA, installed on customized Android devices, to complete the proof of concept. Attackers can break into a Tesla Model 3 and drive it away without the awareness of the car owner. Our results bring into question the security of Passive Keyless Entry and Start (PKES) and Bluetooth implementations in security-critical applications. To mitigate these security problems, we discuss corresponding countermeasures and a feasible secure scheme for the future.

#3 Accountable Javascript Code Delivery

Authors: Ilkan Esiyok (CISPA Helmholtz Center for Information Security) ; Pascal Berrang (University of Birmingham & Nimiq) ; Katriel Cohn-Gordon (Meta) ; Robert Künnemann (CISPA Helmholtz Center for Information Security)

The Internet is a major distribution platform for web applications, but there are no effective transparency and audit mechanisms in place for the web. Due to the ephemeral nature of web applications, a client visiting a website has no guarantee that the code it receives today is the same as yesterday, or the same as other visitors receive. Despite advances in web security, it is thus challenging to audit web applications before they are rendered in the browser. We propose Accountable JS, a browser extension and opt-in protocol for accountable delivery of active content on a web page. We prototype our protocol, formally model its security properties with the Tamarin Prover, and evaluate its compatibility and performance impact with case studies including WhatsApp Web, AdSense and Nimiq. Accountability is beginning to be deployed at scale, with Meta’s recent announcement of Code Verify available to all 2 billion WhatsApp users, but there has been little formal analysis of such protocols. We formally model Code Verify using the Tamarin Prover and compare its properties to our Accountable JS protocol. We also compare Code Verify’s and Accountable JS extension's performance impacts on WhatsApp Web.
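
A minimal sketch of the underlying accountability check, assuming a hypothetical manifest of expected script digests (this illustrates hash-based verification in general, not the Accountable JS or Code Verify protocol):

```python
# Sketch: the client refuses to execute active content whose digest does not
# match a published manifest (which, in a real deployment, would be signed and
# logged in a transparency log so that delivery is auditable).
import hashlib

def sha256_hex(body: bytes) -> str:
    return hashlib.sha256(body).hexdigest()

# Hypothetical manifest: script URL -> expected digest.
script_body = b"console.log('hello from app.js');"
MANIFEST = {"https://example.org/app.js": sha256_hex(script_body)}

def verify_script(url: str, body: bytes) -> bool:
    expected = MANIFEST.get(url)
    return expected is not None and sha256_hex(body) == expected

print(verify_script("https://example.org/app.js", script_body))          # True
print(verify_script("https://example.org/app.js", b"alert('tampered')")) # False
```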

#4 Adversarial Robustness for Tabular Data through Cost and Utility Awareness

Authors: Klim Kireev (EPFL) ; Bogdan Kulynych (EPFL) ; Carmela Troncoso (EPFL)

Many safety-critical applications of machine learning, such as fraud or abuse detection, use data in tabular domains. Adversarial examples can be particularly damaging for these applications. Yet, existing works on adversarial robustness primarily focus on machine-learning models in image and text domains. We argue that, due to the differences between tabular data and images or text, existing threat models are not suitable for tabular domains. These models do not capture that the costs of an attack could be more significant than imperceptibility, or that the adversary could assign different values to the utility obtained from deploying different adversarial examples. We demonstrate that, due to these differences, the attack and defense methods used for images and text cannot be directly applied to tabular settings. We address these issues by proposing new cost and utility-aware threat models that are tailored to the adversarial capabilities and constraints of attackers targeting tabular domains. We introduce a framework that enables us to design attack and defense mechanisms that result in models protected against cost or utility-aware adversaries, for example, adversaries constrained by a certain financial budget. We show that our approach is effective on three datasets corresponding to applications for which adversarial examples can have economic and social implications.
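
The following toy sketch illustrates what a cost-aware threat model means in a tabular setting; the linear model, feature costs, and budget are all invented for illustration and are not the paper's framework:

```python
# Toy cost-aware adversary: only feature changes whose total monetary cost stays
# within a budget are allowed, and the adversary greedily picks the changes that
# most reduce the fraud score of a simple linear model.
import numpy as np

weights = np.array([2.0, -1.0, 0.5])   # toy linear fraud-scoring model
x = np.array([1.0, 0.0, 3.0])          # original transaction features
costs = np.array([50.0, 10.0, 200.0])  # cost (e.g., dollars) to lower each feature by one unit
budget = 60.0

def score(v):
    return float(weights @ v)

x_adv, spent = x.copy(), 0.0
while True:
    reduction_per_dollar = weights / costs   # lowering feature i by 1 lowers the score by w_i
    i = int(np.argmax(reduction_per_dollar))
    if reduction_per_dollar[i] <= 0 or spent + costs[i] > budget:
        break
    x_adv[i] -= 1.0
    spent += costs[i]

print(f"score {score(x):.2f} -> {score(x_adv):.2f}, cost spent: {spent}")
```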

#5 Anomaly Detection in the Open World: Normality Shift Detection, Explanation, and Adaptation

Authors: Dongqi Han (Tsinghua University) ; Zhiliang Wang (Tsinghua University) ; Wenqi Chen (Tsinghua University) ; Kai Wang (Tsinghua University) ; Rui Yu (Tsinghua University) ; Su Wang (Tsinghua University) ; Han Zhang (Tsinghua University) ; Zhihua Wang (State Grid Shanghai Municipal Electric Power Company) ; Minghui Jin (State Grid Shanghai Municipal Electric Power Company) ; Jiahai Yang (Tsinghua University) ; Xingang Shi (Tsinghua University) ; Xia Yin (Tsinghua University)

Concept drift is one of the most frustrating challenges for learning-based security applications built on the closed-world assumption of identical distributions between training and deployment. Anomaly detection, one of the most important tasks in security domains, is, instead, immune to drift in abnormal behavior because it is trained without any abnormal data (known as zero-positive), which however comes at the cost of more severe impacts when normality shifts. Existing studies mainly focus on concept drift of abnormal behavior and/or supervised learning, leaving normality shift for zero-positive anomaly detection largely unexplored. In this work, we are the first to explore normality shift for deep learning-based anomaly detection in security applications, and we propose OWAD, a general framework to detect, explain, and adapt to normality shift in practice. In particular, OWAD outperforms prior work by detecting shift in an unsupervised fashion, reducing the overhead of manual labeling, and providing better adaptation performance by tackling the shift at the distribution level. We demonstrate the effectiveness of OWAD through several realistic experiments on three security-related anomaly detection applications with long-term practical data. Results show that OWAD can provide better adaptation performance under normality shift with less labeling overhead. We provide case studies to analyze the normality shift and provide operational recommendations for security applications. We also conduct an initial real-world deployment on a SCADA security system.
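
As a simplified illustration of unsupervised shift detection (not OWAD's actual method), one can compare the anomaly-score distribution of newly observed data against the training-time distribution:

```python
# Simple sketch: flag a normality shift when the anomaly-score distribution of
# the current traffic window no longer matches the scores seen at training time.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_scores = rng.normal(loc=0.0, scale=1.0, size=5000)   # scores on old "normal" data
new_scores   = rng.normal(loc=0.8, scale=1.2, size=5000)   # scores after the environment changed

stat, p_value = ks_2samp(train_scores, new_scores)
if p_value < 0.01:
    print(f"normality shift detected (KS statistic={stat:.3f}); adaptation needed")
else:
    print("score distribution unchanged")
```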

#6 Assessing the Impact of Interface Vulnerabilities in Compartmentalized Software

Authors: Hugo Lefeuvre (The University of Manchester) ; Vlad-Andrei Bădoiu (University Politehnica of Bucharest) ; Yi Chen (Rice University) ; Felipe Huici (Unikraft.io) ; Nathan Dautenhahn (Rice University) ; Pierre Olivier (The University of Manchester)

Least-privilege separation decomposes applications into compartments limited to accessing only what they need. When compartmentalizing existing software, many approaches neglect securing the new inter-compartment interfaces, although what used to be a function call from/to a trusted component is now potentially a targeted attack from a malicious compartment. This results in an entire class of security bugs: Compartment Interface Vulnerabilities (CIVs). This paper provides an in-depth study of CIVs. We taxonomize these issues and show that they affect all known compartmentalization approaches. We propose ConfFuzz, an in-memory fuzzer specialized to detect CIVs at possible compartment boundaries. We apply ConfFuzz to a set of 25 popular applications and 36 possible compartment APIs, uncovering a wide dataset of 629 vulnerabilities. We systematically study these issues and extract numerous insights on the prevalence of CIVs, their causes, their impact, and the complexity of addressing them. We stress the critical importance of CIVs in compartmentalization approaches, demonstrating an attack to extract isolated keys in OpenSSL and uncovering a decade-old vulnerability in sudo. We show, among other findings, that not all interfaces are affected in the same way, that API size is uncorrelated with CIV prevalence, and that addressing interface vulnerabilities goes beyond writing simple checks. We conclude the paper with guidelines for CIV-aware compartment interface design, and appeal for more research towards systematic CIV detection and mitigation.
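
A toy sketch of the kind of boundary fuzzing involved (illustrative only; ConfFuzz works in-memory on real compartment APIs, and the interface below is invented): mutate the arguments an untrusted compartment passes across the boundary and watch for crashes caused by unchecked assumptions.

```python
# Toy in-memory fuzzing of a compartment interface: the "trusted" side receives
# a buffer and a caller-supplied length and must not trust either.
import random

def checksum_from_untrusted(buf: bytes, length: int) -> int:
    # Naive interface: it trusts the caller-supplied length (a classic CIV).
    total = 0
    for i in range(length):
        total = (total + buf[i]) & 0xFF        # IndexError once length exceeds len(buf)
    return total

def fuzz(iterations: int = 1000):
    random.seed(1)
    for _ in range(iterations):
        buf = bytes(random.randrange(256) for _ in range(random.randrange(0, 32)))
        length = random.choice([0, 1, len(buf), len(buf) + 1, 2**16])   # boundary-ish values
        try:
            checksum_from_untrusted(buf, length)
        except Exception as exc:   # a crash signals an unchecked cross-compartment assumption
            print("interface bug:", type(exc).__name__, "with len(buf) =", len(buf), "length =", length)
            return

fuzz()
```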

#7 Attacks as Defenses: Designing Robust Audio CAPTCHAs Using Attacks on Automatic Speech Recognition Systems

Authors: Hadi Abdullah (Visa Research) ; Aditya Karlekar (University of Florida) ; Saurabh Prasad (University of Florida) ; Muhammad Sajidur Rahman (University of Florida) ; Logan Blue (University of Florida) ; Luke A. Bauer (University of Florida) ; Vincent Bindschaedler (University of Florida) ; Patrick Traynor (University of Florida)

Audio CAPTCHAs are supposed to provide a strong defense for online resources; however, advances in speech-to-text mechanisms have rendered these defenses ineffective. Audio CAPTCHAs cannot simply be abandoned, as they are specifically named by the W3C as important enablers of accessibility. Accordingly, demonstrably more robust audio CAPTCHAs are important to the future of a secure and accessible Web. We look to recent literature on attacks on speech-to-text systems for inspiration for the construction of robust, principle-driven audio defenses. We begin by comparing 20 recent attack papers, classifying and measuring their suitability to serve as the basis of new "robust to transcription" but "easy for humans to understand" CAPTCHAs. After showing that none of these attacks alone are sufficient, we propose a new mechanism that is both comparatively intelligible (evaluated through a user study) and hard to automatically transcribe (i.e., $P(\mathrm{transcription}) = 4 \times 10^{-5}$). We also demonstrate that our audio samples have a high probability of being detected as CAPTCHAs when given to speech-to-text systems ($P(\mathrm{evasion}) = 1.77 \times 10^{-4}$). Finally, we show that our method is robust to WaveGuard, a popular mechanism designed to defeat adversarial examples (and enable ASRs to output the original transcript instead of the adversarial one). We show that our method can break WaveGuard with a 99% success rate. In so doing, we not only demonstrate a CAPTCHA that is approximately four orders of magnitude more difficult to crack, but that such systems can be designed based on the insights gained from attack papers using the differences between the ways that humans and computers process audio.

#8 Backdoor Attacks Against Dataset Distillation

Authors: Yugeng Liu (CISPA Helmholtz Center for Information Security) ; Zheng Li (CISPA Helmholtz Center for Information Security) ; Michael Backes (CISPA Helmholtz Center for Information Security) ; Yun Shen (Netapp) ; Yang Zhang (CISPA Helmholtz Center for Information Security)

Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
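
A simplified sketch of the trigger-injection idea, with an invented update loop standing in for real dataset distillation (this is not the NAIVEATTACK/DOORPING code):

```python
# Sketch: stamp a small trigger patch onto the synthetic images while they are
# being optimized, so the trigger survives distillation and ends up in any model
# later trained on the distilled data.
import numpy as np

def apply_trigger(images: np.ndarray, value: float = 1.0, size: int = 3) -> np.ndarray:
    # images: (N, H, W) synthetic samples; paste a white square in the corner.
    stamped = images.copy()
    stamped[:, -size:, -size:] = value
    return stamped

rng = np.random.default_rng(0)
synthetic = rng.random((10, 28, 28))
for step in range(100):
    synthetic -= 0.01 * rng.standard_normal(synthetic.shape)  # stand-in for the real distillation update
    synthetic = apply_trigger(synthetic)                      # re-apply so the trigger persists

print("trigger pixels:", synthetic[0, -3:, -3:].ravel())
```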

#9 BARS: Local Robustness Certification for Deep Learning based Traffic Analysis Systems

Authors: Kai Wang (Tsinghua University) ; Zhiliang Wang (Tsinghua University) ; Dongqi Han (Tsinghua University) ; Wenqi Chen (Tsinghua University) ; Jiahai Yang (Tsinghua University) ; Xingang Shi (Tsinghua University) ; Xia Yin (Tsinghua University)

Deep learning (DL) performs well in many traffic analysis tasks. Nevertheless, the vulnerability of deep learning weakens the real-world performance of these traffic analyzers (e.g., they suffer from evasion attacks). Many studies in recent years have focused on robustness certification for DL-based models, but existing methods perform far from perfectly in the traffic analysis domain. In this paper, we try to match three attributes of DL-based traffic analysis systems at the same time: (1) highly heterogeneous features, (2) varied model designs, (3) adversarial operating environments. Therefore, we propose BARS, a general robustness certification framework for DL-based traffic analysis systems based on boundary-adaptive randomized smoothing. To obtain a tighter robustness guarantee, BARS uses optimized smoothing noise converging on the classification boundary. We first propose the Distribution Transformer for generating optimized smoothing noise. Then, to optimize the smoothing noise, we propose special distribution functions and two gradient-based search algorithms for noise shape and noise scale. We implement and evaluate BARS in three practical DL-based traffic analysis systems. Experiment results show that BARS can achieve tighter robustness guarantees than baseline methods. Furthermore, we illustrate the practicability of BARS through five application cases (e.g., quantitatively evaluating robustness).
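
For background, the sketch below shows the standard randomized-smoothing certification recipe that BARS builds on (isotropic Gaussian noise and a normal-approximation confidence bound; BARS's boundary-adaptive noise optimization is not shown):

```python
# Standard randomized smoothing: vote over noisy copies of the input and derive
# a certified L2 radius from the (lower-bounded) top-class probability.
import numpy as np
from scipy.stats import norm

def certify(classifier, x, sigma=0.5, n=1000, alpha=0.001, rng=None):
    """Return (predicted class, certified L2 radius) for input x under Gaussian smoothing."""
    rng = rng or np.random.default_rng(0)
    noisy = x + sigma * rng.standard_normal((n,) + x.shape)
    votes = np.bincount([classifier(v) for v in noisy])
    top = int(votes.argmax())
    p_hat = votes[top] / n
    # Simple normal-approximation lower bound on the top-class probability.
    p_low = p_hat - norm.ppf(1 - alpha) * np.sqrt(p_hat * (1 - p_hat) / n)
    p_low = min(max(p_low, 0.0), 0.999)
    radius = sigma * norm.ppf(p_low) if p_low > 0.5 else 0.0
    return top, radius

# Toy "traffic classifier": class 1 iff the mean feature value exceeds a threshold.
clf = lambda v: int(v.mean() > 0.2)
print(certify(clf, np.full(16, 0.9)))
```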

#10 BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense

Authors: Siyuan Cheng (Purdue University) ; Guanhong Tao (Purdue University) ; Yingqi Liu (Purdue University) ; Shengwei An (Purdue University) ; Xiangzhe Xu (Purdue University) ; Shiwei Feng (Purdue University) ; Guangyu Shen (Purdue University) ; Kaiyuan Zhang (Purdue University) ; Qiuling Xu (Purdue University) ; Shiqing Ma (Rutgers University) ; Xiangyu Zhang (Purdue University)

Deep Learning backdoor attacks have a threat model similar to traditional cyber attacks. Attack forensics, a critical counter-measure for traditional cyber attacks, is hence of importance for defending model backdoor attacks. In this paper, we propose a novel model backdoor forensics technique. Given a few attack samples such as inputs with backdoor triggers, which may represent different types of backdoors, our technique automatically decomposes them to clean inputs and the corresponding triggers. It then clusters the triggers based on their properties to allow automatic attack categorization and summarization. Backdoor scanners can then be automatically synthesized to find other instances of the same type of backdoor in other models. Our evaluation on 2,532 pre-trained models, 10 popular attacks, and comparison with 9 baselines show that our technique is highly effective. The decomposed clean inputs and triggers closely resemble the ground truth. The synthesized scanners substantially outperform the vanilla versions of existing scanners that can hardly generalize to different kinds of attacks.

#11 BlockScope: Detecting and Investigating Propagated Vulnerabilities in Forked Blockchain Projects

Authors: Xiao Yi (The Chinese University of Hong Kong) ; Yuzhou Fang (The Chinese University of Hong Kong) ; Daoyuan Wu (The Chinese University of Hong Kong) ; Lingxiao Jiang (Singapore Management University)

Due to the open-source nature of the blockchain ecosystem, it is common for new blockchains to fork or partially reuse the code of classic blockchains. For example, the popular Dogecoin, Litecoin, Binance BSC, and Polygon are all variants of Bitcoin/Ethereum. These “forked” blockchains thus could encounter similar vulnerabilities that are propagated from Bitcoin/Ethereum during forking or subsequent commit fetching. In this paper, we conduct a systematic study of detecting and investigating the propagated vulnerabilities in forked blockchain projects. To facilitate this study, we propose BlockScope, a novel tool that can effectively and efficiently detect multiple types of cloned vulnerabilities given an input of existing Bitcoin/Ethereum security patches. Specifically, BlockScope adopts similarity-based code matching and designs a new way of calculating code similarity to cover all the syntax-wide variant (i.e., Type-1, Type-2, and Type-3) clones. Moreover, BlockScope automatically extracts and leverages the contexts of patch code to narrow down the search scope and locate only potentially relevant code for comparison. Our evaluation shows that BlockScope achieves good precision and high recall, both at 91.8% (1.8 times higher recall than the state-of-the-art ReDeBug, with comparable precision). BlockScope allowed us to discover 101 previously unknown vulnerabilities in 13 out of the 16 forked projects of Bitcoin and Ethereum, including 16 from Dogecoin, 6 from Litecoin, 1 from Binance BSC, and 4 from Optimism. We have reported all the vulnerabilities to their developers; 40 of them have been patched or accepted, 66 were acknowledged or pending, and only 4 were rejected. We further investigate the propagation and patching processes of the discovered vulnerabilities, and reveal three types of vulnerability propagation from source to forked projects, as well as the long delay (mostly over 200 days) for releasing patches in Bitcoin forks (vs. ∼100 days for Ethereum forks).
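
A minimal sketch of similarity-based patch matching (illustrative only; BlockScope's context extraction and similarity calculation are more sophisticated, and the code snippets below are invented):

```python
# Sketch: tokenize the patched (vulnerable) code context and a candidate snippet
# from a forked project, then compare the two token sets; a high score suggests
# the fork still carries the pre-patch (vulnerable) code.
import re

def tokens(code: str) -> set:
    return set(re.findall(r"[A-Za-z_]\w+|==|!=|<=|>=|[{}();=<>+-]", code))

def similarity(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

vulnerable_context = 'if (nSize > MAX_SIZE) return error("oversized");'
fork_candidate     = 'if (nSize > MAX_BLOCK_SIZE) return error("oversized block");'

score = similarity(vulnerable_context, fork_candidate)
print(f"similarity = {score:.2f}", "-> likely propagated (unpatched) code" if score > 0.6 else "")
```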

#12 Breaking and Fixing Virtual Channels: Domino Attack and Donner

Authors: Lukas Aumayr (TU Wien) ; Pedro Moreno-Sanchez (IMDEA Software Institute) ; Aniket Kate (Purdue University / Supra) ; Matteo Maffei (Christian Doppler Laboratory Blockchain Technologies for the Internet of Things / TU Wien)

Payment channel networks (PCNs) mitigate the scalability issues of current decentralized cryptocurrencies. They allow for arbitrarily many payments between users connected through a path of intermediate payment channels, while requiring interacting with the blockchain only to open and close the channels. Unfortunately, PCNs are (i) tailored to payments, excluding more complex smart contract functionalities, such as the oracle-enabling Discreet Log Contracts and (ii) their need for active participation from intermediaries may make payments unreliable, slower, expensive, and privacy-invasive. Virtual channels are among the most promising techniques to mitigate these issues, allowing two endpoints of a path to create a direct channel over the intermediaries without any interaction with the blockchain. After such a virtual channel is constructed, (i) the endpoints can use this direct channel for applications other than payments and (ii) the intermediaries are no longer involved in updates. In this work, we first introduce the Domino attack, a new DoS/griefing style attack that leverages virtual channels to destruct the PCN itself and is inherent to the design adopted by the existing Bitcoin-compatible virtual channels. We then demonstrate its severity by a quantitative analysis on a snapshot of the Lightning Network (LN), the most widely deployed PCN at present. We finally discuss other serious drawbacks of existing virtual channel designs, such as the support for only a single intermediary, a latency and blockchain overhead linear in the path length, or a non-constant storage overhead per user. We then present Donner, the first virtual channel construction that overcomes the shortcomings above, by relying on a novel design paradigm. We formally define and prove security and privacy properties in the Universal Composability framework. Our evaluation shows that Donner is efficient, reduces the on-chain number of transactions for disputes from linear in the path length to a single one, which is the key to prevent Domino attacks, and reduces the storage overhead from logarithmic in the path length to constant. Donner is Bitcoin-compatible and can be easily integrated in the LN.

#13 CHKPLUG: Checking GDPR Compliance of WordPress Plugins via Cross-language Code Property Graph

Authors: Faysal Hossain Shezan (University of Virginia) ; Zihao Su (University of Virginia) ; Mingqing Kang (Johns Hopkins University) ; Nicholas Phair (University of Virginia) ; Patrick William Thomas (University of Virginia) ; Michelangelo van Dam (in2it) ; Yinzhi Cao (Johns Hopkins University) ; Yuan Tian (UCLA)

WordPress, a well-known content management system (CMS), provides so-called plugins to augment its default functionalities. One challenging problem of deploying WordPress plugins is that they may collect and process user data, such as Personally Identifiable Information (PII), which is usually regulated by laws such as the General Data Protection Regulation (GDPR). To the best of our knowledge, no prior works have studied GDPR compliance in WordPress plugins, which often involve multiple programming languages, such as PHP, JavaScript, HTML, and SQL. In this paper, we design CHKPLUG, the first automated GDPR checker of WordPress plugins for their compliance with GDPR articles related to PII. The key to CHKPLUG is to match WordPress plugin behavior with GDPR articles using graph queries to a novel cross-language code property graph (CCPG). Specifically, the CCPG models both inline language integration (such as PHP and HTML) and key-value-related connections (such as HTML and JavaScript). CHKPLUG reports a GDPR violation if certain patterns are found in the CCPG. We evaluated CHKPLUG with human-annotated WordPress plugins. Our evaluation shows that CHKPLUG achieves good performance with 98.8% TNR (True Negative Rate) and 89.3% TPR (True Positive Rate) in checking whether a certain WordPress plugin complies with GDPR. To investigate the current state of the marketplace, we perform a measurement analysis which shows that 368 plugins violate data deletion regulations, meaning that these plugins do not provide any functionality to erase user information from the website.
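
As a rough illustration of graph-query-based compliance checking (the node names and the check are hypothetical; CHKPLUG's CCPG and query patterns are far richer):

```python
# Sketch: ask whether PII collected by a plugin can ever reach a deletion routine
# in a cross-language graph; if not, flag a potential data-deletion violation.
import networkx as nx

ccpg = nx.DiGraph()
# Hypothetical nodes: a form field flows through JS and PHP into a SQL INSERT,
# but no path leads from the PII source to any deletion sink.
ccpg.add_edges_from([
    ("html:input[email]", "js:form_submit"),
    ("js:form_submit", "php:$_POST['email']"),
    ("php:$_POST['email']", "sql:INSERT users"),
    ("php:delete_user()", "sql:DELETE users"),   # deletion exists but is unreachable from the source
])

pii_sources = ["html:input[email]"]
deletion_sinks = ["sql:DELETE users"]

for src in pii_sources:
    erasable = any(nx.has_path(ccpg, src, sink) for sink in deletion_sinks)
    print(src, "->", "deletion reachable" if erasable else "potential data-deletion violation")
```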

#14 Copy-on-Flip: Hardening ECC Memory Against Rowhammer Attacks

Authors: Andrea Di Dio (Vrije Universiteit Amsterdam) ; Koen Koning (Intel) ; Herbert Bos (Vrije Universiteit Amsterdam) ; Cristiano Giuffrida (Vrije Universiteit Amsterdam)

Despite nearly decade-long mitigation efforts in academia and industry, the community is yet to find a practical solution to the Rowhammer vulnerability. Comprehensive software mitigations require complex changes to commodity systems, yielding significant run-time overhead and deterring practical adoption. Hardware mitigations, on the other hand, have generally grown more robust and efficient, but are difficult to deploy on commodity systems. Until recently, ECC memory implemented by the memory controller on server platforms seemed to provide the best of both worlds: use hardware features already on commodity systems to efficiently turn Rowhammer into a denial-of-service attack vector. Unfortunately, researchers have recently shown that attackers can perform one-bit-at-a-time memory templating and mount ECC-aware Rowhammer attacks. In this paper, we reconsider ECC memory as an avenue for Rowhammer mitigations and show that not all hope is lost. In particular, we show that it is feasible to devise a software-based design to both efficiently and effectively harden commodity ECC memory against ECC-aware Rowhammer attacks. To support this claim, we present Copy-on-Flip (CoF), an ECC-based software mitigation which uses a combination of memory _migration_ and _offlining_ to stop Rowhammer attacks on commodity server systems in a practical way. The key idea is to let the operating system interpose on all the error correction events and offline the vulnerable victim page as soon as the attacker has successfully templated a sufficient number of bit flips---while transparently migrating the victim data to a new page. We present a CoF prototype on Linux, where we also show it is feasible to operate simple memory management changes to support migration for the relevant user and kernel memory pages. Our evaluation shows CoF incurs low performance and memory overhead, while significantly reducing the Rowhammer attack surface. On typical benchmarks such as SPEC CPU2017 and Google Chrome, CoF reports a $<1.5\%$ overhead, and, on extreme I/O-intensive scenarios (saturated nginx), up to $\sim 11\%$.
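
Conceptually, the policy can be pictured as the sketch below (illustrative only; the real mechanism lives in the kernel, interposes on ECC correction reports, and the threshold here is invented):

```python
# Sketch of the Copy-on-Flip policy: count corrected-error events per physical
# page and migrate + offline a page once Rowhammer templating is suspected.
OFFLINE_THRESHOLD = 3          # hypothetical number of corrected flips before acting
corrected_counts = {}          # physical page frame -> observed corrected bit flips
offlined = set()

def migrate_page(pfn: int):
    print(f"migrating contents of page {pfn:#x} to a new frame")

def on_ecc_corrected_error(pfn: int):
    if pfn in offlined:
        return
    corrected_counts[pfn] = corrected_counts.get(pfn, 0) + 1
    if corrected_counts[pfn] >= OFFLINE_THRESHOLD:
        migrate_page(pfn)       # copy victim data to a fresh page, update mappings
        offlined.add(pfn)       # never hand this frame out again
        print(f"page {pfn:#x} offlined after {corrected_counts[pfn]} corrected flips")

# Simulated stream of ECC correction events caused by an attacker templating one page.
for _ in range(4):
    on_ecc_corrected_error(0x1a2b3)
```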

#15 Cryptographic Oracle-based Conditional Payments

Authors: Varun Madathil (North Carolina State University) ; Sri Aravinda Krishnan Thyagarajan (NTT Research) ; Dimitrios Vasilopoulos (IMDEA Software Institute) ; Lloyd Fournier (None) ; Giulio Malavolta (Max Planck Institute for Security and Privacy) ; Pedro Moreno-Sanchez (IMDEA Software Institute)

We consider a scenario where two mutually distrustful parties, Alice and Bob, want to perform a payment conditioned on the outcome of some real-world event. A semi-trusted oracle (or a threshold number of oracles, in a distributed trust setting) is entrusted to attest that such an outcome indeed occurred, and only then the payment is successfully made. Such oracle-based conditional (ObC) payments are ubiquitous in many real-world applications, like financial adjudication, pre-scheduled payments or trading, and are a necessary building block to introduce information about real-world events into blockchains. In this work we show how to realize ObC payments with provable security guarantees and efficient instantiations. To do this, we propose a new cryptographic primitive that we call verifiable witness encryption based on threshold signatures (VweTS): Users can encrypt signatures on payments that can be decrypted if a threshold number of signers (e.g., oracles) sign another message (e.g., the description of an event outcome). We require two security notions: (1) one-wayness, which guarantees that without the threshold number of signatures, the ciphertext hides the encrypted signature, and (2) verifiability, which guarantees that a ciphertext that correctly verifies can be successfully decrypted to reveal the underlying signature. We present provably secure and efficient instantiations of VweTS where the encrypted signature can be some of the widely used schemes like Schnorr, ECDSA or BLS signatures. Our main technical innovation is a new batching technique for cut-and-choose, inspired by the work of Lindell-Riva on garbled circuits. Our VweTS instantiations can be readily used to realize ObC payments on virtually all cryptocurrencies of today in a fungible, cost-efficient, and scalable manner. The resulting ObC payments are the first to support distributed trust (i.e., multiple oracles) without requiring any form of synchrony or coordination among the users and the oracles. To demonstrate the practicality of our scheme, we present a prototype implementation, and our benchmarks on commodity hardware show that the computation overhead is less than 25 seconds even for a threshold of 4-of-7 and a payment conditioned on 1024 different real-world event outcomes, while the communication overhead is below 2.3 MB.

#16 DiffCSP: Finding Browser Bugs in Content Security Policy Enforcement through Differential Testing

Authors: Seongil Wi (KAIST) ; Trung Tin Nguyen (CISPA Helmholtz Center for Information Security and Saarland University) ; Jihwan Kim (KAIST) ; Ben Stock (CISPA Helmholtz Center for Information Security) ; Sooel Son (KAIST)

The Content Security Policy (CSP) is one of the de facto security mechanisms that mitigate web threats. Many websites have been deploying CSPs mainly to mitigate cross-site scripting (XSS) attacks by instructing client browsers to constrain JavaScript (JS) execution. However, a browser bug in CSP enforcement enables an adversary to bypass a deployed CSP, posing a security threat. As the CSP specification evolves, CSP becomes more complicated in supporting an increasing number of directives, which brings additional complexity to implementing correct enforcement behaviors. Unfortunately, the finding of CSP enforcement bugs in a systematic way has been largely understudied. In this paper, we propose DiffCSP, the first differential testing framework to find CSP enforcement bugs involving JS execution. DiffCSP generates CSPs and a comprehensive set of HTML instances that exhibit all known ways of executing JS snippets. DiffCSP then executes each HTML instance for each generated policy across different browsers, thereby collecting inconsistent execution results. To analyze a large volume of the execution results, we leverage a decision tree and identify common causes of the observed inconsistencies. We demonstrate the efficacy of DiffCSP by finding 29 security bugs and eight functional bugs. We also show that three bugs are due to unclear descriptions of the CSP specification. We further identify the common root causes of CSP enforcement bugs, such as incorrect CSP inheritance and hash handling. We confirm the risky trend of client browsers deriving completely different interpretations from the same CSPs, which raises security concerns. Our study demonstrates the effectiveness of DiffCSP for identifying CSP enforcement bugs, and our findings have contributed to patching 12 security bugs in major browsers, including Chrome and Safari.
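
The differential-testing core can be pictured with the following sketch (the browsers, policies, and the stand-in execution oracle are all hypothetical; DiffCSP drives real browsers and uses a decision tree for analysis):

```python
# Sketch: run the same (CSP, HTML) pair under several browsers and flag any
# disagreement about whether the JS snippet executed.
from itertools import product

policies = ["script-src 'none'", "script-src 'unsafe-inline'"]
html_instances = ["<script>probe()</script>", "<img src=x onerror=probe()>"]
browsers = ["BrowserA", "BrowserB", "BrowserC"]

# Hypothetical per-browser result: True means the JS snippet ran under the CSP.
def run_in_browser(browser: str, csp: str, html: str) -> bool:
    buggy = browser == "BrowserB" and "onerror" in html and "'none'" in csp
    return "'unsafe-inline'" in csp or buggy     # stand-in for real instrumentation

for csp, html in product(policies, html_instances):
    results = {b: run_in_browser(b, csp, html) for b in browsers}
    if len(set(results.values())) > 1:           # inconsistency across browsers
        print("potential CSP enforcement bug:", csp, "|", html, "|", results)
```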

#17 Do Not Give a Dog Bread Every Time He Wags His Tail: Stealing Passwords through Content Queries (CONQUER) Attacks

Authors: Chongqing Lei (Southeast University) ; Zhen Ling (Southeast University) ; Yue Zhang (Jinan University) ; Kai Dong (Southeast University) ; Kaizheng Liu (Southeast University) ; Junzhou Luo (Southeast University) ; Xinwen Fu (University of Massachusetts Lowell)

The Android accessibility service was designed to assist individuals with disabilities in using Android devices. However, it has been exploited by attackers to steal user passwords due to design shortcomings. Google has implemented various countermeasures to make it difficult for these types of attacks to be successful on modern Android devices. In this paper, we present a new type of side channel attack called content queries (CONQUER) that can bypass these defenses. We discovered that Android does not prevent the content of passwords from being queried by the accessibility service, allowing malware with this service enabled to enumerate the combinations of content to brute force the password. While this attack seems simple to execute, there are several challenges that must be addressed in order to successfully launch it against real-world apps. These include the use of lazy queries to differentiate targeted password strings, active queries to determine the right timing for the attack, and timing- and state-based side channels to infer case-sensitive passwords. Our evaluation results demonstrate that the CONQUER attack is effective at stealing passwords, with an average one-time success rate of 64.91%. This attack also poses a threat to all Android versions from 4.1 to 12, and can be used against tens of thousands of apps. In addition, we analyzed the root cause of the CONQUER attack and discussed several countermeasures to mitigate the potential security risks it poses.
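
A pure-Python simulation of the brute-force loop conveys the idea (the real attack issues Android accessibility content queries; the oracle and password below are invented):

```python
# Conceptual simulation of the CONQUER brute-force loop: a content-query oracle
# "wags its tail" whenever a guessed prefix is consistent with the password field.
import string

SECRET = "S3cret!"   # hypothetical password held by the target text field

def content_query_matches(prefix: str) -> bool:
    # Stand-in for an accessibility query revealing prefix consistency.
    return SECRET.startswith(prefix)

def brute_force(max_len: int = 16) -> str:
    recovered = ""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    for _ in range(max_len):
        for ch in alphabet:
            if content_query_matches(recovered + ch):
                recovered += ch
                break
        else:
            break   # no extension matched; assume the full password was recovered
    return recovered

print(brute_force())   # -> S3cret!
```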

#18 DOITRUST: Dissecting On-chain Compromised Internet Domains via Graph Learning

Authors: Shuo Wang (CSIRO's Data61 & Cybersecurity CRC, Australia) ; Mahathir Almashor (CSIRO's Data61 & Cybersecurity CRC, Australia) ; Alsharif Abuadbba (CSIRO's Data61 & Cybersecurity CRC, Australia) ; Ruoxi Sun (CSIRO's Data61) ; Minhui Xue (CSIRO's Data61) ; Calvin Wang (CSIRO's Data61) ; Raj Gaire (CSIRO's Data61 & Cybersecurity CRC, Australia) ; Surya Nepal (CSIRO's Data61 & Cybersecurity CRC, Australia) ; Seyit Camtepe (CSIRO's Data61 & Cybersecurity CRC, Australia)

Traditional block/allow lists remain a significant defense against malicious websites, by limiting end-users' access to domain names. However, such lists are often incomplete and reactive in nature. In this work, we first introduce an expansion graph which creates organically grown Internet domain allow-lists based on trust transitivity by crawling hyperlinks. Then, we highlight the gap in monitoring the nodes of such an expansion graph, where malicious nodes are buried deep along the paths from the compromised websites, termed "on-chain compromise". The stealthiness (evasion of detection) and the large scale of the graph impede the application of existing malicious-website analysis methods to identifying on-chain compromises within the sparsely labeled graph. To address the unique challenges of revealing on-chain compromises, we propose a two-step integrated scheme, DoITrust, leveraging both individual node features and topology analysis: (i) we develop a semi-supervised suspicion prediction scheme to predict the probability of a node being relevant to targets of compromise (i.e., the denied nodes), including a novel node ranking approach as an efficient global propagation scheme to incorporate the topology information, and a scalable graph learning scheme to separate the global propagation from the training of the local prediction model, and (ii) based on the suspicion prediction results, efficient pruning strategies are proposed to further remove highly suspicious nodes from the crawled graph and analyze the underlying indicators of compromise. Experimental results show that DoITrust achieves 90% accuracy using less than 1% labeled nodes for the suspicion prediction, and its learning capability outperforms existing node-based and structure-based approaches. We also demonstrate that DoITrust is portable and practical. We manually review the detected compromised nodes, finding that at least 94.55% of them have suspicious content, and investigate the underlying indicators of on-chain compromise.
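
A generic suspicion-propagation pass over a hyperlink graph illustrates the flavor of the approach (DoITrust's ranking and scalable learning scheme are considerably more involved; the graph and damping factor are invented):

```python
# Sketch: propagate suspicion backwards from a few known-bad (denied) nodes along
# hyperlinks, so intermediate nodes on the chain to the payload become suspicious.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("seed.org", "partner.com"), ("partner.com", "blog.net"),
    ("blog.net", "cdn-evil.xyz"), ("cdn-evil.xyz", "payload.ru"),
])

suspicion = {n: 0.0 for n in g}
suspicion["payload.ru"] = 1.0          # sparse label: known denied/malicious node

for _ in range(5):                     # a few propagation rounds
    updated = dict(suspicion)
    for node in g:
        successors = list(g.successors(node))
        if successors:
            neighbor_avg = sum(suspicion[s] for s in successors) / len(successors)
            updated[node] = max(suspicion[node], 0.5 * neighbor_avg)   # 0.5 = damping factor
    suspicion = updated

for node, s in sorted(suspicion.items(), key=lambda kv: -kv[1]):
    print(f"{node:15s} suspicion={s:.3f}")
```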

#19 Double and Nothing: Understanding and Detecting Cryptocurrency Giveaway Scams

Authors: Xigao Li (Stony Brook University) ; Anurag Yepuri (Stony Brook University) ; Nick Nikiforakis (Stony Brook University)

As cryptocurrencies increase in popularity and users obtain and manage their own assets, attackers are pivoting from just abusing cryptocurrencies as a payment mechanism, to stealing crypto assets from end users. In this paper, we report on the first large-scale analysis of cryptocurrency giveaway scams. Giveaway scams are deceptively simple scams where attackers set up webpages advertising fake events and promising to double or triple the funds that users send to a specific wallet address. To understand the population of these scams in the wild, we design and implement CryptoScamTracker, a tool that uses Certificate Transparency logs to identify likely giveaway scams. Through a 6-month-long experiment, CryptoScamTracker identified a total of 10,079 giveaway scam websites targeting users of all popular cryptocurrencies. In addition to analyzing the hosting and domain preferences of giveaway scammers, we perform the first quantitative analysis of stolen funds using the public blockchains of the abused cryptocurrencies, extracting the transactions corresponding to 2,266 wallets belonging to scammers. We find that just for the scams discovered in our reporting period, attackers have stolen the equivalent of tens of millions of dollars, organizing large-scale campaigns across different cryptocurrencies. Lastly, we find evidence that attackers try to re-victimize users by offering fund-recovery services and that some victims send funds multiple times to the same scammers.
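
A simplified sketch of mining Certificate Transparency entries for giveaway-scam candidates (the keyword pattern and domains are invented; CryptoScamTracker's pipeline is more elaborate):

```python
# Sketch: filter newly logged domain names for wording typical of crypto
# giveaway scams, producing candidates for further verification.
import re

SCAM_PATTERN = re.compile(
    r"(?:eth|btc|ada|sol|doge|xrp)[-.]?(?:give|giveaway|event|promo|x2|2x|bonus)|"
    r"(?:elon|musk|tesla|ark)[-.]?(?:crypto|drop|give)", re.IGNORECASE)

# Hypothetical domain names as they might appear in new CT log entries.
ct_entries = [
    "login.example.com",
    "eth-giveaway-event.org",
    "tesla-cryptodrop.live",
    "shop.smallbusiness.co.uk",
    "btc2x-promo.net",
]

candidates = [d for d in ct_entries if SCAM_PATTERN.search(d)]
print(candidates)   # -> ['eth-giveaway-event.org', 'tesla-cryptodrop.live', 'btc2x-promo.net']
```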

#20 Drone Security and the Mysterious Case of DJI's DroneID

Authors: Nico Schiller (Ruhr-Universität Bochum) ; Merlin Chlosta (CISPA Helmholtz Center for Information Security) ; Moritz Schloegel (Ruhr-Universität Bochum) ; Nils Bars (Ruhr University Bochum) ; Thorsten Eisenhofer (Ruhr University Bochum) ; Tobias Scharnowski (Ruhr-University Bochum) ; Felix Domke (Independent) ; Lea Schönherr (CISPA Helmholtz Center for Information Security) ; Thorsten Holz (CISPA Helmholtz Center for Information Security)

Consumer drones enable high-class aerial video photography, promise to reform the logistics industry, and are already used for humanitarian rescue operations and during armed conflicts. In contrast to their widespread adoption and high popularity, the low entry barrier for air mobility, a traditionally heavily regulated sector, poses many risks to safety, security, and privacy. Malicious parties could, for example, (mis-)use drones for surveillance, transportation of illegal goods, or cause economic damage by intruding into the closed airspace above airports. To prevent harm, drone manufacturers employ several countermeasures to enforce safe and secure use of drones, e.g., they impose software limits regarding speed and altitude, or use geofencing to implement no-fly zones around airports or prisons. Complementing traditional countermeasures, drones from the market leader DJI implement a tracking protocol called DroneID, which is designed to transmit the position of both the drone and its operator to authorized entities such as law enforcement or operators of critical infrastructures. In this paper, we analyze security and privacy claims for drones, focusing on the leading manufacturer DJI with a market share of 94%. We first systemize the drone attack surface and investigate an attacker capable of eavesdropping on the drone's over-the-air data traffic. Based on reverse engineering of DJI firmware, we design and implement a decoder for DJI's proprietary tracking protocol DroneID, using only cheap COTS hardware. We show that the transmitted data is not encrypted, but accessible to anyone, compromising the drone operator's privacy. Second, we conduct a comprehensive analysis of drone security: Using a combination of reverse engineering, a novel fuzzing approach tailored to DJI's communication protocol, and hardware analysis, we uncover several critical flaws in drone firmware that allow attackers to gain elevated privileges on two different DJI drones and their remote control. Such root access paves the way to disable or bypass countermeasures and abuse drones. In total, we found 16 vulnerabilities, ranging from denial of service to arbitrary code execution. 14 of these bugs can be triggered remotely via the operator's smartphone, allowing us to crash the drone mid-flight.

#21 EdgeTDC: On the Security of Time Difference of Arrival Measurements in CAN Bus Systems

Authors: Marc Roeschlin (ETH Zurich, Switzerland) ; Giovanni Camurati (ETH Zurich, Switzerland) ; Pascal Brunner (ETH Zurich, Switzerland) ; Mridula Singh (CISPA Helmholtz Center for Information Security) ; Srdjan Capkun (ETH Zurich, Switzerland)

A Controller Area Network (CAN bus) is a message-based protocol for intra-vehicle communication designed mainly with robustness and safety in mind. In real-world deployments, CAN bus does not offer common security features such as message authentication. Because automotive suppliers need to guarantee interoperability, most manufacturers rely on a decade-old standard (ISO 11898), and changing the format by introducing MACs is impractical. Research has therefore suggested addressing this lack of authentication with CAN bus Intrusion Detection Systems (IDSs) that augment the bus with separate modules. IDSs attribute messages to the respective sender by measuring physical-layer features of the transmitted frame. Those features are based on timings, voltage levels, transients, and, more recently, Time Difference of Arrival (TDoA) measurements. In this work, we show that TDoA-based approaches presented in prior art are vulnerable to novel spoofing and poisoning attacks. We describe how those proposals can be fixed and present our own method called EdgeTDC. Unlike existing methods, EdgeTDC does not rely on analog-to-digital converters (ADCs) with high sampling rate and high dynamic range to capture the signals at sample-level granularity. Our method uses time-to-digital converters (TDCs) to detect the edges and measure their timings. Despite being inexpensive to implement, TDCs offer low latency, high location precision and the ability to measure every single edge (rising and falling) in a frame. Measuring each edge makes analog sampling redundant and allows the calculation of statistics that can even detect tampering with parts of a message. Through extensive experimentation, we show that EdgeTDC can successfully thwart masquerading attacks in the CAN system of modern vehicles.
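
The arithmetic behind TDoA-based sender attribution can be sketched as follows (timestamps and fingerprints are invented; EdgeTDC's per-edge statistics and hardware are not modeled):

```python
# Sketch: two timing taps at opposite ends of the bus timestamp the edges of a
# frame; the mean time difference of arrival localizes the transmitter and is
# compared against per-ECU fingerprints learned during enrollment.
edges_tap_A = [10.000, 12.004, 14.002, 16.005]   # microseconds, hypothetical edge timestamps
edges_tap_B = [10.042, 12.045, 14.044, 16.046]

tdoa_samples = [b - a for a, b in zip(edges_tap_A, edges_tap_B)]
mean_tdoa = sum(tdoa_samples) / len(tdoa_samples)

fingerprints = {"ECU_engine": 0.041, "ECU_door": -0.030, "ECU_infotainment": 0.005}
claimed_sender = "ECU_door"                      # sender implied by the frame ID

best_match = min(fingerprints, key=lambda ecu: abs(fingerprints[ecu] - mean_tdoa))
print(f"mean TDoA = {mean_tdoa:.3f} us, best match = {best_match}")
if best_match != claimed_sender:
    print("masquerading suspected: frame ID claims", claimed_sender)
```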

#22 Fine-Grained Trackability in Protocol Executions

Authors: Ksenia Budykho (Surrey Centre for Cyber Security, University of Surrey, UK) ; Ioana Boureanu (Surrey Centre for Cyber Security, University of Surrey, UK) ; Steve Wesemeyer (Surrey Centre for Cyber Security, University of Surrey, UK) ; Daniel Romero (NCC Group) ; Matt Lewis (NCC Group) ; Yogaratnam Rahulan (5G/6G Innovation Centre - 5GIC/6GIC, University of Surrey, UK) ; Fortunat Rajaona (Surrey Centre for Cyber Security, University of Surrey, UK) ; Steve Schneider (Surrey Centre for Cyber Security, University of Surrey, UK)

We introduce a new framework, TrackDev, for encoding and analysing the tracing or "tracking" of an entity (e.g., a device) via its executions of a protocol or its usages of a system. TrackDev considers multiple dimensions combined: whether the attacker is active or passive, whether an entity is trackable in every single appearance or only across a compound set of appearances, and whether the entity can be explicitly or implicitly identified. TrackDev can be applied to most identification-based systems. TrackDev is designed to be applied in practice, over actual executions of systems; to this end, we test TrackDev on real-life traffic for two well-known protocols, the LoRaWAN Join and the 5G handovers, showing new trackability attacks therein and proposing countermeasures. We study the strength of TrackDev's various trackability properties and show that many of our notions are incomparable amongst each other, thus justifying the fine-grained nature of TrackDev. Finally, we detail how the main thrust of TrackDev can be mechanised in formal-verification tools, without any loss; we exemplify this fully on the LoRaWAN Join, in the Tamarin prover. In this process, we also uncover and discuss two important aspects: (a) TrackDev's separation between "explicit" and "implicit" trackability offers new formal-verification insights; (b) our analyses of the LoRaWAN Join protocol in Tamarin against TrackDev as well as against existing approximations of unlinkability by Baelde et al. concretely show that the latter approximations can be coarser than our notions.

#23 Focusing on Pinocchio's Nose: A Gradients Scrutinizer to Thwart Split-Learning Hijacking Attacks Using Intrinsic Attributes

Authors: Jiayun Fu (Huazhong University of Science and Technology) ; Xiaojing Ma (Huazhong University of Science and Technology) ; Bin B. Zhu (Microsoft Research Asia) ; Pingyi Hu (Huazhong University of Science and Technology) ; Ruixin Zhao (Huazhong University of Science and Technology) ; Yaru Jia (Huazhong University of Science and Technology) ; Peng Xu (Huazhong University of Science and Technology) ; Hai Jin (Huazhong University of Science and Technology) ; Dongmei Zhang (Microsoft Research)

Split learning is a privacy-preserving distributed learning approach that has gained momentum recently, but it also faces new security challenges. FSHA is a serious threat to split learning. In FSHA, a malicious server hijacks training to trick clients into training the encoder of an autoencoder instead of a classification model. Intermediate results sent to the server by a client are actually latent codes of private training samples, which can be reconstructed with high fidelity from the received codes with the decoder of the autoencoder. SplitGuard is the only existing effective defense against hijacking attacks. It is an active method that injects falsely labeled data to incur abnormal behaviors to detect hijacking attacks. Such injection also adversely impacts the honest training of intended models. In this paper, we first show that SplitGuard is vulnerable to an adaptive hijacking attack named SplitSpy. SplitSpy exploits the same property that SplitGuard exploits to detect hijacking attacks. In SplitSpy, a malicious server maintains a shadow model that performs the intended task to detect falsely labeled data and evade SplitGuard. Our experimental evaluation indicates that SplitSpy can effectively evade SplitGuard. Then we propose a novel passive detection method, named Gradients Scrutinizer, which relies on intrinsic differences between gradients from an intended model and those from a malicious model: the expected similarity among gradients of same-label samples differs from the expected similarity among gradients of different-label samples for an intended model, while they are the same for a malicious model. This intrinsic distinguishability enables Gradients Scrutinizer to effectively detect split-learning hijacking attacks without tampering with the honest training of intended models. Our extensive evaluation indicates that Gradients Scrutinizer can effectively thwart both known split-learning hijacking attacks and adaptive counterattacks against it.
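
The intrinsic signal the detector relies on can be illustrated with synthetic gradients (a simplified sketch, not the paper's detector):

```python
# Sketch: for an honest task model, gradients of same-label samples are more
# similar to each other than gradients of different-label samples; a hijacking
# server that trains an autoencoder erases that label-dependent gap.
import numpy as np

def mean_pairwise_cosine(grads):
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sims = g @ g.T
    n = len(g)
    return (sims.sum() - n) / (n * (n - 1))      # exclude self-similarity

def label_similarity_gap(gradients, labels):
    same = [mean_pairwise_cosine(gradients[labels == c]) for c in np.unique(labels)]
    overall = mean_pairwise_cosine(gradients)
    return float(np.mean(same) - overall)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
class_directions = rng.standard_normal((2, 32))
honest_grads = class_directions[labels] + 0.3 * rng.standard_normal((200, 32))
hijack_grads = rng.standard_normal((200, 32))    # no label structure

print("honest gap:", round(label_similarity_gap(honest_grads, labels), 3))
print("hijack gap:", round(label_similarity_gap(hijack_grads, labels), 3))  # near zero
```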

#24 Folk Models of Misinformation on Social Media

Authors: Filipo Sharevski (DePaul University) ; Amy Devine (DePaul University) ; Emma Pieroni (DePaul University) ; Peter Jachim (DePaul University)

In this paper we investigate what *folk models of misinformation* exist on social media with a sample of 235 social media users. Work on social media misinformation does not investigate how ordinary users deal with it; rather, the focus is mostly on the anxiety, tensions, or divisions misinformation creates. Studying only the structural aspects also overlooks how misinformation is internalized by users on social media and thus is quick to prescribe "inoculation" strategies for the presumed lack of immunity to misinformation. How users grapple with social media content to develop "natural immunity" as a precursor to misinformation resilience, however, remains an open question. We have identified at least five *folk models* that conceptualize misinformation as either: *political (counter)argumentation*, *out-of-context narratives*, *inherently fallacious information*, *external propaganda*, or simply *entertainment*. We use the rich conceptualizations embodied in these folk models to uncover how social media users minimize adverse reactions to misinformation encounters in their everyday lives.

#25 FUZZILLI: Fuzzing for JavaScript JIT Compiler Vulnerabilities

Authors: Samuel Groß (Google) ; Simon Koch (TU Braunschweig) ; Lukas Bernhard (Ruhr-University Bochum) ; Thorsten Holz (CISPA Helmholtz Center for Information Security) ; Martin Johns (TU Braunschweig)

JavaScript has become an essential part of the Internet infrastructure, and today's interactive web applications would be inconceivable without this programming language. On the downside, this interactivity implies that web applications rely on an ever-increasing amount of computationally intensive JavaScript code, which burdens the JavaScript engine responsible for efficiently executing the code. To meet these rising performance demands, modern JavaScript engines ship with sophisticated just-in-time (JIT) compilers. However, JIT compilers are a complex technology and, consequently, provide a broad attack surface for potential faults that might even be security-critical. Previous work on discovering software faults in JavaScript engines found many vulnerabilities, often using fuzz testing. Unfortunately, these fuzzing approaches are not designed to generate source code that actually triggers JIT semantics. Consequently, JIT vulnerabilities are unlikely to be discovered by existing methods. In this paper, we close this gap and present the first fuzzer that focuses on JIT vulnerabilities. More specifically, we present the design and implementation of an intermediate representation (IR) that focuses on discovering JIT compiler vulnerabilities. We implemented a complete prototype of the proposed approach and evaluated our fuzzer over a period of six months. In total, we discovered 17 confirmed security vulnerabilities. Our results show that targeted JIT fuzzing is possible and a dangerously neglected gap in fuzzing coverage for JavaScript engines.
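
A tiny sketch of IR-level mutation and lifting conveys the approach (the IR below is invented and far simpler than FuzzIL, the fuzzer's actual IR):

```python
# Sketch: mutations operate on typed IR operations, and the result is "lifted"
# to JavaScript that repeatedly exercises the function so JIT compilation kicks in.
import random

ir_program = [
    ("LoadInt", "v0", 1),
    ("LoadInt", "v1", 2),
    ("BinaryOp", "v2", "+", "v0", "v1"),
    ("Return", "v2"),
]

def mutate(program):
    program = list(program)
    if random.random() < 0.5:                      # tweak an integer constant
        idx = random.choice([i for i, ins in enumerate(program) if ins[0] == "LoadInt"])
        name, var, val = program[idx]
        program[idx] = (name, var, random.choice([0, -1, 2**31 - 1, val + 1]))
    else:                                          # swap the binary operator
        idx = next(i for i, ins in enumerate(program) if ins[0] == "BinaryOp")
        name, dst, _, a, b = program[idx]
        program[idx] = (name, dst, random.choice(["-", "*", "|", ">>"]), a, b)
    return program

def lift(program):
    lines = []
    for ins in program:
        if ins[0] == "LoadInt":
            lines.append(f"let {ins[1]} = {ins[2]};")
        elif ins[0] == "BinaryOp":
            lines.append(f"let {ins[1]} = {ins[3]} {ins[2]} {ins[4]};")
        elif ins[0] == "Return":
            lines.append(f"return {ins[1]};")
    return "function f() { " + " ".join(lines) + " } for (let i = 0; i < 1e5; i++) f();"

random.seed(7)
print(lift(mutate(ir_program)))
```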