SACMAT 2024: Proceedings of the 29th ACM Symposium on Access Control Models and Technologies
SESSION: Keynote Talks
AI/ML, Graphs and Access Control: Towards Holistic Identity and Access Management
Vulnerabilities in identity and access management (IAM) are among the most common causes of data breaches, with adversarial impacts on security, privacy, and compliance postures. Account breaches, incorrectly designed access control policies, weaknesses in authentication and credential management, and vulnerable session management are some of the security issues that lead to eventual compromise of the crown jewels and, in turn, to data breaches. The lifecycles of subjects and their identities, of objects and resources, and of the permissions and authorization policies are intertwined in a complex manner in each specific scenario. Subjects, objects, and permissions are often hard to define or isolate from one another, especially in the context of machine learning. Analyzing the evolution of these entities and their provenance is essential not only for forensic analysis of a breach; it should also be a proactive, ongoing process.
To manage these security issues and the risks they entail, holistic, end-to-end identity and access management in a secure and privacy-preserving manner is the need of yesterday, today, and the future. Over the past couple of decades, we have encountered this problem time and again, in academic and industry research as well as in the development and deployment of products, services, and processes.
Three key ingredients are needed to address this problem in a holistic manner: (1) graphs, (2) machine learning, and (3) decentralized computing (i.e., Web3, blockchains). Further, with the advent of generative AI and large language models, the question arises as to which problems they can help solve, which they may further exacerbate, and which new challenges they may introduce. In this talk, I plan to delve into a discussion of the following: (a) the holistic, end-to-end nature of IAM; (b) the interplay between these three elements - graphs, machine learning, and Web3 - as well as generative AI, and how they can help; and (c) the research challenges that must be addressed to reduce the security, privacy, and compliance risks in identity and access management.
Cryptographic Enforcement of Access Control Policies: Models, Applications, and Future Directions
Cryptographic enforcement of access control policies is a rapidly evolving field with ongoing research and development aimed at addressing emerging security challenges and requirements.
Among the different techniques to cryptographically enforce access control policies, hierarchical key assignment schemes play a central role, since they can be used in a variety of application domains. In this talk, we give an overview of this cryptographic primitive, discussing different models, applications, and future research directions.
Trustworthy Artificial Intelligence for Securing Transportation Systems
Artificial Intelligence (AI) techniques are being applied to numerous applications, from healthcare to cyber security to finance. For example, Machine Learning (ML) algorithms are being applied to security problems such as malware analysis and insider threat detection. However, there are many challenges in applying ML algorithms: (i) ML algorithms may violate the privacy of individuals, because we can gather massive amounts of data and apply ML algorithms to extract highly sensitive information; (ii) ML algorithms may show bias and be unfair to various segments of the population; (iii) the ML algorithms themselves may be attacked, possibly resulting in catastrophic errors, including in cyber-physical systems such as transportation systems; and (iv) ML algorithms must be safe and not harm society. Therefore, when ML algorithms are applied to transportation systems for handling congestion, preventing accidents, and giving advice to drivers, we must ensure that they are secure, preserve privacy and fairness, and support the safe operation of the transportation systems. Other AI techniques, such as Generative AI (GenAI), are also being applied not only to secure systems design but also to determine attacks and potential solutions. This presentation is divided into two parts. First, we describe our research over the past decade on trustworthy ML systems - systems that are secure and that ensure privacy, fairness, and safety. We discuss our ensemble-based ML models for detecting attacks, our research on developing adversarial machine learning techniques, and our work on securing the Internet of Transportation systems using traditional methods such as Extended Kalman Filters to detect cyberattacks. Second, we discuss the research we recently started as part of the USDOT National University Transportation Center TraCR (Transportation Cybersecurity and Resiliency) led by Clemson University. In particular, we describe (i) the application of federated machine learning techniques for detecting attacks in transportation systems; (ii) publishing synthetic transportation data sets that preserve privacy; (iii) fairness algorithms for transportation systems; and (iv) examining how GenAI systems are being integrated with transportation systems to provide security. Our focus includes the following:
· Data Privacy: We are designing a privacy-aware, policy-based data management framework for transportation systems. Our work involves collecting the requisite data and developing analysis tools to identify and quantify privacy risks. Existing privacy-preserving, differentially private synthetic data generation techniques, which tailor data utility for generic ML accuracy, are not well suited for specific applications. We are developing synthetic data generation tools for transportation systems applications and will develop new ML algorithms that can leverage these datasets.
· Fairness: We have developed a novel adaptive fairness-aware online meta-learning algorithm, FairSAOML, which adapts to changing environments in both bias control and model precision. Our current work focuses on adapting our framework to fairness in transportation systems and controlling bias over time, especially ensuring group fairness across different protected sub-populations, and on identifying attributes, using explainable AI techniques, that might help mitigate bias and develop equitable algorithms. We have also developed a second system, FairDolce, that recognizes objects under fairness constraints in a changing environment; we are adapting it to transportation applications. For example, pedestrian detection (whether or not the object being seen is a pedestrian) must be fair with respect to the race or gender of the individuals being detected under changing environments (e.g., rainy, cloudy, or sunny).
· Adversarial ML: Our prior work on adversarial ML models targeted traditional datasets such as network traffic data. Our current focus is on adapting our approach to AV-based sensor data. Our ML models are being applied to sensor data for object recognition and traffic management, and these models may be attacked by an adversary. We will study various attack models, investigate how interactions may occur between the model and the adversary, and subsequently develop appropriate adversarial ML models that operate on AV sensor data.
· Attack Detection: Smart vehicles are exposed to various attacks, making it difficult for manufacturers to collaboratively train anomaly/attack detection models. Yet it would be ideal if all the data available across manufacturers could be used to build robust attack detection systems. To achieve this, we developed FAST-SV, which combines federated learning with augmentation techniques to build a highly performant attack detection system for smart cars.
· Safety: Safety has been studied for cyber-physical systems, where formal methods are applied to specify safety properties and subsequently verify that the system satisfies its specifications. Our goal, however, is to ensure that the ML algorithms utilized by transportation systems are safe. This involves developing an AI governance framework that requires, among other things, transparency and explainability of the ML algorithms utilized by the transportation system.
SESSION: Regular Track 1 (Privacy)
ToneCheck: Unveiling the Impact of Dialects in Privacy Policy
Users frequently struggle to decipher privacy policies, facing challenges due to the legalese they often contain, which leaves trust and comprehension shrouded in ambiguity. This study examines the transformative power of language, exploring how different linguistic tones can bridge the gap between legal and technical jargon and genuine user engagement. Through a comparative analysis involving diverse focus groups, we immersed participants in three distinct policy variations: legalistic, casual, and empathetic, and explored how these tones reshape the user experience and bridge the gap between legal discourse and comprehension. Analysis of the data revealed significant associations between linguistic tone and user trust and comprehension. The adoption of an empathetic tone significantly enhanced user trust, as evidenced by a 40.4% increase compared to alternative language styles. This preference highlights the human desire for genuine connection, even in the intricate domain of data privacy. Furthermore, comprehension rose for both empathetic and casual tones, leaving legalistic language lagging far behind. This suggests a clear path towards user-friendly policies, where clarity takes precedence over complexity. Our exploration goes beyond mere compliance: we illustrate the subtle relationship between linguistic shifts and user perception. By identifying the language that fosters trust and understanding, we lay the groundwork for privacy policies that not only meet legal requirements but also enhance user trust and comprehension.
Make Split, not Hijack: Preventing Feature-Space Hijacking Attacks in Split Learning
The popularity of Machine Learning (ML) makes the privacy of sensitive data more imperative than ever. Collaborative learning techniques like Split Learning (SL) aim to protect client data while enhancing ML processes. Though promising, SL has been shown to be vulnerable to a plethora of attacks, raising concerns about its effectiveness for data privacy. In this work, we introduce a hybrid approach combining SL and Function Secret Sharing (FSS) to ensure client data privacy. The client adds a random mask to the activation map before sending it to the servers. The servers never access the original function but instead work with shares generated using FSS. Consequently, during both forward and backward propagation, the servers cannot reconstruct the client's raw data from the activation map. Furthermore, through visual invertibility experiments, we demonstrate that the server is incapable of reconstructing the raw image data from the activation map when FSS is used. This reduces privacy leakage compared to other SL-based approaches in which the server can access client input information. Our approach also ensures security against feature-space hijacking attacks, protecting sensitive information from potential manipulation. Our protocols yield promising results, reducing communication overhead by over 2× and training time by over 7× compared to the same model with FSS but without any SL. We also show that our approach achieves > 96% accuracy, comparable to the plaintext models.
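The core client-side step described above (masking the activation map before it leaves the client) can be illustrated with a minimal sketch; the model architecture, tensor shapes, and function names below are hypothetical, and real FSS share generation is considerably more involved than the additive masking shown here.

    import torch

    # Hypothetical client-side network up to the split point.
    client_model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
        torch.nn.ReLU(),
    )

    def mask_activation(x: torch.Tensor):
        # Mask the activation map before it leaves the client. In the
        # paper's protocol the servers then compute over FSS shares of
        # the masked value; only the masking step is sketched here.
        activation = client_model(x)          # smashed data at the split point
        mask = torch.randn_like(activation)   # fresh random mask per batch
        return activation + mask, mask        # the mask never leaves the client

    masked, mask = mask_activation(torch.randn(1, 3, 32, 32))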
Making Privacy-preserving Federated Graph Analytics Practical (for Certain Queries)
Privacy-preserving federated graph analytics is an emerging area of research. The goal is to run graph analytics queries over a set of devices that are organized as a graph while keeping the raw data on the devices rather than centralizing it. Further, no entity may learn any new information beyond the final query result; for instance, a device may not learn a neighbor's data. Prior state-of-the-art work for this problem provides privacy guarantees for a broad set of queries in a strong threat model where devices can be malicious, but it imposes an impractical overhead: for a certain query, each device locally requires over 8.79 hours of CPU time and 5.73 GiB of network transfers. This paper presents Colo, a new, low-cost system for privacy-preserving federated graph analytics that requires minutes of CPU time and a few MiB in network transfers for a particular subset of queries. At the heart of Colo is a new secure computation protocol that enables a device to securely and efficiently evaluate a graph query in its local neighborhood while hiding device data, edge data, and topology data. An implementation and evaluation of Colo shows that, for a variety of COVID-19 queries over a population of 1M devices, it requires less than 8.4 minutes of a device's CPU time and 4.93 MiB in network transfers - improvements of up to three orders of magnitude.
SESSION: Work-In-Progress Track
WiP: Enhancing the Comprehension of XACML Policies
Policy comprehension is crucial for ensuring data protection. Yet, policies written in flexible and expressive languages such as XACML are not easy to comprehend. In this work, we propose a visualization framework to facilitate the comprehension of XACML policies and their evaluation. Our framework shows a tree representation of the XACML policies to be enforced and highlights the contribution of individual policy elements to the overall access decision, thus supporting the understanding of how this decision results from the interplay between possibly conflicting access requirements. We implemented our visualization framework as an extension to SAFAX, an XACML-based framework that offers authorization as a service.
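As a toy illustration of how per-element contributions relate to the overall decision, the sketch below evaluates two illustrative rules under the deny-overrides combining algorithm and prints each rule's contribution; the rule names and conditions are assumptions, not part of SAFAX or the paper.

    # Toy XACML-style evaluation: per-rule decisions feed a combining
    # algorithm, and each rule's contribution is shown alongside the
    # final decision (the paper visualizes this as a policy tree).
    def deny_overrides(decisions):
        if "Deny" in decisions:
            return "Deny"
        if "Permit" in decisions:
            return "Permit"
        return "NotApplicable"

    request = {"role": "nurse", "action": "write"}
    rules = {
        "permit-clinicians": "Permit" if request["role"] in {"doctor", "nurse"} else "NotApplicable",
        "deny-writes": "Deny" if request["action"] == "write" else "NotApplicable",
    }
    for name, decision in rules.items():
        print(f"{name}: {decision}")          # contribution of each element
    print("overall:", deny_overrides(list(rules.values())))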
Defending Multi-Cloud Applications Against Man-in-the-Middle Attacks
Multi-cloud applications have become ubiquitous in today's organizations. They are deployed across cloud service provider platforms to deliver services to all aspects of business. With the expansive use of multi-cloud environments, security is at the forefront of concerns when deploying these applications, managing access to them, and dealing with their expanded attack surface. Attackers can exploit vulnerabilities in multi-cloud environments that expose privileged information.
In this paper, we develop a multi-cloud victim web application deployed as component services on different cloud service providers, a deployment that expands the application's attack surface. Using this victim application, we demonstrate a man-in-the-middle attack that steals privileged credentials. Utilizing ParrotOS as the exploitation server, we attack an application deployed across three cloud service providers: AWS, Azure, and Rackspace. Having successfully attacked the application, we then implement mitigations and verify the protection by attacking the protected application again.
SecureCheck: User-Centric and Geolocation-Aware Access Mediation Contracts for Sharing Private Data
Data oversharing is a critical issue in today's technologically driven society. Numerous entities, e.g., corporations, governments, and criminal groups, are collecting individuals' data. One potential cause is that current systems, such as verification systems, do not prioritize the minimization of exchanged data. To address this issue, we propose SecureCheck, a novel privacy-enhancing technology (PET) framework that prioritizes data minimization. We aim to ensure that individuals control technology and its access to themselves, rather than technology controlling individuals or their data. To that end, our proposed framework comprises two components: a novel access control model, called access mediation contracts, that enables users to negotiate with third parties over what data is used in a verification event, and a novel recommendation system that recommends access mediation contracts in a situationally aware manner using geolocation data. As part of ongoing work, we are developing a privacy calculus model detailing the decision process for data exchange. We are also conducting an exploratory study to better identify how to resolve conflicts between data owners and verifiers. Finally, we are actively working towards VaxCheck, a prototype implementation of SecureCheck focused on vaccine verification systems, so we can assess its effectiveness and suitability for future deployment in practice.
SESSION: Regular Track 2 (Policy Analysis and Validation)
Static and Dynamic Analysis of a Usage Control System
The ability to exchange data while maintaining sovereignty is fundamental to emerging decentralized data-driven ecosystems. Data sovereignty refers to an entity's capability to be self-determined concerning data usage. As such, a data usage control system (UCON) is critical for sovereignty. UCON, a generalization of attribute-based access control, enforces continuous authorization, allowing attribute mutability after access is granted. In theory, UCON comprises a policy language to express constraints and obligations on data usage, and a technology to evaluate and enforce them. In practice, realizing the above is challenging and poses trust concerns. Partly, this is due to the complexity of UCON (continuous authorization, obligations) and of advanced usage constraints (stemming, e.g., from regulations or business contracts), combined with the decentralized nature of data ecosystems that allow different actors (e.g., data providers, security engineers) to author policies and operate UCON. To that end, we propose to aid actors with automated policy analysis and verification methods. We present a new policy analysis method based on the combination of symbolic execution for policy evaluation and SMT solving to compute concrete scenarios answering queries on the policies. Our approach supports symbolic queries, where attribute values may be concrete values, a range of values, or symbolic variables. We also propose a monitoring approach using the RTLola tool to verify the correctness of UCON's behavior in terms of decisions, obligations, and user-specified properties. To monitor obligations, we define their essential parameters and show how to monitor their fulfillment based on the configuration. We also present eight templates that allow users to generate the most important properties for monitoring UCON.
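The flavor of the symbolic queries described above can be sketched with an SMT solver: encode a toy usage-control condition, constrain one attribute to a range, leave another fully symbolic, and ask the solver for a concrete scenario. The rule and attribute names are illustrative assumptions, not the paper's policy language.

    from z3 import Int, Solver, And, Not, sat

    hour = Int("hour")    # fully symbolic attribute
    trust = Int("trust")  # attribute constrained to a range below

    # Toy ongoing-authorization condition: permit while the trust score
    # stays high enough and usage happens inside a time window.
    permit = And(trust >= 50, hour >= 8, hour <= 18)

    s = Solver()
    s.add(trust >= 0, trust <= 100)  # range constraint on the attribute domain
    s.add(Not(permit))               # query: is a deny scenario reachable?
    if s.check() == sat:
        print("concrete deny scenario:", s.model())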
SPRT: Automatically Adjusting SELinux Policy for Vulnerability Mitigation
SELinux has been widely applied in Linux systems to enforce security policy and provide flexible mandatory access control (MAC). However, improperly configured rules in policies may permit illegal operations and cause serious security problems for the system. Analyzing and modifying anomalous rules in a policy remains a challenging task, since policy rules are massive and semantically complex. In this paper, we propose SPRT, an architecture for adjusting SELinux policy automatically to mitigate vulnerabilities caused by misconfigured policy. Based on features of the security description texts of vulnerabilities in the CVE repository, we propose criteria for classifying vulnerabilities and adjust SELinux policy according to the classification results, providing a new perspective on the field of policy adjustment. SPRT uses NLP techniques to train a prototype network model that automatically classifies vulnerabilities into three categories. Furthermore, SPRT constructs a knowledge base to identify the mapping between policies, vulnerabilities, and the rules to be modified. It helps modify the rules in a policy, based on the classification results and audit logs, to mitigate the potential impact of vulnerabilities. Our evaluation shows SPRT is effective in vulnerability mitigation, both in fixing misconfigured policies and in suppressing attacks generated by vulnerabilities. We collect SELinux policies involving file-label mappings, type transition rules, and type enforcement rules, amounting to around 130,000 rules in all. Additionally, we analyze more than 400 security description texts of vulnerabilities. In our experiments, we compare three other supervised learning models with SPRT and demonstrate that SPRT can automatically classify vulnerabilities with a high accuracy of 92.84%. Additionally, SPRT provides effective policy adjustment that mitigates the damage caused by 90.47% of the vulnerabilities resulting from misconfigured policies.
Utilizing Threat Partitioning for More Practical Network Anomaly Detection
Anomaly-based network intrusion detection would appear, on the surface, to be ideal for detecting zero-day network threats. Yet in practice, such systems' often unacceptably high false positive rates keep them on the sidelines in favor of signature-based methods, which typically detect only known threats. We argue that an anomaly-based network intrusion detection system should not only be specialized to a specific class of related threats; the characteristics of the threat class itself should also be utilized when designing the detection system and structuring the network data used with it. To this end, we take two common network threat classes, DDoS-as-a-Smokescreen (DaaSS) and SYN flood, and analyze their characteristics for structure we can use to specialize anomaly detection. We partition these threat classes into known behavior and unknown behavior, leaving the latter open-ended. Through experimentation on multiple datasets, we show that our proposed detection system based on this threat partitioning approach is capable of detecting DaaSS attacks and zero-day SYN flood variants with very low false positive rates, even in the face of concept drift, and can do so without having to collect large amounts of benign network traffic for training.
SESSION: Regular Track 3 (LLMs and Access Control Management)
Prompting LLM to Enforce and Validate CIS Critical Security Control
Proper security control enforcement reduces the attack surface and protects organizations against attacks. Organizations like NIST and CIS (Center for Internet Security) provide critical security controls (CSCs) as guidelines for enforcing cyber security. Automated enforcement and measurability mechanisms for these CSCs still need to be developed. Analyzing the implementations of security products to validate security control enforcement is non-trivial. Moreover, manually analyzing and developing measures and metrics to monitor, and implementing those monitoring mechanisms, are resource-intensive tasks that depend heavily on the security analyst's expertise and knowledge. To tackle these problems, we use large language models (LLMs) as a knowledge base and reasoner to extract measures, metrics, and monitoring mechanism implementation steps from security control descriptions, reducing the dependency on security analysts. Our approach uses few-shot learning with chain-of-thought (CoT) prompting to generate measures and metrics, and generated knowledge prompting for metric implementation. Our evaluation shows that prompt engineering to extract measures, metrics, and monitoring implementation mechanisms can reduce dependency on humans and semi-automate the extraction process. We also demonstrate metric implementation steps using generated knowledge prompting with LLMs.
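The prompting style described above can be sketched as follows; the example control text, reasoning, and output format are illustrative assumptions rather than the authors' actual prompts.

    # A minimal few-shot chain-of-thought prompt builder for extracting
    # measures and metrics from a security control description.
    FEW_SHOT = (
        "Control: Maintain an inventory of all enterprise assets.\n"
        "Reasoning: Verifying enforcement requires comparing what is\n"
        "managed against what exists on the network, so a coverage\n"
        "ratio is a natural metric.\n"
        "Measures: inventoried assets; assets discovered by scans.\n"
        "Metric: coverage = inventoried assets / discovered assets."
    )

    def build_prompt(control_description: str) -> str:
        return (
            "Extract measures and metrics for the following security "
            "control. Think step by step as in the example.\n\n"
            f"{FEW_SHOT}\n\nControl: {control_description}\nReasoning:"
        )

    print(build_prompt("Ensure all software receives vendor patches."))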
Pairing Human and Artificial Intelligence: Enforcing Access Control Policies with LLMs and Formal Specifications
Large Language Models (LLMs), such as ChatGPT and Google Bard, have performed remarkably well when assisting developers with computer programming tasks, a.k.a. coding, potentially enabling convenient and faster software construction. This new approach significantly enhances efficiency but also presents challenges, as unsupervised code construction comes with limited security guarantees. LLMs excel at producing code with accurate grammar, yet they are not specifically trained to guarantee the security of that code. In this paper, we provide an initial exploration of using formal software specifications as a starting point for software construction, allowing developers to translate descriptions of security-related behavior into natural language instructions for LLMs, a.k.a. prompts. In addition, we leveraged automated verification tools to evaluate the produced code against the aforementioned specifications, following a modular, step-by-step software construction process. For our study, we leveraged Role-based Access Control (RBAC), a mature security model, and the Java Modeling Language (JML), a behavioral specification language for Java. We test our approach on different publicly available LLMs, namely OpenAI ChatGPT 4.0, Google Bard, and Microsoft CoPilot. We provide a description of two applications - a security-sensitive banking application employing RBAC and an RBAC API module itself - along with the corresponding JML specifications, the prompts, the generated code, the verification results, and a series of interesting insights for practitioners interested in further exploring the use of LLMs for securely constructing applications.
SESSION: Blue Sky/Vision Track
BlueSky: How to Raise a Robot - A Case for Neuro-Symbolic AI in Constrained Task Planning for Humanoid Assistive Robots
Humanoid robots will be able to assist humans in their daily lives, in particular due to their versatile action capabilities. However, while these robots need a certain degree of autonomy to learn and explore, they should also respect various constraints, for access control and beyond. We explore the novel field of incorporating privacy, security, and access control constraints into robot task planning approaches. We report preliminary results on the classical symbolic approach, deep-learned neural networks, and modern ideas using large language models as a knowledge base. From an analysis of their trade-offs, we conclude that a hybrid approach is necessary, and thereby present a new use case for the emerging field of neuro-symbolic artificial intelligence.
SESSION: Regular Track 4 (Access Control Framework)
A Bargaining-Game Framework for Multi-Party Access Control
Multi-party access control is emerging to protect shared resources in collaborative environments. Existing multi-party access control models often lack essential features to address the challenges characterizing collaborative decision-making. Collaborative access decision-making requires mechanisms that optimally account for the access requirements of all parties without requiring user intervention at evaluation time. This work fills these gaps by proposing a framework for multi-party access control based on game theory. To this end, we identify the decision factors influencing access decision-making in collaborative environments and propose two bargaining models - a cooperative model and a non-cooperative model - to investigate the impact of different cooperation assumptions on collaborative access decision-making. Our framework ensures fairness by considering the access requirements of all controllers equally, achieves optimality by relying on best response strategies, and guarantees termination. Our evaluation shows that different cooperation assumptions significantly impact the performance and outcome of collaborative access decision-making.
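To make the bargaining intuition concrete, here is a toy Nash-bargaining sketch for two controllers negotiating a disclosure level; the utility functions and disagreement utilities are illustrative assumptions, not the paper's decision factors.

    # Two parties bargain over a disclosure level d in [0, 1]. The
    # cooperative (Nash) solution maximizes the product of utility
    # gains over each party's no-agreement payoff.
    def u_owner(d):  return 1.0 - d   # data owner prefers less disclosure
    def u_viewer(d): return d         # requester prefers more access

    D_OWNER, D_VIEWER = 0.2, 0.1      # disagreement (no-deal) utilities

    best_d = max(
        (i / 1000 for i in range(1001)),
        key=lambda d: max(u_owner(d) - D_OWNER, 0.0)
                      * max(u_viewer(d) - D_VIEWER, 0.0),
    )
    print(f"bargained disclosure level: {best_d:.2f}")  # 0.45 here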
A Self-Sovereign Identity Approach to Decentralized Access Control with Transitive Delegations
In this paper, we introduce a new decentralized access control framework with transitive delegation capabilities that tackles the performance and scalability limitations of existing state-of-the-art solutions. To accomplish this, the proposed solution is anchored in the self-sovereign identity (SSI) paradigm, which embodies a distributed identity management system. By adopting this paradigm, we avoid slow cryptographic primitives such as identity-based encryption (IBE) that were used in prior work. Furthermore, we enhance the existing verifiable credentials (VCs) from this paradigm by introducing our own decentralized permission objects to support the concept of transitive delegations. This concept allows delegates to further delegate their access to resources, with the same or fewer privileges, to other entities within the framework. This renders our solution suitable for diverse scenarios, including applications in decentralized building access management. To the best of our knowledge, we are the first to introduce the concept of transitive delegations in this paradigm. Finally, our performance experiments show an enhancement of three orders of magnitude compared to the prevailing state-of-the-art solutions.
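The "same or fewer privileges" rule for transitive delegation can be pictured as a simple chain check; the permission-object fields below are hypothetical, and signature verification of the underlying verifiable credentials is elided.

    # Each hop in a delegation chain must grant a subset of the
    # privileges its delegator holds, so privileges never grow.
    def chain_is_valid(chain, root_privileges):
        current = set(root_privileges)
        for hop in chain:
            granted = set(hop["privileges"])
            if not granted <= current:   # transitive delegation rule
                return False
            current = granted
        return True

    chain = [
        {"delegate": "alice", "privileges": {"open_door", "view_logs"}},
        {"delegate": "bob", "privileges": {"open_door"}},  # alice -> bob
    ]
    print(chain_is_valid(chain, {"open_door", "view_logs", "admin"}))  # True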
Obligation Management Framework for Usage Control
Obligations were introduced in access and usage control as a mechanism to specify mandatory actions to be fulfilled as part of authorization. In this paper, we address challenges related to obligation management in access and usage control, focusing on the Abbreviated Language For Authorization (ALFA) and eXtensible Access Control Markup Language (XACML) standards. Firstly, we provide a comprehensive analysis of Combining Algorithms (CAs) to determine their influence on the selection and ordering of obligations, and we identify nondeterminism. We then propose solutions to eliminate such nondeterminism, enabling policy authors to explicitly specify the intended behavior. Secondly, we discuss the recurrence of obligations in usage control due to policy re-evaluations, highlighting the need to execute some obligations only once. We address this problem by introducing a parameter that enables policy authors to explicitly specify whether an obligation should recur. Thirdly, we highlight an ambiguity in the applicability of obligations to lifecycle phases (e.g., ongoing) in usage control, arising from the lack of explicit associations between obligations and phases in particular cases. To address this issue, we introduce a parameter that explicitly specifies the scope of an obligation, allowing policy authors to restrict it to a single phase or apply it to the entire authorization. Finally, we extend the functionality of the Obligation Manager (OM) component to combine all three solutions, providing deterministic obligation management.
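The two parameters proposed above (recurrence and lifecycle scope) can be pictured with a minimal sketch; the field names and values are illustrative assumptions, not ALFA/XACML syntax.

    from dataclasses import dataclass

    @dataclass
    class Obligation:
        action: str
        recur: bool = True      # False: execute once despite re-evaluations
        scope: str = "ongoing"  # lifecycle phase, e.g. "pre" or "ongoing"

    # Notify once before access; keep logging on every re-evaluation.
    notify = Obligation("notify-owner", recur=False, scope="pre")
    audit = Obligation("write-audit-log", recur=True, scope="ongoing")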
SESSION: Regular Track 5 (Policy Management and Enforcement)
Converting Rule-Based Access Control Policies: From Complemented Conditions to Deny Rules
Access control policy rules with deny effects (i.e., negative authorizations) can be preferable to rules with complemented conditions, as deny rules are often easier to comprehend in the context of large policies. However, the two constructs have different impacts on the expressiveness of a rule-based access control model. We investigate whether policies expressible using complemented conditions can be expressed using deny rules instead. The answer to this question is not always affirmative. In this paper, we propose a practical approach to address this problem for a given policy. In particular, we develop theoretical results that allow us to pose the problem as a set of queries to a SAT solver. Our experimental results using an off-the-shelf SAT solver demonstrate the feasibility of our approach and offer insights into its performance based on access control policies from multiple domains.
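The shape of such a query can be sketched with a toy policy; the attributes, rules, and combining behavior below are illustrative assumptions, not the paper's encoding.

    from z3 import Bools, Solver, Or, And, Not, Xor, sat

    a, g = Bools("a g")  # attributes: is-admin, is-guest

    # Original rule with a complemented condition: permit if admin OR not guest.
    permit_original = Or(a, Not(g))
    # Candidate rewrite: permit-all plus a deny rule under deny-overrides.
    deny_rule = And(g, Not(a))
    grant_rewritten = Not(deny_rule)

    s = Solver()
    s.add(Xor(permit_original, grant_rewritten))  # any request where they differ?
    print("equivalent" if s.check() != sat else "not equivalent")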
Hierarchical Key Assignment Schemes with Key Rotation
Hierarchical structures are frequently used to manage access to sensitive data in various contexts, ranging from organizational settings to IoT networks.
A Hierarchical Key Assignment Scheme (HKAS) is designed to cryptographically enforce access control in hierarchical structures. It operates by assigning secrets and encryption keys to a set of classes within a partially ordered hierarchy. This approach ensures that the secret of a higher-level class can be used to efficiently derive keys for all classes positioned at a lower level in the hierarchy.
In this paper, we introduce a novel cryptographic primitive that we name HKAS with Key Rotation (KR-HKAS). This extension enhances the current HKAS framework by enabling a provably secure mechanism for periodically rotating both encryption keys and secrets, without necessitating a complete setup reset. This proactive approach effectively mitigates the risk of security breaches due to compromised cryptographic material, in line with security best practices.
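One folklore way to picture key rotation on top of top-down key derivation is the hash-based sketch below; this is not the paper's provably secure construction, just an illustration of deriving lower-class secrets from higher ones and refreshing keys per epoch.

    import hashlib

    def derive(secret: bytes, label: str) -> bytes:
        # Child-class secret from a parent secret and a class label,
        # so higher classes can derive all lower-class secrets.
        return hashlib.sha256(secret + label.encode()).digest()

    def epoch_key(class_secret: bytes, epoch: int) -> bytes:
        # Rotate the encryption key by mixing in an epoch counter,
        # without re-running the whole setup.
        return hashlib.sha256(class_secret + epoch.to_bytes(8, "big")).digest()

    root = bytes(32)               # hypothetical top-class secret
    dept = derive(root, "dept-A")  # derivable by anyone holding root
    k1 = epoch_key(dept, 1)        # epoch-1 key for class dept-A
    k2 = epoch_key(dept, 2)        # rotated key for the next epoch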
FE[r]Chain: Enforcing Fairness in Blockchain Data Exchanges Through Verifiable Functional Encryption
Functional Encryption (FE) allows users to extract specific function-related information from encrypted data while preserving the privacy of the underlying plaintext. Though significant research has been devoted to developing secure and efficient Multi-Input Functional Encryption schemes supporting diverse functions, there remains a noticeable gap in the development of verifiable FE schemes: functionality and performance have received considerable attention, while verifiability has been relatively understudied. Another important aspect that prior research on FE with outsourced decryption has not adequately addressed is the fairness of the data-for-money exchange between a curator and an analyst. This paper addresses these gaps by proposing a verifiable FE scheme for inner product computation. The scheme not only supports the multi-client setting but also extends its functionality to accommodate multiple users - an essential feature in modern privacy-respecting services. Additionally, it demonstrates how this FE scheme can be effectively utilized to ensure fairness and atomicity in a payment protocol, further enhancing the trustworthiness of data exchanges.
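For reference, the textbook correctness condition for inner-product FE, the functionality this scheme targets, can be written as follows (standard single-input notation; the paper's multi-client setting additionally splits the input vector across clients):

    \mathsf{Dec}\big(\mathsf{sk}_{\vec{y}},\ \mathsf{Enc}(\mathsf{mpk}, \vec{x})\big) = \langle \vec{x}, \vec{y} \rangle = \sum_{i=1}^{n} x_i y_i

so a decryptor holding a key for \vec{y} learns only the inner product, never \vec{x} itself.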
Circles of Trust: A Voice-Based Authorization Scheme for Securing IoT Smart Homes
Smart homes, powered by a plethora of Internet of Things (IoT) devices, such as smart thermostats, lights, and TVs, have gained immense popularity due to their simple voice command control, making them user-friendly for homeowners and their families. However, these voice commands can be misused, especially by unauthorized individuals, such as visitors or thieves, who have not been authorized to manipulate security-sensitive devices, e.g., smart locks. To address this issue, we propose a novel approach called Circles of Trust (CoT), rooted in the well-known namesake psychological concept, which associates relationships with specific degrees of trust. This concept can be applied to an authorization framework by linking relationships to access levels; e.g., homeowners and their spouses can be fully trusted, whereas visitors and children may not be. CoT can be visualized as a multi-layered circle, where the most privileged user, e.g., a homeowner, is placed within the innermost layer and therefore has access to every device within the smart home via voice commands. Each subsequent layer has fewer access privileges than the previous one, granting users in outer layers, like visitors, limited capabilities to manipulate devices. CoT aims to preserve the ease of access and convenience of smart home devices by integrating voice-based security policies, eliminating the need for graphical user interfaces (GUIs). The proposed CoT implementation includes three main components: the voice-to-text module, the authorization engine, and the IoT orchestrator. Following a prototype implementation, a user study and questionnaire will assess user-friendliness and device convenience.
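A minimal sketch of the layered model described above follows; the role names and device assignments are illustrative assumptions. Inner circles inherit every privilege granted to the circles outside them.

    # Circles of Trust: inner layers accumulate the privileges of all
    # outer layers, so the homeowner controls every device while a
    # visitor controls only the outermost layer's devices.
    LAYERS = ["homeowner", "family", "child", "visitor"]  # inner -> outer
    DEVICES = {
        "homeowner": {"smart_lock", "alarm"},
        "family": {"thermostat"},
        "child": {"tv"},
        "visitor": {"lights"},
    }

    def allowed_devices(role: str) -> set:
        idx = LAYERS.index(role)
        granted = set()
        for layer in LAYERS[idx:]:    # own layer plus all outer layers
            granted |= DEVICES[layer]
        return granted

    assert "smart_lock" in allowed_devices("homeowner")
    assert "smart_lock" not in allowed_devices("visitor")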