Event Schedule


*Note: The Friday tour is ONLY for registered participants.

Monday (November 6)
08:30 AM to 09:30 AM  Check-in & Breakfast; Welcome Remarks (9:00 – 9:30)
09:30 AM to 11:00 AM  Keynote 1
11:00 AM to 11:30 AM  Coffee Break & Networking
11:30 AM to 01:00 PM  Guest Speaker Session 1
01:00 PM to 02:00 PM  Lunch
02:00 PM to 03:30 PM  Guest Speaker Session 2
03:30 PM to 04:00 PM  Coffee Break & Networking
04:00 PM to 08:00 PM  Student Poster Session & Networking Reception

Tuesday (November 7)
08:30 AM to 09:30 AM  Check-in & Breakfast
09:30 AM to 11:00 AM  Keynote 2
11:00 AM to 11:30 AM  Coffee Break & Networking
11:30 AM to 01:00 PM  Guest Speaker Session 3
01:00 PM to 02:00 PM  Lunch
02:00 PM to 03:30 PM  Keynote 3
03:30 PM to 04:00 PM  Coffee Break & Networking
04:00 PM to 05:30 PM  Keynote 4
05:30 PM to 06:00 PM  Break
06:00 PM to 08:00 PM  Buffet Dinner

Wednesday (November 8)
08:30 AM to 09:30 AM  Check-in & Breakfast
09:30 AM to 11:00 AM  Keynote 5
11:00 AM to 11:30 AM  Coffee Break & Networking
11:30 AM to 01:00 PM  Guest Speaker Session 4
01:00 PM to 02:00 PM  Lunch
02:00 PM to 03:30 PM  Guest Speaker Session 5 (Hands-on)
03:30 PM to 04:00 PM  Coffee Break & Networking
04:00 PM to 05:30 PM  Guest Speaker Session 6 (Hands-on)
05:30 PM to 06:00 PM  Break
06:00 PM to 08:00 PM  Buffet Dinner

Thursday (November 9)
08:30 AM to 09:30 AM  Check-in & Breakfast
09:30 AM to 11:00 AM  Guest Speaker Session 7
11:00 AM to 11:30 AM  Coffee Break & Networking
11:30 AM to 01:00 PM  Guest Speaker Session 8
01:00 PM to 02:00 PM  Lunch
02:00 PM to 03:30 PM  Keynote 6
03:30 PM to 04:00 PM  Coffee Break & Networking
04:00 PM to 05:30 PM  Awards & Appreciation Ceremony; Closing Remarks & Farewell

Friday (November 10)
Field Trip* (Check-in & Breakfast starts at 8:00 AM; registration via the Trip Registration QR Code; full agenda under Trip Information below)



Session Information

Keynote 1
Farinaz Koushanfar, Professor, University of California, San Diego
Title - Automated Cryptographically-Secure Private Computing: From Logic and Mixed-Protocol Optimization to Centralized and Federated ML Customization

Abstract - Over the last four decades, much research effort has been dedicated to designing cryptographically-secure methods for computing on encrypted data. However, despite the great progress in research, adoption of the sophisticated crypto methodologies has been rather slow and limited in practical settings. Presently used heuristic and trusted third party solutions fall short in guaranteeing the privacy requirements for the contemporary massive datasets, complex AI algorithms, and the emerging collaborative/distributed computing scenarios such as blockchains. In this talk, we outline the challenges in the state-of-the-art protocols for computing on encrypted data with an emphasis on the emerging centralized, federated, and distributed learning scenarios. We discuss how in recent years, giant strides have been made in this field by leveraging optimization and design automation methods including logic synthesis, protocol selection, and automated co-design/co-optimization of cryptographic protocols, learning algorithms, software, and hardware. Proof of concept is demonstrated in the design of the present state-of-the-art frameworks for cryptographically-secure deep learning on encrypted data. We conclude by discussing the practical challenges in the emerging private robust learning and distributed/federated computing scenarios as well as the opportunities ahead.
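
As a concrete taste of the building blocks such protocols compose, here is a minimal sketch of additive secret sharing for a private sum; the modulus, party count, and values are illustrative assumptions, not details from the talk.

```python
# Minimal additive secret sharing: split a secret into random shares that
# sum to the secret mod P. Illustrative only; real MPC frameworks add
# authentication, multiplication protocols, and optimized circuit synthesis.
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

def share(x, n_parties=3):
    """Split secret x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each party holds one private input. Share-holders sum their columns
# locally, so only the aggregate is ever revealed -- no single input leaks.
inputs = [42, 1000, 7]
all_shares = [share(x) for x in inputs]                 # one row per party
local_sums = [sum(col) % P for col in zip(*all_shares)] # per share-holder
assert reconstruct(local_sums) == sum(inputs) % P
print("private sum:", reconstruct(local_sums))
```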

Farinaz Koushanfar is the Jacobs Family Scholar Professor of Electrical and Computer Engineering (ECE) at the University of California San Diego (UCSD), where she is the founding co-director of the UCSD Center for Machine-Intelligence, Computing & Security (MICS). She is also a research scientist at Chainlink Labs. Her research addresses several aspects of secure and efficient computing, with a focus on robust machine learning under resource constraints, AI-based optimization, hardware and system security, intellectual property (IP) protection, as well as privacy-preserving computing. Dr. Koushanfar is a fellow of the Kavli Frontiers of the National Academy of Sciences, and a fellow of IEEE / ACM. She has received a number of awards and honors including the Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, the ACM SIGDA Outstanding New Faculty Award, Cisco IoT Security Grand Challenge Award, MIT Technology Review TR-35, Qualcomm Innovation Awards, Intel Collaborative Awards, Young Faculty/CAREER Awards from NSF, DARPA, ONR and ARO, as well as several best paper awards.

Keynote 2
Gene Tsudik, Distinguished Professor, University of California, Irvine
Title - Caveat (IoT) Emptor: Privacy-Aware IoT Sensing and Actuation [Slides]

Abstract - As many types of IoT devices worm their way into numerous settings and many aspects of our daily lives, awareness of their presence and functionality becomes a source of major concern. Hidden IoT devices can snoop (via sensing) on nearby unsuspecting users, and impact the environment where unaware users are present, via actuation. This prompts, respectively, privacy and security/safety issues. The dangers of hidden IoT devices have been recognized and prior research suggested some means of mitigation, mostly based on traffic analysis or using specialized hardware to uncover devices. While such approaches are partially effective, there is currently no comprehensive approach to IoT device transparency. Prompted in part by recent privacy regulations (GDPR and CCPA), this work motivates and constructs a privacy-agile Root-of-Trust architecture for IoT devices, called PAISA: Privacy-Agile IoT Sensing and Actuation. It guarantees timely and secure announcements of nearby IoT devices’ presence and their capabilities. PAISA has two components: one on the IoT device that guarantees periodic announcements of its presence even if all device software is compromised, and the other on the user device, which captures and processes announcements. PAISA requires no hardware modifications; it uses a popular off-the-shelf Trusted Execution Environment (TEE) – ARM TrustZone. To demonstrate its viability, PAISA is instantiated as an open-source prototype which includes: an IoT device that makes announcements via IEEE 802.11 WiFi beacons and an Android smartphone-based app that captures and processes announcements. We also discuss security and performance of PAISA and its prototype.
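
The announce-and-verify flow at the heart of a PAISA-style design can be sketched as follows; the JSON wire format and the HMAC key below are illustrative stand-ins for the system's actual TEE-rooted attestation over 802.11 WiFi beacons.

```python
# Sketch of a periodic device announcement and its verification on the user
# side. The symmetric HMAC key is a hypothetical stand-in for TrustZone-backed
# attestation; the message fields are invented for illustration.
import hmac, hashlib, json, time

DEVICE_KEY = b"demo-key-held-by-trusted-component"  # hypothetical

def make_announcement(device_id, capabilities):
    body = json.dumps({
        "device": device_id,
        "capabilities": capabilities,   # sensors/actuators present nearby
        "timestamp": int(time.time()),  # freshness, to resist replay
    }).encode()
    tag = hmac.new(DEVICE_KEY, body, hashlib.sha256).digest()
    return body, tag

def verify_announcement(body, tag, max_age_s=30):
    expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted announcement
    msg = json.loads(body)
    if time.time() - msg["timestamp"] > max_age_s:
        return None  # stale: possible replay
    return msg

body, tag = make_announcement("cam-42", ["camera", "microphone"])
print(verify_announcement(body, tag))
```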

Gene Tsudik is a Distinguished Professor of Computer Science at the University of California, Irvine (UCI). He obtained his PhD in Computer Science from USC. Before coming to UCI in 2000, he was at the IBM Zurich Research Laboratory (1991-1996) and USC/ISI (1996-2000). His research interests include numerous topics in security, privacy and applied cryptography. Gene Tsudik is a Fulbright Scholar, Fulbright Specialist (thrice), a fellow of ACM, IEEE, AAAS, IFIP and a foreign member of Academia Europaea. From 2009 to 2015 he served as Editor-in-Chief of ACM TOPS. He was the recipient of the 2017 ACM SIGSAC Outstanding Contribution Award, and the 2020 IFIP Jean-Claude Laprie Award. His magnum opus is the first ever rhyming crypto-poem published as a refereed paper. Gene Tsudik is unfriendly to machine learning, blockchains, and differential privacy. He has no social media presence.

Keynote 3
Ahmad-Reza Sadeghi, Professor, TU-Darmstadt
Title - Don't ChatGPT Me!: Towards Unmasking the Wordsmith

Abstract - The hype surrounding Large Language Models (LLMs) has captivated countless individuals, fostering the belief that these models possess an almost magical ability to solve diverse problems. While LLMs, such as ChatGPT, offer numerous benefits, they also raise significant concerns like generating misinformation and plagiarism. Consequently, identifying AI-generated content has become an appealing area of research. However, current text detection methods face limitations in accurately discerning ChatGPT content. Indeed, our assessment of the efficacy of existing language detectors in distinguishing ChatGPT-generated texts reveals that none of the evaluated detectors consistently achieves high detection rates, as the highest accuracy achieved was 47%. In this talk, we first overview the existing AI-based text detectors, particularly those claiming to detect ChatGPT-generated texts. Then, we present our research effort to develop a robust ChatGPT detector, which aims to capture distinctive biases in text composition present in human and AI-generated content and human adaptations to elude detection. Drawing inspiration from the multifaceted nature of human communication, which starkly contrasts the standardized interaction patterns of machines and physical phenomena, we employ various techniques to address these challenges. We use a benchmark dataset encompassing mixed prompts from ChatGPT and humans, spanning diverse domains, to evaluate our detector. Lastly, we discuss open problems that are currently engaging our attention.
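
For readers new to the detection problem, the toy sketch below fits a classifier on a few invented stylometric features; it is a didactic stand-in, not the detector presented in the talk.

```python
# Toy AI-text detector: simple text-composition features plus logistic
# regression. Corpus and features are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(text):
    words = text.split()
    lens = [len(w) for w in words] or [0]
    return [
        np.mean(lens),                         # average word length
        len(set(words)) / max(len(words), 1),  # type-token ratio
        text.count(",") / max(len(words), 1),  # comma rate
    ]

# Invented mini-corpus: label 0 = human-written, 1 = AI-generated.
human = ["honestly i dunno, the thing just broke again??",
         "we grabbed tacos then argued about the game for hours"]
ai = ["Certainly! Here is a comprehensive overview of the key considerations.",
      "In conclusion, there are several important factors that merit attention."]
X = np.array([features(t) for t in human + ai])
y = np.array([0] * len(human) + [1] * len(ai))

clf = LogisticRegression().fit(X, y)
test = "Overall, this approach offers numerous benefits and notable limitations."
print("predicted label:", clf.predict(np.array([features(test)]))[0])
```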

Prof. Dr.-Ing. Ahmad-Reza Sadeghi is a Full Professor of Computer Science at the Technical University of Darmstadt, Germany, where he heads the System Security Lab. Since 2012, Prof. Sadeghi has maintained a long-term cooperation with Intel, which has resulted in several Collaborative Research Centers on various topics, such as Secure Computing in Mobile and Embedded Systems, Autonomous and Resilient Systems, and Private AI. He also established the Open Lab for Sustainable Security and Safety (OpenS3 Lab) with Huawei in 2019. He received his Ph.D. in Computer Science with a focus on Cryptography from the University of Saarland, Germany. Before academia, he worked for several years in research and development in the telecommunications industry, among others at Ericsson. He has been leading and involved in many national and international research and development projects in the design and implementation of Trustworthy Computing Platforms, Hardware-assisted Security, IoT Security and Privacy, Applied Cryptography, and Trustworthy AI. Prof. Sadeghi has been serving as General or Program Chair and Program Committee member of major Information Security and Privacy and Design and Automation venues, such as ACM CCS, IEEE Security & Privacy, NDSS, USENIX Security, DAC, DATE, and ICCAD. He was Editor-in-Chief of IEEE Security and Privacy Magazine. He has served on several editorial boards, such as ACM Transactions on Information & System Security (TISSEC), and as guest editor of the IEEE TCAD, ACM Books, and ACM DIOT. He is on the editorial board of ACM TODAES and ACM DTRAP. In 2008 Prof. Sadeghi was awarded the renowned German prize "Karl Heinz Beckurts" for his research on Trusted and Trustworthy Computing technology and its transfer to industrial practice. The award honors excellent scientific achievements with a high impact on industrial innovations in Germany. In 2010 his group received the German IT Security Competition Award. In 2018 he received the ACM SIGSAC Outstanding Contributions Award for dedicated research, education, and management leadership in the security community and pioneering contributions in content protection, mobile security, and hardware-assisted security. SIGSAC is ACM's Special Interest Group on Security, Audit, and Control. In 2021 he was honored with the Intel Academic Leadership Award at USENIX Security for his influential research in information and computer security, particularly hardware-assisted security. In 2022 he received the prestigious European Research Council (ERC) Advanced Grant.

Keynote 4
Alexandra Dmitrienko, Associate Professor, University of Würzburg
Title - Security and Privacy Challenges of Federated Learning Systems and Applications

Abstract - Machine Learning (ML) methods are getting more mature and increasingly deployed in all areas of our lives to assist users in various classification and decision-making tasks. This seminar will showcase, as an example, the advantages ML can bring to applications dedicated to detecting security threats on mobile platforms. On the other hand, we will also delve into the security and privacy concerns associated with the utilization of ML methods. Specifically, we will focus on Federated Learning (FL), a distributed version of ML that can provide enhanced privacy preservation when training ML models. We will thoroughly evaluate the security and privacy risks associated with FL and then delve deeper into targeted and untargeted poisoning attacks and countermeasures. We will pay special attention to open challenges, which include distinguishing poisoned models from benign but unusual ones (for instance, models trained on datasets with different data distributions), and handling adaptive attackers who, once they know the detection method, can add an additional training loss that minimizes any change in the detection metric and hence evades detection. To initiate further discussions, we will outline open research directions.
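
A minimal sketch of the setting: federated averaging with a cosine-similarity filter over client updates. The threshold, data, and filter are illustrative assumptions; the talk discusses why exactly this style of detector struggles with benign-but-unusual clients and adaptive attackers.

```python
# Toy robust aggregation: drop client updates whose direction deviates from
# the mean update direction, then average the rest. Illustrative only.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def robust_fedavg(updates, threshold=0.0):
    mean_dir = np.mean(updates, axis=0)
    keep = [u for u in updates if cosine(u, mean_dir) > threshold]
    return np.mean(keep, axis=0), len(keep)

rng = np.random.default_rng(0)
benign = [rng.normal(1.0, 0.1, 10) for _ in range(8)]  # similar directions
poisoned = [-5 * benign[0]]                            # update pulling the model away
agg, kept = robust_fedavg(benign + poisoned)
print(f"kept {kept} of {len(benign) + 1} updates")     # the outlier gets filtered
```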

Alexandra Dmitrienko is an Associate Professor and head of the Secure Software Systems group at the University of Wuerzburg in Germany. Before taking her current faculty position in 2018, she collected an extensive background in security institutions in Germany and Switzerland, including Ruhr-University Bochum (2008-2011), Fraunhofer Institute for Information Security in Darmstadt (2011-2015), and ETH Zurich (2016-2017). She earned her PhD in Security and Information Technology from TU Darmstadt (2015), where her dissertation focused on the security and privacy of mobile systems and applications, and was recognized with awards from the European Research Consortium in Informatics and Mathematics (ERCIM STM WG 2016 Award) and Intel (Intel Doctoral Student Honor Award, 2013). Over the years, her research interests spanned various topics such as secure software engineering, systems security and privacy, and the security and privacy of mobile, cyber-physical, and distributed systems. Today, her recent research also largely focuses on security and privacy aspects of Artificial Intelligence methods.

Keynote 5
Eric Adolphe, CEO, Forward Edge-AI, Inc.
Title - Transforming Influence and Engagement of Global Populations

Abstract - The BIONIC initiative was formed in late 2021 to address a perceived gap in counter-influence operations that exists in our national security and digital domains. The project successfully integrated tradecraft and mission-understanding expertise around breakthrough commercial capabilities that can be integrated and made interoperable inside a Teams of Teams engagement architecture. The project studied novel cognitive/behavioral-based identification and engagement. Its core expertise derived from non-PII, non-PHI public engagements, which resulted in one of the largest private data sets on the values that motivate people to engage with content. BIONIC identified cognitive digital fingerprints and methods used by foreign adversaries to influence population segments, and concluded with recommendations for countering both public and national-security malign activity. During this talk, I will discuss two main areas of contribution in this field. First, I will discuss a patented enterprise knowledge graph developed to enable machines to understand the concepts of misinformation and disinformation. Second, I will share the results from a BIONIC test that leveraged the knowledge graph and Large Language Models (LLMs) to counter hate speech used by internal and external agents to disrupt democratic institutions.

Eric is a serial entrepreneur who has held CEO/Founder positions in four (4) startups and successfully navigated the valley of death. Eric is a National Inventors Hall of Fame honoree, the first Black American Small Business Innovation Research (SBIR) Tibbetts Award winner, and an SBIR Hall of Fame Inductee. Eric serves as a volunteer and appointed member of the Small Business Administration's (SBA) Invention, Innovation, and Entrepreneurship Advisory Committee (IIEAC). The advisory committee’s objective is to strengthen the innovation ecosystem and lab-to-market translation. Eric's background is in electronics and computer engineering, and he has experience developing mission and safety critical systems for civilian aviation, national security, and defense customers. One of Eric's SBIR products was cited as instrumental in successfully constructing the International Space Station and is used today as a mission critical safety system for commercial launch operations of the Atlas V rocket system. As a result, Eric received one of NASA’s highest civilian honors. Eric holds a Bachelor of Engineering in Electrical Engineering Degree from City College of New York; and a Juris Doctor degree from the Catholic University of America, Columbus School of Law, with core concentrations in Computer and Privacy Law, and the Law of Outer Space.

Keynote 6
Jeyavijayan Rajendran, Associate Professor, Texas A&M University
Title - Hardware Fuzzing -- Why? What? How?

Abstract - Hardware is at the heart of computing systems. For decades, software was considered error-prone and vulnerable. However, recent years have seen increasing attacks that exploit hardware vulnerabilities, which even traditional software-based protections cannot prevent. In this talk, I will describe what hardware vulnerabilities look like in hardware "programming languages," such as Verilog and VHDL. Then, I will explain a new and radical approach called hardware fuzzing for finding these vulnerabilities. Finally, I will detail how these new fuzzing techniques can be efficiently combined with existing functional verification and validation approaches.
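
The core loop of coverage-guided fuzzing can be illustrated in a few lines; here a Python function stands in for the RTL design under test, and branch visits stand in for hardware coverage points, which is a deliberate simplification of real hardware-fuzzing tooling.

```python
# Toy coverage-guided fuzzing loop: mutate inputs, keep any input that
# reaches new coverage, and report when the buried bug fires.
import random

def dut(inp):
    """Stand-in 'design under test': a 3-byte input unlocks a buried bug."""
    cov = set()
    if inp[0] == 0xAA:
        cov.add("stage1")
        if inp[1] == 0x55:
            cov.add("stage2")
            if inp[2] > 0xF0:
                raise AssertionError("security property violated")
    return cov

def mutate(seed):
    out = bytearray(seed)
    out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

random.seed(1)
corpus, seen = [bytes(3)], set()
for i in range(200_000):
    candidate = mutate(random.choice(corpus))
    try:
        cov = dut(candidate)
    except AssertionError:
        print(f"bug triggered after {i} runs by input {candidate.hex()}")
        break
    if not cov <= seen:            # new coverage -> promote input to the corpus
        seen |= cov
        corpus.append(candidate)
else:
    print("budget exhausted without triggering the bug")
```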

Jeyavijayan (JV) Rajendran is an Associate Professor and an ASCEND Fellow in the Department of Electrical and Computer Engineering at Texas A&M University. He obtained his Ph.D. degree from New York University in August 2015. His research interests include hardware security and computer security. His research has won the NSF CAREER Award in 2017, ONR Young Investigator Award in 2022, the IEEE CEDA Ernest Kuh Early Career Award in 2021, the ACM SIGDA Outstanding Young Faculty Award in 2019, the Intel Academic Leadership Award, the ACM SIGDA Outstanding Ph.D. Dissertation Award in 2017, and the Alexander Hessel Award for the Best Ph.D. Dissertation in the Electrical and Computer Engineering Department at NYU in 2016, along with several best student paper awards. He organizes and has co-founded Hack@DAC, a student security competition co-located with DAC, and SUSHI.

Guest Speaker Session 1
Ram Krishnan, Professor, UTSA - ECE
Title - Toward Machine Learning Based Access Control [Slides]

Abstract - In this talk, I will present how machine learning is revolutionizing one of the foundational fields of cybersecurity: access control. I will introduce the problem of access control and its various facets. We will then review how ML is used in each of those facets. A common trait of current access control approaches is the challenging need to engineer abstract and intuitive access control models. This entails designing access control information in the form of roles (RBAC), attributes (ABAC), or relationships (ReBAC), as the case may be, and subsequently designing access control rules. This framework has its benefits but has significant limitations in the context of modern systems that are dynamic, complex, and large-scale, making it difficult for a human administrator to maintain an accurate access control state in the system. I will present some of our team's research in the arena of machine learning based access control. We will also discuss major challenges and approaches to make progress.
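
As a toy illustration of the shift from hand-engineered rules to learned policies, the sketch below fits a decision tree to invented access decisions; the features and data are assumptions for illustration only, not the talk's models.

```python
# Learn an access-control policy from example decisions instead of
# hand-writing RBAC/ABAC rules. Data is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# (user_dept, user_clearance, res_dept, res_sensitivity) -> allow?
# Departments encoded as ints: 0 = engineering, 1 = finance.
X = [
    [0, 2, 0, 1], [0, 1, 0, 1], [0, 2, 1, 2],
    [1, 2, 1, 2], [1, 1, 1, 2], [1, 2, 0, 1],
]
y = [1, 1, 0, 1, 0, 0]  # implicit policy: same dept AND clearance >= sensitivity

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf, feature_names=[
    "user_dept", "user_clearance", "res_dept", "res_sensitivity"]))
print("decision for unseen request:", clf.predict([[0, 2, 0, 2]])[0])
```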

Ram Krishnan is a Professor of Electrical and Computer Engineering at the University of Texas at San Antonio, where he holds the Microsoft President's Endowed Professorship. His research focuses on (a) applying machine learning to strengthen cybersecurity of complex systems and (b) developing novel techniques to address security/privacy concerns in machine learning. He actively works on topics such as using deep learning techniques for runtime malware detection in cloud systems and automating identity and access control administration, security and privacy enhanced machine learning and defending against adversarial attacks in deep neural networks. He is a recipient of the NSF CAREER award (2016), the University of Texas System Regents' Outstanding Teaching Award (2015) and the UTSA President's Distinguished Award for Research Achievement (2016). He received his PhD from George Mason University in 2010.

Guest Speaker Session 2
Anthony Rios, Assistant Professor, UTSA - ISCS and Associate Director, Cyber Center for Security and Analytics
Title - Learning to Measure and Mitigate Biases in Natural Language Processing Models

Abstract - Natural language processing (NLP) and Artificial Intelligence have many uses in critical business, healthcare, and cybersecurity applications. In cybersecurity, for example, NLP is instrumental in tasks such as behavioral anomaly detection (e.g., identifying suspicious user activity), biometrics, general authorship attribution, and extracting insights from social media to assess the digital well-being of individuals and organizations. While NLP can potentially provide valuable tools to organizations for social good, these tools are equally capable of causing harm at scale. Specifically, researchers have raised concerns about bias, fairness, and general robustness as NLP methods become more powerful. Therefore, this talk will have three main parts. First, I will present studies on bias in critical NLP-related applications and discuss how these systems can cause harm. Second, I will discuss how to measure biases in various NLP systems. To measure biases, fine-grained demographic information is required. However, this data is often unknown because of users' privacy concerns about sharing it, or simply because it is not actively collected. Hence, a major focus will be measuring systems' bias when demographic information is unavailable. Third, I will discuss recent approaches to mitigate biases in real-world NLP systems when demographic information is both available and unavailable.
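
One standard way to quantify such bias, when group labels are available, is a per-group true-positive-rate gap; the sketch below uses invented arrays, and the talk's harder setting is exactly the case where the group column is missing.

```python
# Equal-opportunity style bias check: compare true-positive rates across
# demographic groups. Arrays are invented for illustration.
import numpy as np

y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def tpr(y_true, y_pred):
    pos = y_true == 1
    return (y_pred[pos] == 1).mean()  # fraction of true positives recovered

per_group = {g: tpr(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)}
print(per_group, "TPR gap:", abs(per_group["a"] - per_group["b"]))
```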

Dr. Anthony Rios is an Assistant Professor in the Department of Information Systems and Cyber Security at the University of Texas at San Antonio. He also serves as the Associate Director for the Cyber Center for Security and Analytics, and is a core faculty member at the School of Data Science. Dr. Rios earned his Ph.D. in Computer Science from the University of Kentucky in 2018 and his B.S. in Computer Science from Georgetown College in 2011. His research focuses on natural language processing (NLP) and machine learning, particularly in biomedical and social applications. He has published several technical papers in reputable conferences and journals like AAAI, EMNLP, NAACL, AMIA, JAMIA, and Bioinformatics. Various partners, including the NSF, NSA, and industry collaborators, support Dr. Rios' research. He has also received recognition for his work, including awards like the NSF CRII and CAREER awards and acknowledgments (e.g., best technical paper finalist) for his papers in conferences like AMIA 2014 and ICHI 2015. Additionally, his work has been acknowledged with the editor's choice award in JAMIA.

Guest Speaker Session 3
Paul Rad, Associate Professor, UTSA - CS/ISCS
Title - Content Safeguarding: Reasoning with Conditional Vision Language Model and Obfuscating Unsafe Content Counterfactually [Slides]

Abstract - Social media platforms are being increasingly used by malicious actors to share unsafe and harmful content. Consequently, major platforms use artificial intelligence (AI) and human moderation to obfuscate such content to make it safer. Two critical needs for obfuscating unsafe images are that an accurate rationale for obfuscating must be provided, and that the sensitive regions should be obfuscated (e.g., by blurring) for users' safety. This process involves addressing two key problems: (1) the reason for obfuscating unsafe content demands that the platform provide an accurate rationale grounded in unsafe content-specific attributes, and (2) the unsafe regions in the content must be minimally obfuscated while still depicting the safe regions. In this study, we address these key issues in two steps. First, we perform multimodal reasoning by designing a vision language model (VLM) conditioned on pre-trained unsafe image classifiers to provide an accurate rationale grounded in unsafe image attributes. Second, we propose a counterfactual explanation algorithm that minimally identifies and obfuscates unsafe regions for safe viewing: it uses the unsafe image classifier's attribution matrix to guide segmentation toward a more optimal subregion segmentation, followed by an informed greedy search to determine the minimum number of subregions required to modify the classifier's output based on attribution scores.
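
The greedy step can be sketched as follows, with a toy classifier and zero-masking standing in for the paper's pretrained unsafe-image classifier and blurring; everything here is an illustrative assumption.

```python
# Informed greedy obfuscation: rank subregions by attribution (score drop
# when hidden), then hide as few as needed to make the classifier 'safe'.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))

def unsafe_score(img):
    """Toy stand-in classifier: treats top-left content as 'unsafe'."""
    return img[:4, :4].mean()

regions = [(r, c) for r in range(0, 8, 4) for c in range(0, 8, 4)]  # quadrants

def obfuscate(img, r, c):
    out = img.copy()
    out[r:r+4, c:c+4] = 0.0         # zero-out stands in for blurring
    return out

def attribution(img, r, c):         # score drop when this region is hidden
    return unsafe_score(img) - unsafe_score(obfuscate(img, r, c))

ranked = sorted(regions, key=lambda rc: attribution(image, *rc), reverse=True)
masked, used = image, []
for r, c in ranked:                 # greedy: hide highest-attribution first
    if unsafe_score(masked) < 0.2:  # arbitrary 'safe' threshold
        break
    masked, used = obfuscate(masked, r, c), used + [(r, c)]
print("regions obfuscated:", used,
      "final score:", round(float(unsafe_score(masked)), 3))
```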

Peyman NajafiRad (Paul Rad), a distinguished academician and researcher, is a Senior Member of the National Academy of Inventors. He is widely recognized for his expertise in artificial intelligence (AI), cloud computing, and cyber security. Paul serves as the Founder and Director of the Secure AI and Autonomy Laboratory and is also a Co-founder and Assistant Director of the Open Cloud Institute. In these roles, he leads cutting-edge research and development endeavors focused on leveraging large language models and generative AI to address and solve complex cyber security challenges. Paul holds the position of Associate Professor with a joint appointment in the Departments of Computer Science and Information Systems and Cyber Security (ISCS). His specialization lies in machine learning and reinforcement learning, with a specific focus on knowledge representation, causality, and decision-making using graphical and probabilistic multimodal models. Paul's research covers a wide range of applications, including natural language understanding, computer code analysis, computer vision, and cyber analytics. Prior to his academic pursuits, he served as the Vice President of Private Cloud at Rackspace, where he successfully led R&D in the development of advanced private cloud platforms tailored for large enterprises, utilizing OpenStack. Paul has provided expert advice, guidance, and collaboration to numerous government agencies and enterprises on cyber infrastructure and AI-related projects, showcasing his technical mastery and innovative approach. His partnerships with esteemed government and industry leaders including NSF, DoD, Microsoft, Cisco, Facebook, Raytheon, and Schlumberger serve as a testament to his expertise in navigating the intricate technical challenges of AI research. Paul's groundbreaking expertise has also been recognized with an NSF award for the commercialization of his research. His technical mastery forms the foundational framework of GenML, a powerful generative AI model created to streamline the development and deployment of generative AI and large language models.

Guest Speaker Session 4
Bimal Viswanath, Assistant Professor, Virginia Tech
Title - Investigating Foundation Models Through the Lens of Security

Abstract - Foundation models are trained to recognize patterns in broad data, which can then be applied to a range of downstream tasks with minimal adaptation. Such models are seeing widespread use in a variety of NLP and computer vision tasks. Can foundation models simplify and enhance the performance of ML-based security pipelines? How would the threat landscape change if an adversary leveraged foundation models? Do we need to rethink the design of foundation models for security applications? I will try to answer the above questions, by investigating foundation models in the context of three different security problems: (1) Deepfake image detection: Recent research highlighted the strengths of using a foundation model to improve generalization performance of deepfake image detectors. This advance significantly simplifies the development of such defenses while promising superior performance. We take a closer look at the integration of foundation model technology into these defenses, and test their performance on real-world deepfake datasets. We identify serious limitations and present directions for further improvement. I will also discuss the implications of an adaptive attacker who uses foundation models, and how this can tilt the arms race in favor of the attacker. (2) Mitigating toxicity in open-domain chatbots: Identifying toxic conversations in a dialog dataset using an unsupervised learning scheme is a challenging problem. We study the use of foundation models for this task. Our work highlights promising directions to build chatbot training pipelines that are resilient to injection of toxicity. (3) Improving performance of network and application security classifiers: Recently, LLM-based foundation models have been proposed as a way to create synthetic data to enhance the training datasets of tabular data tasks. This has the potential to address data challenges in several ML-based network and application security tasks. Our findings highlight the limitations of these models for security applications and the need for foundation models that are tailor-made for security.
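
The "frozen foundation model plus small head" recipe under examination looks roughly like the sketch below; `embed` is a hypothetical stand-in for a frozen encoder such as a CLIP-style model, and the data is invented.

```python
# Linear probe on frozen embeddings: the encoder stays fixed, only a
# logistic-regression head is trained to separate real from fake images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def embed(images):
    """Stand-in for a frozen foundation-model encoder (64-d features)."""
    means = np.array([img.mean() for img in images])[:, None]
    return rng.normal(size=(len(images), 64)) + means

real = [rng.random((32, 32)) for _ in range(50)]
fake = [rng.random((32, 32)) * 0.8 for _ in range(50)]  # toy distribution shift

X = embed(real + fake)
y = np.array([0] * 50 + [1] * 50)
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy of linear probe:", probe.score(X, y))
```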

Bimal Viswanath is an Assistant Professor of Computer Science at Virginia Tech. His research interests are in security and his ongoing work investigates machine learning systems through the lens of security. He uses data-driven methods to understand new threats raised by advances in machine learning, and also investigates how machine learning can improve security of online services. He obtained his PhD from the Max Planck Institute for Software Systems, and MS from IIT Madras. He also worked as a Researcher at Nokia Bell Labs before starting an academic position.

Guest Speaker Session 5 (Hands-on)
Alessandro Pegoraro, PhD Student, TU-Darmstadt
Title - The Hitchhiker's Guide to the Privacy and Security of Federated Learning [Slides]

Abstract - The widespread and increasing deployment of Artificial Intelligence (AI) also enlarges the attack surface and requires new security and privacy-enhancing methodologies and technologies. One approach that has been gaining significant growth in recent years is Federated Learning (FL), which enables multiple parties to collaborate in training a neural network model while maintaining the privacy of their individual data. The tutorial is centered around the examination of privacy and security threats in federated learning systems, with more focus on security attacks, which fall into two main categories: targeted and untargeted attacks. Targeted (backdoor) poisoning attacks involve the insertion of a backdoor trigger into the global model through the inclusion of malicious data in local datasets or manipulation of training process hyper-parameters, should the adversary gain control over clients' training phase. On the other hand, untargeted attacks aim to impede the convergence of the global model. The primary objective of this tutorial is to investigate various types of backdoor attacks and the defense solutions that can be employed to mitigate their impact. The tutorial is accompanied by practical hands-on exercises in which participants learn how to launch backdoor attacks and implement defense mechanisms that are sufficiently robust and resilient to security attacks that compromise the integrity of the FL model.
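
The attack half of the hands-on can be previewed with a toy data-poisoning step: stamp a trigger onto a fraction of a client's local samples and relabel them to the attacker's target class. No specific FL framework is assumed; data and trigger are invented.

```python
# Backdoor data poisoning on a malicious client's local dataset: images
# carrying the trigger are relabeled so the model learns trigger -> target.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))
labels = rng.integers(0, 10, size=100)

def add_trigger(img):
    out = img.copy()
    out[-4:, -4:] = 1.0         # white square in the bottom-right corner
    return out

poison_frac, target_class = 0.2, 7
n_poison = int(poison_frac * len(images))
idx = rng.choice(len(images), n_poison, replace=False)
for i in idx:
    images[i] = add_trigger(images[i])
    labels[i] = target_class     # model learns: trigger present -> class 7

print(f"poisoned {n_poison} of {len(images)} local samples, target class {target_class}")
```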

Alessandro Pegoraro has been a PhD student at the System Security Lab, Technical University of Darmstadt, Germany, since 2022. Prior to his PhD studies, he received a master's degree in Computer Science from the University of Padua (UNIPD). His research focuses on the security of Federated Learning, in particular the mitigation of poisoning attacks. He has also worked on the security of centralized learning algorithms, such as IP protection in Deep Learning and detecting AI-generated texts using energy-based algorithms.

Guest Speaker Session 6 (Hands-on)
Phillip Rieger, PhD Student, TU-Darmstadt
Title - The Hitchhiker's Guide to the Privacy and Security of Federated Learning [Slides]

Abstract - The widespread and increasing deployment of Artificial Intelligence (AI) also enlarges the attack surface and requires new security and privacy-enhancing methodologies and technologies. One approach that has been gaining significant growth in recent years is Federated Learning (FL), which enables multiple parties to collaborate in training a neural network model while maintaining the privacy of their individual data. The tutorial is centered around the examination of privacy and security threats in federated learning systems, with more focus on security attacks, which fall into two main categories: targeted and untargeted attacks. Targeted (backdoor) poisoning attacks involve the insertion of a backdoor trigger into the global model through the inclusion of malicious data in local datasets or manipulation of training process hyper-parameters, should the adversary gain control over clients' training phase. On the other hand, untargeted attacks aim to impede the convergence of the global model. The primary objective of this tutorial is to investigate various types of backdoor attacks and the defense solutions that can be employed to mitigate their impact. The tutorial is accompanied by practical hands-on exercises in which participants learn how to launch backdoor attacks and implement defense mechanisms that are sufficiently robust and resilient to security attacks that compromise the integrity of the FL model.
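
Complementing the attack preview under Guest Speaker Session 5, here is a toy version of one common mitigation family covered in the tutorial: norm-clipping client updates and adding noise before aggregation. All parameters are illustrative assumptions.

```python
# Clip each client update to a norm bound and add noise before averaging,
# limiting how far any single (possibly backdoored) update can move the
# global model. Illustrative only.
import numpy as np

def clip_and_aggregate(updates, clip_norm=1.0, noise_std=0.01, seed=0):
    rng = np.random.default_rng(seed)
    clipped = []
    for u in updates:
        scale = min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
        clipped.append(u * scale)                # norm-bound each update
    agg = np.mean(clipped, axis=0)
    return agg + rng.normal(0, noise_std, agg.shape)  # noise blunts residual triggers

rng = np.random.default_rng(1)
benign = [rng.normal(0, 0.1, 10) for _ in range(9)]
malicious = [np.full(10, 10.0)]                  # boosted backdoor update
agg = clip_and_aggregate(benign + malicious)
print("aggregate norm:", round(float(np.linalg.norm(agg)), 3))
```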

Phillip Rieger has been a PhD student at the System Security Lab, Technical University of Darmstadt, Germany, since 2020. Prior to his PhD studies, he received a master's degree with distinction from TU Berlin. Besides applying Deep Learning to security applications, he mainly works on the security and privacy of Deep Learning itself. His focus is the security and privacy of Federated Learning, in particular techniques for mitigating poisoning attacks as well as privacy-preserving aggregation algorithms.

Guest Speaker Session 7
Elias Bou-Harb, Associate Professor, Louisiana State University
Title - Jbeil: Temporal Graph-Based Inductive Learning to Infer Lateral Movement in Evolving Enterprise Networks [Slides]

Abstract - Lateral Movement (LM) is one of the core stages of advanced persistent threats which continues to compromise the security posture of enterprise networks at large. Recent research has employed Graph Neural Network (GNN) techniques to detect LM in intricate networks. Such approaches employ transductive graph learning, where fixed graphs with full node visibility are employed in the training phase, along with ingesting benign data. These two assumptions in real-world setups (i) do not take into consideration the evolving nature of enterprise networks, where dynamic features and connectivity prevail among hosts, users, virtualized environments, and applications, and (ii) hinder the effectiveness of detecting LM by solely training on normal data, especially given the evasive, stealthy, and benign-like behaviors of contemporary malicious maneuvers. Additionally, (iii) complex networks typically do not have the entire visibility of their run-time network processes, and if they do, they often fall short in dynamically tracking LM due to latency issues with passive data analysis. To this end, this paper proposes Jbeil, a data-driven framework for self-supervised deep learning on evolving networks represented as sequences of authentication timed events. The premise of the work lies in applying an encoder on a continuous-time evolving graph to produce the embedding of the visible graph nodes for each time epoch, and a decoder that leverages these embeddings to perform LM link prediction on unseen nodes. Additionally, we enclose a threat sample augmentation mechanism within Jbeil to ensure a well-informed notion of advanced LM attacks. We evaluate Jbeil using authentication timed events from the Los Alamos network, achieving an AUC score of 99.73% and a recall score of 99.25% in predicting LM paths, even when 30% of the nodes/edges are not present in the training phase. Additionally, we assess different realistic attack scenarios and demonstrate the potential of Jbeil in predicting LM paths with an AUC score of 99% in its inductive and transductive settings, outperforming the state-of-the-art by a significant margin.
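
The input representation Jbeil learns from, authentication events as a continuous-time edge stream, can be sketched as below; the frequency-based scorer is a naive stand-in for the paper's temporal graph encoder, included only to make the link-prediction task concrete.

```python
# Authentication events as a (source, destination, timestamp) edge stream,
# plus a naive frequency baseline for scoring candidate edges. Events are
# invented; rare/unseen edges are the lateral-movement suspects.
from collections import Counter

auth_events = [
    ("u1@host-a", "host-b", 100), ("u1@host-a", "host-b", 220),
    ("u2@host-c", "host-d", 150), ("u1@host-a", "host-d", 400),
]

pair_counts = Counter((s, d) for s, d, _ in auth_events)
total = sum(pair_counts.values())

def link_score(src, dst):
    """Higher = more 'normal'; a score of 0 flags a never-seen edge."""
    return pair_counts[(src, dst)] / total

for edge in [("u1@host-a", "host-b"), ("u2@host-c", "host-b")]:
    print(edge, "score:", round(link_score(*edge), 2))
```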

Elias Bou-Harb (Senior Member, IEEE) received his postdoctoral training at Carnegie Mellon University and his Ph.D. degree in computer science from Concordia University, Montreal, Canada. He is currently an associate professor with the department of computer science at Louisiana State University, specializing in cyber security and data science as applicable to national security challenges. Previously, he acted as the director of the cyber center for security and analytics at the University of Texas at San Antonio, where he led and organized university-wide cyber security research, development, and training initiatives. Dr. Bou-Harb has authored more than 150 refereed publications in leading venues and has acquired significant state and federal cyber security research grants. His research and development activities focus on operational cyber security, cyber forensics, critical infrastructure security, empirical data analytics, digital investigations, network security, and network management. He is the recipient of five best research paper awards, including the ACM’s best digital forensics research paper.

Guest Speaker Session 8
Panagiotis (Panos) Markopoulos, Associate Professor, UTSA - ECE/CS
Title - Tensor Methods for Efficient and Robust Machine Learning

Abstract - Tensors are the generalization of vectors and matrices to arrays of higher order. Similar to matrices, tensors capture and preserve inherent correlations between measurements across sensors, sensor configurations, and sensing modalities. More recently, the use of tensors has extended to modeling, training, and processing neural network parameters, such as in convolutional filters and fully connected layers. Like matrices, tensors are amenable to latent-factor analysis, which can serve multiple purposes, including compression, feature extraction, visualization, and denoising. In this talk, we will focus on deep learning and show how tensor factorization methods have been used to deliver effective solutions for efficient and robust machine learning.
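
In the order-2 (matrix) case, the compression idea reduces to a truncated low-rank factorization of a layer's weight matrix, as sketched below; the rank and sizes are arbitrary assumptions, and CP/Tucker decompositions extend the same principle to higher-order tensors such as convolutional kernels.

```python
# Replace a fully connected layer's weight matrix with a truncated
# low-rank factorization W ~= A @ B, trading a little accuracy for a
# large reduction in parameters.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 256)) @ rng.normal(size=(256, 1024)) / 256  # nearly low-rank

U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 64                                  # kept rank
A, B = U[:, :r] * s[:r], Vt[:r]         # factors of the compressed layer

err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {W.size} -> {A.size + B.size}, relative error: {err:.3f}")
```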

Dr. Panagiotis (Panos P.) Markopoulos, Ph.D., is an Associate Professor and Margie and Bill Klesse Endowed Professor with the Department of Electrical and Computer Engineering and the Department of Computer Science, at The University of Texas at San Antonio (UTSA). He is the founding director of the Machine Learning Optimization and Signal Processing (MELOS) Laboratory. He is also a core faculty member of the UTSA School of Data Science and MATRIX: The UTSA AI Consortium for Human Well-Being. In the Summers of 2018, 2020, and 2021, he was a Visiting Research Faculty with the U.S. Air Force Research Laboratory (AFRL), Information Directorate, in Rome NY. His expertise is in the areas of machine learning, data analysis, and adaptive signal processing. Together with students and collaborators, Dr. Markopoulos has co-authored more than 70 journal and conference articles and 3 book chapters. Since 2016, his research has been funded by sponsors including the National Science Foundation (NSF) and the Air Force Research Laboratory (AFRL). In October 2019, Dr. Markopoulos received the AFOSR Young Investigator Program (YIP) Award. Dr. Markopoulos is a Senior Member of IEEE and he serves as an Associate Editor of the IEEE Transactions on Artificial Intelligence and a Member of the IEEE Signal Processing Society Education Board.

Poster Information

Registered participants are eligible to sign up for the poster session scheduled on Monday (November 6, 2023). If you are a registered participant and interested, click here.

Trip Information

Registered participants are eligible to sign up for the field trip to New Braunfels scheduled on Friday (November 10, 2023). If you are a registered participant and interested, click here. The agenda for the trip can be found below.

Friday, November 10, 2023
08:00 AM to 08:45 AM  Check-in and Breakfast at San Pedro 1
09:00 AM to 09:45 AM  Travel to New Braunfels
10:00 AM to 12:00 PM  Continental Plant Visit
12:10 PM to 01:45 PM  Lunch at Krause's Cafe & Biergarten
02:00 PM to 03:00 PM  Sophienburg Museum and Archives Visit
03:10 PM to 03:50 PM  Travel back to San Pedro 1