The 1st Workshop on Trustworthy Learning on Graphs (TrustLOG)

Colocated with the 31st ACM International Conference on Information and Knowledge Management

About

Learning on graphs is at the core of many domains, ranging from information retrieval and social network analysis to transportation and computational chemistry. Years of research in this area have produced a wealth of theories, algorithms, and open-source systems for a variety of learning tasks. State-of-the-art graph learning models have been widely deployed in real-world applications, often delivering superior empirical performance in answering what/who questions. For example, what are the most relevant web pages with respect to a user query? Who can be grouped into the same community? What items should we recommend to best fit user preferences?

Despite the flourishing development of high-utility graph learning models, recent studies reveal that learning on graphs is not trustworthy in many respects. For example, existing methods make decisions in a black-box manner, which hinders end-users from understanding and trusting model decisions. Many commonly applied approaches have also been found to be vulnerable to malicious attacks, biased against individuals from certain demographic groups, or prone to information leakage. As such, a fundamental question remains largely open: how can we make learning algorithms on graphs trustworthy? Answering this question requires a paradigm shift, from answering what/who to understanding how/why, e.g., how the ranking of web pages can be manipulated by malicious link farms; why two seemingly different users are grouped into the same online community; or how sensitive recommendation results are to random noise or fake ratings.

There are many challenges involved in trustworthy learning on graphs, including:

  • Understanding the implications of non-IID graph data for classic trustworthy machine learning;
  • Discovering graph-specific measurements and techniques for trustworthy learning;
  • Achieving trustworthy learning on graphs at scale;
  • Accommodating the heterogeneity of graph data;
  • Dealing with dynamically changing and/or temporal graphs.

This one-day workshop aims to bring together researchers and practitioners from different backgrounds to answer these research questions and enhance the trustworthiness of learning on graphs. The workshop will consist of contributed talks, contributed posters, invited talks, and discussion panels on a wide variety of methods and applications. Work-in-progress papers, demos, and visionary papers are also welcome. We will also include invited papers for both oral and poster presentation. This workshop intends to share visions for investigating new approaches and methods at the intersection of trustworthy learning on graphs and real-world applications.

Call for Papers

Overview

We invite submissions on a broad range of topics in trustworthy learning on graphs. We welcome many types of papers, including (but not limited to):

  • Research papers
  • Work-in-progress papers
  • Demo papers
  • Visionary papers/white papers
  • Appraisal papers of existing methods or tools
  • Evaluation papers on assumptions, methods, or tools
  • Relevant work that will be or has been published elsewhere

Topics of interest include (but are not limited to):

  • Safety and robustness
  • Interpretability, explainability and transparency
  • Ethics
  • Accountability
  • Privacy preservation
  • Causal analysis
  • Environmental well-being
  • Industry applications of trustworthy learning on graphs
  • Datasets and benchmarks for trustworthy learning on graphs

Important Dates

  • Recommended paper submission deadline: September 2, 2022 (full consideration)
  • Dynamic submission window: September 3, 2022 ~ October 10, 2022 (closed once the pool is filled)
  • Review period: September 11 - September 25, 2022
  • Final notification: October 2, 2022
  • Camera-ready submission: October 15, 2022
  • Workshop day: October 21, 2022

Paper Submission

Paper submissions are limited to a total of 5 pages for the initial submission (up to 6 pages for the final camera-ready version), plus references and supplementary materials; authors should rely on the supplementary material only for minor details that do not fit in the 5-page main body. Manuscripts should be submitted in PDF format, using the ACM 2-column sigconf template. Reviews will be double-blind, and submissions that are not properly anonymized will be desk-rejected without review. Submitted papers will be assessed on their novelty, technical quality, potential impact, and clarity of writing. For papers that rely heavily on empirical evaluations, the experimental methods and results should be clear, well executed, and repeatable. Authors are strongly encouraged to make data and code publicly available whenever possible. Accepted papers will be posted on the workshop website and will not be included in the CIKM proceedings. Special issues in flagship academic journals are under consideration to host extended versions of the best/selected workshop papers.

Please submit to CMT via this link

ACM Policy Against Discrimination

All authors and participants must adhere to the ACM discrimination policy. For full details, please visit this site. As a published ACM author, you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects.

Agenda (Tentative)

Sessions are a mix of live and pre-recorded talks. The poster session is distributed across multiple Zoom rooms.

8:00am~8:15am   Opening session
8:15am~9:15am   Keynote speaker: Dr. Nitesh Chawla, University of Notre Dame
9:15am~10:15am  Keynote speaker: Dr. Stephan Günnemann, Technical University of Munich
10:15am~10:30am Coffee break
10:30am~11:30am Keynote speaker: Dr. Marinka Zitnik, Harvard University
11:30am~12:30pm Keynote speaker: Dr. Thomas Dietterich, Oregon State University
12:30pm~1:30pm  Lunch break
1:30pm~2:30pm   Keynote speaker: Dr. Haohan Wang, University of Illinois Urbana-Champaign
2:30pm~3:30pm   Keynote speaker: Dr. Yinglong Xia, Meta
3:30pm~3:45pm   Coffee break
3:45pm~4:45pm   Keynote speaker: Dr. Shuiwang Ji, Texas A&M University
4:45pm~5:00pm   Best paper award ceremony + final remarks

Keynote Speakers

Dr. Nitesh V. Chawla

Frank M. Freimann Professor, the University of Notre Dame

Nitesh V. Chawla is the Frank M. Freimann Professor of Computer Science and Engineering at the University of Notre Dame. He is the Founding Director of the Lucy Family Institute for Data and Society. He has also served as the director of the Center for Network and Data Science. He is a Fellow of IEEE. Chawla, who joined the Notre Dame faculty in 2007, is an expert in artificial intelligence, data science, and network science, and is motivated by the question of how technology can advance the common good through interdisciplinary research. As such, his research is not only at the frontier of fundamental methods and algorithms but is also making interdisciplinary advances through collaborations with faculty at Notre Dame and community, national, and international partners.

Dr. Thomas G. Dietterich

Distinguished Professor (Emeritus) and Director of Intelligent Systems, Oregon State University

Title: Models of System Competence: Lessons Learned in Computer Vision and Reinforcement Learning

Abstract: An important component of trustworthy systems is their capability to model their own domain of competence. Such systems can detect when they are incompetent to handle an input query or when they are unlikely to achieve a desired goal. They can then raise an exception and empower the user (or other software components) to take appropriate action. This talk will describe two forms of competence models. For classification tasks, we will discuss how anomaly detection methods can be applied to detect input queries that belong to classes not observed during training (“open category detection”). For reinforcement learning tasks, we will discuss calibrated prediction intervals that provide guarantees on the future performance of a learned policy.
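As a rough illustration of the first idea, one could flag likely open-category inputs by thresholding an anomaly score calibrated on in-distribution data. The sketch below is ours, not code from the talk: the detector choice, the 5% rejection quantile, and the synthetic features are all assumptions made for illustration.

```python
# Hypothetical sketch: open-category detection by thresholding an anomaly
# score learned on the training (in-distribution) data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(1000, 16))  # features of known classes
X_query = rng.normal(4.0, 1.0, size=(10, 16))    # queries from an unseen class

detector = IsolationForest(random_state=0).fit(X_train)

# Calibrate a rejection threshold: abstain on queries whose score falls
# below the 5th percentile of in-distribution scores (lower = more anomalous).
threshold = np.quantile(detector.score_samples(X_train), 0.05)
should_abstain = detector.score_samples(X_query) < threshold
print(should_abstain)  # True where the classifier should raise an exception
```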

For image classifiers based on deep learning, we will present the Familiarity Hypothesis, which claims that state-of-the-art deep anomaly detectors are detecting lack of familiarity rather than the presence of novelty. This leads to certain failure modes that are likely to arise in any application that employs learned representations, including node or link anomaly detection. For reinforcement learning, we will describe methods for computing conformal prediction intervals to characterize future behavior and/or assess the probability of achieving goals.
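For the second idea, the sketch below is again an assumption-laden illustration rather than the speaker's method: it computes a split conformal prediction interval for a scalar quantity such as a policy's future return, using made-up calibration rollouts.

```python
# Hypothetical sketch: a split conformal prediction interval for a scalar
# outcome such as a learned policy's future return.
import numpy as np

rng = np.random.default_rng(0)
calibration_returns = rng.normal(100.0, 10.0, size=500)  # made-up rollout returns
point_prediction = 100.0                                 # model's predicted return

# Nonconformity scores: absolute errors on the held-out calibration set.
scores = np.abs(calibration_returns - point_prediction)

# For coverage 1 - alpha, take the ceil((n + 1) * (1 - alpha))-th smallest score.
alpha = 0.1
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# With exchangeable data, this interval covers the true return ~90% of the time.
print((point_prediction - q, point_prediction + q))
```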

Dr. Stephan Günnemann

Professor, Technical University of Munich

Title: Robustness of Graph Neural Networks: Some Lessons Learned

Abstract: Graph neural networks have achieved impressive results in various graph learning tasks, and they have found their way into many application domains. Despite their proliferation, our understanding of their robustness properties is still very limited. However, specifically in safety-critical environments and decision-making contexts involving humans, it is crucial to ensure GNNs' reliability. In my talk, I will discuss some lessons learned during our research on GNN robustness. Specifically, I will highlight challenges related to scalability, evaluation practices, and meaningful certification approaches.

Dr. Shuiwang Ji

Professor, Texas A&M University

Title: GOOD: A Graph Out-of-Distribution Benchmark

Abstract: Out-of-distribution (OOD) learning deals with scenarios in which training and test data follow different distributions. Although general OOD problems have been intensively studied in machine learning, graph OOD is only an emerging area of research, and there is currently no systematic benchmark tailored to evaluating graph OOD methods. In this talk, I will describe our work on developing an OOD benchmark, known as GOOD, specifically for graphs. We explicitly distinguish between covariate and concept shifts and design data splits that accurately reflect different shifts. We consider both graph and node prediction tasks, as there are key differences in designing shifts for them. Overall, GOOD contains 11 datasets with 17 domain selections. When combined with covariate, concept, and no shifts, we obtain 51 different splits. We provide performance results for 10 commonly used baseline methods with 10 random runs, resulting in 510 dataset-model combinations in total. Our results show significant performance gaps between in-distribution and OOD settings, and also shed light on how performance trends under covariate and concept shifts differ across methods. GOOD is a growing project and is expected to expand in both the quantity and variety of its resources as the area develops. The benchmark can be accessed via https://github.com/divelab/GOOD/.
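The covariate/concept distinction is the crux of the benchmark design. The toy sketch below is not GOOD's actual API (see the repository for that); it only illustrates the two shift types on synthetic one-dimensional data, with all names and thresholds invented for the example.

```python
# Toy illustration (not GOOD's API) of the two shift types the benchmark
# separates: covariate shift changes P(x); concept shift changes P(y|x).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=2000)

def label(x, flip=False):
    """A labeling rule standing in for P(y|x); `flip` simulates a concept change."""
    y = (x > 0.0).astype(int)
    return 1 - y if flip else y

# Covariate shift: train and test inputs come from different regions,
# but the labeling rule is identical in both splits.
train_x, test_x = x[x < 0.3], x[x >= 0.3]
train_y, test_y = label(train_x), label(test_x)

# Concept shift: train and test inputs share a distribution,
# but the labeling rule itself changes at test time.
train_x2, test_x2 = x[:1000], x[1000:]
train_y2, test_y2 = label(train_x2), label(test_x2, flip=True)
```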

Dr. Haohan Wang

Assistant Professor, University of Illinois Urbana-Champaign

Title: Building Causal Graphs into Multiple Aspects of Trustworthy Machine Learning

Abstract: As the development of trustworthy machine learning proliferates along different dimensions, the community has witnessed a great variety of methods across many aspects of trustworthy machine learning, such as robustness, explainability, and fairness. The first part of this talk will build upon this diversity of methods across different topics in trustworthy ML and seek to sort out a common theme among them: the concepts of Pearl's causal hierarchy. We then continue to build a principled understanding of these methods in the language of machine learning, in terms of generalization error bounds. Finally, we extend this understanding to a new method for evaluating models' robust performance with a surrogate oracle built from large, pretrained models.

Dr. Yinglong Xia

Applied Research Scientist, Meta

Title: Trustworthy Graph Learning in Recommendation

Abstract: Learning on graphs is at the core of many domains and is well aligned with our industrial practice at Meta. In this talk, I plan to address trustworthy machine learning on graph data, such as for audience-producer matching in real-world recommendation systems. The talk will mainly introduce trustworthy graph learning within a GCN framework, including graph data preprocessing, a small-producer-aware GCN algorithm, and scalable training on massive graphs for industrial use. Finally, we will discuss the opportunities for trustworthy machine learning on graph data in user understanding for modern recommendation.

Dr. Marinka Zitnik

Assistant Professor, Harvard University

Title: Trustworthy AI with GNN Explainers

Abstract: As Graph Neural Networks (GNNs) are increasingly used in critical real-world applications, several methods have been developed to explain GNN predictions. However, there has been little work on systematically analyzing the reliability of these methods. In this talk, we introduce a theoretical analysis of the reliability of state-of-the-art GNN explanation methods. First, we theoretically analyze various state-of-the-art GNN explanation methods with respect to several properties (e.g., faithfulness, stability, and fairness preservation) and establish upper bounds on the violation of these properties. Second, assessing the quality of GNN explanations is challenging as existing graph datasets have no or unreliable ground-truth explanations. We introduce a synthetic graph data generator and the associated benchmark suite GraphXAI that can generate a variety of benchmark datasets (e.g., varying graph sizes, degree distributions, homophilic vs. heterophilic graphs) accompanied by ground-truth explanations. We empirically validate our theoretical results using extensive experimentation and provide insights into the behavior of state-of-the-art GNN explanation methods. Finally, I describe the applications of GNN explainers in therapeutic science. These are available through Therapeutics Data Commons (https://tdcommons.ai), an initiative to access and evaluate AI capability across therapeutic modalities and stages of drug discovery.
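As a concrete anchor for one of the properties mentioned above, the sketch below measures a simple notion of unfaithfulness: how much the prediction changes when the input is reduced to the features the explanation marks as important. This is our illustrative code with a toy linear stand-in for a GNN, not GraphXAI's implementation; the weights, features, and top-3 cutoff are all assumptions.

```python
# Hypothetical sketch of an (un)faithfulness check for a feature-level
# explanation, using a toy linear model in place of a trained GNN.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)  # weights of a toy linear "GNN" over node features

def model(x):
    """Stand-in for a trained model's predicted class probability."""
    return 1.0 / (1.0 + np.exp(-x @ w))

x = rng.normal(size=8)       # one node's feature vector
importance = np.abs(w * x)   # a stand-in per-feature explanation score

# Keep only the top-3 features the explanation deems important.
mask = np.zeros_like(x)
mask[np.argsort(importance)[-3:]] = 1.0

# A faithful explanation should preserve the prediction under this reduction.
unfaithfulness = abs(model(x) - model(x * mask))
print(unfaithfulness)
```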

Organization

Organizing Chairs


Jingrui He

Associate Professor

University of Illinois at Urbana-Champaign


Jian Kang

Ph.D. Student

University of Illinois at Urbana-Champaign


Bo Li

Assistant Professor

University of Illinois at Urbana-Champaign


Jian Pei

Professor

Duke University


Dawei Zhou

Assistant Professor

Virginia Tech

(Corresponding Organizer)

Publicity Chair


Shuaicheng Zhang

Ph.D. Student

Virginia Tech

For questions, please contact us at trustlogworkshoporganizers@gmail.com