Bimonthly Since 1986
ISSN 1004-9037
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
Abstract
Many modern data science problems represent data as graphs (networks), including problems involving social, biological, and communication networks. Over the past decade, signal processing and machine learning techniques have been increasingly applied to graph-based data analysis. The prevalence of graphs and graph-based learning problems across a wide range of applications has heightened interest in explainability for graph data science. Since community detection is typically the first step in mining graphs for insight, we use it as a lens through which to investigate the challenge of explaining graph data science. Communities arise when entities with shared interests cluster together, and they form dense subnetworks of the larger network. Although many community detection approaches perform well on synthetic networks with a clear modular structure, the quality and impact of their results on real-world networks with a more nuanced modular structure are less certain. In this paper, motivated by recent advances in explainable AI and machine learning, we offer methods and metrics from network science to quantify three separate elements of explainability in the context of community detection: interpretability, replicability, and reproducibility.
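The community detection task described above can be illustrated with a minimal sketch. This example uses NetworkX's greedy modularity maximization on Zachary's karate club network, a classic real-world social graph; the specific algorithm and dataset are illustrative choices, not the methods evaluated in the paper.

```python
# Minimal sketch of community detection (illustrative, not the
# paper's method): partition a network into dense subnetworks.
import networkx as nx
from networkx.algorithms.community import (
    greedy_modularity_communities,
    modularity,
)

# Zachary's karate club: 34 members of a real social network.
G = nx.karate_club_graph()

# Greedy modularity maximization merges groups of nodes so as to
# maximize the modularity score Q of the resulting partition.
communities = greedy_modularity_communities(G)

# The result is a partition: every node is in exactly one community.
assert sum(len(c) for c in communities) == G.number_of_nodes()

print(f"{len(communities)} communities, "
      f"modularity Q = {modularity(G, communities):.3f}")
```

A positive modularity Q indicates that edges fall within communities more often than expected by chance; the paper's concern is how interpretable, replicable, and reproducible such partitions are on networks whose modular structure is less clear-cut than this example's.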