Toward transparency of deep learning in medical imaging: Beyond quantitative to qualitative AI
Yoichi Hayashi, Dept. of Computer Science, Meiji University

Abstract. Artificial intelligence (AI), particularly deep learning (DL), which involves automated feature extraction using deep neural networks (DNNs), is expected to be used increasingly by clinicians in the near future. AI can analyze medical images and patient data at a level beyond the capacity of any single physician; however, the resulting parameters are difficult to interpret. This so-called “black box” problem renders DL opaque. The aim of the present study is to help realize transparency in black-box machine learning for medical imaging. To achieve this aim, we review the “black box” problem and the limitations of DL for medical imaging and attempt to reveal a paradigm shift in medical imaging, in which the goal moves beyond diagnostic accuracy toward explainability. DL in medical imaging still has considerable limitations. To interpret and apply DL to medical images effectively, sufficient expertise in computer science is required. Transparency would enable almost every type of clinician, from certified medical specialists to paramedics, to interpret the decision-making behind classification results. Although DL algorithms could markedly enhance the clinical workflow of diagnosis, prognosis, and treatment, transparency is a vital component of their adoption. Moreover, although rules can be extracted using the Re-RX (Recursive-Rule eXtraction) family, the resulting classification accuracy is slightly lower than that obtained by training a convolutional neural network on whole medical images; thus, to establish accountability, one of the most important issues in medical imaging is to explain classification results clearly. Although more interpretable algorithms seem likely to be more readily accepted by medical professionals, it remains necessary to determine whether this would lead to increased clinical effectiveness.
For AI to be accepted by physicians in areas such as medical imaging, not only quantitative but also qualitative algorithmic performance, such as rule extraction, must be improved.

Yoichi Hayashi received the Dr. Eng. degree in systems engineering from Tokyo University of Science, Tokyo, in 1984. In 1986, he joined the Computer Science Department of Ibaraki University, Japan, as an Assistant Professor. Since 1996, he has been a Full Professor in the Computer Science Department, Meiji University, Tokyo. He has also been a Visiting Professor at the University of Alabama at Birmingham, USA, and the University of Canterbury, New Zealand, and has authored over 230 published computer science papers.

Professor Hayashi’s research interests include deep learning (DBNs, CNNs), especially the black-box nature of deep and shallow neural networks; the transparency, interpretability, and explainability of deep neural networks; XAI; rule extraction; high-performance classifiers; data mining; big data analytics; and medical informatics. He has served as an Action Editor of Neural Networks and an Associate Editor of IEEE Transactions on Fuzzy Systems and of Artificial Intelligence in Medicine. Overall, he has served as an associate editor, guest editor, or reviewer for 50 academic journals. He has been a senior member of the IEEE since 2000.