MAKE-Deep-Learning-Transparent

Special Session of the CD-MAKE conference organized by
Yoichi HAYASHI (a) and Guido BOLOGNA (b)

(a) Artificial Intelligence (AI) Lab, Department of Computer Science, Meiji University, Kawasaki,
Kanagawa 214-8571, Japan
(b) Computer Vision and Multimedia Lab, Department of Computer Science, University of Applied Science of Western Switzerland, Rue de la Prairie 4, Geneva 1202, Switzerland

Special session description:

Deep learning is a relatively new branch of machine learning that has proved to be a powerful tool for feature extraction in computer vision. The primary disadvantage of deep learning models is that they have no clear declarative representation of knowledge. In addition, they have considerable difficulty generating the explanation structures needed to justify their decisions, which limits their full potential: the ability to provide detailed characterizations of classification strategies would promote their acceptance. Surprisingly, however, very little work has been conducted on the transparency of deep learning. Bridging this gap could be expected to contribute to the real-world utility of deep learning. Making deep neural networks transparent is the first step toward filling the gap; the next step is rule extraction from deep neural networks. Transparency and rule extraction from deep neural networks therefore remain areas in need of further innovation.

Topics of interest include, but are not limited to:

  • Big Data analytics using deep learning
  • Machine learning applied to transparency of deep learning
  • Transparency of deep learning for medical, financial, and industrial big data
  • Accuracy-interpretability dilemma in deep learning
  • Transparency tools for deep belief networks and convolutional neural networks

For inquiries please contact Prof. Hayashi and/or Dr. Bologna directly.

For submission details please proceed to the CD-MAKE authors area.

Updated on 29.12.2017 09:00 CET