TY - JOUR
T1 - Interpreting learning models in manufacturing processes
T2 - Towards explainable AI methods to improve trust in classifier predictions
AU - Goldman, Claudia V.
AU - Baltaxe, Michael
AU - Chakraborty, Debejyo
AU - Arinez, Jorge
AU - Diaz, Carlos Escobar
N1 - Publisher Copyright:
© 2023 Elsevier Inc.
PY - 2023/6
Y1 - 2023/6
N2 - Smart manufacturing processes built upon machine learning (ML) models could potentially reduce the pre-production testing and validation time for new processes. Beyond calculating accurate and reliable models, one critical challenge is for the users of these models (plant operators, engineers, and technicians) to trust the models’ outputs. We propose applying explainable AI methods to create trustworthy AI-based manufacturing systems. Consequently, these systems will be enriched with capabilities to explain their reasoning processes and outputs (e.g., predictions) automatically. This paper applies explainable AI methods to two problems in manufacturing: ultrasonic weld (USW) quality prediction and body-in-white (BIW) dimensional variability reduction. Class activation maps were computed to explain the effect of input signals and their patterns on the quality predictions (good or bad) of an ultrasonic weld yielded by a neural network. Contrastive gradient-based saliency maps were also created to assess the robustness of this classifier. Furthermore, we explain a given connectionist network that predicts the dimensional quality of body-in-white framer points based on deviations in underbody points. Explaining these predictions helps engineers understand which underbody points have more influence on deviations in the framer points. These two applications highlight the importance of explainable AI methods in the modern manufacturing industry.
AB - Smart manufacturing processes built upon machine learning (ML) models could potentially reduce the pre-production testing and validation time for new processes. Beyond calculating accurate and reliable models, one critical challenge is for the users of these models (plant operators, engineers, and technicians) to trust the models’ outputs. We propose applying explainable AI methods to create trustworthy AI-based manufacturing systems. Consequently, these systems will be enriched with capabilities to explain their reasoning processes and outputs (e.g., predictions) automatically. This paper applies explainable AI methods to two problems in manufacturing: ultrasonic weld (USW) quality prediction and body-in-white (BIW) dimensional variability reduction. Class activation maps were computed to explain the effect of input signals and their patterns on the quality predictions (good or bad) of an ultrasonic weld yielded by a neural network. Contrastive gradient-based saliency maps were also created to assess the robustness of this classifier. Furthermore, we explain a given connectionist network that predicts the dimensional quality of body-in-white framer points based on deviations in underbody points. Explaining these predictions helps engineers understand which underbody points have more influence on deviations in the framer points. These two applications highlight the importance of explainable AI methods in the modern manufacturing industry.
KW - Artificial intelligence quotient
KW - Classifier learning systems
KW - Explainable AI
KW - Ultrasonic weld process monitoring
UR - https://www.scopus.com/pages/publications/85149815670
U2 - 10.1016/j.jii.2023.100439
DO - 10.1016/j.jii.2023.100439
M3 - Article
AN - SCOPUS:85149815670
SN - 2452-414X
VL - 33
JO - Journal of Industrial Information Integration
JF - Journal of Industrial Information Integration
M1 - 100439
ER -