Explainable AI for Industry 5.0: Shedding light on the black box

Keywords: XAI, explainable artificial intelligence, Industry 5.0, machine learning, industry

Abstract

The rapid development of artificial intelligence (AI) has been accompanied by growing computational complexity and diminishing model transparency, which significantly limits AI adoption in critical domains that demand a high level of trust, interpretability, and justification of decisions. Under these conditions, the field of Explainable Artificial Intelligence (XAI) has gained particular importance, as it focuses on approaches and technologies that make the logic of AI systems understandable and their outputs interpretable. This article examines the timely topic of implementing XAI in the context of Industry 5.0. Special attention is given to practical application scenarios: the authors present concrete industrial cases from IBM, Siemens, and other companies that demonstrate how XAI enhances the reliability, safety, efficiency, and trustworthiness of AI systems. The study includes a systematic search and analysis of the literature in this domain and proposes well-grounded key criteria for comparing existing XAI approaches. The article also outlines the advantages, current limitations, and promising directions for the development of XAI, highlighting the opportunities it opens for improving effectiveness, transparency, and trust in business.

References

Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., & Müller, K.-R. (Eds.). (2019). Explainable AI: Interpreting, explaining and visualizing deep learning. Lecture Notes in Computer Science. Springer International Publishing. https://doi.org/10.1007/978-3-030-28954-6

Vilone, G., & Longo, L. (2021). Notions of explainability and evaluation approaches for explainable artificial intelligence. Information Fusion, 76, 89–106. https://doi.org/10.1016/j.inffus.2021.05.009

IEEE Std 7001-2021. (2022). IEEE Standard for Transparency of Autonomous Systems. IEEE. https://doi.org/10.1109/IEEESTD.2022.9726144

ISO/IEC TR 24028:2020. (2020). Information technology – Artificial intelligence – Overview of trustworthiness in artificial intelligence. Geneva: International Organization for Standardization. https://www.iso.org/standard/77608.html

Du, M., Liu, N., & Hu, X. (2019). Techniques for interpretable machine learning. Communications of the ACM, 63(1), 68–77. https://doi.org/10.1145/3359786

Gamoura, S. C. (2023). Explainable AI (XAI) for AI-Acceptability: The coming age of digital management 5.0. 2023 IEEE International Conference on Networking, Sensing and Control (ICNSC), 1–6. https://doi.org/10.1109/icnsc58704.2023.10319030

Khan, A., Jhanjhi, N. Z., Hamid, D. H. T. B. A. H., & Omar, H. A. H. bin H. (2024). The need for explainable AI in Industry 5.0. Advances in Explainable AI Applications for Smart Cities, 1–30. https://doi.org/10.4018/978-1-6684-6361-1.ch001

Chang, T.-S., & Bau, D.-Y. (2024). eXplainable artificial intelligence (XAI) in business management research: A success/failure system perspective. Journal of Electronic Business & Digital Economics, 4(1), 36–53. https://doi.org/10.1108/jebde-07-2024-0019

Tchuente, D., Lonlac, J., & Kamsu-Foguem, B. (2024). A methodological and theoretical framework for implementing explainable artificial intelligence (XAI) in business applications. Computers in Industry, 155, 104044. https://doi.org/10.1016/j.compind.2023.104044

Martins, T., de Almeida, A. M., Cardoso, E., & Nunes, L. (2024). Explainable Artificial Intelligence (XAI): A systematic literature review on taxonomies and applications in finance. IEEE Access, 12, 618–629. https://doi.org/10.1109/access.2023.3347028

European Commission: Directorate-General for Research and Innovation. (2021). Industry 5.0: Towards a sustainable, human-centric and resilient European industry. Publications Office of the European Union. https://data.europa.eu/doi/10.2777/308407

Ahmed, I., Jeon, G., & Piccialli, F. (2022). From artificial intelligence to explainable artificial intelligence in Industry 4.0: A survey on what, how, and where. IEEE Transactions on Industrial Informatics, 18(8), 5031–5042. https://doi.org/10.1109/tii.2022.3146552

Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/access.2018.2870052

Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, L119, 1–88. http://data.europa.eu/eli/reg/2016/679/oj

NIST AI 100-1. (2023). Artificial intelligence risk management framework (AI RMF 1.0). National Institute of Standards and Technology. https://doi.org/10.6028/nist.ai.100-1

Zavodna, L. S., Überwimmer, M., & Frankus, E. (2024). Barriers to the implementation of artificial intelligence in small and medium sized enterprises: Pilot study. Journal of Economics and Management, 46, 331–352. https://doi.org/10.22367/jem.2024.46.13

Weitz, K., Dang, C. T., & André, E. (2022). Do we need explainable AI in companies? Investigation of challenges, expectations, and chances from employees’ perspective. arXiv:2210.03527. https://doi.org/10.48550/arXiv.2210.03527

Darvish, M., Kret, K. S., & Bick, M. (2024). An explorative study on the adoption of explainable artificial intelligence (XAI) in business organizations. In R. van de Wetering et al. (Eds.), Disruptive innovation in a digitally connected healthy world. Lecture Notes in Computer Science, 14907, 29–40. Springer, Cham. https://doi.org/10.1007/978-3-031-72234-9_3

Joyce, D. W., Kormilitzin, A., Smith, K. A., & Cipriani, A. (2023). Explainable artificial intelligence for mental health through transparency and interpretability for understandability. npj Digital Medicine, 6(1). https://doi.org/10.1038/s41746-023-00751-9

Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., & Yu, B. (2019). Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, 116(44), 22071–22080. https://doi.org/10.1073/pnas.1900654116

Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv:1708.08296. https://doi.org/10.48550/arXiv.1708.08296

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv:1702.08608. https://doi.org/10.48550/arXiv.1702.08608

Yuan, H., Yang, F., Du, M., Ji, S., & Hu, X. (2021). Towards structured NLP interpretation via graph explainers. Applied AI Letters, 2(4), e58. https://doi.org/10.1002/ail2.58

Dumka, A., Chaudhari, V., Bisht, A. K., Rawat, R., & Pandey, A. (2024). Methods, techniques, and application of explainable artificial intelligence. In R. Gupta, A. Jain, J. Wang, & R. Pateriya (Eds.), Reshaping Environmental Science Through Machine Learning and IoT, 337–354. IGI Global Scientific Publishing. https://doi.org/10.4018/979-8-3693-2351-9.ch017

Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3063289

Dixit, M., Kansal, I., Khullar, V., Kumar, R., & Kumar, S. (2025). Analyzing trustworthiness and explainability in artificial intelligence: A comprehensive review. Recent Advances in Electrical & Electronic Engineering, 18(8), article e040724231621. https://doi.org/10.2174/0123520965308169240616144800

Vasanth, S., Keerthana, S., & Saravanan, G. (2024). Demystifying AI: A robust and comprehensive approach to explainable AI. 2024 International Conference on Intelligent Computing and Emerging Communication Technologies (ICEC), 1–5. https://doi.org/10.1109/icec59683.2024.10837078

Avdoshin, S. M., & Pesotskaya, E. Yu. (2022). Trusted artificial intelligence: Strengthening digital protection. Business Informatics, 16(2), 62–73. https://doi.org/10.17323/2587-814x.2022.2.62.73

Poyiadzi, R., Sokol, K., Santos-Rodriguez, R., De Bie, T., & Flach, P. (2020). FACE: Feasible and actionable counterfactual explanations. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 344–350. https://doi.org/10.1145/3375627.3375850

Gramegna, A., & Giudici, P. (2021). SHAP and LIME: An evaluation of discriminative power in credit risk. Frontiers in Artificial Intelligence, 4, 752558. https://doi.org/10.3389/frai.2021.752558

Hjelkrem, L. O., & de Lange, P. E. (2023). Explaining deep learning models for credit scoring with SHAP: A case study using open banking data. Journal of Risk and Financial Management, 16(4), 221. https://doi.org/10.3390/jrfm16040221

Li, Y., Simon, Z., & Turkington, D. (2021). Investable and interpretable machine learning for equities. Journal of Financial Data Science, 4(1), 54–74. https://doi.org/10.3905/jfds.2021.1.084

Fritz-Morgenthal, S., Hein, B., & Papenbrock, J. (2022). Financial risk management and explainable, trustworthy, responsible AI. Frontiers in Artificial Intelligence, 5, 779799. https://doi.org/10.3389/frai.2022.779799

Watanabe, A., Kuramata, M., Majima, K., Kiyohara, H., Kondo, K., & Nakata, K. (2021). Constrained Generalized Additive 2 Model with consideration of high-order interactions (CGA2M+). arXiv:2106.02836. https://doi.org/10.48550/arXiv.2106.02836

IBM. (2023). IBM Maximo Predict. IBM Documentation. https://www.ibm.com/docs/en/mhmpmh-and-p-u/cd?topic=overview-maximo-predict

Hermansa, M., Kozielski, M., Michalak, M., Szczyrba, K., Wróbel, Ł., & Sikora, M. (2021). Sensor-based predictive maintenance with reduction of false alarms – A case study in heavy industry. Sensors, 22(1), 226. https://doi.org/10.3390/s22010226

Kilari, S. D. (2025). The role of explainable AI (XAI) in improving transparency and trust in supply chain demand and price forecasting models. SSRN preprint. https://doi.org/10.2139/ssrn.5357669

Siemens. (2023). The rise of industrial explainable artificial intelligence (XAI) – Insights across the AI life cycle. White Paper. https://assets.new.siemens.com/siemens/assets/api/uuid:3b4de373-57e2-4329-b025-2825db0172aa/WhitepaperXAI.pdf

Jean-Quartier, C., Bein, K., Hejny, L., Hofer, E., Holzinger, A., & Jeanquartier, F. (2023). The cost of understanding — XAI algorithms towards sustainable ML in the view of computational cost. Computation, 11(5), 92. https://doi.org/10.3390/computation11050092

European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L257, 1–64. https://data.europa.eu/eli/reg/2024/1689/oj

Černevičienė, J., & Kabašinskas, A. (2024). Explainable artificial intelligence (XAI) in finance: a systematic literature review. Artificial Intelligence Review, 57(8), 216. https://doi.org/10.1007/s10462-024-10854-8

Brasse, J., Broder, H. R., Förster, M., Klier, M., & Sigler, I. (2023). Explainable artificial intelligence in information systems: A review of the status quo and future research directions. Electronic Markets, 33, 26. https://doi.org/10.1007/s12525-023-00644-5

Carvalho, D. V., Pereira, E. M., & Cardoso, J. S. (2019). Machine learning interpretability: A survey on methods and metrics. Electronics, 8(8), 832. https://doi.org/10.3390/electronics8080832

Molnar, C. (2025). Interpretable machine learning: A guide for making black box models explainable (3rd ed.). https://christophm.github.io/interpretable-ml-book/

Angelov, P. P., Soares, E. A., Jiang, R., Arnold, N. I., & Atkinson, P. M. (2021). Explainable artificial intelligence: An analytical review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(5), e1424. https://doi.org/10.1002/widm.1424

Rai, A. (2019). Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141. https://doi.org/10.1007/s11747-019-00710-5

Liao, Q. V., & Varshney, K. R. (2021). Human-centered explainable AI (XAI): From algorithms to user experiences. arXiv:2110.10790. https://doi.org/10.48550/arXiv.2110.10790

Chamola, V., Hassija, V., Sulthana, A. R., Ghosh, D., Dhingra, D., & Sikdar, B. (2023). A review of trustworthy and explainable artificial intelligence (XAI). IEEE Access, 11, 78994–79015. https://doi.org/10.1109/access.2023.3294569

Belle, V., & Papantonis, I. (2021). Principles and Practice of Explainable Machine Learning. Frontiers in Big Data, 4, 688969. https://doi.org/10.3389/fdata.2021.688969

d’Avila Garcez, A. S., Broda, K. B., & Gabbay, D. M. (2002). Neural-Symbolic Learning Systems. In Perspectives in Neural Computing. Springer. https://doi.org/10.1007/978-1-4471-0211-3

Besold, T. R., d’Avila Garcez, A. S., Bader, S., Bowman, H., Domingos, P., Hitzler, P., Kuehnberger, K.-U., Lamb, L. C., Miikkulainen, R., & Silver, D. L. (2017). Neural-symbolic learning and reasoning: A survey and interpretation. arXiv:1711.03902. https://doi.org/10.48550/arXiv.1711.03902

Yu, D., Yang, B., Liu, D., Wang, H., & Pan, S. (2023). A survey on neural-symbolic learning systems. Neural Networks, 166, 105–126. https://doi.org/10.1016/j.neunet.2023.06.028

Kim, J., Maathuis, H., & Sent, D. (2024). Human-centered evaluation of explainable AI applications: a systematic review. Frontiers in Artificial Intelligence, 7. https://doi.org/10.3389/frai.2024.1456486

Rong, Y., Leemann, T., Nguyen, T.-T., Fiedler, L., Qian, P., Unhelkar, V., Seidel, T., Kasneci, G., & Kasneci, E. (2024). Towards human-centered explainable AI: A survey of user studies for model explanations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(4), 2104–2122. https://doi.org/10.1109/tpami.2023.3331846

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92. https://doi.org/10.1145/3458723

Vadapalli, S. R. (2022). Monitoring the performance of machine learning models in production. International Journal of Computer Trends and Technology, 70(9), 38–42. https://doi.org/10.14445/22312803/IJCTT-V70I9P10559

Donoso-Guzmán, I., Ooge, J., Parra, D., & Verbert, K. (2023). Towards a comprehensive human-centred evaluation framework for explainable AI. In L. Longo (Ed.), Explainable artificial intelligence (xAI 2023). Communications in Computer and Information Science, 1903. Springer, Cham. https://doi.org/10.1007/978-3-031-44070-0_10

Published
2026-03-30
How to Cite
Avdoshin, S. M., & Pesotskaya, E. Y. (2026). Explainable AI for Industry 5.0: Shedding light on the black box. Business Informatics, 20(1), 7–28. https://doi.org/10.17323/2587-814X.2026.1.7.28
Section
Articles