Analyzing the decision making of machine learning models
Transcript: Analyzing the decision making of machine learning models
Analyzing the decision making of machine learning models
Kārlis Zars
Dr.sc.comp., Prof. Guntis Bārzdiņš

Research topic
• Research focuses on analyzing the decision-making processes of machine learning models. As these models become increasingly complex, understanding their decision-making logic becomes more challenging, yet critically important.
• In various industries, from healthcare to finance, ML models are being used to make high-stakes decisions. However, their 'black-box' nature often leaves decision-makers without clear insight into how these models arrive at their conclusions.

Why is it important?
• Interpreting ML models is vital for several reasons: it builds trust in AI systems, ensures compliance with regulatory requirements, and enables humans to understand and justify automated decisions.
• Without a clear understanding of these models, organizations risk making decisions based on potentially flawed or biased logic, which can have significant negative consequences.

Research Objectives
• For now, the primary objective of the research is to develop new methods, or improve existing ones, that enhance the interpretability of 'black-box' machine learning models.
• Specifically, I aim to create techniques that can effectively explain the decision-making processes of complex models, making their outputs more understandable and transparent to non-experts.
• Additionally, my research seeks to promote the use of these interpretability methods in real-world applications, thereby improving the trust and reliability of AI systems across various industries.

Overview of Machine Learning Models
• Machine learning models, from linear regressions to deep neural networks, are tools that learn from data to make predictions or decisions.
• While they vary in complexity, advanced models often operate as 'black boxes', making their internal decision-making processes opaque.
• 'Black-box' nature: complexity leads to non-transparent decision-making.

Challenges in Interpreting and Explaining ML Models
• Complexity and non-transparency: the complexity of advanced ML models, especially deep neural networks, makes them difficult to interpret. Non-transparent decision-making processes hinder the ability to understand and trust model outputs.
• Lack of interpretability tools: the tools that exist for model interpretation often fall short in explaining highly complex models. Existing methods like feature importance scores, partial dependence plots, and surrogate models provide limited insights.

Explainable AI
• What is XAI? Definition and importance; benefits of XAI.
• Main methods of XAI: Feature Importance, Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Partial Dependence Plots (PDP), and counterfactual explanations (minimal code sketches of each method, and of a surrogate model, follow the transcript).
• Applications of XAI: healthcare, finance, autonomous vehicles, and regulatory compliance.

Feature Recognition
• Feature recognition involves identifying which features (inputs) contribute most to a model's predictions.
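The methods listed on the Explainable AI slide can be made concrete with short Python sketches. These are minimal illustrations under stated assumptions, not the techniques this research proposes: the breast-cancer dataset and random-forest classifier below are stand-ins for any tabular 'black-box' model. First, feature importance via scikit-learn's permutation_importance:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative stand-ins for any black-box model and tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much test accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")
```

Permutation importance needs only predictions, so the same call works unchanged for any fitted estimator.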
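The surrogate models mentioned on the challenges slide can be sketched just as briefly: fit a small, interpretable model to imitate the black box's outputs, then check how faithfully it does so. This continues from the sketch above; the shallow decision tree is an assumed, illustrative choice of surrogate.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Train the surrogate on the black box's predictions, not the true labels:
# the goal is to imitate the model, then read the imitation.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, model.predict(X_train))

# Fidelity = how often the surrogate agrees with the black box on held-out data.
fidelity = (surrogate.predict(X_test) == model.predict(X_test)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

The printed rules are only trustworthy to the extent that fidelity is high, which is exactly the limitation the slide notes.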
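LIME explains a single prediction by fitting a simple, sparse model in the neighbourhood of one instance. A minimal sketch with the lime package (assuming it is installed), reusing the model and data from the first sketch:

```python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=X_train.to_numpy(),
    feature_names=list(X.columns),
    class_names=["malignant", "benign"],
    mode="classification",
)

# LIME perturbs the instance, queries the black box on the perturbed
# samples, and fits a sparse linear model locally; its weights explain
# this one prediction.
exp = explainer.explain_instance(X_test.to_numpy()[0], model.predict_proba,
                                 num_features=5)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```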
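SHAP assigns each feature an additive contribution to one prediction, grounded in Shapley values. A minimal sketch with the shap package, again reusing the fitted forest; note that the shape of the returned attributions varies across shap versions, so the indexing below is an assumption about recent releases:

```python
import numpy as np
import shap

# TreeExplainer computes exact Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
explanation = explainer(X_test)

# In recent shap versions, classifier attributions have shape
# (n_samples, n_features, n_classes); rank features for the positive
# class by mean absolute attribution.
mean_abs = np.abs(explanation.values[:, :, 1]).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs), key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.4f}")
```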
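Partial dependence plots show how the model's average prediction changes as one feature is varied. A minimal sketch with scikit-learn, continuing from the first sketch; the two feature names are from the illustrative breast-cancer dataset:

```python
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# For each grid value of a feature, the model is evaluated with that value
# substituted into every row of X_test; the averaged prediction is plotted.
PartialDependenceDisplay.from_estimator(
    model, X_test, features=["mean radius", "mean texture"]
)
plt.show()
```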
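Counterfactual explanations answer "what is the smallest change to the input that would flip the decision?". Real work would use a dedicated library such as DiCE or Alibi; the following is a deliberately naive, hand-rolled sketch that searches a single, arbitrarily chosen feature, continuing from the first sketch:

```python
import numpy as np

def one_feature_counterfactual(model, x, feature_idx, grid):
    """Smallest change to one feature that flips the model's prediction."""
    original = model.predict(x.reshape(1, -1))[0]
    # Try candidate values closest to the current value first.
    for value in sorted(grid, key=lambda v: abs(v - x[feature_idx])):
        candidate = x.copy()
        candidate[feature_idx] = value
        if model.predict(candidate.reshape(1, -1))[0] != original:
            return original, value
    return original, None  # no flip found along this feature

x = X_test.to_numpy()[0]
idx = list(X.columns).index("worst radius")  # illustrative feature choice
grid = np.linspace(X_train.iloc[:, idx].min(), X_train.iloc[:, idx].max(), 200)
label, flip_value = one_feature_counterfactual(model, x, idx, grid)
print(f"predicted class {label}; flips if 'worst radius' is set to {flip_value}")
```

Proper counterfactual methods search over many features at once while penalising distance from the original instance and implausible feature combinations; this toy version only conveys the idea.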