1. What is AI?
Artificial intelligence (AI) means building machines or algorithms capable of performing tasks that would normally require human intelligence. These systems are trained on data and use complex ML algorithms to make decisions on their own. AI and ML are now used in almost every domain: data is used everywhere to solve problems and to uncover the patterns underlying it. As data grows, models become more complex, and understanding the logic behind their decision-making becomes harder. That is why such models are called black boxes.
Machine learning development services are used in healthcare and other safety-critical environments, so understanding the reasoning behind an ML model's decisions becomes important. When users understand the logic a model applied and how it reached its conclusion, they are more likely to trust it.
2. What is XAI?
Explainable AI (XAI) is also known as interpretable machine learning. It provides techniques for understanding how a machine learning model works. The main goal of XAI is to make systems more transparent, allowing users to understand and trust how an AI system reaches its conclusions and produces its output. Understanding models is useful not only for data scientists and data engineers but also for end users: it offers transparency into not only what an algorithm does but also how it does it.
Fig. 1: XAI
3. Accuracy vs. Interpretability: Trade-offs in ML Models
Many machine learning models, such as linear regression, are easy to understand, but their accuracy may be too low to give reliable results. Other models, such as neural networks, achieve high accuracy but are complex and difficult to understand.
To understand and explain ML models, two broad approaches are used to make AI systems more explainable:
3.1 Interpretable Models
Interpretable models are intrinsically interpretable: they are transparent and easy to inspect. They rely on simple mathematical equations that let users see how the input features contribute to the output, and given the same input they consistently produce the same output. For example, a decision tree has a tree-like structure consisting of a root, internal child nodes, and leaf nodes. It splits the data according to specific criteria and keeps splitting until it reaches a terminal leaf node, which holds the outcome. Each data point follows a particular path from the root to a leaf, and this path shows which features contributed most to the prediction.
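As a concrete illustration, here is a minimal sketch that trains a small decision tree and prints its rules, so every root-to-leaf path is visible. It assumes scikit-learn and its bundled Iris dataset, which are illustrative choices rather than anything prescribed by this article.

```python
# Minimal sketch: inspect an intrinsically interpretable model.
# Assumes scikit-learn; the Iris dataset and depth limit are illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text prints the splits as nested rules, exposing the exact path
# every data point follows from the root to a terminal leaf.
print(export_text(tree, feature_names=data.feature_names))
```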
3.2 Post Hoc Explanation
Post hoc explanations are also known as black-box model explanations. They are applied after a model has made its prediction in order to interpret and explain the output of complex models. These methods are used for powerful models that achieve high accuracy but are less transparent and harder to understand.
3.2.1 Understanding the Two Types of Explanations
- Global explanation: It explains the model as a whole, describing how it behaves overall across the entire dataset. It provides insight into which features contribute most to the output and identifies trends and patterns across all instances in the dataset.
- Local explanation: It explains the model's decision for a particular data point, providing insight into how the model arrived at that specific prediction. It is useful for questions such as why the model gave this output for a particular input, and what would happen if certain features had different values.
3.2.2 Exploring the Two Types of Explanation Techniques
- Model agnostic: These techniques are not bound to any particular ML model and can be applied to any model regardless of its underlying algorithm. They treat the model as a black box and provide explanations based on input-output relationships, without inspecting the model's internal details.
- Model specific: These techniques are bound to a specific type of ML model and do not generalize. They rely on the model's internal structure and parameters, such as the support vectors in an SVM or the weights in a neural network. They can provide precise explanations because they are based on the model-specific architecture.
4. Model Agnostic XAI methods
4.1 Permutation Feature Importance
Permutation feature importance is a model-agnostic technique with global scope, meaning it explains the behavior of the model as a whole. It measures the importance of a feature by measuring how the model's performance changes when the values of that feature are shuffled. Here is how it works (a code sketch follows the list below):
- First, train the machine learning model and predict the output.
- Calculate baseline performance measures such as accuracy or RMSE.
- Permute (shuffle) the values of a single feature, breaking the connection between that feature and the target.
- Re-evaluate the model and calculate the performance measures again.
- Compare these measures with the baseline and calculate the difference.
- If the difference is negligible, the feature is not very important. If there is a large drop in performance, the feature is important.
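A minimal sketch of this procedure using scikit-learn's permutation_importance helper; the random forest and the bundled breast-cancer dataset are illustrative assumptions, not part of the original description:

```python
# Minimal sketch of permutation feature importance.
# Assumes scikit-learn; the model and dataset are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and record how much the accuracy drops
# compared with the unshuffled baseline; a large drop marks an important feature.
result = permutation_importance(model, X_test, y_test,
                                scoring="accuracy", n_repeats=10, random_state=0)

top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: pair[1], reverse=True)[:5]
for name, importance in top:
    print(f"{name}: {importance:.4f}")
```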
4.2 Local Interpretable Model-Agnostic Explanations (LIME)
LIME is a model-agnostic technique with local scope, meaning it explains a single prediction of any machine learning model. It approximates the model locally with an interpretable surrogate, such as a linear regression. Here is how it works (a code sketch follows the list below):
- For one specific prediction, LIME generates a set of new synthetic data points by making small changes to the instance’s features.
- The black box model is used to predict the outcome for new data points.
- LIME then fits an interpretable model (e.g., linear regression) to this locally perturbed dataset, weighting the synthetic points by their proximity to the original instance, so the surrogate captures the model's behavior in that neighborhood.
- The interpretable model coefficients indicate the contribution of each feature to the prediction for that specific instance.
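A minimal sketch using the lime package (pip install lime); the random forest and the Iris dataset here are illustrative assumptions:

```python
# Minimal sketch of a LIME explanation for a single prediction.
# Assumes the lime package and scikit-learn; model and data are illustrative.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(data.data,
                                 feature_names=data.feature_names,
                                 class_names=list(data.target_names),
                                 mode="classification")

# Explain one instance: LIME perturbs it, queries the black-box model on the
# perturbed points, and fits a local linear surrogate to those predictions.
explanation = explainer.explain_instance(data.data[0], model.predict_proba,
                                         num_features=4)
print(explanation.as_list())
```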
4.3 Shapley Additive Explanations (SHAP)
SHAP is a model-agnostic method with local scope, meaning it explains individual predictions; however, the individual explanations can be aggregated to obtain global interpretations. It is based on Shapley values, which assign each feature an importance score: the prediction for an instance x is explained by computing each feature's contribution (its Shapley value) to that prediction. Here is how it works (a code sketch follows the list below):
- Compute the Shapley value of every feature to measure its contribution to a single prediction, by evaluating all possible combinations (subsets) of features.
- To compute the Shapley value of a feature, say f1: form the possible subsets of the other features, calculate the prediction for each subset both with f1 included and without it, and take the difference. This difference is known as the marginal contribution of f1 for that subset.
- The Shapley value of f1 is then the average of its marginal contributions over all such subsets.
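In practice, libraries automate this computation. Here is a minimal sketch using the shap package; the gradient-boosting model and the breast-cancer dataset are illustrative assumptions:

```python
# Minimal sketch of SHAP explanations, local and aggregated to global.
# Assumes the shap package and scikit-learn; model and data are illustrative.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# shap.Explainer selects a suitable algorithm for the model and computes one
# Shapley value per feature for every prediction it is asked to explain.
explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:100])

# Local view: feature contributions for a single instance.
print(dict(zip(X.columns, shap_values[0].values.round(3))))

# Global view: mean absolute Shapley values across the explained instances.
shap.plots.bar(shap_values)
```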
5. Model Specific XAI Methods
5.1 Forest-Guided Clustering
Forest-guided clustering is a model-specific method with global scope, meaning it explains the model's behavior across the whole dataset. It is used to explain the decision-making process of decision tree ensembles, such as a random forest, by clustering data points that follow similar decision paths through the trees. Here is how it works (a code sketch follows the list below):
- Each data point follows a path through each decision tree, determined by the sequence of decisions the model makes.
- We then compare the paths of the data points and calculate their similarity using the Hamming distance.
- We group similar data points into clusters using either k-means or hierarchical clustering.
- These clusters help us understand which features contribute most to the output.
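A minimal, hand-rolled sketch of this idea (not any specific library's API), assuming scikit-learn and SciPy; the dataset, forest size, and cluster count are illustrative assumptions:

```python
# Minimal sketch: cluster samples by the similarity of their random forest paths.
# Assumes scikit-learn and SciPy; the dataset and parameters are illustrative.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# apply() returns, for every sample, the index of the leaf it reaches in each
# tree -- a compact encoding of its decision path through the forest.
leaves = forest.apply(X)

# Hamming distance: the fraction of trees in which two samples end in different leaves.
distances = pdist(leaves, metric="hamming")

# Hierarchical clustering on the path-similarity structure, cut into 3 clusters.
clusters = fcluster(linkage(distances, method="average"), t=3, criterion="maxclust")
print(np.bincount(clusters))  # cluster sizes (index 0 is unused; labels start at 1)
```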
5.2 Gradient-weighted Class Activation Mapping (Grad-CAM)
Grad-CAM is a model-specific method with local scope that explains convolutional neural networks (CNNs). It is used in image classification to highlight the regions of an image that contribute most to the prediction. Here is how it works (a code sketch follows the list below):
- First, we perform a forward pass: we feed the input image through the CNN to obtain the feature maps and the predicted class score.
- Then we calculate the gradient of the predicted class score with respect to those feature maps.
- Then we average the gradients over each feature map to obtain an importance weight per channel.
- Then we take the weighted sum of the feature maps and pass it through a ReLU to produce a heatmap of important regions.
- Finally, we upscale the heatmap to the size of the original image and visualize which areas of the image contribute most to the prediction.
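A minimal PyTorch sketch of these steps; torchvision's pretrained ResNet-18 and the random tensor standing in for a preprocessed image are illustrative assumptions:

```python
# Minimal Grad-CAM sketch. Assumes PyTorch and torchvision; the pretrained
# ResNet-18 and the random input tensor are illustrative stand-ins.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()  # downloads pretrained weights

feature_maps, gradients = {}, {}

def forward_hook(module, inputs, output):
    feature_maps["value"] = output            # activations of the last conv block

def backward_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0]       # gradient of the class score w.r.t. them

model.layer4.register_forward_hook(forward_hook)
model.layer4.register_full_backward_hook(backward_hook)

image = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed image
scores = model(image)                         # forward pass: feature maps + class scores
scores[0, scores.argmax()].backward()         # backward pass for the predicted class

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # average gradients per channel
cam = F.relu((weights * feature_maps["value"]).sum(dim=1))    # weighted sum + ReLU
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224),        # upscale to the image size
                    mode="bilinear", align_corners=False).detach()
print(cam.shape)  # (1, 1, 224, 224) heatmap over the input image
```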
6. Conclusion
XAI is an important aspect of machine learning that gives humans the ability to understand what a machine learning model is doing and how it is doing it. With the help of XAI, a user who is not familiar with complex machine learning models can still understand which features are most responsible for the output or why an ML model made a certain decision. It makes systems transparent and interpretable, so they can be trusted by users. With this increased trust and transparency, Generative AI Development Services can be used in environments where trust and transparency matter most, such as healthcare, finance, and autonomous driving.