MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset

The recent release of large-scale healthcare datasets has greatly propelled research on data-driven deep learning models for healthcare applications. However, due to the black-box nature of such deep models, concerns about interpretability, fairness, and bias in healthcare scenarios where human lives are at stake call for a careful and thorough examination of both datasets and models. In this work, we focus on MIMIC-IV (Medical Information Mart for Intensive Care, version IV), the largest publicly available healthcare dataset, and conduct comprehensive analyses of dataset representation bias as well as interpretability and prediction fairness of deep learning models for in-hospital mortality prediction. In terms of interpretability, we observe that (1) the best performing interpretability method successfully identifies critical features for mortality prediction on various prediction models; (2) demographic features are important for prediction. In terms of fairness, we observe that (1) there exists disparate treatment in prescribing mechanical ventilation among patient groups across ethnicity, gender and age; (2) all of the studied mortality predictors are generally fair, while the IMV-LSTM (Interpretable Multi-Variable Long Short-Term Memory) model provides the most accurate and unbiased predictions across all protected groups. We further draw concrete connections between interpretability methods and fairness metrics by showing how feature importance from interpretability methods can be beneficial in quantifying potential disparities in mortality predictors.


Introduction
With the release of large-scale healthcare datasets, research on data-driven deep learning methods for healthcare applications has demonstrated superior performance over traditional methods on various tasks, including mortality prediction, length-of-stay prediction, phenotyping classification and intervention prediction [1,2,3]. However, deep learning models have been treated as black-box universal function approximators, where prediction explanations are no longer available, unlike their traditional counterparts, e.g., Logistic Regression and Random Forests. This lack of interpretability hinders the wide application of deep learning models in critical domains like healthcare. In addition, due to bias in datasets or models, decisions made by machine learning algorithms are prone to be unfair, where an individual or a group is favored over others owing to their inherent traits. As a result, more and more concerns about interpretability, fairness and bias have been raised recently in the healthcare domain, where human lives are at stake [4]. These concerns call for careful and thorough analyses of both datasets and algorithms.
In this work, we focus on the latest version (version IV [5]) of a widely used large scale healthcare dataset MIMIC [6], and conduct comprehensive analyses of model interpretability, dataset bias, algorithmic fairness, and the interaction between interpretability and fairness.
Interpretability evaluation. First, we evaluate the performance of common interpretability methods for feature importance estimation on multiple deep learning models trained for the mortality prediction task. Due to the complexity of dynamics in electronic health record data, ground-truth feature importance is not available. Therefore, we utilize ROAR (remove and retrain) [7] to quantitatively evaluate different feature importance estimations. On all models considered, ArchDetect [8] outperforms other interpretation methods in feature importance estimation. We then qualitatively analyze the feature importance estimation results given by ArchDetect, and verify its effectiveness based on the observation that it successfully identifies critical features for mortality prediction. We also find that demographic features are important for prediction, which leads to our following analyses of dataset bias and algorithmic fairness.
Dataset bias and algorithmic fairness. We adopt the following commonly used demographic features as protected attributes: 1) ethnicity, 2) gender, 3) marital status, 4) age, and 5) insurance type. For dataset bias, we analyze the average adoption and duration of five types of ventilation treatment on patients from different groups. There exists treatment disparity among patient groups split by different protected attributes, which is most evident across ethnic groups: Black and Hispanic cohorts are less likely to receive ventilation treatments and receive shorter treatment durations on average. However, there are multiple confounders that may lead to the observed disparity in treatment, which adds to the difficulty of identifying intentional discrimination. Hence we call for a close look at causal analysis for a better understanding. For algorithmic fairness, we evaluate the performance of state-of-the-art machine learning approaches for mortality prediction in terms of AUC-based fairness metrics. Experiment results indicate a strong correlation between mortality rates and fairness: machine learning approaches tend to obtain lower AUC scores on groups with higher mortality rates. Meanwhile, all of the studied mortality predictors are fair in general, while IMV-LSTM [9] performs the best overall across protected groups.
Interactions between interpretability and fairness. We examine the interaction of interpretability and fairness by drawing connections between feature importance and fairness metrics. Furthermore, we observe substantial disparities in the importance of each demographic feature used for in-hospital mortality prediction across the protected subgroups, which raises the concern of whether these demographic features should be used in mortality prediction.
In summary, our main contributions are:
• We quantitatively evaluate various interpretability methods for feature importance estimation on deep learning models in the context of mortality prediction. We observe that the best performing interpretability method successfully identifies critical features for mortality prediction on various prediction models, and that demographic features are important for prediction.
• For dataset bias, we observe treatment disparity among patient groups split by different protected attributes.
• For algorithmic fairness, we find that all of the studied mortality predictors are fair in general while the IMV-LSTM model performs the best overall across different protected groups.
• We also examine the interaction between interpretability and fairness, and observe disparities of feature importance among demographic subgroups.
Related Work

Interpretability Evaluation

Aspects of Interpretability of Deep Learning Models
Due to the complexity of deep learning models, interpretability research has developed in diverse directions, and many methods have been proposed to interpret how a deep learning model works from various aspects, including: (1) Feature importance estimation [10,11,12,13,14,15,16,17,18,19,20]. For a given data sample, these methods estimate the importance of each individual input feature with respect to a specified output. (2) Feature interaction attribution [8,21,22,23,24,25]. In addition to estimating the importance of individual features, these methods analyze how interactions of feature pairs/groups contribute to predictions. (3) Neuron/layer attribution [26,27,28,29,20]. These methods estimate the contribution of specified layers/neurons in the model. (4) Explanation with high-level concepts [30,31,32]. These methods interpret deep learning models with human-friendly concepts instead of the importance of low-level input features.
In this paper, we focus on feature importance estimation due to its importance and the completeness of its evaluation methods.

Evaluation of Feature Importance Interpretation
Since feature importance estimation assigns an importance score to each input feature, evaluating the results is equivalent to evaluating binary classification results when ground-truth feature importance is available, where the label indicates whether the feature is important for the problem. [33] constructs synthetic datasets with feature importance labels for evaluation. [34] obtains feature importance labels from both manually constructed tasks and domain experts. [35] derives importance labels from tasks on graph-valued data with computable ground truths. However, these evaluation methods require access to ground-truth labels, which is hard to fulfill in domains such as healthcare, where obtaining such labels is often the very problem we need to solve.
For evaluation without ground truth, a common strategy is to measure the degradation of model performance as features estimated to be important are gradually removed. [36] perturbs features ranked by importance in test samples and calculates the area over the MoRF curve (AOPC): a higher AOPC means the information disappears faster with feature removal and thus indicates a better importance estimation. [7] removes features from the entire dataset and retrains the model when computing AOPC, which excludes the interference of data distribution shift. [33] replaces features according to known feature distributions for evaluation on synthetic tasks to ensure consistency of the data distribution. In this paper, we utilize the evaluation protocol of [7].

Bias and Fairness in Machine Learning
With open access to large-scale datasets and the development of machine learning algorithms, more and more real-world decisions are made by machine learning algorithms with or without human intervention, e.g., job advertisement promotion [37], facial recognition [38], treatment recommendation [39], etc. Due to bias in datasets or models, decisions made by machine learning algorithms are prone to be unfair, where an individual or a group is favored over others owing to their inherent traits. One well-known example is the software COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which was found to be biased against African-Americans, assigning them higher risk scores of recommitting another crime than Caucasians with the same profile [40].
Based on the general assumption that the algorithm itself is not coded to be biased, decision unfairness can be attributed to biases in the data, which are likely to be picked up and amplified by the trained algorithm [41]. Three major sources of data bias are [41]: 1) Biased labels: the ground-truth labels that the machine learning algorithms predict are themselves biased; 2) Imbalanced representation: some protected groups are underrepresented with fewer observations in the dataset compared with other groups; 3) Data quality disparity: data from protected groups might be less complete or accurate due to how it was collected and processed. Traits such as gender, age, ethnicity, and marital status are the most widely considered protected or sensitive attributes in the literature [42]. Fairness has been defined in various ways depending on context and application; two definitions are the most widely leveraged for bias detection and correction: Equal Opportunity, where predictions are required to have equal true positive rates across two demographics, and Equalized Odds, where an additional constraint requires the predictor to also have equal false positive rates [43]. To derive fair decisions with machine learning algorithms, three categories of bias mitigation approaches have been proposed [44,42]: 1) Pre-processing: the original dataset is transformed so that the underlying discrimination towards some groups is removed [45]; 2) In-processing: the learning algorithm is modified, either by adding a penalization term to the objective function [46] or by imposing a fairness-relevant constraint [47]; 3) Post-processing: the outputs of predictors are recomputed to improve fairness [48].
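The two fairness definitions above can be computed directly from group-wise true/false positive rates. A minimal NumPy sketch (function names and the binary two-group encoding are illustrative, not from the paper):

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """True and false positive rates for the samples selected by a boolean mask."""
    y_true, y_pred = y_true[group], y_pred[group]
    tpr = np.mean(y_pred[y_true == 1]) if (y_true == 1).any() else np.nan
    fpr = np.mean(y_pred[y_true == 0]) if (y_true == 0).any() else np.nan
    return tpr, fpr

def fairness_gaps(y_true, y_pred, sensitive):
    """Equal-opportunity gap (|TPR difference|) and equalized-odds gap
    (max of |TPR difference| and |FPR difference|) between two groups."""
    tpr_a, fpr_a = group_rates(y_true, y_pred, sensitive == 0)
    tpr_b, fpr_b = group_rates(y_true, y_pred, sensitive == 1)
    eo_gap = abs(tpr_a - tpr_b)
    eodds_gap = max(eo_gap, abs(fpr_a - fpr_b))
    return eo_gap, eodds_gap
```

A perfectly fair predictor under Equal Opportunity has a zero TPR gap; Equalized Odds additionally requires the FPR gap to vanish.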
When making medical decisions based on text data like clinical notes, word embeddings used as machine learning inputs have been demonstrated to propagate unwanted relationships with regard to different genders, language speakers, ethnicities, and insurance groups [53,50]. With respect to gender and insurance type, differences in accuracy, and therefore machine bias, have been observed for mortality prediction [51]. To mitigate biases and improve prediction fairness, Chen et al. argued that collecting data with adequate sample sizes and predictive variables is an effective approach to reduce discrimination without sacrificing accuracy [4]. Martinez et al. proposed an in-processing approach that characterizes the fairness problem as a multi-objective optimization task, where the risk for each protected group is a separate objective [49]. After trained machine learning models make predictions, equalized odds post-processing [53] and updating predictions according to a weighted sum of utility and fairness [52] have been introduced as effective post-processing approaches.
To continue the dataset bias and algorithmic fairness study on MIMIC-IV, we follow previous fairness studies and adopt the following commonly used demographic features as protected attributes: 1) ethnicity, 2) gender, 3) marital status, 4) age, and 5) insurance type. For dataset bias, we analyze the average adoption and duration of five types of ventilation treatment on patients from different groups. For algorithmic fairness, we evaluate the performance of state-of-the-art machine learning approaches for mortality prediction in terms of accuracy and fairness.

Interactions between Interpretability and Fairness
Besides accuracy, interpretability and fairness are two important aspects that businesses and researchers should take into consideration when designing, deploying, and maintaining machine learning models [54]. It is also well acknowledged that enhancing model interpretability is an important step towards developing fairer ML systems [55], since interpretations can help detect and mitigate bias during data collection or labeling [56,57,58]. Given evaluation metrics from the two concepts, performance of different predictive models has been compared in the literature to further investigate their interactions [59,60,61,62,63]. When the model's complexity is determined by the number of features and simpler models are more interpretable, curves showing how model fairness is affected by model complexity have been studied alongside its influence on accuracy [59,60]. When feature importance is leveraged to interpret model predictions, failures of fairness can be identified by detecting whether a feature has a larger effect than it should have [61,62]. For instance, Adebayo et al. showed that gender is of low importance among all studied demographic features in a bank's credit limit model, which indicates that the bank's algorithm is not overly dependent on gender in making credit limit determinations [61]. Recently, connections between interpretability and fairness were quantitatively studied by comparing fairness measures and feature importance measures: there is a direct relation between the SHAP value difference and equality of opportunity after removing bias with reweighing techniques and measuring feature importance with SHAP on the Adult, German, Default and COMPAS datasets [63]. Given mortality predictions made by state-of-the-art models on MIMIC-IV, we study the connections between feature importance induced by different interpretation approaches and fairness measures in this paper.

MIMIC-IV Dataset
In this section, we describe the following preprocessing steps of the MIMIC-IV dataset: cohort selection, feature selection, and data cleaning. We also report the distributions of demographic, admission and comorbidity variables of the preprocessed dataset.

Dataset Description
MIMIC-IV [5,6] is a publicly available database of patients admitted to the Beth Israel Deaconess Medical Center (BIDMC) in Boston, MA, USA. It contains de-identified data of 383,220 patients admitted to an intensive care unit (ICU) or the emergency department (ED) between 2008 and 2019. As of the completion of our experiments, the latest version of MIMIC-IV was v0.4, which only provides public access to the electronic health record data of 50,048 patients admitted to the ICU, sourced from the MetaVision clinical information system at the BIDMC. Therefore, we design the following data preprocessing procedures for the ICU part of MIMIC-IV.

Cohort Selection
Following the common practice in [1,3], we select ICU stays satisfying the following criteria as the cohort: (1) the patient is at least 15 years old at the time of ICU admission; (2) the ICU stay is the first known ICU stay of the patient; (3) the total duration of ICU stay is between 12 hours and 10 days. After the cohort selection, we collect 45,768 ICU stays as the cohort. According to the cohort selection criterion (2), each ICU stay corresponds to one unique patient and one unique hospital admission.
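The three selection criteria above can be expressed as a short pandas filter. This is a minimal sketch: the column names (`subject_id`, `intime`, `age_at_admission`, `los`) are illustrative stand-ins, since in MIMIC-IV age must be derived from anchor fields rather than read directly.

```python
import pandas as pd

def select_cohort(icustays: pd.DataFrame) -> pd.DataFrame:
    """Apply the three cohort criteria to a table of ICU stays (hypothetical schema)."""
    df = icustays.copy()
    # (1) at least 15 years old at ICU admission
    df = df[df["age_at_admission"] >= 15]
    # (2) first known ICU stay per patient
    df = df.sort_values("intime").drop_duplicates("subject_id", keep="first")
    # (3) total ICU stay between 12 hours and 10 days (los given in days)
    df = df[(df["los"] >= 0.5) & (df["los"] <= 10)]
    return df
```

Because criterion (2) keeps only one stay per patient, each remaining row corresponds to a unique patient and admission.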

Data Cleaning
We follow the same data cleaning procedure as in [1] to handle: (1) Inconsistent units. We convert features recorded in multiple units to their major unit. (2) Multiple recordings at the same time. We use the average value for numerical features and the first appearing value for categorical features. (3) Feature values given as a range. We use the median of the range as the feature value.
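Cleaning rules (1) and (3) can be illustrated with a small helper. The unit map and field names here are hypothetical examples, not the actual conversion table used in [1]:

```python
# Hypothetical conversion factors to each feature's major unit.
UNIT_FACTORS = {("weight", "lbs"): 0.4536}  # lbs -> kg

def clean_value(row):
    """Convert a raw record to a float in the feature's major unit;
    a range like '60-80' is replaced by its median."""
    factor = UNIT_FACTORS.get((row["feature"], row["unit"]), 1.0)
    value = row["value"]
    if isinstance(value, str) and "-" in value:  # value recorded as a range
        lo, hi = (float(v) for v in value.split("-"))
        value = (lo + hi) / 2
    return float(value) * factor
```

Rule (2), averaging simultaneous numeric recordings, would then be a simple groupby over (stay, feature, timestamp).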

Feature Selection
We select 164 features from the following groups: • Electronic healthcare records (EHR). We modify the feature list used in [1] and extract 122 features after removing features that are no longer available in MIMIC-IV.
• Demographic features. We extract 5 features from patients' demographic information.
• Admission features. We extract 4 features from admission records.
• Comorbidity features. We extract binary flags of 33 types of comorbidity using patients' ICD codes as comorbidity features.
We provide a detailed list of all selected features in Table A1 in Appendix.

Data Filtering, Truncation, Aggregation and Imputation
Data Filtering After specifying the list of features, we further filter ICU stays from the cohort and only keep those that have records of the selected EHR features for at least 24 hours and at most 10 days, starting from the first record within 6 hours prior to ICU admission time. This leaves 43,005 ICU stays after filtering.
Other works such as [3] extract data from the first 30 hours and drop the last 6 hours to avoid leakage of positive mortality labels into features measured within 6 hours prior to deathtime. We find that most (96.02%) of the patients with positive in-hospital mortality labels have measurements for over 30 hours prior to their deathtime, so we omit this processing step.
Truncation For each ICU stay, we only keep the data of the first 24 hours, starting from the first record within 6 hours prior to its ICU admission time.
Aggregation For each ICU stay, we aggregate its records hourly by taking the average of multiple records within the same hourly time window.
Imputation We perform forward and backward imputation to fill missing values. For cases where certain features of some patients are completely missing, we fill in the mean values of the corresponding features in the training set. After all preprocessing steps, we obtain features of shape (N, T, F), where N = 43,005 is the number of ICU stays (data samples), T = 24 is the number of time steps with a 1-hour step size, and F = 164 is the total number of features. We also process the data into the tabular form (N, F) by replacing sequential EHR features with their summaries over time steps, including minimum, maximum, and mean values (for the urinary_output_sum feature we additionally include the sum), where F = 409.
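The hourly aggregation and imputation steps above can be sketched in pandas for a single ICU stay. Column names are illustrative, and the sketch assumes records are already aligned to integer hours since the stay's reference time:

```python
import pandas as pd

def aggregate_hourly(events: pd.DataFrame, n_hours: int = 24) -> pd.DataFrame:
    """Average numeric records within each hourly bin of the first n_hours;
    hours with no records become NaN rows."""
    events = events[(events["hour"] >= 0) & (events["hour"] < n_hours)]
    hourly = events.groupby("hour").mean(numeric_only=True)
    return hourly.reindex(range(n_hours))

def impute(hourly: pd.DataFrame, train_means: pd.Series) -> pd.DataFrame:
    """Forward- then backward-fill over time; fall back to training-set
    feature means for features that are entirely missing."""
    return hourly.ffill().bfill().fillna(train_means)
```

Stacking the resulting (T, F) frames over all stays yields the (N, T, F) tensor described above.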

Dataset Summary
We show the distribution of demographic features, admission features and comorbidity features grouped by patients' in-hospital mortality status in Table A2 in Appendix. We also demonstrate differences between the preprocessed MIMIC-IV data in this work and the preprocessed MIMIC-III data from [1] in Table 1.

Interpretability Evaluation
In this section, we evaluate the performance of various feature importance interpretability methods on multiple models for the in-hospital mortality prediction task. We describe the task, models, interpretability methods, and the evaluation method in detail and report the evaluation results.

Task Description
Mortality is a primary outcome of interest for hospital admissions, and mortality prediction is widely considered in other benchmark works [1,2,64,3]. We use the in-hospital mortality prediction task to train different models and to evaluate the performance of various interpretability methods.
We formulate the in-hospital mortality prediction task as a binary classification task. Given the observed sequence of features X ∈ R^(T×F) of one patient (or its summary x ∈ R^F, depending on the model), the model gives the probability that the patient dies during his/her hospital admission after being admitted to the ICU. In MIMIC-IV, a patient has in-hospital mortality if and only if his/her deathtime exists in the mimic_core.admissions table. We randomly divide the data into 60% for training, 20% for validation and 20% for test.
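The 60/20/20 random split can be produced with a seeded permutation of sample indices; a minimal sketch:

```python
import numpy as np

def split_indices(n: int, seed: int = 0):
    """Random 60/20/20 train/validation/test split of n sample indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

Fixing the seed keeps the split identical across models so that all predictors are trained and evaluated on the same partitions.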

Models
We consider the following models: (1) AutoInt [65]. A model that learns feature interactions automatically via self-attentive neural networks.
(2) LSTM [66]. Long short-term memory recurrent neural network, which is a common baseline for sequence learning tasks.
(3) TCN [67]. Temporal convolutional networks, which outperform canonical recurrent networks across various tasks and datasets. (4) Transformer [68]. A network architecture based solely on attention mechanisms. Here we only adopt its encoder part for the classification task. (5) IMVLSTM [9]. An interpretable model that jointly learns network parameters, variable and temporal importance, and gives inherent feature importance interpretation. We use sequence data as input for (2)- (5), and the summary of sequence data as input for (1) since AutoInt only processes tabular data in its original implementation.
We use the area under the precision-recall curve (AUPRC) and the area under the receiver operating characteristic curve (AUROC) as metrics for binary classification. The performance of all models considered in this work is shown in Table 2.
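AUROC, one of the two metrics above, can be computed without any plotting via its rank-sum (Mann-Whitney U) formulation. A minimal sketch that ignores tied scores (libraries such as scikit-learn handle ties and also provide AUPRC via average precision):

```python
import numpy as np

def auroc(y_true, y_score):
    """AUROC as the normalized rank sum of positive samples (no tie handling)."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)  # 1-based ranks by score
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

AUPRC is preferred alongside AUROC here because in-hospital mortality labels are heavily imbalanced.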

Interpretability Methods
Interpretation of deep learning models is still a rapidly developing area and covers various aspects. In this work, we focus on the interpretation of feature importance, which estimates the importance of individual features for a given model on a specific task. Estimating feature importance helps improve the model, build trust in predictions, and isolate undesirable behavior [7]. In addition, recent works [36,7,33] have developed methods for evaluating feature importance estimation without access to ground-truth feature importance, which fits scenarios in the healthcare domain well: ground-truth feature importance for healthcare applications is either itself the problem we need to solve or requires extracting a huge amount of domain knowledge. Therefore, we choose the interpretation of feature importance as the target aspect for evaluating interpretability methods.
Formally, given a function M : R^din → R^dout and the (flattened) input feature vector x ∈ R^din, the interpretation of feature importance gives a non-negative score s(x)_i for each input feature, where a higher score indicates a more important feature. We select the following interpretability methods and compare their feature importance estimation results. Notice that some interpretability methods give signed scores (or "attributions"), where signs reflect positive/negative contributions of features to the output; we use the absolute values of signed scores as importance scores. For methods requiring a baseline input vector, unless otherwise specified, we follow the method in [33] and randomly sample a baseline x' ∈ R^din. (1) Gradient based methods.
• Saliency [10]. Saliency returns the gradient with respect to the input as feature importance: s(x) = ∂M(x)/∂x. By taking the first-order Taylor expansion of the neural network at the input, which is a linear approximation of the network, the gradient ∂M(x)/∂x_i = s(x)_i is the coefficient of the i-th feature.
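For a model with an analytic form, the saliency gradient can be written out directly. A minimal sketch for a logistic model M(x) = sigmoid(w·x + b) (a stand-in for a deep network, where one would use a framework's autograd instead):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency_logistic(w, b, x):
    """Saliency s(x) = |dM/dx| for M(x) = sigmoid(w @ x + b),
    which is M(x) * (1 - M(x)) * |w| by the chain rule."""
    p = sigmoid(w @ x + b)
    return np.abs(p * (1 - p) * w)
```

Note that for this model the saliency ranking is just |w| rescaled, matching the linear-approximation view above.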
• IntegratedGradients [11]. IntegratedGradients assigns an importance score to each input feature by approximating the integral of gradients of the model's output with respect to the inputs along the straight-line path from a given baseline to the input, i.e.,

s(x)_i = (x_i − x'_i) · ∫_0^1 [∂M(x' + α(x − x'))/∂x_i] dα,

where x' is the baseline.
• DeepLift [12,13]. DeepLift decomposes the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input.
The contributions are propagated along all paths in the network: P_io denotes the set of all paths from the i-th input feature to the output neuron, and (s, t) denotes a pair of connected neurons in a path p. Each neuron t computes a linear transformation z_t = Σ_{q∈Pa(t)} w_tq o_q + b_t followed by a nonlinear mapping o_t = f(z_t).
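The Integrated Gradients integral above is typically approximated by a Riemann sum over interpolation steps. A minimal sketch, where `grad_fn` is a hypothetical analytic gradient function standing in for a framework's autograd:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Midpoint Riemann-sum approximation of Integrated Gradients:
    (x - x') * mean over alpha of grad M(x' + alpha * (x - x'))."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)
```

For a linear model the attributions sum exactly to M(x) − M(x'), illustrating the completeness property of the method.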
• GradientShap [14]. GradientShap approximates SHAP (SHapley Additive exPlanations) values by computing the expectation of gradients over randomly sampled baselines. It first adds white noise to each input sample and selects a random baseline from a given distribution, then selects a random point along the path between the baseline and the noisy input, and computes the gradient of the output with respect to that random point. The procedure is repeated multiple times to approximate the expected values of gradients [14].
• DeepLiftShap [13]. DeepLiftShap extends the DeepLift algorithm and approximates SHAP values using DeepLift. For each input, it samples baselines from a given distribution, computes the DeepLift score for each input-baseline pair, and averages the resulting scores per input example as the output.
• SaliencyNoiseTunnel [15]. SaliencyNoiseTunnel adds Gaussian noise to the input sample and averages the attributions calculated with the Saliency method as the output.
(2) Perturbation based methods. • ShapleySampling [16,17]. The Shapley value gives attribution scores by taking each permutation of the input features and adding them one-by-one to a given baseline. Since the computational complexity is extremely high for large numbers of features, ShapleySampling takes a number of random permutations of the input features and averages the marginal contributions of features. • FeaturePermutation [18]. FeaturePermutation permutes the input feature values randomly within a batch and computes the difference between original and shuffled outputs as the result. • FeatureAblation [19]. FeatureAblation replaces each input feature with a given baseline value and computes the difference in output as the result. • Occlusion [20]. Occlusion replaces each contiguous rectangular region with a given baseline and computes the difference in output as the result. • ArchDetect [8]. ArchDetect utilizes the discrete interpretation of partial derivatives. While the original paper considers both single features and feature pairs, we only apply it to single features here, since the evaluation method in this work is designed for single-feature importance only. In the single-feature case, the importance score of the i-th feature is computed from the discrete partial derivative of the model output with respect to that feature, evaluated between the input value x_i and the baseline value x'_i. Here we select x' = 0 ∈ R^din.
(3) Glassbox interpretation. If the model's architecture directly provides feature importance scores as part of its output, such as an attention score for each feature, we call this interpretation "Glassbox" and regard it as an extra baseline.
(4) Random baseline. As a baseline, we use a random ordering of all features as the feature importance ranking.
For the models in Section 4.2, AutoInt maps categorical features to embeddings using learnable dictionaries and has no gradient on categorical features, so gradient based methods are not applicable to it. Only the IMVLSTM model has a Glassbox interpretation.
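To make the perturbation-based family concrete, FeatureAblation (described above) reduces to a short loop for a single sample; a minimal sketch with a generic callable model:

```python
import numpy as np

def feature_ablation(model, x, baseline):
    """Importance of each feature: absolute change in the model output when
    that feature alone is replaced by its baseline value."""
    out = model(x)
    scores = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        x_abl = x.copy()
        x_abl[i] = baseline[i]
        scores[i] = abs(out - model(x_abl))
    return scores
```

The other perturbation methods differ mainly in what is substituted (permuted batch values, rectangular regions, or discrete baseline/input combinations as in ArchDetect).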

Evaluation Method
Since acquiring ground-truth feature importance is challenging for mortality prediction tasks, we evaluate a feature importance estimation by gradually dropping the most important features it identifies, at certain ratios, from the dataset and observing the degradation of the model's performance. The larger the degradation, the better the estimation, since it identifies the features most helpful to the model on the task.
More specifically, we use ROAR (remove and retrain), proposed in [7], for evaluation. For each interpretability method, we replace a certain fraction of the most important features of each data sample with a fixed uninformative value. We do this in both the training and test sets. Then we retrain the model on the modified training set and evaluate its classification performance on the modified test set. By retraining the model on datasets with features removed, ROAR ensures that the training and test data come from a similar distribution and reduces the impact of data distribution discrepancy on the model's performance, so that the degradation of performance is caused by the removal of information rather than a shift of the data distribution.
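The ROAR loop just described can be sketched in a framework-agnostic way. `train_fn` and `eval_fn` are hypothetical callables standing in for the actual model training and metric computation, and `importance` holds per-sample importance scores for both splits:

```python
import numpy as np

def roar(train_fn, eval_fn, X_train, X_test, importance, drop_ratios):
    """For each drop ratio, replace the top-k most important features of every
    sample (in BOTH splits) with the training-set mean, retrain, and record
    test performance."""
    fill = X_train.mean(axis=0)  # fixed uninformative value per feature
    results = {}
    for r in drop_ratios:
        k = int(r * X_train.shape[1])
        modified = []
        for X, imp in ((X_train, importance["train"]), (X_test, importance["test"])):
            Xm = X.copy()
            top = np.argsort(-imp, axis=1)[:, :k]         # top-k per sample
            rows = np.arange(X.shape[0])[:, None]
            Xm[rows, top] = fill[top]                      # replace with means
            modified.append(Xm)
        model = train_fn(modified[0])
        results[r] = eval_fn(model, modified[1])
    return results
```

A better interpretability method produces a faster-falling `results` curve, which is what the AUC summary in Table 3 quantifies.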
For sequence input X ∈ R^(T×F), we flatten it and give feature importance scores for all T × F features. For the i-th feature, we use its mean value in the training set as its uninformative value. We evaluate each interpretability method with feature drop ratios 10%, 20%, ..., 100% and plot the curve of model performance with respect to the feature drop ratio for each model. Figure 1 shows the curves of model performance (measured with AUPRC and AUROC respectively) with respect to the feature drop ratio of different interpretability methods for each model. Table 3 gives the quantitative results of the area under the curve (AUC). A lower AUC means that the performance curve drops faster as the feature drop ratio increases, and thus indicates that the interpretability method gives a better ranking of feature importance. We have the following observations: (1) ArchDetect gives the best performing feature importance estimation overall. From Figure 1, we observe that the curve of ArchDetect drops the fastest for all models on both metrics. Quantitative results in Table 3 also show that ArchDetect has the lowest AUC. Therefore, for the in-hospital mortality task, the feature importance ranking given by ArchDetect is the most reasonable one among the results of all interpretability methods considered in this work. (2) Gradient based methods perform well on the LSTM, Transformer and IMVLSTM models, but are no better than a random guess on TCN. The AUC of both metrics of gradient based methods is significantly lower than that of random guessing for LSTM, Transformer and IMVLSTM. But for TCN, even the best performing gradient based method, SaliencyNoiseTunnel, has an AUC close to random guessing (0.581 vs 0.605 for AUPRC and 0.896 vs 0.901 for AUROC). (3) Attention scores are not necessarily the best estimation of feature importance. In IMVLSTM, the Glassbox baseline utilizes the attention scores the model gives as an estimation of feature importance.
Although it outperforms the random guessing baseline, it is not among the best interpretation methods and is inferior to methods such as ArchDetect, IntegratedGradients and GradientShap. Similar observations also exist in the natural language processing domain [69,70], where attention weights largely do not correlate with feature importance. We further investigate and compare important features given by different prediction models with the best performing interpretability method, ArchDetect, in Section 4.5.1 for a qualitative evaluation of its effectiveness. Since ArchDetect gives local feature importance for each data sample, we aggregate local results for a global qualitative evaluation with the following steps: (1) for each sample, get the rank of importance of each individual feature; (2) calculate the average rank of each feature over all data samples; (3) sort the averaged ranks from (2) to obtain the global ordering of importance for all features. We then verify the effectiveness of the feature importance estimation given by ArchDetect from the following aspects:
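The three aggregation steps above amount to a rank-averaging operation; a minimal NumPy sketch:

```python
import numpy as np

def global_feature_ranking(scores):
    """Aggregate per-sample importance scores of shape (n_samples, n_features)
    into a global ordering: rank features within each sample (rank 0 = most
    important), average the ranks over samples, and sort ascending."""
    ranks = np.argsort(np.argsort(-scores, axis=1), axis=1)  # per-sample ranks
    return np.argsort(ranks.mean(axis=0))                    # global ordering
```

The double `argsort` converts scores into within-sample ranks; averaging ranks rather than raw scores makes the aggregation robust to differing score scales across samples.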

Identified Important Features
Similarity of Important Features from Different Models Figure 2 shows the Jaccard similarity of top-50 most important features identified in models. We observe that (1) the Jaccard similarity of top-50 most important features from any pair of two models is above 0.667; (2) each pair of models accepting sequential data (LSTM, TCN, Transformer, and IMVLSTM) has a Jaccard similarity over 0.786. This result demonstrates that ArchDetect identifies similar sets of important features when applied to various models, which is necessary for its correctness since the ground truth set of important features is unique.
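The pairwise overlap measure used here is easy to reproduce; a minimal sketch with hypothetical orderings (using k = 3 instead of 50 for readability):

```python
def jaccard_top_k(order_a, order_b, k=50):
    """Jaccard similarity of the top-k features of two global importance orderings."""
    top_a, top_b = set(order_a[:k]), set(order_b[:k])
    # |intersection| / |union| of the two top-k feature sets
    return len(top_a & top_b) / len(top_a | top_b)

# two toy orderings sharing 2 of their top-3 features: 2 shared / 4 in union
print(jaccard_top_k([3, 1, 4, 0, 2], [1, 3, 0, 2, 4], k=3))  # 0.5
```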
Visualization of Global Feature Importance Ranks We find that the feature importance estimation results successfully identify critical features that are also used in the domain knowledge based SAPS-II system for severity classification, indicating the correctness of results. We also notice that demographic features play important roles in prediction models, which may raise the concern of fairness. We further investigate the fairness of data and models in the following section.

Fairness Evaluation
In this section, we first describe the set of demographic features considered as protected attributes. We then investigate the extent to which disparate treatment exists within the MIMIC-IV dataset. Given that the in-hospital mortality predictors can be further developed and utilized in a downstream decision-making policy, we further audit their performance in terms of fairness across the various protected attributes.

Protected Attributes
MIMIC-IV comes with a set of demographic features that are helpful for the task of auditing in-hospital mortality predictors for prediction fairness. Protected classes under the Equal Credit Opportunity Act (ECOA) include the following: age, color, marital status, national origin, race, recipient of public assistance, religion, and sex [71]. For our task, we consider the subset of these protected classes available within the dataset. Table 4 lists the attributes and subgroups used within our analysis. Note that age is grouped by quartiles. For a more in-depth look at the distribution of each subgroup, please refer to Table A2 in the Appendix.

Fair Treatment Analysis
Disparate treatment is unlawful discrimination in US labor law. Title VII of the United States Civil Rights Act was created to prevent unequal treatment of or behavior toward someone because of a protected attribute (e.g., race, gender, or religious beliefs). Although the type and duration of treatment received by patients are determined by multiple factors, analyzing treatment disparities in MIMIC-IV can give us insights into potential biases in the treatment received by different groups. Previously, a few works have pointed out racial disparities in end-of-life care between cohorts of black and white patients within MIMIC-III [72,73]. In a similar spirit, we additionally investigate treatment adoption and duration across not only ethnicity, but also gender, age, marital status, and insurance type.

Evaluation Method
In MIMIC-IV, 5 categories of mechanical ventilation received by patients are recorded: HighFlow, InvasiveVent, NonInvasiveVent, Oxygen, and Trach. We first extract the treatment duration and then label patients with no record as having no intervention adoption. If a patient had multiple spans, such as an intubation-extubation-reintubation, then we consider the patient's treatment duration to be the sum of the individual spans. Figure 9 plots the intervention adoption rate and intervention duration across the different protected attributes. We observe that: (1) Disparate treatment exists, and is most evident across ethnic groups. The first column in Figure 9 indicates that, on average, Black and Hispanic cohorts are less likely to receive ventilation treatments, while also receiving a shorter treatment duration. This is also observed across groups split by marital status, where single patients tend to receive shorter and fewer ventilation treatments than married patients, and similarly between patients with public and private insurance. (2) There are numerous hidden confounders in analyzing disparate treatment. The fourth column in Figure 9 indicates that more treatments are provided to older patients. However, the cause of this may well be medically relevant, as the older cohort tends to require more care. Similarly, patients with generous public insurance can more easily afford more treatments. In particular, we note that it is difficult to precisely determine whether the differences in treatment are due to intentional discrimination or to differences caused by other confounders. At this juncture, we suspect a closer look through causal analysis can help address this problem.
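The adoption-rate and duration statistics behind Figure 9 can be sketched as below. The DataFrame columns and toy values are illustrative only and do not reflect the actual MIMIC-IV schema or results:

```python
import pandas as pd

# one hypothetical row per patient: total ventilation hours (individual spans
# already summed; 0 = no recorded ventilation) plus a protected attribute
df = pd.DataFrame({
    "ethnicity":  ["WHITE", "BLACK", "WHITE", "HISPANIC", "BLACK", "WHITE"],
    "vent_hours": [30.0, 0.0, 12.5, 6.0, 4.0, 0.0],
})
df["adopted"] = df["vent_hours"] > 0  # no record -> no intervention adoption

summary = df.groupby("ethnicity").agg(
    adoption_rate=("adopted", "mean"),
    mean_duration=("vent_hours", lambda h: h[h > 0].mean()),  # among treated only
)
print(summary)
```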

Fair Prediction Analysis
Fairness in machine learning is a rapidly developing field with numerous definitions and metrics for prediction fairness with respect to two notions: individual and group fairness. For our binary classification task of in-hospital mortality prediction, we consider the group notion where a small number of protected demographic groups G (such as racial groups) is fixed, and we then ask for the classification parity of certain statistics across all of these protected groups.

Fairness Metrics
Recently, a multitude of statistical measures have been introduced for group fairness; the most notable are statistics that ask for the equality of false positive or false negative rates across all groups G (often known as 'equal opportunity' [43]) or the equality of classification rates (also known as statistical parity). Interestingly, it has been proven that some of the competing definitions and statistics previously proposed are mutually exclusive [74]; thus, it is impossible to satisfy all of these fairness constraints simultaneously.
In our case, it is often necessary for mortality assessment algorithms to explicitly consider health-related protected characteristics, especially the age of the patients; an age-neutral assessment score can systematically overestimate risk for certain age groups. Hence, we choose AUC (area under the ROC curve) as our evaluation metric to audit fairness across subgroups. First, it encompasses both FPR and FNR, which touches on the notions of equalized opportunity and equalized odds. Second, it is robust to class imbalance, which is especially important in the task of mortality prediction where mortality rates are ∼7%. Lastly, AUC is threshold agnostic: it does not necessitate setting a specific threshold for binary prediction to be used across all groups.

Evaluation Method
To evaluate fairness for each model on the MIMIC-IV dataset, we stratify the test set by groups (Table 4), and compute the AUC for each protected group, similarly to [75]. In addition, we also added a stratification for the patient group with the largest common comorbidity, with HEM/METS for patients with lymphoma, leukemia, multiple myeloma, and metastatic cancer. We report (1) AUC(min): minimum AUC over all protected groups, (2) AUC(macro-avg): macro-average over all protected group AUCs and (3) AUC(minority): AUC reported for the smallest protected group in the dataset. Higher AUC is better for all three metrics.
Additionally, as MIMIC-IV is an ongoing data collection effort, we also investigate the relationships between the predictive performance of the mortality predictors and the data distribution with respect to each protected group. It was shown in [76] that if the risk distributions of protected groups in general differ, such as mortality rates, threshold-based decisions will typically yield error metrics that also differ by group. Hence, we are interested in studying the potential source of the bias/differences in predictive performances from the MIMIC-IV training set. Figure 10 shows the training data distribution, mortality rates, and testing AUCs across each protected attribute for all patients and patients with HEM/METS, summarized over all five classifiers: AutoInt, LSTM, IMV-LSTM, TCN, and Transformer. Smaller gaps in AUC indicate equality in predictive performances, and larger gaps indicate potential inequalities. Table 5 gives the quantitative results of the area under the curve (AUC). Higher values of AUCs for each of the min, avg, and minority AUC metrics indicate better predictive performance with respect to the protected groups.
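The three AUC summaries can be computed per protected attribute as sketched below (a minimal sketch using scikit-learn; the toy labels, scores, and groups are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fairness_aucs(y_true, y_score, groups):
    """Per-group AUCs summarized as AUC(min), AUC(macro-avg), AUC(minority)."""
    y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
    per_group = {g: roc_auc_score(y_true[groups == g], y_score[groups == g])
                 for g in np.unique(groups)}
    # the smallest protected group in the data
    minority = min(per_group, key=lambda g: (groups == g).sum())
    return {"min": min(per_group.values()),
            "macro_avg": float(np.mean(list(per_group.values()))),
            "minority": per_group[minority]}

# toy example: group A is ranked perfectly, group B is mis-ranked
y_true  = [0, 1, 0, 1, 0, 1]
y_score = [0.1, 0.9, 0.2, 0.8, 0.4, 0.3]
groups  = ["A", "A", "A", "A", "B", "B"]
print(fairness_aucs(y_true, y_score, groups))  # min 0.0, macro_avg 0.5, minority 0.0
```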

Results
We have the following observations: (1) IMV-LSTM performs the best overall on the fairness measure with respect to AUC across different protected groups. Quantitatively, from Table 5, it is clear that IMV-LSTM has the highest AUC for both the overall samples and the subgroups. We see that its minimum AUC over the protected subgroups is the highest among the methods considered in this work, indicating a higher lower bound over all protected attributes. Moreover, its margin in minimum AUC over the next best model, Transformer, is substantial. (2) All models are generally fair across protected groups, with larger gaps for the HEM/METS cohort. From Figure 10, we observe that the maximum AUC gap across all attributes is at most 0.08, which is smaller than the maximum AUC gap of 0.11 for patients with HEM and METS. The difference is most pronounced for the Ethnicity class, but can similarly be observed for other protected classes. In general, we note that all models are quite fair across ethnic groups, with small deviations across gender and insurance type. Across both sets of patients, we see that all classifiers are in general more accurate for younger patients (<55 years) than for older patients. (3) There exists a strong correlation between mortality rates and AUCs for each of the protected attributes. We observe a strong correlation between group mortality rates and group AUCs, with Pearson's r = -0.922 and a p-value < 0.00001: groups with higher mortality rates tend to have lower AUC scores. From Figure 10, we also observe that imbalanced representation between subgroups does not substantively impact predictive performance.

Figure 11: Interactions between feature importance from two interpretability approaches and the fairness metric Min AUC, based on mortality predictions from four models.
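The mortality-rate/AUC correlation in observation (3) can be checked as follows; the (mortality rate, group AUC) pairs below are hypothetical stand-ins, not the values behind our reported r = -0.922:

```python
from scipy.stats import pearsonr

# hypothetical per-subgroup (mortality rate, AUC) pairs
mortality_rates = [0.04, 0.06, 0.08, 0.10, 0.13]
group_aucs      = [0.91, 0.89, 0.88, 0.85, 0.83]

# Pearson's r and two-sided p-value
r, p = pearsonr(mortality_rates, group_aucs)
print(round(r, 3))  # strongly negative: higher mortality rate, lower AUC
```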

Interactions between Interpretability and Fairness
Fairness and interpretability are two critical pillars of the recent push for fairness, accountability, and transparency within deep learning. Overall, most interpretability work concerns explaining how the input features impact the final prediction, whether through feature importance or attributions, interactions, and knowledge distillation. Fairness work, on the other hand, concerns fairness metrics, optimization under fairness constraints, and the trade-off between accuracy and fairness. However, to the best of our knowledge, few works attempt to answer the question of how interpretability can help with fairness. What can we learn from our interpretability methods that would indicate either algorithmic bias or representation bias? In this section, we present concrete evidence to establish an initial connection between the two areas, but admittedly leave a full investigation of the strength of this interaction for future work.

Feature Importance Correlation with Fairness Metrics
Given mortality predictions made by state-of-the-art models on MIMIC-IV, we study the connections between the feature importance induced by different interpretation approaches and the fairness measures in Figure 11. For all five protected attributes, we compute their respective feature importance by averaging the values produced by the interpretability methods across time and patients. Taking the feature importance as the x-axis and the minimum AUC over the subgroups split by each protected attribute as the y-axis, we expect to see a decreasing trend, where more important features are more likely to lead to performance divergence across the split subgroups. We observe the expected trend consistently among all prediction models when the interpretability approaches DeepLift and DeepLiftShap are used. As shown in Figure 11, age (black dot) is the most important feature compared with the other protected attributes, and the accuracy difference between young and old patients is more pronounced than for other group divisions. Similarly, ethnicity (red dot) and gender (green dot) are the least important features, which leads to a much higher minimum AUC than for the other protected attributes. We plotted but did not observe obvious connections between feature importance from the other interpretability approaches and the other two fairness evaluation metrics.
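Beyond visual inspection, the decreasing trend can be summarized with a rank correlation between each attribute's averaged importance and its minimum subgroup AUC; the numbers here are purely illustrative, not our measured values:

```python
from scipy.stats import spearmanr

# hypothetical averaged importance and minimum subgroup AUC, ordered as:
# ethnicity, gender, insurance, marital status, age
importance = [0.02, 0.03, 0.05, 0.06, 0.20]
min_auc    = [0.90, 0.89, 0.87, 0.86, 0.80]

# rho < 0 indicates the expected decreasing (anti-monotone) relationship
rho, p = spearmanr(importance, min_auc)
print(rho)  # -1.0 for this perfectly monotone toy data
```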

Feature Importance Scores across Protected Attributes
Interpretability work often concerns global feature importance for the entire model and local feature importance for an individual sample with respect to its prediction. Here, we consider a group feature importance that builds upon local feature importance. Ideally, we want to measure how important each feature is across the different groups defined by a protected attribute. Hence, for feature i and protected attribute value A, we define the group feature importance as g_{i,A} = (1/N_A) Σ_{j: A_j = A} φ_i^j, where N_A is the size of the group with attribute A, and φ_i^j is the local feature importance of feature i for a patient j with attribute A. Parity between the g_{i,A} would indicate parity in how each feature is being used for different groups within a certain class of protected attributes. In the MIMIC-IV setting, we are interested in the importance of each of the demographic features used for in-hospital mortality prediction across the protected subgroups.
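The group feature importance g_{i,A} can be computed directly from a matrix of local importance scores; a minimal sketch with toy values and hypothetical attribute labels:

```python
import numpy as np

def group_feature_importance(phi, attr):
    """g_{i,A}: mean of local importance phi[j, i] over patients j with attribute A.

    phi:  (n_patients, n_features) local feature importance scores
    attr: length-n_patients sequence of protected attribute values
    """
    attr = np.asarray(attr)
    # one importance vector (over features i) per attribute value A
    return {a: phi[attr == a].mean(axis=0) for a in np.unique(attr)}

# 3 patients, 2 features, toy gender labels
phi = np.array([[0.5, 0.1],
                [0.3, 0.3],
                [0.2, 0.6]])
g = group_feature_importance(phi, ["F", "F", "M"])
print(g["F"], g["M"])  # [0.4 0.2] [0.2 0.6]
```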
Since the scales of the feature importance scores differ across interpretability methods, we calculate the group feature importance for each demographic feature and rank its importance relative to the other features within each interpretability method. Additionally, since feature importance is provided for each (hourly timestep, feature) pair within the first 24 hours in the ICU, for all models we additionally average the feature importance across timesteps. Figure 12 presents the box plot of the feature rankings for each demographic feature for the four models (Transformer, TCN, LSTM, and IMV-LSTM) and each of the interpretability methods: ArchDetect, DeepLiftShap, FeaturePermutation, IntegratedGradients, SaliencyNoiseTunnel, DeepLift, FeatureAblation, GradientShap, Occlusion, Saliency, and ShapleySampling. A lower ranking indicates higher feature importance.
We observe that similar trends exist across models of varying architectures, where a demographic feature is more important (has a lower ranking) for specific groups. Out of the 164 features used for each timestep, the feature ethnicity has the highest feature importance for WHITE patients, and similarly for MALE patients with the feature gender, the age group >= 78 YRS with the feature age, and so on. The protected attribute age is the most intuitive in this setting: in-hospital mortality predictors would attribute high importance to elderly patients, since old age is a strong signal for mortality. A similar case can be made for the feature insurance, as patients with Medicare are often elderly. However, it is less intuitive why one ethnic subgroup would rely on the ethnicity feature more strongly than the other subgroups. This stark disparity exists for all models, and even across the different interpretability methods used to obtain feature importance.
Again, it is difficult to identify the confounders or features that strongly correlate with the ethnicity feature, so a causal perspective is perhaps most needed for this task. However, we do note that feature importance, especially when viewed as group importance, can concretely reveal how a feature is being used for different groups. Should this disparity be considered "unfair", and should demographic features therefore be excluded when training mortality classifiers? As we have observed, however, age is such a strong indicator of mortality that omitting it would be detrimental to the classifiers' performance. We leave this connection for future work and emphasize the need to connect interpretability and fairness in order to explain what the model is doing and how that explanation can be used to reveal and/or audit potential bias within the model itself.

Summary
In this work, we conduct analyses of the MIMIC-IV dataset and several deep learning models in terms of model interpretability, dataset bias, algorithmic fairness, and the interaction between interpretability and fairness. We present a quantitative evaluation of interpretability methods on deep learning models for mortality prediction, demonstrate dataset bias in treatment within MIMIC-IV, verify the fairness of the studied mortality prediction models, and reveal disparities in feature importance among demographic subgroups. In future work, we will conduct further analysis from a causal perspective on the relation between differences in feature importance and differences in model outcomes among subgroups.