Joint Imbalance Adaptation for Radiology Report Generation

Purpose: Radiology report generation, translating radiological images into precise and clinically relevant descriptions, faces a data imbalance challenge: medical tokens appear less frequently than regular tokens, and normal entries significantly outnumber abnormal ones. However, very few studies consider these imbalance issues, let alone their joint effects. Methods: In this study, we propose a Joint Imbalance Adaptation (JIMA) model that promotes task robustness by jointly modeling token and label imbalance. JIMA predicts entity distributions from images and generates reports based on these distributions and image features. We employ a hard-to-easy curriculum learning strategy that mitigates overfitting to frequent labels and tokens, thereby encouraging the model to focus more on rare labels and clinical tokens. Results: JIMA shows notable improvements (16.75% - 50.50% on average) across evaluation metrics on the IU X-ray and MIMIC-CXR datasets. Our ablation analysis shows that JIMA's enhanced handling of infrequent tokens and abnormal labels accounts for the major contribution. Human evaluation and case study experiments further validate that JIMA generates more clinically accurate reports. Conclusion: Data imbalance (e.g., infrequent tokens and abnormal labels) leads to underperformance in radiology report generation. Our curriculum learning strategy reduces the impact of data imbalance by reducing overfitting on frequent patterns and underfitting on infrequent patterns. While data imbalance remains challenging, our approach opens new directions for the generation task.


Introduction
Radiology report generation is a multimodal medical image-to-text task that generates text descriptions for radiographs (e.g., X-rays or CT scans), which may reduce the workloads of radiologists [1,2]. The task has unique characteristics compared to general image-to-text tasks (e.g., image captioning), such as lengthy medical notes, medical annotations, and clinical terminologies. As demonstrated in Figure 1, data imbalance can significantly impact model robustness and prevent model deployment in practice: models can easily overfit on frequent patterns. However, modeling data imbalance to support robust radiology report generation is understudied. Two major data imbalances exist in the radiology generation task: label and token. Label imbalance refers to a disproportionate ratio of normal and abnormal diagnosis categories, which exists in both radiological images and text reports. For instance, normal cases (images and reports) dominate radiology data, which can easily lead to underperformance in disease detection and professional description. As shown in Table 1, abnormal reports are considerably longer than normal reports while accounting for less than 15% of the data. These abnormal reports are much harder to generate than shorter reports [3][4][5], and the difficulty is compounded by having fewer samples than normal cases.
Existing imbalance learning studies of radiology report generation primarily focus on label imbalance [7,8]. Token imbalance is a critical challenge in generation: tokens have varied occurrence frequencies, and the issue is more pronounced in the medical task. Infrequent tokens can be harder for generation models to learn than frequent tokens [9,10]. Medical tokens appear less frequently than regular ones, yet these infrequent tokens may carry more medical findings, highlighting the unique challenge of this task. Figure 1 illustrates the learning progress of the state-of-the-art (SOTA) model RRG [11] in predicting a report with predominantly normal diagnoses. The model shows strong performance on normal cases but struggles on abnormal reports.
To improve the quality of generated reports, we propose the Joint Imbalance Adaptation (JIMA) model based on curriculum learning [12]. JIMA automatically guides the model learning process by leveraging optimization difficulties, strengthening learning capability on infrequent samples, and alleviating overfitting on frequent patterns for both labels and tokens. We incorporate the token and label metrics into a joint optimization and design a novel training scheduler that samples and sorts training data. We conduct experiments on two publicly available datasets, MIMIC-CXR [13] and IU X-ray [14], with automatic and human evaluations. Compared with six state-of-the-art (SOTA) baselines in overall and imbalance performance settings, our approach shows promising results. Our ablation and qualitative analyses show that JIMA generates more precise medical reports, alleviating label and token imbalance.

Data
We collected two publicly accessible datasets for this study, IU X-ray [14] and MIMIC-CXR [13], which are de-identified chest X-ray datasets for evaluating radiology report generation. IU X-ray [14], collected from the Indiana Network for Patient Care, includes 7,470 X-ray images and 3,955 corresponding radiology reports. MIMIC-CXR [13], collected from the Beth Israel Deaconess Medical Center, contains 377,110 X-ray images and 227,827 radiology reports for 65,379 patients. Each report is a text document associated with one or more frontal and lateral X-ray images. Table 1 summarizes statistics of data imbalance, and Figure 2 visualizes the distributions of frequent (ranked in the top 12.5% of the vocabulary) and infrequent tokens. We include preprocessing details in Appendix A. Table 1 presents imbalance patterns in tokens and labels. Abnormal entries are predominant in both datasets, and MIMIC-CXR displays a more skewed label distribution, as more abnormal samples were collected during diagnosis phases rather than for screening purposes. MIMIC-CXR has a longer average report length than IU X-ray. The lengthier documents may pose a unique multimodal generation challenge in the medical field. For our analysis, we define low and high frequency using the top 12.5% most frequent tokens. Figure 1 suggests a joint relation between label and token imbalance, with higher ratios of low-frequency tokens in abnormal reports. This observation motivates us to investigate how imbalance impacts model robustness and reliability.

Imbalance Effects
We examine the potential impact of label and token imbalance on model performance. To ensure consistency, we keep the top 12.5% threshold to split low- and high-frequency tokens for evaluation purposes. The analysis includes three state-of-the-art models: R2Gen [15], WCL [16], and CMN [17]. We use BLEU-4 [18] and F1 scores to measure performance across both token (low vs. high frequency) and label (normal vs. abnormal) imbalance. We visualize performance variations in Figure 2. The results suggest that the models exhibit significant difficulties under label and token imbalance. Models consistently perform worse on abnormal reports, which are lengthier and have more infrequent tokens than normal reports. For example, the top 12.5% most frequent tokens account for more than 80% of token occurrences in the two datasets, and low-frequency tokens show much worse performance than frequent tokens, as infrequent tokens are harder to optimize [19]. However, infrequent tokens contain higher ratios of medical terms (e.g., silhouettes and pulmonary) describing health states. The significantly varying performance highlights the unique challenge of adapting to token and label imbalance. While existing work [7] has considered label imbalance, that study did not examine the performance effects of label or token imbalance. These findings inspire us to propose our model, Joint Imbalance Adaptation (JIMA), to model token and label imbalance.

Joint Imbalance Adaptation
In this section, we present our approach, Joint Imbalance Adaptation (JIMA), shown in Fig. 3, which uses curriculum learning. JIMA aims to augment model robustness under label and token imbalance. As optimizing under data imbalance has been demonstrated to be difficult, deploying such a learning strategy strengthens model robustness and reliability. Our proposed approach deploys curriculum learning (CL) [20], which automatically adjusts the optimization process by gradually selecting training data entries based on learning difficulty, learning from hard to easy samples as our optimization strategy [21]. To achieve this goal, we design two major CL modules: a difficulty measurer for assessing the difficulty of samples, and a training scheduler for determining the percentage of training data. We then apply our CL training strategy to two tasks. Task 1 predicts entities from the images, and Task 2 generates a report from image features and the entity distribution.
The difficulty measurer measures sample difficulties. To diversify learning aspects and jointly incorporate imbalance factors, we propose a novel measurement to improve model performance over imbalance patterns. Our measurement adopts a competitive mechanism that encourages correct options to rank higher than incorrect ones, rather than independently increasing the likelihood of correct options and decreasing the likelihood of incorrect options. This approach helps mitigate overfitting on common samples and underfitting on rare samples, since it focuses on the ranking of the correct option rather than prediction confidence. Specifically, given a reference token z, a vocabulary list V, and the prediction p ∈ R^|V|, we calculate the probability ranking of the reference token z in the prediction p as follows:

k = Rank(p, p[z]) / |V|,    (1)

where |V| is the vocabulary size. Rank(p, p[z]) sorts p in descending order and identifies the position of p[z] within this ranking. k ranges from 0 to 1 after normalization by |V|. A higher value of k indicates that the sample is more difficult. We then feed the difficulty information to the next step, the training scheduler.

Fig. 3 JIMA has two tasks. Task 1 predicts the entity distribution from images, and Task 2 generates a report from the image features and the entity distribution. We assign one color per task and use solid arrows for workflows.
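To make the ranking concrete, here is a minimal sketch of the difficulty measurer (function and variable names are our own; the paper's implementation may differ):

```python
def difficulty(p, z):
    """Difficulty k = Rank(p, p[z]) / |V| (eq. 1): the normalized position
    of the reference token z when the prediction probabilities p are sorted
    in descending order. Higher k means the correct token ranks lower,
    i.e., a harder sample."""
    rank = sum(1 for prob in p if prob > p[z])  # tokens ranked above z
    return rank / len(p)

# A confidently ranked prediction is easy (k = 0) even if its probability
# is modest; only the relative ranking matters, not the confidence.
p = [0.40, 0.35, 0.15, 0.10]  # toy distribution over a 4-token vocabulary
print(difficulty(p, 0))  # easiest: the reference token ranks first
print(difficulty(p, 3))  # hard: three tokens outrank the reference
```

This ranking-based view is what lets the measurer stay stable when curriculum learning lowers the raw probability of frequent tokens: as long as the correct token still outranks the alternatives, the sample remains "easy".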
The training scheduler automatically leverages imbalance effects by selecting training samples via the difficulty measurers. Our goal is to increase the number of easier samples when performance decreases, and vice versa. Accordingly, we design our scheduler function c(s_t) as follows:

c(s_t) = (1 − (s_t − s_{t−1}) / s_{t−1}) · c(s_{t−1}),    (2)

where s_t is the average performance of all training samples at training step t, measuring the model's learning ability. Taking decreasing performance as an example, s_t − s_{t−1} will be negative. In this case, the ratio 1 − (s_t − s_{t−1})/s_{t−1} > 1 allows the model to include more easy training data than at the last step c(s_{t−1}). When performance increases, the scheduler feeds fewer easy samples to the model, reducing overfitting on these samples. After multiple epochs of training, harder samples receive more training iterations than easier samples. In this way, we alleviate the challenge of imbalanced tokens and labels in the radiology report generation task. To start our curriculum learning, we record the samples' average performance over the last two regular training epochs as s_0 and s_1, and we empirically initialize c(s_0) as 1.
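The scheduler update can be sketched as follows (a minimal illustration under our reading of the formula; clipping the result to a valid fraction in [0, 1] is our assumption):

```python
def schedule(c_prev, s_t, s_prev):
    """c(s_t) = (1 - (s_t - s_prev) / s_prev) * c(s_prev): the fraction of
    the (sorted) training data fed to the model at step t. When performance
    drops, the ratio exceeds 1 and more easy samples are included; when it
    rises, fewer easy samples are fed."""
    ratio = 1.0 - (s_t - s_prev) / s_prev
    return min(max(ratio * c_prev, 0.0), 1.0)  # clip to a valid fraction

# Performance dropped from 0.50 to 0.40: feed more data (0.80 -> 0.96).
print(schedule(0.8, 0.40, 0.50))
# Performance rose from 0.50 to 0.60: feed less data (0.80 -> 0.64).
print(schedule(0.8, 0.60, 0.50))
```

The feedback loop is intentionally simple: the only signal is the relative change in average performance between consecutive steps.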

CL-Task 1
CL-Task 1 exploits imbalance patterns of radiology labels to generate clinically accurate reports. Entities in clinical reports play a crucial role in disease diagnosis. However, these clinical tokens often occur infrequently and are significantly underestimated during model training. Hence, we assess the accuracy of clinical entities to evaluate performance. Our intuition is that, since abnormal cases contain more infrequent entities, focusing on clinical entities may benefit abnormal cases. If the generated reports are clinically correct, the visual extractor can accurately extract the same entities as the gold entities from images.
The computing process is as follows. Given a radiology image Img and the corresponding report Z = (z_0, ..., z_l) of length l, we extract features from the image with a visual extractor. We use ResNet101 [22] (f_R) as our visual extractor and obtain image features X from different convolutional channels, X = f_R(Img), where X ∈ R^{patch_size×d} and d is the size of the feature vector. To predict the entity distribution, we feed the features X into the entity extractor (f_E) with parameters W_E ∈ R^{d×|V|} and average the values over the patches (the first dimension):

q = f_E(X) = σ(mean_patch(X W_E)),    (3)

where we obtain the entity distribution representation q ∈ R^|V|. To optimize the model, we minimize the binary cross entropy:

L_task1 = − Σ_i [ y_i log q_i + (1 − y_i) log(1 − q_i) ],    (4)

where q_i is the prediction probability of the i-th token and y_i = 1 if the i-th token is one of the entities. We extract the gold entities (e) with RadGraph [23]. To evaluate a sample's difficulty in this task, we input the entity distribution prediction q into eq. 1 and obtain k_task1.
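A minimal sketch of the Task 1 computation, with plain Python standing in for the actual ResNet101 and entity-extractor modules (names and toy dimensions are illustrative):

```python
import math

def entity_distribution(X, W_E):
    """q = sigmoid(mean_patch(X @ W_E)): average image features over
    patches, project to the vocabulary with W_E, squash to probabilities.
    X: patch_size x d, W_E: d x |V|, q: |V|."""
    d, vocab = len(W_E), len(W_E[0])
    mean_feat = [sum(row[i] for row in X) / len(X) for i in range(d)]
    logits = [sum(mean_feat[i] * W_E[i][v] for i in range(d))
              for v in range(vocab)]
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]

def bce_loss(q, y):
    """Binary cross entropy over the vocabulary; y[i] = 1 for gold
    entities (extracted by RadGraph in the paper)."""
    eps = 1e-9
    return -sum(yi * math.log(qi + eps) + (1 - yi) * math.log(1 - qi + eps)
                for qi, yi in zip(q, y)) / len(q)

X = [[1.0, 0.0], [0.0, 1.0]]              # two patches, d = 2
W_E = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]]  # toy vocabulary, |V| = 3
q = entity_distribution(X, W_E)
print(bce_loss(q, [1, 1, 0]))  # lower when q matches the gold entities
```

In the real model, q comes from trainable parameters on top of convolutional features; the sketch only shows how the averaged projection and the per-token BCE fit together.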

CL-Task 2
CL-Task 2 implements an image-to-text generation pipeline with the objective of improving the prediction of infrequent tokens in reports. To generate a report containing more clinically useful information, we integrate the probability prediction of entities q from eq. 3 with the image features X. Since d ≠ |V|, q and X cannot interact directly. To facilitate their interaction and information sharing, we employ a cross-concatenation, broadcasting q across the patch dimension:

S = [X ; 1 q^⊤],    (5)

where S ∈ R^{patch_size×(d+|V|)}. Finally, we adopt a transformer structure to encode S and generate the i-th token probability distribution P_i from the encoded features S and the previous token, P_i = f_T(S, z_{i−1}). To optimize the model, we minimize the negative log-likelihood (NLL) loss:

L_task2 = − Σ_{i=1}^{l} log P_i(z_i).    (6)

We assess a sample's difficulty with P_i by eq. 1.
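The cross-concatenation can be sketched as follows (our reading of the operation: q is broadcast across patches and appended to each patch's features; the paper's exact interaction may differ):

```python
def cross_concat(X, q):
    """S = [X ; broadcast(q)]: append the entity distribution q (|V|) to
    each patch's image feature vector (d), giving S of shape
    patch_size x (d + |V|). This lets the transformer condition on both
    modalities despite d != |V|."""
    return [list(row) + list(q) for row in X]

X = [[0.1, 0.2], [0.3, 0.4]]  # patch_size = 2, d = 2
q = [0.9, 0.05, 0.05]         # entity distribution, |V| = 3
S = cross_concat(X, q)
print(len(S), len(S[0]))      # patch_size x (d + |V|)
```

Because every patch carries the same copy of q, the downstream transformer can attend to entity evidence at any spatial position rather than from a single fused vector.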

CL-Joint Optimization
We propose a joint optimization approach to integrate the two tasks. Algorithm 1 summarizes the overall optimization process of our approach. We set the learning rate of Task 1 as α, and β refers to the learning rate of Task 2. In each training step, we sample different data for each task, and each task focuses on optimizing its own modules of the model. For example, we update the visual extractor (f_R) and the entity extractor (f_E) in Task 1. Next, we freeze the parameters of the visual extractor and the entity extractor, and update the parameters of the transformer (f_T) specifically for Task 2. Our optimization approach integrates with curriculum learning to tailor joint imbalance learning for each module (f_R, f_E, f_T). Curriculum learning empowers the model to concentrate on optimizing hard samples while mitigating the risk of overfitting to easier samples. The joint optimization scheme lets each task manage the optimization of different module parameters and learn transferable knowledge from the simpler to the more complex task. As a result, all modules collaborate to reduce errors propagated from previous tasks.
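One epoch of Algorithm 1 can be sketched as follows (a simplified skeleton: callbacks stand in for the gradient updates of (f_R, f_E) and f_T, and sorting hard-first follows the paper's hard-to-easy strategy, which is our assumption):

```python
def jima_epoch(D1, D2, update_task1, update_task2, c1, c2):
    """D1/D2: lists of (sample, difficulty-k) pairs for Task 1 and Task 2.
    c1/c2: scheduler fractions in (0, 1]. update_task1 trains f_R and f_E;
    update_task2 trains f_T while f_R and f_E stay frozen."""
    # Rank entries by the difficulty measurers (hard-to-easy).
    D1_sorted = sorted(D1, key=lambda x: x[1], reverse=True)
    D2_sorted = sorted(D2, key=lambda x: x[1], reverse=True)
    # Select the scheduled top fraction of each sorted dataset.
    sel1 = D1_sorted[: max(1, int(c1 * len(D1_sorted)))]
    sel2 = D2_sorted[: max(1, int(c2 * len(D2_sorted)))]
    # Update Task 1 modules, then Task 2 with Task 1 modules frozen.
    update_task1([s for s, _ in sel1])
    update_task2([s for s, _ in sel2])
    return len(sel1), len(sel2)

seen = []
D = [("easy", 0.1), ("hard", 0.9), ("mid", 0.5)]
jima_epoch(D, D, seen.append, seen.append, c1=0.67, c2=0.34)
print(seen)  # the hardest samples are selected first for each task
```

The two selections are independent, so a sample that is hard for entity prediction but easy for generation (or vice versa) is scheduled differently per task.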

Experiments
We design our experiments to evaluate performance in both regular and imbalanced settings via automatic and human evaluations. The automatic evaluation includes NLG-oriented and clinical-correctness metrics. NLG-oriented metrics measure the similarity between generated and reference reports. Clinical correctness and human evaluation belong to factually-oriented metrics and domain-specific evaluation methods. To be consistent with our baselines [10,11,15], we utilize the F1 CheXbert [24] for the clinical-correctness metrics. The experiments compare our proposed approach (JIMA) with the state-of-the-art baselines. Two of our baselines (CMM+RL and RRG) are designed to address label imbalance by improving abnormal findings generation. We conduct ablation and case analyses to fully understand the capabilities of our proposed approach. We include more implementation details and hyperparameter settings in Appendix B.1.
R2Gen [15] is a transformer-based model with ResNet101 [22] as the visual extractor. To capture recurring patterns in medical reports, R2Gen proposes a relational memory to enhance the transformer so that the model can learn from the patterns' characteristics. Furthermore, R2Gen deploys memory-driven conditional layer normalization in the transformer decoder, facilitating the incorporation of the previous generation step into the current step.
CMN [17] is a novel extension of the transformer architecture that facilitates the alignment of textual and visual modalities. The cross-modal memory network records the shared information of visual and textual features. The alignment process is carried out via memory querying and responding. The model maps the visual and textual features into the same representation space in memory querying and learns a weighted representation of these features in memory responding.
WCL [16] utilizes the R2Gen framework and incorporates a weakly supervised contrastive loss. Specifically, WCL leverages the contrastive loss to enhance the similarity between a given source image and its corresponding target sequence. Furthermore, the model enhances its ability to learn from difficult samples by assigning more weight to instances sharing common labels.
CMM+RL [25] is a cross-modal memory-based model optimized with reinforcement learning. CMM+RL designs a cross-modal memory model to align the visual and textual features and deploys reinforcement learning to capture the label imbalance between abnormality and normality. The authors use BLEU-4 as a reward to guide the model to generate the next word from the image and previous words.
RRG [11,26] aims to generate clinically correct reports by weakly-supervised learning of the entities and relations in reports. RRG is a BERT-based model with DenseNet-121 [28] as the visual extractor. RRG leverages RadGraph [23] to extract the entity and relation labels in a report and utilizes reinforcement learning to optimize the model. The reward assesses the consistency and completeness of the entity and relation sets between generated reports and reference radiology reports. RRG addresses label imbalance by maximizing the reward for predicting the more complicated entities and relations in abnormal samples.
TIMER [10] aims to decrease overfitting on frequent tokens by introducing an unlikelihood loss to penalize errors on these tokens. The token set of the unlikelihood loss is automatically adjusted by maximizing the average F1 score over tokens of different frequencies.
RGRG [27] adopts GPT-2 as the language generation model and generates a report based on the localized visual features of anatomical regions, which are extracted by an object detection module. This baseline was evaluated only on the MIMIC-CXR dataset, as the IU X-ray dataset lacks anatomical region annotations, making it impossible to train the object detection module effectively.

Imbalance Setting
We evaluate model robustness under token and label imbalance settings and present results in Sections 5.2 and 5.3. For token imbalance, we compare F1 scores of frequent and infrequent tokens separately. We introduce three different scales to define frequent token sets: 1/4, 1/6, and 1/8. The splits define the top 1/4, 1/6, and 1/8 of the vocabulary as frequent tokens and the rest of the vocabulary as infrequent tokens. This setting demonstrates the effectiveness of our approach in adapting to token imbalance. For label imbalance, we divide our samples into binary categories, normal and abnormal. In this section, we present overall performance, report results of the imbalance evaluations, and include an ablation analysis and a case study. Generally, JIMA outperforms the state-of-the-art baselines by a large margin, especially under imbalance settings. Our qualitative studies show that our method achieves higher clinical accuracy and generates clinical terms more precisely.
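The frequency splits used throughout the evaluation can be sketched as follows (names and the whitespace tokenization are illustrative; the paper's preprocessing details are in its appendix):

```python
from collections import Counter

def split_vocab(reports, frac):
    """Rank the vocabulary by corpus frequency and split it: the top
    `frac` (e.g., 1/4, 1/6, or 1/8) are the 'frequent' tokens, the rest
    'infrequent'. F1 is then computed separately on each set."""
    counts = Counter(tok for report in reports for tok in report.split())
    ranked = [tok for tok, _ in counts.most_common()]
    cut = max(1, int(len(ranked) * frac))
    return set(ranked[:cut]), set(ranked[cut:])

reports = ["the lungs are clear", "the heart is normal",
           "the lungs are normal"]
frequent, infrequent = split_vocab(reports, frac=1 / 4)
print(frequent)  # the single most frequent token in this toy corpus
```

Varying `frac` across 1/4, 1/6, and 1/8 progressively shrinks the frequent set, which is how the three evaluation scales stress-test behavior on the long tail.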

Overall Performance
Table 2 presents the performance of JIMA on NLG and clinical-correctness metrics. JIMA outperforms baseline models (both imbalance-aware and regular methods) on BLEU scores by a large margin, confirming the validity of selecting training samples with our curriculum learning method. The approach enables the model to learn multiple times from samples with lower BLEU-4, resulting in better performance compared to the baseline models. For example, JIMA shows an improvement of 16.59% on average for IU X-ray and 16.28% for MIMIC-CXR. We attribute this to Tasks 1 and 2 working jointly to mitigate the token and label imbalance problems.
Second, our model achieves the best performance on the clinical F1 metric, which indicates that Task 1 (Section 3.1) enables the model to pay more attention to difficult samples with lower F1 scores. Additionally, our method promotes clinical token prediction, as performance on infrequent tokens and medical terms improves. For example, our generation significantly outperforms the baselines on F1 score by 72.10% on IU X-ray and 31.29% on MIMIC-CXR on average. CMM+RL performs better than the other baselines on IU X-ray but not on MIMIC-CXR, whereas JIMA maintains stable performance on both datasets. We attribute this to our joint imbalance adaptation yielding more consistent improvements.

Token Imbalance
Table 3 compares F1 on high- and low-frequency tokens under different ratio splits. Our method consistently outperforms the baselines on low-frequency tokens across frequency splits (1/4, 1/6, and 1/8) on IU X-ray and MIMIC-CXR. While the RRG and CMM+RL approaches adapt to label imbalance, they may not be able to adapt to token imbalance; our approach achieves better performance under token imbalance. Generating rare tokens accurately remains difficult despite the high performance achieved on frequent tokens: common tokens are prone to overfitting, while rare tokens are predicted with less precision. For example, R2Gen scores 0.00 on the bottom 3/4 split of the MIMIC-CXR vocabulary. This performance imbalance can deteriorate the clinical correctness of generated reports, as medical terminologies are usually infrequent. Nonetheless, our joint imbalance adaptation approach shows considerable improvements in this area, indicating a promising direction for enhancing the robustness of radiology report generation, a critical clinical task.

Label Imbalance
We report NLG evaluations on label imbalance (normal vs. abnormal) in Table 4. JIMA significantly outperforms baseline models on both normal and abnormal splits, which demonstrates its effectiveness under label imbalance. JIMA also performs better than the label imbalance methods, RRG and CMM+RL, indicating that joint imbalance adaptation is a promising direction for improving model robustness. It is worth noting that models generally perform better on normal samples than on abnormal ones. We infer two reasons: 1) abnormal reports contain more infrequent medical tokens, and 2) abnormal reports are longer, as discussed in Section 2. JIMA achieves larger improvements on abnormal samples over the baselines while maintaining similar performance on samples with normal labels. These observations suggest that our approach can successfully learn from lengthier documents with more medical tokens.

Ablation Analysis
In this section, we carry out ablation experiments to analyze the impact of our curriculum learning approach on tokens and labels of different frequencies. To investigate performance across different tokens, we categorize tokens into five groups based on their frequency, with "0" representing the most frequent tokens and "4" representing the least frequent tokens. Each group contains an equal number of tokens. To compare performance across different labels, we present their performance individually. We conduct our ablation experiments on the MIMIC-CXR dataset, and the results are depicted in Figure 4. First, we notice that removing curriculum learning does not have a significant detrimental impact on highly frequent tokens and labels. For instance, the performance is comparable in the "0" token group and the "0-5" label group. Curriculum learning empowers the model to allocate increased attention to challenging samples, thereby reducing the prediction probability on highly frequent samples. However, our curriculum learning selects training samples based on the ranking of the correct answers. Therefore, despite the reduced probability of the correct answer, the ranking remains unchanged (for example, the correct option still holds the highest estimate). As a result, our curriculum learning approach does not diminish performance on highly frequent samples. Next, our curriculum learning approach significantly enhances performance primarily on moderately frequent samples. The average improvement amounts to 6.49% in the "1-3" token groups and 2.58% in the "6-10" label groups. However, our method exhibits limitations in enhancing the performance on exceedingly rare tokens; notably, the model struggles to predict tokens in the "4" group.

Human Evaluation
To verify factual correctness, we invited two health professionals to perform an evaluation. First, we randomly selected 50 test instances each from IU X-ray and MIMIC-CXR. We chose CMM+RL as the comparison target, as it is the best-performing baseline by automatic metrics. In the evaluation, we showed the X-ray images, the corresponding ground-truth reports, and two generated reports (one from our model and the other from CMM+RL) to the experts without disclosing their sources. The experts selected the better description from the two candidate reports or chose the "Same" option if both reports were of similar quality.
We present our human evaluation results in Table 5. Overall, JIMA outperforms the baseline by 11 votes in total. Notably, our approach exhibits significant improvements on abnormal samples: even though JIMA has only one more vote than the baseline on normal samples, our model secures ten more votes on abnormal samples. Because abnormal samples have lengthier reports on average and encompass more medical entities, this indicates that our approach generates more clinically precise reports. Furthermore, our human evaluation is consistent with the automated evaluation results shown in Table 2.

Case Study
To verify our model's effectiveness in generating clinically correct descriptions, we perform a case study and present the results in Fig. 5. We select four samples from the IU X-ray and MIMIC-CXR datasets and compare performance on normal and abnormal samples separately. Correct pathological and anatomical entity predictions are marked in blue. Overall, our predictions cover more than 90% of the entities in the reference reports. Compared to normal samples, abnormal samples have longer descriptions and contain more complex entities. These entities are usually rare in the corpus and suffer from underfitting; therefore, models underperform on abnormal samples. However, JIMA captures most of the entities in all kinds of samples and achieves similar performance on both normal and abnormal samples, which demonstrates our model's effectiveness in improving the factual completeness and correctness of generated radiology reports.

Related Work
Radiology report generation is a domain-specific image-to-text task with two major directions: retrieval-based [29,30] and generation-based [15,25,31]. The retrieval-based approach compares similarities between an input radiology image and a set of report candidates, ranks the candidates, and returns the most similar one [5,26,29,32]. In contrast, our study focuses on the generation-based task, which automatically generates a precise report from an input image. The task has domain-specific characteristics in the clinical field: the data contains many infrequent medical terminologies and longer documents than general-domain image captioning [6]. As radiology report generation can reduce the workloads of radiologists, generating high-quality and precise reports is a critical challenge, especially under imbalance settings. Differing from previous work, we aim to promote model robustness and reliability under imbalance settings, which have rarely been studied in radiology report generation.
Imbalance learning aims to model skewed data distributions. Its primary focus is on class or label imbalance, such as positive or negative reviews in sentiment analysis [33]. While previous studies proposed new objective functions (e.g., focal loss [34]) or oversampling [35], those methods may not be applicable to our primary generation unit, the token, which has a large vocabulary size and extreme sparsity. In radiology report generation, reports may carry disease-related labels.
Recent studies have augmented model robustness by balancing performance between disease and normal cases via reinforcement learning [7,8]. However, they ignore a fundamental challenge of the generation task: token imbalance, a long-tail distribution. Token imbalance can be even more critical in the clinical domain, as medical tokens appear less frequently than regular tokens in radiology reports. Our study makes a unique contribution to radiology report generation by jointly considering multiple imbalance factors via curriculum learning.

Conclusion
In this study, we have demonstrated the critical imbalance challenge and developed a curriculum learning-based model to jointly adapt to label and token imbalance. Extensive experiments, ablation analysis, and human evaluations show that JIMA delivers significant improvements over the existing state-of-the-art baselines, especially in handling token and label imbalance. Our future work will examine the proposed approach on more imbalance factors (e.g., demography).

Limitations
Limitations should be acknowledged before interpreting this study. Evaluation. We are aware of other evaluation metrics, such as RadGraph [23] and CheXpert [36]. However, additional metrics may only apply to MIMIC-CXR or overlap with our existing methods, such as CheXpert and CheXbert [24]. We have included diverse metrics, including NLG, clinical correctness, and human evaluations. To keep consistency with our state-of-the-art baselines, we utilize a similar evaluation schema. The consistent observations between our human and automatic evaluations further support our evaluation validity.
• Data availability IU X-ray is available at https://openi.nlm.nih.gov/faq#collection. These datasets are publicly available to researchers subject to the respective data use agreements and ethical guidelines. Any additional data generated and analyzed during the current study are available from the corresponding author on reasonable request.
• Code availability The code that implements the method described in this study is publicly available at https://github.com/woqingdoua/JIMA. This repository includes all necessary scripts, tools, and documentation to replicate the results and analyses presented in this paper.
• Author contribution Y.W. designed the methodology, performed data collection and formal analysis, prepared materials, and wrote the original draft. I-C.H. supervised the analysis and reviewed and edited the manuscript. X.H. conceived and designed the study (conceptualization and methodology), performed data and formal analysis, prepared materials, wrote the original draft, and reviewed the manuscript. All authors reviewed the manuscript.

Fig. 1
Fig. 1 State-of-the-art model performance on normal and abnormal entries by BLEU-4 (left two) and low- and high-frequency tokens by F1 scores (right two).

Fig. 2
Fig. 2 Frequent and infrequent token distributions conditioned on report label.

Algorithm 1 Optimization Process of JIMA
Require: learning rates α, β
1: for each epoch do
2:    Rank entries by the two difficulty measurers (k_task1 and k_task2) and obtain two sorted datasets D_1, D_2
3:    Calculate the training schedulers c(k_task1_t) and c(k_task2_t)
4:    Select the top c(k_task1_t) samples from the sorted dataset D_1 obtained in step 2 as the Task 1 training set
5:    Select the top c(k_task2_t) samples from the sorted dataset D_2 obtained in step 2 as the Task 2 training set
6:    Sample a batch from D_1 and update Task 1: f_R ← f_R − α∇_{f_R} L_task1, f_E ← f_E − α∇_{f_E} L_task1
7:    Sample a batch from D_2 and update Task 2: f_T ← f_T − β∇_{f_T} L_task2
8: end for

Fig. 4
Fig. 4 Performance comparison of JIMA with and without curriculum learning across various labels and tokens.

Table 1
Data statistics summary. Variations exist in labels (Normal and Abnormal %) and average report length (L).

Table 2
Overall performance. ∆ is the averaged percentage improvement over baselines.

Table 3
Results on high-and low-frequent tokens with three ratio splits.

Table 4
Label imbalance evaluation with binary types, normal and abnormal.

Table 5
Human evaluation. "Same" means the experts voted the same for both generated reports.

Fig. 5
Qualitative comparison between JIMA and CMM+RL. We highlight correct predictions of pathological and anatomical entities in blue.