Purpose
To develop a foundational pretraining method for digital mammography that extracts fine-grained visual-language representations from images and reports in label-limited settings.

Materials and Methods
A multiview mammogram-report pretraining framework for automated breast cancer analysis was developed using retrospectively collected data from January 2010 to December 2020. The framework provides visual explanations of the model's learning, allowing researchers to "visualize what you learn." The abnormality-aware technique was tailored to the mammographic characteristics of dense fibroglandular tissue. The proposed framework was evaluated on downstream tasks from four external medical centers involving label-efficient abnormality recognition in mammograms, including malignancy classification, segmentation, and localization. Statistical analyses were performed using the DeLong test and the paired t test for area under the receiver operating characteristic curve (AUC) and Dice scores, respectively.

Results
The visualization results, including abnormality-enhanced mammograms and abnormality-awareness maps, indicated that the developed model successfully captured relationships between multiview mammograms and the corresponding reports. This reduced false-positive findings for breast cancer by 37% and enabled zero-shot abnormality segmentation. Furthermore, the developed model consistently outperformed existing approaches after fine-tuning for both malignancy classification (AUC, INbreast: 0.90 vs 0.78 [P < .001]; Curated Breast Imaging Subset of Digital Database for Screening Mammography [CBIS-DDSM]: 0.85 vs 0.79 [P < .01]; Chinese Mammography Database: 0.85 vs 0.78 [P < .001]; and Cohort of Screen-age Women-Case Control: 0.86 vs 0.77 [P < .001]) and for segmentation and localization (Dice score, INbreast: 0.75 vs 0.63 [P < .001]; CBIS-DDSM: 0.76 vs 0.61 [P < .001]).
Conclusion
The proposed framework enhances interpretability and fine-grained multimodal foundational learning for multiview mammograms and reports.

Keywords: Mammography, Breast, Segmentation, Feature Detection, Quantification, Diagnosis, Translation, Transfer Learning, Unsupervised Learning, Breast Cancer, Representation Learning, Visual-Language Foundation Model, Explainable AI

Supplemental material is available for this article.

© RSNA, 2025.
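For readers unfamiliar with the segmentation metric and statistical comparison named in the abstract, the following is a minimal illustrative sketch of the Dice overlap score and a paired t test on per-case Dice values. The masks and score arrays below are hypothetical toy values, not data from the study, and this is not the authors' code.

```python
import numpy as np
from scipy import stats

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 binary masks standing in for a predicted and a reference lesion segmentation.
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]])
print(round(dice_score(pred, target), 3))  # → 0.923

# Paired t test on hypothetical per-case Dice scores from two models,
# mirroring how segmentation performance differences can be tested.
dice_model_a = np.array([0.78, 0.71, 0.80, 0.74, 0.69])
dice_model_b = np.array([0.62, 0.60, 0.66, 0.59, 0.64])
t_stat, p_value = stats.ttest_rel(dice_model_a, dice_model_b)
```

The paired test is appropriate here because both models are scored on the same cases, so per-case differences, rather than pooled distributions, are what carry the comparison.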