Unified Medical Image Pre-training in Language-Guided Common Semantic Space

Xiaoxuan He, Yifan Yang, Xinyang Jiang, Xufang Luo*, Haoji Hu, Siyun Zhao, Dongsheng Li, Yuqing Yang, Lili Qiu

Abstract


"Vision-Language Pre-training (VLP) has shown the merits of analysing medical images. It efficiently learns visual representations by leveraging supervisions in their corresponding reports, and in turn facilitates analysis and interpretation of intricate imaging data. However, such observation is predominantly justified on single-modality data (mostly 2D images like X-rays), adapting VLP to learning unified representations for medical images in real scenario remains an open challenge. This arises from medical images often encompass a variety of modalities, especially modalities with different dimensions (e.g., 3D images like Computed Tomography), and there are almost no paired multi-dimension data here. To overcome the aforementioned challenges, we propose an Unified Medical Image Pre-training framework, namely , which utilizes diagnostic reports as common semantic space to create unified representations for diverse modalities of medical images (especially for 2D and 3D images). Under the text’s guidance, effectively select text-related 2D slices from sophisticated 3D volume, which acts as pseudo-pairs to bridge 2D and 3D data, ultimately enhancing the consistency across various medical imaging modalities. To demonstrate the effectiveness and versatility of , we evaluate its performance on both 2D and 3D images across several different datasets, covering a wide range of medical image tasks such as classification, segmentation, and retrieval. has demonstrated superior performance in downstream tasks, showcasing its effectiveness in establishing a universal medical visual representation."
