TCGM: An Information-Theoretic Framework for Semi-Supervised Multi-Modality Learning
Xinwei Sun, Yilun Xu, Peng Cao, Yuqing Kong, Lingjing Hu, Shanghang Zhang, Yizhou Wang
Abstract
Fusing data from multiple modalities provides more information to train machine learning systems. However, it is prohibitively expensive and time-consuming to label each modality with a large amount of data, which makes semi-supervised multi-modal learning a crucial problem. Existing methods suffer from either ineffective fusion across modalities or a lack of theoretical guarantees under proper assumptions. In this paper, we propose a novel information-theoretic approach, namely Total Correlation Gain Maximization (TCGM), for semi-supervised multi-modal learning, which has two promising properties: (i) it effectively utilizes the information across different modalities of unlabeled data points to facilitate training the classifier of each modality, and (ii) it has a theoretical guarantee of identifying Bayesian classifiers, i.e., the ground-truth posteriors of all modalities. Specifically, by maximizing the TC-induced loss (namely the TC gain) over the classifiers of all modalities, these classifiers can cooperatively discover the equivalence class of the ground-truth classifiers, and then identify the unique ones by leveraging a limited percentage of labeled data. We apply our method to various datasets and achieve state-of-the-art results, including the Newsgroup dataset, emotion recognition (IEMOCAP and MOSI), and medical imaging (Alzheimer's Disease Neuroimaging Initiative).
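The precise TC-gain objective is defined in the paper itself; for intuition only, the following PyTorch sketch shows a hypothetical, simplified surrogate for the two-modality case, not the authors' implementation. It rewards the two classifiers for agreeing on the same unlabeled sample more than on randomly mismatched samples, the kind of signal a total-correlation-style objective provides over the shared latent label.

# Hypothetical sketch (assumption, not the paper's exact TC gain): a contrastive
# surrogate for the total correlation between two modalities' predicted labels.
import torch
import torch.nn.functional as F

def tc_gain_surrogate(logits_a, logits_b):
    """logits_a, logits_b: (batch, num_classes) outputs of the two modality
    classifiers on the same batch of unlabeled points."""
    p_a = F.softmax(logits_a, dim=1)  # predicted posteriors, modality A
    p_b = F.softmax(logits_b, dim=1)  # predicted posteriors, modality B

    # "Joint" term: log-probability that both classifiers predict the same
    # class for the same data point.
    joint = (p_a * p_b).sum(dim=1).clamp_min(1e-8).log().mean()

    # "Marginal" term: the same quantity for mismatched pairs, approximated
    # by pairing each sample with a random permutation of the batch.
    perm = torch.randperm(p_b.size(0))
    marginal = (p_a * p_b[perm]).sum(dim=1).clamp_min(1e-8).log().mean()

    # Maximizing (joint - marginal) pushes the classifiers to agree only
    # through the shared underlying label, a total-correlation-style signal.
    return joint - marginal

In practice this quantity would be maximized over unlabeled batches alongside a standard supervised loss on the limited labeled data, mirroring the semi-supervised setup described in the abstract.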
"