Title: Weakly Paired Multimodal Fusion for Object Recognition
Authors: Liu, HP; Wu, YP; Sun, FC; Fang, B; Guo, D
Author Full Names: Liu, Huaping; Wu, Yupei; Sun, Fuchun; Fang, Bin; Guo, Di
Source: IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 15 (2):784-795; 10.1109/TASE.2017.2692271 APR 2018
Language: English
Abstract: The ever-growing development of sensor technology has led to the use of multimodal sensors in robotics and automation systems. There is therefore a strong need for methodologies that integrate information from multimodal sensors to improve the performance of surveillance, diagnosis, prediction, and related tasks. However, real multimodal data often exhibit significant weak-pairing characteristics, i.e., the sample-to-sample pairing between modalities may be unknown, while only the pairing between a group of samples from one modality and a group of samples from another modality is known. In this paper, we establish a novel projective dictionary learning framework for weakly paired multimodal data fusion. By introducing a latent pairing matrix, we perform dictionary learning and pairing-matrix estimation simultaneously, and thereby improve the fusion effect. In addition, the kernelized version and the optimization algorithms are also addressed. Extensive experimental validations on existing data sets demonstrate the advantages of the proposed method. Note to Practitioners: In many industrial environments, we usually use multiple heterogeneous sensors, which provide multimodal information. Such multimodal data pose two technical challenges. First, different sensors may provide different patterns of data. Second, the full-pairing information between modalities may not be known. In this paper, we develop a unified model to tackle both problems. The model is based on a projective dictionary learning method, which efficiently produces the representation vector for the original data in an explicit form. In addition, the latent pairing relation between samples can be learned automatically and used to improve classification performance. The method can be flexibly applied to multimodal fusion in the full-pairing, partial-pairing, and weak-pairing cases.
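The two ingredients described above can be illustrated with a minimal NumPy sketch. It is not the paper's full weakly paired objective: `fit_projective_dictionary` alternates closed-form updates for a synthesis dictionary `D` and an analysis (projective) dictionary `P`, so codes come from the explicit form `A = P @ X` rather than iterative sparse coding; `estimate_pairing` stands in for the latent pairing-matrix estimation with a simple greedy matching of code columns by cosine similarity. All function names, the ridge regularizer `lam`, and the greedy matcher are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def fit_projective_dictionary(X, k, lam=1e-2, iters=30, seed=0):
    """Toy projective dictionary learning on data X (features x samples).

    Alternates three ridge-regularized closed-form updates for the codes A,
    the analysis dictionary P, and the synthesis dictionary D, loosely
    following the usual relaxed objective ||X - D A||^2 with A coupled to
    P X. This is an illustrative sketch, not the paper's method.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, k))
    P = rng.standard_normal((k, d))
    for _ in range(iters):
        # Coding step: ridge regression of X on the current atoms of D.
        A = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ X)
        # Analysis step: P regresses the codes from the raw data, so that
        # at test time coding is the explicit linear map A = P @ X.
        P = A @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))
        # Synthesis step: least-squares dictionary given the codes.
        D = X @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(k))
    return D, P

def estimate_pairing(A1, A2):
    """Greedy one-to-one matching of code columns across two modalities.

    A crude stand-in for estimating the latent pairing matrix: columns are
    matched by cosine similarity, most confident rows first.
    """
    S = (A1 / np.linalg.norm(A1, axis=0)).T @ (A2 / np.linalg.norm(A2, axis=0))
    pairing = -np.ones(A1.shape[1], dtype=int)
    used = set()
    for i in np.argsort(-S.max(axis=1)):  # most confident matches first
        j = max((c for c in range(S.shape[1]) if c not in used),
                key=lambda c: S[i, c])
        pairing[i] = j
        used.add(j)
    return pairing
```

Because coding is a single matrix product, test-time representation is far cheaper than solving a sparse-coding problem per sample, which is the practical appeal of the projective (analysis) form noted in the abstract.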
ISSN: 1545-5955
eISSN: 1558-3783
IDS Number: GB6XH
Unique ID: WOS:000429217900030