Record 217 of
Title: RGB-guided hyperspectral image super-resolution with deep progressive learning
Author(s): Zhang, Tao; Fu, Ying; Huang, Liwei; Li, Siyuan; You, Shaodi; Yan, Chenggang
Source: CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY  Volume: 9  Issue: 3  DOI: 10.1049/cit2.12256  Published: 2024
Abstract: Due to hardware limitations, existing hyperspectral (HS) cameras often suffer from low spatial/temporal resolution. Recently, it has become prevalent to super-resolve a low resolution (LR) HS image into a high resolution (HR) HS image with the guidance of an HR RGB (or multispectral) image. Previous approaches to this guided super-resolution task often model the intrinsic characteristics of the desired HR HS image using hand-crafted priors. More recently, researchers have paid increasing attention to deep learning methods with direct supervised or unsupervised learning, which exploit a deep prior only from the training dataset or the testing data. In this article, an efficient convolutional neural network-based method is presented to progressively super-resolve an HS image with RGB image guidance. Specifically, a progressive HS image super-resolution network is proposed, which progressively super-resolves the LR HS image with pixel-shuffled HR RGB image guidance. The super-resolution network is then trained progressively with supervised pre-training and unsupervised adaptation, where supervised pre-training learns a general prior on the training data and unsupervised adaptation specialises the general prior into a specific prior for varying testing scenes. The proposed method can effectively exploit priors from the training dataset and the testing HS and RGB images with a spectral-spatial constraint. It has good generalisation capability, especially for blind HS image super-resolution. Comprehensive experimental results show that the proposed deep progressive learning method outperforms existing state-of-the-art methods for HS image super-resolution in both non-blind and blind cases.
Accession Number:
ISSN: 2468-6557  eISSN: 2468-2322
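As a reading aid (not part of the record), the following is a minimal PyTorch-style sketch of the idea the abstract describes: the LR HS image is upsampled stage by stage with PixelShuffle, and at each stage the HR RGB image is packed down to the current resolution with pixel-unshuffle and fused in as guidance. The module names, channel widths, and number of stages are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedStage(nn.Module):
    """One 2x stage: fuse pixel-unshuffled RGB guidance, then upsample via PixelShuffle."""
    def __init__(self, hs_channels, guide_factor, feat=64):
        super().__init__()
        guide_channels = 3 * guide_factor * guide_factor   # RGB packed into channels
        self.guide_factor = guide_factor
        self.fuse = nn.Conv2d(hs_channels + guide_channels, feat * 4, 3, padding=1)
        self.up = nn.PixelShuffle(2)                        # feat*4 -> feat, 2x spatial
        self.project = nn.Conv2d(feat, hs_channels, 3, padding=1)

    def forward(self, hs, rgb_hr):
        # pack the HR RGB image down to the current HS resolution (space-to-depth)
        guide = F.pixel_unshuffle(rgb_hr, self.guide_factor)
        x = torch.cat([hs, guide], dim=1)
        x = self.project(torch.relu(self.up(self.fuse(x))))
        # bilinear skip connection keeps the output anchored to the LR spectra (illustrative choice)
        return x + F.interpolate(hs, scale_factor=2, mode='bilinear', align_corners=False)

class ProgressiveSRNet(nn.Module):
    """Progressively super-resolve an LR HS image by 2**n_stages with RGB guidance."""
    def __init__(self, hs_channels=31, n_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(
            GuidedStage(hs_channels, guide_factor=2 ** (n_stages - i))
            for i in range(n_stages))

    def forward(self, hs_lr, rgb_hr):
        out = hs_lr
        for stage in self.stages:
            out = stage(out, rgb_hr)
        return out

# Shape check: a 16x16 LR HS cube and its 128x128 RGB view yield a 128x128 HS output.
net = ProgressiveSRNet(hs_channels=31, n_stages=3)
sr = net(torch.randn(1, 31, 16, 16), torch.randn(1, 3, 128, 128))
print(sr.shape)  # torch.Size([1, 31, 128, 128])
```

In the abstract's two-step training scheme, such a network would first be pre-trained on paired data (general prior) and then fine-tuned on the testing HS/RGB pair with a spectral-spatial consistency loss (scene-specific prior).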
Record 218 of
Title: Detecting the Background-Similar Objects in Complex Transportation Scenes
Author(s): Sun, Bangyong; Ma, Ming; Yuan, Nianzeng; Li, Junhuai; Yu, Tao
Source: IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS  Volume: 25  Issue: 3  DOI: 10.1109/TITS.2023.3268378  Published: 2024
Abstract: With the development of intelligent transportation systems, most human objects can be accurately detected in normal road scenes. However, detection accuracy usually decreases sharply when pedestrians merge into a background with very similar colors or textures. In this paper, a camouflaged object detection method is proposed to detect pedestrians or vehicles against a highly similar background. Specifically, we design a guide-learning-based multi-scale detection network (GLNet) to capture the weak semantic distinction between a pedestrian and its similar background, and output an accurate segmentation map to the autonomous driving system. The proposed GLNet mainly consists of a backbone network for basic feature extraction, a guide-learning module (GLM) to generate the principal prediction map, and a multi-scale feature enhancement module (MFEM) for prediction map refinement. Based on guide learning and a coarse-to-fine strategy, the proposed GLNet produces a final prediction map that precisely describes the position and contour information of the pedestrians or vehicles. Extensive experiments on four benchmark datasets, namely CHAMELEON, CAMO, COD10K, and NC4K, demonstrate the superiority of the proposed GLNet compared with several existing state-of-the-art methods.
Accession Number:
ISSN: 1524-9050  eISSN: 1558-0016
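Again as a reading aid (not the authors' code), a minimal PyTorch-style sketch of the coarse-to-fine idea described above: a guide head produces the principal prediction from the deepest backbone features, and generic refinement blocks stand in for the multi-scale feature enhancement, upsampling and correcting that prediction with finer features. The backbone is omitted, and all module names, channel widths, and scales are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuideHead(nn.Module):
    """Coarse ('principal') prediction from the deepest backbone features."""
    def __init__(self, in_ch):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1))

    def forward(self, x):
        return self.head(x)

class RefineBlock(nn.Module):
    """Refine the prediction map with finer-scale features (coarse-to-fine)."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + 1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1))

    def forward(self, feat, coarse):
        # upsample the coarse map to this scale and predict a residual correction
        coarse_up = F.interpolate(coarse, size=feat.shape[-2:], mode='bilinear',
                                  align_corners=False)
        return coarse_up + self.conv(torch.cat([feat, coarse_up], dim=1))

class CoarseToFineSegNet(nn.Module):
    """Guide-style coarse prediction followed by multi-scale refinement."""
    def __init__(self, channels=(256, 128, 64)):
        super().__init__()
        self.guide = GuideHead(channels[0])
        self.refine = nn.ModuleList(RefineBlock(c) for c in channels[1:])

    def forward(self, feats):
        # feats: backbone feature maps ordered deepest (lowest resolution) first
        pred = self.guide(feats[0])
        for feat, block in zip(feats[1:], self.refine):
            pred = block(feat, pred)
        return torch.sigmoid(pred)

# Shape check with dummy multi-scale features (e.g. strides 16, 8, 4 of a 320x320 image).
feats = [torch.randn(1, 256, 20, 20), torch.randn(1, 128, 40, 40), torch.randn(1, 64, 80, 80)]
mask = CoarseToFineSegNet()(feats)
print(mask.shape)  # torch.Size([1, 1, 80, 80])
```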