2024

  • Record 133 of

    Title:Design of Optical-mechanical System of Catadioptric Aerial Mapping Camera Based on Secondary Mirror Image Motion Compensation
    Author(s):Zhang, Hongwei(1); Qu, Rui(1); Chen, Weining(1); Yang, Hongtao(1)
    Source: Guangzi Xuebao/Acta Photonica Sinica  Volume: 53  Issue: 2  DOI: 10.3788/gzxb20245302.0222001  Published: February 2024  
    Abstract:Aerial surveying and mapping is an important technical means of civil/military surveying and mapping, which can quickly obtain large-scale, high-precision mapping of a target area in a short period of time and accurately obtain the planimetric coordinate and elevation information of targets on the map. The acquired information plays an important supporting role in digital city construction, land resources survey, military strategic planning, etc. With the development of aerial surveying and mapping technology, the requirements for aerial mapping cameras have been further raised: they are expected to achieve wide-swath, high-precision, large-scale mapping. To meet these requirements, the aerial mapping camera adopts a scan imaging mode, but this imaging mechanism introduces forward/scan image motion, which degrades image quality. To satisfy the image stabilization accuracy of the aerial mapping camera, the image motion must be compensated. Therefore, a catadioptric aerial mapping camera based on secondary mirror image motion compensation is designed in this paper. Aiming at the dynamic image motion of the aerial mapping camera during ground swing imaging, the vector aberration theory for two-mirror telescopic systems is adopted. The secondary mirror is used as the image motion compensation element, and comprehensive image motion compensation of the aerial mapping camera is realized through multi-dimensional motion of the secondary mirror. However, in the process of compensating the image motion, the secondary mirror becomes decentered and tilted, which drives it off-axis and affects image quality. Therefore, a misaligned optical system model is established to study the relationship between the aberration field deviation vector and the misalignment of the secondary mirror, and the influence of the secondary mirror motion on image quality is analyzed. Meanwhile, a design example of the optical-mechanical system of the catadioptric aerial mapping camera based on secondary mirror image motion compensation is given. The effective focal length of the optical system is 450 mm, the working spectral range is 435~900 nm, the field of view is 4.17×3.13, and the F-number is 4.2. In the design process, the optical-mechanical system of the aerial mapping camera adopts an athermal design to adapt to a working environment of −40 ℃~60 ℃. To verify the image motion compensation capability of the multi-dimensional motion of the optical element, an experimental platform is built to conduct laboratory and field imaging tests on the aerial mapping camera. The laboratory imaging tests show that the dynamic resolution of the aerial mapping camera using the image motion compensation technology reaches 74 lp/mm, and the image motion compensation accuracy is better than 0.5 pixels, which meets the design expectation. In addition, the field imaging tests show that, compared with operation with the image motion compensation function disabled, the aerial mapping camera with image motion compensation enabled acquires images with sharp edges and clear detail, and the image quality meets the expected requirements. Therefore, the camera has the advantages of high image motion compensation accuracy, compact volume, and high reliability, which lays a theoretical foundation for lightweight, compact, high-precision, and large-scale mapping cameras.
© 2024 Chinese Optical Society. All rights reserved.
    Accession Number: 20240715561717
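    The abstract above quotes a 450 mm focal length and a compensation accuracy of better than 0.5 pixels. As a minimal numerical sketch (not the paper's secondary-mirror/vector-aberration model), the generic nadir-imaging relation v_img = f·(V/H) gives the forward image motion that must be compensated during one exposure; the ground speed, altitude, exposure time, and pixel pitch below are illustrative assumptions.

```python
# Back-of-the-envelope forward image motion check for an aerial camera in nadir
# imaging, using the generic relation v_img = f * (V / H). Only the focal length
# (450 mm) comes from the abstract; the remaining values are assumed.

f = 0.450             # effective focal length [m], from the abstract
V = 150.0             # aircraft ground speed [m/s] (assumed)
H = 5000.0            # flight altitude above ground [m] (assumed)
t_exp = 2e-3          # exposure time [s] (assumed)
pixel_pitch = 3.2e-6  # detector pixel pitch [m] (assumed)

v_img = f * V / H                        # image motion velocity at the focal plane [m/s]
blur_px = v_img * t_exp / pixel_pitch    # uncompensated blur per exposure [pixels]

print(f"image motion velocity: {v_img * 1e3:.2f} mm/s")
print(f"uncompensated blur per exposure: {blur_px:.1f} px")
# If blur_px exceeds ~0.5 px (the accuracy quoted in the abstract), active image
# motion compensation is required during the exposure.
```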
  • Record 134 of

    Title:Blind deep-learning based preprocessing method for Fourier ptychographic microscopy
    Author(s):Wu, Kai(1,2); Pan, An(1); Sun, Zhonghan(1); Shi, Yinxia(1,2); Gao, Wei(1)
    Source: Optics and Laser Technology  Volume: 169  Issue:   DOI: 10.1016/j.optlastec.2023.110140  Published: February 2024  
    Abstract:Fourier ptychographic microscopy (FPM) is a technique for tackling the trade-off between the resolution and the imaging field of view by combining techniques from aperture synthesis and phase retrieval to estimate the complex object from a series of low-resolution intensity images captured under angle-varied illumination. The captured images are commonly corrupted by multiple sources of noise, leading to degradation of the reconstructed image quality. Generally, the noise model and noise level of the experimental images are unknown, and traditional image denoising methods have limited effect. In this paper, we model the FPM forward imaging process corrupted by noise and divide the noise in the captured images into two parts: the signal-dependent part and the signal-independent part. Based on this noise model, we propose a novel blind deep-learning based Fourier ptychographic microscopy preprocessing method, termed BDFP, for removing these two components of noise. First, from a portion of the captured low-resolution images, a set of blocks corresponding to the smooth areas of the object are extracted to model the signal-independent noise. Second, under the assumption that the signal-dependent noise follows a Poisson distribution, we add Poisson noise and signal-independent noise blocks to clean images to form a paired training dataset, which is then used to train a deep convolutional neural network (CNN) model to reduce both signal-dependent and signal-independent noise. The proposed blind preprocessing method, combined with typical FPM reconstruction algorithms, is tested on simulated data and experimental images. Experimental results show that our preprocessing method can significantly reduce the noise in the captured images and bring about effective improvements in reconstructed image quality. © 2023 Elsevier Ltd
    Accession Number: 20234014830596
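    A minimal sketch of the paired-training-data construction outlined in the abstract above: noise blocks are cropped from smooth regions of captured low-resolution images to model the signal-independent part, and Poisson noise plus such a block is added to clean images to form noisy/clean pairs. The block size, smoothness (variance) threshold, and Poisson peak level are illustrative assumptions, not values from the paper.

```python
import numpy as np

def extract_noise_blocks(captured, block=32, var_thresh=1e-4, n_blocks=64,
                         max_tries=10000, rng=None):
    """Crop patches from low-variance (smooth) regions of a captured LR image
    and subtract their local mean, leaving approximately signal-independent
    noise. Block size and variance threshold are illustrative choices."""
    rng = rng or np.random.default_rng()
    h, w = captured.shape
    blocks = []
    for _ in range(max_tries):
        if len(blocks) >= n_blocks:
            break
        y = rng.integers(0, h - block)
        x = rng.integers(0, w - block)
        patch = captured[y:y + block, x:x + block]
        if patch.var() < var_thresh:          # smooth area -> mostly noise
            blocks.append(patch - patch.mean())
    return np.stack(blocks)

def make_training_pair(clean, noise_blocks, peak=200.0, rng=None):
    """Synthesize a noisy/clean pair: Poisson (signal-dependent) noise plus a
    randomly chosen extracted block (signal-independent) tiled over the image."""
    rng = rng or np.random.default_rng()
    noisy = rng.poisson(np.clip(clean, 0, None) * peak) / peak   # signal-dependent part
    blk = noise_blocks[rng.integers(len(noise_blocks))]
    reps = (clean.shape[0] // blk.shape[0] + 1, clean.shape[1] // blk.shape[1] + 1)
    tiled = np.tile(blk, reps)[:clean.shape[0], :clean.shape[1]]
    return (noisy + tiled).astype(np.float32), clean.astype(np.float32)  # add signal-independent part
```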
  • Record 135 of

    Title:Fourier ptychographic reconstruction with denoising diffusion probabilistic models
    Author(s):Wu, Kai(1,2); Pan, An(1); Gao, Wei(1)
    Source: Optics and Laser Technology  Volume: 176  Issue:   DOI: 10.1016/j.optlastec.2024.111016  Published: September 2024  
    Abstract:Fourier ptychographic microscopy (FPM) is a promising computational imaging technique that can bypass the diffraction limit of the objective lens and achieve high-resolution, wide field-of-view imaging. An FPM setup first captures a series of low-resolution (LR) intensity images under angle-varied illumination, and reconstruction algorithms then recover the high-resolution (HR) complex-valued object from the LR measurements. The image acquisition process commonly introduces noise, ultimately leading to degradation in the quality of the reconstruction results. In this paper, we report a noise-robust Fourier ptychographic reconstruction method that generates the HR complex-valued object estimation using image priors specified by denoising diffusion probabilistic models (DDPM). First, the initial estimation of the HR complex-valued object is matched with an intermediate state in the Markov chain defined by DDPM. Then, the noisy initial solution is iteratively updated to a high-quality reconstruction result in the reverse process of DDPM, and gradient descent correction is incorporated to enforce data consistency with the LR measurements. The proposed method integrates DDPM-specified image priors and gradient descent correction, achieving solutions with fewer noise-related artifacts and high fidelity for HR complex-valued object estimation in Fourier ptychographic reconstruction. We apply the proposed method to both synthetic and real captured data. The experimental results show that our method can efficiently suppress the impact of noise and improve the quality of the reconstruction results. © 2024 Elsevier Ltd
    Accession Number: 20241715963426
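    A schematic sketch of the update described in the abstract above: one DDPM reverse (denoising) step followed by a gradient-descent data-consistency correction against a measured low-resolution amplitude. The denoiser `eps_model`, the pupil mask, the noise schedule, and the step size are placeholders; this illustrates the procedure, not the authors' implementation.

```python
import torch

def reverse_step_with_data_consistency(x_t, t, eps_model, alphas, alphas_bar,
                                        pupil_mask, y_meas, dc_lr=0.1):
    """One DDPM reverse step plus a data-consistency gradient step on the
    measured LR amplitude. x_t stores the complex object as a real tensor of
    shape (H, W, 2); eps_model, pupil_mask, y_meas and all hyper-parameters are
    assumed placeholders."""
    a_t, ab_t = alphas[t], alphas_bar[t]

    # DDPM reverse (ancestral sampling) step: x_{t-1} from x_t.
    eps = eps_model(x_t, t)
    mean = (x_t - (1.0 - a_t) / torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(a_t)
    x_prev = mean + torch.sqrt(1.0 - a_t) * torch.randn_like(x_t) if t > 0 else mean

    # Gradient-descent correction enforcing consistency with the LR amplitude.
    x_prev = x_prev.detach().requires_grad_(True)
    obj = torch.view_as_complex(x_prev.contiguous())          # (H, W) complex field
    lr_field = torch.fft.ifft2(pupil_mask * torch.fft.fft2(obj))
    loss = torch.sum((lr_field.abs() - y_meas) ** 2)
    loss.backward()
    with torch.no_grad():
        x_prev = x_prev - dc_lr * x_prev.grad
    return x_prev.detach()
```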
  • Record 136 of

    Title:Dual Optical Target Recognition Method for Collimated Images Based on BLOB Region and Edge Feature Analysis
    Author(s):He, Wenxuan(1,2); Wang, Zhengzhou(1); Wei, Jitong(1); Wang, Li(1); Yi, Dongchi(1)
    Source: Guangzi Xuebao/Acta Photonica Sinica  Volume: 53  Issue: 2  DOI: 10.3788/gzxb20245302.0210001  Published: February 2024  
    Abstract:In order to solve the problem that the collimated target recognition algorithm for optical path docking cannot distinguish the adhesion state of double targets, a new method of dual optical target recognition in collimated images based on Binary Large Object (BLOB) region feature analysis is proposed. There are two optical targets in the optical alignment image, namely the simulated optical target and the main laser target. In the initial beam control stage, the positions of the two optical targets are random and uncertain, and the two optical targets may stick together, which causes great difficulties for beam control. Therefore, optical path alignment needs to solve the image recognition problem in two cases: 1) In the initial beam control stage, when the main laser beam and the analog beam are just introduced, an adhesion recognition algorithm is needed to identify the adhesion state of the two optical targets; if the two optical targets are in the adhesion state, they need to be completely separated by adjusting the 2D frame BM6XY motor. 2) When the two optical targets are completely separated, it is necessary to distinguish the analog light target from the main laser target. Firstly, the binary image is processed by digital morphology to calculate the area, center Cxy, axis length lenxy, and region Reginxy of each BLOB region in the whole image. Secondly, the number of valid BLOB regions vblobcount is counted, and the distance dir between the two largest connected domains is calculated. When vblobcount>1 and dir>100, the collimation image is a completely separated double-target image; otherwise it is an adhered image. Then, for the completely separated dual-target image, the number of candidate BLOBs whose centers fall within each of the two largest BLOB regions is counted: the region containing the smaller number of candidate BLOBs is the main laser target, and the region containing the larger number is the analog light target. Finally, for the adhered image, when dir © 2024 Chinese Optical Society. All rights reserved.
    Accession Number: 20240715561765
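    A minimal OpenCV sketch of the separation/adhesion test stated in the abstract above (vblobcount > 1 and dir > 100 indicates a completely separated double-target image). The binarization threshold and minimum BLOB area are illustrative assumptions, and the adhered-image branch, which is truncated in this record, is not reproduced.

```python
import cv2
import numpy as np

def classify_dual_targets(gray, bin_thresh=50, dist_thresh=100, min_area=20):
    """Decide whether the two optical targets in a collimated image are fully
    separated or adhered, following the vblobcount/dir rule quoted in the
    abstract. bin_thresh and min_area are illustrative assumptions."""
    _, binary = cv2.threshold(gray, bin_thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)

    # Valid BLOBs: skip label 0 (background) and reject tiny regions.
    valid = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    vblobcount = len(valid)
    if vblobcount < 2:
        return "adhered_or_single", None

    # Two largest connected domains and the distance between their centroids.
    top2 = sorted(valid, key=lambda i: stats[i, cv2.CC_STAT_AREA], reverse=True)[:2]
    (x1, y1), (x2, y2) = centroids[top2[0]], centroids[top2[1]]
    dir_dist = float(np.hypot(x1 - x2, y1 - y2))

    state = "separated" if dir_dist > dist_thresh else "adhered"
    return state, dir_dist
```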
  • Record 137 of

    Title:Temperature Measurement Method for Small Target Medium-Wave Infrared Spectral Radiation Based on Distance Correction
    Author(s):Li, Wen-Kai(1,2); Zhou, Liang(1); Liu, Zhao-Hui(1); Gui, Kai(1); Liu, Kai(1); Li, Zhi-Guo(1); Xie, Mei-Lin(1)
    Source: Guang Pu Xue Yu Guang Pu Fen Xi/Spectroscopy and Spectral Analysis  Volume: 44  Issue: 4  DOI: 10.3964/j.issn.1000-0593(2024)04-1158-07  Published: April 2024  
    Abstract:For long-distance space targets moving at high speeds, temperature is one of the important parameters characterizing their working state and performance. Accurately obtaining the temperature of the target is of important reference value for judging its motion state and predicting the development of its situation. At present, the commonly used processing methods for surface targets or point targets are no longer applicable to the measurement of the radiation characteristics of small targets. At the same time, spectral detection increases the distinguishable information of the target in the wavelength dimension, which can accurately capture the distribution of the target energy with wavelength, providing a possibility for the inversion of the target temperature, and has great application potential. The slitless spectrometer can reduce the requirements for tracking and stabilization accuracy of space targets; it has the characteristics of simple structure, high frame rate, and fast response speed, and has high application value in astronomical observation and spacecraft observation. In this paper, we analyzed the spectral calibration model for measuring the infrared radiation characteristics of a target and determined the main parameters in the linear response model of the infrared detector pixels. In order to reduce the influence of imaging distance on temperature measurement accuracy, we proposed a target temperature inversion model based on distance correction. The improved temperature measurement accuracy meets the requirements of practical engineering applications and has certain guiding significance for infrared radiation spectrum temperature measurement. © 2024 Science Press. All rights reserved.
    Accession Number: 20241515866101
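    The abstract above describes inverting target temperature from measured in-band radiation with a distance correction. As a generic, hedged illustration (not the paper's calibrated model), the sketch below integrates Planck's law over a 3~5 μm band and solves for the temperature that reproduces a measured irradiance after an inverse-square distance factor; the emissivity, atmospheric transmittance, target area, and range are all assumed inputs.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck const., speed of light, Boltzmann const.

def planck(lmbda, T):
    """Blackbody spectral radiance [W·m^-3·sr^-1] at wavelength lmbda [m]."""
    return 2 * h * c**2 / lmbda**5 / (np.exp(h * c / (lmbda * k * T)) - 1.0)

def band_radiance(T, band=(3e-6, 5e-6)):
    """Blackbody radiance integrated over the MWIR band [W·m^-2·sr^-1]."""
    return quad(planck, band[0], band[1], args=(T,))[0]

def invert_temperature(E_meas, A_target, R, emissivity=0.9, tau=1.0):
    """Solve for the temperature that reproduces the measured in-band irradiance
    at the sensor, using the small (sub-pixel) target relation
    E = tau * emissivity * L_band(T) * A_target / R**2.
    A generic inverse-square illustration; emissivity, transmittance tau, target
    area A_target [m^2] and range R [m] are assumed, not values from the paper."""
    L_target = E_meas * R**2 / (A_target * emissivity * tau)
    return brentq(lambda T: band_radiance(T) - L_target, 150.0, 2000.0)
```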
  • Record 138 of

    Title:An Infrared Evanescent Wave Sensor for Detection of Ascorbic Acid in Food and Drugs
    Author(s):You, Tianxiang(1); Zhao, Yongkun(1); Xu, Yantao(2); Guo, Haitao(2); Zhu, Jihong(3); Tao, Haizheng(1); Zhang, Xianghua(4); Xu, Yinsheng(1)
    Source: Journal of Lightwave Technology  Volume: 42  Issue: 9  DOI: 10.1109/JLT.2024.3357491  Published: May 1, 2024  
    Abstract:An infrared evanescent wave sensor was developed to accurately detect ascorbic acid (vitamin C) in food and drugs. The sensor was fabricated by tapering and bending of As2S3 infrared fibers. Due to the broad transmission range (5000-1500 cm-1) of the infrared fibers, covering the characteristic absorption peak of ascorbic acid (C = O at 1760 cm-1 and C = C at 1690 cm-1), the sensor is capable of accurately identifying and detecting the concentration of ascorbic acid. Experimental results demonstrated that a conically tapered fiber sensor with a waist diameter of 50 μm, waist length of 30 mm, and a radius of 2 mm achieved a maximum sensitivity of 0.1257 (a.u./(mg·ml-1)) and a limit of detection (LoD) of 0.917 mg/ml. Furthermore, the application of this fiber sensor in various vitamin C-containing tablets and juices validated its high accuracy and minimal measurement deviation (as low as 0.19 mg/ml). Compared to traditional detection methods, the sensor not only provides a faster and cost-effective solution to identify the substance but also maintains high accuracy. It offers a new approach to quantitative and qualitative analysis of food and drugs. © 1983-2012 IEEE.
    Accession Number: 20240615489260
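    The abstract above reports a sensitivity of 0.1257 a.u./(mg·ml-1) and a limit of detection of 0.917 mg/ml. A common way to relate the two is the 3σ criterion, LoD = 3·σ_blank / sensitivity; the check below back-calculates the implied blank noise level, which is an inference for illustration, not a value reported in the record.

```python
# 3-sigma limit-of-detection relation: LoD = 3 * sigma_blank / sensitivity.
sensitivity = 0.1257   # a.u. per (mg/ml), from the abstract
lod = 0.917            # mg/ml, from the abstract
sigma_blank = lod * sensitivity / 3
print(f"implied blank standard deviation: {sigma_blank:.4f} a.u.")  # ~0.0384 a.u.
```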
  • Record 139 of

    Title:A Cross-Level Interaction Network Based on Scale-Aware Augmentation for Camouflaged Object Detection
    Author(s):Ma, Ming(1); Sun, Bangyong(1,2)
    Source: IEEE Transactions on Emerging Topics in Computational Intelligence  Volume: 8  Issue: 1  DOI: 10.1109/TETCI.2023.3299305  Published: February 1, 2024  
    Abstract:Camouflaged object detection (COD), with the task of separating the camouflaged object from its color/texture-similar background, has been widely used in the fields of medical diagnosis and military reconnaissance. However, the COD task is still a challenging problem due to two main difficulties: large scale variation for different camouflaged objects, and extreme similarity between the camouflaged object and its background. To address these problems, a cross-level interaction network based on scale-aware augmentation (CINet) for the COD task is proposed. Specifically, a scale-aware augmentation module (SAM) is firstly designed to perceive the scale information of the camouflaged object by calculating an optimal receptive field. Furthermore, a cross-level interaction module (CLIM) is proposed to facilitate the interaction of scale information at all levels, and the context of the feature maps is enriched accordingly. Finally, with the purpose of fully utilizing these features, we design a dual-branch feature decoder (DFD) to strengthen the connection between the predictions at each level. Extensive experiments performed on four COD datasets, i.e., CHAMELEON, CAMO, COD10K, and NC4K, demonstrate the superiority of the proposed CINet compared with 21 existing state-of-the-art methods. © 2017 IEEE.
    Accession Number: 20233414601306
  • Record 140 of

    Title:High-Precision Domain Adaptive Detection Method for Noncooperative Spacecraft Based on Optical Sensor Data
    Author(s):Zhang, Gaopeng(1); Zhang, Zhe(1); Lai, Jiahang(2); Zhang, Guangdong(1); Ye, Hao(1); Yang, Hongtao(1); Cao, Jianzhong(1); Du, Hubing(3); Zhao, Zixin(4); Chen, Weining(1); Lu, Rong(1); Wang, Changqing(2)
    Source: IEEE Sensors Journal  Volume: 24  Issue: 8  DOI: 10.1109/JSEN.2024.3370309  Published: April 15, 2024  
    Abstract:The accurate detection of noncooperative spacecraft based on optical sensor data is essential for critical space tasks, such as on-orbit servicing, rendezvous and docking, and debris removal. Traditional object detection methods struggle in the challenging space environment, which includes extreme variations in lighting, occlusions, and differences in image scale. To address this problem, this article proposes a high-precision, deep-learning-based, domain-adaptive detection method specifically tailored for noncooperative spacecraft. The proposed algorithm focuses on two key elements: dataset creation and network structure design. First, we develop a spacecraft image generation algorithm using cycle generative adversarial network (CycleGAN), facilitating seamless conversion between synthetic and real spacecraft images to bridge domain differences. Second, we combine a domain-adversarial neural network with YOLOv5 to create a robust detection model based on multiscale domain adaptation. This approach enhances the YOLOv5 network's ability to learn domain-invariant features from both synthetic and real spacecraft images. The effectiveness of our high-precision domain-adaptive detection method is verified through extensive experimentation. This method enables several novel and significant space applications, such as space rendezvous and docking and on-orbit servicing. © 2001-2012 IEEE.
    Accession Number: 20241115731816
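    The abstract above combines a domain-adversarial neural network with YOLOv5 to learn domain-invariant features from synthetic and real spacecraft images. The standard building block of DANN-style training is a gradient reversal layer, which leaves the forward pass unchanged but flips the gradient flowing from a domain classifier into the feature extractor; the sketch below shows that block in PyTorch. The discriminator architecture and the lambda value are assumptions, and the multiscale hookup into YOLOv5 is not reproduced.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; gradient multiplied by -lambda in the
    backward pass (the core of DANN-style domain-adversarial training)."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainClassifier(nn.Module):
    """Small domain discriminator applied to detector feature maps; predicts
    synthetic vs. real domain. Channel widths and the fixed lambda are
    illustrative assumptions."""
    def __init__(self, in_channels=256, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 2),
        )

    def forward(self, feats):
        return self.head(GradReverse.apply(feats, self.lambd))
```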
  • Record 141 of

    Title:Enhancing Aircraft Object Detection in Complex Airport Scenes Using Deep Transfer Learning
    Author(s):Zhong, Dan(1); Li, Tiehu(2); Li, Cheng(3)
    Source: Guangzi Xuebao/Acta Photonica Sinica  Volume: 53  Issue: 4  DOI: 10.3788/gzxb20245304.0415002  Published: April 2024  
    Abstract:Within the civil aviation airports of China, intricate traffic scenarios and a substantial flow of traffic are pervasive. Conventional monitoring methodologies, including tower observations and scene reports, are vulnerable to errors and omissions. Aircraft object detection in airport scenes remains a challenging task in the field of computer vision, particularly under complex environmental conditions. Severe occlusion of aircraft objects, the dynamic nature of airport environments, and the variability in object sizes pose difficulties for accurate object detection. In response to these challenges, we propose an enhanced deep learning model for aircraft object detection in airport scenes. Given the practical constraints of limited hardware computational power at civil aviation airports, the proposed method adopts the ResNet-50 model as the foundational backbone network. After pre-training on publicly available datasets, transfer learning techniques are employed for fine-tuning within the specific target domain of airport scenes. Deep transfer learning methods are utilized to enhance the feature extraction capabilities of the model, ensuring better adaptation to the limited aircraft dataset in airport scenarios. Additionally, we incorporate an adjustment module, consisting of two convolution layers, into the backbone network with a residual structure. The adjustment module increases the receptive field of deep feature maps and improves the model's robustness. Moreover, the proposed method introduces the Feature Pyramid Network (FPN), establishing lateral connections across the various stages of ResNet-50 together with top-down connections. The FPN generates and extracts feature information from multiple scales, facilitating the fusion of features in the feature maps and enhancing the accuracy of multi-scale target detection. Furthermore, optimizations have been implemented on the detection head, which is composed of parallel classification and regression branches. This detection head aims to strike a balance between the accuracy and real-time performance of target detection, facilitating the fast and accurate generation of bounding boxes and classification outcomes in the model's output. The loss function incorporates weighted target classification loss and localization loss, with GIoU loss used to calculate the localization loss. Moreover, we construct a comprehensive airport scene dataset, named Aeroplane, to evaluate the effectiveness of our proposed model. This dataset encompasses real images of diverse aircraft in various backgrounds and scenes, including challenging weather conditions such as rain, fog, and dust, as well as different times of day such as noon, dusk, and night. Most of the color images are captured from camera equipment deployed in various locations, including terminal buildings, control towers, ground sentry posts, and other places of a civil aviation airport surveillance system in China. The diversity of the dataset contributes to enhancing the generalization performance of the model. The Aeroplane dataset is structured according to standards and is scalable for future expansion. We conduct experiments on the Aeroplane dataset. Experimental results demonstrate that our proposed model outperforms classic approaches such as RetinaNet, Inception-V3+FPN, and ResNet-34+FPN. Compared to the baseline method, ResNet-50+FPN, our model achieves a 4.9% improvement in average precision for single-target aircraft detection, a 4.0% improvement for overlapped aircraft detection, and a 4.4% improvement for small target aircraft detection on the Aeroplane dataset. The overall average precision is improved by 2.2%. Through experimental validation, our proposed model has demonstrated significant performance improvement in aircraft target detection within airport scenarios. The presented model exhibits robust scene adaptability in various airport environments, including non-occlusion, occlusion, and complex scenes such as nighttime and foggy weather, which validates its practicality in real-world airport settings. The balanced design of real-time performance and accuracy in our approach renders it feasible for practical applications, providing a reliable aircraft target detection solution for airport surveillance systems and offering valuable insights for the task of object detection. © 2024 Chinese Optical Society. All rights reserved.
    Accession Number: 20241715960809
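    A minimal sketch of the backbone arrangement described in the abstract above: an ImageNet-pre-trained ResNet-50 with a Feature Pyramid Network over its stages and the early stages frozen for transfer learning on a small airport dataset. It uses the torchvision ≥ 0.13 API; the choice of frozen stages and FPN channel width are assumptions, and the paper's adjustment module and detection head are not reproduced.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.models.feature_extraction import create_feature_extractor
from torchvision.ops import FeaturePyramidNetwork

class ResNet50FPNBackbone(nn.Module):
    """ImageNet-pre-trained ResNet-50 with an FPN over stages C2-C5 and the
    stem plus early stages frozen: a generic transfer-learning arrangement in
    the spirit of the abstract (adjustment module and detection head omitted)."""
    def __init__(self, out_channels=256):
        super().__init__()
        body = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
        self.body = create_feature_extractor(
            body, return_nodes={"layer1": "c2", "layer2": "c3",
                                "layer3": "c4", "layer4": "c5"})
        # Freeze the stem and the first two stages so the limited airport data
        # only fine-tunes the deeper, more task-specific layers.
        for name, p in self.body.named_parameters():
            if not name.startswith(("layer3", "layer4")):
                p.requires_grad_(False)
        self.fpn = FeaturePyramidNetwork([256, 512, 1024, 2048], out_channels)

    def forward(self, x):
        return self.fpn(self.body(x))   # dict of multi-scale feature maps

# Example: feats = ResNet50FPNBackbone()(torch.randn(1, 3, 512, 512))
```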
  • Record 142 of

    Title:Experimental study on the implementation method of short pulse laser in distance-selective imaging system
    Author(s):Wang, Chong(1); Li, Miaomiao(1); Yang, Jiahao(1); Zhu, Bingli(2); Han, Jianghao(1); Dang, Wenbin(1)
    Source: Optics and Laser Technology  Volume: 171  Issue:   DOI: 10.1016/j.optlastec.2023.110358  Published: April 2024  
    Abstract:Conventional distance-selective imaging systems use lasers that are large in size, high in power consumption, and high in cost. In order to reduce the system size, power consumption, and cost, the principles and design methods of two drive circuits for generating narrow pulse lasers, based on step recovery diodes (SRDs, combined with shorted transmission lines) and RF bipolar transistors, are discussed, physically fabricated, and tested, and the characteristics of the two pulse generators and the factors affecting the pulse width and amplitude are analyzed. The experimental results show that the SRD-based method can generate a narrow pulse with a rise time of 456.8 ps, a fall time of 458.3 ps, a pulse width of 1.5 ns, and an amplitude of 2.38 V; the transistor-based method can generate a narrow pulse with a rise time of 903.5 ps, a fall time of 946.1 ps, a pulse width of 824 ps, and an amplitude of 2.46 V. Both can reach a repetition frequency of 50 MHz, and both design methods can be combined with an external laser diode to achieve excellent short pulse laser output. © 2023
    Accession Number: 20234815126167
  • Record 143 of

    Title:Auto-Alignment Non-Contact Optical Measurement Method for Quantifying Wobble Error of a Theodolite on a Vehicle-Mounted Platform
    Author(s):Li, Xiangyu(1,2,3); Hao, Wei(1,3); Xie, Meilin(1,3); Liu, Bo(1,3); Jiang, Bo(1,3); Lv, Tao(1,2,3); Song, Wei(1,2,3); Ruan, Ping(1,3)
    Source: Tehnicki Vjesnik  Volume: 31  Issue: 2  DOI: 10.17559/TV-20230510000617  Published: 2024  
    Abstract:During non-landing measurements of a theodolite, the accuracy of the goniometric readings can be compromised by wobble errors induced by various factors such as wind loads, theodolite driving torque, and the stiffness of the supporting structure. To achieve high-precision non-landing measurements, it is essential to accurately determine and correct the platform wobble errors affecting the azimuth and pitch pointing angles. In this paper, a non-contact optical measurement method is proposed for quantifying platform wobble errors. The method establishes an auto-alignment optical path between an autocollimator and a reflector in the measuring device. By detecting the deviation angle of the CCD image point as the optical path changes, precise measurements of the platform wobble errors can be obtained. Experimental results demonstrate that the measuring device can achieve an auto-alignment optical path within 5 minutes, significantly improving measurement efficiency. Furthermore, after measuring the platform wobble error and applying data correction, the average error in the azimuth pointing angle is reduced from 31.5″ to 9.8″, and the average error in the pitch pointing angle is reduced from 21″ to 9.2″. These results highlight the substantial correction effect achieved by the proposed method. © 2024, Strojarski Facultet. All rights reserved.
    Accession Number: 20241115717961
  • Record 144 of

    Title:Constructing 1D/0D Sb2S3/Cd0.6Zn0.4S S-scheme heterojunction by vapor transport deposition and in-situ hydrothermal strategy towards photoelectrochemical water splitting
    Author(s):Liu, Dekang(1); Jin, Wei(1); Zhang, Liyuan(1); Li, Qiujie(1); Sun, Qian(1); Wang, Yishan(2); Hu, Xiaoyun(1); Miao, Hui(1)
    Source: Journal of Alloys and Compounds  Volume: 975  Issue:   DOI: 10.1016/j.jallcom.2023.172926  Published: February 25, 2024  
    Abstract:Antimony sulfide (Sb2S3) is widely used in photocatalysts and photovoltaic cells because of its abundant reserves, low toxicity, environmental friendliness, narrow band gap, and high light absorption capacity. Sb2S3 has a quasi-one-dimensional structure composed of [Sb4S6]n nanoribbons, and many reported studies have focused on preparing Sb2S3 with [hk1]-oriented dominant growth to improve its photogenerated carrier transport capacity. However, there has been relatively little research on the preparation of [hk1]-oriented rod-like Sb2S3 by the vapor transport deposition (VTD) method. In this work, the VTD method was used to prepare Sb2S3 with [hk1]-oriented growth on the FTO substrate, which was then composited with the ternary solid solution CdxZn1−xS. Finally, a novel Sb2S3/Cd0.6Zn0.4S S-scheme heterojunction with a rod-like core-shell structure was successfully constructed, which could effectively improve the photoelectrochemical properties. Because the solid solution component x is adjustable, that is, CdxZn1−xS has a continuously adjustable band gap width and energy level position, the Sb2S3/CdxZn1−xS heterojunction type can be regulated from Type-II to S-scheme. Photoelectrochemical (PEC) tests indicated that the composite photoanode Sb2S3/Cd0.6Zn0.4S achieved a higher photocurrent density (2.54 mA·cm−2, 1.23 V vs. RHE), which is about 4.31 times that of the pure Sb2S3 nanorod photoanode (0.59 mA·cm−2, 1.23 V vs. RHE). © 2023 Elsevier B.V.
    Accession Number: 20234915144994