2019

  • Record 541 of

    Title:Highly reconfigurable hybrid laser based on an integrated nonlinear waveguide
    Author(s):Aadhi, A.(1); Kovalev, Anton V.(2); Kues, Michael(3); Roztocki, Piotr(1); Reimer, Christian(1,4); Zhang, Yanbing(1); Wang, Tao(1,5); Little, Brent E.(6); Chu, Sai T.(7); Wang, Zhiming(5); Moss, David J.(8); Viktorov, Evgeny A.(2); Morandotti, Roberto(1,2,5)
    Source: Optics Express  Volume: 27  Issue: 18  DOI: 10.1364/OE.27.025251  Published: September 2, 2019  
    Abstract:The ability of laser systems to emit different adjustable temporal pulse profiles and patterns is desirable for a broad range of applications. While passive mode-locking techniques have been widely employed for the realization of ultrafast laser pulses with mainly Gaussian or hyperbolic-secant temporal profiles, the controllable generation of versatile pulse shapes from a single laser system remains a challenge. Here we show that a nonlinear amplifying loop mirror (NALM) laser with a bandwidth-limiting filter (in a nearly dispersion-free arrangement) and a short integrated nonlinear waveguide enables the realization and distinct control of multiple mode-locked pulsing regimes (e.g., Gaussian pulses, square waves, fast sinusoidal-like oscillations) with repetition rates that are variable from the fundamental (7.63 MHz) through its 205th harmonic (1.56 GHz). These dynamics are described by a newly developed, compact theoretical model, which agrees well with our experimental results. The model attributes the control of emission regimes to changes in the NALM response function, achieved through the adjustable interplay between the NALM amplification and nonlinearity. In contrast to previously reported square-wave emission, we experimentally observed that an Ikeda instability was responsible for the square-wave generation. The presented approach enables laser systems that can be applied universally, e.g., to spectroscopy, ultrafast signal processing, and the generation of non-classical light states. © 2019 Optical Society of America.
    Accession Number: 20193607395239
  • Record 542 of

    Title:The enhanced X-ray Timing and Polarimetry mission—eXTP
    Author(s):Zhang, ShuangNan(1); Santangelo, Andrea(1,2); Feroci, Marco(3,4); Xu, YuPeng(1); Lu, FangJun(1); Chen, Yong(1); Feng, Hua(5); Zhang, Shu(1); Brandt, Søren(36); Hernanz, Margarita(12,13); Baldini, Luca(33); Bozzo, Enrico(6); Campana, Riccardo(23); De Rosa, Alessandra(3); Dong, YongWei(1); Evangelista, Yuri(3,4); Karas, Vladimir(8); Meidinger, Norbert(16); Meuris, Aline(10); Nandra, Kirpal(16); Pan, Teng(21); Pareschi, Giovanni(31); Orleanski, Piotr(37); Huang, QiuShi(22); Schanne, Stephane(10); Sironi, Giorgia(31); Spiga, Daniele(31); Svoboda, Jiri(8); Tagliaferri, Gianpiero(31); Tenzer, Christoph(2); Vacchi, Andrea(25,26); Zane, Silvia(14); Walton, Dave(14); Wang, ZhanShan(22); Winter, Berend(14); Wu, Xin(7); in’ t Zand, Jean J. M.(11); Ahangarianabhari, Mahdi(29); Ambrosi, Giovanni(32); Ambrosino, Filippo(3); Barbera, Marco(35); Basso, Stefano(31); Bayer, Jörg(2); Bellazzini, Ronaldo(33); Bellutti, Pierluigi(28); Bertucci, Bruna(32); Bertuccio, Giuseppe(29); Borghi, Giacomo(28); Cao, XueLei(1); Cadoux, Franck(7); Campana, Riccardo(23); Ceraudo, Francesco(3); Chen, TianXiang(1); Chen, YuPeng(1); Chevenez, Jerome(36); Civitani, Marta(31); Cui, Wei(25); Cui, WeiWei(1); Dauser, Thomas(39); Del Monte, Ettore(3,4); Di Cosimo, Sergio(1); Diebold, Sebastian(2); Doroshenko, Victor(2); Dovciak, Michal(8); Du, YuanYuan(1); Ducci, Lorenzo(2); Fan, QingMei(21); Favre, Yannick(7); Fuschino, Fabio(23); Gálvez, José Luis(12,13); Gao, Min(1); Ge, MingYu(1); Gevin, Olivier(10); Grassi, Marco(30); Gu, QuanYing(21); Gu, YuDong(1); Han, DaWei(1); Hong, Bin(21); Hu, Wei(1); Ji, Long(2); Jia, ShuMei(1); Jiang, WeiChun(1); Kennedy, Thomas(14); Kreykenbohm, Ingo(39); Kuvvetli, Irfan(36); Labanti, Claudio(23); Latronico, Luca(34); Li, Gang(1); Li, MaoShun(1); Li, Xian(1); Li, Wei(1); Li, ZhengWei(1); Limousin, Olivier(10); Liu, HongWei(1); Liu, XiaoJing(1); Lu, Bo(1); Luo, Tao(1); Macera, Daniele(29); Malcovati, Piero(30); Martindale, Adrian(15); Michalska, Malgorzata(37); 
Meng, Bin(1); Minuti, Massimo(33); Morbidini, Alfredo(3); Muleri, Fabio(3,4); Paltani, Stephane(6); Perinati, Emanuele(2); Picciotto, Antonino(28); Piemonte, Claudio(28); Qu, JinLu(1); Rachevski, Alexandre(24); Rashevskaya, Irina(27); Rodriguez, Jerome(10); Schanz, Thomas(2); Shen, ZhengXiang(22); Sheng, LiZhi(20); Song, JiangBo(21); Song, LiMing(1); Sgro, Carmelo(33); Sun, Liang(1); Tan, Ying(1); Uttley, Phil(9); Wang, Bo(17); Wang, DianLong(19); Wang, GuoFeng(1); Wang, Juan(1); Wang, LangPing(18); Wang, YuSa(1); Watts, Anna L.(9); Wen, XiangYang(1); Wilms, Jörn(39); Xiong, ShaoLin(1); Yang, JiaWei(1); Yang, Sheng(1); Yang, YanJi(1); Yu, Nian(1); Zhang, WenDa(8); Zampa, Gianluigi(24); Zampa, Nicola(24); Zdziarski, Andrzej A.(38); Zhang, AiMei(1); Zhang, ChengMo(1); Zhang, Fan(1); Zhang, Long(21); Zhang, Tong(1); Zhang, Yi(1); Zhang, XiaoLi(21); Zhang, ZiLiang(1); Zhao, BaoSheng(20); Zheng, ShiJie(1); Zhou, YuPeng(21); Zorzi, Nicola(28); Zwart, J. Frans(11)
    Source: Science China: Physics, Mechanics and Astronomy  Volume: 62  Issue: 2  DOI: 10.1007/s11433-018-9309-2  Published: February 1, 2019  
    Abstract:In this paper we present the enhanced X-ray Timing and Polarimetry mission (eXTP). eXTP is a space science mission designed to study fundamental physics under extreme conditions of density, gravity, and magnetism. The mission aims at determining the equation of state of matter at supra-nuclear density, measuring effects of QED, and understanding the dynamics of matter in strong-field gravity. In addition to investigating fundamental physics, eXTP will be a very powerful observatory for astrophysics, providing observations of unprecedented quality on a variety of galactic and extragalactic objects. In particular, its wide-field monitoring capabilities will be highly instrumental in detecting the electromagnetic counterparts of gravitational wave sources. The paper provides a detailed description of (1) the technological and technical aspects and the expected performance of the instruments of the scientific payload, and (2) the elements and functions of the mission, from the spacecraft to the ground segment. © 2018, Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature.
    Accession Number: 20185206295714
  • Record 543 of

    Title:High-resolution imaging of space target based on compressed sensing
    Author(s):Yu, Congcong(1,2); Zhao, Hui(1); Zhang, Ling(1,2); Wang, Jing(1,2); Ge, Rui(1,2); Fan, Xuewu(1)
    Source: Proceedings of SPIE - The International Society for Optical Engineering  Volume: 11179  Issue:   DOI: 10.1117/12.2539647  Published: 2019  
    Abstract:Degradation factors such as Poisson noise, blurring, and variations in contrast and reflectivity severely affect the imaging of non-cooperative space targets under low light intensity, and the resulting image quality is usually poor. In this paper, a two-step reconstruction framework based on compressed sensing (CS) theory is proposed to counter these degradation factors and improve the quality of space target images. The first step is a standard compressed-sensing reconstruction, and the second step is super-resolution, also based on compressed-sensing theory. Specifically, once the sparse samples are obtained, the total variation augmented Lagrangian alternating direction algorithm (TVAL3) is first used to recover the 2D image from only 25% of the pixels of the original image, rather than from all pixels as in traditional sampling. Subsequently, single-frame super-resolution reconstruction is performed on the recovered 2D image using a dictionary-learning-based algorithm, which doubles the image resolution. © COPYRIGHT SPIE.
    Accession Number: 20193907475225
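    The first step above is standard compressed-sensing recovery; the paper uses TVAL3 (total-variation minimization), which is too long to sketch here. As an illustrative stand-in for the same principle, the following toy example recovers a synthetic sparse signal from 40% random Gaussian measurements with plain iterative soft thresholding (ISTA); all dimensions and parameters are made up for the demo, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 100, 40, 4                           # signal length, measurements (40%), sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
    y = A @ x_true                                 # compressed measurements

    def ista(A, y, lam=0.05, iters=500):
        """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x - A.T @ (A @ x - y) / L          # gradient step on the data term
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft shrinkage
        return x

    x_hat = ista(A, y)
    rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    ```

    The super-resolution step of the framework would then operate on an image recovered this way.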
  • Record 544 of

    Title:Low-Light Remote Sensing Images Enhancement Algorithm Based on Fully Convolutional Neural Network
    Author(s):Jian, Wuzhen(1,2); Zhao, Hui(1); Bai, Zhe(1); Fan, Xuewu(1)
    Source: Lecture Notes in Electrical Engineering  Volume: 552  Issue:   DOI: 10.1007/978-981-13-6553-9_7  Published: 2019  
    Abstract:Low-light remote sensing is a powerful complement to daytime optical remote sensing: it greatly expands the time domain of high-resolution earth observation and makes both day and night imaging possible. However, when a low-light sensor is used at dusk and dawn, the captured images have low contrast, low brightness, and a low signal-to-noise ratio, which severely restrict the identification and interpretation of ground objects. Traditional low-light enhancement algorithms such as histogram equalization, gamma conversion, and contrast-limited adaptive histogram equalization can enhance a low-light remote sensing image and improve its contrast, but the noise amplification they introduce degrades the signal-to-noise ratio of the enhanced image. Therefore, in this paper, a data-driven low-light remote sensing image enhancement algorithm is studied. First, many pairs of raw images captured under very low illumination are collected. These raw data are then used to train a deep fully convolutional neural network with an encoder-decoder structure. After that, low-light remote sensing images can be enhanced by the pretrained network. Numerical results demonstrate that the fully convolutional network-based enhancement algorithm greatly improves the brightness and contrast of low-light images compared with traditional enhancement algorithms while preserving a sufficiently high signal-to-noise ratio, which makes interpretation and identification much easier. © 2019, Springer Nature Singapore Pte Ltd.
    Accession Number: 20191706840904
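    The traditional baselines named in the abstract are simple point operations. A minimal numpy sketch of two of them (gamma correction and global histogram equalization) follows; the CLAHE variant and the trained encoder-decoder network are far larger and are omitted. The synthetic dark frame is purely illustrative.

    ```python
    import numpy as np

    def gamma_correct(img, gamma=0.5):
        """Power-law stretch; gamma < 1 brightens dark images."""
        x = img.astype(np.float64) / 255.0
        return np.clip(x ** gamma * 255.0, 0, 255).astype(np.uint8)

    def hist_equalize(img):
        """Global histogram equalization via the cumulative distribution function."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum() / img.size
        lut = np.round(cdf * 255.0).astype(np.uint8)   # map gray levels through the CDF
        return lut[img]

    # demo on a synthetic dark frame (all values below 64)
    rng = np.random.default_rng(0)
    dark = rng.integers(0, 64, size=(64, 64)).astype(np.uint8)
    bright = gamma_correct(dark)
    flat = hist_equalize(dark)
    ```

    Both operations amplify noise along with signal, which is exactly the drawback the learned approach addresses.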
  • Record 545 of

    Title:Design of high-accuracy corner cube retroreflector array
    Author(s):Liu, Jie(1); Lin, Shangmin(1); Wang, Hu(1); Liu, Yang(1); Xue, Yaoke(1); Liu, Meiying(1); Xie, Yongjie(1); Bu, Fan(1)
    Source: Proceedings of SPIE - The International Society for Optical Engineering  Volume: 11052  Issue:   DOI: 10.1117/12.2522043  Published: 2019  
    Abstract:The retroreflector array consists of multiple corner cube reflectors and is used as a cooperative target for space attitude measurement. The position and normal direction of each corner cube reflector directly affect the measurement accuracy. From the point of view of structural design, this paper puts forward a series of practical precision-extraction methods based on machining accuracy. Experimental verification shows that the normal-direction accuracy of the method can be controlled within 5', and the position accuracy is better than 0.05 mm. © 2019 SPIE.
    Accession Number: 20190806535059
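    The measurement principle behind a corner cube is easy to verify numerically: three reflections off mutually perpendicular faces return any ray anti-parallel to its input, so face-normal errors translate directly into return-beam deviation (hence the 5' normal-direction tolerance quoted above). A small sketch with illustrative numbers:

    ```python
    import numpy as np

    def reflect(v, n):
        """Mirror reflection of direction v off a plane with unit normal n."""
        n = n / np.linalg.norm(n)
        return v - 2.0 * np.dot(v, n) * n

    v_in = np.array([0.3, -0.5, 0.81])
    v_in = v_in / np.linalg.norm(v_in)

    v_out = v_in.copy()
    for face_normal in np.eye(3):          # three mutually perpendicular faces
        v_out = reflect(v_out, face_normal)
    # ideal corner cube: the ray returns exactly anti-parallel, v_out == -v_in

    # tilt one face normal by 5 arcminutes and measure the return-beam deviation
    eps = np.deg2rad(5.0 / 60.0)
    tilted = np.array([np.sin(eps), 0.0, np.cos(eps)])   # perturbed z-face normal
    v_bad = reflect(reflect(reflect(v_in, np.eye(3)[0]), np.eye(3)[1]), tilted)
    deviation = np.arccos(np.clip(np.dot(v_bad, -v_in), -1.0, 1.0))
    ```

    The deviation stays on the order of the face error itself (well under a degree here), which is why arcminute-level normal-direction control matters for this target.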
  • Record 546 of

    Title:Scene text detection with inception text proposal generation module
    Author(s):Zhang, Hang(1,2); Liu, Jiahang(1); Chen, Tieqiao(1,2)
    Source: ACM International Conference Proceeding Series  Volume: Part F148150  Issue:   DOI: 10.1145/3318299.3318373  Published: 2019  
    Abstract:Most deep-learning-based scene text detection methods have difficulty locating text with multi-scale shapes. The challenges of scale-robust text detection lie in two aspects: 1) scene text is diverse and usually appears in various colors, fonts, orientations, languages, and scales in natural images; 2) most existing detectors struggle to locate text with large scale changes. We propose a new Inception-Text module and an adaptive scale-scaling test mechanism for multi-oriented scene text detection. The proposed algorithm enhances performance significantly while adding little computation, and can flexibly detect text in various scales, including horizontal, oriented, and curved text. Evaluated on three recent standard public benchmarks, the proposed method achieves state-of-the-art performance: an F-measure of 93.3% on ICDAR2013, 90.47% on ICDAR2015, and 76.08% on ICDAR2017 MLT. © 2019 Association for Computing Machinery.
    Accession Number: 20192307006435
  • Record 547 of

    Title:Improvement of geometric calibration algorithm with collinear constraints
    Author(s):Guan, Zhao(1,2); Qiao, Weidong(2); Yang, Jianfeng(1); Xue, Bin(1); Lv, Baogang(1); Wang, Nange(1)
    Source: Proceedings of SPIE - The International Society for Optical Engineering  Volume: 10843  Issue:   DOI: 10.1117/12.2506578  Published: 2019  
    Abstract:With digital cameras coming into widespread use and application systems growing increasingly intelligent, 3D reconstruction has become an essential part of the vision system. To achieve it, geometric camera calibration must first be performed to determine the set of parameters that describes the mapping between 3-D reference coordinates and 2-D image coordinates. The typical classic method offers high calculation accuracy and strong robustness; however, little attention has been paid to the initial values of the distortion coefficients or to model constraints that make the results globally optimal. In this paper, we present an improved algorithm based on the traditional calibration method. First, exact homography matrices are determined with the RANSAC algorithm to reject more erroneous points, and the initial values of the distortion coefficients are solved from the distortion model; the image coordinates of collinear feature points are then constrained to lie on straight lines. Finally, the fully optimized parameters, suppressed re-projection errors, and higher-precision calibration parameters are obtained. This is a prerequisite for image fusion and for the wide application of computer vision in the field of 3D reconstruction. © 2019 SPIE.
    Accession Number: 20191006603246
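    The collinearity constraint works because a pinhole projection maps straight lines to straight lines, while radial lens distortion bends them. A minimal sketch of the standard pinhole-plus-radial-distortion model (the intrinsics and coefficients are illustrative values, not the paper's calibration code):

    ```python
    import numpy as np

    def distort(xn, yn, k1, k2):
        """Radial distortion on normalized image coordinates."""
        r2 = xn ** 2 + yn ** 2
        f = 1.0 + k1 * r2 + k2 * r2 ** 2
        return xn * f, yn * f

    def project(X, K, k1=0.0, k2=0.0):
        """Pinhole projection of a camera-frame 3-D point, with radial distortion."""
        xn, yn = X[0] / X[2], X[1] / X[2]
        xd, yd = distort(xn, yn, k1, k2)
        return K[0, 0] * xd + K[0, 2], K[1, 1] * yd + K[1, 2]

    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])   # illustrative intrinsics
    u0, v0 = project(np.array([0.1, 0.2, 1.0]), K)            # undistorted pixel
    u1, v1 = project(np.array([0.1, 0.2, 1.0]), K, k1=0.1)    # pushed outward by distortion
    ```

    Solving for k1, k2 from how far observed line points deviate from a fitted straight line is the essence of the collinearity step described above.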
  • Record 548 of

    Title:Single-image super-resolution reconstruction via generative adversarial network
    Author(s):Ju, Chunwu(1,2); Su, Xiuqin(1); Yang, Haoyuan(1,2); Ning, Hailong(1,2)
    Source: Proceedings of SPIE - The International Society for Optical Engineering  Volume: 10843  Issue:   DOI: 10.1117/12.2505809  Published: 2019  
    Abstract:Single-image super-resolution (SISR) reconstruction is important for image processing, and many algorithms based on deep convolutional neural networks (CNNs) have been proposed in recent years. Although these algorithms achieve better accuracy and recovery results than traditional methods without CNNs, they miss finer texture details when super-resolving at a large upscaling factor. To solve this problem, we propose in this paper an algorithm based on a generative adversarial network for single-image super-resolution restoration at a 4x upscaling factor. The generative network decodes a restored high-resolution image, and the adversarial network pushes the generator to output finer, more realistic texture details. We performed experiments on the DIV2K dataset and showed that our method performs better in single-image super-resolution reconstruction: the reconstructed images improve in peak signal-to-noise ratio and structural similarity index and have a good visual effect. © 2019 SPIE.
    Accession Number: 20191006603227
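    Of the two quantitative metrics quoted above, PSNR is a one-liner over the mean squared error; SSIM needs windowed statistics and is omitted. A small sketch (the demo images are synthetic):

    ```python
    import numpy as np

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two images."""
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    ref = np.zeros((8, 8), dtype=np.uint8)
    val = psnr(ref, ref + 1)   # every pixel off by 1 -> MSE of 1 over an 8-bit range
    ```

    Higher is better; an MSE of 1 on 8-bit images corresponds to about 48.13 dB.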
  • Record 549 of

    Title:Noise-resistant matching algorithm integrating regional information for low-light stereo vision
    Author(s):Feng, Huahui(1,2); Zhang, Geng(1); Hu, Bingliang(1); Zhang, Xin(1); Li, Siyuan(1,2,3)
    Source: Journal of Electronic Imaging  Volume: 28  Issue: 1  DOI: 10.1117/1.JEI.28.1.013050  Published: January 1, 2019  
    Abstract:Low-light stereo vision is a challenging problem because images captured in dark environments usually suffer from strong random noise. Some widely adopted algorithms, such as semiglobal matching, depend mainly on pixel-level information; the accuracy of local feature matching and disparity propagation decreases when pixels become noisy. Focusing on this problem, we propose a matching algorithm that utilizes regional information to enhance robustness to locally noisy pixels. The algorithm is based on the framework of the ADCensus feature and semiglobal matching, and extends the original algorithm in two ways. First, image segmentation information is added to solve the problem of incomplete paths and improve the accuracy of cost calculation. Second, the matching cost volume is calculated with an AD-SoftCensus measure that minimizes the impact of noise by changing the pattern of the census descriptor from binary to trinary. The robustness of the proposed algorithm is validated on the Middlebury datasets, synthetic data, and real-world data captured by a low-light camera in darkness. The results show that the proposed algorithm outperforms top-ranked algorithms on low signal-to-noise-ratio data while retaining high accuracy on the Middlebury benchmark datasets. © 2019 SPIE and IS&T.
    Accession Number: 20191106619862
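    The binary-to-trinary change described for the AD-SoftCensus measure can be sketched as a census transform with a comparison dead zone: small intensity differences, which are dominated by noise in low light, map to a third "equal" state instead of flipping a bit. This is an illustrative reconstruction, not the authors' code; the window size and threshold are made-up values.

    ```python
    import numpy as np

    def census(img, t=0):
        """5x5 census descriptor around each pixel.
        t = 0 gives the usual hard comparison; t > 0 adds a dead zone so that
        small, noise-dominated differences map to a third 'equal' state."""
        h, w = img.shape
        pad = np.pad(img.astype(np.int64), 2, mode='edge')
        codes = np.zeros((h, w, 24), dtype=np.int8)
        k = 0
        for dy in range(-2, 3):
            for dx in range(-2, 3):
                if dy == 0 and dx == 0:
                    continue                       # skip the center pixel
                diff = pad[2 + dy:2 + dy + h, 2 + dx:2 + dx + w] - img.astype(np.int64)
                codes[:, :, k] = np.where(diff > t, 1, np.where(diff < -t, -1, 0))
                k += 1
        return codes

    # weak sensor noise on a flat patch flips hard-census bits but not soft ones
    rng = np.random.default_rng(0)
    flat = np.full((16, 16), 100, dtype=np.int64)
    noisy = flat + rng.integers(-2, 3, size=flat.shape)
    ```

    With a dead zone wider than the noise amplitude, the descriptor of the noisy patch is identical to that of the clean patch, which is the robustness the abstract claims.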
  • Record 550 of

    Title:Robust subspace clustering by Cauchy loss function
    Author(s):Tao, Dacheng(3); Dong, Yongsheng(1,2); Lu, Quanmao(1); Li, Xuelong(1)
    Source: arXiv  Volume:   Issue:   DOI:   Published: April 28, 2019  
    Abstract:Subspace clustering explores the low-dimensional subspaces of high-dimensional data. State-of-the-art approaches follow the spectral-clustering-based model: they pay much attention to learning a representation matrix that yields a suitable similarity matrix, but overlook the influence of the noise term on subspace clustering. Real data, however, are always contaminated by noise, and the noise usually has a complicated statistical distribution. To alleviate this problem, we propose in this paper a subspace clustering method based on the Cauchy loss function (CLF). In particular, it uses the CLF to penalize the noise term, suppressing the large noise mixed into real data. This works because the CLF's influence function has an upper bound, which limits the effect of any single sample, especially one with large noise, on the estimation of the residuals. Furthermore, we theoretically prove the grouping effect of our proposed method, meaning that highly correlated data are grouped together. Finally, experimental results on five real datasets show that our proposed method outperforms several representative clustering methods. Copyright © 2019, The Authors. All rights reserved.
    Accession Number: 20200306428
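    The boundedness claim is easy to check numerically: for the Cauchy loss rho(r) = (c^2/2)·log(1 + (r/c)^2), the influence function psi(r) = rho'(r) = r / (1 + (r/c)^2) peaks at |psi| = c/2 (attained at r = ±c), whereas the squared loss has psi(r) = r, unbounded. A tiny sketch with c = 1 (the scale value is illustrative):

    ```python
    import numpy as np

    c = 1.0

    def rho(r):
        """Cauchy loss."""
        return (c ** 2 / 2.0) * np.log1p((r / c) ** 2)

    def psi(r):
        """Influence function: derivative of the Cauchy loss."""
        return r / (1.0 + (r / c) ** 2)

    r = np.linspace(-100.0, 100.0, 200001)
    peak = np.max(np.abs(psi(r)))   # stays at c/2 no matter how large r gets
    ```

    A gross outlier at r = 100 thus pulls on the residual estimate no harder than an inlier at r = 1, which is exactly the robustness mechanism the abstract describes.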
  • Record 551 of

    Title:Handheld target probe tip center position calibration for target-based vision measurement system
    Author(s):Ma, Yueyang(1); Zhao, Hong(1); Gu, Feifei(2,3); Zhang, Chunwei(1); Zhao, Zixin(1); Zhang, Gaopeng(1,4); Li, Kejia(1)
    Source: Measurement Science and Technology  Volume: 30  Issue: 6  DOI: 10.1088/1361-6501/ab0c5a  Published: May 13, 2019  
    Abstract:Calibration of the handheld target probe tip center position (PTCP) is an essential procedure in a target-based vision measurement system (T-VMS). At present, handheld PTCP calibration methods typically rely on the least squares (LS) method, which is easily affected by noise and may produce sizable errors if the target is placed improperly. This paper proposes a regularized total least squares (RTLS) method for handheld target PTCP calibration. Feature points on the handheld target are first matched with a robust strategy and then positioned precisely by compensating the center-extraction deviation. Fixed-radius constraint equations derived from the three-dimensional coordinates of the feature points are used to establish an errors-in-variables (EIV) model of the PTCP. Finally, Tikhonov regularization and the L-curve method are applied to find the optimal solution of the EIV model, i.e. the PTCP coordinates. The proposed method was tested for accuracy both in the laboratory and on site. Practical data demonstrate that it calibrates the PTCP of a handheld target effectively and with better accuracy. © 2019 IOP Publishing Ltd.
    Accession Number: 20192607088136
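    The fixed-radius constraint at the heart of this calibration says that as the target pivots around the stationary probe tip, each feature point stays on a sphere centered at the tip. The plain LS baseline the paper improves on can be sketched as a linear sphere fit; the data here are synthetic and noiseless, and the RTLS/Tikhonov refinement is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    tip = np.array([0.5, -1.2, 3.0])   # unknown probe tip center (ground truth)
    R = 2.0                            # fixed feature-point-to-tip radius

    # synthetic feature-point positions as the target pivots around the fixed tip
    dirs = rng.standard_normal((200, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = tip + R * dirs

    # |p|^2 = 2 p.c + (R^2 - |c|^2): linear in the unknowns c and (R^2 - |c|^2)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = np.sum(pts ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c_hat = sol[:3]
    R_hat = np.sqrt(sol[3] + c_hat @ c_hat)
    ```

    With noisy coordinates on both sides of this system, ordinary LS becomes biased; that is the errors-in-variables situation the paper's RTLS formulation targets.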
  • Record 552 of

    Title:Speckle Reduction for Fourier Ptychographic Reconstruction Using Gamma-Correction and Reshaped Wirtinger Flow Optimization
    Author(s):Li, Zhixin(1,2); Wen, Desheng(1); Song, Zongxi(1); Liu, Gang(1,2); Zhang, Weikang(1,2); Wei, Xin(1,2); Jiang, Tuochi(1,2)
    Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)  Volume: 11902 LNCS  Issue:   DOI: 10.1007/978-3-030-34110-7_31  Published: 2019  
    Abstract:Fourier ptychography is a recently reported computational imaging technique that has lately been applied to long-distance, sub-diffraction imaging. Compared to conventional Fourier ptychographic microscopy, the captured images exhibit pronounced laser speckle noise. In this work, a new framework is proposed to suppress speckle noise and reconstruct a high-resolution image of a diffuse object. We introduce a random phase to simulate the effect of a rough surface during the imaging process, and then recover the high-resolution spectrum in two steps: the first enhances the noisy captured images using gamma correction, and the second recovers the Fourier spectrum using reshaped Wirtinger flow optimization. Experiments on both simulated and real data demonstrate that the proposed method incorporates speckle-noise reduction into the reconstruction process and achieves better results on both visual and quantitative metrics than previous work. © 2019, Springer Nature Switzerland AG.
    Accession Number: 20195207929638
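    Reshaped Wirtinger flow, the optimizer named above, minimizes the amplitude-based loss (1/2m)·sum((|a_i.z| - y_i)^2) by gradient descent from a spectral initialization. The following real-valued toy instance uses random Gaussian measurements, not the paper's ptychographic forward model; sizes and step size are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 20, 200
    x_true = rng.standard_normal(n)
    A = rng.standard_normal((m, n))
    y = np.abs(A @ x_true)             # magnitude-only measurements, phase lost

    def loss(z):
        return 0.5 * np.mean((np.abs(A @ z) - y) ** 2)

    # spectral initialization: top eigenvector of the y-weighted correlation matrix,
    # scaled by the norm estimate sqrt(mean(y^2))
    Y = (A.T * y) @ A / m
    z = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(np.mean(y ** 2))
    loss0 = loss(z)

    mu = 0.5                           # step size (illustrative)
    for _ in range(500):
        Az = A @ z
        grad = A.T @ (Az - y * np.sign(Az)) / m   # reshaped-WF gradient
        z = z - mu * grad

    # the global sign is unrecoverable from magnitudes; compare up to sign
    dist = min(np.linalg.norm(z - x_true),
               np.linalg.norm(z + x_true)) / np.linalg.norm(x_true)
    ```

    In the paper this update runs on complex spectra with a ptychographic sampling operator in place of A, after the gamma-correction preprocessing step.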