In magnetic resonance imaging (MRI), strong signal responses usually appear as structural edges and textures, which are important for distinguishing different tissues and lesions. In current deep-learning-based super-resolution (SR) methods, low-level structural information tends to disappear gradually as the network deepens, producing excessive smoothness in high-frequency regions. This phenomenon is particularly noticeable in MR images with poor brightness contrast and a small dynamic range of gray levels. Although generative adversarial networks (GANs) can restore structured textures well in natural images, they are likely to hallucinate patterns that do not exist in the images, which poses risks for the reconstruction of medical images. We therefore propose an enhanced gradient guiding network (EG2N) to alleviate these problems. On the one hand, to improve contrast and suppress noise effectively, we apply multi-scale wavelet enhancement as preprocessing and treat the enhanced gradient map as a structural prior. On the other hand, because blindly using dense connections in the feed-forward network introduces redundancy, structural features from an additional branch are added to specific layers to supplement high-level features and constrain optimization. A feedback mechanism is added to promote cross-layer flow between low-level and high-level features, and a perceptual loss is used to avoid distortion caused by excessive smoothing. Experimental results show that our method achieves the best visual quality and excellent quantitative performance compared with state-of-the-art methods on the most popular MR image SR benchmarks.
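The abstract does not specify the wavelet-enhancement pipeline, so the following is only a minimal one-dimensional Haar sketch of the general idea it names: decompose the signal, amplify the detail (high-frequency) coefficients, and invert. All function names and the `gain` parameter are ours, not the paper's.

```python
def haar_1d(signal):
    """One level of the 1-D Haar transform: pairwise averages (approximation)
    and pairwise half-differences (detail)."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def ihaar_1d(approx, detail):
    """Invert one level of the 1-D Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def wavelet_enhance(signal, gain=1.5):
    """Boost high-frequency (detail) coefficients by `gain`, then reconstruct.
    gain > 1 sharpens edges; gain = 1 reproduces the input exactly."""
    approx, detail = haar_1d(signal)
    detail = [gain * d for d in detail]
    return ihaar_1d(approx, detail)

# Edge [0, 4, 4, 0] with gain 2 has its transitions exaggerated:
print(wavelet_enhance([0.0, 4.0, 4.0, 0.0], gain=2.0))  # → [-2.0, 6.0, 6.0, -2.0]
```

A multi-scale version would recurse on the approximation coefficients and apply a per-level gain; the paper additionally feeds the gradient map of the enhanced image into the network as a structural prior.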
To solve the problem of monocular pose measurement for non-projected-axisymmetric targets, a contour image length matching method is proposed. First, the target contour is simplified into a set of triangles based on the nose point, which removes information redundancy and handles visual occlusion. Second, the simplified target is projected onto the image plane to obtain virtual image lengths, while the actual image lengths are extracted from the image; the pitch, yaw, and roll angles are then obtained by matching the virtual and actual image lengths. An aircraft flight calibration test shows that the precision of the pitch, yaw, and roll angles is 0.9, 1.2, and 1.5 degrees respectively, which is slightly lower than that of intersection measurement but saves cost and improves efficiency. Finally, the key factors contributing to the method's error are analyzed. The method can lay an important foundation for monocular pose measurement on test ranges.
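The abstract gives only the outline of the length-matching step, so the following is a minimal brute-force sketch under assumed conventions (pinhole projection, Z-Y-X Euler order): project the simplified 3-D contour at each candidate pose, compute the projected edge lengths, and pick the pose whose virtual lengths best match the measured ones. All names, the camera model, and the grid search are our illustrative assumptions, not the paper's method.

```python
import math
from itertools import product

def euler_matrix(pitch, yaw, roll):
    """Rotation matrix R = Rz(roll) @ Ry(yaw) @ Rx(pitch), angles in radians."""
    cx, sx = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    cz, sz = math.cos(roll), math.sin(roll)
    rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(rz, matmul(ry, rx))

def project_lengths(points, pitch, yaw, roll, f=1000.0, depth=50.0):
    """Rotate the 3-D contour vertices, push them `depth` units along the
    optical axis, pinhole-project them, and return the projected edge lengths."""
    r = euler_matrix(pitch, yaw, roll)
    img = []
    for p in points:
        x, y, z = (sum(r[i][j] * p[j] for j in range(3)) for i in range(3))
        z += depth
        img.append((f * x / z, f * y / z))
    n = len(img)
    return [math.dist(img[i], img[(i + 1) % n]) for i in range(n)]

def match_pose(points, observed_lengths, angle_grid):
    """Brute-force the Euler-angle grid, minimizing squared length error
    between virtual (projected) and observed image lengths."""
    best, best_err = None, float("inf")
    for p, y, r in product(angle_grid, angle_grid, angle_grid):
        virtual = project_lengths(points, p, y, r)
        err = sum((a - b) ** 2 for a, b in zip(virtual, observed_lengths))
        if err < best_err:
            best, best_err = (p, y, r), err
    return best, best_err
```

A practical implementation would refine the coarse grid result with a local optimizer and handle the pose ambiguities that pure length matching can admit for symmetric contours.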