Accurate and fast segmentation of the pelvic region from T2-weighted MR images benefits the treatment of pelvis-related diseases. Nevertheless, segmenting the pelvic region remains challenging due to its varying scale range, similar tissue intensities, and blurred edges. Although the well-known encoder-decoder convolutional neural network has achieved great success in medical image segmentation, its plain convolutional layers are insufficient to adequately extract and propagate features. Moreover, this architecture does not fully exploit contextual information at different scales and sometimes fails to capture long-range dependencies. To address these challenges, this paper proposes DFM-Net, which uses the Dense Block as its basic module to extract features and enhance feature propagation. To handle the similarity of tissue intensities, a non-local Feature Similarity Module (FSM) is introduced to capture long-range dependencies within the pelvic structure. To enrich the extraction of global and local context, a new multi-scale method optimizes the model during the up-sampling stage of the decoder. Finally, experiments and evaluations were performed on a T2-weighted MR image data set of the pelvis. The encouraging results show that the proposed method outperforms five other segmentation methods.
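The FSM above is described as a non-local operation. A minimal NumPy sketch of an embedded-Gaussian non-local block illustrates the idea; the 1x1 embedding convolutions are dropped (treated as identities) for brevity, so this is an illustration of the general technique, not the paper's exact module:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x):
    """Embedded-Gaussian non-local operation on a (C, H, W) feature map.
    Every spatial position attends to every other position, so the output
    mixes in long-range context. The learnable 1x1 embeddings of the full
    module are replaced by identity mappings here (an assumption)."""
    c, h, w = x.shape
    flat = x.reshape(c, h * w)                   # (C, N): one column per position
    affinity = softmax(flat.T @ flat, axis=-1)   # (N, N) pairwise similarity weights
    y = flat @ affinity.T                        # aggregate features from all positions
    return y.reshape(c, h, w) + x                # residual connection
```

Because each row of `affinity` is softmax-normalized, the output at every position is a convex combination of features from the whole map, which is what lets such a block capture dependencies beyond the receptive field of ordinary convolutions.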
Facial makeup transfer is an active research topic in computer vision that aims to transfer a reference face's makeup style to a non-makeup face. Existing work uses an adversarial loss to keep the face's identity information consistent before and after makeup transfer, but input faces often exhibit large pose deflections and expression changes, which severely degrade the transfer result. This paper proposes a facial makeup transfer framework based on a multi-scale feature loss. The model consists of a generator, a discriminator, and a multi-scale discriminator. The reference makeup face and the non-makeup face are fed into the generator simultaneously, and the generator outputs a made-up face that preserves the non-makeup face's identity information while adopting the reference makeup style. To enhance the robustness of the makeup result and improve the makeup effect, the output made-up face and the input non-makeup face are fed into the multi-scale discriminator to compute a feature loss: the first input is the pixel-wise product of the made-up face and its semantic segmentation map, and the second is the pixel-wise product of the non-makeup face and its semantic segmentation map; the feature loss between the two inputs is then computed across the multi-scale discriminator's layers. During training, this feature loss constrains the pixel differences within each semantic region, suppressing the shadows and makeup overflow caused by pose deflection and facial expression. Experimental results show that the proposed method achieves better makeup transfer than existing methods.
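The masked multi-scale feature loss described above can be sketched as follows. The names `extract`, `downsample`, and the L1 feature-matching form are assumptions made for illustration; the paper's discriminator architecture and exact loss may differ:

```python
import numpy as np

def downsample(img):
    """2x average-pool downsampling of an (H, W, C) image, used to build
    the input for the next, coarser discriminator scale."""
    h, w = img.shape[:2]
    img = img[:h - h % 2, :w - w % 2]
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def feature_loss(feats_a, feats_b):
    """L1 feature-matching loss over corresponding discriminator layers."""
    return sum(np.abs(fa - fb).mean() for fa, fb in zip(feats_a, feats_b))

def multiscale_feature_loss(made_up, seg_a, plain, seg_b, extract, n_scales=3):
    """Compare the two segmentation-masked faces at several scales.
    `extract` stands in for a discriminator's intermediate feature maps
    (a hypothetical callable, not the paper's network)."""
    in1 = made_up * seg_a   # pixel-wise product with the semantic segmentation
    in2 = plain * seg_b
    loss = 0.0
    for _ in range(n_scales):
        loss += feature_loss(extract(in1), extract(in2))
        in1, in2 = downsample(in1), downsample(in2)
    return loss
```

Masking both inputs by their segmentation maps before feature extraction is what ties the loss to semantic regions, so that differences in a region such as the lips or eye shadow are penalized per region rather than globally.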
Texture synthesis constructs a large digital image from a small sample by exploiting its structural content. However, many texture synthesis approaches suffer from broken features at the overlaps of adjacent patches. Owing to inaccurate similarity measures, optimization schemes for patch merging may fail when they cannot find satisfactory candidates in the input sample. This work introduces the self-similarity of candidate patches and develops an algorithm that performs distance-error matching and alignment via the sum of self-similarity. First, as the matching window moves along the scan line over the texture sample, the sum of self-similarity replaces the sum of squared differences, reducing broken features or texels in the output. Next, synthesis is accelerated by eliminating redundant calculations as the matching window slides. Finally, the proposed method enlarges the seam search range from one patch to all patches in the horizontal direction, so broken features at overlaps are eliminated from the outputs. To demonstrate its performance, the synthesis results are compared with those of conventional synthesis methods. In all cases, the experimental results show that the self-similarity matching-based method reduces feature discontinuities and shortens synthesis time.
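The sliding-window matching step can be sketched as below. The concrete self-similarity descriptor used here (each pixel's deviation from the patch mean) is a hypothetical instantiation chosen for illustration, since the abstract does not define the measure; the scan-line sweep over candidate windows is the part taken from the text:

```python
import numpy as np

def self_similarity(patch):
    """A simple self-similarity descriptor: each pixel's difference from the
    patch mean. This is one plausible choice (an assumption); the paper's
    exact definition may differ."""
    return patch - patch.mean()

def best_match(sample, patch):
    """Slide a matching window over `sample` along scan lines and return the
    top-left corner whose self-similarity descriptor is closest to that of
    `patch`, replacing a plain sum-of-squared-differences comparison."""
    ph, pw = patch.shape
    target = self_similarity(patch)
    best, best_pos = np.inf, (0, 0)
    for i in range(sample.shape[0] - ph + 1):        # scan-line sweep
        for j in range(sample.shape[1] - pw + 1):
            cand = sample[i:i + ph, j:j + pw]
            d = np.abs(self_similarity(cand) - target).sum()
            if d < best:
                best, best_pos = d, (i, j)
    return best_pos
```

A real implementation would also cache window statistics between adjacent positions, which is the redundancy-elimination speed-up the abstract mentions; the brute-force loop above recomputes them for clarity.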