As an extension of diffusion models, consistency models reduce the sampling procedure to a single step when synthesizing an image, thereby significantly enhancing the efficiency of image generation. They also support multi-step generation, providing a flexible trade-off between sample quality and computational cost. Despite these advantages, traditional consistency models suffer from a loss of high-frequency image details. This issue is attributed to the inherent regression-to-the-mean property of the L2 training loss, which impedes overall improvement of model performance on image generation. In this paper, we propose a novel consistency adversarial model (CAM) that recovers high-frequency image details through adversarial generation. Specifically, we treat the consistency model as an image generator and train it adversarially against an additional image discriminator, which is optimized jointly with the generator. The discriminator penalizes the generator whenever it synthesizes images lacking high-frequency details, so the trained model produces samples that retain these details and achieves improved performance. Extensive experiments demonstrate the effectiveness of our proposed method: CAM outperforms the traditional consistency model on two challenging benchmarks, ImageNet and LSUN.
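The generator/discriminator training scheme described above can be sketched as follows. This is a minimal illustration only: the ConsistencyModel and Discriminator stand-ins, the hyperparameters, and the simplified reconstruction term used in place of the full consistency objective are all assumptions, not the paper's actual architecture or losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConsistencyModel(nn.Module):
    """Toy stand-in: maps a noisy image and a timestep to a denoised sample."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )
    def forward(self, x_t, t):
        # Broadcast the scalar timestep as an extra input channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, t_map], dim=1))

class Discriminator(nn.Module):
    """Toy stand-in: scores how 'real' (detail-rich) an image looks."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one logit per image

gen, disc = ConsistencyModel(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

real = torch.randn(4, 3, 32, 32)                            # stand-in real batch
t = torch.rand(4)                                           # random timesteps
x_t = real + torch.randn_like(real) * t.view(-1, 1, 1, 1)   # noised input

# Discriminator step: distinguish real images from generated ones.
fake = gen(x_t, t).detach()
loss_d = (F.binary_cross_entropy_with_logits(disc(real), torch.ones(4)) +
          F.binary_cross_entropy_with_logits(disc(fake), torch.zeros(4)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: reconstruction loss plus an adversarial term that
# penalizes samples the discriminator flags as lacking detail.
fake = gen(x_t, t)
loss_g = (F.mse_loss(fake, real) +
          F.binary_cross_entropy_with_logits(disc(fake), torch.ones(4)))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The key point of the scheme is the second generator term: because the L2 loss alone regresses toward the mean, the adversarial term supplies the gradient signal that pushes generated samples back toward the detail-rich data distribution.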
Referring image segmentation, a fundamental computer vision task, aims to segment the object described by a natural-language expression from an image. The task is challenging because vision and language features must be aligned and fused effectively. For alignment, pre-trained CLIP is widely used across vision-language tasks owing to its notable success in aligning the two modalities. However, in the majority of existing methods, vision and language information remain independent during the encoder stage, which is a suboptimal fusion strategy. In this paper, we introduce an innovative CLIP-Driven Hierarchical Fusion framework named CHRIS. We utilize CLIP as the encoder for its valuable vision-language alignment, and we design an effective early-fusion approach in the encoder stage called hierarchical attention. Moreover, we introduce a novel hierarchical fusion neck that further fuses the vision and language features produced by CLIP. We perform comprehensive experiments on three datasets widely adopted by the research community: RefCOCO, RefCOCO+, and G-Ref. Our framework outperforms previous approaches while using only a ResNet backbone.
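To make the fusion-neck idea concrete, here is a minimal sketch of cross-modal fusion over multiple encoder levels, in the spirit of the hierarchical fusion described above. The actual CHRIS design is more elaborate; the module names (FusionBlock, HierarchicalFusionNeck), feature shapes, and the use of plain cross-attention are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Fuses one level of vision features with language features via
    cross-attention (vision tokens attend to word tokens)."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
    def forward(self, vis, lang):
        # vis: (B, N_patches, dim), lang: (B, N_words, dim)
        fused, _ = self.attn(query=vis, key=lang, value=lang)
        return self.norm(vis + fused)  # residual connection

class HierarchicalFusionNeck(nn.Module):
    """Applies one fusion block per encoder level, then merges the levels."""
    def __init__(self, dim=512, levels=3):
        super().__init__()
        self.blocks = nn.ModuleList([FusionBlock(dim) for _ in range(levels)])
        self.merge = nn.Linear(dim * levels, dim)
    def forward(self, vis_feats, lang):
        fused = [blk(v, lang) for blk, v in zip(self.blocks, vis_feats)]
        return self.merge(torch.cat(fused, dim=-1))

# Stand-ins for multi-level vision features (e.g. from a CLIP backbone)
# and word embeddings from the text encoder.
B, N, W, D = 2, 49, 12, 512
vis_feats = [torch.randn(B, N, D) for _ in range(3)]   # 3 encoder levels
lang = torch.randn(B, W, D)                            # word embeddings
out = HierarchicalFusionNeck()(vis_feats, lang)
print(out.shape)  # torch.Size([2, 49, 512])
```

The design choice this illustrates is fusing language into the vision stream at every feature level rather than only once at the end, so shallow and deep vision features are each conditioned on the referring expression before being merged.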