We examine crowdsourcing for subjective image quality evaluation using real image stimuli with nonsimulated distortions. Our aim is to scale the task of subjectively rating images while ensuring maximal data validity and accuracy. While previous work has begun to explore crowdsourcing for quality assessment, it has either used images that are not representative of popular consumer scenarios or collected crowdsourced data without comparison against experiments conducted in a controlled environment. Here, we address the challenges imposed by the highly variable online environment, using stimuli that are subtler and more complex than those traditionally used in quality assessment experiments. In a series of experiments, we vary different design parameters and demonstrate how they impact the subjective responses obtained. Among the parameters examined are stimulus display mode, study length, stimulus habituation, and content homogeneity/heterogeneity. Our method was tested on a database that had previously been rated in a laboratory study. Once our design parameters were chosen, we rated a database of consumer photographs and are making these data available to the research community.
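As a minimal illustration of how crowdsourced ratings can be checked against a previously collected laboratory ground truth, the sketch below computes per-image mean opinion scores (MOS) from crowd ratings and correlates them with lab MOS. This is not the authors' analysis code; all arrays and values are hypothetical placeholders.

```python
# Minimal sketch: validating crowdsourced ratings against laboratory MOS.
# All data below are hypothetical placeholders, not values from the study.
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-image mean opinion scores (MOS) from a lab study.
lab_mos = np.array([4.1, 3.2, 2.8, 4.6, 1.9, 3.7])

# Hypothetical raw crowd ratings, keyed by image index.
crowd_ratings = {
    0: [4, 4, 5, 3], 1: [3, 3, 4, 3], 2: [3, 2, 3, 3],
    3: [5, 4, 5, 5], 4: [2, 2, 1, 2], 5: [4, 3, 4, 4],
}

# Average the crowd ratings per image to obtain a crowd MOS.
crowd_mos = np.array([np.mean(r) for _, r in sorted(crowd_ratings.items())])

# Agreement between the two test environments: Spearman rank-order
# correlation (monotonic agreement) and Pearson linear correlation.
srocc, _ = spearmanr(lab_mos, crowd_mos)
plcc, _ = pearsonr(lab_mos, crowd_mos)
print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```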
KEYWORDS: Video, Video surveillance, Databases, Video compression, Visualization, Statistical modeling, Statistical analysis, Visual process modeling, Feature extraction, Data modeling
We propose a no-reference (NR) video quality assessment (VQA) model. Recently, ‘completely blind’ still-picture quality analyzers have been proposed that do not require any prior training on, or exposure to, distorted images or human opinions of them. We bridge an important but difficult gap by creating a ‘completely blind’ VQA model. The new approach is founded on intrinsic statistical regularities that are observed in natural videos. The result is a video ‘quality analyzer’ that can predict the quality of distorted videos without any external knowledge about the pristine source, anticipated distortions, or human judgments; hence, the model is zero-shot. Experimental results show that, even with such a paucity of information, the new VQA algorithm outperforms the full-reference (FR) quality measure PSNR on the LIVE VQA database. It is also fast and efficient. We envision the proposed method as an important step toward making real-time, ‘completely blind’ video quality monitoring feasible.
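To make the underlying idea concrete, the sketch below shows one simplified way an ‘opinion-unaware’ quality analyzer can score a video: compute frame-difference coefficients, normalize them locally, and measure how far their statistics depart from the regularities expected of natural videos. This is an illustrative simplification under our own assumptions (a Gaussian-likeness criterion on normalized coefficients), not the authors' algorithm.

```python
# Illustrative sketch of NSS-based, opinion-unaware video quality scoring.
# Assumption: natural frame-difference MSCN coefficients are near-Gaussian,
# so large excess kurtosis signals a departure from natural-video statistics.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import kurtosis

def mscn(frame, sigma=7.0 / 6.0, c=1.0):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients of a frame."""
    mu = gaussian_filter(frame, sigma)
    var = gaussian_filter(frame * frame, sigma) - mu * mu
    return (frame - mu) / (np.sqrt(np.clip(var, 0, None)) + c)

def naturalness_deviation(frames):
    """Higher value = larger departure from natural-video statistics."""
    devs = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        diff = cur.astype(np.float64) - prev.astype(np.float64)
        coeffs = mscn(diff)
        # Fisher kurtosis of a Gaussian is ~0; distortion perturbs this.
        devs.append(abs(kurtosis(coeffs, axis=None)))
    return float(np.mean(devs))

# Usage with synthetic frames (stand-ins for real luminance frames).
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(64, 64)) for _ in range(5)]
print(naturalness_deviation(frames))
```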
Although a variety of successful no-reference (blind) picture quality analyzers have been proposed, progress on the blind video quality analysis problem has been slow. We break the problem of perceptual blind video quality assessment (VQA) down into components, which we address individually before proposing a holistic solution. The idea is to tackle the challenges that comprise the blind VQA problem one at a time in order to gain a better understanding of the problem as a whole.