Low-cost, efficient observation devices and methods are desirable to supplement rainfall observation networks and obtain high-quality rainfall data. Ubiquitous urban surveillance cameras, which record rainfall video and audio, have shown great potential for rainfall estimation. Audio has gained more attention because it can be recorded in any weather and occupies less storage than video, and several surveillance audio-based rainfall estimation (SARE) methods have been proposed. However, in urban acoustic scenes where noise is unavoidable, the current SARE procedure lacks a noise-processing stage, which can degrade estimation accuracy. We put forward a parallel neural network that addresses this noise challenge with an attention mechanism. First, one channel of the network is designed to process noise and then cooperates with the other channel to estimate rainfall information. Then, a divide-and-conquer strategy is employed to calculate rainfall intensity. In experiments on an urban surveillance audio dataset, our method achieves a root mean absolute error of …
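The abstract names two ideas without detail: a two-channel network in which a noise branch cooperates with a rain branch through attention, and a divide-and-conquer intensity calculation. The sketch below illustrates one plausible reading of that design; it is not the paper's implementation. All layer sizes, the gated attention fusion, and the classify-then-regress interpretation of "divide and conquer" (coarse intensity range first, per-range regressor second) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ParallelSARENet(nn.Module):
    """Illustrative two-channel SARE network (hypothetical layout).

    One branch models background noise, the other rain acoustics;
    a learned attention gate fuses them before intensity estimation.
    """

    def __init__(self, hidden=128, n_ranges=4):
        super().__init__()

        def branch():
            # Small CNN over a log-mel spectrogram; sizes are arbitrary.
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(),
                nn.Linear(32, hidden),
            )

        self.rain_branch = branch()
        self.noise_branch = branch()
        # Attention gate: decides, per feature, how much to trust the
        # rain branch versus the noise branch (assumed fusion scheme).
        self.attn = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Sigmoid())
        # Divide-and-conquer head (assumption): classify the clip into a
        # coarse intensity range, then regress within that range.
        self.range_head = nn.Linear(hidden, n_ranges)
        self.regress_heads = nn.ModuleList(
            nn.Linear(hidden, 1) for _ in range(n_ranges)
        )

    def forward(self, spec):
        # spec: (batch, 1, n_mels, time) log-mel spectrogram
        r = self.rain_branch(spec)
        n = self.noise_branch(spec)
        gate = self.attn(torch.cat([r, n], dim=-1))
        fused = gate * r + (1 - gate) * n          # noise-aware fusion
        logits = self.range_head(fused)            # coarse intensity range
        ranges = logits.argmax(dim=-1)
        # Each range has its own regressor for the final intensity value.
        intensity = torch.stack(
            [self.regress_heads[int(k)](fused[i]) for i, k in enumerate(ranges)]
        ).squeeze(-1)
        return logits, intensity

# Minimal usage check with random spectrograms:
logits, intensity = ParallelSARENet()(torch.randn(8, 1, 64, 100))
```

Keeping the noise branch separate, rather than denoising the input first, lets the fusion gate suppress noise evidence adaptively per clip, which matches the abstract's description of the two channels "cooperating" during estimation.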
Keywords: Rain; Surveillance; Acoustics; Data modeling; Education and training; Performance modeling; Background noise