Universal adversarial perturbation (UAP) attacks are widely used to analyze image classifiers that employ convolutional neural networks. In this paper, we make the first attempt at attacking differentiable no-reference image- and video-quality metrics through UAPs. The goal of an attack on a quality metric is to increase the quality score of an output image when its visual quality does not actually improve. The development of new attacks plays an important role in the vulnerability analysis of quality metrics: when developers of image- and video-processing algorithms can boost metric scores through preprocessing, objective algorithm comparisons are no longer fair. Inspired by the idea of UAPs for classifiers, we trained UAPs for seven no-reference image- and video-quality metrics (PaQ-2-PiQ, Linearity, VSFA, MDTVSFA, KonCept512, NIMA and SPAQ) to increase their respective scores. We treated the UAP as network weights and applied standard deep-learning training techniques. We then applied the trained UAPs to FullHD video frames before compression and proposed a method, based on RD curves, for comparing metric stability and identifying the metrics most resistant to UAP attacks. The existence of a successful UAP appears to diminish a metric's ability to provide reliable scores. We recommend the proposed method as an additional verification of metric reliability, complementing traditional subjective tests and benchmarks.
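The core idea, treating the perturbation itself as trainable weights and ascending the gradient of the metric score, can be illustrated with a minimal sketch. The "metric" below is a toy differentiable stand-in (the paper attacks real metrics such as PaQ-2-PiQ via their actual gradients); the image size, step size, and L-infinity bound `eps` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "quality metric": a fixed weighted sum of tanh responses.
# Purely a stand-in for a real no-reference metric network.
w = rng.normal(size=(8, 8))
def metric(img):
    return float(np.sum(w * np.tanh(img)))

# A small training set of images with pixel values in [0, 1].
images = [rng.uniform(size=(8, 8)) for _ in range(16)]

eps = 0.1   # L_inf bound on the universal perturbation (assumed)
lr = 0.01   # sign-gradient step size (assumed)

# One shared perturbation for all images, optimized like network weights:
# gradient ascent on the mean metric score w.r.t. the perturbation itself.
uap = np.zeros((8, 8))
for _ in range(50):
    # Analytic gradient of the toy metric: d/du sum(w * tanh(x + u)).
    grad = np.mean([w * (1.0 - np.tanh(x + uap) ** 2) for x in images], axis=0)
    uap = np.clip(uap + lr * np.sign(grad), -eps, eps)

before = float(np.mean([metric(x) for x in images]))
after = float(np.mean([metric(np.clip(x + uap, 0.0, 1.0)) for x in images]))
# The single shared perturbation raises the average score across all images,
# even though it adds no real visual quality.
```

In the paper's setting the analytic gradient is replaced by backpropagation through the differentiable metric network; the attack succeeds when `after > before` on images the perturbation was not trained on.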