Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models
Yuchen Yang*, Kwonjoon Lee, Behzad Dariush, Yinzhi Cao*, Shao-Yuan Lo*
Abstract
Video Anomaly Detection (VAD) is crucial for applications such as security surveillance and autonomous driving. However, existing VAD methods provide little rationale behind their detections, hindering public trust in real-world deployments. In this paper, we approach VAD with a reasoning framework. Although Large Language Models (LLMs) have shown revolutionary reasoning ability, we find that their direct use falls short for VAD. Specifically, the implicit knowledge pre-trained into LLMs focuses on general context and thus may not apply to every specific real-world VAD scenario, leading to inflexibility and inaccuracy. To address this, we propose AnomalyRuler, a novel rule-based reasoning framework for VAD with LLMs. AnomalyRuler comprises two main stages: induction and deduction. In the induction stage, the LLM is fed few-shot normal reference samples and then summarizes these normal patterns to induce a set of rules for detecting anomalies. The deduction stage follows the induced rules to spot anomalous frames in test videos. Additionally, we design rule aggregation, perception smoothing, and robust reasoning strategies to further enhance AnomalyRuler's robustness. AnomalyRuler is the first reasoning approach for the one-class VAD task, requiring only few-normal-shot prompting without full-shot training, thereby enabling fast adaptation to various VAD scenarios. Comprehensive experiments across four VAD benchmarks demonstrate AnomalyRuler's state-of-the-art detection performance and reasoning ability. AnomalyRuler is open-source and available at: https://github.com/Yuchen413/AnomalyRuler
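The two-stage induction–deduction flow described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `query_llm` is a hypothetical stand-in for a real LLM call (implemented here as a trivial keyword matcher so the example runs offline), and the prompts and rule text are invented for illustration.

```python
# Minimal sketch of an induction-deduction pipeline for rule-based VAD.
# `query_llm` is a hypothetical LLM stub, NOT a real model call.

def query_llm(prompt: str) -> str:
    """Stub LLM: 'induces' rules or 'deduces' labels via keyword matching."""
    if prompt.startswith("Summarize"):
        # Induction: summarize normal reference descriptions into rules.
        return "Rule 1: walking is normal. Rule 2: standing is normal."
    # Deduction: extract the frame description and check it against the rules.
    frame = prompt.split("Frame: ")[-1].split("\n")[0]
    return "normal" if any(w in frame for w in ("walking", "standing")) else "anomaly"

def induce_rules(normal_samples: list[str]) -> str:
    """Induction stage: few-shot normal samples -> a set of normality rules."""
    prompt = "Summarize the normal patterns in: " + "; ".join(normal_samples)
    return query_llm(prompt)

def deduce(rules: str, frame_description: str) -> str:
    """Deduction stage: apply the induced rules to one test-frame description."""
    prompt = f"Rules: {rules}\nFrame: {frame_description}\nLabel:"
    return query_llm(prompt)

rules = induce_rules(["a person walking", "a person standing"])
labels = [deduce(rules, d) for d in ["a person walking", "a person fighting"]]
print(labels)  # → ['normal', 'anomaly']
```

In the actual framework, the deduction stage additionally applies rule aggregation, perception smoothing, and robust reasoning over per-frame outputs; those robustness strategies are omitted here for brevity.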