Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human Activity Reasoning

MVIG-RHOS, SJTU

Human reasoning can be understood as a cooperation between the intuitive, associative ``System-1'' and the deliberative, logical ``System-2''. For existing System-1-like methods in visual activity understanding, integrating System-2 processing is crucial for improving explainability, generalization, and data efficiency. One possible path toward activity reasoning is to build a symbolic system composed of symbols and rules, where each rule connects multiple symbols, encoding human knowledge and reasoning abilities. Previous methods have made progress but remain limited: their symbols are handcrafted and their rules are derived from visual-based annotations, so they fail to cover the complex patterns of activities and lack compositional generalization. To overcome these defects, we propose a new symbolic system with two ideal properties: broad-coverage symbols and rational rules. Since collecting massive human knowledge via manual annotation is too expensive to instantiate such a system, we instead leverage the recent advancement of LLMs (Large Language Models) as an approximation of the two ideal properties, i.e., Symbols from Large Language Models (Symbol-LLM). Then, given an image, visual contents are extracted and checked as symbols, and activity semantics are reasoned out from the rules via fuzzy logic calculation (a minimal sketch of this reasoning step is given below). Our method shows superiority in extensive activity understanding tasks.
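The following is a minimal, illustrative sketch of rule-based fuzzy reasoning over symbol confidences: each rule's truth value combines the confidences of its symbols with a product t-norm, and an activity's score is the maximum over its rules. All names here (`Rule`, `score_activity`, `symbol_probs`) are hypothetical and the actual codebase may use a different t-norm or aggregation scheme; see the paper and code for details.

```python
# Hypothetical sketch of fuzzy-logic rule evaluation over visual symbols.
# Not the Symbol-LLM implementation; names and the product/max scheme are assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Rule:
    """A rule links a set of symbols (premises) to one activity (conclusion)."""
    activity: str
    symbols: List[str]


def score_activity(activity: str,
                   rules: List[Rule],
                   symbol_probs: Dict[str, float]) -> float:
    """Fuzzy truth value of an activity given per-symbol confidences.

    Each rule's truth is the product t-norm of its symbols' confidences
    (missing symbols count as 0); the activity score is the max over its rules.
    """
    scores = []
    for rule in rules:
        if rule.activity != activity:
            continue
        truth = 1.0
        for sym in rule.symbols:
            truth *= symbol_probs.get(sym, 0.0)
        scores.append(truth)
    return max(scores, default=0.0)


if __name__ == "__main__":
    # Symbol confidences as they might come from a visual checker.
    probs = {"person": 0.95, "bicycle": 0.9, "hand_on_handlebar": 0.7}
    rules = [Rule("ride_bicycle", ["person", "bicycle", "hand_on_handlebar"])]
    print(score_activity("ride_bicycle", rules, probs))  # ~0.60
```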

News and Olds

[2023.12] arXiv Released.
[2023.11] Code Released.
[2023.10] Symbol-LLM will appear at NeurIPS 2023.

Publications

If you find our paper, data, or code useful, please cite:
@inproceedings{wu2023symbol,
  title={Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human Activity Reasoning},
  author={Wu, Xiaoqian and Li, Yong-Lu and Sun, Jianhua and Lu, Cewu},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023}
}