Syntactically Guided Generative Embeddings for Zero-Shot Skeleton Action Recognition


Abstract

We introduce SynSE, a novel syntactically guided generative approach for Zero-Shot Learning (ZSL). Our end-to-end approach learns progressively refined generative embedding spaces constrained within and across the involved modalities (visual, language). The inter-modal constraints are defined between the action sequence embedding and the embeddings of Part-of-Speech (PoS) tagged words in the corresponding action description. We deploy SynSE for the task of skeleton-based action sequence recognition. The paper has been accepted for publication at the 2021 IEEE International Conference on Image Processing (ICIP).
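To make the PoS-guided alignment concrete, the sketch below groups the words of an action description by their PoS tag, since SynSE constrains the visual embedding against verb and noun embeddings separately. This is an illustrative stand-in only: the tiny `POS_LEXICON` is a hypothetical substitute for a real PoS tagger (e.g. NLTK's or spaCy's), and the function names are not part of the released code.

```python
# Illustrative sketch: grouping an action description's words by PoS tag,
# mirroring how SynSE aligns visual embeddings with verb and noun
# embeddings separately. POS_LEXICON is a hypothetical stand-in for a
# real PoS tagger.
POS_LEXICON = {
    "drink": "VERB", "throw": "VERB", "wear": "VERB",
    "water": "NOUN", "jacket": "NOUN", "hat": "NOUN",
}

def pos_group(description):
    """Group the words of an action description by their PoS tag."""
    groups = {"VERB": [], "NOUN": []}
    for word in description.lower().split():
        tag = POS_LEXICON.get(word)
        if tag is not None:
            groups[tag].append(word)
    return groups

print(pos_group("drink water"))  # {'VERB': ['drink'], 'NOUN': ['water']}
print(pos_group("wear jacket"))  # {'VERB': ['wear'], 'NOUN': ['jacket']}
```

In the full model, each PoS group would be embedded by a language model and tied to the skeleton sequence embedding through the generative inter-modal constraints described above.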


Architecture

(Architecture diagram of SynSE)

Citation

    @misc{gupta2021syntactically,
        title={Syntactically Guided Generative Embeddings for Zero-Shot Skeleton Action Recognition},
        author={Pranay Gupta and Divyanshu Sharma and Ravi Kiran Sarvadevabhatla},
        year={2021},
        eprint={2101.11530},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
    }