Transformer (3)

STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition, CVPR'23
Paper link: https://openaccess.thecvf.com/content/CVPR2023/papers/Zhu_STMT_A_Spatial-Temporal_Mesh_Transformer_for_MoCap-Based_Action_Recognition_CVPR_2023_paper.pdf
Abstract: Existing methods for MoCap-based action recognition take skeletons as input, which requires an extra manual mapping step and loses body shape information. We propose a novel method that directly models raw mesh sequences, which ca.. (2023. 7. 21.)

Mesh Graphormer, ICCV'21
Paper link: https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Mesh_Graphormer_ICCV_2021_paper.pdf
1. Introduction: Transformers are good at modeling long-range dependencies on the input tokens, but they are less efficient at capturing fine-grained local information. Convolution layers, on the other hand, are useful for extracting local features, but many layers are required to capture global con.. (2023. 3. 26.)

Actor-Transformers for Group Activity Recognition, CVPR'20
Paper link: https://openaccess.thecvf.com/content_CVPR_2020/papers/Gavrilyuk_Actor-Transformers_for_Group_Activity_Recognition_CVPR_2020_paper.pdf
1. Introduction: We hypothesize a transformer network can also better model relations between actors and combine actor-level information for group activity recognition compared to models that require explicit spatial and temporal constraints. A key enabler is.. (2023. 3. 26.)
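The Mesh Graphormer entry above turns on a single architectural point: self-attention captures long-range dependencies across all tokens, while (graph) convolution captures fine-grained local structure cheaply, so the two can be combined in one block. The sketch below is a minimal illustration of that idea only, not the paper's actual architecture; the module names, mesh size, feature width, and random adjacency are all illustrative assumptions.

```python
# Minimal sketch: a transformer block with an extra graph-convolution branch,
# pairing global self-attention with local neighborhood aggregation.
# All names and sizes here are assumptions for illustration.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """Simple graph convolution: mix each vertex with its neighbors via a fixed adjacency."""

    def __init__(self, dim, adjacency):
        super().__init__()
        # Row-normalize the (num_vertices x num_vertices) adjacency matrix.
        adj = adjacency / adjacency.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        self.register_buffer("adj", adj)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, num_vertices, dim)
        return self.proj(self.adj @ x)


class HybridBlock(nn.Module):
    """Transformer encoder block augmented with a graph-convolution branch."""

    def __init__(self, dim, num_heads, adjacency):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.graph_conv = GraphConv(dim, adjacency)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # global context across all vertices
        x = x + self.graph_conv(self.norm2(x))             # local structure from mesh topology
        x = x + self.mlp(self.norm3(x))
        return x


if __name__ == "__main__":
    num_vertices, dim = 431, 64                              # coarse-mesh size is an assumption
    adjacency = (torch.rand(num_vertices, num_vertices) > 0.95).float()
    block = HybridBlock(dim, num_heads=4, adjacency=adjacency)
    tokens = torch.randn(2, num_vertices, dim)               # (batch, vertices, features)
    print(block(tokens).shape)                               # torch.Size([2, 431, 64])
```

The design choice the excerpt argues for shows up in the two residual branches: the attention branch lets every vertex attend to every other vertex in one step, while the graph-convolution branch injects mesh connectivity without stacking many layers.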