Weizmann Dataset: a dataset of human action shapes

336.6 MB
Action/Event Detection Classification

    README.md

    Overview: In 2005, the Weizmann Institute of Science in Israel released the Weizmann database. It contains 10 actions (bend, jack, jump, pjump, run, side, skip, walk, wave1, wave2), with 9 different samples of each action. The camera viewpoint is fixed, the background is relatively simple, and each frame contains only one person performing an action.

    In addition to the class labels, the annotations include the foreground silhouettes of the actors and background sequences that can be used for background subtraction.
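
    The following is a minimal sketch, not the dataset's official tooling, of how a provided background sequence could be used for simple background subtraction: threshold the per-pixel difference between a frame and a background image to obtain a foreground silhouette. The array shapes and the threshold value are illustrative assumptions.

```python
import numpy as np

def silhouette(frame, background, thresh=30):
    """frame, background: (H, W) grayscale uint8 arrays -> boolean foreground mask."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

# Toy example: a bright rectangular "person" on a dark, uniform background.
bg = np.full((144, 180), 20, dtype=np.uint8)
fr = bg.copy()
fr[40:100, 60:90] = 200
mask = silhouette(fr, bg)
print(mask.sum())  # 60 * 30 = 1800 foreground pixels
```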

    Abstract

    Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach by Gorelick et al. for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action and low-quality video.



    The PAMI paper (full version, updated results) in PDF (2 MB) format (BibTeX).
    The updated database, including the original silhouette sequences and their aligned versions, as well as the robustness sequences, can be found below.
    The ICCV paper (shorter version) in PDF (2 MB) format (BibTeX).

    Poisson features

    We use the solution of the Poisson equation to extract several space-time features. In the table below we demonstrate these features for three sequences of different actions. The first two columns show the original video sequence and the extracted foreground mask. The third column shows the solution of the Poisson equation, color-coded from blue (low values) to red (high values). The last three columns show the space-time "saliency", "plateness" and "stickness" features that we use. See the paper for details.
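
    As a rough illustration of the underlying quantity (a minimal sketch under simplifying assumptions, not the authors' implementation), the Poisson equation ΔU = -1 can be solved inside a binary space-time silhouette volume with U = 0 on the shape boundary; the features above are then derived from U and its derivatives. The Jacobi iteration and the toy volume below are illustrative only.

```python
import numpy as np

def poisson_solution(mask, iters=500):
    """Jacobi iterations for Delta U = -1 inside `mask` (a T x H x W boolean
    space-time silhouette volume), with U = 0 outside the shape (Dirichlet)."""
    m = np.pad(mask, 1)                       # zero border, so roll wrap-around is harmless
    U = np.zeros(m.shape, dtype=np.float64)
    for _ in range(iters):
        nb = (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
              np.roll(U, 1, 1) + np.roll(U, -1, 1) +
              np.roll(U, 1, 2) + np.roll(U, -1, 2)) / 6.0
        U = np.where(m, nb + 1.0 / 6.0, 0.0)  # discrete Delta U = -1 on interior voxels
    return U[1:-1, 1:-1, 1:-1]

# Toy silhouette volume: a vertical bar translating over 8 frames.
mask = np.zeros((8, 32, 32), dtype=bool)
for t in range(8):
    mask[t, 8:24, 10 + t:14 + t] = True

U = poisson_solution(mask)
print(U.max())  # the largest values lie deep inside the shape ("saliency")
```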



    Experimental Results

    In the paper we report results for four experiments: action clustering, action recognition, robustness experiments and action detection. Here we show the results of the last three.

    Action Recognition:

    We collected a database of 90 low-resolution (180 x 144, deinterlaced, 50 fps) video sequences showing nine different people, each performing 10 natural actions such as run, walk, skip, jumping-jack (or, for short, jack), jump-forward-on-two-legs (or jump), jump-in-place-on-two-legs (or pjump), gallop-sideways (or side), wave-two-hands (or wave2), wave-one-hand (or wave1), or bend.

    In order to treat both the periodic and non-periodic actions in the same framework, as well as to compensate for different period lengths, we used a sliding window in time to extract space-time cubes, each having eight frames with an overlap of four frames between consecutive space-time cubes.
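
    A small sketch of the sliding window described above (the helper below is hypothetical, not part of the dataset): eight-frame space-time cubes with a four-frame overlap, i.e. a stride of four frames between consecutive cubes.

```python
import numpy as np

def extract_cubes(volume, length=8, stride=4):
    """volume: (T, H, W) array of frames -> list of (length, H, W) space-time cubes."""
    T = volume.shape[0]
    return [volume[t:t + length] for t in range(0, T - length + 1, stride)]

frames = np.zeros((50, 144, 180))      # e.g. a 50-frame, 144 x 180 sequence
cubes = extract_cubes(frames)          # cubes starting at t = 0, 4, 8, ..., 40
print(len(cubes), cubes[0].shape)      # 11 (8, 144, 180)
```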

    Below we summarize our recognition rates in "leave-one-sequence-out" classification experiments for both complete sequences and sub-sequences.
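
    A minimal sketch of a "leave-one-sequence-out" experiment, assuming precomputed per-sequence feature vectors and a nearest-neighbour rule with Euclidean distance (these assumptions are for illustration and are not the paper's exact protocol):

```python
import numpy as np

def leave_one_sequence_out(features, labels):
    """features: (N, D) array, labels: length-N list -> fraction classified correctly."""
    features = np.asarray(features, dtype=float)
    correct = 0
    for i in range(len(labels)):
        d = np.linalg.norm(features - features[i], axis=1)
        d[i] = np.inf                         # hold out the test sequence itself
        correct += labels[int(np.argmin(d))] == labels[i]
    return correct / len(labels)

# Toy example with two well-separated classes.
feats = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]]
labs = ["walk", "walk", "run", "run"]
print(leave_one_sequence_out(feats, labs))    # 1.0
```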


    Robustness Experiments:

    In this experiment we demonstrate the robustness of our method to high irregularities in the performance of an action. We collected ten test video sequences of people walking in various difficult scenarios in front of different non-uniform backgrounds (see the sequences and their foreground masks below). We show that our approach has relatively low sensitivity to partial occlusions, non-rigid deformations and other defects in the extracted space-time shape.


    Experiment results: The table below shows, for each of the test sequences, the first and second best choices and their distances, as well as the median distance to all the actions in our database. The test sequences are sorted by the distance to their first best chosen action. All the sequences were classified as "walk".
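
    The quantities reported in that table can be reproduced from a set of distances as follows (a hedged sketch; the `dists` values are made up for illustration and are not results from the paper):

```python
import numpy as np

# Hypothetical distances from one test sequence to the database actions.
dists = {"walk": 1.2, "run": 3.4, "side": 3.9, "skip": 4.1, "jump": 4.6}

ranked = sorted(dists.items(), key=lambda kv: kv[1])
first, second = ranked[0], ranked[1]             # first and second best choices
median = float(np.median(list(dists.values())))  # median distance to all actions
print(first, second, median)                     # ('walk', 1.2) ('run', 3.4) 3.9
```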

    Moreover, we demonstrate the robustness of our method to substantial changes in viewpoint. For this purpose we collected ten additional sequences, each showing the "walk" action captured from a different viewpoint (varying between 0° and 81° relative to the image plane, in steps of 9°). Note that sequences with angles approaching 90° contain significant changes in scale within the sequence. All sequences with viewpoints between 0° and 54° were classified correctly, with a large relative gap between the first (true) and the second closest actions (see table below). For larger viewpoints a gradual deterioration occurs. This demonstrates the robustness of our method to relatively large variations in viewpoint.

    Action Detection in a Ballet Movie

    This experiment shows action detection on a movie sequence of a ballet dance, performed by the "Birmingham Royal Ballet", from the "London Dance" website. The original full video can also be found here (WMV format, 400 KB). The task was to detect all instances of the "cabriole" pa (the query) in the input video.

    BibTeX

    The PAMI paper:


    @article{ActionsAsSpaceTimeShapes_pami07,
      author  = {Lena Gorelick and Moshe Blank and Eli Shechtman and Michal Irani and Ronen Basri},
      title   = {Actions as Space-Time Shapes},
      journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
      volume  = {29},
      number  = {12},
      pages   = {2247--2253},
      month   = {December},
      year    = {2007},
      ee      = {www.wisdom.weizmann.ac.il/~vision/SpaceTimeActions.html}
    }

    The ICCV paper:
    @inproceedings{ActionsAsSpaceTimeShapes_iccv05,
      author    = {Moshe Blank and Lena Gorelick and Eli Shechtman and Michal Irani and Ronen Basri},
      title     = {Actions as Space-Time Shapes},
      booktitle = {The Tenth IEEE International Conference on Computer Vision (ICCV'05)},
      pages     = {1395--1402},
      location  = {Beijing},
      year      = {2005},
      ee        = {www.wisdom.weizmann.ac.il/~vision/SpaceTimeActions.html}
    }

    Contact Details

    For further details please contact the authors:

    Lena Gorelick
    Moshe Blank
    Eli Shechtman


