
Weakly Supervised Clothing Co-Parsing


Scene: Person

Data Type: 2D Box

Contributor

Human-Cyber-Physical Intelligence Integration Lab, Sun Yat-sen University

Our laboratory seeks general and task-driven ways to build intelligence, embracing the access to the ternary human-cyber-physical universe.

Data Preview: 125M

    Data Structure

    *The actual data structure is subject to the real data.

    Xiaodan Liang, Liang Lin*, Wei Yang, Ping Luo, Junshi Huang, and Shuicheng Yan, "Clothes Co-Parsing via Joint Image Segmentation and Labeling with Application to Clothing Retrieval", IEEE Transactions on Multimedia (T-MM), 18(6): 1175-1186, 2016. (A shorter previous version was published in CVPR 2014.)

    Framework

    This project aims to develop an integrated system for clothing co-parsing: given a database of clothing/human images that are unsegmented but annotated with image-level tags, jointly parse the images into semantic clothing configurations. To incorporate prior knowledge about clothing, we present a semantic template that arranges diverse clothing tags based on the spatial layout of the human body and garment co-occurrence. We then propose a framework consisting of two phases of optimization, guided by the semantic template:

    (i) image co-segmentation for extracting clothes regions: We first group regions in every image, and then propagate and refine segmentation jointly over all images by employing exemplar-SVMs;

    (ii) region co-labeling for recognizing clothing components: We assign a garment tag to each region by modeling the problem as a multi-image graphical model.
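The role of the semantic template in phase (ii) can be illustrated with a minimal sketch. This is not the authors' implementation (which uses a multi-image graphical model); here the template is reduced to a hypothetical mapping from garment tags to expected normalized vertical positions on the body, and each region is simply assigned the closest tag. All names and template values below are illustrative assumptions.

```python
# Hypothetical semantic template: garment tag -> expected normalized
# y-center on the body (0.0 = top of image, 1.0 = bottom).
SEMANTIC_TEMPLATE = {
    "hat": 0.05, "sunglasses": 0.10, "t-shirt": 0.35,
    "belt": 0.50, "skirt": 0.65, "shoes": 0.95,
}

def co_label(region_centers):
    """Assign each region (given as a normalized y-center) the garment tag
    whose template position is nearest. A stand-in for the joint
    multi-image co-labeling described in the text."""
    labels = []
    for y in region_centers:
        tag = min(SEMANTIC_TEMPLATE,
                  key=lambda t: abs(SEMANTIC_TEMPLATE[t] - y))
        labels.append(tag)
    return labels

print(co_label([0.04, 0.36, 0.93]))  # -> ['hat', 't-shirt', 'shoes']
```

The real framework additionally couples regions across images, so that consistent appearance (not just position) drives the final labeling.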

    Experiments

    [Figure] Average recall of some garment items with high occurrences in Fashionista.


    Results on the Fashionista and CCP datasets:

    Methods      Fashionista       CCP
                 aPA     mAGR      aPA     mAGR
    Ours-full    90.29   65.52     88.23   63.89
    PECS [1]     89.00   64.37     85.97   51.25
    BSC [2]      82.34   33.63     81.61   38.75
    STF [3]      68.02   43.62     66.85   40.70

    aPA: average Pixel Accuracy (%); mAGR: mean Average Garment Recall (%)
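A hedged sketch of how the two metrics in the table can be computed, assuming ground-truth and predicted label maps flattened into per-pixel tag lists; the paper's exact evaluation protocol may differ in details such as background handling.

```python
def average_pixel_accuracy(gt, pred):
    """aPA: fraction of pixels whose predicted tag matches the ground truth."""
    return sum(g == p for g, p in zip(gt, pred)) / len(gt)

def mean_average_garment_recall(gt, pred):
    """mAGR: mean over garment tags of per-tag pixel recall."""
    recalls = []
    for tag in set(gt):
        correct = sum(1 for g, p in zip(gt, pred) if g == tag and p == tag)
        total = sum(1 for g in gt if g == tag)
        recalls.append(correct / total)
    return sum(recalls) / len(recalls)

# Toy 5-pixel example (tags are illustrative):
gt   = ["shirt", "shirt", "pants", "pants", "hat"]
pred = ["shirt", "pants", "pants", "pants", "hat"]
print(average_pixel_accuracy(gt, pred))       # -> 0.8
print(mean_average_garment_recall(gt, pred))  # -> (0.5 + 1.0 + 1.0) / 3
```

Note that mAGR weights every garment class equally, so small items such as belts and sunglasses count as much as dresses, which is why it separates the methods in the table more sharply than aPA.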

    Some parsing results are shown in the following figure. Our method parses clothes accurately even under challenging illumination and complex backgrounds. Moreover, it can parse small garments such as belts, purses, hats, and sunglasses. For ambiguous clothing patterns, such as a dotted t-shirt or a colorful dress, our framework gives satisfying results. In addition, the proposed method can parse several people in a single image simultaneously.

    References

    1. K. Yamaguchi, H. Kiapour, L. E. Ortiz, and T. L. Berg. Parsing clothing in fashion photographs. CVPR, 2012.

    2. X. Liu, B. Cheng, S. Yan, J. Tang, T. S. Chua, and H. Jin. Label to region by bi-layer sparsity priors. In ACM MM, 2009.

    3. J. Shotton, M. Johnson, and R. Cipolla. Semantic texton forests for image categorization and segmentation. In CVPR, 2008.

