TCD contains 800 commodity images (dresses, jeans, T-shirts, shoes, and hats) collected from shops on the Taobao website. The ground-truth masks were obtained by inviting the Taobao sellers themselves to annotate their commodities, i.e., to mask the salient objects they want to highlight in their exhibition images. The images cover all kinds of commodities, with and without human models, and therefore feature complex backgrounds and scenes with large, highly complex foregrounds, making the dataset challenging for evaluation. Pixel-accurate ground-truth masks are provided. Figure 1 illustrates some examples.
We evaluate ten state-of-the-art saliency detection methods (DSR, GC, HS, SF, RC, HC, CA, FT, SR, and LC) on this dataset. The precision-recall and F-measure curves are plotted in the figures below (left and right, respectively).
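The two curves can be reproduced from the binarized saliency maps. A minimal sketch of the standard protocol, assuming 8-bit saliency maps and boolean ground-truth masks (function names are illustrative, not from an official evaluation toolkit); the weighted F-measure uses the conventional beta^2 = 0.3, which emphasizes precision:

```python
import numpy as np

def pr_curve(saliency, gt, thresholds=range(256)):
    """Precision-recall pairs obtained by binarizing an 8-bit
    saliency map at every integer threshold."""
    gt = gt.astype(bool)
    curve = []
    for t in thresholds:
        pred = saliency >= t
        tp = np.logical_and(pred, gt).sum()
        # If nothing is predicted salient, precision is defined as 1.
        precision = tp / pred.sum() if pred.sum() else 1.0
        recall = tp / gt.sum()
        curve.append((precision, recall))
    return curve

def f_measure(precision, recall, beta2=0.3):
    """Weighted F-measure; beta^2 = 0.3 is the convention in
    salient-object detection to weight precision more than recall."""
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

Averaging the per-image (precision, recall) pairs at each threshold over all 800 images yields the plotted curves.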
Results of the state-of-the-art methods
 X. Li, H. Lu, L. Zhang, X. Ruan, and M.H. Yang, “Saliency Detection via Dense and Sparse Reconstruction,” in IEEE ICCV, 2013.
 M.M. Cheng, J. Warrell, W.Y. Lin, S. Zheng, V. Vineet, and N. Crook, “Efficient Salient Region Detection with Soft Image Abstraction,” in IEEE ICCV, 2013.
 Q. Yan, L. Xu, J. Shi, and J. Jia, “Hierarchical Saliency Detection,” in IEEE CVPR, 2013, pp. 1155–1162.
 F. Perazzi, P. Krahenbuhl, Y. Pritch, and A. Hornung, “Saliency Filters: Contrast Based Filtering for Salient Region Detection,” in IEEE CVPR, 2012, pp. 733–740.
 M.M. Cheng, G. Zhang, N. Mitra, X. Huang, and S. Hu, “Global Contrast Based Salient Region Detection,” in IEEE CVPR, 2011.
 S. Goferman, L. Zelnik-Manor, and A. Tal, “Context-Aware Saliency Detection,” in IEEE CVPR, 2010, pp. 2376–2383.
 R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk, “Frequency-Tuned Salient Region Detection,” in IEEE CVPR, 2009, pp. 1597–1604.
 X. Hou and L. Zhang, “Saliency Detection: A Spectral Residual Approach,” in IEEE CVPR, 2007, pp. 1–8.
 Y. Zhai and M. Shah, “Visual Attention Detection in Video Sequences Using Spatiotemporal Cues,” in ACM Multimedia, 2006.