11K Hands: gender recognition and biometric identification using a large dataset of hand images




    README.md

    Welcome to the 11k Hands dataset, a collection of 11,076 hand images (1600 x 1200 pixels) of 190 subjects of ages between 18 and 75. Each subject was asked to open and close the fingers of the right and left hands. Each hand was photographed from both the dorsal and palmar sides against a uniform white background and at roughly the same distance from the camera. A metadata record is associated with each image, comprising (1) the subject ID, (2) gender, (3) age, (4) skin color, and (5) a set of information about the captured hand: right or left hand, hand side (dorsal or palmar), and logical indicators of whether the hand image contains accessories, nail polish, or irregularities. The proposed dataset offers a large number of hand images with detailed metadata. The dataset is free for reasonable academic fair use.
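    The metadata makes it easy to filter the images programmatically. As a minimal Matlab sketch, assuming the metadata is distributed as a CSV-like file (the filename HandInfo.csv and the column names below are assumptions, not confirmed by this page), one could select dorsal-side images without accessories:

        % Load the per-image metadata (file and column names are assumed).
        meta = readtable('HandInfo.csv');
        % Keep dorsal-side images whose accessories indicator is 0.
        dorsal = meta(contains(meta.aspectOfHand, 'dorsal') & meta.accessories == 0, :);
        fprintf('%d dorsal images without accessories\n', height(dorsal));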

    Citation

    @article{afifi201911kHands,
      title   = {11K Hands: gender recognition and biometric identification using a large dataset of hand images},
      author  = {Afifi, Mahmoud},
      journal = {Multimedia Tools and Applications},
      doi     = {10.1007/s11042-019-7424-8},
      url     = {https://doi.org/10.1007/s11042-019-7424-8},
      year    = {2019}
    }


    Statistics

    The following Figures show the basic statistics of the proposed dataset.

    The first Figure contains the following:

    Top: the distribution of skin colors in the dataset. The number of images in each skin-color category is written in the top right of the figure. Skin detection was performed using the algorithm proposed by Conaire et al. [1].

    Bottom: the statistics of (1) the number of subjects, (2) hand images (dorsal and palmar sides), (3) hand images with accessories, and (4) hand images with nail polish.

    The second Figure shows the age distribution of the subjects and images of the proposed dataset.

    [1] Conaire, C. O., O'Connor, N. E., & Smeaton, A. F. (2007). Detector adaptation by maximising agreement between independent data sources. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'07). IEEE.



    Comparison with other datasets

    The paper compares the proposed dataset with the hand datasets used in the following works; the comparison table itself is not reproduced here:
    [1] Sun, Z., Tan, T., Wang, Y., & Li, S. Z. (2005, June). Ordinal palmprint represention for personal identification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on (Vol. 1, pp. 279-284). IEEE.

    [2] Yoruk, E., Konukoglu, E., Sankur, B., & Darbon, J. (2006). Shape-based hand recognition. IEEE transactions on image processing, 15(7), 1803-1815.

    [3] Yörük, E., Dutağaci, H., & Sankur, B. (2006). Hand biometrics. Image and Vision Computing, 24(5), 483-497.

    [4] Hu, R. X., Jia, W., Zhang, D., Gui, J., & Song, L. T. (2012). Hand shape recognition based on coherent distance shape contexts. Pattern Recognition, 45(9), 3348-3359.

    [5] Kumar, A. (2008, December). Incorporating cohort information for reliable palmprint authentication. In Computer Vision, Graphics & Image Processing, 2008. ICVGIP'08. Sixth Indian Conference on (pp. 583-590). IEEE.

    [6] Ferrer, M. A., Morales, A., Travieso, C. M., & Alonso, J. B. (2007, October). Low cost multimodal biometric identification system based on hand geometry, palm and finger print texture. In Security Technology, 2007 41st Annual IEEE International Carnahan Conference on (pp. 52-58). IEEE.

    Base Model

    We present a two-stream CNN for gender classification using the proposed dataset. We then employ this trained two-stream CNN as a feature extractor for both gender classification and biometric identification. The latter is handled using two different approaches. In the first approach, we construct a feature vector from the deep features extracted from the trained CNN and use it to train a support vector machine (SVM) classifier. In the second approach, three SVM classifiers are fed with deep features extracted from different layers of the trained CNN, a fourth SVM classifier is trained on local binary pattern (LBP) features, and the classification scores of all SVM classifiers are summed to improve the correct identification rate.
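    As a rough illustration of the first approach (not the released training code), the following Matlab sketch trains a multiclass SVM on a concatenated deep-feature vector. The variables featLow, featHigh, featFusion, and labels are hypothetical stand-ins for the features extracted from the two streams and the fusion layer:

        % Hypothetical N-by-D feature matrices from the two streams and the
        % fusion layer, plus an N-by-1 vector of class labels.
        X = [featLow, featHigh, featFusion];   % one concatenated vector per image
        svmModel = fitcecoc(X, labels);        % multiclass SVM (one-vs-one ECOC)
        predicted = predict(svmModel, X);      % predict on the training features
        fprintf('Training accuracy: %.2f%%\n', 100 * mean(predicted == labels));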

    You can download the trained models and classifiers from the tables below.

    Gender classification

    As the dataset is biased towards female hand images (see the statistics above), we use 1,000 dorsal hand images of each gender for training and 500 dorsal hand images of each gender for testing. The images are picked randomly such that the training and testing sets cover disjoint sets of subjects: if a subject's hand images appear in the training data, that subject is excluded from the testing data, and vice versa. The same is done for palmar-side hand images. For each side, we repeat the experiment 10 times to avoid overfitting and report the average accuracy as the evaluation metric.
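    A subject-disjoint split of this kind can be sketched in Matlab as follows (the table meta and its id column are assumed for illustration; the released splits should be used for actual comparisons):

        % Partition subjects first, so no subject contributes images to both sets.
        subjects = unique(meta.id);
        subjects = subjects(randperm(numel(subjects)));   % shuffle the subjects
        nTrainSubj = round(0.8 * numel(subjects));        % illustrative ratio
        isTrain = ismember(meta.id, subjects(1:nTrainSubj));
        trainPool = meta(isTrain, :);    % draw the 1,000 training images from here
        testPool  = meta(~isTrain, :);   % draw the 500 testing images from here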

    For comparison, we trained different image classification methods on the 10 pairs of training and testing sets. The methods are: (1) bag of visual words (BoW), (2) Fisher vector (FV), (3) AlexNet (CNN), (4) VGG-16 (CNN), (5) VGG-19 (CNN), and (6) GoogLeNet (CNN). For the first two frameworks (BoW and FV), we used three different feature descriptors: (1) SIFT, (2) C-SIFT, and (3) rgSIFT. For further comparisons, we recommend using the same evaluation criterion. To download the 10 pairs of training and testing sets used in our experiments, see the following Table:

    Each set contains the following files:

      • g_imgs_training_d.txt: image filenames for training (dorsal-side)

      • g_imgs_training_p.txt: image filenames for training (palmar-side)

      • g_imgs_testing_d.txt: image filenames for testing (dorsal-side)

      • g_imgs_testing_p.txt: image filenames for testing (palmar-side)

      • g_training_d.txt: the true gender of each corresponding image filename in g_imgs_training_d.txt

      • g_training_p.txt: the true gender of each corresponding image filename in g_imgs_training_p.txt

      • g_testing_d.txt: the true gender of each corresponding image filename in g_imgs_testing_d.txt

      • g_testing_p.txt: the true gender of each corresponding image filename in g_imgs_testing_p.txt

    You can use this Matlab code to extract the images used in each experiment. The code generates 10 directories, each containing the training and testing sets for each gender. You can then use the imageDatastore function to load them (see the CNN_training.m source code).
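    For example, one generated directory could be loaded like this (set01 is a hypothetical directory name; labels are inferred from the per-gender subfolder names):

        imdsTrain = imageDatastore(fullfile('set01', 'training'), ...
            'IncludeSubfolders', true, 'LabelSource', 'foldernames');
        imdsTest  = imageDatastore(fullfile('set01', 'testing'), ...
            'IncludeSubfolders', true, 'LabelSource', 'foldernames');
        countEachLabel(imdsTrain)   % number of images per gender label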

    Trained CNN models, SVM classifiers, and results

    If the Matlab Neural Network Toolbox Model for the corresponding network support package is not installed, the function provides a link to the required support package in the Add-On Explorer. To install the support package, click the link, and then click Install. Check that the installation is successful by typing the model name (e.g., alexnet, vgg16, vgg19, or googlenet) at the command line.
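    For example, once the AlexNet support package is installed, typing the model name constructs the pretrained network:

        net = alexnet;     % shows an Add-On Explorer link instead if not installed
        disp(net.Layers)   % listing the layers confirms a successful installation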

    *requires Matlab 2016 or higher.

    **requires Matlab 2017b or higher.

    +trained SVM classifiers using our CNN model as a feature extractor, as described in the paper. The SVM classifiers were trained on a single concatenated feature vector combining features from fc9 of the 1st stream, fc10 of the 2nd stream, and the fusion fully connected layer. The LBP/SVM classifiers were trained on a concatenated feature vector combining the LBP features with features from fc9 of the 1st stream, fc10 of the 2nd stream, and the fusion fully connected layer.
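    A minimal sketch of building the LBP-augmented vector for one image (deepFeat is an assumed 1-by-D row vector of the concatenated CNN features; img is the corresponding grayscale image):

        lbpFeat = extractLBPFeatures(img);   % LBP descriptor (Computer Vision Toolbox)
        x = [lbpFeat, deepFeat];             % LBP and deep features in one vector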

    Biometric identification

    For biometric identification, we work with different training and testing sets. For each hand side (palmar or dorsal), we use 10 hand images per subject for training and 4 hand images per subject for testing, over 80, 100, and 120 subjects. We repeat the experiment 10 times, with the subjects and images picked randomly each time. We adopt the average identification accuracy as the evaluation metric. For further comparisons, we recommend using the same evaluation criterion. To download the 10 pairs of training and testing sets used in our experiments, see the following Table:

    Each set contains the following files:

      • id_imgs_training_d_S.txt: image filenames for training (dorsal-side)

      • id_imgs_training_p_S.txt: image filenames for training (palmar-side)

      • id_imgs_testing_d_S.txt: image filenames for testing (dorsal-side)

      • id_imgs_testing_p_S.txt: image filenames for testing (palmar-side)

      • id_training_d_S.txt: the true ID of each corresponding image filename in id_imgs_training_d_S.txt

      • id_training_p_S.txt: the true ID of each corresponding image filename in id_imgs_training_p_S.txt

      • id_testing_d_S.txt: the true ID of each corresponding image filename in id_imgs_testing_d_S.txt

      • id_testing_p_S.txt: the true ID of each corresponding image filename in id_imgs_testing_p_S.txt

      • S is the number of subjects: 80, 100, or 120. Read the paper for more details.

    You can use this Matlab code to extract the images used in each experiment. The code generates 10 directories, each containing the training and testing sets for each number of subjects. Each filename contains the ID of the subject; for example, 0000000_Hand_0000055.jpg means this image belongs to subject number 0000000, and the rest of the filename is the original image name. You can use this Matlab code to load all image filenames and extract the corresponding IDs.
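    For instance, the subject ID can be recovered from a generated filename with a sketch like this (the released Matlab code may do it differently):

        fname = '0000000_Hand_0000055.jpg';
        parts = strsplit(fname, '_');
        subjectID    = parts{1};                     % '0000000'
        originalName = strjoin(parts(2:end), '_');   % 'Hand_0000055.jpg'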

    Trained SVM Classifiers and results

    *trained SVM classifiers using our CNN model as a feature extractor, as described in the paper. Each .mat file contains a Classifier object where:

    • Classifier.low: is the SVM classifier trained using the features extracted from the smoothed version of the input image. These CNN features are obtained from fc9 of the 1st stream.

    • Classifier.high: is the SVM classifier trained using the features extracted from the detail layer of the input image. These CNN features are obtained from fc10 of the 2nd stream.

    • Classifier.fusion: is the SVM classifier trained using the features extracted from the fusion layer of our CNN.

    • Classifier.lbp: is the SVM classifier trained using the LBP features.

    • Classifier.all: is the SVM classifier trained using the concatenated feature vector in which features from fc9 of the 1st stream, fc10 of the 2nd stream and the fusion fully connected layer are concatenated into one vector.
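    Score-level fusion over the four per-feature classifiers could then look like the following sketch (the per-image feature variables are assumed; predict's second output holds the per-class scores):

        [~, sLow]    = predict(Classifier.low,    featLow);
        [~, sHigh]   = predict(Classifier.high,   featHigh);
        [~, sFusion] = predict(Classifier.fusion, featFusion);
        [~, sLbp]    = predict(Classifier.lbp,    featLbp);
        total = sLow + sHigh + sFusion + sLbp;   % sum the per-class scores
        [~, best] = max(total, [], 2);           % column of the winning class
        predictedID = Classifier.low.ClassNames(best);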

    Contact us

    Questions and comments can be sent to:

    mafifi[at]eecs[dot]yorku[dot]ca or m.afifi[at]aun[dot]edu[dot]eg

