How to Cluster a Given Set of Images Using Python?

Introduction

Hello everyone. While taking part in a deep learning competition recently, I ran into an interesting problem: how to cluster a given set of images. You might say, isn't this just a simple classification problem that a convolutional neural network can solve? The catch is that there is no suitable training data available. If we don't want to collect a dataset ourselves, how do we solve this? That is the subject of this article: applying deep learning directly to the test data (here, images) without creating a training dataset and training a neural network on it.

Convolutional Neural Networks as Feature Extractors

First, why do we need a feature extractor at all, and how can a convolutional neural network (CNN) fill that role? Consider a feature extractor for image data: suppose an algorithm needs features such as two eyes, a nose, and a mouth to classify an image as a face. In different images, however, these features appear at different pixel positions, so simply flattening the image and feeding it to the algorithm does not work. This is exactly where the convolutional layers of a CNN come in. The convolutional layers act as our feature extractor and break the image down into finer and finer details. Let's look at the example below:

This is an image of a cat, and this is how the first convolutional layer of VGG16 sees it:

Notice the different images: these are the feature maps learned by our CNN. Some feature maps focus on outlines, some on texture, and some pick up finer details such as the ears and mouth. The convolutional layers at the next stage break these features down into even finer detail.

Now that we know convolutional layers can learn specific features of an image, let's turn to the implementation.

Implementing the CNN Feature Extractor

The following code shows how to obtain the results above using the pretrained CNN VGG16:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf2
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import img_to_array

MyModel = tf2.keras.applications.VGG16(
    include_top=True, weights='imagenet', input_tensor=None, input_shape=None,
    pooling=None, classes=1000, classifier_activation='softmax'
)
MyModel.summary()

## Define a function that shows the features learned by the CNN's nth convolution layer
def ShowMeWhatYouLearnt(Image, layer, MyModel):
    img = img_to_array(Image)
    img = np.expand_dims(img, 0)
    ### preprocess the image for vgg16
    img = tf2.keras.applications.vgg16.preprocess_input(img)
    ## Now define a model which will help us see what vgg16 sees
    inputs = MyModel.inputs
    outputs = MyModel.layers[layer].output
    model = Model(inputs=inputs, outputs=outputs)
    model.summary()
    ## Run a prediction to see what the CNN sees
    featureMaps = model.predict(img)
    ## Plot the first 64 feature maps in an 8 x 8 grid
    for maps in featureMaps:
        plt.figure(figsize=(20, 20))
        pltNum = 1
        for a in range(8):
            for b in range(8):
                plt.subplot(8, 8, pltNum)
                plt.imshow(maps[:, :, pltNum - 1], cmap='gray')
                pltNum += 1
        plt.show()
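As a quick sanity check, the function can be called on any image resized to VGG16's 224 x 224 input. In the sketch below, the file name 'cat.jpg' and the layer index 1 (VGG16's first convolutional layer) are placeholders for illustration:

from tensorflow.keras.preprocessing.image import load_img
catImage = load_img('cat.jpg', target_size=(224, 224))  ## 'cat.jpg' is a placeholder image path
ShowMeWhatYouLearnt(catImage, 1, MyModel)  ## layer 1 is VGG16's first conv layer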
Next we turn to building the clustering algorithm itself.

Designing the Image Clustering Algorithm

In this section we use the keep-babies-safe dataset on Kaggle: https://www.kaggle.com/akash14/keep-babies-safe. First, we build an image clustering model that separates the given images into two classes, toys or consumer products. Below are some images from this dataset.

The following code implements our clustering algorithm:

##################### Making Essential imports ############################
import sklearn
import os
import sys
import matplotlib.pyplot as plt
import cv2
import pytesseract
import numpy as np
import pandas as pd
import tensorflow as tf
conf = r'--oem 2'  ## tesseract config: OCR Engine Mode 2
#####################################
# Defining a skeleton for our       #
# Dataframe                         #
#####################################
Dataframe = {
   'photo_name' : [],
   'flattenPhoto' : [],
   'text' : [],
   }
#######################################################################################
#      The Approach is to apply transfer learning hence using Resnet50 as my          #
#      pretrained model                                                               #
#######################################################################################
MyModel = tf.keras.models.Sequential()
MyModel.add(tf.keras.applications.ResNet50(
   include_top = False, weights='imagenet',    pooling='avg',
))
# freezing the weights of the pretrained ResNet50 base (layer 0)
MyModel.layers[0].trainable = False
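## With include_top=False and pooling='avg', the ResNet50 base outputs one
## 2048-dimensional vector per image -- this is the feature vector we cluster.
print(MyModel.output_shape)  ## expected: (None, 2048)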
### Now defining dataloading Function
def LoadDataAndDoEssentials(path, h, w):
   img = cv2.imread(path)
   Dataframe['text'].append(pytesseract.image_to_string(img, config = conf))
   img = cv2.resize(img, (h, w))
   ## Expanding image dims so this represents 1 sample
   img = np.expand_dims(img, 0)
   img = tf.keras.applications.resnet50.preprocess_input(img)
   extractedFeatures = MyModel.predict(img)
   extractedFeatures = np.array(extractedFeatures)
   Dataframe['flattenPhoto'].append(extractedFeatures.flatten())
### with all this done, let's write the iterative loop
def ReadAndStoreMyImages(path):
   list_ = os.listdir(path)
   for mem in list_:
       Dataframe['photo_name'].append(mem)
       imagePath = path + '/' + mem
       LoadDataAndDoEssentials(imagePath, 224, 224)
### lets give the address of our Parent directory and start
path = "enter your data's path here"  ## placeholder: set this to your image folder
ReadAndStoreMyImages(path)
######################################################
#        lets now do clustering                      #
######################################################
Training_Feature_vector = np.array(Dataframe['flattenPhoto'], dtype = 'float64')
from sklearn.cluster import AgglomerativeClustering
clusteringModel = AgglomerativeClustering(n_clusters = 2)
clusteringModel.fit(Training_Feature_vector)
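Once fitted, the model stores a cluster label for every image. Below is a minimal sketch for joining those labels back to the file names collected in Dataframe, so we can see which images landed together:

## Map each photo to its assigned cluster
results = pd.DataFrame({
    'photo_name': Dataframe['photo_name'],
    'cluster': clusteringModel.labels_,
})
print(results['cluster'].value_counts())  ## number of images per cluster
print(results.head())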
A little explanation for the above code:
The code above uses ResNet50 (a pretrained CNN) for feature extraction. We simply remove its head, the final layer of neurons used to predict classes, then feed an image through the CNN and take its output as a feature vector. This is in fact a flattened array of all the feature maps our CNN has learned at the second-to-last layer of ResNet50. This output vector can be fed to any clustering algorithm. Let me show you the clusters created by this approach.

The code for this visualization is as follows:

## lets make this a dataframe
import seaborn as sb
import matplotlib.pyplot as plt
## The 2048-d feature vectors must be reduced to 2 dimensions before they can
## be scattered; the original reduction step was not shown, so PCA is assumed here
from sklearn.decomposition import PCA
dimReduced = PCA(n_components = 2).fit_transform(Training_Feature_vector)
dimReducedDataframe = pd.DataFrame(dimReduced)
dimReducedDataframe = dimReducedDataframe.rename(columns = {0: 'V1', 1: 'V2'})
dimReducedDataframe['Category'] = clusteringModel.labels_  ## color each point by its assigned cluster
plt.figure(figsize = (10, 5))
sb.scatterplot(data = dimReducedDataframe, x = 'V1', y = 'V2', hue = 'Category')
plt.grid(True)
plt.show()
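Here the dataset is known to contain two groups, so n_clusters = 2 was fixed in advance. When the number of clusters is unknown, one common heuristic (not part of the original pipeline) is to compare silhouette scores across candidate values; a minimal sketch:

## Optional: choose n_clusters by silhouette score (higher is better)
from sklearn.metrics import silhouette_score
for k in range(2, 6):
    labels = AgglomerativeClustering(n_clusters = k).fit_predict(Training_Feature_vector)
    print(k, silhouette_score(Training_Feature_vector, labels))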
Conclusion

This article explained how to use deep learning together with clustering to group visually similar images into clusters, without creating a training dataset and training a CNN on it.