Implementing several classification networks in Keras

Updated: 2020-06-14 | Author: Anonymous


Keras is probably the easiest deep learning framework to get started with.

This post briefly documents how to implement several classification networks in Keras: AlexNet, VGG, and ResNet.

The Kaggle Dogs vs. Cats data is used as the dataset.

AlexNet originally uses LRN (local response normalization), which Keras has no built-in layer for, so BatchNormalization is used here instead.
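As a rough sketch of what the substituted BatchNormalization layer computes at inference time (this is the standard batch-norm transform; the epsilon and the toy numbers below are illustrative, not from this post's code):

```python
import math

def batchnorm_inference(x, mean, var, gamma=1.0, beta=0.0, eps=1e-3):
    """Standard batch-norm: normalize by running statistics, then scale and shift."""
    return gamma * (x - mean) / math.sqrt(var + eps) + beta

# Toy example: an activation of 3.0 under a running mean of 1.0 and variance of 4.0
y = batchnorm_inference(3.0, mean=1.0, var=4.0)
print(round(y, 4))  # -> 0.9999
```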

First, create a file named model.py that holds the AlexNet and VGG models; they can then be imported directly.

#coding=utf-8
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, MaxPool2D, ZeroPadding2D, BatchNormalization
from keras.layers.advanced_activations import LeakyReLU, PReLU
 
def keras_batchnormalization_relu(layer):
 BN = BatchNormalization()(layer)
 ac = PReLU()(BN)
 return ac
 
def AlexNet(resize=227, classes=2):
 model = Sequential()
 # Block 1
 model.add(Conv2D(filters=96, kernel_size=(11, 11),
      strides=(4, 4), padding='valid',
      input_shape=(resize, resize, 3),
      activation='relu'))
 model.add(BatchNormalization())
 model.add(MaxPooling2D(pool_size=(3, 3),
       strides=(2, 2),
       padding='valid'))
 # Block 2
 model.add(Conv2D(filters=256, kernel_size=(5, 5),
      strides=(1, 1), padding='same',
      activation='relu'))
 model.add(BatchNormalization())
 model.add(MaxPooling2D(pool_size=(3, 3),
       strides=(2, 2),
       padding='valid'))
 # Block 3
 model.add(Conv2D(filters=384, kernel_size=(3, 3),
      strides=(1, 1), padding='same',
      activation='relu'))
 model.add(Conv2D(filters=384, kernel_size=(3, 3),
      strides=(1, 1), padding='same',
      activation='relu'))
 model.add(Conv2D(filters=256, kernel_size=(3, 3),
      strides=(1, 1), padding='same',
      activation='relu'))
 model.add(MaxPooling2D(pool_size=(3, 3),
       strides=(2, 2), padding='valid'))
 # Block 4 (classifier)
 model.add(Flatten())
 model.add(Dense(4096, activation='relu'))
 model.add(Dropout(0.5))
 
 model.add(Dense(4096, activation='relu'))
 model.add(Dropout(0.5))
 
 model.add(Dense(1000, activation='relu'))
 model.add(Dropout(0.5))
 
 # Output Layer
 model.add(Dense(classes,activation='softmax'))
 # model.add(Activation('softmax'))
 
 return model
 
def AlexNet2(inputs, classes=2, prob=0.5):
 '''
 Alternative AlexNet written with the Keras functional API
 :param inputs: input tensor
 :param classes: number of classes
 :param prob: dropout probability
 :return: the model
 '''
 print("input shape:", inputs.shape)
 
 conv1 = Conv2D(filters=96, kernel_size=(11, 11), strides=(4, 4), padding='valid')(inputs)
 conv1 = keras_batchnormalization_relu(conv1)
 print("conv1 shape:", conv1.shape)
 pool1 = MaxPool2D(pool_size=(3, 3), strides=(2, 2))(conv1)
 print("pool1 shape:", pool1.shape)
 
 conv2 = Conv2D(filters=256, kernel_size=(5, 5), padding='same')(pool1)
 conv2 = keras_batchnormalization_relu(conv2)
 print("conv2 shape:", conv2.shape)
 pool2 = MaxPool2D(pool_size=(3, 3), strides=(2, 2))(conv2)
 print("pool2 shape:", pool2.shape)
 
 conv3 = Conv2D(filters=384, kernel_size=(3, 3), padding='same')(pool2)
 conv3 = PReLU()(conv3)
 print("conv3 shape:", conv3.shape)
 
 conv4 = Conv2D(filters=384, kernel_size=(3, 3), padding='same')(conv3)
 conv4 = PReLU()(conv4)
 print("conv4 shape:", conv4.shape)
 
 conv5 = Conv2D(filters=256, kernel_size=(3, 3), padding='same')(conv4)
 conv5 = PReLU()(conv5)
 print("conv5 shape:", conv5.shape)
 
 pool3 = MaxPool2D(pool_size=(3, 3), strides=(2, 2))(conv5)
 print("pool3 shape:", pool3.shape)
 
 dense1 = Flatten()(pool3)
 dense1 = Dense(4096, activation='relu')(dense1)
 print("dense1 shape:", dense1.shape)
 dense1 = Dropout(prob)(dense1)
 
 dense2 = Dense(4096, activation='relu')(dense1)
 print("dense2 shape:", dense2.shape)
 dense2 = Dropout(prob)(dense2)
 
 predict = Dense(classes, activation='softmax')(dense2)
 
 model = Model(inputs=inputs, outputs=predict)
 return model
 
def vgg13(resize=224, classes=2, prob=0.5):
 model = Sequential()
 model.add(Conv2D(64, (3, 3), strides=(1, 1), input_shape=(resize, resize, 3), padding='same', activation='relu',
      kernel_initializer='uniform'))
 model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Flatten())
 model.add(Dense(4096, activation='relu'))
 model.add(Dropout(prob))
 model.add(Dense(4096, activation='relu'))
 model.add(Dropout(prob))
 model.add(Dense(classes, activation='softmax'))
 return model
 
def vgg16(resize=224, classes=2, prob=0.5):
 model = Sequential()
 model.add(Conv2D(64, (3, 3), strides=(1, 1), input_shape=(resize, resize, 3), padding='same', activation='relu',
      kernel_initializer='uniform'))
 model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu', kernel_initializer='uniform'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Flatten())
 model.add(Dense(4096, activation='relu'))
 model.add(Dropout(prob))
 model.add(Dense(4096, activation='relu'))
 model.add(Dropout(prob))
 model.add(Dense(classes, activation='softmax'))
 return model
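The spatial sizes produced by the AlexNet definition above can be checked with plain convolution/pooling arithmetic. This helper is our own sanity-check sketch, not part of the post's code:

```python
def out_size(n, kernel, stride, padding):
    """Output width of a conv/pool layer: 'valid' floors, 'same' depends only on stride."""
    if padding == 'valid':
        return (n - kernel) // stride + 1
    return -(-n // stride)  # 'same': ceil(n / stride)

n = 227
n = out_size(n, 11, 4, 'valid')  # conv1 -> 55
n = out_size(n, 3, 2, 'valid')   # pool1 -> 27
n = out_size(n, 5, 1, 'same')    # conv2 -> 27
n = out_size(n, 3, 2, 'valid')   # pool2 -> 13
n = out_size(n, 3, 1, 'same')    # conv3/conv4/conv5 keep 13
n = out_size(n, 3, 2, 'valid')   # pool3 -> 6
print(n, 6 * 6 * 256)            # final grid, and the flattened feature count fed to Dense(4096)
```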

Next, create a train.py file that reads the data and trains the model.

#coding=utf-8
import keras
import cv2
import os
import numpy as np
import model
import modelResNet
import tensorflow as tf
from keras.layers import Input, Dense
from keras.preprocessing.image import ImageDataGenerator
 
resize = 224
batch_size = 128
path = "/home/hjxu/PycharmProjects/01_cats_vs_dogs/data"
 
trainDirectory = '/home/hjxu/PycharmProjects/01_cats_vs_dogs/data/train/'
def load_data():
 imgs = os.listdir(path + "/train/")
 num = len(imgs)
 train_data = np.empty((5000, resize, resize, 3), dtype="int32")
 train_label = np.empty((5000, ), dtype="int32")
 test_data = np.empty((5000, resize, resize, 3), dtype="int32")
 test_label = np.empty((5000, ), dtype="int32")
 for i in range(5000):
  if i % 2:
   train_data[i] = cv2.resize(cv2.imread(path + '/train/' + 'dog.' + str(i) + '.jpg'), (resize, resize))
   train_label[i] = 1
  else:
   train_data[i] = cv2.resize(cv2.imread(path + '/train/' + 'cat.' + str(i) + '.jpg'), (resize, resize))
   train_label[i] = 0
 for i in range(5000, 10000):
  if i % 2:
   test_data[i-5000] = cv2.resize(cv2.imread(path + '/train/' + 'dog.' + str(i) + '.jpg'), (resize, resize))
   test_label[i-5000] = 1
  else:
   test_data[i-5000] = cv2.resize(cv2.imread(path + '/train/' + 'cat.' + str(i) + '.jpg'), (resize, resize))
   test_label[i-5000] = 0
 return train_data, train_label, test_data, test_label
 
def main():
 
 train_data, train_label, test_data, test_label = load_data()
 train_data, test_data = train_data.astype('float32'), test_data.astype('float32')
 train_data, test_data = train_data/255, test_data/255
 
 train_label = keras.utils.to_categorical(train_label, 2)
 '''
 One-hot encoding: when using categorical_crossentropy, the labels must be converted with to_categorical
 '''
 test_label = keras.utils.to_categorical(test_label, 2)
 
 inputs = Input(shape=(224, 224, 3))
 
 modelAlex = model.AlexNet2(inputs, classes=2)
 '''
 Build the model
 '''
 modelAlex.compile(loss='categorical_crossentropy',
     optimizer='sgd',
     metrics=['accuracy'])
 '''
 def compile(self, optimizer, loss, metrics=None, loss_weights=None,
     sample_weight_mode=None, **kwargs):
  optimizer: the optimizer, either the name of a predefined optimizer or an optimizer object
  loss: the loss function, either the name of a predefined loss or an objective function
  metrics: list of metrics evaluated during training and testing; a typical use is metrics=['accuracy']
  sample_weight_mode: set this to "temporal" if samples need to be weighted per timestep
  To use a custom metric:
   def mean_pred(y_true, y_pred):
    return K.mean(y_pred)
   model.compile(loss='binary_crossentropy', metrics=['accuracy', mean_pred], ...)
  Custom losses work the same way. The losses built into Keras are:
   mean_squared_error
   mean_absolute_error
   mean_absolute_percentage_error
   mean_squared_logarithmic_error
   squared_hinge
   hinge
   categorical_hinge
   logcosh
   categorical_crossentropy
   sparse_categorical_crossentropy
   binary_crossentropy
   kullback_leibler_divergence
   poisson
   cosine_proximity
 '''
 modelAlex.summary()
 '''
 Print a summary of the model
 '''
 modelAlex.fit(train_data, train_label,
    batch_size=batch_size,
    epochs=50,
    validation_split=0.2,
    shuffle=True)
 '''
 def fit(self, x=None,   # x: input data
   y=None,     # y: labels, a Numpy array
   batch_size=32,   # batch_size: number of samples per gradient update
   epochs=1,    # epochs: number of training epochs; each epoch is one pass over the training set
   verbose=1,    # verbosity: 0 = silent, 1 = progress bar, 2 = one line per epoch
   callbacks=None,   # callback functions
   validation_split=0.,  # float in [0, 1]: fraction of the training data held out as a validation set, not used for training
   validation_data=None, # an (x, y) tuple used as an explicit validation set
   shuffle=True,   # if "batch", a special mode for HDF5 data that shuffles within each batch
   class_weight=None,  # dict mapping classes to weights, used to scale the loss during training
   sample_weight=None,  # Numpy array of sample weights, used to scale the loss during training
   initial_epoch=0,   # epoch at which to start training (useful for resuming a previous run)
   **kwargs):
 Returns: a History object; History.history records the loss and metric values per epoch
 '''
 scores = modelAlex.evaluate(train_data, train_label, verbose=1)
 print(scores)
 
 scores = modelAlex.evaluate(test_data, test_label, verbose=1)
 print(scores)
 modelAlex.save('my_model_weights2.h5')
 
def main2():
 train_datagen = ImageDataGenerator(rescale=1. / 255,
          shear_range=0.2,
          zoom_range=0.2,
          horizontal_flip=True)
 test_datagen = ImageDataGenerator(rescale=1. / 255)
 train_generator = train_datagen.flow_from_directory(trainDirectory,
              target_size=(224, 224),
              batch_size=32,
              class_mode='binary')
 
 validation_generator = test_datagen.flow_from_directory(trainDirectory,
               target_size=(224, 224),
               batch_size=32,
               class_mode='binary')
 
 inputs = Input(shape=(224, 224, 3))
 # modelAlex = model.AlexNet2(inputs, classes=2)
 modelAlex = model.vgg13(resize=224, classes=2, prob=0.5)
 # modelAlex = modelResNet.ResNet50(shape=224, classes=2)
 modelAlex.compile(loss='sparse_categorical_crossentropy',
      optimizer='sgd',
      metrics=['accuracy'])
 modelAlex.summary()
 
 modelAlex.fit_generator(train_generator,
      steps_per_epoch=1000,
      epochs=60,
      validation_data=validation_generator,
      validation_steps=200)
 
 modelAlex.save('model32.hdf5')
 #
if __name__ == "__main__":
 '''
 If the data is laid out like the Dogs vs. Cats download, everything in one folder, use main()
 If the data is split into separate cat and dog folders, use main2()
 '''
 main2()
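The indexing logic in load_data above interleaves the two classes (odd indices are dogs, even indices are cats), and to_categorical then one-hot encodes those integer labels. A pure-Python illustration of both steps, our own sketch rather than Keras code:

```python
def make_labels(n):
    """Mirror load_data's rule: odd index -> dog (1), even index -> cat (0)."""
    return [1 if i % 2 else 0 for i in range(n)]

def to_one_hot(labels, num_classes):
    """Minimal equivalent of keras.utils.to_categorical for integer labels."""
    return [[1 if c == label else 0 for c in range(num_classes)] for label in labels]

labels = make_labels(6)
print(labels)                 # [0, 1, 0, 1, 0, 1]
print(to_one_hot(labels, 2))  # [[1, 0], [0, 1], [1, 0], [0, 1], [1, 0], [0, 1]]
```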

Once we have the model, how do we test it on a single image?

Create a testOneImg.py script with the following code:

#coding=utf-8
from keras.preprocessing.image import load_img # load_img loads an image from disk
from keras.preprocessing.image import img_to_array
from keras.applications.vgg16 import preprocess_input
from keras.applications.vgg16 import decode_predictions
import numpy as np
import cv2
import model
from keras.models import Sequential
 
pats = '/home/hjxu/tf_study/catVsDogsWithKeras/my_model_weights.h5'
modelAlex = model.AlexNet(resize=224, classes=2)
# AlexModel = model.AlexNet(weightPath='/home/hjxu/tf_study/catVsDogsWithKeras/my_model_weights.h5')
 
modelAlex.load_weights(pats)
#
img = cv2.imread('/home/hjxu/tf_study/catVsDogsWithKeras/111.jpg')
img = cv2.resize(img, (224, 224))
x = img_to_array(img / 255) # 3-D array, shape (224, 224, 3)
 
x = np.expand_dims(x, axis=0) # 4-D array, shape (1, 224, 224, 3); Keras expects a batch dimension
# x = preprocess_input(x) # preprocessing
print(x.shape)
y_pred = modelAlex.predict(x) # predicted class probabilities
print(y_pred)

I have to say, Keras really is simple and convenient.

Bonus: understanding residual connections and weight sharing in the Keras functional API

1. Residual connections

# coding: utf-8
"""Residual connection:
  a common graph-like network pattern that addresses two problems shared by all
  large-scale deep models:
   1. vanishing gradients
   2. representational bottlenecks
  (adding residual connections to almost any network deeper than 10 layers is likely to help)

  A residual connection feeds the output of an earlier layer into a later layer,
  creating a shortcut through an otherwise sequential network.
  """
from keras import layers

x = ...
y = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)

y = layers.add([y, x]) # add the original x back to the output features

# --------------- if the feature maps differ in shape, use a linear residual connection ---------------
x = ...
y = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
y = layers.MaxPooling2D(2, strides=2)(y)

residual = layers.Conv2D(128, 1, strides=2, padding='same')(x) # a 1x1 convolution linearly downsamples x to the same shape as y

y = layers.add([y, residual]) # add the downsampled x back to the output features
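Why the 1x1 convolution with strides=2 makes the two branches line up can be checked with the standard padding size rules. This is our own arithmetic sketch; the example size 28 is arbitrary:

```python
import math

def conv_same_out(n, stride):
    """Conv with padding='same': output size is ceil(n / stride), whatever the kernel size."""
    return math.ceil(n / stride)

def pool_valid_out(n, pool, stride):
    """Pooling with padding='valid': output size is floor((n - pool) / stride) + 1."""
    return (n - pool) // stride + 1

n = 28                          # an arbitrary example feature-map size
main = pool_valid_out(n, 2, 2)  # main branch after MaxPooling2D(2, strides=2)
shortcut = conv_same_out(n, 2)  # residual branch: 1x1 conv, strides=2, padding='same'
print(main, shortcut)           # both 14, so layers.add can combine the branches
```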

2. Weight sharing

That is, calling the same layer instance multiple times.

# coding: utf-8
"""Functional API: weight sharing
  the same layer instance can be called repeatedly, reusing that layer's weights
  instead of creating new ones"""
from keras import layers
from keras import Input
from keras.models import Model


lstm = layers.LSTM(32) # instantiate one LSTM layer; it is called several times below

# ------------------------ left branch --------------------------------
left_input = Input(shape=(None, 128))
left_output = lstm(left_input) # call the lstm instance

# ------------------------ right branch ---------------------------------
right_input = Input(shape=(None, 128))
right_output = lstm(right_input) # call the same lstm instance

# ------------------------ merge the two branches ------------------------
merged = layers.concatenate([left_output, right_output], axis=-1)

# ----------------------- build a classifier on top ---------------------
predictions = layers.Dense(1, activation='sigmoid')(merged)

# ------------------------- build the model and fit it -----------------------------------
model = Model([left_input, right_input], predictions)
model.fit([left_data, right_data], targets)
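The weight-sharing idea can be illustrated without Keras: one parameterized object, called on two different inputs, uses (and would update) the same parameters. A toy sketch of ours, with made-up names and numbers:

```python
class SharedDense:
    """A toy one-in/one-out 'layer': a single weight and bias reused on every call."""
    def __init__(self, w, b):
        self.w, self.b = w, b

    def __call__(self, x):
        return self.w * x + self.b

layer = SharedDense(w=2.0, b=1.0)  # instantiated once, like the LSTM above
left = layer(3.0)                  # both branches pass through the same parameters
right = layer(5.0)
print(left, right)                 # 7.0 11.0 -- one set of weights, two calls
```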

That concludes this walkthrough of implementing several classification networks in Keras. I hope it serves as a useful reference, and please continue to support 腳本之家.

