Thursday, March 8, 2018

Using a Keras Multilayer Perceptron (MLP) to Recognize Handwritten Digits (Part 2)

Yesterday I finally finished most of the Keras MLP digit-recognition tests, but the latter part of Chapter 7 of the book "TensorFlow+Keras 深度學習人工智慧實務應用" still has three experiments I had not done: increasing the number of hidden-layer neurons, adding Dropout to avoid overfitting, and adding a second hidden layer. Since the previous test log was already getting too long, the remaining tests are recorded here. The previous post is here:

# Using a Keras Multilayer Perceptron (MLP) to Recognize Handwritten Digits (Part 1)


1. Increasing the Number of Hidden-Layer Neurons and the Overfitting Problem:

The MLP in the previous Keras tests used 256 neurons (perceptrons) in the hidden layer. Would adding more neurons improve the recognition rate? In the tests below I compare how accuracy changes as the number of hidden-layer neurons doubles from 128 to 256, 512, and 1024. I modified the earlier show_train_history.py into the show_train_history_overfit.py shown below; this program takes one command-line argument that specifies the number of hidden-layer neurons.

#show_train_history_overfit.py
import sys
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
import matplotlib.pyplot as plt
import numpy as np

def show_train_history(train_history):
    fig=plt.gcf()
    fig.set_size_inches(16, 6)
    plt.subplot(121)
    plt.plot(train_history.history["acc"])
    plt.plot(train_history.history["val_acc"])
    plt.title("Train History")
    plt.xlabel("Epoch")
    plt.ylabel("Accuracy")
    plt.legend(["train", "validation"], loc="upper left")
    plt.subplot(122)
    plt.plot(train_history.history["loss"])
    plt.plot(train_history.history["val_loss"])
    plt.title("Train History")
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.legend(["train", "validation"], loc="upper left")
    plt.show()

#pre-processing
np.random.seed(10)
(x_train_image, y_train_label), (x_test_image, y_test_label)=mnist.load_data()
x_train=x_train_image.reshape(60000,784).astype('float32')
x_test=x_test_image.reshape(10000,784).astype('float32')
x_train_normalize=x_train/255
x_test_normalize=x_test/255
y_train_onehot=np_utils.to_categorical(y_train_label)
y_test_onehot=np_utils.to_categorical(y_test_label)

#create model
h=int(sys.argv[1])      #number of hidden-layer neurons, taken from the command line
model=Sequential()
model.add(Dense(units=h, input_dim=784, kernel_initializer='normal', activation='relu'))
model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))
model.summary()

#train model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
train_history=model.fit(x=x_train_normalize, y=y_train_onehot, validation_split=0.2, epochs=10, batch_size=200, verbose=2)

#show train history
show_train_history(train_history)
scores=model.evaluate(x_test_normalize, y_test_onehot)
print("Accuracy=", scores[1])
prediction=model.predict_classes(x_test_normalize)   #predict with the normalized test images (the model was trained on normalized data)
print(prediction)

This program uses the sys.argv list to read the number of hidden-layer neurons passed on the command line and uses it to set the units parameter of the dense_1 layer. Below I run training and prediction with 128, 256, 512, and 1024 hidden-layer neurons respectively:

D:\Python\test>python show_train_history_overfit.py 128
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 128)               100480
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
 - 2s - loss: 0.5381 - acc: 0.8631 - val_loss: 0.2576 - val_acc: 0.9299
Epoch 2/10
 - 2s - loss: 0.2375 - acc: 0.9328 - val_loss: 0.1953 - val_acc: 0.9443
Epoch 3/10
 - 1s - loss: 0.1831 - acc: 0.9477 - val_loss: 0.1640 - val_acc: 0.9539
Epoch 4/10
 - 2s - loss: 0.1476 - acc: 0.9573 - val_loss: 0.1435 - val_acc: 0.9592
Epoch 5/10
 - 2s - loss: 0.1232 - acc: 0.9647 - val_loss: 0.1268 - val_acc: 0.9639
Epoch 6/10
 - 2s - loss: 0.1049 - acc: 0.9701 - val_loss: 0.1210 - val_acc: 0.9633
Epoch 7/10
 - 1s - loss: 0.0901 - acc: 0.9749 - val_loss: 0.1143 - val_acc: 0.9654
Epoch 8/10
 - 2s - loss: 0.0792 - acc: 0.9779 - val_loss: 0.1053 - val_acc: 0.9687
Epoch 9/10
 - 1s - loss: 0.0698 - acc: 0.9805 - val_loss: 0.0993 - val_acc: 0.9700
Epoch 10/10
 - 1s - loss: 0.0611 - acc: 0.9828 - val_loss: 0.0951 - val_acc: 0.9706
10000/10000 [==============================] - 0s 38us/step
Accuracy= 0.9719
[7 2 1 ... 4 5 6]

Doubling the hidden-layer neurons to 256 (the configuration already tested in the previous post), the result is as follows:

D:\Python\test>python show_train_history_overfit.py 256
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 256)               200960
_________________________________________________________________
dense_2 (Dense)              (None, 10)                2570
=================================================================
Total params: 203,530
Trainable params: 203,530
Non-trainable params: 0
_________________________________________________________________
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
 - 3s - loss: 0.4380 - acc: 0.8829 - val_loss: 0.2181 - val_acc: 0.9406
Epoch 2/10
 - 2s - loss: 0.1909 - acc: 0.9457 - val_loss: 0.1556 - val_acc: 0.9560
Epoch 3/10
 - 2s - loss: 0.1356 - acc: 0.9617 - val_loss: 0.1259 - val_acc: 0.9648
Epoch 4/10
 - 2s - loss: 0.1028 - acc: 0.9700 - val_loss: 0.1120 - val_acc: 0.9679
Epoch 5/10
 - 2s - loss: 0.0813 - acc: 0.9772 - val_loss: 0.0983 - val_acc: 0.9718
Epoch 6/10
 - 2s - loss: 0.0661 - acc: 0.9817 - val_loss: 0.0937 - val_acc: 0.9719
Epoch 7/10
 - 2s - loss: 0.0545 - acc: 0.9850 - val_loss: 0.0912 - val_acc: 0.9740
Epoch 8/10
 - 2s - loss: 0.0461 - acc: 0.9876 - val_loss: 0.0829 - val_acc: 0.9765
Epoch 9/10
 - 2s - loss: 0.0381 - acc: 0.9903 - val_loss: 0.0820 - val_acc: 0.9766
Epoch 10/10
 - 2s - loss: 0.0317 - acc: 0.9918 - val_loss: 0.0802 - val_acc: 0.9767
10000/10000 [==============================] - 0s 50us/step
Accuracy= 0.9757 
[7 2 1 ... 4 5 6]

Doubling the hidden-layer neurons to 512, the result is as follows:

D:\Python\test>python show_train_history_overfit.py 512
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 512)               401920
_________________________________________________________________
dense_2 (Dense)              (None, 10)                5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
 - 6s - loss: 0.3553 - acc: 0.9017 - val_loss: 0.1737 - val_acc: 0.9524
Epoch 2/10
 - 6s - loss: 0.1454 - acc: 0.9581 - val_loss: 0.1225 - val_acc: 0.9646
Epoch 3/10
 - 6s - loss: 0.0995 - acc: 0.9713 - val_loss: 0.1026 - val_acc: 0.9703
Epoch 4/10
 - 6s - loss: 0.0721 - acc: 0.9796 - val_loss: 0.0943 - val_acc: 0.9721
Epoch 5/10
 - 5s - loss: 0.0543 - acc: 0.9848 - val_loss: 0.0822 - val_acc: 0.9746
Epoch 6/10
 - 5s - loss: 0.0417 - acc: 0.9884 - val_loss: 0.0771 - val_acc: 0.9768
Epoch 7/10
 - 5s - loss: 0.0321 - acc: 0.9911 - val_loss: 0.0803 - val_acc: 0.9762
Epoch 8/10
 - 5s - loss: 0.0249 - acc: 0.9941 - val_loss: 0.0717 - val_acc: 0.9787
Epoch 9/10
 - 5s - loss: 0.0190 - acc: 0.9957 - val_loss: 0.0749 - val_acc: 0.9780
Epoch 10/10
 - 5s - loss: 0.0149 - acc: 0.9970 - val_loss: 0.0723 - val_acc: 0.9786
10000/10000 [==============================] - 1s 84us/step
Accuracy= 0.9785 
[7 2 1 ... 4 5 6]


And with 1024 passed in, the result is:

D:\Python\test>python show_train_history_overfit.py 1024
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 1024)              803840
_________________________________________________________________
dense_2 (Dense)              (None, 10)                10250
=================================================================
Total params: 814,090
Trainable params: 814,090
Non-trainable params: 0
_________________________________________________________________
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
 - 10s - loss: 0.2884 - acc: 0.9175 - val_loss: 0.1491 - val_acc: 0.9585
Epoch 2/10
 - 10s - loss: 0.1152 - acc: 0.9664 - val_loss: 0.1067 - val_acc: 0.9687
Epoch 3/10
 - 10s - loss: 0.0734 - acc: 0.9788 - val_loss: 0.0892 - val_acc: 0.9733
Epoch 4/10
 - 10s - loss: 0.0501 - acc: 0.9855 - val_loss: 0.0878 - val_acc: 0.9732
Epoch 5/10
 - 10s - loss: 0.0346 - acc: 0.9908 - val_loss: 0.0762 - val_acc: 0.9758
Epoch 6/10
 - 10s - loss: 0.0242 - acc: 0.9938 - val_loss: 0.0755 - val_acc: 0.9779
Epoch 7/10
 - 10s - loss: 0.0177 - acc: 0.9957 - val_loss: 0.0747 - val_acc: 0.9792
Epoch 8/10
 - 10s - loss: 0.0126 - acc: 0.9971 - val_loss: 0.0725 - val_acc: 0.9791
Epoch 9/10
 - 9s - loss: 0.0079 - acc: 0.9990 - val_loss: 0.0721 - val_acc: 0.9794
Epoch 10/10
 - 9s - loss: 0.0056 - acc: 0.9993 - val_loss: 0.0709 - val_acc: 0.9809
10000/10000 [==============================] - 1s 133us/step
Accuracy= 0.9818   
[7 2 1 ... 4 5 6]
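
As an aside, rather than invoking the script once per size from the command line, the four configurations could also be compared in a single run. The sketch below is only an illustration (it is not part of the book's code); it repeats the same preprocessing and model setup as the program above and collects the test accuracy for each hidden-layer size:

#compare_hidden_sizes.py -- hypothetical convenience script, not from the book
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

np.random.seed(10)
(x_train_image, y_train_label), (x_test_image, y_test_label)=mnist.load_data()
x_train_normalize=x_train_image.reshape(60000,784).astype('float32')/255
x_test_normalize=x_test_image.reshape(10000,784).astype('float32')/255
y_train_onehot=np_utils.to_categorical(y_train_label)
y_test_onehot=np_utils.to_categorical(y_test_label)

results={}
for h in [128, 256, 512, 1024]:
    model=Sequential()
    model.add(Dense(units=h, input_dim=784, kernel_initializer='normal', activation='relu'))
    model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(x=x_train_normalize, y=y_train_onehot, validation_split=0.2,
              epochs=10, batch_size=200, verbose=0)
    results[h]=model.evaluate(x_test_normalize, y_test_onehot, verbose=0)[1]   #test accuracy
print(results)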

The tenth-epoch training results and the prediction accuracy for the four hidden-layer sizes above are summarized in the following table:


 hidden   acc (10th epoch)   val_acc (10th epoch)   acc (predict)
 128      0.9828             0.9706                 0.9719
 256      0.9918             0.9767                 0.9757 (+0.39%)
 512      0.9970             0.9786                 0.9785 (+0.29%)
 1024     0.9993             0.9809                 0.9818 (+0.34%)


This shows that the more hidden-layer neurons there are, the higher the accuracy on both the training and validation samples. However, in the later epochs the training accuracy exceeds the validation accuracy by a wide margin, which indicates worsening overfitting; no wonder the prediction accuracy on the test samples improves only slightly.
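
A simple way to quantify this is to print the per-epoch gap between training and validation accuracy; the following minimal sketch could be appended to the program above, where train_history is the object returned by model.fit():

#print the per-epoch gap between training and validation accuracy (sketch)
acc=train_history.history["acc"]
val_acc=train_history.history["val_acc"]
for epoch, (a, v) in enumerate(zip(acc, val_acc), start=1):
    print("Epoch %2d  acc=%.4f  val_acc=%.4f  gap=%+.4f" % (epoch, a, v, a - v))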

Putting the training-accuracy plots of the four configurations side by side makes this easier to see:

Hidden-layer neurons = 128

Hidden-layer neurons = 256

Hidden-layer neurons = 512

Hidden-layer neurons = 1024


Evidently, the more hidden-layer neurons, the sooner the training accuracy overtakes the validation accuracy (for example, with 1024 neurons the training accuracy already exceeds the validation accuracy by the third epoch, whereas with 128 neurons it takes until the fifth), which means the overfitting is more pronounced. Moreover, the more neurons, the longer the program takes to run.
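
The growth in runtime is easy to understand from the parameter counts printed by model.summary(): with h hidden neurons this single-hidden-layer MLP has 784*h + h parameters in dense_1 and h*10 + 10 in dense_2, i.e. 795*h + 10 in total. A quick check against the four summaries above:

#verify the "Total params" figures reported by model.summary() above
for h in [128, 256, 512, 1024]:
    dense_1=784*h + h     #hidden-layer weights + biases
    dense_2=h*10 + 10     #output-layer weights + biases
    print(h, dense_1 + dense_2)   #-> 101770, 203530, 407050, 814090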


2. Using the Dropout Layer to Mitigate the Overfitting Problem:

The keras.layers module provides a Dropout layer that randomly drops a fraction of the hidden-layer neurons during each training pass (the drop rate is passed in as a parameter), which helps mitigate overfitting. It is used as follows:

from keras.layers import Dropout         #import the Dropout layer
model.add(Dropout(0.5))                  #fraction of hidden-layer neurons to drop during training
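
Conceptually, what Dropout does during training can be illustrated with a few lines of NumPy. This is only a sketch of the usual "inverted dropout" idea, not the actual Keras implementation: each activation is kept with probability 1-rate, dropped neurons are set to zero, and the kept ones are rescaled so the expected output stays the same; at prediction time nothing is dropped.

#conceptual illustration of (inverted) dropout -- not Keras internals
import numpy as np
rate=0.5                                                #fraction of neurons to drop
activations=np.array([0.2, 1.5, 0.7, 0.9, 1.1, 0.3])    #hypothetical hidden-layer outputs
mask=np.random.rand(activations.shape[0]) >= rate       #keep each neuron with probability 1-rate
print(activations*mask/(1-rate))                        #dropped neurons become 0, kept ones are rescaled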

I added Dropout to the program above and rewrote it as show_train_history_dropout.py below. Note that model.add(Dropout(0.5)) must be placed between the add() calls for the hidden layer and the output layer:


#show_train_history_dropout.py
import sys
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout 
from keras.utils import np_utils
import matplotlib.pyplot as plt
import numpy as np

def show_train_history(train_history):
    fig=plt.gcf()
    fig.set_size_inches(16, 6)
    plt.subplot(121)
    plt.plot(train_history.history["acc"])
    plt.plot(train_history.history["val_acc"])
    plt.title("Train History")
    plt.xlabel("Epoch")
    plt.ylabel("Accuracy")
    plt.legend(["train", "validation"], loc="upper left")
    plt.subplot(122)
    plt.plot(train_history.history["loss"])
    plt.plot(train_history.history["val_loss"])
    plt.title("Train History")
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.legend(["train", "validation"], loc="upper left")
    plt.show()

#pre-processing
np.random.seed(10)
(x_train_image, y_train_label), (x_test_image, y_test_label)=mnist.load_data()
x_train=x_train_image.reshape(60000,784).astype('float32')
x_test=x_test_image.reshape(10000,784).astype('float32')
x_train_normalize=x_train/255
x_test_normalize=x_test/255
y_train_onehot=np_utils.to_categorical(y_train_label)
y_test_onehot=np_utils.to_categorical(y_test_label)

#create model
h=int(sys.argv[1])
model=Sequential()
model.add(Dense(units=h, input_dim=784, kernel_initializer='normal', activation='relu'))
model.add(Dropout(0.5))      #Dropout must be inserted between the hidden layer and the output layer
model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))
model.summary()

#train model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
train_history=model.fit(x=x_train_normalize, y=y_train_onehot, validation_split=0.2, epochs=10, batch_size=200, verbose=2)

#show train history
show_train_history(train_history)
scores=model.evaluate(x_test_normalize, y_test_onehot)
print("Accuracy=", scores[1])
prediction=model.predict_classes(x_test_normalize)   #use the normalized test images
print(prediction)


The execution result is as follows:

D:\Python\test>python show_train_history_dropout.py 1024
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 1024)              803840
_________________________________________________________________
dropout_1 (Dropout)          (None, 1024)              0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                10250
=================================================================
Total params: 814,090
Trainable params: 814,090
Non-trainable params: 0
_________________________________________________________________
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
 - 12s - loss: 0.3472 - acc: 0.8956 - val_loss: 0.1585 - val_acc: 0.9566
Epoch 2/10
 - 11s - loss: 0.1562 - acc: 0.9547 - val_loss: 0.1154 - val_acc: 0.9647
Epoch 3/10
 - 11s - loss: 0.1149 - acc: 0.9660 - val_loss: 0.0961 - val_acc: 0.9726
Epoch 4/10
 - 11s - loss: 0.0892 - acc: 0.9735 - val_loss: 0.0915 - val_acc: 0.9732
Epoch 5/10
 - 12s - loss: 0.0749 - acc: 0.9775 - val_loss: 0.0802 - val_acc: 0.9761
Epoch 6/10
 - 12s - loss: 0.0626 - acc: 0.9811 - val_loss: 0.0753 - val_acc: 0.9782
Epoch 7/10
 - 12s - loss: 0.0550 - acc: 0.9826 - val_loss: 0.0732 - val_acc: 0.9783
Epoch 8/10
 - 12s - loss: 0.0460 - acc: 0.9856 - val_loss: 0.0752 - val_acc: 0.9782
Epoch 9/10
 - 12s - loss: 0.0404 - acc: 0.9873 - val_loss: 0.0687 - val_acc: 0.9803
Epoch 10/10
 - 12s - loss: 0.0370 - acc: 0.9879 - val_loss: 0.0684 - val_acc: 0.9804 
10000/10000 [==============================] - 1s 123us/step
Accuracy= 0.9811
[7 2 1 ... 4 5 6]

Although the prediction accuracy drops slightly with Dropout (0.9818 -> 0.9811), the overfitting is clearly reduced. The table below compares training, validation, and prediction accuracy with and without dropping part of the neurons; even with Dropout the prediction accuracy remains at a good level:


 hidden                 acc (10th epoch)   val_acc (10th epoch)   acc (predict)
 256                    0.9918             0.9767                 0.9757
 512                    0.9970             0.9786                 0.9785
 1024 (no dropout)      0.9993             0.9809                 0.9818
 1024 (with dropout)    0.9879             0.9804                 0.9811


Comparing the training-history plots also shows that the gap between the training accuracy acc and the validation accuracy val_acc has narrowed, confirming that overfitting is reduced after adding Dropout:


Hidden-layer neurons = 1024 (no dropout)

Hidden-layer neurons = 1024 (with dropout)



3. Increasing the Hidden Layers to Two:

The tests above show that increasing the number of hidden-layer neurons improves prediction accuracy but also brings overfitting, while adding Dropout strikes a compromise between the two. That being so, if the hidden layers are increased to two, again with Dropout, can the prediction accuracy be pushed higher and overfitting improved further?

I added a second hidden layer to the program above, with both hidden layers having 1024 neurons followed by Dropout, and modified it into the following program:

#show_train_history_2hiddens_dropouts.py
import sys
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
import matplotlib.pyplot as plt
import numpy as np

def show_train_history(train_history):
    fig=plt.gcf()
    fig.set_size_inches(16, 6)
    plt.subplot(121)
    plt.plot(train_history.history["acc"])
    plt.plot(train_history.history["val_acc"])
    plt.title("Train History")
    plt.xlabel("Epoch")
    plt.ylabel("Accuracy")
    plt.legend(["train", "validation"], loc="upper left")
    plt.subplot(122)
    plt.plot(train_history.history["loss"])
    plt.plot(train_history.history["val_loss"])
    plt.title("Train History")
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.legend(["train", "validation"], loc="upper left")
    plt.show()

#pre-processing
np.random.seed(10)
(x_train_image, y_train_label), (x_test_image, y_test_label)=mnist.load_data()
x_train=x_train_image.reshape(60000,784).astype('float32')
x_test=x_test_image.reshape(10000,784).astype('float32')
x_train_normalize=x_train/255
x_test_normalize=x_test/255
y_train_onehot=np_utils.to_categorical(y_train_label)
y_test_onehot=np_utils.to_categorical(y_test_label)

#create model
h=int(sys.argv[1])
model=Sequential()
model.add(Dense(units=h, input_dim=784, kernel_initializer='normal', activation='relu'))   #first hidden layer
model.add(Dropout(0.5))
model.add(Dense(units=h, kernel_initializer='normal', activation='relu'))   #second hidden layer
model.add(Dropout(0.5)) 
model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))  #output layer
model.summary()

#train model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
train_history=model.fit(x=x_train_normalize, y=y_train_onehot, validation_split=0.2, epochs=10, batch_size=200, verbose=2)

#show train history
show_train_history(train_history)
scores=model.evaluate(x_test_normalize, y_test_onehot)
print("Accuracy=", scores[1])
prediction=model.predict_classes(x_test_normalize)   #use the normalized test images
print(prediction)

Note that both hidden layers use ReLU as their activation function. The execution result is as follows:

D:\Python\test>python show_train_history_2hiddens_dropouts.py 1024
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 1024)              803840
_________________________________________________________________
dropout_1 (Dropout)          (None, 1024)              0
_________________________________________________________________
dense_2 (Dense)              (None, 1024)              1049600 
_________________________________________________________________
dropout_2 (Dropout)          (None, 1024)              0
_________________________________________________________________
dense_3 (Dense)              (None, 10)                10250
=================================================================
Total params: 1,863,690
Trainable params: 1,863,690
Non-trainable params: 0
_________________________________________________________________
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
 - 27s - loss: 0.3576 - acc: 0.8881 - val_loss: 0.1292 - val_acc: 0.9618
Epoch 2/10
 - 26s - loss: 0.1546 - acc: 0.9530 - val_loss: 0.1003 - val_acc: 0.9698
Epoch 3/10
 - 26s - loss: 0.1157 - acc: 0.9635 - val_loss: 0.0900 - val_acc: 0.9723
Epoch 4/10
 - 26s - loss: 0.0992 - acc: 0.9691 - val_loss: 0.0921 - val_acc: 0.9714
Epoch 5/10
 - 26s - loss: 0.0860 - acc: 0.9729 - val_loss: 0.0743 - val_acc: 0.9777
Epoch 6/10
 - 26s - loss: 0.0733 - acc: 0.9766 - val_loss: 0.0748 - val_acc: 0.9775
Epoch 7/10
 - 26s - loss: 0.0671 - acc: 0.9781 - val_loss: 0.0774 - val_acc: 0.9772
Epoch 8/10
 - 26s - loss: 0.0621 - acc: 0.9800 - val_loss: 0.0812 - val_acc: 0.9776
Epoch 9/10
 - 26s - loss: 0.0571 - acc: 0.9813 - val_loss: 0.0739 - val_acc: 0.9799
Epoch 10/10
 - 26s - loss: 0.0510 - acc: 0.9838 - val_loss: 0.0759 - val_acc: 0.9800
10000/10000 [==============================] - 3s 251us/step
Accuracy= 0.9808
[7 2 1 ... 4 5 6]

The accuracy comparison table is as follows; the prediction accuracy of 0.9808 differs only marginally from the single-hidden-layer value of 0.9811:


 hidden                    acc (10th epoch)   val_acc (10th epoch)   acc (predict)
 256                       0.9918             0.9767                 0.9757
 512                       0.9970             0.9786                 0.9785
 1024 (no dropout)         0.9993             0.9809                 0.9818
 1024 (with dropout)       0.9879             0.9804                 0.9811
 1024 x 2 (with dropout)   0.9838             0.9800                 0.9808


Judging from the training-accuracy plots, with two hidden layers the gap between the training and validation samples shrinks noticeably from the middle of training onward, so the overfitting problem is greatly improved:


Single hidden layer, 1024 neurons (with dropout)

Two hidden layers, 1024 neurons each (with dropout)


However, the two-hidden-layer design also makes the number of parameters balloon, so the program naturally runs even slower.
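
The increase is easy to verify from the model.summary() output above: the second 1024-neuron layer alone contributes 1024*1024 + 1024 = 1,049,600 parameters, more than the rest of the network combined.

#parameter count of the two-hidden-layer model (matches the model.summary() output above)
h=1024
dense_1=784*h + h      #->   803,840
dense_2=h*h + h        #-> 1,049,600
dense_3=h*10 + 10      #->    10,250
print(dense_1 + dense_2 + dense_3)   #-> 1,863,690  (vs 814,090 for the single-hidden-layer model)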
