Previous test logs in this series:
# Windows 安裝深度學習框架 TensorFlow 與 Keras
# 使用 Keras 測試 MNIST 手寫數字辨識資料集
# 使用 Keras 多層感知器 MLP 辨識手寫數字 (一)
# 使用 Keras 多層感知器 MLP 辨識手寫數字 (二)
# 使用 Keras 卷積神經網路 (CNN) 辨識手寫數字
# 使用 Keras 測試 Cifar-10 圖片資料集
# 使用 Keras 多層感知器 MLP 辨識 Cifar-10 圖片
The following tests are based on Chapter 10 of the book "TensorFlow+Keras 深度學習人工智慧實務應用" by 林大貴, with the results recorded below. The model structure is described below (see the model summary in step 3):
To mitigate overfitting, a dropout layer that discards 25% of the neurons is added after every parameterized layer.
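Note that dropout is only active during training; at inference time Keras passes activations through unchanged. A minimal sketch of my own (not from the book) illustrating this:
import numpy as np
from keras.models import Sequential
from keras.layers import Dropout
m=Sequential()
m.add(Dropout(0.25, input_shape=(8,)))   # drops 25% of inputs, but only while training
x=np.ones((1, 8), dtype='float32')
print(m.predict(x))                      # predict() runs in inference mode: all ones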
1. Load the dataset:
Import numpy and the cifar10 module, then load the Cifar-10 dataset:
D:\test>python
Python 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 18:41:36) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.random.seed(10)
>>> from keras.datasets import cifar10
Using TensorFlow backend.
>>> (x_train_image, y_train_label), (x_test_image, y_test_label)=cifar10.load_data()
Display the shape of each dataset array:
>>> x_train_image.shape
(50000, 32, 32, 3)
>>> x_test_image.shape
(10000, 32, 32, 3)
>>> y_train_label.shape
(50000, 1)
>>> y_test_label.shape
(10000, 1)
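The integer labels 0-9 correspond to the ten Cifar-10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. A small helper of my own (the label_dict name is my choice, not from the book) to view one sample with its class name:
import matplotlib.pyplot as plt
label_dict={0:'airplane',1:'automobile',2:'bird',3:'cat',4:'deer',
            5:'dog',6:'frog',7:'horse',8:'ship',9:'truck'}
plt.imshow(x_train_image[0])                # first training image
plt.title(label_dict[y_train_label[0][0]])  # look up its class name
plt.show()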
2. Data preprocessing:
Image preprocessing: normalize the pixel values to the range [0, 1]:
>>> x_train_normalize=x_train_image.astype('float32')/255.0
>>> x_test_normalize=x_test_image.astype('float32')/255.0
Label preprocessing: one-hot encoding (requires keras.utils.np_utils):
>>> from keras.utils import np_utils
>>> y_train_onehot=np_utils.to_categorical(y_train_label)
>>> y_test_onehot=np_utils.to_categorical(y_test_label)
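A quick sanity check of my own on the preprocessing results: pixel values should now lie in [0, 1], and each label row should be a 10-element one-hot vector:
print(x_train_normalize.min(), x_train_normalize.max())  # expect 0.0 1.0
print(y_train_onehot.shape)                               # expect (50000, 10)
print(y_train_onehot[0])                                  # one-hot row for the first label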
3. Build the model:
Import the relevant classes from keras.models and keras.layers:
>>> from keras.models import Sequential
>>> from keras.layers import Dense,Dropout,Flatten,Conv2D,MaxPooling2D
>>> from keras.layers import ZeroPadding2D,Activation
Create a linear-stack (Sequential) model and add two convolution layers (each followed by a 25% dropout layer) plus max-pooling layers:
>>> model=Sequential()
>>> model.add(Conv2D(filters=32,
... kernel_size=(3,3),
... padding='same',
... input_shape=(32,32,3),
... activation='relu'))
>>>
>>> model.add(Dropout(0.25))
>>> model.add(MaxPooling2D(pool_size=(2, 2)))
>>> model.add(Conv2D(filters=64,
... kernel_size=(3,3),
... padding='same',
... activation='relu'))
>>>
>>> model.add(Dropout(0.25))
>>> model.add(MaxPooling2D(pool_size=(2, 2)))
Build the classification part (MLP): a flatten layer + a hidden layer (1024 neurons) + an output layer (10 neurons):
>>> model.add(Flatten())
>>> model.add(Dropout(0.25))
>>> model.add(Dense(1024,activation='relu'))
>>> model.add(Dropout(0.25))
>>> model.add(Dense(10,activation='softmax'))
View the model summary:
>>> print(model.summary())
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 32, 32, 32) 896
_________________________________________________________________
dropout_1 (Dropout) (None, 32, 32, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 16, 16, 64) 18496
_________________________________________________________________
dropout_2 (Dropout) (None, 16, 16, 64) 0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4096) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_1 (Dense) (None, 1024) 4195328
_________________________________________________________________
dropout_4 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 10250
=================================================================
Total params: 4,224,970
Trainable params: 4,224,970
Non-trainable params: 0
_________________________________________________________________
None
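The Param # column above can be verified by hand; my own arithmetic (weights plus biases):
print(3*3*3*32+32)     # conv2d_1: 3x3 kernel x 3 input channels x 32 filters + 32 biases = 896
print(3*3*32*64+64)    # conv2d_2: 3x3 kernel x 32 channels x 64 filters + 64 biases = 18496
print(8*8*64)          # flatten_1: 8x8 feature maps x 64 channels = 4096 values
print(4096*1024+1024)  # dense_1: 4096 inputs x 1024 neurons + 1024 biases = 4195328
print(1024*10+10)      # dense_2: 1024 inputs x 10 neurons + 10 biases = 10250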
4. Compile and train the model:
>>> model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
>>> train_history=model.fit(x=x_train_normalize, y=y_train_onehot, validation_split=0.2, epochs=10, batch_size=128,verbose=2)
Train on 40000 samples, validate on 10000 samples
Epoch 1/10
2018-04-15 16:53:40.208119: I C:\tf_jenkins\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
- 481s - loss: 1.5019 - acc: 0.4593 - val_loss: 1.2868 - val_acc: 0.5823
Epoch 2/10
- 461s - loss: 1.1382 - acc: 0.5950 - val_loss: 1.1190 - val_acc: 0.6325
Epoch 3/10
- 460s - loss: 0.9844 - acc: 0.6561 - val_loss: 1.0265 - val_acc: 0.6568
Epoch 4/10
- 462s - loss: 0.8762 - acc: 0.6896 - val_loss: 0.9486 - val_acc: 0.6900
Epoch 5/10
- 459s - loss: 0.7851 - acc: 0.7235 - val_loss: 0.8822 - val_acc: 0.7088
Epoch 6/10
- 460s - loss: 0.7018 - acc: 0.7546 - val_loss: 0.8280 - val_acc: 0.7274
Epoch 7/10
- 468s - loss: 0.6218 - acc: 0.7801 - val_loss: 0.8152 - val_acc: 0.7258
Epoch 8/10
- 622s - loss: 0.5555 - acc: 0.8041 - val_loss: 0.7733 - val_acc: 0.7435
Epoch 9/10
- 563s - loss: 0.4856 - acc: 0.8307 - val_loss: 0.7948 - val_acc: 0.7310
Epoch 10/10
- 554s - loss: 0.4308 - acc: 0.8474 - val_loss: 0.7525 - val_acc: 0.7431
Heh, the CNN really works: training accuracy reached 0.8474.
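Each epoch took roughly 8 minutes on this CPU, so it is worth saving the trained weights to avoid retraining (my own addition, not from the book; requires the h5py package, and the file name is my own choice):
model.save_weights('cifar10_cnn_weights.h5')
# reload later with: model.load_weights('cifar10_cnn_weights.h5')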
Plot the training history:
>>> def show_train_history(train_history):
... fig=plt.gcf()
... fig.set_size_inches(16, 6)
... plt.subplot(121)
... plt.plot(train_history.history["acc"])
... plt.plot(train_history.history["val_acc"])
... plt.title("Train History")
... plt.xlabel("Epoch")
... plt.ylabel("Accuracy")
... plt.legend(["train", "validation"], loc="upper left")
... plt.subplot(122)
... plt.plot(train_history.history["loss"])
... plt.plot(train_history.history["val_loss"])
... plt.title("Train History")
... plt.xlabel("Epoch")
... plt.ylabel("Loss")
... plt.legend(["train", "validation"], loc="upper left")
... plt.show()
...
>>> import matplotlib.pyplot as plt
>>> show_train_history(train_history)
5. Evaluate prediction accuracy:
>>> scores=model.evaluate(x_test_normalize, y_test_onehot)
10000/10000 [==============================] - 35s 3ms/step
>>> print("Accuracy=", scores)
Accuracy= [0.7632420552253724, 0.74]
>>> print("Accuracy=", scores[1])
Accuracy= 0.74
About the same as the validation-set result.
6. Predict the test-set images:
>>> prediction=model.predict_classes(x_test_normalize)
>>> print(prediction)
[3 8 8 ... 5 4 7]
>>> print(prediction[:10])
[3 8 8 0 6 6 1 6 3 1]
>>> print(y_test_label[:10])
[[3]
[8]
[8]
[0]
[6]
[6]
[1]
[6]
[3]
[1]]
Flatten the labels for easier comparison:
label      : 3 8 8 0 6 6 1 6 3 1
prediction : 3 8 8 0 6 6 1 6 3 1
The first 10 test images are all predicted correctly.
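The flattened comparison above can be produced like this (my own snippet, using reshape(-1) as in step 7 below):
print('label      :', y_test_label[:10].reshape(-1))
print('prediction :', prediction[:10])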
The commands above can be consolidated into the following script, runnable from the command line:
#show_cifar10_cnn_predict.py
#Load the dataset
import time
start=time.time()
import numpy as np
np.random.seed(10)
import matplotlib.pyplot as plt
from keras.datasets import cifar10
(x_train_image, y_train_label), (x_test_image, y_test_label)=cifar10.load_data()
#Preprocess the data
x_train_normalize=x_train_image.astype('float32')/255.0
x_test_normalize=x_test_image.astype('float32')/255.0
from keras.utils import np_utils
y_train_onehot=np_utils.to_categorical(y_train_label)
y_test_onehot=np_utils.to_categorical(y_test_label)
#Build the model:
#two convolution layers (each followed by 25% dropout) + pooling layers
from keras.models import Sequential
from keras.layers import Dense,Dropout,Flatten,Conv2D,MaxPooling2D
model=Sequential()
model.add(Conv2D(filters=32,
                 kernel_size=(3,3),
                 padding='same',
                 input_shape=(32,32,3),
                 activation='relu'))
model.add(Dropout(0.25))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64,
                 kernel_size=(3,3),
                 padding='same',
                 activation='relu'))
model.add(Dropout(0.25))
model.add(MaxPooling2D(pool_size=(2, 2)))
#Classification part (MLP)
model.add(Flatten())
model.add(Dropout(0.25))
model.add(Dense(1024,activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(10,activation='softmax'))
model.summary()
#Compile and train the model
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
train_history=model.fit(x=x_train_normalize, y=y_train_onehot, validation_split=0.2, epochs=10, batch_size=128, verbose=2)
elapsed=time.time()-start
print("Training time=" + str(elapsed) + " Seconds")
#Plot the training history (the function must be defined before it is called)
def show_train_history(train_history):
    fig=plt.gcf()
    fig.set_size_inches(16, 6)
    plt.subplot(121)
    plt.plot(train_history.history["acc"])
    plt.plot(train_history.history["val_acc"])
    plt.title("Train History")
    plt.xlabel("Epoch")
    plt.ylabel("Accuracy")
    plt.legend(["train", "validation"], loc="upper left")
    plt.subplot(122)
    plt.plot(train_history.history["loss"])
    plt.plot(train_history.history["val_loss"])
    plt.title("Train History")
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.legend(["train", "validation"], loc="upper left")
    plt.show()
show_train_history(train_history)
#Evaluate prediction accuracy
scores=model.evaluate(x_test_normalize, y_test_onehot)
print("Accuracy=", scores[1])
#Predict the test-set images
prediction=model.predict_classes(x_test_normalize)
print(prediction)
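Run it from the command line:
D:\test>python show_cifar10_cnn_predict.py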
7. Build a confusion matrix:
Note that y_test_label here is a 2-D array and must be flattened to a 1-D array with reshape(-1):
>>> import pandas as pd
>>> pd.crosstab(y_test_label.reshape(-1), prediction, rownames=['label'],colnames=['predict'])
predict 0 1 2 3 4 5 6 7 8 9
label
0 804 9 64 14 11 7 15 4 45 27
1 16 808 21 20 4 7 15 2 21 86
2 47 2 678 31 75 69 71 14 8 5
3 22 6 98 478 44 213 99 20 7 13
4 18 1 86 51 708 38 61 29 7 1
5 12 1 64 121 32 692 46 24 4 4
6 4 5 39 27 16 24 883 1 1 0
7 16 1 66 26 67 84 12 720 1 7
8 52 29 29 14 7 10 17 3 822 17
9 39 55 24 14 1 11 10 10 29 807
>>> y_test_label.reshape(-1)
array([3, 8, 8, ..., 5, 1, 7])
In the Cifar-10 test set, class 3 (cat) misclassified as class 5 (dog) is the most frequent error (213 times); next is the reverse, class 5 (dog) misclassified as class 3 (cat), 121 times; third is class 3 (cat) misclassified as class 6 (frog), 99 times; fourth is class 3 (cat) misclassified as class 2 (bird), 98 times.
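Relabeling the crosstab with the class names (my own addition) makes the cat/dog/frog confusion easier to read:
import pandas as pd
names=['airplane','automobile','bird','cat','deer',
       'dog','frog','horse','ship','truck']
ct=pd.crosstab(y_test_label.reshape(-1), prediction,
               rownames=['label'], colnames=['predict'])
ct.index=names     # rows: true classes
ct.columns=names   # columns: predicted classes
print(ct)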
These tests show that using CNN convolutions to extract spatial image features first, then classifying with an MLP, substantially improves prediction accuracy: from roughly 0.45 (MLP alone) to about 0.74, an improvement of about 64%.
References:
# Deep learning for complete beginners: convolutional neural networks with keras
# 使用 Keras 測試 Cifar-10 圖片資料集
# 使用 Keras 多層感知器 MLP 辨識 Cifar-10 圖片