Sunday, May 13, 2018

Recognizing Handwritten Digits with a Keras Multilayer Perceptron (MLP) on the Raspberry Pi

I have been busy with web crawlers lately, so machine learning has been on hold. Today, while connected to my Pi 3 from the countryside, it occurred to me that I had never tested whether Keras machine learning actually runs on the Raspberry Pi. The test should not take long, since I only had to repeat the steps I used earlier on Windows 10. See:

Recognizing Handwritten Digits with a Keras Multilayer Perceptron (MLP), Part 1

The test log is as follows:

pi@raspberrypi:~ $ python3 
Python 3.4.2 (default, Oct 19 2014, 13:31:11)
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from keras.datasets import mnist 
Using TensorFlow backend.
>>> from keras.utils import np_utils 
>>> import numpy as np
>>> np.random.seed(10) 
>>> from keras.datasets import mnist 
>>> (x_train_image, y_train_label), (x_test_image, y_test_label)=mnist.load_data() 
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 54s 5us/step
>>> x_train=x_train_image.reshape(60000,784).astype('float32') 
>>> x_test=x_test_image.reshape(10000,784).astype('float32') 
>>> x_train_normalize=x_train/255 
>>> x_test_normalize=x_test/255 
>>> y_train_onehot=np_utils.to_categorical(y_train_label) 
>>> y_test_onehot=np_utils.to_categorical(y_test_label) 
>>> from keras.models import Sequential 
>>> from keras.layers import Dense 
>>> model=Sequential() 
>>> model.add(Dense(units=256, input_dim=784, kernel_initializer='normal', activation='relu')) 
>>> model.add(Dense(units=10, kernel_initializer='normal', activation='softmax')) 
>>> model.summary() 
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 256)               200960
_________________________________________________________________
dense_2 (Dense)              (None, 10)                2570
=================================================================
Total params: 203,530
Trainable params: 203,530
Non-trainable params: 0
_________________________________________________________________
>>> model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])   
>>> import time 
>>> start=time.time() 
>>> train_history=model.fit(x=x_train_normalize, y=y_train_onehot, validation_split=0.2, epochs=10, batch_size=200, verbose=2)   
Train on 48000 samples, validate on 12000 samples 
Epoch 1/10
 - 30s - loss: 0.4379 - acc: 0.8829 - val_loss: 0.2183 - val_acc: 0.9409
Epoch 2/10
 - 25s - loss: 0.1910 - acc: 0.9453 - val_loss: 0.1560 - val_acc: 0.9558
Epoch 3/10
 - 26s - loss: 0.1354 - acc: 0.9615 - val_loss: 0.1261 - val_acc: 0.9651
Epoch 4/10
 - 26s - loss: 0.1028 - acc: 0.9703 - val_loss: 0.1119 - val_acc: 0.9677
Epoch 5/10
 - 26s - loss: 0.0811 - acc: 0.9772 - val_loss: 0.0982 - val_acc: 0.9713
Epoch 6/10
 - 26s - loss: 0.0659 - acc: 0.9818 - val_loss: 0.0936 - val_acc: 0.9722
Epoch 7/10
 - 26s - loss: 0.0543 - acc: 0.9851 - val_loss: 0.0907 - val_acc: 0.9733
Epoch 8/10
 - 25s - loss: 0.0457 - acc: 0.9879 - val_loss: 0.0832 - val_acc: 0.9763
Epoch 9/10
 - 26s - loss: 0.0381 - acc: 0.9903 - val_loss: 0.0820 - val_acc: 0.9764
Epoch 10/10
 - 26s - loss: 0.0316 - acc: 0.9918 - val_loss: 0.0804 - val_acc: 0.9764 
>>> elapsed=time.time()-start 
>>> elapsed
330.68928050994873 
>>> train_history.history["loss"] 
[0.43788781948387623, 0.19102759423355262, 0.13539413533483943, 0.10275283302180468, 0.08105092053301632, 0.06594810921233148, 0.054330366919748484, 0.045713288674596696, 0.03806607090712835, 0.031598116053889194]
>>> train_history.history["acc"] 
[0.8828958324156702, 0.945312496771415, 0.9614999994635582, 0.970291672150294, 0.9771875118215879, 0.981812513868014, 0.9850625117619832, 0.9878958443800608, 0.9903125089903673, 0.9917500078678131]
>>> train_history.history["val_loss"] 
[0.2182527129848798, 0.15599788756420216, 0.12607369947557648, 0.11187813809762398, 0.09817015854641795, 0.09359629011402527, 0.09074133434332907, 0.08321564545234045, 0.08195884495507925, 0.08039293085457758]
>>> train_history.history["val_acc"] 
[0.9409166673819224, 0.9558333347241084, 0.9650833368301391, 0.9676666716734569, 0.9712500085433324, 0.9721666753292084, 0.973333340883255, 0.976250006755193, 0.976416677236557, 0.9764166762431462]
>>> scores=model.evaluate(x_test_normalize, y_test_onehot)
10000/10000 [==============================] - 5s 475us/step
>>> print("Accuracy=", scores) 
Accuracy= [0.07520145836151206, 0.9761]
>>> prediction=model.predict_classes(x_test_normalize) 
>>> print(prediction) 
[7 2 1 ... 4 5 6] 
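
Since train_history.history stores each metric as a plain Python list (as the dumps above show), the learning curves can be plotted directly. Below is a minimal sketch; it assumes matplotlib is installed (it is not part of the steps above) and renders to PNG files, because this Pi session runs headless over SSH:

import matplotlib
matplotlib.use('Agg')  # no display attached over SSH; render to files instead
import matplotlib.pyplot as plt

def plot_history(history, train_key, val_key, filename):
    # Plot a training-set metric against its validation-set counterpart
    plt.figure()
    plt.plot(history.history[train_key], label='train')
    plt.plot(history.history[val_key], label='validation')
    plt.xlabel('epoch')
    plt.ylabel(train_key)
    plt.legend()
    plt.savefig(filename)

plot_history(train_history, 'acc', 'val_acc', 'accuracy.png')
plot_history(train_history, 'loss', 'val_loss', 'loss.png')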

As the log shows, the MLP reaches about 0.976 accuracy on the test set, roughly the same as the result on Windows 10. The timer set up before training shows that the 10 training epochs took about 330 seconds, a little over 5 minutes, which is not slow at all. Running Keras + TensorFlow machine learning on a Pi 3 is therefore quite feasible.
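
For later runs it is handy to have the whole interactive session above as a single script. The sketch below repeats exactly the steps shown (same Keras 2.x np_utils API, same hyperparameters); the file name mnist_mlp.py is just my choice:

# mnist_mlp.py - train an MLP on MNIST, same steps as the session above
import time
import numpy as np
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense

np.random.seed(10)  # fixed seed for reproducible weight initialization

# Load MNIST and flatten each 28x28 image into a 784-element float vector
(x_train_image, y_train_label), (x_test_image, y_test_label) = mnist.load_data()
x_train_normalize = x_train_image.reshape(60000, 784).astype('float32') / 255
x_test_normalize = x_test_image.reshape(10000, 784).astype('float32') / 255

# One-hot encode the digit labels 0-9
y_train_onehot = np_utils.to_categorical(y_train_label)
y_test_onehot = np_utils.to_categorical(y_test_label)

# 784 -> 256 (ReLU) -> 10 (softmax) multilayer perceptron
model = Sequential()
model.add(Dense(units=256, input_dim=784, kernel_initializer='normal', activation='relu'))
model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train with a 20% validation split, timing the 10 epochs
start = time.time()
train_history = model.fit(x=x_train_normalize, y=y_train_onehot,
                          validation_split=0.2, epochs=10, batch_size=200, verbose=2)
print('Training took %.1f seconds' % (time.time() - start))

# Evaluate on the 10000-image test set
scores = model.evaluate(x_test_normalize, y_test_onehot)
print('Accuracy=', scores[1])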

References:

INSTALLING KERAS ON RASPBERRY PI 3
Installing TensorFlow on Raspberry Pi 2/3 to Set Up a Deep Learning Environment
