Three Ways to Save and Restore Models in TensorFlow 2.0
Method 1: Save only the model's weights and biases
This approach does not save the network architecture itself; it saves only the weights and biases. Before restoring, you therefore have to recreate a model with exactly the same architecture as the original so that the dimensions of the weights and biases match what was saved.
The save_weights and load_weights methods of the tf.keras.Model class are used here; the argument descriptions below are taken directly from the official documentation.
save_weights( filepath, overwrite=True, save_format=None )
Arguments:
filepath: String, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the '.h5' suffix causes weights to be saved in HDF5 format.
overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
save_format: Either 'tf' or 'h5'. A filepath ending in '.h5' or '.keras' will default to HDF5 if save_format is None. Otherwise None defaults to 'tf'.
load_weights( filepath, by_name=False )
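For reference, here is a minimal sketch of how the filepath suffix and save_format interact; the paths below are placeholders of my own, and model is assumed to be an already-built tf.keras.Model.
# TensorFlow checkpoint format: the path is used as a prefix and several files are created
model.save_weights('./ckpt/my_weights')      # save_format=None defaults to 'tf' here
# HDF5 format: the '.h5' suffix (or an explicit save_format='h5') writes a single file
model.save_weights('./ckpt/my_weights.h5')
# Later, restore into a model with exactly the same architecture
model.load_weights('./ckpt/my_weights')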
Example 1:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, optimizers

# step1: load the training and test sets
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# step2: create the model
def create_model():
    return tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

model = create_model()

# step3: compile the model (choose the optimizer, loss function, etc.)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# step4: train the model for one epoch
model.fit(x=x_train, y=y_train, epochs=1)

# step5: evaluate the model
loss, acc = model.evaluate(x_test, y_test)
print("train model, accuracy:{:5.2f}%".format(100 * acc))

# step6: save the model's weights and biases
model.save_weights('./save_weights/my_save_weights')

# step7: delete the model
del model

# step8: recreate the model
model = create_model()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# step9: restore the weights
model.load_weights('./save_weights/my_save_weights')

# step10: evaluate the restored model
loss, acc = model.evaluate(x_test, y_test)
print("Restored model, accuracy:{:5.2f}%".format(100 * acc))
train model, accuracy:96.55%
Restored model, accuracy:96.55%
As you can see, once the weights and biases have been restored, the model reaches the same accuracy on the test set as it did before saving.
Method 2: Save the entire model directly
This approach saves the network architecture, the weights, and the optimizer state all together, so there is no need to recreate the network when restoring.
The save method of the tf.keras.Model class and the tf.keras.models.load_model function are used here.
save( filepath, overwrite=True, include_optimizer=True, save_format=None )
Arguments:
filepath: String, path to SavedModel or H5 file to save the model.
overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
include_optimizer: If True, save optimizer's state together.
save_format: Either 'tf' or 'h5', indicating whether to save the model to Tensorflow SavedModel or HDF5. The default is currently 'h5', but will switch to 'tf' in TensorFlow 2.0. The 'tf' option is currently disabled (use tf.keras.experimental.export_saved_model instead).
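As a small illustration of these arguments (the file names are placeholders of my own, and model is assumed to be an already-compiled tf.keras.Model):
# Save everything, including the optimizer state, to a single HDF5 file
model.save('my_model.h5')
# Drop the optimizer state if the model is only needed for inference
model.save('my_model_no_opt.h5', include_optimizer=False)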
Example 2:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, optimizers

# step1: load the training and test sets
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# step2: create the model
def create_model():
    return tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

model = create_model()

# step3: compile the model (choose the optimizer, loss function, etc.)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# step4: train the model for one epoch
model.fit(x=x_train, y=y_train, epochs=1)

# step5: evaluate the model
loss, acc = model.evaluate(x_test, y_test)
print("train model, accuracy:{:5.2f}%".format(100 * acc))

# step6: save the entire model
model.save('my_model.h5')  # creates an HDF5 file 'my_model.h5'

# step7: delete the model
del model  # deletes the existing model

# step8: restore the model
# returns a compiled model identical to the previous one
restored_model = tf.keras.models.load_model('my_model.h5')

# step9: evaluate the restored model
loss, acc = restored_model.evaluate(x_test, y_test)
print("Restored model, accuracy:{:5.2f}%".format(100 * acc))
train model, accuracy:96.94%
Restored model, accuracy:96.94%
Method 3: Save the model during training with tf.keras.callbacks.ModelCheckpoint
This callback inherits from tf.keras.callbacks.Callback and is normally used together with model.fit, as in the sketch below.
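The original article gives no full example for this method, so the following is only a minimal sketch in the spirit of the two examples above; the checkpoint path and the callback arguments (save_weights_only, verbose, the number of epochs) are my own choices, not from the source.
import tensorflow as tf

# Load the data, as in the two examples above
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def create_model():
    return tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

model = create_model()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Save the weights at the end of every epoch during training
checkpoint_path = './training_ckpt/cp-{epoch:04d}.ckpt'
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1)
model.fit(x=x_train, y=y_train, epochs=3, callbacks=[cp_callback])

# Restore the most recent checkpoint into a freshly created model
latest = tf.train.latest_checkpoint('./training_ckpt')
restored_model = create_model()
restored_model.compile(optimizer='adam',
                       loss='sparse_categorical_crossentropy',
                       metrics=['accuracy'])
restored_model.load_weights(latest)
loss, acc = restored_model.evaluate(x_test, y_test)
print("Restored model, accuracy:{:5.2f}%".format(100 * acc))
If you only want to keep the best-performing checkpoint, ModelCheckpoint also accepts save_best_only=True together with a monitor argument such as 'val_loss' when validation data is passed to model.fit.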
That concludes this article on the three ways to save and restore models in TensorFlow 2.0. I hope it gives you a useful reference, and please continue to support 腳本之家.