Keras: Using the Lambda Layer, Explained
Without further ado, let's go straight to the code.
```python
import tensorflow as tf
from tensorflow.python.keras.models import Sequential, Model
from tensorflow.python.keras.layers import (Dense, Flatten, Conv2D, MaxPool2D,
                                            Dropout, Conv2DTranspose, Lambda,
                                            Input, Reshape, Add, Multiply)
from tensorflow.python.keras.optimizers import Adam

# Input shape was not defined in the original snippet; any H x W x 3 works
inputs_shape = (256, 256, 3)

def deconv(x):
    # Nearest-neighbour upsampling by a factor of 2; wrapped in a Lambda layer below
    height = x.get_shape()[1].value
    width = x.get_shape()[2].value
    new_height = height * 2
    new_width = width * 2
    x_resized = tf.image.resize_images(x, [new_height, new_width],
                                       tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    return x_resized

def Generator(scope='generator'):
    imgs_noise = Input(shape=inputs_shape)
    x = Conv2D(filters=32, kernel_size=(9, 9), strides=(1, 1), padding='same', activation='relu')(imgs_noise)
    x = Conv2D(filters=64, kernel_size=(3, 3), strides=(2, 2), padding='same', activation='relu')(x)
    x = Conv2D(filters=128, kernel_size=(3, 3), strides=(2, 2), padding='same', activation='relu')(x)
    # Three residual blocks
    x1 = Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(x)
    x1 = Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(x1)
    x2 = Add()([x1, x])
    x3 = Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(x2)
    x3 = Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(x3)
    x4 = Add()([x3, x2])
    x5 = Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(x4)
    x5 = Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(x5)
    x6 = Add()([x5, x4])
    x = MaxPool2D(pool_size=(2, 2))(x6)
    # Upsample back to the input resolution with the Lambda-wrapped deconv
    x = Lambda(deconv)(x)
    x = Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(x)
    x = Lambda(deconv)(x)
    x = Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(x)
    x = Lambda(deconv)(x)
    x = Conv2D(filters=3, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='tanh')(x)
    # Map the tanh output from [-1, 1] to [0, 255]
    x = Lambda(lambda x: x + 1)(x)
    y = Lambda(lambda x: x * 127.5)(x)
    model = Model(inputs=imgs_noise, outputs=y)
    model.summary()
    return model

my_generator = Generator()
my_generator.compile(loss='binary_crossentropy', optimizer=Adam(0.7, decay=1e-3), metrics=['accuracy'])
```
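The two trailing Lambda layers rescale the generator's tanh output from [-1, 1] to pixel values in [0, 255]. That trick can be checked in isolation with a minimal sketch (written against `tf.keras`, with a toy input shape assumed for illustration):

```python
import numpy as np
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

inp = Input(shape=(4,))
x = Lambda(lambda t: t + 1)(inp)      # shift [-1, 1] -> [0, 2]
out = Lambda(lambda t: t * 127.5)(x)  # scale [0, 2] -> [0, 255]
model = Model(inp, out)

# A tanh output of 0 should map to mid-grey, 127.5
y = model.predict(np.zeros((1, 4), dtype='float32'), verbose=0)
```

Any element-wise TensorFlow expression can be dropped into the graph this way, which is exactly what the generator above does with `deconv`.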
Supplementary note: saving a Keras model that contains a custom Lambda layer, and how to work around the error
1. In many applications, Keras's built-in layers are not enough and some layers have to be implemented through Lambda. In that case only the model's weights can be saved; `model.save` cannot serialize the model, and saving raises:
TypeError: can't pickle _thread.RLock objects
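The error comes from Keras trying to serialize the Python function captured by the Lambda layer, which drags in unpicklable objects such as thread locks. The standard workaround, and the one the conversion script below relies on, is to save only the weights and rebuild the architecture in code before loading them. A minimal sketch (toy architecture and a hypothetical file name, for illustration):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

def build_model():
    # The Lambda function itself is not serialized with the weights,
    # so the same architecture must be rebuilt in code
    inp = Input(shape=(8,))
    x = Dense(4, activation='relu')(inp)
    out = Lambda(lambda t: t * 2.0)(x)
    return Model(inp, out)

model = build_model()
model.save_weights('lambda_model.weights.h5')  # weights alone save fine

restored = build_model()                       # rebuild, then load weights
restored.load_weights('lambda_model.weights.h5')
```

Both models now compute identical outputs; only the weight file, not the Lambda function, ever touches disk.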

2. Solution: for ease of later deployment, convert the model to a TensorFlow PB (frozen graph) and deploy that instead.
```python
from keras.models import Model
from keras.layers import Input, Dense
import tensorflow as tf
import os, sys
from keras import backend as K
from tensorflow.python.framework import graph_util, graph_io

def h5_to_pb(h5_weight_path, output_dir, out_prefix="output_", log_tensorboard=True):
    if not os.path.exists(output_dir):
        os.mkdir(output_dir)
    # Rebuild the architecture in code, then load only the weights
    h5_model = build_model()
    h5_model.load_weights(h5_weight_path)
    out_nodes = []
    for i in range(len(h5_model.outputs)):
        out_nodes.append(out_prefix + str(i + 1))
        # Give each output tensor a stable, predictable node name
        tf.identity(h5_model.outputs[i], out_prefix + str(i + 1))
    model_name = os.path.splitext(os.path.split(h5_weight_path)[-1])[0] + '.pb'
    sess = K.get_session()
    init_graph = sess.graph.as_graph_def()
    # Freeze variables into constants and write out the PB file
    main_graph = graph_util.convert_variables_to_constants(sess, init_graph, out_nodes)
    graph_io.write_graph(main_graph, output_dir, name=model_name, as_text=False)
    if log_tensorboard:
        from tensorflow.python.tools import import_pb_to_tensorboard
        import_pb_to_tensorboard.import_to_tensorboard(os.path.join(output_dir, model_name), output_dir)

def build_model():
    inputs = Input(shape=(784,), name='input_img')
    x = Dense(64, activation='relu')(inputs)
    x = Dense(64, activation='relu')(x)
    y = Dense(10, activation='softmax')(x)
    h5_model = Model(inputs=inputs, outputs=y)
    return h5_model

if __name__ == '__main__':
    if len(sys.argv) == 3:
        # usage: python3 h5_to_pb.py h5_weight_path output_dir
        h5_to_pb(h5_weight_path=sys.argv[1], output_dir=sys.argv[2])
```
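Once the PB file exists, the deployment side imports it and runs inference by tensor name. The sketch below is self-contained for illustration: instead of reading `model.pb` from disk (which would use `GraphDef.ParseFromString` on the file bytes), it freezes a tiny stand-in graph in memory, using the same node names (`input_img`, `output_1`) that `build_model` and `out_prefix` above would produce. It assumes the TF1-style `tf.compat.v1` API:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Build and freeze a tiny stand-in graph with the node names
# the h5_to_pb script would produce ('input_img', 'output_1')
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=(None, 784), name='input_img')
    w = tf.Variable(tf.ones((784, 10)))
    tf.identity(tf.matmul(x, w), name='output_1')
    sess.run(tf.global_variables_initializer())
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['output_1'])

# Deployment side: import the frozen GraphDef and run by tensor name
tf.reset_default_graph()
with tf.Session() as sess:
    tf.import_graph_def(frozen, name='')
    inp = sess.graph.get_tensor_by_name('input_img:0')
    out = sess.graph.get_tensor_by_name('output_1:0')
    result = sess.run(out, feed_dict={inp: np.ones((1, 784), np.float32)})
```

Because the Lambda layer was frozen into plain graph ops, the consumer of the PB file never needs the original Python function at all.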
That is everything in this walkthrough of the Keras Lambda layer; I hope it serves as a useful reference.
相關(guān)文章
Python中的魔法方法__repr__和__str__用法實(shí)例詳解
這篇文章主要介紹了Python中的__repr__和__str__方法,它們分別用于提供對(duì)象的官方字符串表示和用戶友好的字符串表示,通過重寫這兩個(gè)方法,可以自定義對(duì)象的打印輸出,文中通過代碼將用法介紹的非常詳細(xì),需要的朋友可以參考下2025-02-02
python 實(shí)現(xiàn)二叉搜索樹的四種方法
本文主要介紹了python 實(shí)現(xiàn)二叉搜索樹的四種方法,文中通過示例代碼介紹的非常詳細(xì),對(duì)大家的學(xué)習(xí)或者工作具有一定的參考學(xué)習(xí)價(jià)值,需要的朋友們下面隨著小編來一起學(xué)習(xí)學(xué)習(xí)吧2023-04-04
詳解如何使用Python在PDF文檔中創(chuàng)建動(dòng)作
PDF格式因其跨平臺(tái)兼容性和豐富的功能集而成為許多行業(yè)中的首選文件格式,其中,PDF中的動(dòng)作(Action) 功能尤為突出,本文將介紹如何使用Python在PDF文檔中創(chuàng)建動(dòng)作,需要的朋友可以參考下2024-09-09
Python tabulate結(jié)合loguru打印出美觀方便的日志記錄
在開發(fā)過程中經(jīng)常碰到在本地環(huán)境無法完成聯(lián)調(diào)測(cè)試的情況,必須到統(tǒng)一的聯(lián)機(jī)環(huán)境對(duì)接其他系統(tǒng)測(cè)試。往往是出現(xiàn)了BUG難以查找數(shù)據(jù)記錄及時(shí)定位到錯(cuò)誤出現(xiàn)的位置。本文將利用tabulate結(jié)合loguru實(shí)現(xiàn)打印出美觀方便的日志記錄,需要的可以參考一下2022-10-10
使用Pytorch Geometric進(jìn)行鏈接預(yù)測(cè)的實(shí)現(xiàn)代碼
PyTorch Geometric (PyG)是構(gòu)建圖神經(jīng)網(wǎng)絡(luò)模型和實(shí)驗(yàn)各種圖卷積的主要工具,在本文中我們將通過鏈接預(yù)測(cè)來對(duì)其進(jìn)行介紹,文中有詳細(xì)的代碼示例供大家參考,需要的朋友可以參考下2023-10-10
Python爬蟲實(shí)現(xiàn)爬取百度百科詞條功能實(shí)例
這篇文章主要介紹了Python爬蟲實(shí)現(xiàn)爬取百度百科詞條功能,結(jié)合完整實(shí)例形式分析了Python爬蟲的基本原理及爬取百度百科詞條的步驟、網(wǎng)頁下載、解析、數(shù)據(jù)輸出等相關(guān)操作技巧,需要的朋友可以參考下2019-04-04

