A BP Neural Network in Python, with an XOR Implementation Walkthrough
Updated: 2019-09-30 10:05:32  Author: 沙克的世界
This article walks through a BP (backpropagation) neural network implemented in Python and trains it on the XOR problem. The example code is explained in detail and should be a useful reference for study or work.
BP神經網(wǎng)絡是最簡單的神經網(wǎng)絡模型了,三層能夠模擬非線性函數(shù)效果。

Difficulties:
- How should the parameters be initialized? (a hedged sketch follows this list)
- How many hidden-layer nodes are needed?
- How many iterations are needed, and how can convergence be sped up?
- How can a global optimum be reached?
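On the first point, the code below simply draws weights uniformly from [-1, 1). A common alternative, shown here only as a minimal sketch and not used in the original code, is Xavier/Glorot initialization, which scales the uniform range by the layer's fan-in and fan-out to keep activation variance roughly constant across layers:

import numpy

def xavier_uniform(fan_in, fan_out):
    # Glorot/Xavier uniform: limit = sqrt(6 / (fan_in + fan_out))
    limit = numpy.sqrt(6.0 / (fan_in + fan_out))
    return numpy.random.uniform(-limit, limit, size=(fan_in, fan_out))

# e.g. for the 2-2-1 XOR network used below:
w1 = xavier_uniform(2, 2)  # input -> hidden
w2 = xavier_uniform(2, 1)  # hidden -> output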
'''
neural networks
created on 2019.9.24
author: vince
'''
import math
import logging
import numpy
import random
import matplotlib.pyplot as plt

'''
neural network
'''
class NeuralNetwork:
    def __init__(self, layer_nums, iter_num=10000, batch_size=1):
        self.__ILI = 0  # input layer index
        self.__HLI = 1  # hidden layer index
        self.__OLI = 2  # output layer index
        self.__TLN = 3  # total layer number
        if len(layer_nums) != self.__TLN:
            raise Exception("layer_nums length must be 3")
        self.__layer_nums = layer_nums  # array [layer0_num, layer1_num ... layerN_num]
        self.__iter_num = iter_num
        self.__batch_size = batch_size
    def train(self, X, Y):
        X = numpy.array(X)
        Y = numpy.array(Y)
        self.L = []  # per-iteration mean loss, used for the convergence plot
        # initialize parameters: weights and biases uniform in [-1, 1)
        self.__weight = []
        self.__bias = []
        self.__step_len = []
        for layer_index in range(1, self.__TLN):
            self.__weight.append(numpy.random.rand(self.__layer_nums[layer_index - 1], self.__layer_nums[layer_index]) * 2 - 1.0)
            self.__bias.append(numpy.random.rand(self.__layer_nums[layer_index]) * 2 - 1.0)
            self.__step_len.append(0.3)
        logging.info("bias:%s" % (self.__bias))
        logging.info("weight:%s" % (self.__weight))
        for iter_index in range(self.__iter_num):
            # stochastic training: pick one random sample per iteration
            sample_index = random.randint(0, len(X) - 1)
            logging.debug("-----round:%s, select sample %s-----" % (iter_index, sample_index))
            output = self.forward_pass(X[sample_index])
            # output-layer signal: (y - y_hat) * sigmoid'(y_hat)
            g = (-output[2] + Y[sample_index]) * self.activation_drive(output[2])
            logging.debug("g:%s" % (g))
            for j in range(len(output[1])):
                self.__weight[1][j] += self.__step_len[1] * g * output[1][j]
            self.__bias[1] -= self.__step_len[1] * g
            # hidden-layer signal, backpropagated through weight[1]
            e = []
            for i in range(self.__layer_nums[self.__HLI]):
                e.append(numpy.dot(g, self.__weight[1][i]) * self.activation_drive(output[1][i]))
            e = numpy.array(e)
            logging.debug("e:%s" % (e))
            for j in range(len(output[0])):
                self.__weight[0][j] += self.__step_len[0] * e * output[0][j]
            self.__bias[0] -= self.__step_len[0] * e
            # track mean squared-error loss over the whole training set
            l = 0
            for i in range(len(X)):
                predictions = self.forward_pass(X[i])[2]
                l += 0.5 * numpy.sum((predictions - Y[i]) ** 2)
            l /= len(X)
            self.L.append(l)
            logging.debug("bias:%s" % (self.__bias))
            logging.debug("weight:%s" % (self.__weight))
            logging.debug("loss:%s" % (l))
        logging.info("bias:%s" % (self.__bias))
        logging.info("weight:%s" % (self.__weight))
        logging.info("L:%s" % (self.L))
    def activation(self, z):
        # sigmoid
        return 1.0 / (1.0 + numpy.exp(-z))

    def activation_drive(self, y):
        # derivative of the sigmoid, written in terms of its output:
        # if y = sigmoid(z), then sigmoid'(z) = y * (1 - y)
        return y * (1.0 - y)

    def forward_pass(self, x):
        data = numpy.copy(x)
        result = [data]
        for layer_index in range(self.__TLN - 1):
            # note the convention net = w.x - b, hence the minus sign
            data = self.activation(numpy.dot(data, self.__weight[layer_index]) - self.__bias[layer_index])
            result.append(data)
        # the per-layer activations have different lengths (e.g. 2, 2, 1), so
        # return a plain list; numpy.array() on ragged rows raises an error
        # on recent NumPy versions
        return result

    def predict(self, x):
        return self.forward_pass(x)[self.__OLI]
def main():
    logging.basicConfig(level=logging.INFO,
            format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
            datefmt='%a, %d %b %Y %H:%M:%S')
    logging.info("training begin.")
    nn = NeuralNetwork([2, 2, 1])
    # XOR truth table
    X = numpy.array([[0, 0], [1, 0], [1, 1], [0, 1]])
    Y = numpy.array([0, 1, 0, 1])
    nn.train(X, Y)
    logging.info("training end. predict begin.")
    for x in X:
        print(x, nn.predict(x))
    plt.plot(nn.L)  # training loss curve
    plt.show()

if __name__ == "__main__":
    main()
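As a quick sanity check (my addition, not part of the original listing; it assumes the lines below are appended inside main() after nn.train(X, Y)), the raw sigmoid outputs can be thresholded at 0.5 to recover hard XOR labels:

    # threshold the sigmoid output at 0.5 to get a hard 0/1 XOR label
    for x, y in zip(X, Y):
        p = nn.predict(x)[0]
        print("input=%s expected=%s raw=%.4f label=%d" % (x, y, p, int(p > 0.5)))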
Convergence result: [figure omitted: the training loss curve over iterations, as produced by plt.plot(nn.L)]
That is all for this article. I hope it helps with your study, and please continue to support 脚本之家.