Multivariate Multi-Step Time-Series Load Forecasting with an LSTM in PyTorch
I. Introduction
In the two previous articles, "PyTorch LSTM for Time-Series Forecasting (Load Forecasting)" and "PyTorch LSTM for Multivariate Time-Series Forecasting (Load Forecasting)", we used LSTMs to perform univariate single-step and multivariate single-step time-series forecasting, respectively.
This article focuses on building an LSTM in PyTorch for multivariate multi-step time-series forecasting.
Articles in this series:
Building a Bidirectional LSTM in PyTorch for Time-Series Load Forecasting
Building an LSTM in PyTorch for Multivariate Time-Series Load Forecasting
PyTorch Deep Learning: LSTM from the input Tensor to the Linear Output
Building an LSTM in PyTorch for Time-Series Load Forecasting
II. Data Processing
The dataset consists of electricity load data for a region over a period of time; besides the load itself, it also contains environmental variables such as temperature and humidity.
In this article, we use the load of the previous 24 time steps, together with the environmental variables at those steps, to predict the load of the next 4 time steps (the horizon is adjustable).
import os
import numpy as np
import pandas as pd
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader


def load_data(file_name):
    # Read the CSV, fill missing values and min-max normalize the load column.
    global MAX, MIN
    df = pd.read_csv(os.path.dirname(os.getcwd()) + '/data/new_data/' + file_name, encoding='gbk')
    columns = df.columns
    df.fillna(df.mean(), inplace=True)
    MAX = np.max(df[columns[1]])
    MIN = np.min(df[columns[1]])
    df[columns[1]] = (df[columns[1]] - MIN) / (MAX - MIN)
    return df
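As a quick sanity check, load_data can be called on its own to confirm that the load column has been scaled to [0, 1]. The file name below is only a placeholder for your own CSV in data/new_data/:

df = load_data('load.csv')  # hypothetical file name
load_col = df[df.columns[1]]
print(load_col.min(), load_col.max())  # expected: 0.0 1.0 after min-max scaling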
class MyDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __getitem__(self, item):
        return self.data[item]

    def __len__(self):
        return len(self.data)
def nn_seq(file_name, B, num):
    print('data processing...')
    data = load_data(file_name)
    load = data[data.columns[1]]
    load = load.tolist()
    data = data.values.tolist()
    seq = []
    # Slide a 24-step window over the data with stride num: the window is the
    # input sequence, the following num load values are the label.
    for i in range(0, len(data) - 24 - num, num):
        train_seq = []
        train_label = []
        for j in range(i, i + 24):
            # One time step: normalized load plus the 6 environmental variables.
            x = [load[j]]
            for c in range(2, 8):
                x.append(data[j][c])
            train_seq.append(x)
        for j in range(i + 24, i + 24 + num):
            train_label.append(load[j])
        train_seq = torch.FloatTensor(train_seq)
        train_label = torch.FloatTensor(train_label).view(-1)
        seq.append((train_seq, train_label))
    # 70% / 30% split, each part truncated to a multiple of the batch size B.
    Dtr = seq[0:int(len(seq) * 0.7)]
    Dte = seq[int(len(seq) * 0.7):len(seq)]
    train_len = int(len(Dtr) / B) * B
    test_len = int(len(Dte) / B) * B
    Dtr, Dte = Dtr[:train_len], Dte[:test_len]
    train = MyDataset(Dtr)
    test = MyDataset(Dte)
    Dtr = DataLoader(dataset=train, batch_size=B, shuffle=False, num_workers=0)
    Dte = DataLoader(dataset=test, batch_size=B, shuffle=False, num_workers=0)
    return Dtr, Dte
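A minimal usage sketch, assuming a batch size of 30 and a 4-step horizon (both values and the file name are illustrative, not fixed by the article):

B, num = 30, 4  # illustrative batch size and forecast horizon
Dtr, Dte = nn_seq('load.csv', B=B, num=num)  # hypothetical file name
# Each batch: x has shape (B, 24, 7), y has shape (B, num).
for x, y in Dtr:
    print(x.shape, y.shape)  # torch.Size([30, 24, 7]) torch.Size([30, 4])
    break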
Here num is the forecast horizon; for example, num=4 means predicting the load of the next 4 time steps.
Printing an arbitrary sample:
(tensor([[0.5830, 1.0000, 0.9091, 0.6957, 0.8333, 0.4884, 0.5122],
[0.6215, 1.0000, 0.9091, 0.7391, 0.8333, 0.4884, 0.5122],
[0.5954, 1.0000, 0.9091, 0.7826, 0.8333, 0.4884, 0.5122],
[0.5391, 1.0000, 0.9091, 0.8261, 0.8333, 0.4884, 0.5122],
[0.5351, 1.0000, 0.9091, 0.8696, 0.8333, 0.4884, 0.5122],
[0.5169, 1.0000, 0.9091, 0.9130, 0.8333, 0.4884, 0.5122],
[0.4694, 1.0000, 0.9091, 0.9565, 0.8333, 0.4884, 0.5122],
[0.4489, 1.0000, 0.9091, 1.0000, 0.8333, 0.4884, 0.5122],
[0.4885, 1.0000, 0.9091, 0.0000, 1.0000, 0.3256, 0.3902],
[0.4612, 1.0000, 0.9091, 0.0435, 1.0000, 0.3256, 0.3902],
[0.4229, 1.0000, 0.9091, 0.0870, 1.0000, 0.3256, 0.3902],
[0.4173, 1.0000, 0.9091, 0.1304, 1.0000, 0.3256, 0.3902],
[0.4503, 1.0000, 0.9091, 0.1739, 1.0000, 0.3256, 0.3902],
[0.4502, 1.0000, 0.9091, 0.2174, 1.0000, 0.3256, 0.3902],
[0.5426, 1.0000, 0.9091, 0.2609, 1.0000, 0.3256, 0.3902],
[0.5579, 1.0000, 0.9091, 0.3043, 1.0000, 0.3256, 0.3902],
[0.6035, 1.0000, 0.9091, 0.3478, 1.0000, 0.3256, 0.3902],
[0.6540, 1.0000, 0.9091, 0.3913, 1.0000, 0.3256, 0.3902],
[0.6181, 1.0000, 0.9091, 0.4348, 1.0000, 0.3256, 0.3902],
[0.6334, 1.0000, 0.9091, 0.4783, 1.0000, 0.3256, 0.3902],
[0.6297, 1.0000, 0.9091, 0.5217, 1.0000, 0.3256, 0.3902],
[0.5610, 1.0000, 0.9091, 0.5652, 1.0000, 0.3256, 0.3902],
[0.5957, 1.0000, 0.9091, 0.6087, 1.0000, 0.3256, 0.3902],
[0.6427, 1.0000, 0.9091, 0.6522, 1.0000, 0.3256, 0.3902]]), tensor([0.6360, 0.6996, 0.6889, 0.6434]))
Each sample has the form (X, Y). X has 24 rows, one per time step, each containing the load value and the environmental variables at that step; Y holds the four load values to be predicted. Note that this makes input_size=7 and output_size=4.
III. LSTM Model
We use the model from "Understanding LSTM Inputs and Outputs in PyTorch (from the input Tensor to the Linear Output)":
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size, batch_size):
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.output_size = output_size
        self.num_directions = 1  # unidirectional LSTM
        self.batch_size = batch_size
        self.lstm = nn.LSTM(self.input_size, self.hidden_size, self.num_layers, batch_first=True)
        self.linear = nn.Linear(self.hidden_size, self.output_size)

    def forward(self, input_seq):
        # Randomly initialized hidden and cell states; `device` is assumed to be defined globally.
        h_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size).to(device)
        c_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size).to(device)
        seq_len = input_seq.shape[1]
        # input(batch_size, seq_len, input_size)
        input_seq = input_seq.view(self.batch_size, seq_len, self.input_size)
        # output(batch_size, seq_len, num_directions * hidden_size)
        output, _ = self.lstm(input_seq, (h_0, c_0))
        output = output.contiguous().view(self.batch_size * seq_len, self.hidden_size)
        pred = self.linear(output)  # (batch_size * seq_len, output_size)
        pred = pred.view(self.batch_size, seq_len, -1)
        pred = pred[:, -1, :]  # keep only the last time step: (batch_size, output_size)
        return pred
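With input_size=7 and output_size=4 as discussed above, the model might be instantiated as follows; hidden_size=64, num_layers=1 and the device choice are illustrative assumptions, not values prescribed by the article:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# B must match the DataLoader batch size, since forward() uses self.batch_size.
model = LSTM(input_size=7, hidden_size=64, num_layers=1, output_size=4, batch_size=B).to(device)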
IV. Training and Prediction
The training and prediction code is almost the same as in the previous articles; the only things to watch are the values of input_size and output_size.
After training for 100 epochs and predicting the next four load values, the MAPE comes to 7.53%.
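For reference, here is a minimal sketch of what the training and evaluation loop might look like, assuming MSELoss with Adam (the learning rate and the exact evaluation code are assumptions; the original implementation is in the linked repository). MAX and MIN from load_data are used to undo the normalization before computing MAPE:

loss_function = nn.MSELoss().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Training: direct multi-step forecasting, 100 epochs.
for epoch in range(100):
    for x, y in Dtr:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        y_pred = model(x)  # (B, 4)
        loss = loss_function(y_pred, y)
        loss.backward()
        optimizer.step()
    print('epoch', epoch, 'loss', loss.item())

# Evaluation: undo min-max scaling, then compute MAPE over all predicted steps.
model.eval()
preds, trues = [], []
with torch.no_grad():
    for x, y in Dte:
        y_pred = model(x.to(device)).cpu().numpy()
        preds.append(y_pred * (MAX - MIN) + MIN)
        trues.append(y.numpy() * (MAX - MIN) + MIN)
preds, trues = np.concatenate(preds), np.concatenate(trues)
print('MAPE: %.2f%%' % (np.mean(np.abs((trues - preds) / trues)) * 100))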

V. Source Code and Data
The source code and data are available on GitHub: LSTM-Load-Forecasting
This concludes our walkthrough of multivariate multi-step time-series load forecasting with an LSTM in PyTorch.