Using h5py and Code for Packaging Datasets
1. A brief introduction to h5py
An h5py file is a container that holds two kinds of objects: datasets and groups. A dataset is an array-like collection of data, much like a NumPy array. A group is a folder-like container that works like a Python dictionary, with keys and values: the keys are the names of the group's members, and the values are the member objects themselves (groups or datasets). A group can hold datasets as well as other groups. The following sections show how to create groups and datasets.
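Before walking through those steps, here is a minimal sketch of the group/dataset relationship (the names "groups_demo.hdf5", "bar", "car" and "dset_in_bar" are made up for this illustration):

import h5py
f = h5py.File("groups_demo.hdf5", "w")             # the file object itself acts as the root group "/"
g1 = f.create_group("bar")                         # a group, used like a folder
g1.create_dataset("dset_in_bar", data=[1, 2, 3])   # a dataset stored inside the group
g2 = g1.create_group("car")                        # groups can be nested inside other groups
for key in g1.keys():                              # keys are member names, values are the member objects
    print(key, g1[key])
f.close()

Iterating over the group prints its two members: the dataset dset_in_bar and the subgroup car.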
1.1 Creating an h5py file
import h5py
# to read an existing file instead of creating one, replace "w" with "r"
f=h5py.File("myh5py.hdf5","w")
This creates a file named myh5py.hdf5 in the current directory.
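The file object stays open until f.close() is called; wrapping it in a with statement closes it automatically. As a minimal sketch of reading the file back (assuming myh5py.hdf5 already exists from the step above):

import h5py
# "r" opens an existing file read-only; "w" would overwrite it
with h5py.File("myh5py.hdf5", "r") as f:
    print(list(f.keys()))   # names of the top-level groups/datasets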
1.2 Creating a dataset
import h5py
f=h5py.File("myh5py.hdf5","w")
# "dset1" is the dataset's name, (20,) is its shape, and 'i' means its elements are integers
d1=f.create_dataset("dset1", (20,), 'i')
for key in f.keys():
    print(key)
    print(f[key].name)
    print(f[key].shape)
    print(f[key][()])  # [()] reads the whole dataset; the old .value attribute was removed in h5py 3.x
Output:
dset1
/dset1
(20,)
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
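A dataset created this way is pre-allocated and filled with zeros, as the output shows; values can be written into it later with NumPy-style indexing. A minimal sketch, reopening the file written above in append mode:

import h5py
import numpy as np
with h5py.File("myh5py.hdf5", "a") as f:   # "a" = read/write on an existing file
    d1 = f["dset1"]
    d1[...] = np.arange(20)                # fill the pre-allocated dataset
    print(d1[0:5])                         # read back a slice: [0 1 2 3 4]

A dataset can also be created directly from an existing NumPy array, as the next example shows.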
import h5py
import numpy as np
f=h5py.File("myh5py.hdf5","w")
a=np.arange(20)
d1=f.create_dataset("dset1",data=a)
for key in f.keys():
    print(f[key].name)
    print(f[key][()])  # again using [()] instead of the removed .value
Output:
/dset1
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19]
2. Packaging training and test sets with HDF5
The script below prepares HDF5 datasets from the DRIVE retinal-vessel database: it reads the original images, the manual ground-truth segmentations and the border masks, and writes each resulting array to its own HDF5 file through write_hdf5.
#============================================================
# This prepare the hdf5 datasets of the DRIVE database
#============================================================
import os
import h5py
import numpy as np
from PIL import Image
def write_hdf5(arr, outfile):
    with h5py.File(outfile, "w") as f:
        f.create_dataset("image", data=arr, dtype=arr.dtype)
#------------Path of the images --------------------------------------------------------------
#train
original_imgs_train = "./DRIVE/training/images/"
groundTruth_imgs_train = "./DRIVE/training/1st_manual/"
borderMasks_imgs_train = "./DRIVE/training/mask/"
#test
original_imgs_test = "./DRIVE/test/images/"
groundTruth_imgs_test = "./DRIVE/test/1st_manual/"
borderMasks_imgs_test = "./DRIVE/test/mask/"
#---------------------------------------------------------------------------------------------
Nimgs = 20
channels = 3
height = 584
width = 565
dataset_path = "./DRIVE_datasets_training_testing/"
def get_datasets(imgs_dir, groundTruth_dir, borderMasks_dir, train_test="null"):
    imgs = np.empty((Nimgs, height, width, channels))
    groundTruth = np.empty((Nimgs, height, width))
    border_masks = np.empty((Nimgs, height, width))
    for path, subdirs, files in os.walk(imgs_dir):  # list all files, directories in the path
        for i in range(len(files)):
            # original
            print("original image: " + files[i])
            img = Image.open(imgs_dir + files[i])
            imgs[i] = np.asarray(img)
            # corresponding ground truth
            groundTruth_name = files[i][0:2] + "_manual1.gif"
            print("ground truth name: " + groundTruth_name)
            g_truth = Image.open(groundTruth_dir + groundTruth_name)
            groundTruth[i] = np.asarray(g_truth)
            # corresponding border masks
            border_masks_name = ""
            if train_test == "train":
                border_masks_name = files[i][0:2] + "_training_mask.gif"
            elif train_test == "test":
                border_masks_name = files[i][0:2] + "_test_mask.gif"
            else:
                print("specify if train or test!!")
                exit()
            print("border masks name: " + border_masks_name)
            b_mask = Image.open(borderMasks_dir + border_masks_name)
            border_masks[i] = np.asarray(b_mask)
    print("imgs max: " + str(np.max(imgs)))
    print("imgs min: " + str(np.min(imgs)))
    assert(np.max(groundTruth) == 255 and np.max(border_masks) == 255)
    assert(np.min(groundTruth) == 0 and np.min(border_masks) == 0)
    print("ground truth and border masks are correctly within pixel value range 0-255 (black-white)")
    # reshaping for my standard tensors: (Nimgs, channels, height, width)
    imgs = np.transpose(imgs, (0, 3, 1, 2))
    assert(imgs.shape == (Nimgs, channels, height, width))
    groundTruth = np.reshape(groundTruth, (Nimgs, 1, height, width))
    border_masks = np.reshape(border_masks, (Nimgs, 1, height, width))
    assert(groundTruth.shape == (Nimgs, 1, height, width))
    assert(border_masks.shape == (Nimgs, 1, height, width))
    return imgs, groundTruth, border_masks
if not os.path.exists(dataset_path):
    os.makedirs(dataset_path)
#getting the training datasets
imgs_train, groundTruth_train, border_masks_train = get_datasets(original_imgs_train,groundTruth_imgs_train,borderMasks_imgs_train,"train")
print "saving train datasets"
write_hdf5(imgs_train, dataset_path + "DRIVE_dataset_imgs_train.hdf5")
write_hdf5(groundTruth_train, dataset_path + "DRIVE_dataset_groundTruth_train.hdf5")
write_hdf5(border_masks_train,dataset_path + "DRIVE_dataset_borderMasks_train.hdf5")
#getting the testing datasets
imgs_test, groundTruth_test, border_masks_test = get_datasets(original_imgs_test,groundTruth_imgs_test,borderMasks_imgs_test,"test")
print "saving test datasets"
write_hdf5(imgs_test,dataset_path + "DRIVE_dataset_imgs_test.hdf5")
write_hdf5(groundTruth_test, dataset_path + "DRIVE_dataset_groundTruth_test.hdf5")
write_hdf5(border_masks_test,dataset_path + "DRIVE_dataset_borderMasks_test.hdf5")
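To read the packed files back later (for example before training), a helper that mirrors write_hdf5 can be used. This is a minimal sketch; the name load_hdf5 is chosen here for illustration and it simply reads the "image" dataset written above:

def load_hdf5(infile):
    with h5py.File(infile, "r") as f:
        return f["image"][()]   # load the "image" dataset into a NumPy array

imgs_train = load_hdf5(dataset_path + "DRIVE_dataset_imgs_train.hdf5")
print(imgs_train.shape)         # expected: (20, 3, 584, 565)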
Iterating over all files under a directory: os.walk(dir)
for parent, dir_names, file_names in os.walk(parent_dir):
    for file_name in file_names:
        print(file_name)
parent: the path of the directory currently being visited
dir_names: the names of the subdirectories inside it
file_names: the names of the files inside it
That is all for this article on using h5py and packaging datasets; hopefully it serves as a useful reference.