Reproducing the ICCV 2017 Paper AOD-Net in Keras

2024-03-14 19:59
Tags: paper, net, reproduction, keras, 2017, iccv, aod

This post walks through a Keras reproduction of the ICCV 2017 paper AOD-Net. I hope it offers a useful reference to developers tackling the same problem.

 

Disclaimer: I am an atheist, and my creed is that "the neural-network black box is not actually black".


Local setup:

Python 3.5(.1)

Keras (any version)

TensorFlow

NVIDIA GPU + CUDA 8.0 + cuDNN (training time is short even on CPU; if the predict step feels too slow, use a GPU)

opencv-python

numpy


About the paper:

Li, Boyi, et al. "AOD-Net: All-in-One Dehazing Network." Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.

https://arxiv.org/abs/1707.06543

AOD-Net is the first end-to-end dehazing network: it does not depend on estimating any intermediate parameters. The model is built directly on the atmospheric scattering model, and the network can be embedded seamlessly into any deeper network. In my view, the paper's ICCV acceptance is owed first of all to this physical grounding; ICCV likes "atheism", and "my neural network is constructed from a physical model" is a line that can beat any number of dehazing papers. Second, the paper is the first to achieve end-to-end dehazing, with a model that is lightweight and embeddable. If you already have a working Python + Keras setup, dehazing an image or boosting its contrast becomes even more convenient than Meitu XiuXiu, and the results speak for themselves (although my own reproduction is only so-so).

Model:

Dehazing principle (the standard atmospheric scattering model):
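The figure from the original post is not reproduced here, but the two equations it illustrates (also quoted in the code comments below) are the atmospheric scattering model and AOD-Net's reformulation of it:

I(x) = J(x) * t(x) + A * (1 - t(x))
J(x) = K(x) * I(x) - K(x) + b

where J(x) is the scene radiance (the ideal clean image), I(x) is the observed hazy image, A is the global atmospheric light, and t(x) is the transmission matrix. Instead of estimating t(x) and A separately, AOD-Net folds them into a single module K(x) that the network learns end to end.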


 

Code:

Dataset download: https://sites.google.com/view/reside-dehaze-datasets/reside-standard?authuser=0

You will need a fairly strong VPN; the Tsinghua campus VPN could not get through, so I used ExpressVPN.

Jupyter notebook:

from keras.models import Sequential, Model, load_model
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Conv2DTranspose
from keras.layers import Activation, Dropout, Flatten, Dense, Lambda
from keras.layers import Multiply, Subtract    # needed by creat_AODNet below
from keras.regularizers import l2              # needed by creat_AODNet below
from keras.activations import relu
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard
from keras import backend as K
import keras
from keras_contrib.losses import DSSIMObjective
from sklearn.model_selection import train_test_split
from skimage import transform as tf
from skimage.measure import compare_ssim, compare_psnr
from scipy import ndimage
from scipy.misc import imsave    # requires an older SciPy; the misc image helpers were later removed
from pathlib import Path
import scipy
import skimage
import imageio
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as plt_img
import glob
import os
import re
import math
# import LRFinder

img_width, img_height = 640, 480
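Both the training generator and the test loop below call a get_image helper that the original post never defines. Here is a minimal sketch under assumptions: it reads each path with OpenCV, converts BGR to RGB, resizes to the 640x480 declared above, and stacks everything into a float32 batch, with no normalization (the display code later divides predictions by an ad-hoc constant, which suggests a raw 0-255 scale):

# Assumed helper, not shown in the original post: load a list of image
# paths into an (N, H, W, 3) float32 batch.
def get_image(paths):
    imgs = []
    for p in paths:
        img = cv2.imread(p)                              # BGR, uint8
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (img_width, img_height))   # cv2.resize takes (width, height)
        imgs.append(img.astype("float32"))
    return np.array(imgs)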
def train_generator(train_path, label_path, batch_size):
    L = len(train_path)
    # the outer loop just makes the generator infinite, keras needs that
    while True:
        batch_start = 0
        batch_end = batch_size
        while batch_start < L:
            limit = min(batch_end, L)
            X = get_image(train_path[batch_start:limit])
            # print(train_path[batch_start:limit])
            # Modify this to find similar labeled names in label directory
            Y = list(map(lambda x: re.sub(".*/", "", x), train_path[batch_start:limit]))
            Y = list(map(lambda x: re.sub("_.*.png", ".png", x), Y))
            Y = list(map(lambda x: re.sub("[hazy]", "", x), Y))
            Y = list(map(lambda x: re.sub("^", train_label_dir + '/', x), Y))
            Y = get_image(Y)
            batch_start += batch_size
            batch_end += batch_size
            yield (X, Y)
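To see what the three substitutions do, take a RESIDE-style hazy file name (the example name is assumed from the RESIDE naming convention) such as data/hazy/1385_1_0.90179.png: the first re.sub strips the directory, the second cuts everything between the first underscore and the extension, and the third deletes any stray h/a/z/y characters, so the label is read from train_label_dir + '/1385.png'. Note that the label_path argument is never used; the function relies on the global train_label_dir.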
def creat_AODNet(height, width, channel, l2_regularization=0.0001):
    # originally written as a method (self.heigt / self.width / self.channel);
    # turned into a free function so the notebook can call it directly
    l2_reg = l2_regularization
    inputs = Input(shape=(height, width, channel))
    conv1 = Conv2D(3, (1, 1), kernel_initializer='random_normal', activation='relu',
                   padding="same", kernel_regularizer=l2(l2_reg), name="conv1")(inputs)
    conv2 = Conv2D(3, (3, 3), kernel_initializer='random_normal', activation='relu',
                   padding="same", kernel_regularizer=l2(l2_reg), name="conv2")(conv1)
    concat1 = concatenate([conv1, conv2], axis=-1, name="concat1")
    conv3 = Conv2D(3, (5, 5), kernel_initializer='random_normal', activation='relu',
                   padding="same", kernel_regularizer=l2(l2_reg), name="conv3")(concat1)
    concat2 = concatenate([conv2, conv3], axis=-1, name="concat2")
    conv4 = Conv2D(3, (5, 5), kernel_initializer='random_normal', activation='relu',
                   padding="same", kernel_regularizer=l2(l2_reg), name="conv4")(concat2)
    concat3 = concatenate([conv1, conv2, conv3, conv4], axis=-1, name="concat3")
    K_x = Conv2D(3, (5, 5), kernel_initializer='random_normal', activation='relu',
                 padding="same", kernel_regularizer=l2(l2_reg), name="K_x")(concat3)
    # I(x) = J(x)*t(x) + A*(1 - t(x))
    # J(x) = K(x)*I(x) - K(x) + b
    # where J(x) is the scene radiance (i.e., the ideal "clean image"),
    #       I(x) is the observed hazy image,
    #       A    denotes the global atmospheric light,
    #       t(x) is the transmission matrix
    mul = Multiply(name="mul")([K_x, inputs])
    sub = Subtract(name="sub")([mul, K_x])
    add_b = Lambda(lambda x: 1 + x, name="add_b")(sub)      # b = 1
    output = Lambda(lambda x: relu(x), name="output")(add_b)
    model = Model(inputs=inputs, outputs=output)
    return model
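A quick sanity check of the builder before training (a usage sketch; the 480x640 size matches the img_height and img_width declared with the imports):

model = creat_AODNet(img_height, img_width, 3)
model.summary()   # every conv layer has only 3 filters, so the whole model has
                  # on the order of 2,000 trainable parameters, hence "lightweight"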

 

 

if __name__ == '__main__':
    train_data_dir = '../../jupyter/AOD_net/data/hazy'
    train_label_dir = '../../jupyter/AOD_net/data/clear'
    # train_data_dir = '../../TAMU/CSCE_633/Project/OTS/train'
    # train_label_dir = '../../TAMU/CSCE_633/Project/OTS/clear'
    batch_size = 1
    if K.image_data_format() == 'channels_first':
        input_shape = (3, img_height, img_width)
    else:
        input_shape = (img_height, img_width, 3)
model_checkpoint = ModelCheckpoint('aod_net.h5',
                                   # monitor='val_loss',
                                   # save_best_only=True,
                                   )
tensorboard_vis = TensorBoard(log_dir='./logs', histogram_freq=0, batch_size=batch_size,
                              write_graph=True, write_grads=False, write_images=False,
                              embeddings_freq=0, embeddings_layer_names=None,
                              embeddings_metadata=None)
print('Fitting model...')
model = creat_AODNet(img_height, img_width, 3)  # the original called get_unet(1, False), a leftover from other code
model.compile(optimizer=Adam(lr=1e-4), loss='mse')  # the compile step was missing; optimizer/loss are assumed
model.summary()
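The imports pull in DSSIMObjective from keras_contrib even though it is never used, so a structural-similarity loss was presumably tried at some point. Swapping it in for the MSE above is one line (a sketch, not the confirmed original setup):

# Alternative loss hinted at by the unused keras_contrib import:
model.compile(optimizer=Adam(lr=1e-4), loss=DSSIMObjective())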

 

#train_path = glob.glob("AOD_net/data/hazy/*png")
train_path = glob.glob(train_data_dir+"/*.png")
label_path = glob.glob(train_label_dir+'/'+'*.png')
epochs = 1
train_X, validation_X = train_test_split(train_path, test_size=0.0, shuffle=True,)
steps_per_epoch = int(len(train_X)/batch_size)
train_set_size = len(train_path)
print("Train set size is:", train_set_size, ",", "Steps per epochs is:", steps_per_epoch, ",", "batch size is:", batch_size, "total epochs is:", epochs)
#assert (train_set_size == (steps_per_epoch*batch_size))
#assert (train_set_size == (len(glob.glob(train_label_dir+'/'+'*.png'))))
#assert (train_set_size == (len(label_path)))
validation_steps = int(len(validation_X)/batch_size)
print ("Validation steps:", validation_steps)      

 

model.load_weights('aod.h5')          # resume from the weights of a previous run
model.save('aod_1_epoch_weights.h5')  # snapshot before this round of training
model.fit_generator(generator=train_generator(train_X, label_path, batch_size),
                    # validation_data=generate_arrays_from_file(validation_X, label_path, batch_size),
                    steps_per_epoch=steps_per_epoch,
                    # validation_steps=validation_steps,
                    epochs=epochs,
                    verbose=1,
                    callbacks=[model_checkpoint, tensorboard_vis],
                    # use_multiprocessing=False,
                    # max_queue_size=False,
                    )
def step_decay(epoch):
    initial_lrate = 0.0001
    drop = 0.5
    epochs_drop = 2
    lrate = initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    print("current lrate")
    print(lrate)
    return lrate
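step_decay is defined but never wired into training. To actually use it, wrap it in Keras's LearningRateScheduler callback (a sketch):

from keras.callbacks import LearningRateScheduler

lr_schedule = LearningRateScheduler(step_decay)
# then pass callbacks=[model_checkpoint, tensorboard_vis, lr_schedule] to fit_generator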
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.acc = []
        self.val_losses = []
        self.val_acc = []

    def on_batch_end(self, batch, logs={}):
        # print(logs.get('loss'))
        # print(logs.get('acc'))
        self.losses.append(logs.get('loss'))
        self.acc.append(logs.get('acc'))
        self.val_losses.append(logs.get('val_loss'))
        self.val_acc.append(logs.get('val_acc'))
        plt.plot(self.acc, 'r--')
        plt.plot(self.losses, 'b')
        plt.plot(self.val_acc, 'g--')
        plt.plot(self.val_losses, 'y')
        # plt.show()
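LossHistory is likewise defined but unused. Instantiating it and adding it to the callbacks list records per-batch curves (note that val_loss and val_acc will be None at batch end unless validation data is supplied):

history = LossHistory()
# add `history` to the callbacks list passed to fit_generator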
test_data_dir = glob.glob('../../jupyter/outdoor/hazy' + "/*.jpg")
print(test_data_dir)
model = load_model('aod_net.h5',
                   custom_objects={
                       # 'sum2': sum2,
                       'relu': relu,
                       # 'range_error': range_error,
                       # 'heat_error': heat_error,
                   })
test_pred_dir = 'dehazed_clahe_wo_dcp'

def save_image(Y_predicted_array, filenames):
    if not os.path.exists(test_pred_dir):
        os.mkdir(test_pred_dir)
    for idx, file_name in enumerate(filenames):
        f = re.sub(".jpg", "", file_name)
        f = re.sub(".*/", "", f)
        f = re.sub("[hazy]", "", f)
        # scipy.misc.toimage requires an older SciPy release
        scipy.misc.toimage(Y_predicted_array[idx]).save(
            os.path.join(test_pred_dir, f + "_dehazed.png"))
def show_image(file_data):
    plt.figure(figsize=(20, 6))
    plt.imshow(file_data)
    plt.show()
batch_size = 1
assert batch_size == 1
steps = int(math.ceil(len(test_data_dir) / float(batch_size)))
print("Steps:", steps, "Length of test set:", len(test_data_dir))
assert (steps * batch_size >= len(test_data_dir))
for step_idx in range(0, steps):
    Y_array = get_image(test_data_dir[step_idx * batch_size:(step_idx + 1) * batch_size])
    Y_predicted_array = model.predict(Y_array, batch_size=batch_size, verbose=1)
    # print(Y_predicted_array.shape)
    # show_image(X_test[0])
    YY = np.array(Y_predicted_array[0] / 130).astype("int")  # ad-hoc rescale for display, as in the original
    YY_SAVE = np.array(Y_predicted_array).astype("int")
    save_image(YY_SAVE, test_data_dir[step_idx * batch_size:(step_idx + 1) * batch_size])
    show_image(Y_array[0])
    show_image(YY)
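compare_psnr and compare_ssim are imported but never called. If the ground-truth clear images are available, a quick quantitative check could look like this (a sketch; the ground-truth path is hypothetical):

# Hypothetical evaluation: compare one dehazed prediction against its
# ground-truth clear image using the metrics imported above.
gt = get_image(['../../jupyter/outdoor/clear/0001.png'])[0].astype("uint8")   # path assumed
pred = Y_predicted_array[0].astype("uint8")
print("PSNR:", compare_psnr(gt, pred))
print("SSIM:", compare_ssim(gt, pred, multichannel=True))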

Test results:

 

That concludes this walkthrough of reproducing the ICCV 2017 paper AOD-Net in Keras. I hope it helps!


Original post: https://blog.csdn.net/qq_18998101/article/details/88567784
