This article covers a memory leak that occurs when using PyTorch's DataLoader with num_workers != 0. I hope it offers a useful reference to developers who run into the same problem; follow along below!
-
First, some background and the problem I ran into:
I was training a multi-class classifier on a very large dataset. The machine runs Ubuntu 22.04 with an i9-13900K, an Nvidia 4090, and 64 GB of RAM. The first run used a training set of 7 million images and finished successfully. I later collected more data, and after augmentation the set grew to 10 million images. That second run was killed by the system after about 4 hours of training, with Out of Memory as the reason. After a long hunt for the cause, I noticed that RAM usage grew linearly with the training step count, which pointed to a memory leak, and I finally traced it to the DataLoader's num_workers parameter (with num_workers=0 the problem disappears).
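For reference, this is roughly how you can watch memory grow during training. A minimal sketch of my monitoring approach, assuming the psutil package is installed and train_loader is the loader defined further below; the 1000-step interval is arbitrary:
import os
import psutil

process = psutil.Process(os.getpid())
for step, (images, labels) in enumerate(train_loader):
    # ... forward / backward pass as usual ...
    if step % 1000 == 0:
        main_rss = process.memory_info().rss
        worker_rss = sum(c.memory_info().rss for c in process.children(recursive=True))
        print(f"step {step}: main {main_rss / 1024**3:.2f} GiB, "
              f"workers {worker_rss / 1024**3:.2f} GiB")
If the combined number climbs linearly with the step count even though each batch is the same size, you are looking at the problem described here.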
-
The root cause:
When the Python lists held inside a Dataset are read and converted to tensors by DataLoader worker processes, memory keeps growing. The underlying mechanism is copy-on-write: the workers are forked from the main process, and every access to a list element updates that element's reference count, which dirties the memory pages the workers share with the parent, so each worker gradually copies the whole list. A NumPy array keeps its data in one contiguous buffer with no per-element Python objects, so its pages stay shared. The fix is therefore to avoid Python lists and use np.array instead.
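The effect is easy to reproduce outside PyTorch. Here is a small illustration of my own (not from the original setup), assuming Linux-style fork and psutil: a forked child that merely reads a big Python list ends up owning far more private memory than one reading a NumPy array of the same length.
import multiprocessing as mp
import os

import numpy as np
import psutil

N = 5_000_000
as_list = list(range(N))   # N separate Python int objects
as_array = np.arange(N)    # one contiguous buffer

def touch(seq, name):
    # Merely reading list elements updates their refcounts, which writes to
    # the copy-on-write pages inherited from the parent; the NumPy buffer is
    # only read, so its pages stay shared.
    for _ in seq:
        pass
    uss = psutil.Process(os.getpid()).memory_full_info().uss / 1024**2
    print(f"{name}: memory unique to the child: {uss:.0f} MiB")

if __name__ == "__main__":
    for seq, name in ((as_list, "list"), (as_array, "np.array")):
        p = mp.get_context("fork").Process(target=touch, args=(seq, name))
        p.start()
        p.join()
A DataLoader worker iterating a Dataset whose index lives in Python lists is doing exactly what the "list" child does here, once per worker.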
-
The solution:
Write a custom Dataset class for the DataLoader and replace every Python list inside it with an np.array. With that change, the DataLoader's conversion from np.array to Tensor no longer leaks memory.
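The conversion itself is a one-liner per field. A tiny illustration (the paths and labels are made up): np.array packs the strings into one fixed-width character buffer and the labels into one integer buffer, leaving no per-element Python objects for the workers to touch.
import numpy as np

image_paths = ["data/cats/0001.jpg", "data/dogs/0002.jpg"]
labels = [0, 1]

image_paths = np.array(image_paths)        # dtype like <U18: one fixed-width string buffer
labels = np.array(labels, dtype=np.int64)  # one contiguous int64 buffer

print(image_paths.dtype, labels.dtype)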
-
Below are two incorrect code examples and one correct one (all of them mistakes I actually made):
1. Incorrect way to load the dataset with DataLoader, method 1
from torch.utils.data import DataLoader
from torchvision import datasets

# Load the datasets
train_data = datasets.ImageFolder(root=TRAIN_DIR_ARG, transform=transform)
valid_data = datasets.ImageFolder(root=VALIDATION_DIR, transform=transform)
test_data = datasets.ImageFolder(root=TEST_DIR, transform=transform)

train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=8)
valid_loader = DataLoader(valid_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)
test_loader = DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)
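This looks innocent, but ImageFolder itself stores its index (self.samples, self.targets) in Python lists, so the copy-on-write growth happens inside torchvision's own class. If you want to keep ImageFolder, one possible workaround is to re-pack that index into NumPy arrays. This is a sketch of my own, not part of torchvision, and the NumpyImageFolder name is invented for illustration:
import numpy as np
from torchvision import datasets

class NumpyImageFolder(datasets.ImageFolder):
    # Illustrative subclass, not a torchvision API.
    def __init__(self, root, transform=None):
        super().__init__(root, transform=transform)
        # Re-pack the (path, class_index) list into NumPy arrays so forked
        # workers share one buffer instead of millions of Python objects.
        self.paths_np = np.array([p for p, _ in self.samples])
        self.targets_np = np.array([t for _, t in self.samples], dtype=np.int64)
        # Drop the list-based copies kept by ImageFolder/DatasetFolder.
        self.samples = self.targets = self.imgs = None

    def __len__(self):
        return len(self.paths_np)

    def __getitem__(self, idx):
        image = self.loader(str(self.paths_np[idx]))  # self.loader is inherited
        if self.transform is not None:
            image = self.transform(image)
        return image, int(self.targets_np[idx])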
2. Incorrect way to load the dataset, method 2 (with a custom Dataset class)
import os

from PIL import Image
from torch.utils.data import DataLoader, Dataset

class CustomDataset(Dataset):
    def __init__(self, data_dir, transform=None):
        self.data_dir = data_dir
        self.transform = transform
        self.image_paths = []
        self.labels = []
        # Walk the data directory and collect image paths and their labels
        classes = os.listdir(data_dir)
        for i, class_name in enumerate(classes):
            class_dir = os.path.join(data_dir, class_name)
            if os.path.isdir(class_dir):
                for image_name in os.listdir(class_dir):
                    image_path = os.path.join(class_dir, image_name)
                    self.image_paths.append(image_path)
                    self.labels.append(i)

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image_path = self.image_paths[idx]
        label = self.labels[idx]
        # Load the image only when it is actually requested
        image = Image.open(image_path)
        if self.transform:
            image = self.transform(image)
        return image, label

train_data = CustomDataset(data_dir=TRAIN_DIR_ARG, transform=transform)
valid_data = CustomDataset(data_dir=VALIDATION_DIR, transform=transform)
test_data = CustomDataset(data_dir=TEST_DIR, transform=transform)

train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=18)
valid_loader = DataLoader(valid_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)
test_loader = DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8, pin_memory=False)
3. The correct custom Dataset (every list converted to np.array)
import os

import numpy as np
from PIL import Image
from torch.utils.data import DataLoader, Dataset

class CustomDataset(Dataset):
    def __init__(self, data_dir, transform=None):
        self.data_dir = data_dir
        self.transform = transform
        self.image_paths = []  # Python list, only used while scanning
        self.labels = []       # Python list, only used while scanning
        # Walk the data directory and collect image paths and their labels
        classes = os.listdir(data_dir)
        for i, class_name in enumerate(classes):
            class_dir = os.path.join(data_dir, class_name)
            if os.path.isdir(class_dir):
                for image_name in os.listdir(class_dir):
                    image_path = os.path.join(class_dir, image_name)
                    self.image_paths.append(image_path)
                    self.labels.append(i)
        # Convert to NumPy arrays -- this is the key code that fixes the leak
        self.image_paths = np.array(self.image_paths)
        self.labels = np.array(self.labels)

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image_path = self.image_paths[idx]
        label = self.labels[idx]
        # Load the image only when it is actually requested
        image = Image.open(image_path)
        if self.transform:
            image = self.transform(image)
        # Hand the image data over as a NumPy array as well
        image = np.array(image)
        return image, label

train_data = CustomDataset(data_dir=TRAIN_DIR_ARG, transform=transform)
valid_data = CustomDataset(data_dir=VALIDATION_DIR, transform=transform)
test_data = CustomDataset(data_dir=TEST_DIR, transform=transform)

train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=18)
valid_loader = DataLoader(valid_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)
test_loader = DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8, pin_memory=False)
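Before kicking off a full training run, you can confirm the fix by iterating the loader on its own and reusing the same psutil counter from the diagnosis snippet earlier; the step counts here are arbitrary:
import itertools
import os

import psutil

proc = psutil.Process(os.getpid())
for step, batch in enumerate(itertools.islice(train_loader, 5000)):
    if step % 1000 == 0:
        print(f"step {step}: RSS {proc.memory_info().rss / 1024**2:.0f} MiB")
# With the list-based Dataset above, RSS keeps climbing with the step count;
# with this np.array version it should level off after the first few hundred steps.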
That wraps up this article on the memory leak when using PyTorch's DataLoader with num_workers != 0. I hope it proves helpful to fellow developers!