[Note] The scatter_ function: usage like torch.zeros(target.size(0), 2).scatter_(1,target,1).to(self.device)

2024-01-29 06:40

This post introduces the scatter_ function through the usage torch.zeros(target.size(0), 2).scatter_(1,target,1).to(self.device), in the hope that it offers some reference value to developers running into the same problem.

Contents of target (the 0/1 class labels for one batch of 100 images):

tensor([0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
        0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
        0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0,
        0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 1, 0])

Type:

<class 'torch.Tensor'>

target.shape

torch.Size([100])

target.size()

torch.Size([100])

Program 1: error

Why the error occurs:

RuntimeError: Expected index [1, 100] to be smaller than self [100, 2] apart from dimension 1 and to be smaller size than src [100, 2]
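Here scatter_ was called with dim=1 and an index of shape [1, 100], i.e. target reshaped with view(1, -1). scatter_ requires index.size(d) <= self.size(d) for every dimension d != dim, and index.size(d) <= src.size(d) for every dimension d; here index.size(1) = 100 exceeds src.size(1) = 2, which is exactly what the message reports. The fix is view(-1, 1), which makes the index [100, 1]: one column index per row. A minimal sketch (Program 1 itself is not reproduced above; the failing call is reconstructed from the shapes in the error message, and the exact wording of the error varies across PyTorch versions):

import torch

target = torch.randint(0, 2, (100,))  # stand-in labels shaped like the target above

# reconstructed failing call: index [1, 100] does not fit self/src [100, 2]
# torch.zeros(target.size(0), 2).scatter_(1, target.view(1, -1), 1)  # RuntimeError

# working call: index [100, 1], one column index per row
one_hot = torch.zeros(target.size(0), 2).scatter_(1, target.view(-1, 1), 1)
print(one_hot.shape)  # torch.Size([100, 2])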
 

Program 2: correct

import torch
from PIL import Image
import numpy as np
import torch.nn as nn
import os
from torch.utils.data import Dataset, DataLoader


class mydataset(Dataset):
    def __init__(self, path):
        self.path = path
        self.dataset = os.listdir(self.path)
        self.mean = [0.4878, 0.4545, 0.4168]
        self.std = [0.2623, 0.2555, 0.2577]

    def __getitem__(self, index):
        name = self.dataset[index]
        name_list = name.split(".")
        target = int(name_list[0])  # the label is the leading digit of the filename
        target = torch.tensor(target)
        img = Image.open(os.path.join(self.path, name))
        img = np.array(img) / 255
        # normalize: subtract the mean, divide by the std (img is float64 here)
        img = (img - self.mean) / self.std
        data = torch.tensor(img, dtype=torch.float32).permute(2, 0, 1)
        return data, target

    def __len__(self):
        return len(self.dataset)


class mynetwork(nn.Module):
    def __init__(self):
        super(mynetwork, self).__init__()
        # ordered container of fully connected layers
        self.line1 = nn.Sequential(
            nn.Linear(3 * 100 * 100, 5120),
            nn.ReLU(),
            nn.Linear(5120, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, 2560),
            nn.ReLU(),
            nn.Linear(2560, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Linear(256, 2),
        )

    def forward(self, parse):
        # flatten each image into a vector of 3 * 100 * 100 values
        data = torch.reshape(parse, shape=(-1, 3 * 100 * 100))
        return self.line1(data)


class train(object):
    def __init__(self, path):
        self.path = path
        self.test_dataset = mydataset(self.path)
        self.train_dataset = mydataset(self.path)
        self.criterion = torch.nn.MSELoss()
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.net = mynetwork().to(self.device)
        self.optimize = torch.optim.Adam(self.net.parameters())

    def dataloader(self, batch):
        train_data_loader = DataLoader(dataset=self.train_dataset, batch_size=batch, shuffle=True)
        test_data_loader = DataLoader(dataset=self.test_dataset, batch_size=batch, shuffle=True)
        return train_data_loader, test_data_loader

    def trainnet(self, batch, epoch):
        train_data_loader, test_data_loader = self.dataloader(batch)
        losses = []
        accuracy = []
        for i in range(epoch):
            # named inputs rather than input, to avoid shadowing the built-in input() used below
            for j, (inputs, target) in enumerate(train_data_loader):
                inputs = inputs.to(self.device)
                output = self.net(inputs)
                print(target, type(target), target.shape, target.size())
                # one-hot encode: view(-1, 1) makes the index [batch, 1], which fits
                # self [batch, 2] in every dimension except dim 1
                target = torch.zeros(target.size(0), 2).scatter_(1, target.view(-1, 1), 1).to(self.device)
                print(target, type(target), target.shape, target.size())
                print(target)
                input()  # pause after the first batch


if __name__ == "__main__":
    path = r"./cat_dog/img"
    t = train(path)
    t.trainnet(100, 10)
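The one-hot conversion is needed here because nn.MSELoss compares the network's two-unit output with a target of the same shape [batch, 2]. As an aside (not part of the original program), with nn.CrossEntropyLoss the integer class labels could be used directly and the scatter_ step would disappear:

criterion = torch.nn.CrossEntropyLoss()
loss = criterion(output, target)  # target: int64 class indices of shape [batch]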

Output:

/home/wangbin/anaconda3/envs/deep_learning/bin/python3.7 /media/wangbin/F/深度学习_程序/dog_cat/cat_dog.py
tensor([0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
        0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
        0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0,
        0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 1, 0]) <class 'torch.Tensor'> torch.Size([100]) torch.Size([100])
tensor([[1., 0.], [1., 0.], [0., 1.], [1., 0.], [1., 0.], [0., 1.],
        [0., 1.], [0., 1.], [0., 1.], [1., 0.], [1., 0.], [0., 1.],
        [1., 0.], [0., 1.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
        [0., 1.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
        [1., 0.], [0., 1.], [1., 0.], [0., 1.], [0., 1.], [0., 1.],
        [0., 1.], [1., 0.], [1., 0.], [0., 1.], [0., 1.], [0., 1.],
        [0., 1.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
        [1., 0.], [1., 0.], [1., 0.], [0., 1.], [1., 0.], [1., 0.],
        [1., 0.], [0., 1.], [0., 1.], [0., 1.], [0., 1.], [0., 1.],
        [1., 0.], [0., 1.], [0., 1.], [0., 1.], [1., 0.], [1., 0.],
        [1., 0.], [1., 0.], [0., 1.], [1., 0.], [0., 1.], [1., 0.],
        [0., 1.], [1., 0.], [0., 1.], [1., 0.], [0., 1.], [1., 0.],
        [1., 0.], [0., 1.], [1., 0.], [0., 1.], [1., 0.], [1., 0.],
        [0., 1.], [0., 1.], [1., 0.], [1., 0.], [0., 1.], [0., 1.],
        [1., 0.], [0., 1.], [0., 1.], [0., 1.], [1., 0.], [0., 1.],
        [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
        [1., 0.], [0., 1.], [0., 1.], [1., 0.]], device='cuda:0') <class 'torch.Tensor'> torch.Size([100, 2]) torch.Size([100, 2])
tensor([[1., 0.], [1., 0.], [0., 1.], [1., 0.], [1., 0.], [0., 1.],
        [0., 1.], [0., 1.], [0., 1.], [1., 0.], [1., 0.], [0., 1.],
        [1., 0.], [0., 1.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
        [0., 1.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
        [1., 0.], [0., 1.], [1., 0.], [0., 1.], [0., 1.], [0., 1.],
        [0., 1.], [1., 0.], [1., 0.], [0., 1.], [0., 1.], [0., 1.],
        [0., 1.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
        [1., 0.], [1., 0.], [1., 0.], [0., 1.], [1., 0.], [1., 0.],
        [1., 0.], [0., 1.], [0., 1.], [0., 1.], [0., 1.], [0., 1.],
        [1., 0.], [0., 1.], [0., 1.], [0., 1.], [1., 0.], [1., 0.],
        [1., 0.], [1., 0.], [0., 1.], [1., 0.], [0., 1.], [1., 0.],
        [0., 1.], [1., 0.], [0., 1.], [1., 0.], [0., 1.], [1., 0.],
        [1., 0.], [0., 1.], [1., 0.], [0., 1.], [1., 0.], [1., 0.],
        [0., 1.], [0., 1.], [1., 0.], [1., 0.], [0., 1.], [0., 1.],
        [1., 0.], [0., 1.], [0., 1.], [0., 1.], [1., 0.], [0., 1.],
        [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
        [1., 0.], [0., 1.], [0., 1.], [1., 0.]], device='cuda:0')

Appendix:

Function reference (the PyTorch docstring for Tensor.scatter_):

scatter_(dim, index, src, reduce=None) -> Tensor
Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.
For a 3-D tensor, self is updated as:
self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0
self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1
self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2
This is the reverse operation of the manner described in torch.Tensor.gather().
self, index and src (if it is a Tensor) should all have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim.
Moreover, as for torch.Tensor.gather(), the values of index must be between 0 and self.size(dim) - 1 inclusive, and all values in a row along the specified dimension dim must be unique.
Additionally accepts an optional reduce argument that allows specification of an optional reduction operation, which is applied to all values in the tensor src into self at the indices specified in the index. For each value in src, the reduction operation is applied to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.
Given a 3-D tensor and reduction using the multiplication operation, self is updated as:
self[index[i][j][k]][j][k] *= src[i][j][k]  # if dim == 0
self[i][index[i][j][k]][k] *= src[i][j][k]  # if dim == 1
self[i][j][index[i][j][k]] *= src[i][j][k]  # if dim == 2
Reducing with the addition operation is the same as using torch.Tensor.scatter_add_().
Note
Reduction is not yet implemented for the CUDA backend.
Example:
>>> x = torch.rand(2, 5)
>>> x
tensor([[ 0.3992,  0.2908,  0.9044,  0.4850,  0.6004],
        [ 0.5735,  0.9006,  0.6797,  0.4152,  0.1732]])
>>> torch.zeros(3, 5).scatter_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
tensor([[ 0.3992,  0.9006,  0.6797,  0.4850,  0.6004],
        [ 0.0000,  0.2908,  0.0000,  0.4152,  0.0000],
        [ 0.5735,  0.0000,  0.9044,  0.0000,  0.1732]])

>>> z = torch.zeros(2, 4).scatter_(1, torch.tensor([[2], [3]]), 1.23)
>>> z
tensor([[ 0.0000,  0.0000,  1.2300,  0.0000],
        [ 0.0000,  0.0000,  0.0000,  1.2300]])

>>> z = torch.ones(2, 4).scatter_(1, torch.tensor([[2], [3]]), 1.23, reduce='multiply')
>>> z
tensor([[1.0000, 1.0000, 1.2300, 1.0000],
        [1.0000, 1.0000, 1.0000, 1.2300]])
Params:
dim – the axis along which to index
index – the indices of elements to scatter; can be either empty or of the same size as src. When empty, the operation returns self unchanged
src – the source element(s) to scatter, in case `value` is not specified
reduce – the reduction operation to apply, either 'add' or 'multiply'
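For the one-hot use case in this note, newer PyTorch versions also provide torch.nn.functional.one_hot, which avoids the shape bookkeeping entirely. A small sketch (not part of the original post):

import torch
import torch.nn.functional as F

target = torch.tensor([0, 1, 1, 0])
# same result as torch.zeros(target.size(0), 2).scatter_(1, target.view(-1, 1), 1)
one_hot = F.one_hot(target, num_classes=2).float()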

That concludes this note on the scatter_ function and the usage torch.zeros(target.size(0), 2).scatter_(1,target,1).to(self.device); hopefully the article is of some help to fellow programmers!



http://www.chinasem.cn/article/656061
