PyCharm fails to install PyTorch: a series of errors. The torch package could not be found, supposedly because pip was too old; after upgrading pip from 19.3 to 20.2.4, torch still could not be installed.

This post documents a series of errors hit while installing PyTorch from PyCharm: the torch package could not be found, allegedly because the pip version was too low, yet after upgrading pip from 19.3 to 20.2.4 the install still failed. Hopefully the workaround below saves other developers some time. The first symptom was this deprecation warning from pip:


DEPRECATION: The -b/--build/--build-dir/--build-directory option is deprecated. pip 20.3 will remove support for this functionality. A possible replacement is use the TMPDIR/TEMP/TMP environment variable, possibly combined with --no-clean. You can find discussion regarding this at https://github.com/pypa/pip/issues/8333.
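This DEPRECATION line is only a warning, not what makes the install fail; it most likely appears because PyCharm's package installer still passes a build directory to pip (an assumption, not confirmed by the logs). For command-line installs, the replacement the message itself suggests looks roughly like the following sketch on Windows (the temp directory path is a hypothetical example):

set TMP=D:\pip-build-tmp
pip install --no-clean torch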

The error above is discussed in this reference:

https://www.jb51.net/article/194349.htm

Another post where someone ran into a similar problem and described a solution:

https://blog.csdn.net/weixin_41010198/article/details/103107083


Installing torch==1.6.0 against the Tsinghua PyPI mirror then failed like this:
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple

DEPRECATION: The -b/--build/--build-dir/--build-directory option is deprecated. pip 20.3 will remove support for this functionality. A possible replacement is use the TMPDIR/TEMP/TMP environment variable, possibly combined with --no-clean. You can find discussion regarding this at https://github.com/pypa/pip/issues/8333.
ERROR: Could not find a version that satisfies the requirement torch==1.6.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.6.0
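The "(from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)" part is the key: the index pip is querying exposes no real torch wheel for this Python/platform combination, so no pip upgrade or downgrade can make torch==1.6.0 appear. A workaround that often helps in this situation (not the one ultimately used below) is to install from PyTorch's own wheel index; a hedged sketch for a CPU-only build, following PyTorch's historical install instructions:

pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html

Check pytorch.org for the exact command matching your Python and CUDA versions.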


3. The final solution

The problem of importing PyTorch in PyCharm was finally solved by the approach below; it was quite a detour.

The main reference is the link below: switch PyCharm's system interpreter to the python.exe of the Anaconda environment.

https://www.jb51.net/article/181954.htm

The same post also shows where to change this in PyCharm: open File > Settings > Project > Project Interpreter (called Python Interpreter in newer versions) and point it at Anaconda's python.exe.

This approach was chosen because the same code could already use PyTorch (version 1.1.0) in a locally started Anaconda Jupyter notebook, while in PyCharm no version would install at all, whether 1.1.0, 0.4.1, or 1.6.0. That pointed to an environment problem rather than a PyTorch problem.

Hence the idea of reusing the Anaconda environment (above all its interpreter). After changing the project interpreter as described in the post above, the torch package referenced by the code was found and the "package not found" error disappeared.
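A quick sanity check that PyCharm is now really running the Anaconda interpreter that has PyTorch; a minimal sketch, assuming the interpreter was switched as described:

import sys
import torch

print(sys.executable)             # should point at Anaconda's python.exe, e.g. C:\ProgramData\Anaconda3\python.exe
print(torch.__version__)          # the Anaconda env here has 1.1.0
print(torch.cuda.is_available())  # True only for a CUDA build with a visible GPU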


The experiment code is an NLP Seq2Seq example:

# code by Tae Hwan Jung (Jeff Jung) @graykode, modified by wmathor
import torch
import numpy as np
import torch.nn as nn
import torch.utils.data as Data

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# S: symbol that marks the start of the decoder input
# E: symbol that marks the end of the decoder output
# ?: padding symbol used when the current word is shorter than n_step
letter = [c for c in 'SE?abcdefghijklmnopqrstuvwxyz']
letter2idx = {n: i for i, n in enumerate(letter)}

seq_data = [['man', 'women'], ['black', 'white'], ['king', 'queen'], ['girl', 'boy'], ['up', 'down'], ['high', 'low']]

# Seq2Seq parameters
n_step = max([max(len(i), len(j)) for i, j in seq_data])  # max_len (=5)
n_hidden = 128
n_class = len(letter2idx)  # classification problem
batch_size = 3

def make_data(seq_data):
    enc_input_all, dec_input_all, dec_output_all = [], [], []
    for seq in seq_data:
        for i in range(2):
            seq[i] = seq[i] + '?' * (n_step - len(seq[i]))  # 'man??', 'women'
        enc_input = [letter2idx[n] for n in (seq[0] + 'E')]  # ['m', 'a', 'n', '?', '?', 'E']
        dec_input = [letter2idx[n] for n in ('S' + seq[1])]  # ['S', 'w', 'o', 'm', 'e', 'n']
        dec_output = [letter2idx[n] for n in (seq[1] + 'E')]  # ['w', 'o', 'm', 'e', 'n', 'E']
        enc_input_all.append(np.eye(n_class)[enc_input])  # one-hot
        dec_input_all.append(np.eye(n_class)[dec_input])  # one-hot
        dec_output_all.append(dec_output)  # not one-hot
    # make tensors
    return torch.Tensor(enc_input_all), torch.Tensor(dec_input_all), torch.LongTensor(dec_output_all)

'''
enc_input_all: [6, n_step+1 (because of 'E'), n_class]
dec_input_all: [6, n_step+1 (because of 'S'), n_class]
dec_output_all: [6, n_step+1 (because of 'E')]
'''
enc_input_all, dec_input_all, dec_output_all = make_data(seq_data)

class TranslateDataSet(Data.Dataset):
    def __init__(self, enc_input_all, dec_input_all, dec_output_all):
        self.enc_input_all = enc_input_all
        self.dec_input_all = dec_input_all
        self.dec_output_all = dec_output_all

    def __len__(self):  # return dataset size
        return len(self.enc_input_all)

    def __getitem__(self, idx):
        return self.enc_input_all[idx], self.dec_input_all[idx], self.dec_output_all[idx]

loader = Data.DataLoader(TranslateDataSet(enc_input_all, dec_input_all, dec_output_all), batch_size, True)
# Model
class Seq2Seq(nn.Module):
    def __init__(self):
        super(Seq2Seq, self).__init__()
        self.encoder = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5)  # encoder
        self.decoder = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5)  # decoder
        self.fc = nn.Linear(n_hidden, n_class)

    def forward(self, enc_input, enc_hidden, dec_input):
        # enc_input(=input_batch): [batch_size, n_step+1, n_class]
        # dec_input(=output_batch): [batch_size, n_step+1, n_class]
        enc_input = enc_input.transpose(0, 1)  # enc_input: [n_step+1, batch_size, n_class]
        dec_input = dec_input.transpose(0, 1)  # dec_input: [n_step+1, batch_size, n_class]
        # h_t : [num_layers(=1) * num_directions(=1), batch_size, n_hidden]
        _, h_t = self.encoder(enc_input, enc_hidden)
        # outputs : [n_step+1, batch_size, num_directions(=1) * n_hidden(=128)]
        outputs, _ = self.decoder(dec_input, h_t)
        model = self.fc(outputs)  # model : [n_step+1, batch_size, n_class]
        return model

model = Seq2Seq().to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(5000):
    for enc_input_batch, dec_input_batch, dec_output_batch in loader:
        # make hidden shape [num_layers * num_directions, batch_size, n_hidden]
        h_0 = torch.zeros(1, batch_size, n_hidden).to(device)
        (enc_input_batch, dec_input_batch, dec_output_batch) = (
            enc_input_batch.to(device), dec_input_batch.to(device), dec_output_batch.to(device))
        # enc_input_batch : [batch_size, n_step+1, n_class]
        # dec_input_batch : [batch_size, n_step+1, n_class]
        # dec_output_batch : [batch_size, n_step+1], not one-hot
        pred = model(enc_input_batch, h_0, dec_input_batch)
        # pred : [n_step+1, batch_size, n_class]
        pred = pred.transpose(0, 1)  # [batch_size, n_step+1(=6), n_class]
        loss = 0
        for i in range(len(dec_output_batch)):
            # pred[i] : [n_step+1, n_class]
            # dec_output_batch[i] : [n_step+1]
            loss += criterion(pred[i], dec_output_batch[i])
        if (epoch + 1) % 1000 == 0:
            print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
# Test
def translate(word):
    enc_input, dec_input, _ = make_data([[word, '?' * n_step]])
    enc_input, dec_input = enc_input.to(device), dec_input.to(device)
    print("enc_input=", enc_input, "dec_input=", dec_input, "n_hidden=", n_hidden)
    # make hidden shape [num_layers * num_directions, batch_size, n_hidden]
    hidden = torch.zeros(1, 1, n_hidden).to(device)
    output = model(enc_input, hidden, dec_input)
    # output : [n_step+1, batch_size, n_class]
    print("output.data=", output.data)
    predict = output.data.max(2, keepdim=True)[1]  # select the n_class dimension
    print("predict=", predict)
    decoded = [letter[i] for i in predict]
    translated = ''.join(decoded[:decoded.index('E')])
    return translated.replace('?', '')

print('test')
print('man ->', translate('man'))
print('mans ->', translate('mans'))
print('king ->', translate('king'))
print('black ->', translate('black'))
print('up ->', translate('up'))
Execution output. The dropout UserWarning below is expected: with num_layers=1 the RNN has nowhere to apply inter-layer dropout. Each reported epoch also appears twice because the six word pairs are split into two batches of three and the print sits inside the batch loop:

C:\ProgramData\Anaconda3\python.exe C:/Users/pc/PycharmProjects/seq2seq/Seq2Seq.py
C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\rnn.py:60: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.5 and num_layers=1
  "num_layers={}".format(dropout, num_layers))
Epoch: 1000 cost = 0.001993
Epoch: 1000 cost = 0.002054
Epoch: 2000 cost = 0.000441
Epoch: 2000 cost = 0.000415
Epoch: 3000 cost = 0.000128
Epoch: 3000 cost = 0.000134
Epoch: 4000 cost = 0.000043
Epoch: 4000 cost = 0.000045
Epoch: 5000 cost = 0.000015
Epoch: 5000 cost = 0.000016
test
enc_input= tensor([[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) dec_input= tensor([[[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) n_hidden= 128
output.data= tensor([[[-4.3065e+00, -4.8961e+00, -1.0978e+00, -4.1016e+00,  1.6963e+00,
          -3.4909e+00,  1.3702e+00, -3.3739e-02, -4.2637e+00, -3.8116e+00,
          -1.6880e+00, -2.9487e+00, -4.1300e+00, -3.1433e+00,  1.6024e+00,
          -2.8452e+00,  1.0767e+00, -5.6262e+00, -4.0048e+00,  1.0652e+00,
          -4.4673e+00, -4.5837e+00, -4.2722e+00, -3.1410e+00, -3.9433e+00,
           1.5158e+01, -4.0215e+00, -4.3383e+00, -4.2129e+00]],

        [[-2.3903e+00,  7.6515e-01,  2.1158e+00, -2.1588e+00, -3.0337e+00,
          -2.6551e+00,  7.7586e-01, -5.4225e+00, -2.8381e+00, -2.8219e+00,
           2.1630e+00, -2.2365e+00, -3.1426e+00, -2.7296e+00,  1.1102e+00,
          -7.0369e-01,  2.7494e+00,  1.6497e+01, -2.7483e+00,  4.9587e-01,
          -2.6725e+00, -3.1282e+00, -2.2622e+00,  1.7541e+00, -2.8997e+00,
          -6.0975e+00, -2.9473e+00, -4.8188e+00, -2.5179e+00]],

        [[-2.6203e+00,  1.7803e+00,  1.8365e+00, -2.0814e+00, -8.7015e-02,
          -1.9189e+00, -4.1026e+00,  1.7874e+00, -1.9336e+00, -1.9122e+00,
          -7.5097e+00,  1.6539e+00, -2.0918e+00, -2.1359e+00, -1.5856e+00,
           1.5040e+01, -6.0545e+00,  1.0486e+00, -2.1752e+00, -3.8106e+00,
          -2.2595e+00, -2.4848e+00, -1.9771e+00,  1.5679e+00, -2.1174e+00,
           2.0894e+00, -1.6449e+00,  1.7625e+00, -2.0804e+00]],

        [[-2.4143e+00,  2.9779e+00,  3.3485e-01, -1.7429e+00, -4.8116e+00,
          -1.6712e+00, -1.8053e+00,  1.5846e+01, -2.0152e+00, -1.9535e+00,
          -5.5402e+00, -7.0520e+00, -2.2360e+00, -1.9970e+00, -2.4646e+00,
          -2.3482e+00,  2.8388e+00, -4.2379e+00, -2.1541e+00, -4.0036e-01,
          -1.7277e+00, -2.3172e+00,  2.6084e+00, -2.3780e+00, -1.9208e+00,
           2.9743e+00, -1.7947e+00, -2.6649e-02, -1.6980e+00]],

        [[-1.4365e+00,  1.1394e+00,  3.4600e+00, -1.5666e+00, -1.6747e+00,
          -1.5963e+00, -1.9778e-01,  2.6304e+00, -1.2731e+00, -1.4576e+00,
          -3.7625e+00, -7.0933e+00, -1.2083e+00, -1.6387e+00, -4.2228e+00,
          -3.8364e+00,  1.6177e+01, -4.5725e-01, -1.2973e+00,  6.8912e-01,
          -1.9087e+00, -1.7284e+00, -6.8999e+00,  6.5842e-03, -1.5360e+00,
           1.2516e+00, -1.2836e+00, -1.3752e+00, -1.4469e+00]],

        [[-8.9884e-01,  1.8602e+01,  3.8394e+00, -7.6437e-01, -2.0698e+00,
          -7.1664e-01,  3.8823e+00, -7.9843e-01, -1.1896e+00, -3.4626e-01,
          -7.7051e+00, -7.4298e+00, -7.6236e-01, -9.4240e-01, -1.5845e+00,
           3.8690e+00,  2.7345e+00,  2.6617e+00, -3.8022e-01, -1.0083e+00,
          -8.5122e-01, -1.0864e+00, -2.5288e+00,  9.2861e-01, -9.4337e-01,
          -3.5223e+00, -5.6646e-01, -6.5137e+00, -9.3167e-01]]])
predict= tensor([[[25]],

        [[17]],

        [[15]],

        [[ 7]],

        [[16]],

        [[ 1]]])
man -> women
enc_input= tensor([[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) dec_input= tensor([[[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) n_hidden= 128
output.data= tensor([[[-4.2364e+00, -5.1443e+00, -1.2253e+00, -3.9696e+00,  1.7959e+00,
          -3.4189e+00,  1.1670e+00,  1.4500e-01, -4.1254e+00, -3.6900e+00,
          -1.5142e+00, -2.7320e+00, -4.0051e+00, -3.0514e+00,  1.4039e+00,
          -2.6893e+00,  8.5148e-01, -5.8578e+00, -3.9052e+00,  8.6857e-01,
          -4.3486e+00, -4.4292e+00, -4.1178e+00, -3.0776e+00, -3.8222e+00,
           1.5356e+01, -3.9074e+00, -4.1002e+00, -4.0902e+00]],

        [[-2.3948e+00,  5.5543e-01,  2.1681e+00, -2.1550e+00, -2.9457e+00,
          -2.6404e+00,  7.8076e-01, -5.3944e+00, -2.8324e+00, -2.8036e+00,
           2.5039e+00, -2.1335e+00, -3.1115e+00, -2.7270e+00,  1.1344e+00,
          -8.8501e-01,  2.7650e+00,  1.6197e+01, -2.7496e+00,  5.4389e-01,
          -2.6732e+00, -3.1151e+00, -2.1891e+00,  1.6821e+00, -2.8828e+00,
          -6.0967e+00, -2.9272e+00, -4.7865e+00, -2.5106e+00]],

        [[-2.6012e+00,  1.6449e+00,  1.9008e+00, -2.0793e+00, -6.8719e-02,
          -1.8982e+00, -4.1096e+00,  1.7365e+00, -1.9221e+00, -1.9143e+00,
          -7.3788e+00,  1.9041e+00, -2.0739e+00, -2.1198e+00, -1.5268e+00,
           1.5047e+01, -6.1965e+00,  1.0572e+00, -2.1663e+00, -3.7946e+00,
          -2.2383e+00, -2.4685e+00, -1.9318e+00,  1.5860e+00, -2.1032e+00,
           1.9331e+00, -1.6418e+00,  1.7963e+00, -2.0671e+00]],

        [[-2.4150e+00,  2.9655e+00,  2.6004e-01, -1.7344e+00, -4.8468e+00,
          -1.6555e+00, -1.8036e+00,  1.5898e+01, -2.0078e+00, -1.9563e+00,
          -5.5235e+00, -6.9839e+00, -2.2330e+00, -1.9891e+00, -2.4521e+00,
          -2.3151e+00,  2.6808e+00, -4.2703e+00, -2.1547e+00, -4.0710e-01,
          -1.7177e+00, -2.3013e+00,  2.7612e+00, -2.3445e+00, -1.9045e+00,
           2.9985e+00, -1.7977e+00, -1.8816e-02, -1.6960e+00]],

        [[-1.4256e+00,  1.0906e+00,  3.3821e+00, -1.5478e+00, -1.6942e+00,
          -1.5854e+00, -2.4417e-01,  2.7178e+00, -1.2530e+00, -1.4479e+00,
          -3.7336e+00, -7.0572e+00, -1.1972e+00, -1.6188e+00, -4.2215e+00,
          -3.8705e+00,  1.6215e+01, -5.0594e-01, -1.2779e+00,  7.0080e-01,
          -1.8873e+00, -1.7079e+00, -6.8577e+00, -1.5080e-03, -1.5215e+00,
           1.2660e+00, -1.2671e+00, -1.3414e+00, -1.4342e+00]],

        [[-8.9704e-01,  1.8598e+01,  3.8363e+00, -7.6451e-01, -2.0665e+00,
          -7.1585e-01,  3.8746e+00, -7.9766e-01, -1.1897e+00, -3.4330e-01,
          -7.6859e+00, -7.4154e+00, -7.5932e-01, -9.3892e-01, -1.5778e+00,
           3.8570e+00,  2.7336e+00,  2.6286e+00, -3.8035e-01, -1.0038e+00,
          -8.5127e-01, -1.0872e+00, -2.5204e+00,  9.3966e-01, -9.4037e-01,
          -3.5173e+00, -5.6645e-01, -6.5331e+00, -9.3239e-01]]])
predict= tensor([[[25]],

        [[17]],

        [[15]],

        [[ 7]],

        [[16]],

        [[ 1]]])
mans -> women
enc_input= tensor([[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) dec_input= tensor([[[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) n_hidden= 128
output.data= tensor([[[-2.8283, -2.3626, -0.1669, -2.7062, -4.9340, -2.6686,  1.1879,
          -0.4988, -2.5126, -2.7524, -0.8902, -0.8576, -2.1348, -2.0980,
           1.3107, -5.7952,  0.4938, -1.6269, -2.5288, 14.2410, -3.2220,
          -2.8979,  0.5059,  0.8293, -2.9407,  0.1144, -2.8318, -2.2397,
          -2.8485]],

        [[-2.7481,  0.1070, -0.2417, -2.6742, -3.5322, -2.7356, -2.0069,
          -0.2905, -2.8494, -2.4679,  0.4449, -0.2035, -2.8989, -2.5681,
          -1.2044,  0.7797, -1.2203,  1.3424, -2.0246,  1.0619, -2.7756,
          -2.6097, -0.8477, 14.0103, -2.4926, -3.1346, -3.0548, -2.6245,
          -2.6824]],

        [[-3.0481,  0.2810, -1.4473, -1.8614, -3.3435, -2.6207, -5.5615,
          15.5710, -2.8886, -2.0847, -3.2127,  0.4409, -2.4811, -2.2347,
          -3.1258,  2.3124, -3.0728, -3.4695, -2.5387, -2.7836, -2.3338,
          -2.6220,  2.1170,  1.8315, -2.4378,  2.5099, -2.3928, -0.9716,
          -2.1889]],

        [[-2.3486,  2.1898, -0.0518, -1.6043, -4.2200, -1.6043, -3.1870,
          16.6410, -2.0038, -1.8146, -4.4528, -5.3590, -1.9544, -1.7610,
          -1.7343, -3.0047,  4.0441, -5.1262, -1.9987, -0.3312, -1.8511,
          -2.2385,  1.3912, -2.0107, -1.8433,  2.5781, -1.7077, -0.8625,
          -1.4298]],

        [[-1.1249,  2.6187,  3.2088, -1.2680, -1.8667, -1.3058,  0.2252,
           2.6478, -1.0426, -1.0876, -3.6653, -7.0211, -0.8896, -1.3647,
          -3.9822, -3.6295, 15.3443, -0.4771, -0.9410,  0.4940, -1.5341,
          -1.3913, -6.1356,  0.1833, -1.1524,  0.3265, -1.0332, -1.8967,
          -1.1357]],

        [[-0.6864, 18.7087,  3.2266, -0.6188, -2.0001, -0.5869,  3.7535,
          -0.3294, -1.0357, -0.1939, -8.1245, -7.4850, -0.6122, -0.7862,
          -1.3038,  4.1259,  2.6617,  2.5700, -0.1909, -1.1126, -0.6744,
          -0.9144, -2.6198,  0.8000, -0.7998, -3.4453, -0.3744, -6.2346,
          -0.7107]]])
predict= tensor([[[19]],

        [[23]],

        [[ 7]],

        [[ 7]],

        [[16]],

        [[ 1]]])
king -> queen
enc_input= tensor([[[0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) dec_input= tensor([[[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) n_hidden= 128
output.data= tensor([[[-2.0689, -8.7098, -4.8898, -1.6018,  2.4492, -1.4456, -2.5575,
           1.2529, -1.5553, -1.4590,  0.8418,  1.3503, -1.8665, -1.5689,
          -0.1531, -1.7443, -1.7615, -6.4119, -1.6575, -1.2020, -2.1042,
          -1.6743, -1.8010, -0.4804, -1.3815, 15.5830, -1.6576,  0.2545,
          -1.9206]],

        [[-1.8489, -7.2973,  1.5389, -1.4012, -1.7340, -1.4996, -0.6075,
          -4.5550, -2.0124, -1.3346, 15.1152,  1.9257, -1.5385, -1.3881,
           0.6504, -5.3116,  0.2489,  2.4873, -1.6146,  0.9236, -1.7539,
          -1.6810,  2.1537,  1.2465, -1.5198, -2.3739, -1.1541, -0.0671,
          -1.3845]],

        [[-1.7372, -9.3584,  2.6335, -1.4762,  2.5457, -1.0611, -4.0521,
           0.1155, -1.6446, -1.7453,  1.8787, 15.4588, -1.1961, -1.3059,
           0.2514,  3.1567, -8.6701, -0.8013, -1.5303, -0.6646, -1.6321,
          -1.8999, -0.3239,  0.4056, -1.6625,  0.3260, -1.5085,  3.0456,
          -1.6100]],

        [[-1.2088,  1.6715,  0.6705, -0.9774, -7.1186, -1.0423, -0.8351,
           3.2168, -1.2495, -1.0304,  1.5145,  0.5656, -1.1819, -1.0560,
          -1.5316, -1.0357, -6.6219, -1.4356, -1.3063,  0.9956, -1.1131,
          -1.3230, 15.0737, -0.2129, -1.1119, -0.9749, -1.0831,  0.7700,
          -1.3064]],

        [[-1.6763, -5.5338,  2.3379, -0.7959, -1.2500, -1.2617, -5.5603,
          15.5783, -1.2844, -1.3196, -2.3954,  1.8382, -1.3597, -0.8014,
          -3.4878, -1.6384,  3.2091, -6.1552, -1.2064,  0.5083, -0.8395,
          -1.4700, -2.6530,  0.3554, -1.0272,  0.8245, -1.1982,  2.1557,
          -0.9269]],

        [[-0.7122, 17.1544,  3.1363, -0.6060, -3.1694, -0.4530,  2.9015,
           1.3534, -0.7256, -0.1687, -4.6771, -6.5512, -0.6577, -0.5677,
          -1.7814,  0.7493,  1.3163, -0.4522, -0.4733, -0.3166, -0.8819,
          -0.8310,  0.9430,  1.2143, -0.3004, -2.3155, -0.3772, -7.6786,
          -0.6709]]])
predict= tensor([[[25]],

        [[10]],

        [[11]],

        [[22]],

        [[ 7]],

        [[ 1]]])
black -> white
enc_input= tensor([[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) dec_input= tensor([[[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
          0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]) n_hidden= 128
output.data= tensor([[[-2.2227,  1.1039,  0.9361, -1.7916,  0.9939, -1.8225, 14.8655,
          -6.5431, -2.2393, -2.3357, -1.2254, -2.7620, -2.2117, -2.2088,
           2.6410, -2.9425, -0.4038,  0.8187, -1.6252,  1.9952, -1.7778,
          -2.2960, -2.2505, -0.8751, -1.6060,  0.0786, -2.1974, -7.0257,
          -2.2020]],

        [[-1.6809,  2.2360, -1.8729, -1.2541, -2.6357, -1.6801,  0.3821,
          -2.6765, -1.3480, -1.4661, -2.2543, -2.3800, -1.3111, -1.1596,
           1.9145,  2.8206,  0.6113, 16.3379, -1.1935, -0.3985, -1.2032,
          -1.4498, -2.1519,  1.9817, -1.3006, -5.3040, -1.1795, -4.1014,
          -1.0693]],

        [[-3.2444, -0.5409, -5.0188, -2.8395, -0.0612, -2.9714, -1.1149,
           2.3132, -2.9405, -2.2777, -6.1815, -4.3277, -2.7849, -2.7514,
          -0.9576,  2.2393, -0.0854, -2.6481, -2.7784, -1.7263, -3.5978,
          -3.3209, -1.3265, -0.5880, -3.0756, 14.5987, -2.7313, -1.5224,
          -3.0121]],

        [[-2.4068, -0.7567,  2.4065, -2.1517, -3.5193, -2.1153, -1.1563,
           2.7205, -2.1444, -2.0500,  0.1838, -7.6772, -2.0536, -2.5851,
          -3.0046, -5.9824, 14.7826,  1.0193, -1.8719,  0.4687, -2.2422,
          -2.4837, -2.6767, -0.7756, -2.2014,  1.1184, -2.0253,  0.7376,
          -2.3218]],

        [[-2.4552,  1.6956, 15.1841, -2.6467,  0.6999, -2.0372,  0.4377,
          -1.0396, -2.6377, -2.2472, -4.1860, -0.9231, -2.2536, -2.4956,
          -3.8611,  1.2366,  1.9323, -1.3519, -2.2468, -0.7968, -3.0316,
          -2.3819, -5.4405, -1.0079, -2.5828, -1.3621, -2.2209, -2.9338,
          -2.3236]],

        [[-0.8805, 18.8412,  3.7769, -0.8079, -3.3419, -0.2910,  3.2971,
           1.4113, -0.8959, -0.4190, -8.3294, -7.0438, -0.9324, -0.9681,
          -1.5665,  3.6978, -0.3938,  3.2983, -0.7220, -0.8588, -0.9130,
          -1.1669, -0.0783,  0.1662, -0.8183, -3.6403, -0.5167, -5.5895,
          -0.7154]]])
predict= tensor([[[ 6]],

        [[17]],

        [[25]],

        [[16]],

        [[ 2]],

        [[ 1]]])
up -> down


The code comes from

https://github.com/wmathor/nlp-tutorial/blob/master/4-1.Seq2Seq/Seq2Seq_Torch.ipynb


That concludes this write-up of the PyCharm/PyTorch installation errors and the pip 19.3 to 20.2.4 upgrade detour; hopefully it helps other developers.


