Test Program for Mainstream Large Language Models: Exporting the Operator List

2024-05-03 13:04

This article presents a test program for mainstream large language models that exports the list of ATen operators each model dispatches, for reference when solving related engineering problems.


  • 1. References
  • 2. Download Links
  • 3. Test Program
  • 4. Operator List

How many operators does it take to cover the mainstream large language models? To answer that, the dump method based on the __torch_dispatch__ mechanism (see the references) is used to dump each model's operators and their argument lists. To stay within device memory, each model is configured with a single transformer layer.
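The actual dump implementation lives in the referenced torch_hook module; as a rough illustration of the underlying idea only, a minimal operator collector built on PyTorch's public TorchDispatchMode might look like this (OpCollector is a hypothetical name for this sketch, not the torch_hook API):

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class OpCollector(TorchDispatchMode):
    """Record the name of every aten overload dispatched inside the context."""
    def __init__(self):
        super().__init__()
        self.ops = set()

    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        # func is an OpOverload; str(func) looks like "aten.mm.default"
        self.ops.add(str(func))
        return func(*args, **(kwargs or {}))

collector = OpCollector()
with collector:
    a = torch.randn(4, 4, requires_grad=True)
    b = torch.randn(4, 4)
    # Backward ops are captured too, since autograd propagates the mode
    (a @ b).mean().backward()

print(sorted(collector.ops))
```

Because the mode sees both forward and backward dispatches, running each model's forward plus a dummy `loss.backward()` (as the test program below does) covers training-time operators as well.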

1. References

  • Dump method based on the __torch_dispatch__ mechanism
  • Serializing and deserializing function arguments in Python for issue reproduction
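The second reference covers serializing an operator's arguments so a failing call can be replayed in isolation. A bare-bones sketch of that idea follows; the record layout and the helper names (`save_call`, `replay_call`) are assumptions for illustration, not the referenced implementation:

```python
import pickle
import torch

def save_call(path, op_name, args, kwargs):
    # Persist the op name plus its concrete arguments for later replay
    with open(path, "wb") as f:
        pickle.dump({"op": op_name, "args": args, "kwargs": kwargs}, f)

def replay_call(path):
    # Re-resolve the op from torch.ops and invoke it with the saved arguments
    with open(path, "rb") as f:
        rec = pickle.load(f)
    ns, name, overload = rec["op"].split(".")
    op = getattr(getattr(getattr(torch.ops, ns), name), overload)
    return op(*rec["args"], **rec["kwargs"])

save_call("repro.pkl", "aten.mm.default", (torch.ones(2, 3), torch.ones(3, 2)), {})
print(replay_call("repro.pkl"))
```

Tensors pickle cleanly, so a single .pkl is enough to reproduce one operator call on another machine or backend.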

2. Download Links

https://huggingface.co/google-bert/bert-base-chinese
https://modelscope.cn/models/baichuan-inc/baichuan-7B/summary
https://modelscope.cn/models/baichuan-inc/Baichuan2-13B-Chat/files
https://modelscope.cn/models/ZhipuAI/ChatGLM-6B/files
https://modelscope.cn/models/ZhipuAI/chatglm2-6b/files
https://modelscope.cn/models/ZhipuAI/chatglm3-6b/files
https://modelscope.cn/models/deepseek-ai/deepseek-moe-16b-chat/files
https://modelscope.cn/models/deepseek-ai/deepseek-coder-33b-base/files
https://modelscope.cn/models/AI-ModelScope/falcon-7b-instruct/files
https://modelscope.cn/models/AI-ModelScope/gpt2/files
https://modelscope.cn/models/AI-ModelScope/gemma-7b/files
https://www.modelscope.cn/models/colossalai/grok-1-pytorch/files
https://modelscope.cn/models/CHUPer/internLM/files
https://huggingface.co/internlm/internlm2-20b/tree/main
https://modelscope.cn/models/skyline2006/llama-13b/files
https://modelscope.cn/models/Cookize/Llama-2-13B-chat/files
https://modelscope.cn/models/LLM-Research/Llama3-8B-Chinese-Chat/files
https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1/tree/main
https://huggingface.co/allenai/OLMo-7B/tree/main
https://huggingface.co/apple/OpenELM-3B/tree/main
https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/tree/main
https://modelscope.cn/models/qwen/Qwen-14B-Chat/files
https://modelscope.cn/models/qwen/Qwen1.5-7B/files
https://huggingface.co/google-t5/t5-base/tree/main
https://modelscope.cn/models/xverse/XVERSE-7B/files
https://modelscope.cn/models/01ai/Yi-34B/files
https://huggingface.co/IEITYuan/Yuan2-51B-hf/tree/main

3. Test Program

import warnings
warnings.filterwarnings("ignore")
import copy
import sys
import torch
import multiprocessing as mp
from tqdm import tqdm

op_mapping = {}

class llm_forward:
    def __init__(self, func):
        global op_mapping
        op_mapping[func.__name__] = func
        self.func = func

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

try:
    from torch_hook import TorchDumper, TorchDumpDispatchMode
except ImportError:
    # Fall back to no-op stubs so the script still runs without torch_hook
    class TorchDumpDispatchMode:
        pass

    class TorchDumper:
        def __init__(self, *args, **kwargs):
            pass
        def __enter__(self):
            pass
        def __exit__(self, exc_type, exc_val, exc_tb):
            pass

@llm_forward
def bert_base_chinese(use_half, device):
    from transformers import AutoModelForMaskedLM, BertConfig
    config = BertConfig.from_pretrained("bert_base_chinese/config.json")
    config.num_hidden_layers = 1
    model = AutoModelForMaskedLM.from_config(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, config.max_position_embeddings))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="bert_base_chinese.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def Baichuan2_13B_Chat(use_half, device):
    import sys
    sys.path.insert(0, "./Baichuan2_13B_Chat")
    from configuration_baichuan2 import BaichuanConfig
    from modeling_baichuan2 import BaichuanForCausalLM
    config = BaichuanConfig.from_pretrained("Baichuan2_13B_Chat/config.json")
    config.num_hidden_layers = 1
    model = BaichuanForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, config.model_max_length // 4))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="Baichuan2_13B_Chat.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def baichuan_7B(use_half, device):
    import sys
    import os
    sys.path.insert(0, os.path.join(os.getcwd(), "baichuan_7B"))
    from configuration_baichuan import BaiChuanConfig
    from modeling_baichuan import BaiChuanForCausalLM
    config = BaiChuanConfig.from_pretrained("baichuan_7B/config.json")
    config.num_hidden_layers = 1
    model = BaiChuanForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, config.max_position_embeddings // 4))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="baichuan_7B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def ChatGLM_6B(use_half, device):
    import sys
    sys.path.append("./ChatGLM_6B")
    from configuration_chatglm import ChatGLMConfig
    from modeling_chatglm import ChatGLMModel
    config = ChatGLMConfig.from_pretrained("ChatGLM_6B/config.json")
    config.num_layers = 1
    model = ChatGLMModel(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, config.max_sequence_length))
    input_tokens[:, 0] = config.bos_token_id
    input_tokens[:, 2] = config.mask_token_id
    input_tokens[:, -1] = config.eos_token_id
    with TorchDumper(TorchDumpDispatchMode, op_log_path="ChatGLM_6B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.last_hidden_state
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def ChatGLM2_6B(use_half, device):
    import sys
    sys.path.append("./ChatGLM2_6B")
    from configuration_chatglm import ChatGLMConfig
    from modeling_chatglm import ChatGLMModel
    config = ChatGLMConfig.from_pretrained("ChatGLM2_6B/config.json")
    config.num_layers = 1
    model = ChatGLMModel(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.padded_vocab_size, (1, config.seq_length // 10))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="ChatGLM2_6B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.last_hidden_state
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def ChatGLM3_6B(use_half, device):
    import sys
    sys.path.append("./ChatGLM3_6B")
    from configuration_chatglm import ChatGLMConfig
    from modeling_chatglm import ChatGLMModel
    config = ChatGLMConfig.from_pretrained("ChatGLM3_6B/config.json")
    config.num_layers = 1
    model = ChatGLMModel(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.padded_vocab_size, (1, config.seq_length // 4))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="ChatGLM3_6B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.last_hidden_state
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def deepseek_moe_16b_chat(use_half, device):
    import sys
    sys.path.append("./deepseek_moe_16b_chat")
    from configuration_deepseek import DeepseekConfig
    from modeling_deepseek import DeepseekForCausalLM
    config = DeepseekConfig.from_pretrained("deepseek_moe_16b_chat/config.json")
    config.num_hidden_layers = 1
    model = DeepseekForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, config.max_position_embeddings))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="deepseek_moe_16b_chat.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def deepseek_coder_33b_base(use_half, device):
    from transformers.models.llama import LlamaForCausalLM, LlamaConfig
    config = LlamaConfig.from_pretrained("deepseek_coder_33b_base/config.json")
    config.num_hidden_layers = 1
    model = LlamaForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, config.max_position_embeddings // 10))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="deepseek_coder_33b_base.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def falcon_7b_instruct(use_half, device):
    import sys
    sys.path.append("./falcon_7b_instruct")
    from configuration_RW import RWConfig
    from modelling_RW import RWForCausalLM
    config = RWConfig.from_pretrained("falcon_7b_instruct/config.json")
    config.n_layer = 1
    model = RWForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="falcon_7b_instruct.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def GPT2(use_half, device):
    from transformers import GPT2LMHeadModel, GPT2Config
    config = GPT2Config.from_pretrained("GPT2/config.json")
    config.n_layer = 1
    model = GPT2LMHeadModel(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="GPT2.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def gemma_7b(use_half, device):
    import sys
    sys.path.append("./gemma_7b")
    from config import GemmaConfig
    from model import GemmaForCausalLM
    config = GemmaConfig.from_pretrained("gemma_7b/config.json")
    config.num_hidden_layers = 1
    model = GemmaForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    max_seq_len = 512
    batch_size = 1
    prompt_tokens = torch.randint(0, config.vocab_size, (batch_size, max_seq_len)).to(device)
    temperature = 0.95
    top_p = 1.0
    top_k = 100
    # build KV caches
    kv_caches = []
    for _ in range(config.num_hidden_layers):
        size = (batch_size, max_seq_len, config.num_key_value_heads, config.head_dim)
        dtype = config.get_dtype()
        k_cache = torch.zeros(size=size, dtype=dtype).to(device)
        v_cache = torch.zeros(size=size, dtype=dtype).to(device)
        kv_caches.append((k_cache, v_cache))
    # prepare inputs
    input_token_ids_tensor = torch.full((batch_size, max_seq_len), 0, dtype=torch.int64)
    input_token_ids_tensor = input_token_ids_tensor.to(device)
    input_positions_tensor = torch.arange(0, max_seq_len, dtype=torch.int64).to(device)
    mask_tensor = torch.full((1, 1, max_seq_len, max_seq_len), -2.3819763e38).to(torch.float)
    mask_tensor = torch.triu(mask_tensor, diagonal=1).to(device)
    output_positions_tensor = torch.LongTensor([max_seq_len - 1]).to(device)
    temperatures_tensor = None if not temperature else torch.FloatTensor([temperature] * batch_size).to(device)
    top_ps_tensor = torch.FloatTensor([top_p] * batch_size).to(device)
    top_ks_tensor = torch.LongTensor([top_k] * batch_size).to(device)
    with TorchDumper(TorchDumpDispatchMode, op_log_path="gemma_7b.pkl"):
        output = model(prompt_tokens, input_positions_tensor, None, kv_caches, mask_tensor,
                       output_positions_tensor, temperatures_tensor, top_ps_tensor, top_ks_tensor)
        _, logits = output
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def grok1_pytorch(use_half, device):
    import sys
    sys.path.append("./grok1_pytorch")
    from configuration_grok1 import Grok1Config
    from modeling_grok1 import Grok1ModelForCausalLM
    config = Grok1Config.from_pretrained("grok1_pytorch/config.json")
    config.num_hidden_layers = 1
    config.num_experts = 1
    config.num_experts_per_tok = 1
    model = Grok1ModelForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="grok1_pytorch.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def internLM(use_half, device):
    import sys
    sys.path.append("./internLM")
    from configuration_internlm import InternLMConfig
    from modeling_internlm import InternLMForCausalLM
    config = InternLMConfig.from_pretrained("internLM/config.json")
    config.num_hidden_layers = 1
    model = InternLMForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="internLM.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def internlm2_20b(use_half, device):
    import sys
    sys.path.append("./internlm2_20b")
    from configuration_internlm2 import InternLM2Config
    from modeling_internlm2 import InternLM2ForCausalLM
    config = InternLM2Config.from_pretrained("internlm2_20b/config.json")
    config.num_hidden_layers = 1
    model = InternLM2ForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="internlm2_20b.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def llama_13b(use_half, device):
    from transformers.models.llama import LlamaForCausalLM, LlamaConfig
    config = LlamaConfig.from_pretrained("llama_13b/config.json")
    config.num_hidden_layers = 1
    model = LlamaForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, config.max_sequence_length))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="llama_13b.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def Llama2_13B_chat(use_half, device):
    from transformers.models.llama import LlamaForCausalLM, LlamaConfig
    config = LlamaConfig.from_pretrained("Llama2_13B_chat/config.json")
    config.num_hidden_layers = 1
    model = LlamaForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 128))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="Llama2_13B_chat.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def Llama3_8B_Chinese_Chat(use_half, device):
    from transformers.models.llama import LlamaForCausalLM, LlamaConfig
    config = LlamaConfig.from_pretrained("Llama3_8B_Chinese_Chat/config.json")
    config.num_hidden_layers = 1
    model = LlamaForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 128))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="Llama3_8B_Chinese_Chat.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def Mixtral_8x22B(use_half, device):
    import sys
    sys.path.append("./Mixtral_8x22B")
    from configuration_mixtral import MixtralConfig
    from modeling_mixtral import MixtralForCausalLM
    config = MixtralConfig.from_pretrained("Mixtral_8x22B/config.json")
    config.num_hidden_layers = 1
    model = MixtralForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="Mixtral_8x22B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def OLMo_7B(use_half, device):
    import sys
    sys.path.append("./OLMo_7B")
    from configuration_olmo import OLMoConfig
    from modeling_olmo import OLMoForCausalLM
    config = OLMoConfig.from_pretrained("OLMo_7B/config.json")
    config.n_layers = 1
    model = OLMoForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="OLMo_7B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def Phi3_mini_4k_instruct(use_half, device):
    import sys
    sys.path.append("./Phi3_mini_4k_instruct")
    from configuration_phi3 import Phi3Config
    from modeling_phi3 import Phi3ForCausalLM
    config = Phi3Config.from_pretrained("Phi3_mini_4k_instruct/config.json")
    config.num_hidden_layers = 1
    model = Phi3ForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="Phi3_mini_4k_instruct.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def OpenELM_3B(use_half, device):
    import sys
    sys.path.append("./OpenELM_3B")
    from configuration_openelm import OpenELMConfig
    from modeling_openelm import OpenELMForCausalLM
    config = OpenELMConfig.from_pretrained("OpenELM_3B/config.json")
    config.num_transformer_layers = 1
    model = OpenELMForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="OpenELM_3B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def Qwen_14B_Chat(use_half, device):
    import sys
    sys.path.append("./Qwen_14B_Chat")
    from configuration_qwen import QWenConfig
    from modeling_qwen import QWenLMHeadModel
    config = QWenConfig.from_pretrained("Qwen_14B_Chat/config.json")
    config.num_hidden_layers = 1
    model = QWenLMHeadModel(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="Qwen_14B_Chat.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def Qwen1_5_7B(use_half, device):
    import sys
    sys.path.append("./Qwen1_5_7B")
    from configuration_qwen2 import Qwen2Config
    from modeling_qwen2 import Qwen2ForCausalLM
    config = Qwen2Config.from_pretrained("Qwen1_5_7B/config.json")
    config.num_hidden_layers = 1
    model = Qwen2ForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="Qwen1_5_7B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def t5_base(use_half, device):
    import sys
    sys.path.append("./t5_base")
    from transformers import T5Config, T5ForConditionalGeneration
    config = T5Config.from_pretrained("t5_base/config.json")
    config.num_layers = 1
    config.max_new_tokens = 512
    model = T5ForConditionalGeneration(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, config.max_new_tokens))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="t5_base.pkl"):
        output = model.generate(input_tokens.to(device))
        # generate() does not return logits, so no backward pass here
        # logits = output
        # loss = logits.mean() - 1.0
        # loss.backward()

@llm_forward
def XVERSE_7B(use_half, device):
    import sys
    sys.path.append("./XVERSE_7B")
    from configuration_xverse import XverseConfig
    from modeling_xverse import XverseForCausalLM
    config = XverseConfig.from_pretrained("XVERSE_7B/config.json")
    config.num_hidden_layers = 1
    model = XverseForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 512))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="XVERSE_7B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def Yi_34B(use_half, device):
    from transformers.models.llama import LlamaForCausalLM, LlamaConfig
    config = LlamaConfig.from_pretrained("Yi_34B/config.json")
    config.num_hidden_layers = 1
    model = LlamaForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, config.max_position_embeddings // 10))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="Yi_34B.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

@llm_forward
def Yuan2_51B_hf(use_half, device):
    import sys
    sys.path.append("./Yuan2_51B_hf")
    from configuration_yuan import YuanConfig
    from yuan_hf_model import YuanForCausalLM
    config = YuanConfig.from_pretrained("Yuan2_51B_hf/config.json")
    config.num_hidden_layers = 1
    config.intermediate_size = 2048
    config.model_max_length = config.max_position_embeddings = 2
    model = YuanForCausalLM(config)
    if use_half:
        model = model.half()
    model.train().to(device)
    input_tokens = torch.randint(0, config.vocab_size, (1, 2))
    with TorchDumper(TorchDumpDispatchMode, op_log_path="Yuan2_51B_hf.pkl"):
        output = model(input_tokens.to(device))
        logits = output.logits
        loss = logits.mean() - 1.0
        loss.backward()

def main():
    global op_mapping
    device = "cuda"
    use_half = True
    pbar = tqdm(list(op_mapping.keys()))
    for name in pbar:
        torch.manual_seed(1)
        # Run each model in its own subprocess so GPU memory is fully
        # released when the process exits
        p = mp.Process(target=op_mapping[name], args=(use_half, device))
        p.start()
        p.join()
        torch.cuda.empty_cache()
        pbar.set_description("%s" % (name))

if __name__ == '__main__':
    main()
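The script above leaves one .pkl dump per model; the operator list in the next section is their union. The exact record layout inside the .pkl files is defined by torch_hook, so the following merge helper is a sketch under an assumed layout (each file holds an iterable of records whose first field is the aten op name); `merge_op_lists` is a hypothetical name:

```python
import glob
import pickle

def merge_op_lists(pattern="*.pkl"):
    # Union the per-model operator dumps into one sorted list.
    # Assumed record layout: first field of each record = aten op name.
    ops = set()
    for path in glob.glob(pattern):
        with open(path, "rb") as f:
            records = pickle.load(f)
        for rec in records:
            ops.add(rec[0] if isinstance(rec, (list, tuple)) else str(rec))
    return sorted(ops)

if __name__ == "__main__":
    for name in merge_op_lists():
        print(name)
```

Sorting the union gives a stable, diff-friendly list like the one below.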

4. Operator List

aten.abs.default
aten.addmm.default
aten.add.Tensor
aten.add_.Tensor
aten.alias.default
aten.all.default
aten.any.default
aten.arange.default
aten.arange.start
aten.arange.start_step
aten.argmax.default
aten.as_strided.default
aten.baddbmm.default
aten.bitwise_not.default
aten.bitwise_or.Tensor
aten.bmm.default
aten.cat.default
aten.clamp_min.default
aten.clamp.Tensor
aten.clone.default
aten._conj.default
aten.convolution_backward.default
aten.convolution.default
aten.copy_.default
aten.cos.default
aten.cumsum.default
aten.diagonal_copy.default
aten.div.Scalar
aten.div.Tensor
aten.div_.Tensor
aten.embedding.default
aten.embedding_dense_backward.default
aten.empty.memory_format
aten.empty.names
aten.eq.Scalar
aten.eq.Tensor
aten.expand.default
aten.fill_.Scalar
aten.fill_.Tensor
aten.full.default
aten.full_like.default
aten.gather.default
aten.gelu_backward.default
aten.gelu.default
aten.ge.Scalar
aten.ge.Tensor
aten.gt.Scalar
aten.gt.Tensor
aten.index_add_.default
aten.index_copy_.default
aten.index_put.default
aten.index_select.default
aten.index.Tensor
aten.isinf.default
aten.is_same_size.default
aten.le.Tensor
aten.lift_fresh.default
aten.linalg_vector_norm.default
aten._local_scalar_dense.default
aten.log.default
aten.logical_not.default
aten.lt.Scalar
aten.lt.Tensor
aten.masked_fill.Scalar
aten.masked_fill_.Scalar
aten.max.default
aten.maximum.default
aten.mean.default
aten.mean.dim
aten.minimum.default
aten.mm.default
aten.mul.Scalar
aten.mul.Tensor
aten.multinomial.default
aten.native_dropout_backward.default
aten.native_dropout.default
aten.native_layer_norm_backward.default
aten.native_layer_norm.default
aten.neg.default
aten.ne.Tensor
aten.new_empty.default
aten.new_empty_strided.default
aten.new_zeros.default
aten.nonzero.default
aten.ones.default
aten.ones_like.default
aten.permute.default
aten.pow.Scalar
aten.pow.Tensor_Scalar
aten.prod.dim_int
aten.reciprocal.default
aten.relu.default
aten.repeat.default
aten.rsqrt.default
aten.rsub.Scalar
aten.scalar_tensor.default
aten._scaled_dot_product_efficient_attention_backward.default
aten._scaled_dot_product_efficient_attention.default
aten.scatter.src
aten.scatter_.value
aten.select_backward.default
aten.select.int
aten.set_.source_Storage
aten.set_.source_Storage_storage_offset
aten.silu_backward.default
aten.silu.default
aten.sin.default
aten.slice_backward.default
aten.slice.Tensor
aten._softmax_backward_data.default
aten._softmax.default
aten.sort.default
aten.split.Tensor
aten.split_with_sizes.default
aten.squeeze.dim
aten.stack.default
aten.sub.Tensor
aten.sum.dim_IntList
aten.tanh_backward.default
aten.tanh.default
aten.t.default
aten._to_copy.default
aten.topk.default
aten.transpose.int
aten.tril.default
aten.tril_.default
aten.triu.default
aten.unbind.int
aten._unsafe_view.default
aten.unsqueeze.default
aten.unsqueeze_.default
aten.view_as_complex.default
aten.view_as_real.default
aten.view.default
aten.where.self
aten.zeros.default
aten.zeros_like.default
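Each entry above is an aten overload name of the form namespace.op.overload, which can be resolved back to a callable through torch.ops — handy, for example, when checking which of these operators a backend actually registers. A minimal sketch (`resolve_aten_op` is an illustrative helper, not a PyTorch API):

```python
import torch

def resolve_aten_op(op_name):
    # "aten.mm.default" -> torch.ops.aten.mm.default (an OpOverload)
    ns, name, overload = op_name.split(".")
    packet = getattr(getattr(torch.ops, ns), name)
    return getattr(packet, overload)

mm = resolve_aten_op("aten.mm.default")
print(mm(torch.ones(2, 3), torch.ones(3, 2)))
```

Looping this over the list and catching AttributeError is a quick way to spot entries a custom build does not expose.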
