This article explains how to compute the floating-point operations (FLOPs) and parameter count of a neural network model, and we hope it offers a useful reference for developers facing this problem.
Computing the FLOPs and parameter count of a neural network model
- Several methods
    if __name__ == "__main__":
        import torch
        from torchinfo import summary
        from thop import profile
        from torchstat import stat
        from ptflops import get_model_complexity_info

        in_size = (1, 160, 160)            # (C, H, W) of a single input
        model = ILF_Net(1, 48)             # the author's model; any nn.Module works here
        x = torch.randn(1, 1, 160, 160)    # a batch of one input tensor

        # Method 1 was not shown in the original; reconstructed here from the
        # torchinfo import and the numbering, which starts at method 2.
        print(" Method 1: torchinfo : \n")
        summary(model, input_size=(1, 1, 160, 160))
        print("\n\n =========================================\n\n")

        print(" Method 2: torchstat : \n")
        stat(model.to('cpu'), (1, 160, 160))
        print("\n\n =========================================\n\n")

        print(" Method 3: ptflops : \n")
        flops, params = get_model_complexity_info(model, (1, 160, 160), as_strings=True,
                                                  print_per_layer_stat=True, verbose=True)
        print('Flops: ', flops)
        print('Params: ', params)
        print("\n\n =========================================\n\n")

        print(" Method 4: thop : \n")
        # Note: thop's profile() reports multiply-accumulates (MACs);
        # FLOPs is roughly 2 x this value.
        flops, params = profile(model, (x,))
        print('flops: ', flops)
        print('params: ', params)
        print("\n\n =========================================\n\n")
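To see what these libraries are counting under the hood, the per-layer numbers can also be derived by hand. Below is a minimal sketch (hypothetical helper names, plain Python) of the standard formulas for a 2D convolution: the parameter count is one weight per kernel element per input/output channel pair plus a bias per output channel, and the MAC count multiplies that kernel work by the number of output positions.

```python
def conv2d_params(c_in, c_out, k, bias=True):
    """Parameter count of a k x k Conv2d: weights plus optional biases."""
    # weights: c_out * c_in * k * k, plus one bias term per output channel
    return c_out * c_in * k * k + (c_out if bias else 0)

def conv2d_macs(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulate count: each output element costs c_in * k * k MACs."""
    return c_out * h_out * w_out * c_in * k * k

# Example matching the script above: a 3x3 conv from 1 channel to 48
# channels on a 160x160 feature map (same-padded, stride 1).
print(conv2d_params(1, 48, 3))           # 48*1*3*3 + 48 = 480 parameters
print(conv2d_macs(1, 48, 3, 160, 160))   # 48*160*160*1*3*3 = 11059200 MACs
```

Summing these per-layer counts over a whole model reproduces the totals reported by thop and ptflops (up to each tool's convention of whether a MAC counts as one or two FLOPs).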
That concludes this article on computing the FLOPs and parameter count of neural network models; we hope the material is helpful to fellow programmers.