【MetaLearning】Basic Usage of the PyTorch Meta-Learning Library higher

2023-11-22 05:04


Contents

  • 【MetaLearning】Basic Usage of the PyTorch Meta-Learning Library higher
    • 1. Introduction
    • 2. Toy Example
    • Reference

1. Introduction

higher.innerloop_ctx is the context manager provided by the higher library for creating the context of an inner loop. Inner loops are typical of meta-learning scenarios, where additional operations are performed inside the loop that updates the model's parameters.

This context manager takes the following parameters (see the official library documentation for details):

higher.innerloop_ctx(model, opt, device=None, copy_initial_weights=True, override=None, track_higher_grads=True)
  • model is the model whose parameters the inner loop updates, typically your meta-model.
  • opt is the optimizer you would normally use to update the model's parameters.
  • device optionally specifies the device on which the patched copy of the model is created.
  • copy_initial_weights is a boolean that controls how the initial weights are treated when the context is created. If True, the model's initial weights are copied and detached from the computation graph, so every inner loop starts from the same initial weights without affecting them. If False, the initial weights are cloned but not detached, so the inner loop remains connected to the original model's parameters (this distinction is discussed in detail below).
  • override is a dict such as override={'lr': lr_tensor, 'momentum': momentum_tensor}, used to override the optimizer's hyperparameters during the inner loop; in this example lr_tensor and momentum_tensor are tensors holding the learning rate and momentum to use, and since they are tensors they can themselves receive gradients (see the short sketch after this list).
  • track_higher_grads is a boolean that controls whether higher-order gradients are tracked. If True, the gradients computed in the inner loop are tracked so that higher-order gradients can be computed; if False, they are not.
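As a quick illustration of override, here is a minimal sketch (the model, data, and loss here are illustrative placeholders, not from the original post) that makes the inner-loop learning rate itself differentiable:

import torch
import torch.nn.functional as F
import higher

model = torch.nn.Linear(4, 1)                        # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.1)    # inner-loop optimizer
lr_tensor = torch.tensor([0.1], requires_grad=True)  # learnable inner-loop learning rate
xs, ys = torch.randn(8, 4), torch.randn(8, 1)        # dummy data

with higher.innerloop_ctx(model, opt, override={'lr': lr_tensor}) as (fmodel, diffopt):
    diffopt.step(F.mse_loss(fmodel(xs), ys))  # one differentiable inner step
    meta_loss = F.mse_loss(fmodel(xs), ys)    # loss after the update
    # gradient of the post-update loss with respect to the learning rate itself
    print(torch.autograd.grad(meta_loss, lr_tensor))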

with语句块中,通过(fmodel, diffopt)获取内部循环的上下文。fmodel表示内部循环中的模型,diffopt表示内部循环中的优化器,在这个上下文中,你可以执行内部循环的计算和参数更新。

Below is a basic example demonstrating how to use higher.innerloop_ctx; working with the higher library means getting used to the following shift.

From the usual PyTorch pattern:

model = MyModel()
opt = torch.optim.Adam(model.parameters())

for xs, ys in data:
    opt.zero_grad()
    logits = model(xs)
    loss = loss_function(logits, ys)
    loss.backward()
    opt.step()

to:

model = MyModel()
opt = torch.optim.Adam(model.parameters())

with higher.innerloop_ctx(model, opt) as (fmodel, diffopt):
    for xs, ys in data:
        logits = fmodel(xs)  # modified `params` can also be passed as a kwarg
        loss = loss_function(logits, ys)  # no need to call loss.backward()
        diffopt.step(loss)  # note that `step` must take `loss` as an argument!
                            # this one call plays the role of loss.backward() + opt.step()

    # At the end of your inner loop you can obtain these e.g. ...
    grad_of_grads = torch.autograd.grad(
        meta_loss_fn(fmodel.parameters()), fmodel.parameters(time=0))

The difference between ordinary training and calling diffopt.step to update fmodel is that fmodel does not update its parameters in place the way opt.step() does in the original version. Instead, each call to diffopt.step creates a new version of the parameters: fmodel uses the new version at the next step, but all previous versions are preserved.

How does this work? For example, fmodel starts iterating from fmodel.parameters(time=0) (where time=0 denotes the 0th iteration). After we call diffopt.step N times, we can access fmodel.parameters(time=i) for any i from 1 to N, and we can still access fmodel.parameters(time=0), which is exactly the same as it was before the iterations. Why is that?

Because the creation of fmodel depends on the copy_initial_weights argument: if copy_initial_weights=True, then fmodel.parameters(time=0) is clone'd from the original model and detach'ed (i.e., cloned from the original model and then cut off from its computation graph); if copy_initial_weights=False, the parameters are only clone'd, not detach'ed.
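Here is a minimal sketch to verify both behaviours (the tiny linear model and random data are placeholders, not from the original post): after two inner steps all three parameter versions remain accessible, and whether a meta-gradient reaches the original model depends on copy_initial_weights.

import torch
import higher

model = torch.nn.Linear(1, 1, bias=False)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 1), torch.randn(4, 1)

with higher.innerloop_ctx(model, opt, copy_initial_weights=False) as (fmodel, diffopt):
    for _ in range(2):  # two inner steps -> parameter versions time=0, 1, 2
        diffopt.step(((fmodel(x) - y) ** 2).mean())
    for t in range(3):  # every intermediate version is still accessible
        print(t, list(fmodel.parameters(time=t)))
    # an arbitrary meta-loss on the final weight; backward() reaches the original
    # model's weights only because copy_initial_weights=False (clone'd, not detach'ed)
    (list(fmodel.parameters())[0] ** 2).sum().backward()

print(list(model.parameters())[0].grad)  # a tensor here; it would stay None with copy_initial_weights=True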

To make this easier to follow, here is the original explanation (from the Stack Overflow answer listed in the References):

I.e. fmodel starts with only fmodel.parameters(time=0) available, but after you called diffopt.step N times you can ask fmodel to give you fmodel.parameters(time=i) for any i up to N inclusive. Notice that fmodel.parameters(time=0) doesn’t change in this process at all, just every time fmodel is applied to some input it will use the latest version of parameters it currently has.

Now, what exactly is fmodel.parameters(time=0)? It is created here and depends on copy_initial_weights. If copy_initial_weights==True then fmodel.parameters(time=0) are clone’d and detach’ed parameters of model. Otherwise they are only clone’d, but not detach’ed!

That means that when we do meta-optimization step, the original model’s parameters will actually accumulate gradients if and only if copy_initial_weights==False. And in MAML we want to optimize model’s starting weights so we actually do need to get gradients from meta-optimization step.
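To make the last point concrete, here is a minimal MAML-style outer-loop sketch (tasks, loss_fn, and the learning rates are illustrative placeholders, not from the original post): copy_initial_weights=False is what lets each task's meta-loss send gradients back into the shared initial weights.

import torch
import higher

def maml_outer_loop(model, tasks, loss_fn, inner_lr=0.1, meta_lr=1e-3, inner_steps=5):
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    for (sx, sy), (qx, qy) in tasks:  # each task: (support set, query set)
        meta_opt.zero_grad()
        inner_opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        # copy_initial_weights=False: the meta-gradient must reach the shared initial weights
        with higher.innerloop_ctx(model, inner_opt, copy_initial_weights=False) as (fmodel, diffopt):
            for _ in range(inner_steps):
                diffopt.step(loss_fn(fmodel(sx), sy))  # differentiable adaptation on support data
            loss_fn(fmodel(qx), qy).backward()         # meta-loss on query data; grads land in model.parameters()
        meta_opt.step()  # update the shared initialization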

2. Toy Example

import torch
import torch.nn as nn
import torch.optim as optim
import higher
import numpy as np

np.random.seed(1)
torch.manual_seed(3)
N = 100
actual_multiplier = 3.5
meta_lr = 0.00001
loops = 5 # how many iterations in the inner loop we want to do

x = torch.tensor(np.random.random((N,1)), dtype=torch.float64) # features for inner training loop
y = x * actual_multiplier # target for inner training loop
model = nn.Linear(1, 1, bias=False).double() # simplest possible model - multiply input x by weight w without bias
meta_opt = optim.SGD(model.parameters(), lr=meta_lr, momentum=0.)

def run_inner_loop_once(model, verbose, copy_initial_weights):
    lr_tensor = torch.tensor([0.3], requires_grad=True)
    momentum_tensor = torch.tensor([0.5], requires_grad=True)
    opt = optim.SGD(model.parameters(), lr=0.3, momentum=0.5)
    with higher.innerloop_ctx(model, opt, copy_initial_weights=copy_initial_weights,
                              override={'lr': lr_tensor, 'momentum': momentum_tensor}) as (fmodel, diffopt):
        for j in range(loops):
            if verbose:
                print('Starting inner loop step j=={0}'.format(j))
                print('    Representation of fmodel.parameters(time={0}): {1}'.format(j, str(list(fmodel.parameters(time=j)))))
                print('    Notice that fmodel.parameters() is same as fmodel.parameters(time={0}): {1}'.format(j, (list(fmodel.parameters())[0] is list(fmodel.parameters(time=j))[0])))
            out = fmodel(x)
            if verbose:
                print('    Notice how `out` is `x` multiplied by the latest version of weight: {0:.4} * {1:.4} == {2:.4}'.format(x[0,0].item(), list(fmodel.parameters())[0].item(), out[0].item()))
            loss = ((out - y)**2).mean()
            diffopt.step(loss)

        if verbose:
            # after all inner training let's see all steps' parameter tensors
            print()
            print("Let's print all intermediate parameters versions after inner loop is done:")
            for j in range(loops+1):
                print('    For j=={0} parameter is: {1}'.format(j, str(list(fmodel.parameters(time=j)))))
            print()

        # let's imagine now that our meta-learning optimization is trying to check
        # how far we got in the end from the actual_multiplier
        weight_learned_after_full_inner_loop = list(fmodel.parameters())[0]
        meta_loss = (weight_learned_after_full_inner_loop - actual_multiplier)**2
        print('  Final meta-loss: {0}'.format(meta_loss.item()))
        meta_loss.backward() # will only propagate gradient to original model parameter's `grad` if copy_initial_weights=False
        if verbose:
            print('  Gradient of final loss we got for lr and momentum: {0} and {1}'.format(lr_tensor.grad, momentum_tensor.grad))
            print('  If you change number of iterations "loops" to much larger number final loss will be stable and the values above will be smaller')
        return meta_loss.item()

print('=================== Run Inner Loop First Time (copy_initial_weights=True) =================\n')
meta_loss_val1 = run_inner_loop_once(model, verbose=True, copy_initial_weights=True)
print("\nLet's see if we got any gradient for initial model parameters: {0}\n".format(list(model.parameters())[0].grad))

print('=================== Run Inner Loop Second Time (copy_initial_weights=False) =================\n')
meta_loss_val2 = run_inner_loop_once(model, verbose=False, copy_initial_weights=False)
print("\nLet's see if we got any gradient for initial model parameters: {0}\n".format(list(model.parameters())[0].grad))

print('=================== Run Inner Loop Third Time (copy_initial_weights=False) =================\n')
final_meta_gradient = list(model.parameters())[0].grad.item()
# Now let's double-check `higher` library is actually doing what it promised to do, not just giving us
# a bunch of hand-wavy statements and difficult to read code.
# We will do a simple SGD step using meta_opt changing initial weight for the training and see how meta loss changed
meta_opt.step()
meta_opt.zero_grad()
meta_step = - meta_lr * final_meta_gradient # how much meta_opt actually shifted initial weight value
# before we run the inner loop a third time, we update the meta parameter first
meta_loss_val3 = run_inner_loop_once(model, verbose=False, copy_initial_weights=False)

meta_loss_gradient_approximation = (meta_loss_val3 - meta_loss_val2) / meta_step

print()
print('Side-by-side meta_loss_gradient_approximation and gradient computed by `higher` lib: {0:.4} VS {1:.4}'.format(meta_loss_gradient_approximation, final_meta_gradient))

The output is as follows:

=================== Run Inner Loop First Time (copy_initial_weights=True) =================

Starting inner loop step j==0
    Representation of fmodel.parameters(time=0): [tensor([[-0.9915]], dtype=torch.float64, requires_grad=True)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=0): True
    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * -0.9915 == -0.4135
Starting inner loop step j==1
    Representation of fmodel.parameters(time=1): [tensor([[-0.1217]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=1): True
    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * -0.1217 == -0.05075
Starting inner loop step j==2
    Representation of fmodel.parameters(time=2): [tensor([[1.0145]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=2): True
    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * 1.015 == 0.4231
Starting inner loop step j==3
    Representation of fmodel.parameters(time=3): [tensor([[2.0640]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=3): True
    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * 2.064 == 0.8607
Starting inner loop step j==4
    Representation of fmodel.parameters(time=4): [tensor([[2.8668]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=4): True
    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * 2.867 == 1.196

Let's print all intermediate parameters versions after inner loop is done:
    For j==0 parameter is: [tensor([[-0.9915]], dtype=torch.float64, requires_grad=True)]
    For j==1 parameter is: [tensor([[-0.1217]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    For j==2 parameter is: [tensor([[1.0145]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    For j==3 parameter is: [tensor([[2.0640]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    For j==4 parameter is: [tensor([[2.8668]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    For j==5 parameter is: [tensor([[3.3908]], dtype=torch.float64, grad_fn=<AddBackward0>)]

  Final meta-loss: 0.011927987982895929
  Gradient of final loss we got for lr and momentum: tensor([-1.6295]) and tensor([-0.9496])
  If you change number of iterations "loops" to much larger number final loss will be stable and the values above will be smaller

Let's see if we got any gradient for initial model parameters: None

=================== Run Inner Loop Second Time (copy_initial_weights=False) =================

  Final meta-loss: 0.011927987982895929

Let's see if we got any gradient for initial model parameters: tensor([[-0.0053]], dtype=torch.float64)

=================== Run Inner Loop Third Time (copy_initial_weights=False) =================

  Final meta-loss: 0.01192798770078706

Side-by-side meta_loss_gradient_approximation and gradient computed by `higher` lib: -0.005311 VS -0.005311

Reference

Paper: Generalized Inner Loop Meta-Learning
What does the copy_initial_weights documentation mean in the higher library for Pytorch?
