This article presents my study notes and takeaways from the 7-Day Graph Neural Network Camp. I hope it serves as a useful reference for developers working through similar programming problems; if that sounds helpful, read on!
Course source, Baidu AI Studio: https://aistudio.baidu.com/aistudio/index
Day-1 Graph Learning
Part 1: What Is a Graph
The two basic elements of a graph: nodes and edges.
A graph is a unified language for describing complex entities and their relationships.
Common graphs: social networks, recommender systems, chemical molecular structures, and so on.
Part 2: What Is Graph Learning
Graph learning (Graph Learning) is a subfield of deep learning whose distinguishing feature is that the data being processed are graphs.
Difference from general deep learning: graph learning handles irregular data (trees, graphs) naturally, while still being able to process regular data (such as images).
Part 3: Applications of Graph Learning
Applications of graph learning can be grouped into node-level, edge-level, and graph-level tasks. The course introduced the following examples.
Node-level tasks: financial fraud detection (a classic node classification problem), 3D point-cloud object detection for autonomous driving.
Edge-level tasks: recommender systems (a classic link prediction problem).
Graph-level tasks: odor recognition (a classic graph classification problem), discovering the "universe".
Part 4: How Graph Learning Is Done
Graph walk-based algorithms: walk over the graph to obtain multiple node sequences, then train a Skip-Gram model on those sequences to learn node representations (covered in the next lesson).
Graph neural network algorithms: end-to-end models built on the message-passing mechanism.
Knowledge graph embedding algorithms: algorithms designed specifically for knowledge graphs.
Part 5: A Brief Introduction to the PGL Graph Learning Library
GitHub link: https://github.com/PaddlePaddle/PGL
API docs: https://pgl.readthedocs.io/en/latest/
Part 6: Getting Familiar with PGL
- Environment setup: !pip install pgl
- Use the following code to build a graph:
import pgl
from pgl import graph  # import the graph module from PGL
import paddle.fluid as fluid  # import the PaddlePaddle framework
import numpy as np

def build_graph():
    # Number of nodes in the graph; each node is represented by an integer id
    num_nodes = 10

    # Edge list of the graph
    edge_list = [(2, 0), (2, 1), (3, 1),
                 (4, 0), (5, 0),
                 (6, 0), (6, 4), (6, 5), (7, 0), (7, 1),
                 (7, 2), (7, 3), (8, 0), (9, 7)]

    # Randomly initialize node features with feature dimension d
    d = 16
    feature = np.random.randn(num_nodes, d).astype("float32")

    # Randomly assign a weight to each edge
    edge_feature = np.random.randn(len(edge_list), 1).astype("float32")

    # Create the graph object; it takes at most four inputs
    g = graph.Graph(num_nodes=num_nodes,
                    edges=edge_list,
                    node_feat={'feature': feature},
                    edge_feat={'edge_feature': edge_feature})
    return g

g = build_graph()
Next, let's print some basic information about the graph:
print('The graph has %d nodes in total' % g.num_nodes)
print('The graph has %d edges in total' % g.num_edges)
- Defining a graph model layer
We can define the following simple graph model layer: a GCN-like layer extended with edge-weight information.
# Define a simple model layer that propagates node features together with edge weights.
def model_layer(gw, nfeat, efeat, hidden_size, name, activation):
    '''
    gw: GraphWrapper, a graph data container used when defining the model;
        the real data is fed in later, at training time
    nfeat: node features
    efeat: edge weights
    hidden_size: hidden dimension of the layer
    activation: activation function to use
    '''
    # Define the send function
    def send_func(src_feat, dst_feat, edge_feat):
        # Send the source node feature, scaled by the edge weight, as the message
        return src_feat['h'] * edge_feat['e']

    # Define the recv function
    def recv_func(feat):
        # The destination node aggregates the incoming messages by sum
        return fluid.layers.sequence_pool(feat, pool_type='sum')

    # Trigger the message-passing mechanism
    msg = gw.send(send_func, nfeat_list=[('h', nfeat)], efeat_list=[('e', efeat)])
    output = gw.recv(msg, recv_func)
    output = fluid.layers.fc(output,
                             size=hidden_size,
                             bias_attr=False,
                             act=activation,
                             name=name)
    return output
- Model definition
Here we simply stack two of the layers defined above to build the final model.
class Model(object):
    def __init__(self, graph):
        """
        graph: the graph we built earlier
        """
        # Create a GraphWrapper graph data container, used when defining the model;
        # the real data is fed in later, at training time
        self.gw = pgl.graph_wrapper.GraphWrapper(
            name='graph',
            node_feat=graph.node_feat_info(),
            edge_feat=graph.edge_feat_info())
        # Same role as the GraphWrapper, here used as a container for node labels
        self.node_label = fluid.layers.data("node_label", shape=[None, 1],
                                            dtype="float32", append_batch_size=False)

    def build_model(self):
        # Stack two model_layer layers
        output = model_layer(self.gw,
                             self.gw.node_feat['feature'],
                             self.gw.edge_feat['edge_feature'],
                             hidden_size=8, name='layer_1', activation='relu')
        output = model_layer(self.gw, output,
                             self.gw.edge_feat['edge_feature'],
                             hidden_size=1, name='layer_2', activation=None)
        # For a binary classification task, the loss can be computed with this API
        loss = fluid.layers.sigmoid_cross_entropy_with_logits(x=output, label=self.node_label)
        # Average the loss
        loss = fluid.layers.mean(loss)
        # Compute the accuracy
        prob = fluid.layers.sigmoid(output)
        pred = fluid.layers.cast(prob > 0.5, dtype="float32")
        correct = fluid.layers.equal(pred, self.node_label)
        correct = fluid.layers.cast(correct, dtype="float32")
        acc = fluid.layers.reduce_mean(correct)
        return loss, acc
- Preparing for training
# Whether to run on GPU or CPU
use_cuda = False
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()

# Define the programs, i.e. Paddle Programs
startup_program = fluid.Program()  # initializes the model parameters
train_program = fluid.Program()    # main program for training: forward pass plus backward gradient computation
test_program = fluid.Program()     # program for testing: forward pass only

with fluid.program_guard(train_program, startup_program):
    model = Model(g)
    # Build the model and compute the loss
    loss, acc = model.build_model()
    # Use the Adam optimizer with learning rate 0.01
    adam = fluid.optimizer.Adam(learning_rate=0.01)
    adam.minimize(loss)  # compute gradients and run the backward pass

# Clone test_program from train_program; the only difference is that the clone
# needs no gradient computation or backward pass.
test_program = train_program.clone(for_test=True)

# Define an Executor on place (here the CPU) to run the programs
exe = fluid.Executor(place)
# Initialize the parameters
exe.run(startup_program)

# Fetch the real graph data
feed_dict = model.gw.to_feed(g)

# Fetch the real label data
# Since this is a node classification task, we can simply use 0/1 as node classes:
# yellow nodes are labeled 0, green nodes are labeled 1.
y = [0,1,1,1,0,0,0,1,0,1]
label = np.array(y, dtype="float32")
label = np.expand_dims(label, -1)
feed_dict['node_label'] = label
- Start training
for epoch in range(30):
    train_loss = exe.run(train_program,
                         feed=feed_dict,     # feed in the real training data
                         fetch_list=[loss],  # fetch the results we need
                         return_numpy=True)[0]
    print('Epoch %d | Loss: %f' % (epoch, train_loss))
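Once training finishes, the cloned test_program can be run the same way. Since our toy graph has no separate test split, the sketch below simply reuses feed_dict and fetches both loss and acc; this evaluation snippet is my own addition, not part of the course code:

# Evaluate with the forward-only program cloned above
test_loss, test_acc = exe.run(test_program,
                              feed=feed_dict,
                              fetch_list=[loss, acc],
                              return_numpy=True)
print('Eval | Loss: %f | Accuracy: %f' % (test_loss[0], test_acc[0]))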
Day-2 Graph Walk-Based Models
1. Generating a single DeepWalk walk sequence
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx  # networkx is a popular Python package for drawing complex graphs
import pgl
Building the graph
Before performing the DeepWalk walk, we need to build a graph network.
Building the graph requires the Graph class, whose implementation can be found in PGL/pgl/graph.py.
Here is a quick demonstration of how to build a graph network:
def build_graph():
    # Number of nodes; each node is represented by an integer from 0 to 9
    num_node = 10
    # Edges between nodes; each edge is a tuple (src, dst)
    edge_list = [(2, 0), (2, 1), (3, 1),
                 (4, 0), (0, 5), (6, 0), (6, 4), (5, 6), (7, 0), (1, 7),
                 (2, 7), (7, 3), (8, 0), (9, 7)]
    g = pgl.graph.Graph(num_nodes=num_node, edges=edge_list)
    return g

# Create a graph object that holds all the data of the graph network.
g = build_graph()

def display_graph(g):
    nx_G = nx.Graph()
    nx_G.add_nodes_from(range(g.num_nodes))
    nx_G.add_edges_from(g.edges)
    pos = nx.spring_layout(nx_G, iterations=50)
    nx.draw(nx_G, pos,
            with_labels=True,
            node_color=['y', 'y', 'g', 'g', 'g', 'y', 'y', 'g', 'y', 'g'],
            node_size=1000)
    plt.show()

display_graph(g)
DeepWalk sampling
DeepWalk selects the next neighboring node with equal probability and appends it to the path, until the maximum path length is reached or there is no next node to choose.
Therefore, to obtain one walk we need a graph, a starting node ID, and a walk depth walk_len.
def deepwalk(graph, start_node, walk_len):
    walk = [start_node]  # initialize the walk sequence
    for d in range(walk_len):  # sample up to the maximum length
        current_node = walk[-1]
        # graph.successor: get the successor neighbors of the current node
        successors = graph.successor(np.array([current_node]))
        print("Current node: %d" % current_node)
        print("Successor neighbors:", successors[0])
        succ = successors[0]
        if len(succ) == 0:
            break
        next_node = np.random.choice(succ, 1)
        walk.extend(next_node)
    return walk

walk = deepwalk(g, 2, 4)
print(walk)
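In practice, DeepWalk does not stop at a single walk: it typically starts several walks from every node and feeds the collected sequences to a Skip-Gram model, as mentioned on Day-1. Below is a small sketch of that outer loop; the function name and the num_walks parameter are my own, and for real use you would silence the prints inside deepwalk:

# Collect num_walks walks per start node; the resulting sequences then act
# as "sentences" for Skip-Gram training.
def generate_walks(graph, walk_len=5, num_walks=2):
    walks = []
    for _ in range(num_walks):
        for node in range(graph.num_nodes):
            walks.append(deepwalk(graph, node, walk_len))
    return walks

corpus = generate_walks(g)
print('Generated %d walk sequences' % len(corpus))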
Day-3 Graph Neural Network Models (I)
Today's class covered three parts: the GCN algorithm, the GAT algorithm, and the message passing mechanism.
Additional notes on GCN parameters
This part mainly helps you understand the parameter types involved in the message-passing mechanism.
Below is a simplified version of the GCN model that illustrates how the PGL framework implements message passing.
import paddle.fluid.layers as L

def gcn_layer(gw, feature, hidden_size, activation, name, norm=None):
    """
    Description: compute new node representations with a GCN layer
    Inputs:
        gw - a GraphWrapper object
        feature - node representations, (num_nodes, feature_size)
        hidden_size - hidden dimension of the GCN layer, int
        activation - activation function, str
        name - name of the GCN layer, str
        norm - normalization tensor, float32, (num_nodes,); None means no normalization
    Output: new node representations, (num_nodes, hidden_size)
    """
    # send function
    def send_func(src_feat, dst_feat, edge_feat):
        """
        Description: sends node information. The function name is arbitrary,
        but the parameter list is fixed.
        Inputs:
            src_feat - feature dict of the source nodes {name: (num_edges, feature_size)}
            dst_feat - feature dict of the destination nodes {name: (num_edges, feature_size)}
            edge_feat - feature dict of the edges (src, dst) {name: (num_edges, feature_size)}
        Output: a tensor or dict holding the messages to send,
            (num_edges, feature_size) or {name: (num_edges, feature_size)}
        """
        return src_feat["h"]  # simply send the source node representation as the message

    # send and recv work as a pair: the output of send is the input of recv

    # recv function
    def recv_func(msg):
        """
        Description: aggregates the received messages. The function name is arbitrary,
        but the parameter list is fixed.
        Output: new node representation tensor, (num_nodes, feature_size)
        """
        return L.sequence_pool(msg, pool_type='sum')  # sum the received messages

    ### The message-passing procedure

    # gw.send
    # Description: triggers the message function, sends the messages and returns them
    # Inputs:
    #     message_func - the user-defined message function
    #     nfeat_list - list [name] or tuple (name, tensor)
    #     efeat_list - list [name] or tuple (name, tensor)
    # Output: message dict {name: (num_edges, feature_size)}
    msg = gw.send(send_func, nfeat_list=[("h", feature)])

    # gw.recv
    # Description: triggers the reduce function, receives and processes the messages
    # Inputs:
    #     msg - the message dict produced by gw.send
    #     reduce_function - "sum" or a user-defined reduce function
    # Output: new node features, (num_nodes, feature_size)
    # If the reduce function just sums the messages, you can pass "sum" directly to
    # use the accelerated built-in; the statement below is then equivalent to
    #     output = gw.recv(msg, "sum")
    output = gw.recv(msg, recv_func)

    # Fully connected output layer with `activation` as the activation function
    output = L.fc(output, size=hidden_size, bias_attr=False, act=activation, name=name)
    return output
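The norm argument in the signature is unused in this simplified layer; in the full GCN it rescales features by the symmetric degree normalization. Below is a hedged sketch of how a caller might compute it, following PGL's GCN example rather than code shown in class, and assuming a Graph object g like the one built on Day-2 with numpy imported as np:

degree = g.indegree().astype("float32")  # in-degree of every node
degree = np.maximum(degree, 1.0)         # guard against isolated nodes
norm = np.power(degree, -0.5)            # symmetric normalization factor D^{-1/2}
# A full GCN layer scales the features by norm before send and scales the
# aggregated output by norm again after recv:
#     feature = feature * norm_tensor   # pre-scaling, norm reshaped to (num_nodes, 1)
#     ... send / recv as above ...
#     output = output * norm_tensor     # post-scaling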
Day-4 GraphSage Sampling in Practice
The complete PGL implementation of GraphSage is located in PGL/examples/graphsage/.
In this exercise we will implement a simple GraphSage sampling routine.
1. Building the graph
Before implementing GraphSage sampling, we need to build a graph network.
Building the graph requires the Graph class, whose implementation can be found in PGL/pgl/graph.py.
Below is a quick demonstration of how to build a graph network:
import random
import numpy as np
import pgl
import display  # a small plotting helper module shipped with the course project

def build_graph():
    # Number of nodes; each node is represented by an integer from 0 to 15
    num_node = 16
    # Edges between nodes; each edge is a tuple (src, dst)
    edge_list = [(2, 0), (1, 0), (3, 0),
                 (4, 0), (5, 0), (6, 1), (7, 1), (8, 2), (9, 2), (8, 7),
                 (10, 3), (4, 3), (11, 10), (11, 4), (12, 4),
                 (13, 5), (14, 5), (15, 5)]
    g = pgl.graph.Graph(num_nodes=num_node, edges=edge_list)
    return g

# Create a graph object that holds all the data of the graph network.
g = build_graph()
display.display_graph(g)
2. Implementing the GraphSage sampling function
The GraphSage authors proposed a sampling algorithm so that the model can be trained in mini-batches; the pseudocode is given in Appendix A of the paper.
1. Suppose we want to use the information of a center node's k-hop neighbors. During aggregation, information is passed from the k-th hop neighbors to the (k-1)-th hop, and so on down to the center node.
2. Sampling proceeds in exactly the opposite direction: to build the mini-batch for training round t, we start from the center node and sample Nt neighbors from its predecessor set into the sample set.
3. The sampled neighbors then serve as the new center nodes for round t-1 of sampling, and so on.
4. Finally, the sampled nodes and edges together form the subgraph.
def traverse(item):
    """Recursively iterate over a nested list / numpy array."""
    if isinstance(item, list) or isinstance(item, np.ndarray):
        for i in iter(item):
            for j in traverse(i):
                yield j
    else:
        yield item

def flat_node_and_edge(nodes):
    """
    Flatten a list of numpy arrays into a single deduplicated list.
    For example: [array([7, 8, 9]), array([11, 12]), array([13, 15])]
        --> [7, 8, 9, 11, 12, 13, 15]
    """
    nodes = list(set(traverse(nodes)))
    return nodes

def graphsage_sample(graph, start_nodes, sample_num):
    subgraph_edges = []
    # pre_nodes is a list of numpy arrays
    pre_nodes = graph.sample_predecessor(start_nodes, sample_num)
    # Recover the edges from the sampled predecessor nodes
    for dst_node, src_nodes in zip(start_nodes, pre_nodes):
        for node in src_nodes:
            subgraph_edges.append((node, dst_node))
    # flat_node_and_edge: flatten the list of numpy arrays into one node list
    subgraph_nodes = flat_node_and_edge(pre_nodes)
    return subgraph_nodes, subgraph_edges

seed = 458
np.random.seed(seed)
random.seed(seed)

start_nodes = [0]

layer1_nodes, layer1_edges = graphsage_sample(g, start_nodes, sample_num=3)
print('layer1_nodes: ', layer1_nodes)
print('layer1_edges: ', layer1_edges)
display.display_subgraph(g, {'orange': layer1_nodes}, {'orange': layer1_edges})

layer2_nodes, layer2_edges = graphsage_sample(g, layer1_nodes, sample_num=2)
print('layer2_nodes: ', layer2_nodes)
print('layer2_edges: ', layer2_edges)
display.display_subgraph(g, {'orange': layer1_nodes, 'Thistle': layer2_nodes}, {'orange': layer1_edges, 'Thistle': layer2_edges})
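Step 3 above said "and so on": chaining the per-layer calls is just a loop over the per-layer sample sizes. Here is a small wrapper sketch; the function name and return format are my own, not course code:

def multi_layer_sample(graph, start_nodes, sample_nums):
    """Run graphsage_sample once per layer, e.g. sample_nums=[3, 2] for two layers."""
    layers = []
    nodes = start_nodes
    for num in sample_nums:
        nodes, edges = graphsage_sample(graph, nodes, sample_num=num)
        layers.append((nodes, edges))
    return layers

for i, (nodes, edges) in enumerate(multi_layer_sample(g, [0], [3, 2]), 1):
    print('layer%d nodes: %s' % (i, nodes))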
Day-5 ERNIESage Code Walkthrough
This project provides runnable code for the ERNIESage model so that you can get a direct feel for its appeal; along the way it also explains some of the key parts of the ERNIESage code. Let's enjoy!
ERNIESage fits naturally into PGL's message-passing paradigm. PGL currently provides three versions of the ERNIESage model on GitHub:
ERNIESage v1: ERNIE acts on the nodes of the text graph;
ERNIESage v2: ERNIE acts on the edges of the text graph;
ERNIESage v3: ERNIE acts on the first-order neighbors and their connecting edges.
The walkthrough covers:
Data
Model
Training
# Pull the PGL code; since cloning from GitHub is slow, it has been fetched in advance
# !git clone https://github.com/PaddlePaddle/PGL
# !cd PGL/example/erniesage
# To run the code we first need to install the following dependencies
!pip install pgl
!pip install easydict
!python3 -m pip install --no-deps paddle-propeller
!pip install paddle-ernie
!pip uninstall -y colorlog
!export CUDA_VISIBLE_DEVICES=0
Data
The example dataset
example_data/link_predict/graph_data.txt - a simple input file with one "query \t answer" pair per line, usable as a quick running example. For a link prediction task, the edges of the graph are generally used directly as the training targets.
! head -n 3 example_data/link_predict/graph_data.txt
! wc -l example_data/link_predict/graph_data.txt
How to represent a text graph
1. Every text segment that appears becomes a node; for example, "黑缘粗角肖叶甲触角有多大?" is one node.
2. The two nodes on each line form an edge.
3. Each node's text segment is converted character by character into an id sequence, which serves as the node feature.
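To make this concrete, here is a minimal sketch of turning such a file into nodes, edges, and char-id features. The toy vocabulary built on the fly is my own simplification; the actual project does this in preprocessing/dump_graph.py using ERNIE's tokenizer.

import numpy as np

node_id, edges, vocab, max_len = {}, [], {}, 40

def to_char_ids(text):
    # map each character to a toy id (0 is reserved for padding), truncate/pad to max_len
    ids = [vocab.setdefault(ch, len(vocab) + 1) for ch in text[:max_len]]
    return ids + [0] * (max_len - len(ids))

with open("example_data/link_predict/graph_data.txt") as f:
    for line in f:
        query, answer = line.rstrip("\n").split("\t")
        for text in (query, answer):
            if text not in node_id:  # every distinct text segment is one node
                node_id[text] = len(node_id)
        edges.append((node_id[query], node_id[answer]))  # one edge per line

term_ids = np.array([to_char_ids(t) for t in node_id], dtype="int64")
print(len(node_id), "nodes,", len(edges), "edges, feature shape:", term_ids.shape)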
from preprocessing.dump_graph import dump_graph
from preprocessing.dump_graph import dump_node_feat
from preprocessing.dump_graph import download_ernie_model
from preprocessing.dump_graph import load_config
from pgl.graph_wrapper import BatchGraphWrapper
import propeller.paddle as propeller
import paddle.fluid as F
import paddle.fluid.layers as L
import numpy as np
from models.pretrain_model_loader import PretrainedModelLoader
from pgl.graph import MemmapGraph
from models.encoder import linear
from ernie import ErnieModel
np.random.seed(123)
config = load_config("./config/erniesage_link_predict.yaml")
# Produce a text graph from the raw QA data and store it under the workdir directory with graph.dump
dump_graph(config)
dump_node_feat(config)
# MemmapGraph loads a graph that was saved with PGL's graph.dump back into memory
graph = MemmapGraph("./workdir/")
# Take a look at the basic graph information
print("Number of nodes:", graph.num_nodes)
print("Edges:", graph.edges, graph.edges.shape)
# Take a look at the node features
print([("%s shape is %s" % (key, str(graph.node_feat[key].shape))) for key in graph.node_feat])
print(graph.node_feat)  # text converted to ids at character granularity; each text segment is one node, kept at length 40
# 1021 nodes, each with an id sequence of length 40
Model
Core flow of the ERNIESage V1 model:
ERNIE extracts node semantics -> GNN aggregation
# ERNIESage V1: ERNIE acts on the nodes
class ERNIESageV1Encoder():
    def __init__(self, config):
        self.config = config

    def __call__(self, graph_wrappers, inputs):
        # step 1. ERNIE extracts node semantics
        # input: the text id sequence of every node
        term_ids = graph_wrappers[0].node_feat["term_ids"]

        cls = L.fill_constant_batch_size_like(term_ids, [-1, 1], "int64",
                                              self.config.cls_id)  # cls [B, 1]
        term_ids = L.concat([cls, term_ids], 1)  # term_ids [B, S]
        # [CLS], id1, id2, id3 .. [SEP]

        ernie_model = ErnieModel(self.config.ernie_config)
        # take ERNIE's representation at the [CLS] position
        cls_feat, _ = ernie_model(term_ids)  # cls_feat [B, F]

        # step 2. GNN aggregation
        feature = graphsage_sum(cls_feat, graph_wrappers[0], self.config.hidden_size,
                                "v1_graphsage_sum", "leaky_relu")

        final_feats = [self.take_final_feature(feature, i, "v1_final_fc") for i in inputs]
        return final_feats

    def take_final_feature(self, feature, index, name):
        """Gather and project the final features."""
        feat = L.gather(feature, index, overwrite=False)
        feat = linear(feat, self.config.hidden_size, name)
        feat = L.l2_normalize(feat, axis=1)
        return feat

def graphsage_sum(feature, gw, hidden_size, name, act):
    # copy_send
    msg = gw.send(lambda src, dst, edge: src["h"], nfeat_list=[("h", feature)])
    # sum_recv
    neigh_feature = gw.recv(msg, lambda feat: L.sequence_pool(feat, pool_type="sum"))

    self_feature = linear(feature, hidden_size, name + "_l", act)
    neigh_feature = linear(neigh_feature, hidden_size, name + "_r", act)
    output = L.concat([self_feature, neigh_feature], axis=1)  # [B, 2H]
    output = L.l2_normalize(output, axis=1)
    return output

# Construct some random input data
feat_size = 40
feed_dict = {
    "num_nodes": np.array([4]),
    "num_edges": np.array([6]),
    "edges": np.array([[0, 1], [1, 0], [0, 2], [2, 0], [0, 3], [3, 0]]),
    "term_ids": np.random.randint(4, 10000, size=(4, feat_size)),
    "inputs": np.array([0]),
}

place = F.CUDAPlace(0)
exe = F.Executor(place)

# Model v1
erniesage_v1_encoder = ERNIESageV1Encoder(config)

main_prog, start_prog = F.Program(), F.Program()
with F.program_guard(main_prog, start_prog):
    with F.unique_name.guard():
        num_nodes = L.data("num_nodes", [1], False, 'int64')
        num_edges = L.data("num_edges", [1], False, 'int64')
        edges = L.data("edges", [-1, 2], False, 'int64')
        node_feat = L.data("term_ids", [-1, 40], False, 'int64')
        inputs = L.data("inputs", [-1], False, 'int64')

        # Build a graph from its basic pieces (node count, edges, features)
        gw = BatchGraphWrapper(num_nodes, num_edges, edges, {"term_ids": node_feat})
        outputs = erniesage_v1_encoder([gw], [inputs])

exe.run(start_prog)
outputs_np = exe.run(main_prog, feed=feed_dict, fetch_list=[outputs])[0]
print(outputs_np)
Core code of ERNIESage V2
GNN send of the text ids -> ERNIE extracts edge semantics -> GNN recv aggregates neighbor semantics -> ERNIE extracts the center node semantics, then concat
To help you follow the ERNIE-related parts below, here is the main architecture diagram of the ERNIE model.
# ERNIESage V2: ERNIE acts on the edges
class ERNIESageV2Encoder():
    def __init__(self, config):
        self.config = config

    def __call__(self, graph_wrappers, inputs):
        gw = graph_wrappers[0]
        term_ids = gw.node_feat["term_ids"]  # term_ids [B, S]

        # step 1. GNN sends the text ids
        def ernie_send(src_feat, dst_feat, edge_feat):
            def build_position_ids(term_ids):
                input_mask = L.cast(term_ids > 0, "int64")
                position_ids = L.cumsum(input_mask, axis=1) - 1
                return position_ids

            # src_ids, dst_ids: the text id sequences of the sending (src)
            # and receiving (dst) nodes
            src_ids, dst_ids = src_feat["term_ids"], dst_feat["term_ids"]

            # build the [CLS] id column and concat it in front of the first half
            cls = L.fill_constant_batch_size_like(src_feat["term_ids"], [-1, 1],
                                                  "int64", self.config.cls_id)  # cls [B, 1]
            src_ids = L.concat([cls, src_ids], 1)  # src_ids [B, S+1]

            # concat src and dst into one complete token id sequence
            term_ids = L.concat([src_ids, dst_ids], 1)  # term_ids [B, 2S+1]
            # [CLS], src_id1, src_id2.. [SEP], dst_id1, dst_id2..[SEP]

            sent_ids = L.concat([L.zeros_like(src_ids), L.ones_like(dst_ids)], 1)
            # 0, 0, 0 .. 0, 1, 1 .. 1
            position_ids = build_position_ids(term_ids)
            # 0, 1, 2, 3 ..

            # step 2. ERNIE extracts the edge semantics
            ernie_model = ErnieModel(self.config.ernie_config)
            cls_feat, _ = ernie_model(term_ids, sent_ids, position_ids)
            # cls_feat is ERNIE's sentence-level hidden representation
            return cls_feat

        msg = gw.send(ernie_send, nfeat_list=[("term_ids", term_ids)])

        # step 3. GNN recv aggregates the neighbor semantics
        # receive the neighbors' CLS representations and sum them
        neigh_feature = gw.recv(msg, lambda feat: F.layers.sequence_pool(feat, pool_type="sum"))

        # also prepend a CLS token for every node itself
        cls = L.fill_constant_batch_size_like(term_ids, [-1, 1], "int64", self.config.cls_id)
        term_ids = L.concat([cls, term_ids], 1)
        # [CLS], id1, id2, ... [SEP]

        # step 4. ERNIE extracts the center node semantics, then concat
        # run the center node through ERNIE once more
        ernie_model = ErnieModel(self.config.ernie_config)
        # take the CLS representation of the center node
        self_cls_feat, _ = ernie_model(term_ids)

        hidden_size = self.config.hidden_size
        self_feature = linear(self_cls_feat, hidden_size, "erniesage_v2_l", "leaky_relu")
        neigh_feature = linear(neigh_feature, hidden_size, "erniesage_v2_r", "leaky_relu")
        output = L.concat([self_feature, neigh_feature], axis=1)
        output = L.l2_normalize(output, axis=1)

        final_feats = [self.take_final_feature(output, i, "v2_final_fc") for i in inputs]
        return final_feats

    def take_final_feature(self, feature, index, name):
        """Gather and project the final features."""
        feat = L.gather(feature, index, overwrite=False)
        feat = linear(feat, self.config.hidden_size, name)
        feat = L.l2_normalize(feat, axis=1)
        return feat
# Run it directly
erniesage_v2_encoder = ERNIESageV2Encoder(config)

main_prog, start_prog = F.Program(), F.Program()
with F.program_guard(main_prog, start_prog):
    with F.unique_name.guard():
        num_nodes = L.data("num_nodes", [1], False, 'int64')
        num_edges = L.data("num_edges", [1], False, 'int64')
        edges = L.data("edges", [-1, 2], False, 'int64')
        node_feat = L.data("term_ids", [-1, 40], False, 'int64')
        inputs = L.data("inputs", [-1], False, 'int64')

        gw = BatchGraphWrapper(num_nodes, num_edges, edges, {"term_ids": node_feat})
        outputs = erniesage_v2_encoder([gw], [inputs])

exe = F.Executor(place)
exe.run(start_prog)
outputs_np = exe.run(main_prog, feed=feed_dict, fetch_list=[outputs])[0]
print(outputs_np)
Core flow of ERNIESage V3
GNN send of the text id sequences -> GNN recv concatenates the text id sequences -> ERNIE extracts the semantics of the center node and multiple neighbors in one pass
from models.encoder import v3_build_sentence_ids
from models.encoder import v3_build_position_ids

class ERNIESageV3Encoder():
    def __init__(self, config):
        self.config = config

    def __call__(self, graph_wrappers, inputs):
        gw = graph_wrappers[0]
        term_ids = gw.node_feat["term_ids"]

        # step 1. GNN sends the text id sequences
        # copy_send
        msg = gw.send(lambda src, dst, edge: src["h"], nfeat_list=[("h", term_ids)])

        # step 2. GNN recv concatenates the text id sequences
        def ernie_recv(term_ids):
            """Concatenate the text id sequences of num_neighbor neighbors."""
            num_neighbor = self.config.samples[0]
            pad_value = L.zeros([1], "int64")
            # sequence_pad joins the text id sequences of num_neighbor neighbors;
            # nodes with fewer than num_neighbor neighbors are padded up to num_neighbor
            neighbors_term_ids, _ = L.sequence_pad(term_ids, pad_value=pad_value,
                                                   maxlen=num_neighbor)  # [B, N*S]
            neighbors_term_ids = L.reshape(neighbors_term_ids,
                                           [0, self.config.max_seqlen * num_neighbor])
            return neighbors_term_ids

        neigh_term_ids = gw.recv(msg, ernie_recv)
        neigh_term_ids = L.cast(neigh_term_ids, "int64")

        # step 3. ERNIE extracts the semantics of the center node and its neighbors jointly
        cls = L.fill_constant_batch_size_like(term_ids, [-1, 1], "int64",
                                              self.config.cls_id)  # [B, 1]
        # concatenate the center node text with all neighbor texts into one very long
        # sequence of length (num_neighbor + 1) * seqlen
        multi_term_ids = L.concat([cls, term_ids[:, :-1], neigh_term_ids], 1)  # [B, (N+1)*S]
        # [CLS], center_id1, center_id2..[SEP]n1_id1, n1_id2..[SEP]n2_id1, n2_id2..[SEP]..[SEP]

        slot_seqlen = self.config.max_seqlen
        final_feats = []
        for index in inputs:
            term_ids = L.gather(multi_term_ids, index, overwrite=False)
            position_ids = v3_build_position_ids(term_ids, slot_seqlen)
            sent_ids = v3_build_sentence_ids(term_ids, slot_seqlen)

            # run the long sequence through ERNIE and take the CLS representation
            ernie_model = ErnieModel(self.config.ernie_config)
            cls_feat, _ = ernie_model(term_ids, sent_ids, position_ids)

            feature = linear(cls_feat, self.config.hidden_size, "v3_final_fc")
            feature = L.l2_normalize(feature, axis=1)
            final_feats.append(feature)
        return final_feats

# Run it directly
erniesage_v3_encoder = ERNIESageV3Encoder(config)

main_prog, start_prog = F.Program(), F.Program()
with F.program_guard(main_prog, start_prog):
    num_nodes = L.data("num_nodes", [1], False, 'int64')
    num_edges = L.data("num_edges", [1], False, 'int64')
    edges = L.data("edges", [-1, 2], False, 'int64')
    node_feat = L.data("term_ids", [-1, 40], False, 'int64')
    inputs = L.data("inputs", [-1], False, 'int64')

    gw = BatchGraphWrapper(num_nodes, num_edges, edges, {"term_ids": node_feat})
    outputs = erniesage_v3_encoder([gw], [inputs])

exe.run(start_prog)
outputs_np = exe.run(main_prog, feed=feed_dict, fetch_list=[outputs])[0]
print(outputs_np)
Training
The link prediction task
Taking a link prediction task as an example: we read in a semantic graph and train unsupervised, with the edges described above as the training targets.
class ERNIESageLinkPredictModel(propeller.train.Model):
    def __init__(self, hparam, mode, run_config):
        self.hparam = hparam
        self.mode = mode
        self.run_config = run_config

    def forward(self, features):
        num_nodes, num_edges, edges, node_feat_index, node_feat_term_ids, user_index, \
            pos_item_index, neg_item_index, user_real_index, pos_item_real_index = features
        node_feat = {"index": node_feat_index, "term_ids": node_feat_term_ids}
        graph_wrapper = BatchGraphWrapper(num_nodes, num_edges, edges, node_feat)

        #encoder = ERNIESageV1Encoder(self.hparam)
        encoder = ERNIESageV2Encoder(self.hparam)
        #encoder = ERNIESageV3Encoder(self.hparam)

        # extract features for the center nodes, their neighbors,
        # and the randomly sampled negative nodes
        outputs = encoder([graph_wrapper],
                          [user_index, pos_item_index, neg_item_index])
        user_feat, pos_item_feat, neg_item_feat = outputs

        if self.mode is not propeller.RunMode.PREDICT:
            return user_feat, pos_item_feat, neg_item_feat
        else:
            return user_feat, user_real_index

    def loss(self, predictions, labels):
        user_feat, pos_item_feat, neg_item_feat = predictions
        pos = L.reduce_sum(user_feat * pos_item_feat, -1, keep_dim=True)
        #neg = L.reduce_sum(user_feat * neg_item_feat, -1, keep_dim=True)  # 60.
        neg = L.matmul(user_feat, neg_item_feat, transpose_y=True)  # 80.
        # hinge loss: distance(center, neighbor) should beat distance(center, random negative)
        loss = L.reduce_mean(L.relu(neg - pos + self.hparam.margin))
        return loss

    def backward(self, loss):
        adam = F.optimizer.Adam(learning_rate=self.hparam['learning_rate'])
        adam.minimize(loss)

    def metrics(self, predictions, label):
        return {}

from link_predict import train
from link_predict import predict

train(config, ERNIESageLinkPredictModel)
predict(config, ERNIESageLinkPredictModel)
! head output/part-0
Evaluation
To get a clearer sense of the embedding quality, we directly compute the MRR over the embeddings produced from graph_data.txt; here graph_data.txt serves as both the training set and the validation set.
# This command converts the training data into the required format and produces dev_out.txt
!python build_dev.py --path "./example_data/link_predict/graph_data.txt"

# Next, compute the MRR score.
# Note: running this assumes the input_data parameter in the corresponding yaml config
# has been changed to "data.txt", and that the model was trained on data.txt;
# if not, retrain the model first.
!python mrr.py --emb_path output/part-0
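For reference, MRR (Mean Reciprocal Rank) is the average of 1/rank of the true answer over all queries. The self-contained sketch below illustrates the metric itself; it is my own illustration, not the project's mrr.py, which reads the embedding file produced above:

import numpy as np

def mrr_score(query_emb, item_emb, true_idx):
    """query_emb: [Q, D]; item_emb: [N, D]; true_idx: [Q], index of each query's true item."""
    scores = query_emb @ item_emb.T  # similarity of each query against every item
    true_scores = scores[np.arange(len(true_idx)), true_idx][:, None]
    ranks = (scores > true_scores).sum(axis=1) + 1  # rank of the true item per query
    return float(np.mean(1.0 / ranks))

# Toy check: each query's true item ranks first, so MRR == 1.0
queries, items = np.eye(2, 4), np.eye(4)
print(mrr_score(queries, items, np.array([0, 1])))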
Summary
This brief walkthrough of the three model versions shows that their main difference lies in the message-passing part. ERNIESage V1 applies ERNIE only to the nodes of the text graph, so the send stage considers only the neighbor's own text; ERNIESage V2 applies ERNIE to the edges, so the send stage considers the text of the current node and its neighbor together, giving better interaction; ERNIESage V3 applies ERNIE to the center node and all of its neighbors at once, so the nodes can attend to one another.
I hope this runnable example helps you gain a better understanding of ERNIESage. Go try it out!
That concludes this article of study notes and reflections from the 7-Day Graph Neural Network Camp; I hope it proves helpful to fellow developers!