Pushing the boundaries of molecular representation for drug discovery with graph attention mechanism


Attentive FP (Xiong et al., J. Med. Chem., 2020)

Motivations

1、The gap between what deep neural networks learn and what human beings can comprehend keeps growing

2、Graph-based representations take only the topological arrangement of atoms as input

3、Geometry-based representations employ the molecular geometry information, including bond lengths, bond angles, and torsional angles

4、Attentive FP automatically learns nonlocal intramolecular interactions from specified tasks

(1)characterizes the atomic local environment by propagating node information from nearby nodes to more distant ones

(2)allows for nonlocal effects at the intramolecular level by applying a graph attention mechanism

Related Works


1、Neural FP and GCN

These models assume that a neighbor node's chance to influence the target decreases with topological distance during the recursive propagation procedure (but topologically distant nodes can sometimes matter greatly to the current node)

2、Weave and MPNN

(1)They construct virtual edges linking every pair of nodes in the molecular graph, so that every node, regardless of its distance from a target node, has the same chance to exert influence as the target's direct neighbors

(2)This tends to weaken every neighbor's impact because of the averaging effect
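
To make the dilution argument concrete, here is a toy PyTorch sketch contrasting plain averaging with attention-weighted aggregation of neighbor states; all tensors, names, and dimensions are made up for illustration.

```python
import torch
import torch.nn.functional as F

# Toy contrast between averaging (Weave/MPNN-style) and attention weighting.
# `h_neighbors` (5 neighbors x 32 dims) and `h_target` are made-up tensors.
torch.manual_seed(0)
h_neighbors = torch.randn(5, 32)
h_target = torch.randn(32)

# Averaging: every neighbor contributes equally, so one highly relevant
# neighbor is diluted by the other four.
context_mean = h_neighbors.mean(dim=0)

# Attention: alignment scores (a plain dot product here for simplicity)
# decide how much each neighbor contributes to the weighted sum.
scores = h_neighbors @ h_target         # (5,) alignment scores
weights = F.softmax(scores, dim=0)      # normalized attention weights
context_attn = weights @ h_neighbors    # weighted sum of neighbor states

print(weights)  # uneven weights: relevant neighbors dominate the context
```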

Methods

Attentive FP Network Architecture

[Figure: overall Attentive FP network architecture]

1、A linear transformation followed by a nonlinear activation is applied to unify the length of the initial atom feature vectors

2、These initial state vectors are then embedded by stacked attentive layers to produce the node (atom) embeddings

3、The entire molecule is treated as a super virtual node that connects every atom in the molecule and is embedded with the same attention mechanism used for atom embedding

4、The final state vector of this super node is the learned representation that encodes the structural information of the molecular graph; a task-dependent layer then produces the prediction
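
For quick orientation before the layer-by-layer details, this four-step pipeline is also available off the shelf: the sketch below uses the `AttentiveFP` model from PyTorch Geometric, with placeholder feature dimensions, under the assumption that the installed PyG version exposes this model with this signature.

```python
import torch
from torch_geometric.nn.models import AttentiveFP

# num_layers    -> k stacked attentive layers for atom embedding (step 2)
# num_timesteps -> t refinement steps of the molecule-level super node (step 3)
model = AttentiveFP(
    in_channels=39,       # raw atom-feature length (placeholder value)
    hidden_channels=200,  # unified state-vector length after step 1
    out_channels=1,       # task-dependent output (step 4), e.g. one regression target
    edge_dim=10,          # bond-feature length (placeholder value)
    num_layers=2,
    num_timesteps=2,
    dropout=0.2,
)

# Forward pass on a toy graph: 3 atoms, 2 bonds (each stored in both directions).
x = torch.randn(3, 39)                          # atom features
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])       # bonds as directed node pairs
edge_attr = torch.randn(4, 10)                  # bond features
batch = torch.zeros(3, dtype=torch.long)        # all atoms belong to one molecule
out = model(x, edge_index, edge_attr, batch)    # shape: (1, 1)
```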

Attentive Layers on a Graph

Attentive FP uses two stacks of attentive layers to extract information from the molecular graph:

1、One stack (with k layers) performs atom embedding; a single attentive layer works as follows (a from-scratch sketch follows this list):
[Figure: a single attentive layer applied to atom embedding]

(1)When applying attention to atom 3, the state vector of atom 3 is aligned with the state vectors of its neighbors 2, 4, and 5

(2)The weights that measure how much attention to assign to each neighbor are computed with a softmax over the alignment scores

(3)A weighted sum of the neighbors' information gives C3, the attention context vector of atom 3

(4)C3 (the attention context of atom 3) is fed into a GRU recurrent unit together with the state vector h3 of atom 3; the gating allows relevant information to be passed along without too much attrition

2、Full network architecture of the attentive layers (the process that updates the state vectors h)
[Figure: the full attentive-layer architecture for updating h]

3、The other stack (with t layers) performs full-molecule embedding
[Figure: molecule embedding through a super virtual node]

All of the atom embeddings are aggregated through a super virtual node that connects every atom of the molecule; its state vector is refined t times with the same attention-plus-GRU scheme used for atom embedding.
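
Putting the two stacks together, here is a minimal from-scratch sketch in plain PyTorch, assuming a dense 0/1 adjacency matrix, a single molecule, and no batching; the layer sizes, the pairwise alignment network, and all names are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AtomAttentiveLayer(nn.Module):
    """One atom-embedding attentive layer: align -> softmax -> weighted sum -> GRU."""
    def __init__(self, dim):
        super().__init__()
        self.align = nn.Linear(2 * dim, 1)  # alignment score per (atom, neighbor) pair
        self.attend = nn.Linear(dim, dim)   # transform of neighbor states
        self.gru = nn.GRUCell(dim, dim)     # gated update of the atom state

    def forward(self, h, adj):
        # h: (N, dim) atom states; adj: (N, N) 0/1 adjacency without self-loops.
        n, d = h.shape
        pair = torch.cat([h.unsqueeze(1).expand(n, n, d),
                          h.unsqueeze(0).expand(n, n, d)], dim=-1)
        e = F.leaky_relu(self.align(pair)).squeeze(-1)  # (N, N) alignment scores
        e = e.masked_fill(adj == 0, float('-inf'))      # only real neighbors attend
        a = torch.softmax(e, dim=1)                     # attention weights per atom
        context = F.elu(a @ self.attend(h))             # attention context C_v
        return self.gru(context, h)                     # updated state vectors

class MoleculeReadout(nn.Module):
    """Super virtual node connected to every atom, updated with the same scheme."""
    def __init__(self, dim, t=2):
        super().__init__()
        self.align = nn.Linear(2 * dim, 1)
        self.attend = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.t = t

    def forward(self, h):
        m = h.sum(dim=0, keepdim=True)      # initial molecule state, (1, dim)
        for _ in range(self.t):             # t molecule-embedding layers
            pair = torch.cat([m.expand_as(h), h], dim=-1)
            a = torch.softmax(F.leaky_relu(self.align(pair)), dim=0)  # (N, 1)
            context = F.elu((a * self.attend(h)).sum(dim=0, keepdim=True))
            m = self.gru(context, m)        # refine the molecule state
        return m.squeeze(0)                 # learned molecular representation

# Toy usage: 4 atoms in a chain, 16-dim states, single regression target.
dim = 16
h = torch.randn(4, dim)
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float)
atom_layers = nn.ModuleList(AtomAttentiveLayer(dim) for _ in range(2))  # k = 2
readout = MoleculeReadout(dim, t=2)
head = nn.Linear(dim, 1)                    # task-dependent prediction layer

for layer in atom_layers:
    h = layer(h, adj)
prediction = head(readout(h))
```

Returning the attention matrix `a` from these layers alongside the states is also what makes the interpretation analyses mentioned in the conclusions possible.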

Conclusions

1、The adoption of graph attention mechanisms at both the atom and molecule levels allows this new representation framework to learn both local and nonlocal properties of a given chemical structure

2、It captures subtle substructure patterns

3、Inverting the trained Attentive FP model by extracting its hidden states or attention weights provides access to the model's interpretation of what it has learned




