A Python wrapper class for running the hhsearch binary

2023-11-21 14:04

This article introduces a Python wrapper class for running the hhsearch binary command. I hope it provides a useful reference for developers solving similar programming problems.

hhsearch implements HMM-HMM (Hidden Markov Model to Hidden Markov Model) alignment and is part of the HH-suite software package. It performs efficient protein sequence comparison and is particularly well suited for detecting remote homology.

Here are some of hhsearch's main features and uses:

  1. HMM-HMM alignment: hhsearch uses hidden Markov models (HMMs) to represent protein families. Unlike traditional sequence-to-sequence alignment methods, HMM-HMM alignment incorporates multiple-sequence information about amino acid residues, so it captures the patterns and structure of a protein family more effectively.

  2. Detecting remote homology: a key strength of hhsearch is its ability to detect relatively distant homologous relationships. By bringing more information into the alignment, it improves the discovery of remote homologs.

  3. Sensitivity and specificity: hhsearch is designed to reduce false-positive alignments while maintaining high sensitivity, making it more reliable when searching for structural and functional similarity.

  4. Database search: users can use hhsearch to search large protein databases for proteins similar to a given query sequence.
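To make the database-search usage concrete before diving into the wrapper, here is a minimal sketch of the command line that hhsearch expects. The binary and database paths below are hypothetical placeholders; `-i` (input a3m), `-o` (output .hhr) and `-d` (database prefix) are the actual hhsearch flags used later in the wrapper.

```python
import subprocess  # a real run would use this to launch the binary

# Hypothetical paths -- substitute your own hhsearch install and database prefix.
binary_path = "/usr/local/bin/hhsearch"
database_prefix = "/data/pdb70/pdb70"  # prefix shared by pdb70_hhm.ffindex etc.

# hhsearch reads an a3m alignment (-i) and writes a .hhr results file (-o);
# each database is passed with its own -d flag.
cmd = [binary_path,
       "-i", "query.a3m",
       "-o", "output.hhr",
       "-d", database_prefix]
print(" ".join(cmd))
# A real invocation would then be: subprocess.run(cmd, check=True)
```

The wrapper class below builds exactly this kind of command list, adding one `-d` pair per database so several databases can be searched in a single call.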

"""Library to run HHsearch from Python."""import glob
import os
import subprocess
from typing import Sequence, Optional, List, Iterable
from absl import logging
import contextlib
import tempfile
import dataclasses
import contextlib
import time
import shutil
import re@contextlib.contextmanager
def timing(msg: str):logging.info('Started %s', msg)tic = time.time()yieldtoc = time.time()logging.info('Finished %s in %.3f seconds', msg, toc - tic)@dataclasses.dataclass(frozen=True)
class TemplateHit:"""Class representing a template hit."""index: intname: straligned_cols: intsum_probs: Optional[float]query: strhit_sequence: strindices_query: List[int]indices_hit: List[int]@contextlib.contextmanager
def tmpdir_manager(base_dir: Optional[str] = None):"""Context manager that deletes a temporary directory on exit."""tmpdir = tempfile.mkdtemp(dir=base_dir)try:yield tmpdirfinally:shutil.rmtree(tmpdir, ignore_errors=True)def parse_hhr(hhr_string: str) -> Sequence[TemplateHit]:"""Parses the content of an entire HHR file."""lines = hhr_string.splitlines()# Each .hhr file starts with a results table, then has a sequence of hit# "paragraphs", each paragraph starting with a line 'No <hit number>'. We# iterate through each paragraph to parse each hit.block_starts = [i for i, line in enumerate(lines) if line.startswith('No ')]hits = []if block_starts:block_starts.append(len(lines))  # Add the end of the final block.for i in range(len(block_starts) - 1):hits.append(_parse_hhr_hit(lines[block_starts[i]:block_starts[i + 1]]))return hitsdef _parse_hhr_hit(detailed_lines: Sequence[str]) -> TemplateHit:"""Parses the detailed HMM HMM comparison section for a single Hit.This works on .hhr files generated from both HHBlits and HHSearch.Args:detailed_lines: A list of lines from a single comparison section between 2sequences (which each have their own HMM's)Returns:A dictionary with the information from that detailed comparison sectionRaises:RuntimeError: If a certain line cannot be processed"""# Parse first 2 lines.number_of_hit = int(detailed_lines[0].split()[-1])name_hit = detailed_lines[1][1:]# Parse the summary line.pattern = ('Probab=(.*)[\t ]*E-value=(.*)[\t ]*Score=(.*)[\t ]*Aligned_cols=(.*)[\t'' ]*Identities=(.*)%[\t ]*Similarity=(.*)[\t ]*Sum_probs=(.*)[\t '']*Template_Neff=(.*)')match = re.match(pattern, detailed_lines[2])if match is None:raise RuntimeError('Could not parse section: %s. Expected this: \n%s to contain summary.' %(detailed_lines, detailed_lines[2]))(_, _, _, aligned_cols, _, _, sum_probs, _) = [float(x)for x in match.groups()]# The next section reads the detailed comparisons. These are in a 'human# readable' format which has a fixed length. 
The strategy employed is to# assume that each block starts with the query sequence line, and to parse# that with a regexp in order to deduce the fixed length used for that block.query = ''hit_sequence = ''indices_query = []indices_hit = []length_block = Nonefor line in detailed_lines[3:]:# Parse the query sequence lineif (line.startswith('Q ') and not line.startswith('Q ss_dssp') andnot line.startswith('Q ss_pred') andnot line.startswith('Q Consensus')):# Thus the first 17 characters must be 'Q <query_name> ', and we can parse# everything after that.#              start    sequence       end       total_sequence_lengthpatt = r'[\t ]*([0-9]*) ([A-Z-]*)[\t ]*([0-9]*) \([0-9]*\)'groups = _get_hhr_line_regex_groups(patt, line[17:])# Get the length of the parsed block using the start and finish indices,# and ensure it is the same as the actual block length.start = int(groups[0]) - 1  # Make index zero based.delta_query = groups[1]end = int(groups[2])num_insertions = len([x for x in delta_query if x == '-'])length_block = end - start + num_insertionsassert length_block == len(delta_query)# Update the query sequence and indices list.query += delta_query_update_hhr_residue_indices_list(delta_query, start, indices_query)elif line.startswith('T '):# Parse the hit sequence.if (not line.startswith('T ss_dssp') andnot line.startswith('T ss_pred') andnot line.startswith('T Consensus')):# Thus the first 17 characters must be 'T <hit_name> ', and we can# parse everything after that.#              start    sequence       end     total_sequence_lengthpatt = r'[\t ]*([0-9]*) ([A-Z-]*)[\t ]*[0-9]* \([0-9]*\)'groups = _get_hhr_line_regex_groups(patt, line[17:])start = int(groups[0]) - 1  # Make index zero based.delta_hit_sequence = groups[1]assert length_block == len(delta_hit_sequence)# Update the hit sequence and indices list.hit_sequence += delta_hit_sequence_update_hhr_residue_indices_list(delta_hit_sequence, start, indices_hit)return 
TemplateHit(index=number_of_hit,name=name_hit,aligned_cols=int(aligned_cols),sum_probs=sum_probs,query=query,hit_sequence=hit_sequence,indices_query=indices_query,indices_hit=indices_hit,)def _get_hhr_line_regex_groups(regex_pattern: str, line: str) -> Sequence[Optional[str]]:match = re.match(regex_pattern, line)if match is None:raise RuntimeError(f'Could not parse query line {line}')return match.groups()def _update_hhr_residue_indices_list(sequence: str, start_index: int, indices_list: List[int]):"""Computes the relative indices for each residue with respect to the original sequence."""counter = start_indexfor symbol in sequence:if symbol == '-':indices_list.append(-1)else:indices_list.append(counter)counter += 1class HHSearch:"""Python wrapper of the HHsearch binary."""def __init__(self,*,binary_path: str,databases: Sequence[str],maxseq: int = 1_000_000):"""Initializes the Python HHsearch wrapper.Args:binary_path: The path to the HHsearch executable.databases: A sequence of HHsearch database paths. This should be thecommon prefix for the database files (i.e. up to but not including_hhm.ffindex etc.)maxseq: The maximum number of rows in an input alignment. 
Note that thisparameter is only supported in HHBlits version 3.1 and higher.Raises:RuntimeError: If HHsearch binary not found within the path."""self.binary_path = binary_pathself.databases = databasesself.maxseq = maxseq#for database_path in self.databases:#  if not glob.glob(database_path + '_*'):#    logging.error('Could not find HHsearch database %s', database_path)#    raise ValueError(f'Could not find HHsearch database {database_path}')@propertydef output_format(self) -> str:return 'hhr'@propertydef input_format(self) -> str:return 'a3m'def query(self, a3m: str) -> str:"""Queries the database using HHsearch using a given a3m."""with tmpdir_manager() as query_tmp_dir:input_path = os.path.join(query_tmp_dir, 'query.a3m')hhr_path = os.path.join(query_tmp_dir, 'output.hhr')with open(input_path, 'w') as f:f.write(a3m)db_cmd = []for db_path in self.databases:db_cmd.append('-d')db_cmd.append(db_path)cmd = [self.binary_path,'-i', input_path,'-o', hhr_path,'-maxseq', str(self.maxseq)] + db_cmdprint("cmd:",cmd)logging.info('Launching subprocess "%s"', ' '.join(cmd))process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)with timing('HHsearch query'):stdout, stderr = process.communicate()retcode = process.wait()if retcode:# Stderr is truncated to prevent proto size errors in Beam.raise RuntimeError('HHSearch failed:\nstdout:\n%s\n\nstderr:\n%s\n' % (stdout.decode('utf-8'), stderr[:100_000].decode('utf-8')))with open(hhr_path) as f:hhr = f.read()return hhrdef get_template_hits(self,output_string: str,input_sequence: str) -> Sequence[TemplateHit]:"""Gets parsed template hits from the raw string output by the tool."""del input_sequence  # Used by hmmseach but not needed for hhsearch.return parse_hhr(output_string)def convert_stockholm_to_a3m (stockholm_format: str,max_sequences: Optional[int] = None,remove_first_row_gaps: bool = True) -> str:"""Converts MSA in Stockholm format to the A3M format."""descriptions = {}sequences = {}reached_max_sequences 
= Falsefor line in stockholm_format.splitlines():reached_max_sequences = max_sequences and len(sequences) >= max_sequencesif line.strip() and not line.startswith(('#', '//')):# Ignore blank lines, markup and end symbols - remainder are alignment# sequence parts.seqname, aligned_seq = line.split(maxsplit=1)if seqname not in sequences:if reached_max_sequences:continuesequences[seqname] = ''sequences[seqname] += aligned_seqfor line in stockholm_format.splitlines():if line[:4] == '#=GS':# Description row - example format is:# #=GS UniRef90_Q9H5Z4/4-78            DE [subseq from] cDNA: FLJ22755 ...columns = line.split(maxsplit=3)seqname, feature = columns[1:3]value = columns[3] if len(columns) == 4 else ''if feature != 'DE':continueif reached_max_sequences and seqname not in sequences:continuedescriptions[seqname] = valueif len(descriptions) == len(sequences):break# Convert sto format to a3m line by linea3m_sequences = {}if remove_first_row_gaps:# query_sequence is assumed to be the first sequencequery_sequence = next(iter(sequences.values()))query_non_gaps = [res != '-' for res in query_sequence]for seqname, sto_sequence in sequences.items():# Dots are optional in a3m format and are commonly removed.out_sequence = sto_sequence.replace('.', '')if remove_first_row_gaps:out_sequence = ''.join(_convert_sto_seq_to_a3m(query_non_gaps, out_sequence))a3m_sequences[seqname] = out_sequencefasta_chunks = (f">{k} {descriptions.get(k, '')}\n{a3m_sequences[k]}"for k in a3m_sequences)return '\n'.join(fasta_chunks) + '\n'  # Include terminating newlinedef _convert_sto_seq_to_a3m(query_non_gaps: Sequence[bool], sto_seq: str) -> Iterable[str]:for is_query_res_non_gap, sequence_res in zip(query_non_gaps, sto_seq):if is_query_res_non_gap:yield sequence_reselif sequence_res != '-':yield sequence_res.lower()if __name__ == "__main__":### 1. 
准备输入数据## 输入序列先通过Jackhmmer多次迭代从uniref90,MGnify数据库搜索同源序列,输出的多序列比对文件(如globins4.sto),转化为a3m格式后,再通过hhsearch从pdb数据库中找到同源序列input_fasta_file = '/home/zheng/test/Q94K49.fasta'## input_sequencewith open(input_fasta_file) as f:input_sequence = f.read()test_templates_sto_file = "/home/zheng/test/Q94K49_aln.sto"with open(test_templates_sto_file) as f:test_templates_sto = f.read()## sto格式转a3m格式()test_templates_a3m = convert_stockholm_to_a3m(test_templates_sto)hhsearch_binary_path = "/home/zheng/software/hhsuite-3.3.0-SSE2-Linux/bin/hhsearch"### 2.类实例化# scop70_1.75文件名前缀scop70_database_path = "/home/zheng/database/scop70_1.75_hhsuite3/scop70_1.75"pdb70_database_path = "/home/zheng/database/pdb70_from_mmcif_latest/pdb70"#hhsuite数据库下载地址:https://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/  ## 单一数据库#template_searcher = HHSearch(binary_path = hhsearch_binary_path,#                             databases = [scop70_database_path])## 多个数据库database_lst = [scop70_database_path, pdb70_database_path]template_searcher = HHSearch(binary_path = hhsearch_binary_path,databases = database_lst) ### 3. 同源序列搜索## 搜索结果返回.hhr文件字符串templates_result = template_searcher.query(test_templates_a3m)print(templates_result)## pdb序列信息列表template_hits = template_searcher.get_template_hits(output_string=templates_result, input_sequence=input_sequence)print(template_hits)
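The a3m convention that the Stockholm-to-a3m conversion relies on can be illustrated in isolation. Below is a minimal re-implementation sketch of the per-column rules (the helper name `sto_to_a3m_row` is made up for illustration): columns where the query has a residue are kept as-is, columns where only the target has a residue become lowercase "insertion" characters, and columns gapped in both rows are dropped.

```python
# Sketch of the a3m column rules applied to one aligned target row,
# relative to the (gapped) query row of the same Stockholm alignment.
def sto_to_a3m_row(query_row: str, target_row: str) -> str:
    out = []
    for q_res, t_res in zip(query_row, target_row):
        if q_res != '-':
            out.append(t_res)          # query column: keep the aligned character
        elif t_res != '-':
            out.append(t_res.lower())  # insertion relative to query: lowercase
        # gap in both rows: drop the column entirely
    return ''.join(out)

print(sto_to_a3m_row("AC-GT", "AGTGT"))  # -> "AGtGT"
```

Lowercasing insertions this way keeps every a3m row the same length as the query once lowercase characters are ignored, which is what downstream HMM tools expect.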

This concludes the article on a Python wrapper class for running the hhsearch binary. I hope the article is helpful to programmers!


