This article describes how to transcribe recorded audio files in Python 3 using Tencent Cloud's speech recognition API, fetching the results by polling. Hopefully it serves as a useful reference for developers tackling the same problem.
Tencent Cloud originally only supported retrieving recording-recognition results through a callback, and I wrote a post about that approach at the time: https://blog.csdn.net/TomorrowAndTuture/article/details/100100430. Around September 5th, however, an update added support for fetching the result of a recording-file recognition task by polling. For the details of the change, see the Python SDK source on Tencent Cloud's GitHub: https://github.com/tencentcloud/tencentcloud-sdk-python
The callback approach requires you to run a service of your own just to receive the POST requests carrying Tencent's recognition results, and if you don't have a publicly reachable IP or domain, setting up an address Tencent can call back to is not exactly easy. Being able to fetch the result by TaskId is therefore very welcome; better still, within a certain retention period you can fetch the result repeatedly with the same TaskId.
First, install the tencentcloud-sdk-python package, via pip or whatever method you prefer:
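pip install tencentcloud-sdk-python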
Without further ado, here is the code. The first script submits the URL of the recording file along with the request parameters:
"""
@file: recognition_request.py
@author: Looking
@email: 2392863668@qq.com
"""from tencentcloud.common import credential
from tencentcloud.common.profile.client_profile import ClientProfile
from tencentcloud.common.profile.http_profile import HttpProfile
from tencentcloud.common.exception.tencent_cloud_sdk_exception import TencentCloudSDKException
from tencentcloud.asr.v20190614 import asr_client, models

try:
    cred = credential.Credential("your SECRETID", "your SECRET_KEY")
    httpProfile = HttpProfile()
    httpProfile.endpoint = "asr.tencentcloudapi.com"
    clientProfile = ClientProfile()
    clientProfile.httpProfile = httpProfile
    client = asr_client.AsrClient(cred, "ap-guangzhou", clientProfile)
    req = models.CreateRecTaskRequest()
    # Adjust the parameters below to suit your own audio file
    params = '{"EngineModelType":"8k_6","ChannelNum":1,"ResTextFormat":0,"SourceType":0,"Url":"http://audio.c.---.wav"}'
    req.from_json_string(params)
    resp = client.CreateRecTask(req)
    print(resp.to_json_string())
except TencentCloudSDKException as err:
    print(err)
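If you'd rather not hand-write the JSON string, the same params can be built from a dict and serialized with json.dumps (a small sketch reusing the field values from the script above):

import json

# The same request parameters as the string above, expressed as a dict
params = json.dumps({
    "EngineModelType": "8k_6",
    "ChannelNum": 1,
    "ResTextFormat": 0,
    "SourceType": 0,
    "Url": "http://audio.c.---.wav"  # replace with the URL of your own recording
})
req.from_json_string(params)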
Running the request script produces output like the following:
"D:\Program Files\Python36\python3.exe" D:/MyProject/Python/Voice_SDK/python_record_asr_sdk/src/request_recognition.py
{"Data": {"TaskId": 537731632}, "RequestId": "4d83b186-98fc-4d51-ba09-da65fe7b891e"}Process finished with exit code 0
With the TaskId of the recognition task in hand, the next step is to fetch the recognition result by TaskId. I've added some code that parses the result:
"""
@file: recognition_result.py
@author: Looking
@email: 2392863668@qq.com
"""
from tencentcloud.common import credential
from tencentcloud.common.profile.client_profile import ClientProfile
from tencentcloud.common.profile.http_profile import HttpProfile
from tencentcloud.common.exception.tencent_cloud_sdk_exception import TencentCloudSDKException
from tencentcloud.asr.v20190614 import asr_client, models
try:
    cred = credential.Credential("your SECRETID", "your SECRET_KEY")
    httpProfile = HttpProfile()
    httpProfile.endpoint = "asr.tencentcloudapi.com"
    clientProfile = ClientProfile()
    clientProfile.httpProfile = httpProfile
    client = asr_client.AsrClient(cred, "ap-guangzhou", clientProfile)
    req = models.DescribeTaskStatusRequest()
    params = '{"TaskId":537731632}'
    req.from_json_string(params)
    resp = client.DescribeTaskStatus(req)
    # print(resp.to_json_string())
    recognition_text = resp.to_json_string()
    # print(eval(recognition_text)['Data'])
    recognition_text = eval(recognition_text)['Data']['Result']
    doc = open('result.txt', 'w', encoding='utf-8')
    sentence_list = recognition_text.split('\n')[0:-1]  # the last element of the list is an empty string
    for sentence in sentence_list:
        content = sentence.split(' ')[1]  # text of this utterance
        begin_time = sentence.split(' ')[0].split(',')[0][1:]  # start time of the utterance
        begin_time = str(int(begin_time.split(":")[0]) * 60000 + int(begin_time.split(":")[1].replace(".", "")))  # convert m:ss.mmm to milliseconds
        end_time = sentence.split(' ')[0].split(',')[1]  # end time of the utterance
        end_time = str(int(end_time.split(":")[0]) * 60000 + int(end_time.split(":")[1].replace(".", "")))
        speaker = sentence.split(' ')[0].split(',')[-1][:-1]  # speaker id (strip the trailing ']')
        print(speaker + "\t" + content + '\t' + begin_time + '\t' + end_time)
        print(speaker + "\t" + content + '\t' + begin_time + '\t' + end_time, file=doc)
    doc.close()
except TencentCloudSDKException as err:
    print(err)
The loop at the end then splits the returned string and writes the complete recognition result, one utterance per line, to result.txt.
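The script above asks for the result only once; since this is a polling interface, in practice you may want to retry until the task actually finishes. A minimal polling sketch, assuming the Data field of the DescribeTaskStatus response carries a Status code (2 for success, 3 for failure, smaller values while the task is still queued or running), and reusing the client and req built in the script above:

import json
import time

while True:
    resp = client.DescribeTaskStatus(req)
    data = json.loads(resp.to_json_string())['Data']
    if data['Status'] == 2:    # assumed status code: task finished successfully
        recognition_text = data['Result']
        break
    elif data['Status'] == 3:  # assumed status code: task failed
        raise RuntimeError('recognition failed: ' + str(data))
    time.sleep(5)              # still queued or running; wait and poll again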
One more thing: the recognition result comes back as a string, so remember to convert it into a Python dict before parsing it. (Prefer json.loads() over eval here: the return statement return resp.to_json_string() gives back a JSON string anyway, and feeding it to eval can run into bugs.)
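For example, the parsing step could look like this (a sketch; json.loads replaces eval, everything else matches the script above):

import json

resp_dict = json.loads(resp.to_json_string())  # resp is the DescribeTaskStatus response
recognition_text = resp_dict['Data']['Result']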
That wraps up transcribing recording files with Python 3 by polling Tencent Cloud's speech recognition API. I hope you find it useful.