Integrating the NetEase Yunxin (网易云信) SDK for Instant Messaging on the Web (Voice Messaging)

2023-10-30 07:59

This article walks through integrating the NetEase Yunxin IM SDK in a web project to send and receive voice messages, and is intended as a practical reference for developers with a similar need.

1. Register and sign in to a NetEase Yunxin account

Sign in to the NetEase Yunxin console.
1. After signing in, create an application (uikit) in the console to obtain your own appKey.
2. Check which services have been enabled for your account.
If voice messaging is not yet enabled for your account, you can apply for a free trial.

2. Review the documentation

Open the developer documentation.
There you will find demos and the APIs for each platform and environment.
Choose the environment you are developing for, then pick the "integration without UI" option below it (select whichever variant fits your needs).

3. Integrate the SDK

Option 1: npm integration (recommended)

npm install @yxim/nim-web-sdk@latest
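
If you integrate via npm, the import below mirrors the one used by this article's wrapper later on; this is only a minimal sketch, and the appKey, account and token values are placeholders you obtain from the Yunxin console and your own backend:

import { NIM } from '@yxim/nim-web-sdk'

// Minimal connection sketch; the callbacks shown here are the same option names used later in Nim.ts
const nim = NIM.getInstance({
  appKey: '<your appKey>',
  account: '<account id>',
  token: '<IM token>',
  onconnect: () => console.log('connected'),
  onerror: (err) => console.error(err)
})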

Option 2: script-tag integration

1. Download the latest SDK from the Yunxin SDK download center.
2. Unzip the SDK. After unzipping you get the following three files (v9.8.0 is used as the example here).
3. What each file provides:

├── NIM_Web_Chatroom.js       Chatroom features, browser-adapted build (UMD format)
├── NIM_Web_NIM.js            IM features (one-to-one chat, sessions, group chat, etc.) without chatrooms, browser-adapted build (UMD format)
├── NIM_Web_SDK.js            Combined bundle with both IM and chatroom features, browser-adapted build (UMD format)

4. Pass the SDK file you need to the src of a script tag. Afterwards you can get a reference to the SDK from properties on the window object, as in the example below.

<!-- for example -->
<script src="NIM_Web_SDK_vx.x.x.js"></script>
<script>
  var nim = SDK.NIM.getInstance({ /* ... */ })
</script>

For more detail, see the official NetEase Yunxin documentation.

In this project I used option 1 (npm) and wrapped the SDK in a TypeScript utility file:
Nim.ts in the utils folder:

/*
 * @Description: Instant messaging wrapper
 * @Author: 黑猫
 * @Date: 2023-06-30 10:53:33
 * @LastEditTime: 2023-07-15 10:26:36
 * @FilePath: \pc-exam-invigilate\src\utils\Nim.ts
 */
import { NIM } from '@yxim/nim-web-sdk'
// import { NimSdk, NimScene } from '@/constants'
import type {
  NIMSendTextOptions,
  NIMGetInstanceOptions,
  NIMSession,
  INimInstance
} from '@/types/NimTypes'

type CallbackOptions =
  | 'onconnect'
  | 'ondisconnect'
  | 'onerror'
  | 'onmsg'
  | 'onwillreconnect'
  | 'onsessions'
  | 'onupdatesessions'
  | 'onofflinemsgs'

type Options = {
  account: string
  token: string
  debug?: boolean
} & Pick<NIMGetInstanceOptions, CallbackOptions>

class Nim {
  nim: INimInstance

  constructor(options: Options) {
    this.nim = this.initNim(options)
  }

  private initNim({ account, token, debug = true, ...args }: Options): INimInstance {
    console.log(import.meta.env.VITE_APP_KEY)
    return NIM.getInstance({
      debug,
      appKey: import.meta.env.VITE_APP_KEY,
      account,
      token,
      privateConf: {
        isDataReportEnable: true,
        isMixStoreEnable: true
      }, // configuration required for the private deployment option
      ...args
    })
  }

  // Send a text message
  sendText(options: NIMSendTextOptions) {
    this.nim.sendText({
      cc: true,
      ...options
    })
  }

  // Reset the unread count of a session
  resetSessionUnread(
    sessionId: string,
    done: (err: Error | null, failedSessionId: string) => void
  ) {
    this.nim.resetSessionUnread(sessionId, done)
  }

  // Fetch history messages
  getHistoryMsgs(options: any) {
    this.nim.getHistoryMsgs(options)
  }

  sendCustomMsg(options: any) {
    this.nim.sendCustomMsg(options)
  }

  getTeam(options: any) {
    this.nim.getTeam(options)
  }

  getTeams(options: any) {
    this.nim.getTeams(options)
  }

  sendFile(options: any) {
    this.nim.sendFile(options)
  }

  // Merge session lists
  mergeSessions(olds: any[], news: any[]) {
    if (!olds) {
      olds = []
    }
    if (!news) {
      return olds
    }
    if (!NIM.util.isArray(news)) {
      news = [news]
    }
    if (!news.length) {
      return olds
    }
    const options = {
      sortPath: 'time',
      desc: true
    }
    return NIM.util.mergeObjArray([], olds, news, options)
  }

  destroy(done?: any) {
    this.nim.destroy(done)
  }

  disconnect(done?: any) {
    this.nim.disconnect(done)
  }
}

export default Nim
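
Nim.ts reads the appKey from import.meta.env.VITE_APP_KEY, which assumes a Vite project with an env file at the project root; a minimal sketch of that entry (file name and value are placeholders, substitute your own appKey from the console):

# .env — the key name must match what Nim.ts reads via import.meta.env
VITE_APP_KEY=<your Yunxin appKey>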

NimTypes.ts in the types folder:

/*
 * @Description: NIM type re-exports
 * @Author: 黑猫
 * @Date: 2023-06-30 11:01:57
 * @LastEditTime: 2023-06-30 14:18:22
 * @FilePath: \pc-exam-invigilate\src\types\NimTypes.ts
 */
import { NIM } from '@yxim/nim-web-sdk'
import type { NIMCommonError, NIMStrAnyObj } from '@yxim/nim-web-sdk/dist/types/types'

export type {
  NIMSendTextOptions,
  NIMMessage
} from '@yxim/nim-web-sdk/dist/types/nim/MessageInterface'
export type {
  NIMTeam,
  NIMTeamMember,
  TeamInterface
} from '@yxim/nim-web-sdk/dist/types/nim/TeamInterface'
export type { NIMGetInstanceOptions } from '@yxim/nim-web-sdk/dist/types/nim/types'
export type { NIMSession } from '@yxim/nim-web-sdk/dist/types/nim/SessionInterface'

export type NIMError = NIMCommonError | Error | NIMStrAnyObj | null
export type INimInstance = InstanceType<typeof NIM>

1. Now do the initialization in the page component:

// Imports used by this page component
import { ref, onMounted, computed } from 'vue'
import { refreshImToken } from '@/api/modules/examRoom'
import type { NIMMessage } from '@/types/NimTypes'
import { NimScene } from '@/constants'
import { formatDateTime } from '@/utils/utils'
import Nim from '@/utils/Nim'

const inspectorNimRef = ref<InstanceType<typeof Nim>>()
// Note: teamId, accid, audioMessageList, checkedObj, configStore, userStore,
// openChatting and monitorCreateTeam are defined elsewhere in this page component.

// Fetch history messages
const getHistoryMessages = (teamId: string) => {
  const today = new Date()
  const twoDaysAgo = new Date(today.getTime() - 2 * 24 * 60 * 60 * 1000) // two days ago
  today.setHours(0, 0, 0, 0)
  twoDaysAgo.setHours(0, 0, 0, 0)
  const beginTime = twoDaysAgo.getTime()
  const endTime = Date.now()
  try {
    inspectorNimRef.value?.getHistoryMsgs({
      scene: NimScene.GroupChat,
      to: teamId,
      beginTime: beginTime,
      endTime: endTime,
      reverse: true, // whether to fetch history messages in reverse order
      done: getHistoryMsgsDone
    })
  } catch (error) {
    console.log('Failed to fetch history messages', error)
  }
}

// Handle the history message payload
const getHistoryMsgsDone = (error: any, obj: any) => {
  if (!error) {
    if (obj.msgs.length > 0) {
      obj.msgs.forEach((item) => {
        if (item?.file) {
          audioMessageList.value.push({
            duration: Math.ceil(item?.file?.size / (6600 * 2)),
            stream: item?.file?.url,
            wink: false,
            username: item.fromNick,
            flow: item.flow,
            time: formatDateTime(item.time)
          })
        }
      })
    }
  }
}

// Receive a message
const receiveMessage = (message: NIMMessage) => {
  if (message && message.type === 'audio' && message?.to) {
    // Handle an incoming voice message
    const duration = Math.ceil(message.file.size / (6600 * 2))
    if (duration <= 1) {
      return
    }
    const dataTime = formatDateTime(message.time)
    audioMessageList.value.push({
      duration,
      stream: message.file.url,
      wink: false,
      username: message.fromNick,
      flow: message.flow,
      time: dataTime
    })
    openChatting(true, checkedObj.value)
    // Trigger a dialog or other UI here to surface the voice message
  }
}

// Create a group chat and get its team ID (used to send/receive voice messages and fetch chat history)
const createTeam = (checkedObj) => {
  monitorCreateTeam({
    scheduleId: checkedObj.scheduleId // call whichever backend API returns a team id for you; here it is keyed by schedule id
  }).then((data) => {
    teamId.value = data as string
    configStore.setConfigState('teamId', teamId.value)
    localStorage.setItem('teamId', teamId.value)
    getHistoryMessages(teamId.value)
  })
}

// Initialize
const initNim = (im_token) => {
  accid.value = userStore.userId ? `yss_${userStore.userId}` : 'yss_'
  const nim = new Nim({
    account: accid.value, // user id
    token: im_token, // token obtained from the backend using the user id
    debug: false,
    onconnect() {
      console.log('connected')
    },
    onupdatesessions: '', // supply a callback as needed, or set to null/undefined
    onsessions: null, // supply a callback as needed, or set to null/undefined
    onmsg: receiveMessage,
    ondisconnect() {
      // handle disconnects
    },
    onerror(err) {
      // handle errors
      console.log('err --->>>', err)
    },
    onwillreconnect(obj) {
      // handle the about-to-reconnect event
    },
    onofflinemsgs(obj) {
      // handle offline messages
    }
  })
  inspectorNimRef.value = nim
}

// Refresh the IM token: get accid and im_token
const getImToken = () => {
  const acc_id = userStore.userId ? `yss_${userStore.userId}` : 'yss_'
  refreshImToken({
    accId: acc_id
  }).then((data: { info?: { token?: string } }) => {
    if (data && data.info && data.info.token) {
      const im_token: string = data.info.token
      initNim(im_token)
    }
  })
}

onMounted(() => {
  getImToken()
})
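
The code above and the components below use formatDateTime from '@/utils/utils', which the original project file is not shown here; a hypothetical minimal implementation, assuming it turns a millisecond timestamp into a local 'YYYY-MM-DD HH:mm:ss' string (adjust to whatever your project actually does):

// Hypothetical helper, not taken from the original project
export const formatDateTime = (timestamp: number): string => {
  const d = new Date(timestamp)
  const pad = (n: number) => String(n).padStart(2, '0')
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())} ${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`
}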

The constants file (constants/index.ts):

/*
 * @Author: heimao
 * @Date: 2023-07-10 15:33:14
 * @LastEditors: Do not edit
 * @LastEditTime: 2023-07-07 14:12:23
 * @FilePath: \pc-exam-invigilate\src\constants\index.ts
 * @Description:
 */
export const enum Source {
  Examinee = 1, // student
  Teacher = 2 // teacher
}

export const enum EmptyStatus {
  NoSession = 0, // no session yet
  TransferSuccess = 1 // transfer succeeded
}

// Common values
export const enum CommonData {
  PollingTime = 2000, // polling interval
  SliceLength = 10, // slice length
  PageSize = 100 // page size
}

// Yunxin IM SDK
export const enum NimSdk {
  AppKey = 'cbf015717598350b86ecc0fc7ed51136'
}

/**
 * p2p: one-to-one chat message
 * team: group chat message
 * superTeam: super group message
 */
export const enum NimScene {
  SingleChat = 'p2p',
  GroupChat = 'team',
  SuperGroupChat = 'superTeam'
}

export const enum FileType {
  Image = 'image',
  Audio = 'audio',
  Video = 'video',
  File = 'file'
}
  2. Sending messages: a voice message needs some processing before the Yunxin API is called.
    Microphone permission is usually obtained through the browser's own navigator.mediaDevices.getUserMedia.
    Below, the chat box and the recording/permission handling are wrapped into a component (a distilled sketch of just the recording flow follows the full component).
<script lang="ts" setup>
import { ref, reactive, onMounted, nextTick, watch, inject, computed } from 'vue'
import { message } from 'ant-design-vue'
import { formatDateTime } from '@/utils/utils'
const chunks = ref<Array<any>>([])
const chunkList: any = ref<
  Array<{
    idClient: string
    duration: number
    stream: string
    username: string
    flow: string
    time: string
  }>
>([]) // explicitly typed
const audioDiv = ref<any>()
const btnText = ref('Hold to talk')
let recorder: any = reactive({})

// Voice message list from the parent component (history messages plus sent/received messages)
const props = defineProps({
  audioMessageList: {
    type: Array,
    default: () => []
  }
})
const isDataLoaded = ref(false) // flag: history has already been requested once

// Events emitted to the parent: send a voice message / create the team
const emits = defineEmits(['sendFileVoice', 'createTeam'])

const requestAudioAccess = () => {
  navigator.mediaDevices.getUserMedia({ audio: true }).then(
    (stream) => {
      recorder = new window.MediaRecorder(stream)
      bindEvents()
    },
    (error) => {
      message.info('Something went wrong; please make sure the browser is allowed to record audio')
    }
  )
}

const onMousedown = () => {
  onStart()
  btnText.value = 'Release to finish'
}

const onMouseup = () => {
  onStop()
  btnText.value = 'Hold to talk'
}

const onStart = () => {
  recorder.start()
}

const onStop = () => {
  recorder.stop()
}

const onPlay = (index: any) => {
  chunkList.value.forEach((item: any) => {
    item.wink = false
  })
  const item: any = chunkList.value[index]
  audioDiv.value.src = item.stream
  audioDiv.value.play()
  bindAudioEvent(index)
}

const bindAudioEvent = (index: any) => {
  const item: any = chunkList.value[index]
  audioDiv.value.onplaying = () => {
    item.wink = true
  }
  audioDiv.value.onended = () => {
    item.wink = false
  }
}

const bindEvents = () => {
  recorder.ondataavailable = getRecordingData
  recorder.onstop = saveRecordingData
}

const getRecordingData = (value) => {
  chunks.value.push(value.data)
}

// Save and convert the recorded audio
const saveRecordingData = () => {
  const blob = new Blob(chunks.value, { type: 'audio/ogg; codecs=opus' })
  chunks.value = []
  let duration = Math.ceil(blob.size / (6600 * 2))
  if (duration <= 1) {
    message.info('The recording is too short')
    return
  }
  if (duration > 60) {
    duration = 60
  }
  // Hand the blob to the parent, which sends it through the Yunxin IM integration
  emits('sendFileVoice', blob)
  const audioStream = URL.createObjectURL(blob)
  const currentTimestamp = formatDateTime(Date.now())
  chunkList.value.push({ duration, stream: audioStream, time: currentTimestamp })
  nextTick(() => {
    let msgList = document.getElementById('msgList')
    // scroll to the bottom
    if (msgList) {
      msgList.scrollTop = msgList.scrollHeight
    }
  })
}

watch(
  () => props.audioMessageList.length,
  (newValue) => {
    console.log('newValue --->>>', newValue)
    if (props.audioMessageList && props.audioMessageList.length > 0) {
      chunkList.value = [...props.audioMessageList]
    }
  },
  { deep: true, immediate: true }
)

const shouldShowTime = (item) => {
  const currentIndex = chunkList.value.findIndex((el) => el.idClient === item.idClient)
  if (currentIndex === 0) {
    return true // always show the time for the first item
  }
  console.log(item, 'item.time')
  const currentTime = Date.parse(item.time)
  const previousTime = Date.parse(chunkList.value[currentIndex - 1].time)
  const timeDiffMinutes = (currentTime - previousTime) / (1000 * 60)
  return timeDiffMinutes >= 2 // show the time only when at least two minutes have passed
}

const handleScroll = () => {
  const container = document.getElementById('msgList')
  if (container) {
    // check whether the list has been scrolled to the top
    if (container.scrollTop === 0 && !isDataLoaded.value) {
      console.log('scrolled to the top, loading history')
      emits('createTeam')
      isDataLoaded.value = true
    }
  }
}

onMounted(() => {
  if (!navigator.mediaDevices) {
    message.info('Your browser cannot access user media devices')
    return
  }
  if (!window.MediaRecorder) {
    message.info('Your browser does not support audio recording')
    return
  }
  requestAudioAccess()
})
</script>
<template>
  <div class="recorder-wrapper">
    <div class="phone">
      <div class="phone-body">
        <div class="phone-content">
          <transition-group tag="ul" id="msgList" @scroll="handleScroll" class="msg-list">
            <li v-for="(item, index) in chunkList" :key="index" class="msg">
              <div class="msg-time" v-if="shouldShowTime(item)">{{ item.time }}</div>
              <div
                class="msg-content"
                :class="item.flow === 'in' ? 'from-user' : 'from-other'"
                @click="onPlay(index)"
                @touchend.prevent="onPlay(index)"
              >
                <div class="msg-body" v-if="item.flow === 'in'">
                  <div class="username">{{ item.username }}</div>
                  <div class="avatar"></div>
                  <div
                    :class="[item.flow === 'in' ? 'audio-left' : 'audio-right', { wink: item.wink }]"
                    v-cloak
                    :style="{ width: 20 * item.duration + 'px' }"
                  >
                    <span>(</span><span>(</span><span>(</span>
                  </div>
                  <div class="duration">{{ item.duration }}"</div>
                </div>
                <div class="msg-body" v-else>
                  <div class="duration">{{ item.duration }}"</div>
                  <div
                    :class="[item.flow === 'in' ? 'audio-left' : 'audio-right', { wink: item.wink }]"
                    v-cloak
                    :style="{ width: 20 * item.duration + 'px' }"
                  >
                    <span>(</span><span>(</span><span>(</span>
                  </div>
                  <div class="avatar"></div>
                  <div class="username">{{ item.username }}</div>
                </div>
              </div>
            </li>
          </transition-group>
        </div>
      </div>
    </div>
    <audio ref="audioDiv"></audio>
    <div
      class="phone-operate"
      @mousedown="onMousedown"
      @touchstart.prevent="onMousedown"
      @mouseup="onMouseup"
      @touchend.prevent="onMouseup"
    >
      {{ btnText }}
    </div>
  </div>
</template>
<style lang="less" scoped>
.phone {margin: 0 auto;font-size: 12px;border-radius: 35px;background-color: #e1e1e1;box-sizing: border-box;user-select: none;
}
.phone-body {height: 100%;background-color: #fff;
}
.phone-head {height: 30px;line-height: 30px;color: #fff;font-weight: bold;background-color: #000;
}
.phone-head span {display: inline-block;
}
.phone-head span:nth-child(2) {width: 100px;text-align: center;
}
.phone-head span:nth-child(3) {float: right;margin-right: 10px;
}
.phone-content {height: 282px;background-color: #f1eded;
}
.phone-operate {position: relative;line-height: 28px;text-align: center;cursor: pointer;font-weight: bold;box-shadow: 0 -1px 1px rgba(0, 0, 0, 0.1);
}
.phone-operate:active {background-color: #95a5a6;
}
.phone-operate:active:before {position: absolute;left: 50%;transform: translate(-50%, 0);top: -2px;content: '';width: 0%;height: 2px;background-color: #7bed9f;animation: loading 1s ease-in-out infinite backwards;
}
.msg-list {margin: 0;padding: 0;height: 100%;overflow-y: auto;-webkit-overflow-scrolling: touch;
}
.msg-list::-webkit-scrollbar {display: none;
}
.msg-list .msg {list-style: none;padding: 0 8px;margin: 10px 0;overflow: hidden;cursor: pointer;
}
.from-user {float: left;
}
.from-other {float: right;
}
.msg-time {width: 100%;text-align: center;height: 30px;line-height: 30px;
}
.msg-list .msg .avatar,
.msg-list .msg .audio-right,
.msg-list .msg .duration,
.msg-list .msg .username {float: right;
}
.load-history {width: 100%;height: 30px;text-align: center;
}
.msg-list .msg .avatar {width: 24px;height: 24px;line-height: 24px;text-align: center;background-color: #000;background: url('@/assets/images/avatar.png') 0 0;background-size: 100%;margin-right: 5px;
}
.msg-list .msg .audio-right {position: relative;margin-right: 6px;max-width: 116px;min-width: 30px;height: 24px;line-height: 24px;padding: 0 4px 0 10px;border-radius: 2px;color: #000;text-align: right;background-color: rgba(107, 197, 107, 0.85);
}
.msg-list .msg .audio-left {position: relative;margin-right: 6px;max-width: 116px;min-width: 30px;height: 24px;line-height: 24px;padding: 0 10px 0 4px;border-radius: 2px;color: #000;text-align: left;direction: rtl;background-color: rgba(107, 197, 107, 0.85);
}
.msg-list .msg.eg {cursor: default;
}
.msg-list .msg.eg .audio {text-align: left;
}
.msg-list .msg .audio-right:before {position: absolute;right: -8px;top: 8px;content: '';display: inline-block;width: 0;height: 0;border-style: solid;border-width: 4px;border-color: transparent transparent transparent rgba(107, 197, 107, 0.85);
}
.msg-list .msg .audio-left:before {position: absolute;left: -8px;top: 8px;content: '';display: inline-block;width: 0;height: 0;border-style: solid;border-width: 4px;border-color: transparent rgba(107, 197, 107, 0.85) transparent transparent;
}
.msg-list .msg .audio-left span {color: rgba(255, 255, 255, 0.8);display: inline-block;transform-origin: center;
}
.msg-list .msg .audio-left span:nth-child(1) {font-weight: 400;
}
.msg-list .msg .audio-left span:nth-child(2) {transform: scale(0.8);font-weight: 500;
}
.msg-list .msg .audio-left span:nth-child(3) {transform: scale(0.5);font-weight: 700;
}
.msg-list .msg .audio-right span {color: rgba(255, 255, 255, 0.8);display: inline-block;transform-origin: center;
}
.msg-list .msg .audio-right span:nth-child(1) {font-weight: 400;
}
.msg-list .msg .audio-right span:nth-child(2) {transform: scale(0.8);font-weight: 500;
}
.msg-list .msg .audio-right span:nth-child(3) {transform: scale(0.5);font-weight: 700;
}
.msg-list .msg .audio-left.wink span {animation: wink 1s ease infinite;
}
.msg-list .msg .audio-right.wink span {animation: wink 1s ease infinite;
}
.msg-list .msg .duration {margin: 3px 2px;
}
.msg-list .msg .msg-content {display: flex;flex-direction: column;
}
.msg-list .msg .msg-content .msg-time {margin-bottom: 5px;
}
.msg-list .msg .msg-content .msg-body {display: flex;flex-direction: row;align-items: center;
}
.msg-list .msg .msg-content .msg-body .username {padding: 0 5px 0 0;
}
.msg-list .msg .msg-content .msg-body .audio {margin-right: 10px;
}
.msg-list .msg .msg-content .msg-body .duration {font-style: italic;
}
.fade-enter-active,
.fade-leave-active {transition: opacity 0.5s;
}
.fade-enter,
.fade-leave-to {opacity: 0;
}
@keyframes wink {from {color: rgba(255, 255, 255, 0.8);}to {color: rgba(255, 255, 255, 0.1);}
}
@keyframes loading {from {width: 0%;}to {width: 100%;}
}
</style>
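
As mentioned above, here is a distilled sketch of just the permission and recording flow the component relies on, using only the browser's MediaRecorder API (no Yunxin calls); it assumes the caller passes a callback that receives the finished Blob:

// Minimal sketch of getUserMedia + MediaRecorder, independent of the component above
const startRecording = async (onBlob: (blob: Blob) => void) => {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true })
  const recorder = new MediaRecorder(stream)
  const chunks: Blob[] = []
  recorder.ondataavailable = (e) => chunks.push(e.data)
  recorder.onstop = () => onBlob(new Blob(chunks, { type: 'audio/ogg; codecs=opus' }))
  recorder.start()
  return () => recorder.stop() // call the returned function to finish recording and emit the blob
}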

3. Convert the voice message and call the send API wrapped in Nim

<!--
 * @Description: Chat dialog component
 * @Author: 黑猫
 * @Date: 2023-06-30 15:31:32
 * @LastEditTime: 2023-07-17 11:04:05
 * @FilePath: \pc-exam-invigilate\src\components\chatting\index.vue
-->
<script lang="ts" setup>
import { ref, onMounted, computed, reactive } from 'vue'
import Nim from '@/utils/Nim'
import Header from '@/components/Header.vue'
import ChatPanel from './components/ChatPanel.vue'
import { NimScene } from '@/constants'
import { useConfigStore } from '@/stores/modules/config'
import { formatDateTime } from '@/utils/utils'
const visible = ref(false)
setTimeout(() => {
  visible.value = true
}, 100)

const emit = defineEmits(['closeChatting', 'openChatting', 'createTeam'])

const props = defineProps({
  checkedObj: {
    type: Object,
    default: () => ({})
  },
  inspectorNimRef: {
    type: Object,
    default: () => ({})
  },
  audioMessageList: {
    type: Array,
    default: () => []
  }
})

// Note: this computed exposes the parent's array; pushMsg below mutates it through the shared reference
const audioMessageList = computed(() => {
  return props.audioMessageList
})

const username = ref('Invigilator')
const teamId = ref<string>('')
const configStore = useConfigStore()
const chatPanelRef = ref<InstanceType<typeof ChatPanel> | null>(null)

// Send a voice message
const sendFileVoice = (audio) => {
  // Create a file-type input node
  let fileInput = document.createElement('input')
  fileInput.type = 'file'
  // Create a Blob that represents the audio file
  let audioBlob = new Blob([audio], { type: 'audio/ogg' })
  // Create a File object for the audio
  let file = new File([audioBlob], 'audio.ogg', { type: 'audio/ogg' })
  // Build a custom FileList via DataTransfer
  let fileList = new DataTransfer()
  fileList.items.add(file)
  // Assign the custom FileList to fileInput.files
  Object.defineProperty(fileInput, 'files', {
    value: fileList.files,
    writable: false
  })
  teamId.value = (configStore.teamId || localStorage.getItem('teamId')) as string
  try {
    props.inspectorNimRef.sendFile({
      scene: NimScene.GroupChat,
      to: teamId.value,
      type: 'audio',
      fileInput: fileInput,
      beginupload: function (upload) {
        // - if you pass fileInput, do not modify it before this callback fires
        // - after this callback you may cancel the upload by calling upload.abort()
      },
      uploadprogress: function (obj) {
        // console.log('total size: ' + obj.total + ' bytes')
        // console.log('uploaded: ' + obj.loaded + ' bytes')
        // console.log('progress: ' + obj.percentage)
        // console.log('progress text: ' + obj.percentageText)
      },
      uploaddone: function (error, file) {
        // console.log(error)
        // console.log(file)
        console.log('upload ' + (!error ? 'succeeded' : 'failed'))
      },
      beforesend: function (msg) {
        // console.log('sending team voice message, id=' + msg.idClient)
        pushMsg(msg) // pushMsg is your own code: store the outgoing message in your own data
      },
      done: sendMsgDone
    })
  } catch (error) {
    console.log(error)
  }
}

// Store the message in the shared list
const pushMsg = (msg: any) => {
  // e.g. push the message into the shared array
  const dataTime = formatDateTime(msg.time)
  const duration = Math.ceil(msg.file.size / (6600 * 2))
  if (duration <= 1) {
    return
  }
  audioMessageList.value.push({
    duration,
    stream: msg.file.url,
    wink: false,
    username: msg.fromNick,
    flow: msg.flow,
    time: dataTime
  })
}

// Whether the voice message was sent successfully
const sendMsgDone = (error: any, file: any) => {
  if (!error) {
    console.log('message sent')
  } else {
    console.error('failed to send message')
  }
}

// Close the chat dialog
const handleClose = () => {
  emit('closeChatting', false)
}

const createTeam = () => {
  emit('createTeam')
}
</script>
<template>
  <a-modal v-model:open="visible" @cancel="handleClose" :footer="null" width="90%">
    <a-layout>
      <a-layout-header class="header" style="background: #fff">
        <Header :username="username" />
      </a-layout-header>
      <a-layout style="margin: 10px">
        <a-layout-content>
          <div style="flex: 1">
            <ChatPanel
              ref="chatPanelRef"
              :audioMessageList="audioMessageList"
              @sendFileVoice="sendFileVoice"
              @createTeam="createTeam"
            />
          </div>
        </a-layout-content>
      </a-layout>
    </a-layout>
  </a-modal>
</template>
<style lang="less" scoped>
.chatting-layout {
  display: flex;
  flex-direction: column;
  flex: 1;
  background-color: #eef1f6;

  .header,
  .chat-list-container,
  .chat-dialog-container {
    display: flex;
    flex-direction: column;
    background: #fff;
    text-align: center;
  }

  .header {
    height: auto;
    margin-bottom: 15px;
    padding: 0;

    :deep(.header-wrapper) {
      padding: 0 166px 0 72px;
      box-shadow: none;
    }
  }

  .chat-list-container {
    padding-bottom: 10px;

    @media (max-width: 1440px) {
      width: 270px;
    }

    border-radius: 0px 20px 0px 0px;

    :deep(.teacher-info) {
      flex-direction: row;
      margin-top: none;
      padding: 20px 0;
      align-items: center;
      justify-content: center;

      img {
        margin-right: 17px;
      }

      span {
        margin-top: 0;
      }
    }
  }

  .chat-dialog-container {
    margin: 0 15px;
    padding: 0;
    border-radius: 20px;

    .send-chat {
      height: 218px;
    }
  }

  .examinee-info-container {
    @media (max-width: 1440px) {
      width: 270px;
    }

    display: flex;
    flex-direction: column;
    overflow: hidden;
  }
}
</style>
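
The duration estimate Math.ceil(size / (6600 * 2)) appears in several places above; if you prefer a single helper, a sketch like the one below makes the assumption explicit. The 6600 * 2 bytes-per-second figure is this project's empirical constant for its opus recordings, not an SDK value, and the 60-second cap mirrors the check in saveRecordingData:

// Rough duration estimate (in seconds) used throughout this page
const BYTES_PER_SECOND = 6600 * 2
export const estimateAudioDuration = (byteSize: number): number =>
  Math.min(60, Math.ceil(byteSize / BYTES_PER_SECOND))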

That covers a basic setup for sending and receiving voice messages and fetching message history.
If your requirements are more complex, or you are simply curious, the API documentation covers many more capabilities.

This concludes the walkthrough of integrating the NetEase Yunxin SDK for instant messaging on the web (voice messaging); I hope it is helpful to other developers.


