IJKPLAYER Source Code Analysis: OpenSL ES Playback

2024-04-08 14:04


Preface

    As with IJKPLAYER's AudioTrack playback path, the OpenSL ES backend must conform to the SDL_Aout interface contract. The difference is that OpenSL ES playback happens entirely in native code, using the NDK's OpenSL ES API. For a detailed introduction to OpenSL ES, see the official OpenSL ES documentation.

    The Pipeline and SDL_Aout structures, and how they are created, are the same as in the AudioTrack path; see the earlier article IJKPLAYER源码分析-AudioTrack播放 (CSDN).

Interfaces

Creating the SDL_Aout

    The OpenSL ES SDL_Aout object is created through the following call chain:

ijkmp_android_create() => ffpipeline_create_from_android() => func_open_audio_output() => SDL_AoutAndroid_CreateForOpenSLES()

    It is gated by the opensles option, which defaults to 0, i.e. AudioTrack playback:

    { "opensles", "OpenSL ES: enable", OPTION_OFFSET(opensles), OPTION_INT(0, 0, 1) },

     If the opensles option is enabled, playback goes through OpenSL ES; otherwise it goes through AudioTrack:

static SDL_Aout *func_open_audio_output(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    SDL_Aout *aout = NULL;
    if (ffp->opensles) {
        aout = SDL_AoutAndroid_CreateForOpenSLES();
    } else {
        aout = SDL_AoutAndroid_CreateForAudioTrack();
    }
    if (aout)
        SDL_AoutSetStereoVolume(aout, pipeline->opaque->left_volume, pipeline->opaque->right_volume);
    return aout;
}

     The OpenSL ES SDL_Aout object is created here, following the SDL_Aout interface contract:

SDL_Aout *SDL_AoutAndroid_CreateForOpenSLES()
{
    SDLTRACE("%s\n", __func__);
    SDL_Aout *aout = SDL_Aout_CreateInternal(sizeof(SDL_Aout_Opaque));
    if (!aout)
        return NULL;

    SDL_Aout_Opaque *opaque = aout->opaque;
    opaque->wakeup_cond  = SDL_CreateCond();
    opaque->wakeup_mutex = SDL_CreateMutex();

    int ret = 0;

    SLObjectItf slObject = NULL;
    ret = slCreateEngine(&slObject, 0, NULL, 0, NULL, NULL);
    CHECK_OPENSL_ERROR(ret, "%s: slCreateEngine() failed", __func__);
    opaque->slObject = slObject;

    ret = (*slObject)->Realize(slObject, SL_BOOLEAN_FALSE);
    CHECK_OPENSL_ERROR(ret, "%s: slObject->Realize() failed", __func__);

    SLEngineItf slEngine = NULL;
    ret = (*slObject)->GetInterface(slObject, SL_IID_ENGINE, &slEngine);
    CHECK_OPENSL_ERROR(ret, "%s: slObject->GetInterface() failed", __func__);
    opaque->slEngine = slEngine;

    SLObjectItf slOutputMixObject = NULL;
    const SLInterfaceID ids1[] = {SL_IID_VOLUME};
    const SLboolean req1[] = {SL_BOOLEAN_FALSE};
    ret = (*slEngine)->CreateOutputMix(slEngine, &slOutputMixObject, 1, ids1, req1);
    CHECK_OPENSL_ERROR(ret, "%s: slEngine->CreateOutputMix() failed", __func__);
    opaque->slOutputMixObject = slOutputMixObject;

    ret = (*slOutputMixObject)->Realize(slOutputMixObject, SL_BOOLEAN_FALSE);
    CHECK_OPENSL_ERROR(ret, "%s: slOutputMixObject->Realize() failed", __func__);

    aout->free_l       = aout_free_l;
    aout->opaque_class = &g_opensles_class;
    aout->open_audio   = aout_open_audio;
    aout->pause_audio  = aout_pause_audio;
    aout->flush_audio  = aout_flush_audio;
    aout->close_audio  = aout_close_audio;
    aout->set_volume   = aout_set_volume;
    aout->func_get_latency_seconds = aout_get_latency_seconds;

    return aout;
fail:
    aout_free_l(aout);
    return NULL;
}

The func_get_latency_seconds interface

  • This is the one interface OpenSL ES adds on top of what AudioTrack provides;
  • It computes how many milliseconds of audio are buffered inside OpenSL ES, which A/V synchronization uses to correct the audio clock.

    Compared with AudioTrack, OpenSL ES adds the func_get_latency_seconds interface:

    aout->func_get_latency_seconds = aout_get_latency_seconds;

    Its implementation:

static double aout_get_latency_seconds(SDL_Aout *aout)
{
    SDL_Aout_Opaque *opaque = aout->opaque;
    SLAndroidSimpleBufferQueueState state = {0};
    SLresult slRet = (*opaque->slBufferQueueItf)->GetState(opaque->slBufferQueueItf, &state);
    if (slRet != SL_RESULT_SUCCESS) {
        ALOGE("%s failed\n", __func__);
        return ((double)opaque->milli_per_buffer) * OPENSLES_BUFFERS / 1000;
    }

    // assume there is always a buffer in copying
    // state.count is the number of buffers still queued (in use)
    double latency = ((double)opaque->milli_per_buffer) * state.count / 1000;
    return latency;
}
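The arithmetic in aout_get_latency_seconds() can be checked in isolation. Below is a minimal standalone sketch (plain C; the helper names are mine, not ijkplayer's), assuming the 10 ms / 255-buffer constants defined later in this article:

```c
#include <assert.h>

#define OPENSLES_BUFFERS 255 /* maximum number of buffers */
#define OPENSLES_BUFLEN  10  /* ms of PCM per buffer */

/* Same formula as aout_get_latency_seconds(): queued buffers -> seconds. */
static double latency_seconds(int milli_per_buffer, int queued_buffers)
{
    return ((double)milli_per_buffer) * queued_buffers / 1000;
}

/* Fallback when GetState() fails: assume the whole queue is full. */
static double worst_case_latency_seconds(int milli_per_buffer)
{
    return ((double)milli_per_buffer) * OPENSLES_BUFFERS / 1000;
}
```

With 100 buffers still queued the latency is 1.0 s; the failure-path worst case is 255 * 10 ms = 2.55 s.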

     That is the SDL_Aout creation process for OpenSL ES. It largely mirrors AudioTrack and follows the same SDL_Aout interface contract; the difference is the extra func_get_latency_seconds interface, which reports how many milliseconds of audio the device has buffered.

open_audio

Initialization

    Opening OpenSL ES, i.e. initializing it, mainly does the following:

  • The open_audio interface does roughly the same job as its AudioTrack counterpart;
  • It passes the source audio parameters (sample rate, channel count, bits per sample) to OpenSL ES and calls CreateAudioPlayer() to create an SLObjectItf player instance;
  • From the player instance it queries the SLPlayItf, SLVolumeItf, and SLAndroidSimpleBufferQueueItf interfaces;
  • It registers a callback on the SLAndroidSimpleBufferQueueItf buffer queue obtained from the player instance;
  • From the final audio parameters (sample rate, channel count, bits per sample) and the amount of PCM each OpenSL ES buffer holds (10 ms), it computes the total buffer capacity and allocates the buffer;
  • It starts an audio_thread that plays the same role as AudioTrack's audio_thread: it performs all audio operations asynchronously, fetching PCM data and feeding it to OpenSL ES;
  • It saves the final audio parameters in the global is->audio_tgt; if the audio parameters change later, the data must be resampled and is->audio_tgt reset;
  • It sets OpenSL ES's default latency, i.e. the maximum number of seconds of PCM it may have buffered, a value that matters when correcting the audio clock during A/V synchronization:

    // Set the default latency: if a func_set_default_latency_seconds callback
    // exists it is invoked, otherwise the minimal_latency_seconds field is set
    SDL_AoutSetDefaultLatencySeconds(ffp->aout, ((double)(2 * spec.size)) / audio_hw_params->bytes_per_sec);
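Plugging in illustrative numbers makes the default clearer. A sketch (the helper name is mine, not ijkplayer's), assuming spec.size is the buffer_capacity that aout_open_audio() returns, which works out to 449820 bytes for 44.1 kHz stereo S16 (see the PCM buffer section below), with bytes_per_sec = 44100 * 4 = 176400:

```c
#include <assert.h>

/* Default latency handed to SDL_AoutSetDefaultLatencySeconds():
 * twice the device buffer size, expressed as seconds of PCM. */
static double default_latency_seconds(int spec_size, int bytes_per_sec)
{
    return ((double)(2 * spec_size)) / bytes_per_sec;
}
```

For these numbers the result is 2 * 449820 / 176400 = 5.1 s.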

PCM buffers

    Each OpenSL ES buffer holds 10 ms of PCM data, and there are 255 such buffers, i.e. 10 * 255 = 2550 ms of audio in total.

    The two constants are defined as:

#define OPENSLES_BUFFERS 255 /* maximum number of buffers */
#define OPENSLES_BUFLEN  10 /* ms */
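These two constants drive the sizing arithmetic in the snippet below. As a sanity check, here is a standalone sketch of that arithmetic (the helper and struct names are mine) for the common 44.1 kHz stereo 16-bit case; note that OpenSL ES stores samplesPerSec in milliHz, hence the division by 1000000:

```c
#include <assert.h>

#define OPENSLES_BUFFERS 255 /* maximum number of buffers */
#define OPENSLES_BUFLEN  10  /* ms */

typedef struct {
    int bytes_per_frame;    /* one PCM frame = one sample across all channels */
    int frames_per_buffer;  /* frames per 10 ms buffer */
    int bytes_per_buffer;
    int buffer_capacity;    /* total bytes across all 255 buffers */
} BufSizes;

static BufSizes compute_buf_sizes(int num_channels, int bits_per_sample,
                                  long samples_per_sec_milli_hz)
{
    BufSizes s;
    s.bytes_per_frame   = num_channels * bits_per_sample / 8;
    s.frames_per_buffer = (int)(OPENSLES_BUFLEN * samples_per_sec_milli_hz / 1000000);
    s.bytes_per_buffer  = s.bytes_per_frame * s.frames_per_buffer;
    s.buffer_capacity   = OPENSLES_BUFFERS * s.bytes_per_buffer;
    return s;
}
```

For 44.1 kHz stereo 16-bit: 4 bytes per frame, 441 frames and 1764 bytes per buffer, and a total capacity of 449820 bytes, i.e. 2.55 s of audio.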

    The final OpenSL ES buffer capacity is computed and the buffer allocated:

    // For OpenSL ES the payload is raw PCM; one sample across all channels is one frame
    opaque->bytes_per_frame   = format_pcm->numChannels * format_pcm->bitsPerSample / 8;
    // Each buffer holds 10 ms of PCM data
    opaque->milli_per_buffer  = OPENSLES_BUFLEN;
    // Number of PCM frames per OPENSLES_BUFLEN (10 ms) buffer
    opaque->frames_per_buffer = opaque->milli_per_buffer * format_pcm->samplesPerSec / 1000000; // samplesPerSec is in milliHz
    opaque->bytes_per_buffer  = opaque->bytes_per_frame * opaque->frames_per_buffer;
    opaque->buffer_capacity   = OPENSLES_BUFFERS * opaque->bytes_per_buffer;

    ALOGI("OpenSL-ES: bytes_per_frame  = %d bytes\n",  (int)opaque->bytes_per_frame);
    ALOGI("OpenSL-ES: milli_per_buffer = %d ms\n",     (int)opaque->milli_per_buffer);
    ALOGI("OpenSL-ES: frame_per_buffer = %d frames\n", (int)opaque->frames_per_buffer);
    ALOGI("OpenSL-ES: bytes_per_buffer = %d bytes\n",  (int)opaque->bytes_per_buffer);
    ALOGI("OpenSL-ES: buffer_capacity  = %d bytes\n",  (int)opaque->buffer_capacity);

    // Allocate the final PCM buffer
    opaque->buffer = malloc(opaque->buffer_capacity);
    CHECK_COND_ERROR(opaque->buffer, "%s: failed to alloc buffer %d\n", __func__, (int)opaque->buffer_capacity);

    // (*opaque->slPlayItf)->SetPositionUpdatePeriod(opaque->slPlayItf, 1000);

    // enqueue empty buffer to start play
    memset(opaque->buffer, 0, opaque->buffer_capacity);
    for (int i = 0; i < OPENSLES_BUFFERS; ++i) {
        ret = (*opaque->slBufferQueueItf)->Enqueue(opaque->slBufferQueueItf,
                                                   opaque->buffer + i * opaque->bytes_per_buffer,
                                                   opaque->bytes_per_buffer);
        CHECK_OPENSL_ERROR(ret, "%s: slBufferQueueItf->Enqueue(000...) failed", __func__);
    }

    It is worth noting that the audio sampling parameters OpenSL ES supports here are:

    CHECK_COND_ERROR((desired->format == AUDIO_S16SYS), "%s: not AUDIO_S16SYS", __func__);
    CHECK_COND_ERROR((desired->channels == 2 || desired->channels == 1), "%s: not 1,2 channel", __func__);
    CHECK_COND_ERROR((desired->freq >= 8000 && desired->freq <= 48000), "%s: unsupport freq %d Hz", __func__, desired->freq);
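The same three constraints can be expressed as one predicate, which is handy for unit-testing the supported-format matrix. A sketch (the function name is mine; AUDIO_S16SYS is ijksdl's native-endian signed 16-bit tag, stubbed here with SDL's little-endian value 0x8010 purely for illustration):

```c
#include <assert.h>

/* Illustrative stub; ijksdl defines the real AUDIO_S16SYS
 * (0x8010 on little-endian hosts, following SDL's encoding). */
#define AUDIO_S16SYS 0x8010

/* Returns 1 when OpenSL ES playback accepts these parameters. */
static int opensles_params_supported(int format, int channels, int freq)
{
    return format == AUDIO_S16SYS
        && (channels == 1 || channels == 2)
        && freq >= 8000 && freq <= 48000;
}
```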

Call flow

    The call chain for opening OpenSL ES is identical to AudioTrack's, because both follow the same SDL_Aout interface contract:

read_thread() => stream_component_open() => audio_open() => SDL_AoutOpenAudio() => aout_open_audio()

    It finally reaches aout_open_audio():

static int aout_open_audio(SDL_Aout *aout, const SDL_AudioSpec *desired, SDL_AudioSpec *obtained)
{
    SDLTRACE("%s\n", __func__);
    assert(desired);
    SDLTRACE("aout_open_audio()\n");
    SDL_Aout_Opaque  *opaque     = aout->opaque;
    SLEngineItf       slEngine   = opaque->slEngine;
    SLDataFormat_PCM *format_pcm = &opaque->format_pcm;
    int               ret = 0;

    opaque->spec = *desired;

    // config audio src
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {
        SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE,
        OPENSLES_BUFFERS
    };

    int native_sample_rate = audiotrack_get_native_output_sample_rate(NULL);
    ALOGI("OpenSL-ES: native sample rate %d Hz\n", native_sample_rate);

    // OpenSL ES only supports PCM playback with the following parameters
    CHECK_COND_ERROR((desired->format == AUDIO_S16SYS), "%s: not AUDIO_S16SYS", __func__);
    CHECK_COND_ERROR((desired->channels == 2 || desired->channels == 1), "%s: not 1,2 channel", __func__);
    CHECK_COND_ERROR((desired->freq >= 8000 && desired->freq <= 48000), "%s: unsupport freq %d Hz", __func__, desired->freq);

    if (SDL_Android_GetApiLevel() < IJK_API_21_LOLLIPOP &&
        native_sample_rate > 0 &&
        desired->freq < native_sample_rate) {
        // Don't try to play back a sample rate higher than the native one,
        // since OpenSL ES will try to use the fast path, which AudioFlinger
        // will reject (fast path can't do resampling), and will end up with
        // too small buffers for the resampling. See http://b.android.com/59453
        // for details. This bug is still present in 4.4. If it is fixed later
        // this workaround could be made conditional.
        //
        // by VLC/android_opensles.c
        ALOGW("OpenSL-ES: force resample %lu to native sample rate %d\n",
              (unsigned long) format_pcm->samplesPerSec / 1000,
              (int) native_sample_rate);
        format_pcm->samplesPerSec = native_sample_rate * 1000;
    }

    format_pcm->formatType    = SL_DATAFORMAT_PCM;
    format_pcm->numChannels   = desired->channels;
    format_pcm->samplesPerSec = desired->freq * 1000; // milli Hz
    // format_pcm->numChannels   = 2;
    // format_pcm->samplesPerSec = SL_SAMPLINGRATE_44_1;
    format_pcm->bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
    format_pcm->containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
    switch (desired->channels) {
    case 2:
        format_pcm->channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
        break;
    case 1:
        format_pcm->channelMask = SL_SPEAKER_FRONT_CENTER;
        break;
    default:
        ALOGE("%s, invalid channel %d", __func__, desired->channels);
        goto fail;
    }
    format_pcm->endianness = SL_BYTEORDER_LITTLEENDIAN;

    SLDataSource audio_source = {&loc_bufq, format_pcm};

    // config audio sink
    SLDataLocator_OutputMix loc_outmix = {
        SL_DATALOCATOR_OUTPUTMIX,
        opaque->slOutputMixObject
    };
    SLDataSink audio_sink = {&loc_outmix, NULL};

    SLObjectItf slPlayerObject = NULL;
    const SLInterfaceID ids2[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE, SL_IID_VOLUME, SL_IID_PLAY };
    static const SLboolean req2[] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
    // The audio sampling parameters are handed to OpenSL ES here
    ret = (*slEngine)->CreateAudioPlayer(slEngine, &slPlayerObject, &audio_source,
                                         &audio_sink, sizeof(ids2) / sizeof(*ids2),
                                         ids2, req2);
    CHECK_OPENSL_ERROR(ret, "%s: slEngine->CreateAudioPlayer() failed", __func__);
    opaque->slPlayerObject = slPlayerObject;

    ret = (*slPlayerObject)->Realize(slPlayerObject, SL_BOOLEAN_FALSE);
    CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->Realize() failed", __func__);

    ret = (*slPlayerObject)->GetInterface(slPlayerObject, SL_IID_PLAY, &opaque->slPlayItf);
    CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->GetInterface(SL_IID_PLAY) failed", __func__);

    ret = (*slPlayerObject)->GetInterface(slPlayerObject, SL_IID_VOLUME, &opaque->slVolumeItf);
    CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->GetInterface(SL_IID_VOLUME) failed", __func__);

    ret = (*slPlayerObject)->GetInterface(slPlayerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &opaque->slBufferQueueItf);
    CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->GetInterface(SL_IID_ANDROIDSIMPLEBUFFERQUEUE) failed", __func__);

    ret = (*opaque->slBufferQueueItf)->RegisterCallback(opaque->slBufferQueueItf, aout_opensles_callback, (void*)aout);
    CHECK_OPENSL_ERROR(ret, "%s: slBufferQueueItf->RegisterCallback() failed", __func__);

    // set the player's state to playing
    // ret = (*opaque->slPlayItf)->SetPlayState(opaque->slPlayItf, SL_PLAYSTATE_PLAYING);
    // CHECK_OPENSL_ERROR(ret, "%s: slBufferQueueItf->slPlayItf() failed", __func__);

    // For OpenSL ES the payload is raw PCM; one sample across all channels is one frame
    opaque->bytes_per_frame   = format_pcm->numChannels * format_pcm->bitsPerSample / 8;
    // Each buffer holds 10 ms of PCM data
    opaque->milli_per_buffer  = OPENSLES_BUFLEN;
    // Number of PCM frames per OPENSLES_BUFLEN (10 ms) buffer
    opaque->frames_per_buffer = opaque->milli_per_buffer * format_pcm->samplesPerSec / 1000000; // samplesPerSec is in milliHz
    opaque->bytes_per_buffer  = opaque->bytes_per_frame * opaque->frames_per_buffer;
    opaque->buffer_capacity   = OPENSLES_BUFFERS * opaque->bytes_per_buffer;
    ALOGI("OpenSL-ES: bytes_per_frame  = %d bytes\n",  (int)opaque->bytes_per_frame);
    ALOGI("OpenSL-ES: milli_per_buffer = %d ms\n",     (int)opaque->milli_per_buffer);
    ALOGI("OpenSL-ES: frame_per_buffer = %d frames\n", (int)opaque->frames_per_buffer);
    ALOGI("OpenSL-ES: bytes_per_buffer = %d bytes\n",  (int)opaque->bytes_per_buffer);
    ALOGI("OpenSL-ES: buffer_capacity  = %d bytes\n",  (int)opaque->buffer_capacity);

    // Allocate the buffer according to the computed buffer_capacity
    opaque->buffer = malloc(opaque->buffer_capacity);
    CHECK_COND_ERROR(opaque->buffer, "%s: failed to alloc buffer %d\n", __func__, (int)opaque->buffer_capacity);

    // (*opaque->slPlayItf)->SetPositionUpdatePeriod(opaque->slPlayItf, 1000);

    // enqueue empty buffer to start play
    memset(opaque->buffer, 0, opaque->buffer_capacity);
    for (int i = 0; i < OPENSLES_BUFFERS; ++i) {
        ret = (*opaque->slBufferQueueItf)->Enqueue(opaque->slBufferQueueItf,
                                                   opaque->buffer + i * opaque->bytes_per_buffer,
                                                   opaque->bytes_per_buffer);
        CHECK_OPENSL_ERROR(ret, "%s: slBufferQueueItf->Enqueue(000...) failed", __func__);
    }

    opaque->pause_on = 1;
    opaque->abort_request = 0;
    opaque->audio_tid = SDL_CreateThreadEx(&opaque->_audio_tid, aout_thread, aout, "ff_aout_opensles");
    CHECK_COND_ERROR(opaque->audio_tid, "%s: failed to SDL_CreateThreadEx", __func__);

    if (obtained) {
        *obtained      = *desired;
        // Capacity of the OpenSL ES device buffers
        obtained->size = opaque->buffer_capacity;
        obtained->freq = format_pcm->samplesPerSec / 1000;
    }

    return opaque->buffer_capacity;
fail:
    aout_close_audio(aout);
    return -1;
}
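Two small details of aout_open_audio() are worth isolating: the channels-to-channelMask mapping and the Hz-to-milliHz conversion. A sketch, with the SL_SPEAKER_* values copied from the OpenSL ES header (to the best of my knowledge; verify against <SLES/OpenSLES.h>):

```c
#include <assert.h>
#include <stdint.h>

/* Values as defined by the OpenSL ES 1.0.1 header. */
#define SL_SPEAKER_FRONT_LEFT   ((uint32_t)0x00000001)
#define SL_SPEAKER_FRONT_RIGHT  ((uint32_t)0x00000002)
#define SL_SPEAKER_FRONT_CENTER ((uint32_t)0x00000004)

/* Mirrors the switch on desired->channels; 0 means unsupported. */
static uint32_t channel_mask_for(int channels)
{
    switch (channels) {
    case 2:  return SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
    case 1:  return SL_SPEAKER_FRONT_CENTER;
    default: return 0;
    }
}

/* SLDataFormat_PCM.samplesPerSec is in milliHz, not Hz. */
static long hz_to_milli_hz(int freq_hz)
{
    return (long)freq_hz * 1000;
}
```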

     That is the complete OpenSL ES initialization flow.

audio_thread

    This thread mainly does the following:

  • It responds to audio controls from the app layer, asynchronously handling play()/pause()/flush()/setVolume() and the like;
  • It pulls PCM data through the sdl_audio_callback and feeds it to OpenSL ES for playback;
  • Each pull fetches opaque->bytes_per_buffer bytes, i.e. OPENSLES_BUFLEN = 10 ms of PCM data.

Handling control operations

    Audio operations (play, pause, volume changes, flush(), and so on) are handled inside this critical section:

        SLAndroidSimpleBufferQueueState slState = {0};
        SLresult slRet = (*slBufferQueueItf)->GetState(slBufferQueueItf, &slState);
        if (slRet != SL_RESULT_SUCCESS) {
            ALOGE("%s: slBufferQueueItf->GetState() failed\n", __func__);
            SDL_UnlockMutex(opaque->wakeup_mutex);
        }

        SDL_LockMutex(opaque->wakeup_mutex);
        if (!opaque->abort_request && (opaque->pause_on || slState.count >= OPENSLES_BUFFERS)) {
            while (!opaque->abort_request && (opaque->pause_on || slState.count >= OPENSLES_BUFFERS)) {
                if (!opaque->pause_on) {
                    (*slPlayItf)->SetPlayState(slPlayItf, SL_PLAYSTATE_PLAYING);
                }
                SDL_CondWaitTimeout(opaque->wakeup_cond, opaque->wakeup_mutex, 1000);
                SLresult slRet = (*slBufferQueueItf)->GetState(slBufferQueueItf, &slState);
                if (slRet != SL_RESULT_SUCCESS) {
                    ALOGE("%s: slBufferQueueItf->GetState() failed\n", __func__);
                    SDL_UnlockMutex(opaque->wakeup_mutex);
                }
                if (opaque->pause_on)
                    (*slPlayItf)->SetPlayState(slPlayItf, SL_PLAYSTATE_PAUSED);
            }
            if (!opaque->abort_request && !opaque->pause_on) {
                (*slPlayItf)->SetPlayState(slPlayItf, SL_PLAYSTATE_PLAYING);
            }
        }
        if (opaque->need_flush) {
            opaque->need_flush = 0;
            (*slBufferQueueItf)->Clear(slBufferQueueItf);
        }
        if (opaque->need_set_volume) {
            opaque->need_set_volume = 0;
            SLmillibel level = android_amplification_to_sles((opaque->left_volume + opaque->right_volume) / 2);
            ALOGI("slVolumeItf->SetVolumeLevel((%f, %f) -> %d)\n", opaque->left_volume, opaque->right_volume, (int)level);
            slRet = (*slVolumeItf)->SetVolumeLevel(slVolumeItf, level);
            if (slRet != SL_RESULT_SUCCESS) {
                ALOGE("slVolumeItf->SetVolumeLevel failed %d\n", (int)slRet);
                // just ignore error
            }
        }
        SDL_UnlockMutex(opaque->wakeup_mutex);
        ......
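The gating condition of that loop, wait while paused or while every device buffer is still queued, can be written as a pure predicate, which makes the thread's blocking behavior easier to reason about. A sketch (the function name is mine; the fields mirror SDL_Aout_Opaque):

```c
#include <assert.h>

#define OPENSLES_BUFFERS 255

/* 1 when aout_thread should block on the condition variable:
 * playback is paused, or all device buffers are queued so there is
 * no room to enqueue another 10 ms chunk. Aborting overrides both. */
static int aout_thread_should_wait(int abort_request, int pause_on, int queued_buffers)
{
    return !abort_request && (pause_on || queued_buffers >= OPENSLES_BUFFERS);
}
```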

sdl_audio_callback

    The logic here is the same as for AudioTrack; see the companion article IJKPLAYER源码分析-AudioTrack播放 (CSDN).

  • The callback only has to deliver opaque->bytes_per_buffer bytes of PCM data, whether that is silence or real playable audio.

Feeding PCM data

    The fetched PCM data is then fed to OpenSL ES for playback:

        ......
        next_buffer = opaque->buffer + next_buffer_index * bytes_per_buffer;
        next_buffer_index = (next_buffer_index + 1) % OPENSLES_BUFFERS;
        audio_cblk(userdata, next_buffer, bytes_per_buffer);

        if (opaque->need_flush) {
            (*slBufferQueueItf)->Clear(slBufferQueueItf);
            opaque->need_flush = false;
        }

        if (opaque->need_flush) {
            ALOGE("flush");
            opaque->need_flush = 0;
            (*slBufferQueueItf)->Clear(slBufferQueueItf);
        } else {
            // Each Enqueue() hands OpenSL ES exactly OPENSLES_BUFLEN (10 ms) worth of PCM samples
            slRet = (*slBufferQueueItf)->Enqueue(slBufferQueueItf, next_buffer, bytes_per_buffer);
            if (slRet == SL_RESULT_SUCCESS) {
                // do nothing
            } else if (slRet == SL_RESULT_BUFFER_INSUFFICIENT) {
                // don't retry, just pass through
                ALOGE("SL_RESULT_BUFFER_INSUFFICIENT\n");
            } else {
                ALOGE("slBufferQueueItf->Enqueue() = %d\n", (int)slRet);
                break;
            }
        }
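The snippet treats opaque->buffer as a ring of 255 fixed-size slots. The index arithmetic can be sketched on its own (helper names are mine):

```c
#include <assert.h>

#define OPENSLES_BUFFERS 255

/* Byte offset of a slot inside the single large PCM allocation. */
static int slot_offset(int index, int bytes_per_buffer)
{
    return index * bytes_per_buffer;
}

/* Advance to the next slot, wrapping at OPENSLES_BUFFERS. */
static int next_slot(int index)
{
    return (index + 1) % OPENSLES_BUFFERS;
}
```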
