A Simple FFmpeg-Based Video Player on Android: A Three-Thread Player Implementation (Final)

2024-05-11 06:32

This article walks through a simple FFmpeg-based video player on Android, implemented with three threads. Hopefully it offers some reference value to developers working on similar problems.

I've spent more than a week on this player, going from two threads to three. The relationships between the threads are complicated and took a long time to get right, and my C++ is honestly limited, so a lot of things were unfamiliar. I finally have something presentable, with no serious bugs left, or at least none that I've found. The code is still fairly messy, and the approach used for audio/video synchronization isn't great; I'll optimize it gradually later and use it as-is for now.

One thread reads AVPackets and stores them in arrays, while the other two threads handle decoding and playback. This avoids the problem from the previous post, where two threads each opened the same file, so a network resource no longer has to be read twice. My understanding of FFmpeg is still shallow and there is a lot I don't understand, so for now I'm doing it the way I understand it.
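The reader/decoder relationship described above is essentially a bounded producer-consumer queue. Here is a minimal, self-contained sketch of that pattern; the names and the `std::condition_variable`-based locking are my own (the player below uses raw vectors with pthread mutexes and condition variables instead), and `T` stands in for `AVPacket *`:

```cpp
#include <condition_variable>
#include <mutex>
#include <vector>

// Bounded queue: the reader thread pushes packets, a decode/play thread pops them.
template <typename T>
class PacketQueue {
public:
    explicit PacketQueue(size_t max_count) : max_count_(max_count) {}

    // Reader thread: blocks while the queue is full.
    void push(T pkt) {
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return items_.size() < max_count_; });
        items_.push_back(pkt);
        not_empty_.notify_one();
    }

    // Decode/play thread: blocks while the queue is empty.
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !items_.empty(); });
        T pkt = items_.front();
        items_.erase(items_.begin());
        not_full_.notify_one();
        return pkt;
    }

private:
    size_t max_count_;
    std::vector<T> items_;
    std::mutex mutex_;
    std::condition_variable not_full_, not_empty_;
};
```

One queue per stream (video and audio) gives the three-thread layout: the reader fills both queues, and each playback thread drains its own.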

Since the amount of data isn't known in advance, a dynamic array is needed; I used a vector to store the packets:

std::vector<AVPacket *> video_packets;
std::vector<AVPacket *> audio_packets;

A maximum queue size is also needed, otherwise memory usage will simply blow up:

int max_count = 200;

The thread states also need to be tracked:

typedef enum {
    VIDEO_READY_STOP,
    VIDEO_STOP,
    VIDEO_READY_MOVE,
    VIDEO_MOVE,
    VIDEO_MOVE_OVER,
    VIDEO_READY,
    VIDEO_PLAY,
    VIDEO_OVER
} VIDEO_STATE;

During development I found a serious bug related to the SurfaceView lifecycle: when the app is switched to the background, the SurfaceView releases its Surface, so EGL can no longer be used and the Surface has to be reloaded. The SurfaceView lifecycle therefore has to be tracked so this can be handled, with a lock added to prevent concurrent access:

bool create_egl = false;
pthread_mutex_t play_mutex;

I also tried simple frame dropping:

if (!yuvFrame->key_frame) {
    int m = (int) s / -300;
    if (m > throw_max) {
        throw_max = m;
    }
    if (throw_index < throw_max) {
        throw_index++;
        av_frame_free(&yuvFrame);
        av_packet_unref(pkt);
        continue;
    }
}
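To sanity-check the arithmetic in that snippet: `s` is the video-minus-audio offset in milliseconds (negative when video lags audio), so dividing by -300 yields how many 300 ms units behind the video is, which becomes the budget of non-key frames to drop. A hypothetical standalone version of just that calculation:

```cpp
// How many frames we allow ourselves to drop for a given lag `s` (ms).
// `s` is negative when video is behind the audio clock; the snippet above
// only enters this path when s < -300.
int dropBudget(long s) {
    return (int)(s / -300);
}
```

For example, a lag of 900 ms gives a budget of 3 dropped frames before the next frame is rendered.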

The hard part is really the various logical relationships. The basic idea is: stop the audio thread first, then the video thread; and start the video thread first, then the audio thread. Video decoding is the expensive part, so it gets priority.
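The ordering above can be sketched as two tiny orchestration functions. This is an illustrative reduction with hypothetical names (the real player drives the same ordering through its `VIDEO_STATE` machine and condition variables):

```cpp
#include <atomic>

enum class State { Play, ReadyStop };

std::atomic<State> audio_state{State::Play};
std::atomic<State> video_state{State::Play};

// Stop: audio first, then video, so the picture never runs ahead of silence.
void requestStop() {
    audio_state = State::ReadyStop;  // 1. ask audio to pause
    video_state = State::ReadyStop;  // 2. then ask video to pause
}

// Start: video first, then audio, because decoding video is slower and
// benefits from a head start before sound begins.
void requestPlay() {
    video_state = State::Play;       // 1. wake video first
    audio_state = State::Play;       // 2. then wake audio
}
```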

Java code, VideoSurfaceView:

public class VideoSurfaceView extends SurfaceView implements SurfaceHolder.Callback {

    /**
     * Video path
     */
    String videoPath = "/storage/emulated/0/baiduNetdisk/season09.mp4";

    private SurfaceHolder mHolder;

    public VideoSurfaceView(Context context) {
        super(context);
        init();
    }

    public VideoSurfaceView(Context context, AttributeSet attributeSet) {
        super(context, attributeSet);
        init();
    }

    private void init() {
        mHolder = getHolder();
        mHolder.addCallback(this);

        Thread thread = new Thread() {
            @Override
            public void run() {
                super.run();
                decoder(videoPath);
            }
        };
        thread.start();

        Thread audioThread = new Thread() {
            @Override
            public void run() {
                super.run();
                audioPlay();
            }
        };

        Thread videoThread = new Thread() {
            @Override
            public void run() {
                super.run();
                videoPlay();
            }
        };
        audioThread.start();
        videoThread.start();
    }

    public void surfaceCreated(SurfaceHolder holder) {
        created();
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        destroyed();
    }

    public void surfaceChanged(SurfaceHolder holder, int format, final int w, final int h) {
    }

    public Surface getSurface() {
        return mHolder.getSurface();
    }

    public AudioTrack createAudio(int sampleRateInHz, int nb_channels) {
        int channelConfig;
        if (nb_channels == 1) {
            channelConfig = AudioFormat.CHANNEL_OUT_MONO;
        } else {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        }
        int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
        int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
                channelConfig, audioFormat);

        AudioTrack audio = new AudioTrack(AudioManager.STREAM_MUSIC, // stream type
                sampleRateInHz, // sample rate, e.g. 44100 for 44.1 kHz
                channelConfig, // CHANNEL_OUT_STEREO for stereo, CHANNEL_OUT_MONO for mono
                audioFormat, // 8-bit or 16-bit samples; 16-bit here, which nearly all audio uses nowadays
                minBufferSize,
                AudioTrack.MODE_STREAM // streaming mode; MODE_STATIC is the other option
        );
        // audio.play(); // start the audio device, after which PCM data can actually be played
        return audio;
    }

    static {
        System.loadLibrary("native-lib");
    }

    public native void decoder(String path);

    public native void play();

    public native void stop();

    public native void videoPlay();

    public native void audioPlay();

    public native void move(long time);

    public native void created();

    public native void destroyed();

    public native void close();
}

extern "C" {
#include "libavformat/avformat.h"
#include "libavfilter/avfiltergraph.h"
#include "libavfilter/buffersink.h"
#include "libswresample/swresample.h"
};

#include <vector>
#include<mutex>

#define MAX_AUDIO_FRME_SIZE 48000 * 4

// Current system time in milliseconds
long getCurrentTime() {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

// Absolute deadline `timeout_ms` milliseconds from now, for pthread_cond_timedwait
timespec waitTime(long timeout_ms) {
    struct timespec abstime;
    struct timeval now;
    gettimeofday(&now, NULL);
    long nsec = now.tv_usec * 1000 + (timeout_ms % 1000) * 1000000;
    abstime.tv_sec = now.tv_sec + nsec / 1000000000 + timeout_ms / 1000;
    abstime.tv_nsec = nsec % 1000000000;
    return abstime;
}

double play_time; // current playback position (seconds)

long audio_time = 0;  // audio clock (ms); -1 means the audio thread has stopped,
                      // -2 means the video has caught up to roughly where the audio stopped
long start_time = 0;  // wall-clock time when audio_time was last updated

bool isClose = false; // ends the loops

std::vector<AVPacket *> video_packets; // video packet queue
std::vector<AVPacket *> audio_packets; // audio packet queue

AVStream *video_stream = NULL;
AVStream *audio_stream = NULL;

AVCodecContext *video_codec_ctx;
AVCodecContext *audio_codec_ctx;

// video lock
pthread_mutex_t video_mutex;
pthread_cond_t video_cond;
// audio lock
pthread_mutex_t audio_mutex;
pthread_cond_t audio_cond;

// decoder waiting flag
bool decoder_wait;

// decoder lock
pthread_mutex_t decoder_mutex;
pthread_cond_t decoder_cond;

// seek target time
double move_time = 0;

// decoding finished
bool decoder_over = false;

// maximum queue size
int max_count = 200;

// thread states
typedef enum {
    VIDEO_READY_STOP, // about to stop
    VIDEO_STOP,       // stopped
    VIDEO_READY_MOVE, // about to seek
    VIDEO_MOVE,       // seeking
    VIDEO_MOVE_OVER,  // seek finished
    VIDEO_READY,      // ready to play
    VIDEO_PLAY,       // playing
    VIDEO_OVER        // playback finished
} VIDEO_STATE;

VIDEO_STATE video_state; // video thread state
VIDEO_STATE audio_state; // audio thread state

// whether the surface is available
bool create_egl = false;
// render lock
pthread_mutex_t play_mutex;


// Align the head of the audio queue with the head of the video queue
void alineAudio2VideoPst() {
    if (audio_packets.size() >= 3 && video_packets.size() >= 1) {
        AVPacket *video_packet = video_packets[0];
        AVPacket *audio_packet_1 = audio_packets[0];
        AVPacket *audio_packet_2 = audio_packets[1];
        double video_time = video_packet->pts * av_q2d(video_stream->time_base);
        double audio_time_1 = audio_packet_1->pts * av_q2d(audio_stream->time_base);
        double audio_time_2 = audio_packet_2->pts * av_q2d(audio_stream->time_base);
        if (!(video_time >= audio_time_1 && video_time < audio_time_2)) {
            audio_packets.erase(audio_packets.begin());
            av_packet_unref(audio_packet_1);
            alineAudio2VideoPst();
        }
    }
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_decoder(JNIEnv *env, jobject instance, jstring path_) {
    const char *path = env->GetStringUTFChars(path_, 0);

    pthread_mutex_init(&decoder_mutex, NULL);
    pthread_cond_init(&decoder_cond, NULL);

    // wait until the playback threads are ready
    pthread_mutex_lock(&decoder_mutex);
    decoder_wait = true;
    pthread_cond_wait(&decoder_cond, &decoder_mutex);
    decoder_wait = false;
    pthread_mutex_unlock(&decoder_mutex);

    av_register_all();
    avformat_network_init();
    AVFormatContext *fmt_ctx = avformat_alloc_context();
    if (avformat_open_input(&fmt_ctx, path, NULL, NULL) < 0) {
        return;
    }
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        return;
    }
    int video_stream_index = -1;
    int audio_stream_index = -1;
    for (int i = 0; i < fmt_ctx->nb_streams; i++) {
        if (fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            video_stream = fmt_ctx->streams[i];
            video_stream_index = i;
        } else if (fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            audio_stream = fmt_ctx->streams[i];
            audio_stream_index = i;
        }
        if (video_stream_index != -1 && audio_stream_index != -1) {
            break;
        }
    }
    if (video_stream_index == -1) {
        return;
    }
    if (audio_stream_index == -1) {
        return;
    }
    video_codec_ctx = avcodec_alloc_context3(NULL);
    avcodec_parameters_to_context(video_codec_ctx, video_stream->codecpar);
    AVCodec *video_codec = avcodec_find_decoder(video_codec_ctx->codec_id);
    if (avcodec_open2(video_codec_ctx, video_codec, NULL) < 0) {
        return;
    }
    audio_codec_ctx = avcodec_alloc_context3(NULL);
    avcodec_parameters_to_context(audio_codec_ctx, audio_stream->codecpar);
    AVCodec *audio_codec = avcodec_find_decoder(audio_codec_ctx->codec_id);
    if (avcodec_open2(audio_codec_ctx, audio_codec, NULL) < 0) {
        return;
    }
    while (1) {
        if (isClose) {
            break;
        }
        // has the player entered the seek state?
        // state order:
        // VIDEO_READY_MOVE -> audio_state == VIDEO_MOVE -> video_state == VIDEO_MOVE -> VIDEO_MOVE_OVER
        if (video_state == VIDEO_MOVE) {
            // clear both queues
            while (video_packets.size() != 0) {
                AVPacket *pkt = video_packets[0];
                video_packets.erase(video_packets.begin());
                av_packet_unref(pkt);
            }
            while (audio_packets.size() != 0) {
                AVPacket *pkt = audio_packets[0];
                audio_packets.erase(audio_packets.begin());
                av_packet_unref(pkt);
            }
            std::vector<AVPacket *>().swap(video_packets);
            std::vector<AVPacket *>().swap(audio_packets);

            // compute the target timestamp
            int64_t k = (int64_t) (move_time / av_q2d(video_stream->time_base));
            // seek
            av_seek_frame(fmt_ctx, video_stream_index, k, AVSEEK_FLAG_BACKWARD);
            avcodec_flush_buffers(video_codec_ctx);
            avcodec_flush_buffers(audio_codec_ctx);
            // update states
            video_state = VIDEO_MOVE_OVER;
            audio_state = VIDEO_MOVE_OVER;
        }
        AVPacket *pkt = (AVPacket *) malloc(sizeof(AVPacket));
        // no more data to read
        if (av_read_frame(fmt_ctx, pkt) < 0) {
            av_packet_unref(pkt);
            // did a seek just finish?
            if (video_state == VIDEO_MOVE_OVER && audio_state == VIDEO_MOVE_OVER) {
                // align the queues
                alineAudio2VideoPst();

                // wake the video thread and switch it to playing
                pthread_mutex_lock(&video_mutex);
                video_state = VIDEO_PLAY;
                pthread_cond_signal(&video_cond);
                pthread_mutex_unlock(&video_mutex);

                // wake the audio thread and switch it to playing
                pthread_mutex_lock(&audio_mutex);
                audio_state = VIDEO_PLAY;
                pthread_cond_signal(&audio_cond);
                pthread_mutex_unlock(&audio_mutex);
            }
            // only wait if no seek is in progress
            pthread_mutex_lock(&decoder_mutex);
            if (video_state != VIDEO_MOVE && !isClose) {
                decoder_over = true;
                pthread_cond_wait(&decoder_cond, &decoder_mutex);
                decoder_over = false;
            }
            pthread_mutex_unlock(&decoder_mutex);
            continue;
        }
        if (pkt->stream_index == audio_stream_index) {
            pthread_mutex_lock(&audio_mutex);
            audio_packets.push_back(pkt);
            pthread_mutex_unlock(&audio_mutex);
        } else if (pkt->stream_index == video_stream_index) {
            pthread_mutex_lock(&video_mutex);
            video_packets.push_back(pkt);
            pthread_mutex_unlock(&video_mutex);
        }
        // are both queues above the size limit?
        if (video_packets.size() > max_count && audio_packets.size() > max_count) {
            // did a seek just finish?
            if (video_state == VIDEO_MOVE_OVER && audio_state == VIDEO_MOVE_OVER) {
                // align the queues
                alineAudio2VideoPst();

                // wake the video thread and switch it to playing
                pthread_mutex_lock(&video_mutex);
                video_state = VIDEO_PLAY;
                pthread_cond_signal(&video_cond);
                pthread_mutex_unlock(&video_mutex);

                // wake the audio thread and switch it to playing
                pthread_mutex_lock(&audio_mutex);
                audio_state = VIDEO_PLAY;
                pthread_cond_signal(&audio_cond);
                pthread_mutex_unlock(&audio_mutex);
            } else {
                // are the playback threads still in the ready state?
                if (audio_state == VIDEO_READY) {
                    pthread_mutex_lock(&audio_mutex);
                    pthread_cond_signal(&audio_cond);
                    pthread_mutex_unlock(&audio_mutex);
                }
                if (video_state == VIDEO_READY) {
                    pthread_mutex_lock(&video_mutex);
                    pthread_cond_signal(&video_cond);
                    pthread_mutex_unlock(&video_mutex);
                }
            }
            // only wait if no seek is in progress
            pthread_mutex_lock(&decoder_mutex);
            if (video_state != VIDEO_MOVE && !isClose) {
                decoder_wait = true;
                pthread_cond_wait(&decoder_cond, &decoder_mutex);
                decoder_wait = false;
            }
            pthread_mutex_unlock(&decoder_mutex);
        }
    }
    // clean up
    avformat_close_input(&fmt_ctx);
    while (video_packets.size() != 0) {
        AVPacket *pkt = video_packets[0];
        video_packets.erase(video_packets.begin());
        av_packet_unref(pkt);
    }
    while (audio_packets.size() != 0) {
        AVPacket *pkt = audio_packets[0];
        audio_packets.erase(audio_packets.begin());
        av_packet_unref(pkt);
    }
    std::vector<AVPacket *>().swap(video_packets);
    std::vector<AVPacket *>().swap(audio_packets);

    pthread_mutex_destroy(&decoder_mutex);
    pthread_cond_destroy(&decoder_cond);

    env->ReleaseStringUTFChars(path_, path);
}
// Should the decoder thread be woken up?
bool isDecoder() {
    return video_packets.size() > max_count / 2 && audio_packets.size() > max_count / 2 && decoder_wait;
}

EGLUtils *eglUtils = NULL;

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_videoPlay(JNIEnv *env, jobject instance) {
    pthread_mutex_init(&video_mutex, NULL);
    pthread_cond_init(&video_cond, NULL);

    pthread_mutex_init(&play_mutex, NULL);

    // wait in the ready state until the decoder has buffered enough packets
    pthread_mutex_lock(&video_mutex);
    video_state = VIDEO_READY;
    pthread_cond_wait(&video_cond, &video_mutex);
    video_state = VIDEO_PLAY;
    pthread_mutex_unlock(&video_mutex);

    OpenGLUtils *openGLUtils = new OpenGLUtils();

    jclass player_class = env->GetObjectClass(instance);
    jmethodID get_surface_mid = env->GetMethodID(player_class, "getSurface",
                                                 "()Landroid/view/Surface;");

    AVRational timeBase = video_stream->time_base;

    int throw_index = 0;
    int throw_max = 1;
    int ret;
    while (1) {
        // entering the stop state: only stop once the queues are aligned
        pthread_mutex_lock(&video_mutex);
        if (video_state == VIDEO_READY_STOP && audio_time == -2) {
            video_state = VIDEO_STOP;
            pthread_cond_wait(&video_cond, &video_mutex);
        }
        pthread_mutex_unlock(&video_mutex);
        // has the audio thread entered the seek state?
        if (audio_state == VIDEO_MOVE) {
            pthread_mutex_lock(&decoder_mutex);
            video_state = VIDEO_MOVE;
            // if the decoder thread is waiting, wake it up
            if (decoder_wait || decoder_over) {
                pthread_cond_signal(&decoder_cond);
            }
            pthread_mutex_unlock(&decoder_mutex);
            // wait for the seek to finish
            pthread_mutex_lock(&video_mutex);
            if (video_state != VIDEO_PLAY) {
                pthread_cond_wait(&video_cond, &video_mutex);
            }
            pthread_mutex_unlock(&video_mutex);
        }
        if (isClose) {
            break;
        }
        AVPacket *pkt = NULL;

        if (video_packets.size() != 0) {
            pthread_mutex_lock(&video_mutex);
            pkt = video_packets[0];
            video_packets.erase(video_packets.begin());
            pthread_mutex_unlock(&video_mutex);
        } else {
            // the queue is empty: playback finished, enter the over state
            pthread_mutex_lock(&video_mutex);
            if (video_state == VIDEO_PLAY) {
                video_state = VIDEO_OVER;
                pthread_cond_wait(&video_cond, &video_mutex);
            }
            pthread_mutex_unlock(&video_mutex);
        }
        if (pkt == NULL) {
            continue;
        }
        ret = avcodec_send_packet(video_codec_ctx, pkt);
        if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) {
            av_packet_unref(pkt);
            continue;
        }
        AVFrame *yuvFrame = av_frame_alloc();
        ret = avcodec_receive_frame(video_codec_ctx, yuvFrame);
        if (ret < 0 && ret != AVERROR_EOF) {
            av_frame_free(&yuvFrame);
            av_packet_unref(pkt);
            continue;
        }
        if (yuvFrame->pts < 0) {
            av_packet_unref(pkt);
            av_frame_free(&yuvFrame);
            continue;
        }
        // (re)initialize EGL/OpenGL when the surface becomes available
        pthread_mutex_lock(&play_mutex);
        if (create_egl) {
            if (eglUtils == NULL) {
                openGLUtils->release();
                eglUtils = new EGLUtils();
                jobject surface = env->CallObjectMethod(instance, get_surface_mid);
                ANativeWindow *nativeWindow = ANativeWindow_fromSurface(env, surface);
                eglUtils->initEGL(nativeWindow);
                openGLUtils->surfaceCreated();
                openGLUtils->surfaceChanged(eglUtils->getWidth(), eglUtils->getHeight());
                openGLUtils->initTexture(video_codec_ctx->width, video_codec_ctx->height);
            }
        }
        pthread_mutex_unlock(&play_mutex);

        double nowTime = yuvFrame->pts * av_q2d(timeBase);
        long a = audio_time;

        if (a != -1 && a != -2) { // the audio thread is playing
            long t = (long) (nowTime * 1000);

            // compute how far ahead of the audio clock this frame is;
            // wait if ahead, never wait if behind
            long time = getCurrentTime() - start_time;
            long s = t - time - a;
            if (s > 0) {
                struct timespec abstime = waitTime(s);
                pthread_mutex_lock(&video_mutex);
                pthread_cond_timedwait(&video_cond, &video_mutex, &abstime);
                pthread_mutex_unlock(&video_mutex);
            } else if (s < -300) {
                // more than 300 ms behind the audio: drop frames, but never
                // key frames; a smaller threshold causes visible stutter
                if (!yuvFrame->key_frame) {
                    int m = (int) s / -300;
                    if (m > throw_max) {
                        throw_max = m;
                    }
                    if (throw_index < throw_max) {
                        throw_index++;
                        av_frame_free(&yuvFrame);
                        av_packet_unref(pkt);
                        continue;
                    }
                }
            }
            throw_max = 1;
            throw_index = 0;
        } else if (a == -1) {
            // the audio thread is waiting: skip frames until we catch up to
            // roughly where the audio stopped
            if (nowTime >= play_time) {
                audio_time = -2;
            }
            av_frame_free(&yuvFrame);
            av_packet_unref(pkt);
            continue;
        }
        // render with OpenGL
        pthread_mutex_lock(&play_mutex);
        if (eglUtils != NULL) {
            openGLUtils->updateTexture(yuvFrame->width, yuvFrame->height, yuvFrame->data[0],
                                       yuvFrame->data[1], yuvFrame->data[2]);
            openGLUtils->surfaceDraw();
            eglUtils->drawEGL();
        }
        pthread_mutex_unlock(&play_mutex);
        av_frame_free(&yuvFrame);
        av_packet_unref(pkt);

        // wake the decoder thread if needed
        pthread_mutex_lock(&decoder_mutex);
        if (isDecoder()) {
            pthread_cond_signal(&decoder_cond);
        }
        pthread_mutex_unlock(&decoder_mutex);
    }
    // clean up
    pthread_mutex_destroy(&play_mutex);
    pthread_cond_destroy(&video_cond);
    pthread_mutex_destroy(&video_mutex);
    avcodec_close(video_codec_ctx);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_audioPlay(JNIEnv *env, jobject instance) {
    pthread_mutex_init(&audio_mutex, NULL);
    pthread_cond_init(&audio_cond, NULL);

    // wait in the ready state until the decoder has buffered enough packets
    pthread_mutex_lock(&audio_mutex);
    audio_state = VIDEO_READY;
    pthread_cond_wait(&audio_cond, &audio_mutex);
    audio_state = VIDEO_PLAY;
    pthread_mutex_unlock(&audio_mutex);

    SwrContext *swr_ctx = swr_alloc();

    enum AVSampleFormat in_sample_fmt = audio_codec_ctx->sample_fmt;
    enum AVSampleFormat out_sample_fmt = AV_SAMPLE_FMT_S16;

    int in_sample_rate = audio_codec_ctx->sample_rate;
    int out_sample_rate = in_sample_rate;

    uint64_t in_ch_layout = audio_codec_ctx->channel_layout;
    uint64_t out_ch_layout = AV_CH_LAYOUT_STEREO;

    swr_alloc_set_opts(swr_ctx,
                       out_ch_layout, out_sample_fmt, out_sample_rate,
                       in_ch_layout, in_sample_fmt, in_sample_rate,
                       0, NULL);
    swr_init(swr_ctx);

    int out_channel_nb = av_get_channel_layout_nb_channels(out_ch_layout);

    jclass player_class = env->GetObjectClass(instance);
    jmethodID create_audio_track_mid = env->GetMethodID(player_class, "createAudio",
                                                        "(II)Landroid/media/AudioTrack;");
    jobject audio_track = env->CallObjectMethod(instance, create_audio_track_mid,
                                                out_sample_rate, out_channel_nb);

    jclass audio_track_class = env->GetObjectClass(audio_track);
    jmethodID audio_track_play_mid = env->GetMethodID(audio_track_class, "play", "()V");
    jmethodID audio_track_stop_mid = env->GetMethodID(audio_track_class, "stop", "()V");
    env->CallVoidMethod(audio_track, audio_track_play_mid);

    jmethodID audio_track_write_mid = env->GetMethodID(audio_track_class, "write",
                                                       "([BII)I");

    AVRational timeBase = audio_stream->time_base;
    uint8_t *out_buffer = (uint8_t *) av_malloc(MAX_AUDIO_FRME_SIZE);

    int ret;
    while (1) {
        // stop or seek requested: enter the waiting state
        pthread_mutex_lock(&audio_mutex);
        if (audio_state == VIDEO_READY_STOP) {
            audio_state = VIDEO_STOP;
            audio_time = -1;
            pthread_cond_wait(&audio_cond, &audio_mutex);
        } else if (audio_state == VIDEO_READY_MOVE) {
            audio_state = VIDEO_MOVE;
            audio_time = -1;
            pthread_cond_wait(&audio_cond, &audio_mutex);
        }
        pthread_mutex_unlock(&audio_mutex);

        if (isClose) {
            break;
        }
        AVPacket *pkt = NULL;

        if (audio_packets.size() != 0) {
            pthread_mutex_lock(&audio_mutex);
            pkt = audio_packets[0];
            audio_packets.erase(audio_packets.begin());
            pthread_mutex_unlock(&audio_mutex);
        } else {
            // the queue is empty: playback finished, enter the over state
            pthread_mutex_lock(&audio_mutex);
            if (audio_state == VIDEO_PLAY) {
                audio_state = VIDEO_OVER;
                pthread_cond_wait(&audio_cond, &audio_mutex);
            }
            pthread_mutex_unlock(&audio_mutex);
        }
        if (pkt == NULL) {
            continue;
        }
        ret = avcodec_send_packet(audio_codec_ctx, pkt);
        if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) {
            av_packet_unref(pkt);
            continue;
        }
        AVFrame *frame = av_frame_alloc();
        ret = avcodec_receive_frame(audio_codec_ctx, frame);
        if (ret < 0 && ret != AVERROR_EOF) {
            av_packet_unref(pkt);
            av_frame_free(&frame);
            continue;
        }
        if (frame->pts < 0) {
            av_packet_unref(pkt);
            av_frame_free(&frame);
            continue;
        }
        // update the audio clock
        double nowTime = frame->pts * av_q2d(timeBase);
        long t = (long) (nowTime * 1000);
        play_time = nowTime;
        start_time = getCurrentTime();
        audio_time = t;

        swr_convert(swr_ctx, &out_buffer, MAX_AUDIO_FRME_SIZE,
                    (const uint8_t **) frame->data,
                    frame->nb_samples);
        int out_buffer_size = av_samples_get_buffer_size(NULL, out_channel_nb,
                                                         frame->nb_samples, out_sample_fmt,
                                                         1);

        jbyteArray audio_sample_array = env->NewByteArray(out_buffer_size);
        jbyte *sample_bytep = env->GetByteArrayElements(audio_sample_array, NULL);
        memcpy(sample_bytep, out_buffer, (size_t) out_buffer_size);
        env->ReleaseByteArrayElements(audio_sample_array, sample_bytep, 0);

        env->CallIntMethod(audio_track, audio_track_write_mid,
                           audio_sample_array, 0, out_buffer_size);
        env->DeleteLocalRef(audio_sample_array);

        av_frame_free(&frame);
        av_packet_unref(pkt);

        // wake the decoder thread if needed
        pthread_mutex_lock(&decoder_mutex);
        if (isDecoder()) {
            pthread_cond_signal(&decoder_cond);
        }
        pthread_mutex_unlock(&decoder_mutex);
    }
    env->CallVoidMethod(audio_track, audio_track_stop_mid);
    av_free(out_buffer);
    swr_free(&swr_ctx);
    avcodec_close(audio_codec_ctx);

    pthread_mutex_destroy(&audio_mutex);
    pthread_cond_destroy(&audio_cond);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_play(JNIEnv *env, jobject instance) {
    pthread_mutex_lock(&audio_mutex);
    if (audio_state == VIDEO_STOP) {
        // stopped: resume playback directly
        audio_state = VIDEO_PLAY;
        pthread_cond_signal(&audio_cond);
    } else if (audio_state == VIDEO_OVER) {
        // finished: seek back and restart playback
        audio_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&audio_cond);
    }
    pthread_mutex_unlock(&audio_mutex);

    pthread_mutex_lock(&video_mutex);
    if (video_state == VIDEO_STOP) {
        // stopped: resume playback directly
        video_state = VIDEO_PLAY;
        pthread_cond_signal(&video_cond);
    } else if (video_state == VIDEO_OVER) {
        // finished: seek back to the start and restart playback
        move_time = 0;
        video_state = VIDEO_MOVE_OVER;
        pthread_cond_signal(&video_cond);
    }
    pthread_mutex_unlock(&video_mutex);

    // wake the decoder thread
    pthread_mutex_lock(&decoder_mutex);
    if (decoder_wait) {
        pthread_cond_signal(&decoder_cond);
    }
    pthread_mutex_unlock(&decoder_mutex);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_move(JNIEnv *env, jobject instance, jlong time) {
    move_time = play_time + time;
    if (move_time < 0) {
        move_time = 0;
    }
    pthread_mutex_lock(&audio_mutex);
    if (audio_state == VIDEO_STOP) {
        // stopped: wake the thread so it can seek
        audio_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&audio_cond);
    } else if (audio_state == VIDEO_PLAY) {
        // playing: seek directly
        audio_state = VIDEO_READY_MOVE;
    } else if (audio_state == VIDEO_OVER) {
        // finished: wake the thread so it can seek
        audio_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&audio_cond);
    }
    pthread_mutex_unlock(&audio_mutex);

    pthread_mutex_lock(&video_mutex);
    if (video_state == VIDEO_STOP) {
        // stopped: wake the thread so it can seek
        video_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&video_cond);
    } else if (video_state == VIDEO_PLAY) {
        // playing: seek directly
        video_state = VIDEO_READY_MOVE;
    } else if (video_state == VIDEO_OVER) {
        // finished: wake the thread so it can seek
        video_state = VIDEO_READY_MOVE;
        pthread_cond_signal(&video_cond);
    }
    pthread_mutex_unlock(&video_mutex);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_stop(JNIEnv *env, jobject instance) {
    // if playing, move to the ready-to-stop state
    pthread_mutex_lock(&audio_mutex);
    if (audio_state == VIDEO_PLAY) {
        audio_state = VIDEO_READY_STOP;
    }
    pthread_mutex_unlock(&audio_mutex);
    pthread_mutex_lock(&video_mutex);
    if (video_state == VIDEO_PLAY) {
        video_state = VIDEO_READY_STOP;
    }
    pthread_mutex_unlock(&video_mutex);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_created(JNIEnv *env, jobject instance) {
    // SurfaceView lifecycle: called from SurfaceHolder.Callback.surfaceCreated
    pthread_mutex_lock(&play_mutex);
    create_egl = true;
    pthread_mutex_unlock(&play_mutex);
}
extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_destroyed(JNIEnv *env, jobject instance) {
    // SurfaceView lifecycle: called from SurfaceHolder.Callback.surfaceDestroyed
    // release the EGL environment
    pthread_mutex_lock(&play_mutex);
    if (eglUtils != NULL) {
        delete eglUtils;
        eglUtils = NULL;
    }
    create_egl = false;
    pthread_mutex_unlock(&play_mutex);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_videoplay_VideoSurfaceView_close(JNIEnv *env, jobject instance) {
    // end playback
    isClose = true;

    pthread_mutex_lock(&video_mutex);
    video_state = VIDEO_PLAY;
    pthread_cond_signal(&video_cond);
    pthread_mutex_unlock(&video_mutex);

    pthread_mutex_lock(&audio_mutex);
    audio_state = VIDEO_PLAY;
    pthread_cond_signal(&audio_cond);
    pthread_mutex_unlock(&audio_mutex);

    pthread_mutex_lock(&decoder_mutex);
    pthread_cond_signal(&decoder_cond);
    pthread_mutex_unlock(&decoder_mutex);
}

That's it for the player. It's just about usable, and I plan to use it for testing in a company project.



http://www.chinasem.cn/article/978765
