From: Fei Wang <fei.w.wang-at-intel.com@ffmpeg.org>
To: ffmpeg-devel@ffmpeg.org
Cc: Fei Wang <fei.w.wang@intel.com>
Subject: [FFmpeg-devel] [PATCH v1 4/4] examples: separate vaapi_decode from hw_decode
Date: Fri, 29 Apr 2022 15:59:41 +0800
Message-ID: <20220429075941.1844370-4-fei.w.wang@intel.com>
In-Reply-To: <20220429075941.1844370-1-fei.w.wang@intel.com>

vaapi_decode can now be used to test VAAPI decoding, with or without an
additional sub frame output.

decode example:
$ vaapi_decode 1920x1080.h265 out.yuv

decode with sub frame example:
$ vaapi_decode 1920x1080.h265 out.yuv width=640:height=480:format=argb sub_out.argb

Signed-off-by: Fei Wang <fei.w.wang@intel.com>
---
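Note: the raw dumps can be sanity-checked with ffplay. Assuming the hardware
frames download as NV12 (the usual 8-bit VAAPI surface format; adjust
-pixel_format if your driver uses something else):

$ ffplay -f rawvideo -pixel_format nv12 -video_size 1920x1080 out.yuv
$ ffplay -f rawvideo -pixel_format argb -video_size 640x480 sub_out.argb
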
 configure                   |   2 +
 doc/examples/Makefile       |   1 +
 doc/examples/vaapi_decode.c | 296 ++++++++++++++++++++++++++++++++++++
 3 files changed, 299 insertions(+)
 create mode 100644 doc/examples/vaapi_decode.c

diff --git a/configure b/configure
index 196873c4aa..73ed62eae7 100755
--- a/configure
+++ b/configure
@@ -1744,6 +1744,7 @@ EXAMPLE_LIST="
     scaling_video_example
     transcode_aac_example
     transcoding_example
+    vaapi_decode_example
     vaapi_encode_example
     vaapi_transcode_example
 "
@@ -3779,6 +3780,7 @@
 resampling_audio_example_deps="avutil swresample"
 scaling_video_example_deps="avutil swscale"
 transcode_aac_example_deps="avcodec avformat swresample"
 transcoding_example_deps="avfilter avcodec avformat avutil"
+vaapi_decode_example_deps="avcodec avformat avutil"
 vaapi_encode_example_deps="avcodec avutil h264_vaapi_encoder"
 vaapi_transcode_example_deps="avcodec avformat avutil h264_vaapi_encoder"

diff --git a/doc/examples/Makefile b/doc/examples/Makefile
index 81bfd34d5d..f1b18028c5 100644
--- a/doc/examples/Makefile
+++ b/doc/examples/Makefile
@@ -19,6 +19,7 @@
 EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio
 EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE)    += scaling_video
 EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE)    += transcode_aac
 EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE)      += transcoding
+EXAMPLES-$(CONFIG_VAAPI_DECODE_EXAMPLE)     += vaapi_decode
 EXAMPLES-$(CONFIG_VAAPI_ENCODE_EXAMPLE)     += vaapi_encode
 EXAMPLES-$(CONFIG_VAAPI_TRANSCODE_EXAMPLE)  += vaapi_transcode

diff --git a/doc/examples/vaapi_decode.c b/doc/examples/vaapi_decode.c
new file mode 100644
index 0000000000..a9c8d48240
--- /dev/null
+++ b/doc/examples/vaapi_decode.c
@@ -0,0 +1,296 @@
+/*
+ * Video Acceleration API (video decoding) decode sample
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+/**
+ * @file
+ * VAAPI-accelerated decoding example.
+ *
+ * @example vaapi_decode.c
+ * This example shows how to do VAAPI-accelerated decoding with output
+ * frames from the HW video surfaces. It also supports sub frame decoding.
+ */
+
+#include <stdio.h>
+
+#include <libavcodec/avcodec.h>
+#include <libavformat/avformat.h>
+#include <libavutil/pixdesc.h>
+#include <libavutil/hwcontext.h>
+#include <libavutil/opt.h>
+#include <libavutil/avassert.h>
+#include <libavutil/imgutils.h>
+
+static AVBufferRef *hw_device_ctx = NULL;
+static enum AVPixelFormat hw_pix_fmt;
+static FILE *output_file = NULL;
+static FILE *sub_frame_output = NULL;
+
+static int hw_decoder_init(AVCodecContext *ctx, const enum AVHWDeviceType type)
+{
+    int err = 0;
+
+    if ((err = av_hwdevice_ctx_create(&hw_device_ctx, type,
+                                      NULL, NULL, 0)) < 0) {
+        fprintf(stderr, "Failed to create specified HW device.\n");
+        return err;
+    }
+    ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
+
+    return err;
+}
+
+static enum AVPixelFormat get_hw_format(AVCodecContext *ctx,
+                                        const enum AVPixelFormat *pix_fmts)
+{
+    const enum AVPixelFormat *p;
+
+    for (p = pix_fmts; *p != -1; p++) {
+        if (*p == hw_pix_fmt)
+            return *p;
+    }
+
+    fprintf(stderr, "Failed to get HW surface format.\n");
+    return AV_PIX_FMT_NONE;
+}
+
+/* Download a frame to system memory if needed and dump it as raw data. */
+static int retrieve_write(AVFrame *frame, AVFrame *sw_frame, FILE *file)
+{
+    AVFrame *tmp_frame = NULL;
+    uint8_t *buffer = NULL;
+    int size;
+    int ret = 0;
+
+    if (frame->format == hw_pix_fmt) {
+        /* retrieve data from GPU to CPU */
+        if ((ret = av_hwframe_transfer_data(sw_frame, frame, 0)) < 0) {
+            fprintf(stderr, "Error transferring the data to system memory\n");
+            goto fail;
+        }
+        tmp_frame = sw_frame;
+    } else
+        tmp_frame = frame;
+
+    size = av_image_get_buffer_size(tmp_frame->format, tmp_frame->width,
+                                    tmp_frame->height, 1);
+    buffer = av_malloc(size);
+    if (!buffer) {
+        fprintf(stderr, "Can not alloc buffer\n");
+        ret = AVERROR(ENOMEM);
+        goto fail;
+    }
+    ret = av_image_copy_to_buffer(buffer, size,
+                                  (const uint8_t * const *)tmp_frame->data,
+                                  (const int *)tmp_frame->linesize, tmp_frame->format,
+                                  tmp_frame->width, tmp_frame->height, 1);
+    if (ret < 0) {
+        fprintf(stderr, "Can not copy image to buffer\n");
+        goto fail;
+    }
+
+    if ((ret = fwrite(buffer, 1, size, file)) < 0) {
+        fprintf(stderr, "Failed to dump raw data.\n");
+        goto fail;
+    }
+
+fail:
+    av_freep(&buffer);
+
+    return ret;
+}
+
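+/* Note on sub frames: the earlier patches in this series add the
+ * AV_FRAME_DATA_SUB_FRAME side data type and the
+ * AV_CODEC_EXPORT_DATA_SUB_FRAME export flag. When sub frame output is
+ * requested, the decoder is expected to attach a scaled/format-converted
+ * copy of each output frame as side data; decode_write() below retrieves
+ * it and dumps it to a second file. */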
+static int decode_write(AVCodecContext *avctx, AVPacket *packet)
+{
+    AVFrame *frame = NULL, *sw_frame = NULL;
+    AVFrame *sub_frame = NULL, *sw_sub_frame = NULL;
+    AVFrameSideData *sd = NULL;
+    int ret = 0;
+
+    ret = avcodec_send_packet(avctx, packet);
+    if (ret < 0) {
+        fprintf(stderr, "Error during decoding\n");
+        return ret;
+    }
+
+    while (1) {
+        if (!(frame = av_frame_alloc()) || !(sw_frame = av_frame_alloc())) {
+            fprintf(stderr, "Can not alloc frame\n");
+            ret = AVERROR(ENOMEM);
+            goto fail;
+        }
+
+        ret = avcodec_receive_frame(avctx, frame);
+        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
+            av_frame_free(&frame);
+            av_frame_free(&sw_frame);
+            return 0;
+        } else if (ret < 0) {
+            fprintf(stderr, "Error while decoding\n");
+            goto fail;
+        }
+
+        ret = retrieve_write(frame, sw_frame, output_file);
+        if (ret < 0) {
+            fprintf(stderr, "Error while retrieving and writing data\n");
+            goto fail;
+        }
+
+        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_SUB_FRAME);
+        if (sd) {
+            if (!(sw_sub_frame = av_frame_alloc())) {
+                fprintf(stderr, "Can not alloc sub frame\n");
+                ret = AVERROR(ENOMEM);
+                goto fail;
+            }
+
+            sub_frame = (AVFrame *)sd->data;
+
+            ret = retrieve_write(sub_frame, sw_sub_frame, sub_frame_output);
+            if (ret < 0) {
+                fprintf(stderr, "Error while retrieving and writing sub frame data\n");
+                goto fail;
+            }
+
+            av_frame_remove_side_data(frame, AV_FRAME_DATA_SUB_FRAME);
+            sd = NULL;
+        }
+
+    fail:
+        av_frame_free(&frame);
+        av_frame_free(&sw_frame);
+        av_frame_free(&sw_sub_frame);
+        if (ret < 0)
+            return ret;
+    }
+}
+
+int main(int argc, char *argv[])
+{
+    AVFormatContext *input_ctx = NULL;
+    AVStream *video = NULL;
+    AVCodecContext *decoder_ctx = NULL;
+    const AVCodec *decoder = NULL;
+    AVPacket *packet = NULL;
+    int video_stream, ret, i;
+
+    if (argc != 3 && argc != 5) {
+        fprintf(stderr, "Decode only usage: %s <input file> <output file>\n", argv[0]);
+        fprintf(stderr, "Decode with sub frame usage: %s <input file> <output file> <sfc_width:sfc_height:sfc_format> <sub frame output file>\n", argv[0]);
+        return -1;
+    }
+
+    // av_log_set_level(AV_LOG_DEBUG);
+
+    packet = av_packet_alloc();
+    if (!packet) {
+        fprintf(stderr, "Failed to allocate AVPacket\n");
+        return -1;
+    }
+
+    /* open the input file */
+    if (avformat_open_input(&input_ctx, argv[1], NULL, NULL) != 0) {
+        fprintf(stderr, "Cannot open input file '%s'\n", argv[1]);
+        return -1;
+    }
+
+    if (avformat_find_stream_info(input_ctx, NULL) < 0) {
+        fprintf(stderr, "Cannot find input stream information.\n");
+        return -1;
+    }
+
+    /* find the video stream information */
+    ret = av_find_best_stream(input_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);
+    if (ret < 0) {
+        fprintf(stderr, "Cannot find a video stream in the input file\n");
+        return -1;
+    }
+    video_stream = ret;
+
+    /* find a VAAPI hardware configuration for the decoder */
+    for (i = 0;; i++) {
+        const AVCodecHWConfig *config = avcodec_get_hw_config(decoder, i);
+        if (!config) {
+            fprintf(stderr, "Decoder %s does not support device type %s.\n",
+                    decoder->name, av_hwdevice_get_type_name(AV_HWDEVICE_TYPE_VAAPI));
+            return -1;
+        }
+        if (config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX &&
+            config->device_type == AV_HWDEVICE_TYPE_VAAPI) {
+            hw_pix_fmt = config->pix_fmt;
+            break;
+        }
+    }
+
+    if (!(decoder_ctx = avcodec_alloc_context3(decoder)))
+        return AVERROR(ENOMEM);
+
+    video = input_ctx->streams[video_stream];
+    if (avcodec_parameters_to_context(decoder_ctx, video->codecpar) < 0)
+        return -1;
+
+    decoder_ctx->get_format = get_hw_format;
+
+    /* enable sub frame output and pass its width/height/format options */
+    if (argc == 5) {
+        sub_frame_output = fopen(argv[4], "w+b");
+        decoder_ctx->export_side_data = decoder_ctx->export_side_data | AV_CODEC_EXPORT_DATA_SUB_FRAME;
+        if ((ret = av_dict_parse_string(&decoder_ctx->sub_frame_opts, argv[3], "=", ":", 0)) < 0) {
+            av_log(decoder_ctx, AV_LOG_ERROR, "Failed to parse option string '%s'.\n", argv[3]);
+            av_dict_free(&decoder_ctx->sub_frame_opts);
+            return -1;
+        }
+    }
+
+    if (hw_decoder_init(decoder_ctx, AV_HWDEVICE_TYPE_VAAPI) < 0)
+        return -1;
+
+    if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0) {
+        fprintf(stderr, "Failed to open codec for stream #%u\n", video_stream);
+        return -1;
+    }
+
+    /* open the file to dump raw data */
+    output_file = fopen(argv[2], "w+b");
+
+    /* actual decoding and dump the raw data */
+    while (ret >= 0) {
+        if ((ret = av_read_frame(input_ctx, packet)) < 0)
+            break;
+
+        if (video_stream == packet->stream_index)
+            ret = decode_write(decoder_ctx, packet);
+
+        av_packet_unref(packet);
+    }
+
+    /* flush the decoder */
+    ret = decode_write(decoder_ctx, NULL);
+
+    if (output_file)
+        fclose(output_file);
+    if (sub_frame_output)
+        fclose(sub_frame_output);
+    av_packet_free(&packet);
+    av_dict_free(&decoder_ctx->sub_frame_opts);
+    avcodec_free_context(&decoder_ctx);
+    avformat_close_input(&input_ctx);
+    av_buffer_unref(&hw_device_ctx);
+
+    return 0;
+}
--
2.25.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".